@tenthlegacy
Joined May 2025
Attempts at making a 1B model count characters and apply logic to its responses.
PRESET
6
A prompt tuned for maximum effectiveness with Qwen3-based models, incorporating prompt-engineering best practices, alignment with Qwen's documented behavior, and cognitive scaffolding that supports step-by-step reasoning.
PRESET
14
Improves reasoning and logic, fixing questions like "how many r's are in mirror" and other hallucination traps.
PRESET
13
Universal logic prompt that stops hallucination in 90% of tests with low-quality models and increases accuracy.
PRESET
28