"Any issue in LLM-generated code is solely due to YOU, the user." Problems trace back to two root causes: inadequate prompt engineering or insufficient context management.
// The patterns below fix both. No more blaming the model.
// CORE_MODULES
SEVEN LESSONS
The complete pattern library. Error logging, commands-as-apps, hooks, context hygiene, subagent control, tool stacks, and prompt engineering.
QUICK REFERENCE
Situation-action lookup tables. When Claude does something wrong, when context gets polluted, when you need reliable workflows.
// EXECUTABLE_SKILLS
/log_error
Systematic failure capture. Fork conversations, interview Claude about what went wrong, document the exact prompt that broke things.
/log_success
Capture what works. Document effective prompts, extract generalizable templates, build a library of reliable patterns.
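For concreteness, a skill like /log_error could plausibly be wired up as a Claude Code custom slash command: a markdown prompt file kept under .claude/commands/. This is a minimal sketch, not the skill that ships with this library; the frontmatter field and the interview questions are assumptions.

```markdown
---
description: Interview Claude about a failure and record the prompt that caused it
---
<!-- Illustrative body for .claude/commands/log_error.md; the shipped skill may differ. -->
Something just went wrong in this session. Before doing anything else:

1. Restate what was asked and what actually happened.
2. Quote the exact prompt that produced the failure.
3. Note what context was missing, stale, or polluted at the time.
4. Append these findings as a new dated entry in the project's error log.
```

Per the /log_error description above, you would typically run this in a forked copy of the conversation so the interview itself doesn't pollute your working context; /log_success follows the same shape, with questions about what made the prompt effective and what template can be extracted from it.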
Model performance can degrade by roughly 50% by the time context reaches ~50k tokens. Context is finite. Every token must earn its place. These patterns help you stay lean and effective across long sessions.
// Treat context like RAM. Clean it. Manage it. Respect its limits.
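The bluntest cleaning tools are the built-in session commands; assuming a standard Claude Code session, they look like this (the comments are explanatory, not CLI output).

```
/compact   # summarize the conversation so far, keeping the gist and freeing tokens
/clear     # wipe the history entirely and start the next task on a clean slate
```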