Seven core lessons
Extracted from 2000+ hours of LLM development. Each lesson addresses a fundamental challenge in agentic workflows and provides actionable implementation steps.
// THE_PROBLEM
When Claude makes a mistake, reconstructing what went wrong is difficult because the context, prompts, and execution path are ephemeral. Without systematic capture, you lose the ability to learn from failures.
// FAILURE_CATEGORIES
Generated false information
Neglected explicit directives
Forgot prior conversation
Applied incorrect methodology
Started but abandoned tasks
// IMPLEMENTATION
- 1. Develop logging commands that preserve conversation context and classify failures systematically
- 2. Establish organized storage using .claude/error-logs/ with timestamped entries (see the sketch after this list)
- 3. Create success logging to identify which prompts consistently produce reliable results
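A minimal sketch of what step 2 could look like as a helper script; the directory layout and entry fields are illustrative, not a prescribed format:
#!/usr/bin/env bash
# log-error.sh CATEGORY "summary" -- append a timestamped failure entry (layout is illustrative)
set -euo pipefail
LOG_DIR=".claude/error-logs"
mkdir -p "$LOG_DIR"
STAMP="$(date +%Y-%m-%d_%H%M%S)"
{
  echo "## $STAMP  [${1:-uncategorized}]"   # e.g. hallucination, ignored-directive, lost-context
  echo "${2:-no summary provided}"
} >> "$LOG_DIR/$(date +%Y-%m-%d).md"
In practice this could sit behind a custom slash command (the next lesson) so the failing prompt and Claude's own classification get pasted into the entry automatically.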
KEY INSIGHT: "The tightest feedback loops produce the fastest learning." Convert ephemeral failures into documented lessons.
// CORE_CONCEPT
Slash commands transform Claude into a service platform. They're deterministic triggers for multi-step operations—not probabilistic natural language requests. The intelligence lives in the skill, but you get reliable execution through the command.
// CAPABILITIES
- Launch parallel subagents for concurrent work
- Access files, repositories, and browsers
- Chain multiple operations together
- Route to different models based on task requirements
// IMPLEMENTATION
- 1. Identify repetitive workflows with multiple consistent steps
- 2. Design simple interfaces with memorable command names
- 3. Create command files in .claude/commands/ (a sketch follows the example below)
- 4. Consider model routing for optimal performance across phases
// EXAMPLE
A /presentation-to-shorts command can replace "what would traditionally require a content analysis tool, a project management system, multiple team handoffs, and hours of coordination."
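A rough sketch of how such a command file might be created (step 3 above); the command name comes from the example, while the prompt body and output filename are illustrative:
mkdir -p .claude/commands
cat > .claude/commands/presentation-to-shorts.md <<'EOF'
Analyze the presentation at: $ARGUMENTS

1. Extract the key moments worth turning into short-form clips.
2. For each clip, draft a hook, a title, and a rough outline.
3. Write the plan to shorts-plan.md and list any open questions.
EOF
The quoted heredoc keeps $ARGUMENTS literal so Claude substitutes whatever follows the command at invocation time.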
// PROBLEM_&_SOLUTION
Permission prompts exist for safety but disrupt workflow continuity. Rather than removing all safeguards through --dangerously-skip-permissions alone, combine this flag with hook-based validation for autonomous operation within safe boundaries.
// IMPLEMENTATION
- 1. Identify constraints — operations that should never execute autonomously (recursive deletions, force pushes, production config changes)
- 2. Deploy pre-execution hooks — create bash scripts in .claude/hooks/
- 3. Register hooks — configure through JSON settings (a registration sketch follows the example below)
- 4. Validate thoroughly — test blocked patterns intentionally
// EXAMPLE: Block dangerous rm operations
if echo "$COMMAND" | grep -qE "rm\s+(-rf|-fr|--recursive).*(/|~|\$HOME)"; then
echo "BLOCKED"
fi
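To register the hook from the example above (step 3), it goes under the PreToolUse event in the project settings. A minimal sketch, assuming the script is saved as .claude/hooks/block-dangerous-rm.sh; merge this into your existing settings rather than overwriting them, and confirm the exact schema against the hooks documentation for your version:
# Contents to merge into .claude/settings.json (shown via heredoc for illustration)
cat <<'EOF'
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Bash",
        "hooks": [
          { "type": "command", "command": ".claude/hooks/block-dangerous-rm.sh" }
        ]
      }
    ]
  }
}
EOF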
KEY INSIGHT: Programmatic control through deterministic code rules creates superior outcomes compared to relying solely on probabilistic model guidance.
// WARNING_SIGNS
- Multiple unrelated projects in one session
- Forgotten instructions in CLAUDE.md
- CLAUDE.md files exceeding 50 lines
- Claude referencing stale earlier discussions
- Declining response quality over longer sessions
// COMPACTION_STRATEGIES
Disable automatic compaction, display context usage status
Use /clear with repo-specific config between work contexts
Assign Claude as coordinator launching specialized subagents for isolated tasks
Invoke /compact at appropriate boundaries
Implement /handoff {NOTES} for clean session transitions
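The /handoff {NOTES} command above is a custom command; one hypothetical shape for it as a slash command file (prompt wording is illustrative):
mkdir -p .claude/commands
cat > .claude/commands/handoff.md <<'EOF'
Prepare a handoff before this session ends. Notes from me: $ARGUMENTS

Summarize the current state of the work, key decisions, files touched, and the
immediate next steps, then write it to HANDOFF.md so a fresh session can pick
up after /clear without re-reading this conversation.
EOF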
// DOUBLE_ESCAPE_TIME_TRAVEL
A powerful but underutilized feature allowing conversation restoration:
- Conversation-only restore: Clean polluted discussions, preserve code
- Full restore: Revert both conversation and code to previous state
KEY INSIGHT: Treat context as a finite resource. Irrelevant tokens actively impair performance. Rigorous hygiene practices significantly enhance results.
// DEFAULT_BEHAVIOR
Claude spawns Sonnet or Haiku subagents by default, even for knowledge-intensive tasks that would benefit from more capable models. You must take explicit control over subagent selection.
// ACTIONABLE_STEPS
- 1. Force Opus for complex tasks — Add "Always launch opus subagents" to config for knowledge-intensive work
- 2. Monitor activity — Track which subagents spawn, their tasks, and completion status
- 3. Keep tasks focused — One clear objective per subagent with well-defined inputs/outputs
- 4. Parallelize work — Spawn multiple independent subagents to maintain clean contexts
- 5. Prioritize quantity over load — More subagents >> More tasks per subagent
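Two hedged sketches of step 1: the first appends the document's wording to project memory, the second assumes your Claude Code version supports a model field in custom agent frontmatter (verify against the subagent docs):
# Option A: state the preference in project memory (wording from step 1 above)
echo "Always launch opus subagents for knowledge-intensive work." >> CLAUDE.md

# Option B: pin the model on a custom subagent definition (frontmatter fields assumed)
mkdir -p .claude/agents
cat > .claude/agents/researcher.md <<'EOF'
---
name: researcher
description: Knowledge-intensive research and review tasks
model: opus
---
You are a focused research subagent. Work on exactly one objective with
well-defined inputs and outputs, and report findings without starting
unrelated tasks.
EOF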
// CRITICAL_WARNINGS
Subagent-to-subagent info passing can compound errors. Use deterministic verification between handoffs.
Multiple unrelated tasks create context pollution. Split into separate focused agents.
Complex reasoning delegated to lighter models produces inferior results.
// CORE_PRINCIPLE
Context windows are finite resources. Rather than accumulating tools, ruthlessly evaluate whether each integration genuinely justifies its token consumption.
// RECOMMENDED_ESSENTIALS
Context7: provides current framework documentation. Prevents reliance on outdated training data and reduces API hallucinations.
Browser tooling: enables browser automation and visual debugging. Dev Browser is preferable for faster execution.
// IMPLEMENTATION
- Audit existing tools weekly to identify unused integrations
- Evaluate whether new tools warrant the overhead
- Start minimally with only Context7 and Dev Browser
- Monitor when context limitations actually emerge
- Leverage Claude's native capabilities before seeking MCPs
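For the weekly audit, the CLI's MCP subcommands are a natural starting point; confirm the exact commands with claude mcp --help on your install:
# List configured MCP servers and their status
claude mcp list

# Remove anything that has not earned its context cost (server name is illustrative)
claude mcp remove unused-server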
KEY INSIGHT: Constraint as opportunity—a streamlined toolkit preserves context capacity for actual problem-solving.
// TWO_OBSERVATIONS
Voice captures nuance better than typing for most people
Prompt engineering follows repeatable patterns with templatable structures
// REPROMPTER_WORKFLOW
- 1. Activate via keybind
- 2. Dictate requirements verbally
- 3. System poses clarifying questions
- 4. Provide voice answers
- 5. Automated generation of structured prompt with XML tags, role assignments, embedded best practices
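A hypothetical sketch of the final step, wrapping a dictated request in a structured prompt; the tag names and template are illustrative rather than a fixed format:
#!/usr/bin/env bash
# reprompt.sh "dictated rough request" -- expand a transcript into a structured prompt (template is illustrative)
RAW="${1:?usage: reprompt.sh 'dictated request'}"
cat <<EOF
<role>You are a senior engineer pairing on this codebase.</role>
<task>
$RAW
</task>
<constraints>
- Ask clarifying questions first if anything is ambiguous.
- Follow the project's existing conventions and tests.
</constraints>
<output_format>Plan first, then implementation, then a short summary of changes.</output_format>
EOF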
// ALTERNATIVE_APPROACHES
Voice capture + global keybind + template-based enhancement + fast model processing
"Instead of trying to write the perfect prompt upfront, ask the model to interview you about what you want"
KEY INSIGHT: Eliminate manual prompt typing. Leverage voice input or conversational interviews to focus on solving actual problems instead of formatting.