Module 01

Seven core lessons

Extracted from 2000+ hours of LLM development. Each lesson addresses a fundamental challenge in agentic workflows and provides actionable implementation steps.

The Problem

When Claude makes a mistake, reconstructing what went wrong is difficult because the context, prompts, and execution path are ephemeral. Without systematic capture, you lose the ability to learn from failures.

Failure Categories

  • Hallucination: generated false information
  • Instruction ignored: neglected explicit directives
  • Context lost: forgot prior conversation
  • Wrong tool: applied incorrect methodology
  • Incomplete execution: started but abandoned tasks

Implementation

  1. Develop logging commands that preserve conversation context and classify failures systematically
  2. Establish organized storage using .claude/error-logs/ with timestamped entries
  3. Create success logging to identify which prompts consistently produce reliable results
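As a sketch of step 2, a minimal shell helper could append timestamped, categorized entries under .claude/error-logs/. The function name, environment variable, and file layout here are illustrative assumptions, not part of any official tooling:

```shell
# Illustrative error-logging helper; names and layout are assumptions.
# Usage: log_failure <category> <description...>
log_failure() {
    log_dir="${CLAUDE_LOG_DIR:-.claude/error-logs}"
    mkdir -p "$log_dir"
    stamp=$(date -u +"%Y%m%dT%H%M%SZ")            # timestamped entry
    category="$1"; shift
    entry="$log_dir/$stamp-$category.md"
    printf '## %s [%s]\n\n%s\n' "$stamp" "$category" "$*" >> "$entry"
    printf '%s\n' "$entry"                         # print the path for chaining
}
```

Invoked as `log_failure hallucination "cited a nonexistent API"`, it creates one markdown file per failure, so entries can later be grepped by category.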

KEY INSIGHT: "The tightest feedback loops produce the fastest learning." Convert ephemeral failures into documented lessons.

Core Concept

Slash commands transform Claude into a service platform. They're deterministic triggers for multi-step operations—not probabilistic natural language requests. The intelligence lives in the skill, but you get reliable execution through the command.

Capabilities

  • Launch parallel subagents for concurrent work
  • Access files, repositories, and browsers
  • Chain multiple operations together
  • Route to different models based on task requirements

Implementation

  1. Identify repetitive workflows with multiple consistent steps
  2. Design simple interfaces with memorable command names
  3. Create command files in .claude/commands/
  4. Consider model routing for optimal performance across phases
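Step 3 amounts to dropping a markdown file into .claude/commands/, where the file name becomes the command name and $ARGUMENTS stands in for whatever the user types after the command. A sketch, with an invented command name and prompt body:

```markdown
---
description: Summarize recent entries in .claude/error-logs/
---
Read the newest files under .claude/error-logs/ and summarize the
recurring failure categories. Focus the summary on: $ARGUMENTS
```

Saved as .claude/commands/summarize-errors.md, this would be invoked as /summarize-errors followed by a focus area.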

Example

A /presentation-to-shorts command can replace "what would traditionally require a content analysis tool, a project management system, multiple team handoffs, and hours of coordination."

Problem & Solution

Permission prompts exist for safety but disrupt workflow continuity. Rather than removing all safeguards through --dangerously-skip-permissions alone, combine this flag with hook-based validation for autonomous operation within safe boundaries.

Implementation

  1. Identify constraints — operations that should never execute autonomously (recursive deletions, force pushes, production config changes)
  2. Deploy pre-execution hooks — create bash scripts in .claude/hooks/
  3. Register hooks — configure through JSON settings
  4. Validate thoroughly — test blocked patterns intentionally
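The JSON registration in step 3 might look like the following in .claude/settings.json. The event name, matcher, and field names follow Claude Code's hook settings schema as I understand it, and the script path is illustrative, so verify against current documentation:

```json
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Bash",
        "hooks": [
          { "type": "command", "command": ".claude/hooks/block-dangerous-rm.sh" }
        ]
      }
    ]
  }
}
```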

Example: Block dangerous rm operations

# $COMMAND is the shell command under review (in a Claude Code hook it
# would be extracted from the JSON the hook receives on stdin)
if echo "$COMMAND" | grep -qE "rm\s+(-rf|-fr|--recursive).*(/|~|\$HOME)"; then
    echo "BLOCKED: recursive delete targeting a protected path" >&2
    exit 2   # non-zero exit tells the runner to abort the tool call
fi

KEY INSIGHT: Programmatic control through deterministic code rules creates superior outcomes compared to relying solely on probabilistic model guidance.

Warning Signs

  • Multiple unrelated projects in one session
  • Forgotten instructions in CLAUDE.md
  • CLAUDE.md files exceeding 50 lines
  • Claude referencing stale earlier discussions
  • Declining response quality over longer sessions

Compaction Strategies

  • Manual compaction control: disable automatic compaction and display context usage status
  • Clear breakpoints: use /clear with repo-specific config between work contexts
  • Orchestrator pattern: assign Claude as coordinator, launching specialized subagents for isolated tasks
  • Strategic manual compaction: invoke /compact at appropriate boundaries
  • Custom handoff command: implement /handoff {NOTES} for clean session transitions
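A /handoff command like the one above could itself be a simple command file. Everything here — the prompt wording and the .claude/handoffs/ location — is an illustrative sketch rather than a canonical implementation:

```markdown
---
description: Write a session handoff summary before /clear
---
Summarize this session so a fresh session can pick it up: goals,
decisions made, files touched, and open questions. Fold in these
notes from the user: $ARGUMENTS
Save the summary as a new markdown file under .claude/handoffs/.
```

Saved as .claude/commands/handoff.md, this is invoked as /handoff {NOTES}; after it runs, /clear starts the next session clean.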

Double-Escape Time Travel

A powerful but underutilized feature allowing conversation restoration:

  • Conversation-only restore: Clean polluted discussions, preserve code
  • Full restore: Revert both conversation and code to previous state

KEY INSIGHT: Treat context as a finite resource. Irrelevant tokens actively impair performance. Rigorous hygiene practices significantly enhance results.

Default Behavior

Claude spawns Sonnet or Haiku subagents by default, even for knowledge-intensive tasks that would benefit from more capable models. You must take explicit control over subagent selection.

Actionable Steps

  1. Force Opus for complex tasks — add "Always launch opus subagents" to config for knowledge-intensive work
  2. Monitor activity — track which subagents spawn, their tasks, and completion status
  3. Keep tasks focused — one clear objective per subagent, with well-defined inputs and outputs
  4. Parallelize work — spawn multiple independent subagents to maintain clean contexts
  5. Prefer more subagents over heavier ones — several focused subagents beat one overloaded with many tasks
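One way to apply step 1 is a dedicated agent definition that pins a capable model. The agent name, description, and prompt below are invented; the frontmatter fields follow Claude Code's agent file format as I understand it, so check current documentation:

```markdown
---
name: research-analyst
description: Knowledge-intensive analysis that needs deep reasoning
model: opus
---
You are a research analyst. Take exactly one clearly scoped question
per invocation and return a concise answer with explicit inputs and
outputs, so results can be verified before any handoff.
```

Saved under .claude/agents/research-analyst.md, this keeps complex reasoning on the stronger model regardless of the default subagent choice.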

Critical Warnings

  • Hallucination chains: subagent-to-subagent info passing can compound errors. Use deterministic verification between handoffs.
  • Subagent overload: multiple unrelated tasks create context pollution. Split them into separate, focused agents.
  • Model mismatch: complex reasoning delegated to lighter models produces inferior results.
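The deterministic verification recommended for hallucination chains can be as simple as a shell gate between handoffs. This is a sketch under the assumption that each subagent claims a set of output files; the function name is invented:

```shell
# Refuse to pass subagent output downstream unless every claimed
# artifact actually exists and is non-empty.
verify_handoff() {
    for f in "$@"; do
        if [ ! -s "$f" ]; then
            echo "handoff blocked: missing or empty artifact: $f" >&2
            return 1
        fi
    done
    return 0
}
```

Typical use between stages: `verify_handoff report.md data.csv || echo "re-run the subagent"`. The point is that a deterministic check, not another model call, decides whether the next stage runs.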

Core Principle

Context windows are finite resources. Rather than accumulating tools, ruthlessly evaluate whether each integration genuinely justifies its token consumption.

Recommended Essentials

  • Context7 MCP: provides current framework documentation, preventing reliance on outdated training data and reducing API hallucinations.
  • Dev Browser / Playwright MCP: enables browser automation and visual debugging; Dev Browser is preferable for faster execution.

Implementation

  • Audit existing tools weekly to identify unused integrations
  • Evaluate whether new tools warrant the overhead
  • Start minimally with only Context7 and Dev Browser
  • Monitor when context limitations actually emerge
  • Leverage Claude's native capabilities before seeking MCPs

KEY INSIGHT: Constraint as opportunity—a streamlined toolkit preserves context capacity for actual problem-solving.

Two Observations

  • Speaking is 3-4x faster than typing, and voice captures nuance better for most people.
  • Automation potential: prompt engineering follows repeatable patterns with templatable structures.

Reprompter Workflow

  1. Activate via keybind
  2. Dictate requirements verbally
  3. System poses clarifying questions
  4. Provide voice answers
  5. System generates a structured prompt with XML tags, role assignments, and embedded best practices
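The structured prompt produced in step 5 might look something like this — an invented example of the XML-tag shape, not output from any specific tool:

```xml
<role>You are a senior backend engineer.</role>
<context>Requirements dictated by voice, cleaned up by the reprompter.</context>
<task>Refactor the payment retry logic to be idempotent.</task>
<constraints>Keep the public API unchanged; add regression tests.</constraints>
```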

Alternative Approaches

  • Option 1, reprompter system: voice capture + global keybind + template-based enhancement + fast model processing
  • Option 2, model interviews: "Instead of trying to write the perfect prompt upfront, ask the model to interview you about what you want"

KEY INSIGHT: Eliminate manual prompt typing. Leverage voice input or conversational interviews to focus on solving actual problems instead of formatting.