
2 Months with Claude Code: A Field Report

Daily use for two months — what clicked, what burned hours, and the features nobody writes about but everyone should use.

April 8, 2026 · 11 min read · Review
TL;DR

Claude Code isn't Copilot. It's an agent that reads the whole codebase, plans, edits across many files, runs tests, and fixes its own mistakes. The wins come from the .claude folder, subagents, hooks, MCP, and knowing when to switch models. The losses come from long CLAUDE.md files and polluted context.

Two months in, I've spent more hours in a terminal with Claude than in conversation with humans. Courses, certifications, real projects, real breakages, real fixes. This is the field report — what actually works, what doesn't, and the features that changed how I ship.

Claude Code isn't what you think

The single biggest mistake: treating it like Copilot. It isn't. It's an agent that reads the codebase, plans an approach, edits files across the project, runs tests, sees errors, and fixes them. You tell it the goal — it figures out the steps.

It lives in the terminal by default, but also runs as a VS Code and JetBrains plugin, a desktop app, a web app at claude.ai/code, in Slack, in GitHub Actions, and remotely from mobile. The terminal is still the fastest surface for real work.

Pick the right model, not the biggest

Match effort to task. Switch mid-session with /model opus when a hard problem lands. In week one I burned roughly $200 on Opus because it "felt smarter." Sonnet would have done most of it for a fraction of the cost.

Reasoning depth is independent of model. /effort low for syntax questions, /effort high for debugging, /effort max for anything going to production. Tune both, not just the model.

The .claude folder is the real product

Most people miss 80% of the value here. There are actually two .claude directories: one in the project (committed, shared with the team) and one in ~/.claude/ (personal, machine-local).
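As a rough sketch (any file names beyond the ones discussed in this post are illustrative), the two locations look like:

```text
project/
  .claude/            # committed, shared with the team
    CLAUDE.md
    rules/
    agents/
  .mcp.json           # MCP server config, project root
~/.claude/            # personal, machine-local
  CLAUDE.md
  skills/
```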

CLAUDE.md — the most important file

Loaded into the system prompt every session. Whatever's there, Claude follows. Consistently.

What belongs in it: the commands to build, test, and lint; the conventions a linter can't enforce (error-handling shape, naming, directory layout); the gotchas that aren't obvious from reading the code.

What doesn't belong: anything a linter handles, full docs that should live in a linked file, theoretical paragraphs.

Keep it under 200 lines. Longer files eat context and instruction-following degrades. My 400-line CLAUDE.md was half-ignored. Cut to 150 — everything improved.
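For scale, a CLAUDE.md in that spirit might look like this; every heading and value here is illustrative, not a canonical template:

```markdown
# Project notes for Claude

## Commands
- Build: npm run build
- Test: npm test (run before claiming a task is done)

## Conventions
- All API handlers return { data, error }
- Prefer async/await; no raw callbacks

## Gotchas
- scripts/migrate.js must only run against a local DB
```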

The rules/ folder — modular at team scale

Every markdown file inside .claude/rules/ loads alongside CLAUDE.md. Each file stays focused (api-conventions.md, testing.md, security.md). Even better, add a YAML header with paths: and the rule only fires when Claude touches matching files:

---
paths:
  - "src/api/**/*.ts"
  - "src/handlers/**/*.ts"
---
All handlers return { data, error } shape.
Use zod for request body validation.

Features that changed how I work

Multi-file editing

This is where Claude Code leaves other tools behind. I refactored a full Express app from callbacks to async/await in one session — 23 files, all correct, with a per-file diff shown for review before anything was applied. No Tab-completion tool gets near this.

Subagents

It took me three weeks to start using these, and I regret every day of it.

Claude can spawn specialized subagents that run in isolation — read-only explorers on Haiku, planners before implementing, general agents for clean-context multi-step tasks. When you run a full test suite in the main conversation, hundreds of lines of output pollute the context. Subagents do the dirty work and return a compressed summary.

Custom agents

Drop a markdown file in .claude/agents/security-reviewer.md with a YAML header describing the role, allowed tools, and model. Now Claude auto-delegates security reviews — or you call it directly with /security-reviewer. Restrict tools (a security auditor only needs Read, Grep, Glob). Use Haiku for cheap read-heavy work, Opus for deep analysis.
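A minimal agent file might look like the sketch below. The frontmatter keys follow Claude Code's documented subagent fields; the role text and tool list are illustrative:

```markdown
---
name: security-reviewer
description: Reviews changes for injection, authz, and secret-handling issues
tools: Read, Grep, Glob
model: opus
---
You are a security reviewer. Examine the referenced files for
SQL injection, missing authorization checks, and hardcoded secrets.
Report findings as a prioritized list. Do not edit files.
```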

MCP — connect Claude to everything

Model Context Protocol is where Claude Code goes from coding helper to workflow orchestrator. Connect it to GitHub, Slack, PostgreSQL, Jira, Figma, Sentry, Notion, Playwright, and more via .mcp.json.

In practice, that means plain-language requests that cross systems: pull the latest Sentry errors, find the matching Jira ticket, query Postgres for the affected rows, all without leaving the session. One tool, connected to everything. No tab switching. No copy-paste between systems.
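A .mcp.json entry follows the mcpServers shape. The GitHub server package and env wiring below are one plausible setup, so check each server's own README for its exact command and args:

```json
{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": { "GITHUB_PERSONAL_ACCESS_TOKEN": "${GITHUB_TOKEN}" }
    }
  }
}
```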

Hooks — deterministic automation

CLAUDE.md instructions are guidance. Hooks are deterministic — shell scripts that fire at defined events, every time, no exceptions.

A PreToolUse hook that blocks rm -rf, git push --force, and DROP TABLE has saved me twice already. A Stop hook running npm test prevents false "done" claims.
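The guard itself can be a few lines of shell. Below is a sketch of just the pattern-matching core; `check_command` and its pattern list are my own names, and reading the hook's JSON input from stdin is omitted. Return code 2 is the signal Claude Code treats as a block:

```shell
# Sketch of a PreToolUse guard: return 2 (the "block" signal)
# when the proposed command matches a destructive pattern.
check_command() {
  cmd="$1"
  for pattern in 'rm -rf' 'git push --force' 'DROP TABLE'; do
    case "$cmd" in
      *"$pattern"*)
        echo "blocked: matches '$pattern'" >&2
        return 2
        ;;
    esac
  done
  return 0
}
```

Wire it into settings.json as a PreToolUse hook and it fires on every tool call, no exceptions.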

Skills — reusable workflows

Skills are packaged workflows Claude invokes when the conversation matches, or you launch with a slash command. Unlike simple commands, skills can include companion files (@DETAILED_GUIDE.md). The viral /last30days skill scans Reddit, X, and HN over 30 days on any topic and returns ready-to-use prompts. Personal skills live in ~/.claude/skills/ and work in every project.
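A personal skill is just a folder under ~/.claude/skills/ containing a SKILL.md. The frontmatter keys below match the skills format; the skill name and steps are a made-up example:

```markdown
---
name: changelog-entry
description: Draft a CHANGELOG entry from the current branch's commits
---
1. Run `git log main..HEAD --oneline` to collect the commits.
2. Group them into Added / Changed / Fixed.
3. Append the entry to CHANGELOG.md under the Unreleased heading.
```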

Workflows worth stealing

Interview me

Starting a complex project? Don't write a 500-word prompt. Just say:

claude "interview me in detail about what i want to build"

Let Claude ask the questions. Ten minutes of Q&A builds better context than any prompt you'd write from scratch.

Research, then implement

Shift+Tab twice to enter plan mode. Ask Claude to explore and produce a plan. Review. Then — and only then — approve and implement. The quality jump from "just do it" to "understand first, then do it" is enormous on legacy code.

Parallel worktrees

Run claude --worktree auth-feature and claude --worktree billing-feature in separate terminals. Two features developed simultaneously, isolated branches, merge when ready. I sometimes run three at once.

Context management

This is the #1 skill separating good users from great. The context window is about 200K tokens, and it runs out faster than you expect. Actively manage it: /clear between unrelated tasks, push noisy output like test runs and log dumps into subagents, /compact when a long session is worth keeping, and plan before touching many files.

Golden rule: after two failed attempts, stop. Don't push through. /clear and a fresh prompt beats a polluted context full of failed approaches. I learned this at the cost of six hours.

Mistakes I made so you don't have to

  1. Novel-length CLAUDE.md — kept under 200 lines or it gets half-ignored.
  2. No /clear between tasks — CSS-bug context has no business in an API redesign.
  3. Fighting instead of restarting — polluted context compounds. Reset.
  4. Ignoring subagents — test output shouldn't live in the main conversation.
  5. Opus for everything — Sonnet handles 90% of tasks just as well.
  6. Skipping plan mode — always plan when 3+ files are involved.
  7. Passive about context — treat the window as a resource, not a given.

Claude Code vs Cursor vs Copilot

Claude Code wins on heavy lifting — refactors, architecture, debugging, anything touching more than three files. Cursor wins on Tab autocomplete and inline edits inside the IDE. Copilot has the cheapest tier and the safest enterprise story.

You don't have to pick. Experienced developers average 2.3 AI coding tools. Most power users run Cursor for inline flow and Claude Code for everything else.

The bottom line

Claude Code isn't a tool. It's a partner that reads the entire codebase, follows your standards, runs tests, remembers your preferences, and gets better every time you use it.

Developers who learn to work with it — not paste into it — are shipping at a pace that was unthinkable a year ago. Stop saving articles. Start building. If you want the short reference, the cheatsheet and best practices pages are the fastest way in. If you want deeper territory, the 15 hidden features write-up is a good next stop.

Want the full Claude Code reference? Open the cheatsheet →