Skills externalize project knowledge so Claude remembers what you forget. Structure them as main file + references for complex domains.
Part of the Claude Code Field Guide series.
Skills are the core of my Claude Code stack. They're how I teach Claude to do my work so I don't have to remember everything.
## The Problem Skills Solve
Every project has accumulated knowledge: why certain architectural decisions were made, which data sources are reliable, what the known edge cases are. This knowledge lives in your head, scattered docs, or tribal memory that evaporates when you context-switch.
Skills externalize that knowledge. When I return to a project after months away, I don't need to re-learn the codebase. Claude already knows it.
## Simple Skills
A basic skill is a markdown file with instructions:
```markdown
# Data Pipeline Skill

Run pipelines with: `python -m pipelines.main --date YYYY-MM-DD`

Known issues:
- GPS data before 2024 has timezone bugs
- Temperature sensor on device OK-YUU77 reads 5° high
```
This alone saves hours of archaeology when revisiting old projects.
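Captured this way, a known issue can also feed straight into code. A minimal sketch (the `SENSOR_OFFSETS` table and function are my illustration; only the 5° bias on OK-YUU77 comes from the skill above):

```python
# Hypothetical per-device correction table, seeded from the skill's
# known-issues list: OK-YUU77's temperature sensor reads 5° high.
SENSOR_OFFSETS = {"OK-YUU77": -5.0}

def corrected_temperature(device_id: str, reading: float) -> float:
    """Apply the per-device bias documented in the skill (0 if unknown)."""
    return reading + SENSOR_OFFSETS.get(device_id, 0.0)
```

The point is less the code than the provenance: the skill file is where the offset is documented, so the correction never has to be rediscovered.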
## Complex Skills with Reference Files
For complex domains, I split skills into a main file plus reference documentation:
```text
.claude/skills/takeoff-detection/
├── skill.md                      # Quick reference, thresholds, usage
└── reference/
    ├── algorithm.md              # Two-stage detection logic
    ├── aircraft-behavior.md      # Per-aircraft patterns
    ├── investigation-queries.md  # SQL for debugging
    └── validation.md             # Threshold analysis
```
The main skill.md stays focused on what Claude needs to act. Reference files provide depth for investigation and edge cases. Claude can pull in the relevant reference when needed without loading everything into context.
Here's a real example. My takeoff detection skill for aircraft telemetry has a main file with quick-reference tables:
```markdown
# Takeoff Detection Algorithm

## Quick Reference

| Threshold | Value | Notes |
|-----------|-------|-------|
| RPM takeoff power | 4000 | Significant power application |
| Course tolerance | ±10° | Must be aligned with runway |
| IAS rotation speed | 75 kph | Liftoff speed for distance metrics |

## Key Files

| File | Purpose |
|------|---------|
| `flight_phase_detection.py` | Phase detection logic |
| `mart_takeoff_performance.sql` | Aggregated metrics |

## Quick Investigation Query

    SELECT datetime, rpm1, indicated_airspeed_kph
    FROM silver_layer.telemetry
    WHERE operation_flight_id = '{flight_id}'
      AND datetime BETWEEN takeoff_moment - INTERVAL 30 SECOND
                       AND takeoff_moment

## Detailed Documentation

See reference files for: algorithm details, aircraft-specific behavior,
validation queries, rolling start detection.
```
The reference files (`reference/algorithm.md`, `reference/aircraft-behavior.md`) contain the deep technical documentation: detection rules, known limitations, per-aircraft quirks. Claude reads the main skill for quick answers and dives into the references for investigation.
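The load-on-demand pattern can be sketched as a tiny loader: the main file is read eagerly, reference files only on first request. This is an illustration of the idea, not how Claude Code actually implements skills; class and path names are mine.

```python
from pathlib import Path

class Skill:
    """Sketch of main-file + lazy reference loading for a skill directory."""

    def __init__(self, root: str):
        self.root = Path(root)
        # The main file is small and always loaded.
        self.main = (self.root / "skill.md").read_text()
        self._refs: dict[str, str] = {}  # cache of references loaded so far

    def reference(self, name: str) -> str:
        """Read reference/<name>.md only when a question calls for it."""
        if name not in self._refs:
            path = self.root / "reference" / f"{name}.md"
            self._refs[name] = path.read_text()
        return self._refs[name]
```

The design choice mirrors the context budget: everything in `skill.md` pays rent on every use, while reference files cost nothing until an investigation needs them.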
## Migration Decision Skills
Another pattern: skills that capture research and decision-making, not just operational knowledge.
My Spark-to-Polars migration skill documents a comprehensive evaluation:
```markdown
# Spark to Polars Migration

## Decision Status: POSTPONED (Dec 2025)

Migration is 50/50: no urgent need, but Polars is viable when a reason arises.

### Why Postponed
- Local Spark processing works and costs $0
- No active pain point
- Working system with 4500+ lines of battle-tested code

### When to Revisit
- Starting a greenfield project
- Data volume grows significantly
- Want to move to Cloud Run Jobs (Polars fits better)

## Research Findings

| Tool | Sweet Spot | Your Fit |
|------|-----------|----------|
| Spark | TB-scale distributed | Overkill for 17 MB |
| Polars | MB to ~100 GB single-node | Perfect fit |

## Translation Reference

[700+ lines of Spark → Polars pattern translations]

## Verdict

If migrating, migrate everything. Hybrid is worst of both worlds.
```
This skill saves future-me from re-researching the same question. Claude can reference the decision and reasoning without repeating the analysis.
## Skills as Documentation Abstraction
A pattern I've refined: skills as an abstraction layer over project documentation.
Instead of duplicating information, skills reference canonical docs and add LLM-specific guidance. The key: force skills to reference specific files and sections, not vague pointers. This keeps context efficient—Claude loads exactly what it needs.
```markdown
# Architecture Skill

## References (load on demand)
- System overview: `docs/architecture.md#system-design`
- Service patterns: `docs/architecture.md#service-patterns`
- ADRs: `docs/decisions/` (check before proposing new patterns)

## Constraints to Enforce
- All new services must use the event bus
- No direct database access from API handlers
- Feature flags for all user-facing changes

## When Making Architectural Changes
1. Read the relevant ADR in `docs/decisions/`
2. Check if similar decisions exist
3. Follow established patterns
4. **Update docs if behavior changes**
This keeps documentation as the single source of truth while giving Claude a unified interface to access it.
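Resolving a pointer like `docs/architecture.md#system-design` takes only a few lines of stdlib Python. A sketch, with one stated assumption: anchors map to headings by the common lowercase-and-hyphenate slug convention.

```python
import re
from pathlib import Path

def load_section(ref: str) -> str:
    """Return only the referenced section of a markdown doc.

    `ref` looks like 'docs/architecture.md#system-design'.
    """
    path, _, anchor = ref.partition("#")
    out, capturing, level = [], False, 0
    for line in Path(path).read_text().splitlines():
        m = re.match(r"(#+)\s+(.*)", line)
        if m:
            slug = re.sub(r"[^a-z0-9]+", "-", m.group(2).lower()).strip("-")
            if slug == anchor:
                capturing, level = True, len(m.group(1))
            elif capturing and len(m.group(1)) <= level:
                break  # next same-or-higher heading ends the section
        if capturing:
            out.append(line)
    return "\n".join(out)
```

This is the mechanism behind "loads exactly what it needs": a section reference costs a few hundred tokens instead of the whole document.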
## Keeping Docs Current
Skills reference docs, but docs go stale. I enforce automatic updates in my global CLAUDE.md:
```markdown
## Documentation

When you change behavior that's documented:
1. Find the relevant doc file
2. Update it to reflect the change
3. If no doc exists and the change is significant, ask if I want one created
```
This creates a virtuous cycle: skills point to docs, Claude updates docs when code changes, docs stay accurate. Without this rule, I found documentation drifted from reality within weeks.
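Drift can also be caught mechanically. A hypothetical staleness check (not from the article; the doc-to-source mapping is something you would maintain per project): flag any doc whose source files changed more recently than the doc itself.

```python
from pathlib import Path

def stale_docs(pairs: dict) -> list:
    """Return doc paths whose sources changed more recently than the doc.

    `pairs` maps a doc file to the list of source files it documents.
    """
    stale = []
    for doc, sources in pairs.items():
        doc_mtime = Path(doc).stat().st_mtime
        if any(Path(s).stat().st_mtime > doc_mtime for s in sources):
            stale.append(doc)
    return stale
```

Run in CI, this turns "docs drifted within weeks" into a failing check instead of a discovery months later.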
## Semi-Automatic Workflow Skills
Some processes need human decisions at key points. Skills can guide these:
```markdown
# Release Skill

## Pre-release Checklist
1. Run `npm test` - all must pass
2. Run `npm run build` - verify no errors
3. **ASK USER**: Which version bump? (patch/minor/major)
4. Update CHANGELOG.md
5. **ASK USER**: Review changelog entry
6. Create git tag
7. **ASK USER**: Ready to push to production?
8. Run `./deploy.sh production`
```
Claude follows the script, pauses for decisions, then continues. The workflow is documented and repeatable without being fully automated.
## Making Skills Actually Activate
Skills have a problem: Claude often ignores them. I wrote about my solution in *Forcing Claude Code Skills to Actually Activate*. The short version: a hook that forces explicit skill evaluation before every response, achieving 84% activation vs 20% baseline.
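The mechanics are in that post; the rough shape of such a hook is a script whose stdout is injected as context, listing every skill and demanding an explicit applies/doesn't-apply call. This is a simplified sketch under that assumption, not the actual hook from the article:

```python
#!/usr/bin/env python3
"""Sketch of a prompt hook that forces per-skill evaluation."""
from pathlib import Path

def skill_reminder(skills_dir: str = ".claude/skills") -> str:
    """Build a checklist of installed skills for Claude to evaluate."""
    skills = sorted(p.name for p in Path(skills_dir).iterdir() if p.is_dir())
    lines = ["Before responding, state for EACH skill whether it applies:"]
    lines += [f"- {name}" for name in skills]
    return "\n".join(lines)

if __name__ == "__main__":
    # A hook's stdout becomes additional context for the response.
    print(skill_reminder())
```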
Next: Subagents: Orchestration Without Context Pollution
Back to Claude Code Field Guide