From theory to practice
The previous sessions explained what AI is, how models work, which tools exist, and how maturity evolves. That helps you understand the map. But understanding and operating are different things.
There’s no universal recipe. What works for a senior dev in a large project is different from what works for a PM validating an idea. But the principles apply to everyone.
Why this matters
Most AI frustration comes from two places:
- Bad input: the request was vague, so the answer was generic. That’s not the model’s fault. It’s a specification problem.
- Missing validation: the output was accepted without checking, and the issue appeared in production. That’s not the tool’s fault. It’s a process problem.
This session attacks both: what to do before asking, and what to do before accepting.
Before asking: the context checklist
Before sending anything to AI, answer four questions.
1. What is the goal?
Not what you want AI to do. What final outcome do you need?
“Generate a modal component” isn’t a useful goal. “I need a confirmation modal shown when a user tries to delete an account, with cancel and confirm actions, and background scroll locked” is a goal.
2. What is the context?
What does the model need to know beyond the request?
- Which files are relevant? Types, existing components, patterns
- Which framework or library are we using?
- Which team conventions matter?
- Is there something similar in the project that should be used as reference?
3. What are the constraints?
What should the model not do?
- “Do not add an external dependency”
- “Keep compatibility with the current API”
- “Do not change the public interface of this module”
- “Use CSS Modules, not Tailwind”
Constraints prevent the model from making decisions you’ll need to undo later.
4. What output format do you expect?
How should the result come back?
- A full file or a diff?
- With tests or without tests?
- With explanation or only code?
- Folder structure or everything in one file?
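Put together, the four answers form a single request. A hypothetical example, reusing the modal from earlier (the component names and file paths are invented for illustration):

```text
Goal: a confirmation modal shown when a user tries to delete their
account, with cancel and confirm actions and background scroll locked.

Context: React with TypeScript. Reuse the existing Button component in
src/components/Button and follow the pattern of the nearby dialogs.

Constraints: no new dependencies. CSS Modules, not Tailwind. Do not
change the public interface of the Button component.

Format: a single .tsx file plus a Vitest test file. Code only, no
explanation needed.
```

Notice how each of the four questions maps to one block. A request this specific takes two minutes longer to write and saves a round of generic answers.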
Before accepting: the validation checklist
The model generated something. Before accepting, check it.
Do tests pass?
If the project has tests, run them. If the model generated tests, read them first. A test that passes but tests nothing creates fake confidence.
Do types pass?
Run typecheck. AI can generate code that looks right on the page but fails the moment the compiler checks it.
Does lint pass?
Lint catches what AI ignores: import order, naming, and style rules.
Did you read the code?
Not skimmed. Read. Every unread line can hide a bug, security issue, or design decision that doesn’t fit your context. For a 50-line component, two minutes is enough.
Does it solve the original problem?
Compare the output with the original goal. The model can generate something technically correct that doesn’t solve the actual problem.
Real examples
PM: using AI to refine specs
Situation: Beatriz is a PM writing a spec for push notifications.
How she operates:
- Writes a draft spec with the requirements she knows
- Sends it to the model: “Review this spec and point out gaps. Which edge cases did I miss?”
- The model flags missing details: OS-level settings, retry behavior, notification priority
- Updates the spec and sends it to the team
What she doesn’t do: ask the model to write the spec from scratch. The domain knowledge is hers; the model finds holes.
Junior dev: learning and implementing
Situation: Lucas is a junior dev implementing pagination in an API he’s never touched.
How he operates:
- Reads the existing API code first
- Asks the model to explain cursor pagination versus offset/limit
- Chooses cursor-based and asks for the implementation using existing project types
- Reviews the code, runs tests, and adjusts what doesn’t match team conventions
What he doesn’t do: ask “make pagination” before understanding the concept.
Senior dev: accelerating and reviewing
Situation: Carlos needs to migrate 15 Express controllers to Fastify.
How he operates:
- Migrates the first controller manually to define the pattern
- Writes a rules file describing the migration conventions
- Uses a coding agent to migrate the remaining controllers one by one, validating tests each time
- Reviews diffs: did the agent follow the pattern? Do tests pass?
What he doesn’t do: say “migrate everything” without defining the pattern first.
Building repeatable artifacts
What separates “using AI sometimes” from “operating AI-native” is having artifacts that make the process repeatable.
Rules files
Files that tell the agent how to operate in the project (AGENTS.md, CLAUDE.md in Claude Code, .cursor/rules/). A good rules file is specific and short: “TypeScript strict. CSS Modules. Tests with Vitest. Components in PascalCase.”
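As a sketch, a rules file for the kind of project described above might read like this (the specific conventions are invented for illustration):

```markdown
# AGENTS.md

- TypeScript strict mode; no `any` without a comment explaining why.
- Styling with CSS Modules. Do not add Tailwind.
- Tests with Vitest, colocated next to the file under test.
- Components in PascalCase, one component per file.
- Never add a dependency without asking first.
```

Each line is a decision the agent no longer has to guess, and a decision you no longer have to repeat in every prompt.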
Prompt templates
Reusable prompts for common situations: generating tests from specs, reviewing PRs for security and performance, explaining modules to newcomers. They’re starting points, not rigid scripts.
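A template differs from a one-off prompt by leaving slots to fill. A hypothetical review template, as a sketch:

```text
Review the diff below for {focus: security | performance | readability}.
Project context: {framework, conventions, relevant files}.
Constraints: do not suggest new dependencies; keep the public API stable.
Output: a list of findings ordered by severity, each with file and line.

{paste diff here}
```

The braces mark the parts that change per use; everything else stays stable, which is what makes the template worth keeping.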
Validation checklists
The “before accepting” checklist can live in the repo, in the PR template, or in CI steps. The important part is that it exists and people use it.
Validation scripts
Commands that automate checks: npm run typecheck && npm run lint && npm run test. Pre-commit hooks and CI that block merges make the process less dependent on discipline.
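One way to wire this up, assuming an npm project where those three checks exist, is a single verify script in package.json (the script names and tool choices here are assumptions, not requirements):

```json
{
  "scripts": {
    "typecheck": "tsc --noEmit",
    "lint": "eslint .",
    "test": "vitest run",
    "verify": "npm run typecheck && npm run lint && npm run test"
  }
}
```

A pre-commit hook or a CI job can then call npm run verify, so the gate stops depending on anyone remembering to run three commands.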
Where this breaks
- Checklists as bureaucracy. If the checklist becomes a ritual you fill in without thinking, it has lost its purpose.
- Skipping validation when rushed. “I know it’s fine, no need to run tests” is how many production bugs are born.
- Automating too early. If you don’t know the right process yet, automation freezes the wrong one. Do it manually, find the pattern, then automate.
Takeaway
- Before asking: define goal, context, constraints, and format
- Before accepting: run tests, typecheck, lint, and read the code
- Start with a rules file for your project. It has the highest return on effort
- Manual first, then pattern, then automation. In that order