About Me
I build reliable AI coding agents at enterprise scale. Reliability doesn't come from better models; it comes from better context engineering.
My methodology: let friction surface the gaps. Every time I correct an agent twice, that correction becomes a persistent constraint. The result is behavioral specifications across the full Claude Code toolkit — CLAUDE.md, Skills, Rules, Hooks, Commands, Memory — that make agents reliable from the first prompt.
This approach has shipped four production applications with compounding velocity: each project ships faster than the last as learnings accumulate.
I'm now focused on the next level: moving from task prompting (making agents do X reliably) to behavioral engineering (shaping how agents reason before they act). Task prompts make agents good at specific jobs. Behavioral design makes them reliable at novel problems.
Context Engineering Principles
A systematic approach to making AI coding agents reliable at enterprise scale
Friction-Driven Refinement
Don't write comprehensive prompts upfront. Let conversation friction surface what's missing, then extract patterns into persistent context.
Full-Stack Context Architecture
Leverage the complete Claude Code toolkit: CLAUDE.md for behavioral specs, Skills for capabilities, Rules for constraints, Hooks for automation, Commands for workflows, Memory for persistence. Context inheritance flows from the project root down through the directory hierarchy.
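As an illustrative sketch of that layering (the frontend/backend split and the comments are hypothetical, not a specific project of mine), each nested CLAUDE.md adds constraints for the code closest to it:

```
repo/
├── CLAUDE.md              # project-wide spec: constraints, conventions, delegation triggers
├── .claude/
│   ├── commands/          # reusable slash-command workflows
│   └── settings.json      # hooks wired to tool events
├── frontend/
│   └── CLAUDE.md          # frontend-only rules: service layer enforcement, component patterns
└── backend/
    └── CLAUDE.md          # backend-only rules: migration policy, API contracts
```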
Behavioral Constraints Over Instructions
Explicit "NEVER" rules prevent the most common agent failure modes. Negative constraints are more reliable than positive instructions.
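A minimal sketch of what such constraints can look like inside a CLAUDE.md; the specific rules below are illustrative, not an excerpt from my production specs:

```markdown
## Constraints

- NEVER call fetch or axios directly from a component; all network access goes through the service layer.
- NEVER commit, push, or open a pull request unless explicitly asked.
- NEVER mark a task complete without running the verification script and pasting its output.
- NEVER invent API fields; if a contract is unclear, stop and ask.
```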
Tools Over Protocols
Local CLI tools (Puppeteer, Chrome DevTools) have far less overhead than MCP for agent self-verification. Give agents simple bash tools, not protocol negotiation.
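As a sketch of what "give agents eyes" means in practice, this is the kind of self-verification script an agent can run as one bash command; the URL, output path, and pass/fail policy here are assumptions, not a fixed part of the methodology:

```typescript
// verify-page.ts — run with: npx tsx verify-page.ts http://localhost:3000
import puppeteer from "puppeteer";

async function main() {
  const url = process.argv[2] ?? "http://localhost:3000"; // dev server URL is an assumption
  const browser = await puppeteer.launch({ headless: true });
  const page = await browser.newPage();

  // Collect runtime and console errors so the agent gets a pass/fail signal, not just pixels.
  const errors: string[] = [];
  page.on("pageerror", (err) => errors.push(`pageerror: ${err.message}`));
  page.on("console", (msg) => {
    if (msg.type() === "error") errors.push(`console: ${msg.text()}`);
  });

  await page.goto(url, { waitUntil: "networkidle0" });
  await page.screenshot({ path: "verify.png", fullPage: true }); // the agent's "eyes"
  await browser.close();

  if (errors.length > 0) {
    console.error(`FAIL: ${errors.length} error(s)\n${errors.join("\n")}`);
    process.exit(1);
  }
  console.log("PASS: page rendered cleanly; screenshot saved to verify.png");
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});
```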
Meta-Prompting
Use Claude to generate implementation prompts from Figma designs via MCP tools. The human curates; Claude compiles design → spec → code.
Compound Learning Loops
Document every solved problem in searchable format. Force agents to search past solutions before planning. The system gets smarter with every debugging session—knowledge compounds across sessions and projects.
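One way to make that loop concrete, as a sketch: a tiny append-and-search log of solved problems that the agent must query before planning. The file location and entry shape are hypothetical, not the system I actually run:

```typescript
// solutions.ts — append solved problems and search them before planning new work.
import { appendFileSync, existsSync, readFileSync } from "node:fs";

const LOG = "docs/solutions.jsonl"; // hypothetical location for the searchable log

interface Solution {
  date: string;
  problem: string;   // symptom as observed (error message, failing behavior)
  rootCause: string; // what was actually wrong
  fix: string;       // what resolved it, in one or two sentences
  tags: string[];    // e.g. ["auth", "race-condition"]
}

// Record a solved problem as one JSON line so plain grep also works.
export function record(entry: Solution): void {
  appendFileSync(LOG, JSON.stringify(entry) + "\n");
}

// Naive keyword search the agent runs before planning; returns prior fixes mentioning the query.
export function search(query: string): Solution[] {
  if (!existsSync(LOG)) return [];
  const q = query.toLowerCase();
  return readFileSync(LOG, "utf8")
    .split("\n")
    .filter(Boolean)
    .map((line) => JSON.parse(line) as Solution)
    .filter((s) =>
      [s.problem, s.rootCause, s.fix, ...s.tags].join(" ").toLowerCase().includes(q)
    );
}
```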
Artifacts
Actual prompt engineering work and methodology artifacts
Implementation Prompts
Multi-step Figma-to-React implementation specifications with 1500+ line prompts including component inventories, state machines, and acceptance criteria. Generated via Claude + Figma MCP iteration, executed by Claude Code.
CLAUDE.md Examples
Frontend behavioral specifications with "NEVER" constraints, service layer architecture enforcement, autonomous debugging workflows, and agent delegation triggers that make agents deterministic executors.
Key Insight
The goal isn't to remove agent thinking—it's to shape it. Deterministic specs handle the 80% where execution matters. Behavioral constraints handle the 20% where agents must reason. The art is knowing which is which.
Philosophy
"We are not prompting anymore. We are orchestrating."
Context engineering beats prompting. The CLAUDE.md is where the magic lives — it's the constitution that makes agents reliable.
Each project teaches you where humans are doing work agents could do. Find the bottleneck. Give the agent eyes and hands. Encode the learnings. Compound.
The next frontier isn't better prompts; it's designing agent behavior. Not "do X" but "think this way before deciding." L2 (task prompting) tells agents what to do; L3 (behavioral engineering) shapes how they reason. The difference: one makes agents execute, the other makes them reliable at novel tasks.
Education
Bachelor's Degree in Mathematics and Computer Science
University of California at Riverside