About Me
I've spent the last 18 months discovering what makes AI coding agents succeed or fail at enterprise scale. The answer isn't better models — it's better context engineering.
My methodology: let friction surface the gaps. Every time I have to correct an agent twice, that correction becomes a persistent constraint. The result is CLAUDE.md files that function as "compressed histories of agent failures" — behavioral specifications that make agents reliable from the first prompt.
I've applied this approach to ship four production applications in under six months, with velocity compounding as learnings carry over from one project to the next.
Context Engineering Principles
A systematic approach to making AI coding agents reliable at enterprise scale
Friction-Driven Refinement
Don't write comprehensive prompts upfront. Let conversation friction surface what's missing, then extract patterns into persistent context.
Directory Hierarchy for Context Inheritance
Parent CLAUDE.md files contain domain knowledge (functional specs, business rules). Child CLAUDE.md files contain implementation knowledge (tooling, patterns, constraints). Agents inherit both.
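As an illustrative sketch (paths invented for this example), the hierarchy might look like:

```
web-app/
├── CLAUDE.md          # domain knowledge: functional specs, business rules
└── frontend/
    ├── CLAUDE.md      # implementation knowledge: tooling, patterns, constraints
    └── src/
```

An agent working in frontend/src/ reads both files, inheriting domain context from the parent and implementation constraints from the child.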
Behavioral Constraints Over Instructions
Explicit "NEVER" rules prevent the most common agent failure modes. Negative constraints are more reliable than positive instructions.
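A hypothetical fragment (rules invented for illustration, not drawn from my actual files) showing the shape these constraints take in a CLAUDE.md:

```markdown
## Constraints
- NEVER call fetch directly from components; go through the service layer.
- NEVER commit without running the lint and type-check scripts.
- NEVER invent API endpoints; if one is missing, stop and ask.
```

A NEVER rule closes off a failure mode entirely, while a positive instruction still leaves the agent room to improvise.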
Tools Over Protocols
Local CLI tools (Puppeteer, Chrome DevTools) have far less overhead than MCP for agent self-verification. Give agents simple bash tools, not protocol negotiation.
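A minimal sketch of what "simple bash tools" means in practice, assuming Chrome is installed locally and a dev server is running (URL and output path are placeholders):

```shell
#!/usr/bin/env bash
# verify-ui.sh: one command the agent can run to see the app it just changed.
# Uses Chrome's built-in headless screenshot mode; no MCP server, no protocol.
set -euo pipefail
URL="${1:-http://localhost:3000}"
OUT="${2:-/tmp/agent-check.png}"
google-chrome --headless --disable-gpu \
  --screenshot="$OUT" --window-size=1280,800 "$URL"
echo "screenshot written to $OUT"
```

The agent invokes `./verify-ui.sh` after each change and inspects the result itself, closing the loop without a human in the middle.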
Meta-Prompting
Use Claude to generate implementation prompts from Figma designs via MCP tools. The human curates; Claude compiles design → spec → code.
Artifacts
Actual prompt engineering work and methodology artifacts
Implementation Prompts
Multi-step Figma-to-React implementation specifications with 1500+ line prompts including component inventories, state machines, and acceptance criteria. Generated via Claude + Figma MCP iteration, executed by Claude Code.
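A skeletal outline of one of these specifications (section names illustrative, not an excerpt from a real prompt):

```markdown
# Implementation Spec: <screen name>
## Component Inventory
One entry per Figma component: name, props, variants.
## State Machine
Every state, event, and transition made explicit; no implicit states.
## Acceptance Criteria
Verifiable checks the agent must pass before reporting done.
```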
CLAUDE.md Examples
Frontend behavioral specifications with "NEVER" constraints, service layer architecture enforcement, autonomous debugging workflows, and agent delegation triggers that make agents deterministic executors.
Key Insight
These prompts aren't instructions — they're deterministic specifications that remove decision-making from the agent. The agent becomes an executor, not an architect.
Philosophy
"We are not prompting anymore. We are orchestrating."
Context engineering goes beyond prompting. The CLAUDE.md is where the magic lives — it's the constitution that makes agents reliable.
Each project teaches you where humans are doing work agents could do. Find the bottleneck. Give the agent eyes and hands. Encode the learnings. Compound.
Education
Bachelor's Degree in Mathematics and Computer Science
University of California, Riverside