We are witnessing a fundamental shift in software engineering. We are moving from writing code to orchestrating agents. Tools like Cursor, Antigravity, and Claude Code are not just autocomplete engines; they are capable junior developers.
But here lies the paradox: to make AI autonomous, you need to be stricter than ever.
If you ask an AI to "make a button," you get chaos. If you give an AI a rigid standard operating procedure (SOP) and the right tools, you get pixel-perfect, testable, production-ready code.
📐 The Blueprint: The "Rule File"
Context is king. The most powerful file in your repository isn't `index.ts`; it's your **Rule File**. Depending on your agent of choice, this lives in `.cursorrules` (Cursor), `.antigravity/rules` (Antigravity), or `CLAUDE.md` (Claude Code). This file tells the AI how to behave before it writes a single character.
By defining your "Atomic Component Guidelines" in memory, you transform the AI from a guesser into a conformer.
// 1. Always use functional components
// 2. Map tokens from @design/system
// 3. Must include *.stories.tsx
// 4. Verify with Puppeteer MCP
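Expanded into full sentences, the same guidelines might read like this in your rule file. The exact wording is yours to choose; only the constraints matter:
// Atomic Component Guidelines (example wording)
// - Write every component as a functional React component in TypeScript.
// - Never hardcode colors or spacing; map every value to a token from @design/system.
// - Every component ships a *.stories.tsx covering Default, Error, and Loading states.
// - Before finishing, verify the rendered Storybook story against the Figma node via the Puppeteer MCP tools.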
⚙️ Step 1: The Atomic Structure
Ambiguity is the enemy of automation. We define a folder structure so rigid that the AI creates it deterministically every time.
Every component is a universe. It contains its logic, its visual states (stories), its interface, and its tests.
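A directory layout that satisfies this contract might look like the tree below; the file names (`Button.test.tsx`, `types.ts`) are illustrative, so substitute whatever your repository already standardizes on:
Button/
├── Button.tsx          // Logic and rendering
├── Button.stories.tsx  // Visual states for Storybook
├── Button.test.tsx     // Unit tests
├── types.ts            // The public interface (props)
└── index.ts            // Barrel export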
🎨 Step 2: Context Extraction via Figma MCP
Historically, developers (and AI) "eyeballed" designs. "That looks like 16px padding."
With the Model Context Protocol (MCP), we can give agents direct access to the source of truth. The AI doesn't see a picture of the design; it reads the design data.
The Workflow
Technical Requirement
npx -y @modelcontextprotocol/server-figma
This server connects your agent to the Figma REST API (requires a PAT). It allows the agent to inspect the node hierarchy and extract exact tokens.
- Connect: The agent calls the `get_file` tool to fetch the entire Figma file structure.
- Inspect: It then calls `get_file_nodes` with a specific `node_id` to get granular component data.
- Extract: It receives JSON data: `{ paddingTop: 16, fills: [{ color: { r: 0.2, g: 0.4, b: 1 } }] }`.
- Map: It maps these raw values to your codebase's tokens (e.g., `$spacing-md`).
// Step 1: Get the file structure
await mcp.call("get_file", { file_key: "abc123XYZ" });
// Step 2: Drill into a specific component node
const nodeData = await mcp.call("get_file_nodes", {
file_key: "abc123XYZ",
ids: ["42:1337"] // The Figma node ID
});
// Returns: { nodes: { "42:1337": { document: { ... }, styles: { ... } } } }
How it works under the hood
The Figma MCP Server acts as a bridge. It authenticates with the Figma API and exposes tools to the AI agent.
Instead of "viewing" pixels, the agent inspects the Scene Graph. It reads the Frame properties, iterates through Children, and extracts specific STYLE references. This means it can distinguish between a hardcoded #F00 and a semantic Danger/Red token.
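To make the Map step concrete, here is a minimal sketch in TypeScript. The token table, the hex fallback, and the function names are assumptions for illustration, not part of any MCP server API; a real project would resolve Figma style references against its own theme.
// Illustrative only: map raw Figma values onto design-system tokens.
const SPACING_TOKENS: Record<number, string> = {
  8: "$spacing-sm",
  16: "$spacing-md",
  24: "$spacing-lg",
};

function mapSpacing(px: number): string {
  // Fall back to the raw pixel value when no token matches exactly.
  return SPACING_TOKENS[px] ?? `${px}px`;
}

function figmaColorToHex(color: { r: number; g: number; b: number }): string {
  const toHex = (c: number) => Math.round(c * 255).toString(16).padStart(2, "0");
  return `#${toHex(color.r)}${toHex(color.g)}${toHex(color.b)}`;
}

// Example: { paddingTop: 16, fills: [{ color: { r: 0.2, g: 0.4, b: 1 } }] }
// becomes "$spacing-md" plus a hex color the agent should then resolve to a
// semantic token (e.g. Danger/Red) rather than hardcode.
console.log(mapSpacing(16));                            // "$spacing-md"
console.log(figmaColorToHex({ r: 0.2, g: 0.4, b: 1 })); // "#3366ff"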
🔌 How Agents Connect to MCP Servers
AI agents (like Cursor, Claude Code, or Antigravity) don't natively "know" Figma or Puppeteer. Instead, they communicate with MCP Servers that act as bridges to external tools.
┌─────────────────────┐
│      AI Agent       │  (Cursor, Claude Code, etc.)
│    Your IDE / CLI   │
└──────────┬──────────┘
           │  MCP Protocol (JSON-RPC)
           ▼
┌─────────────────────────────────────────────────┐
│                MCP Server Layer                 │
├─────────────────────┬───────────────────────────┤
│    server-figma     │    server-puppeteer       │
│  (Figma REST API)   │    (Headless Chrome)      │
└─────────────────────┴───────────────────────────┘
          │                        │
          ▼                        ▼
     Figma Cloud             Local Browser
- You configure MCP servers in your agent's settings (e.g., `mcp.json`); a sample configuration follows this list.
- The agent spawns or connects to these server processes on startup.
- The agent calls tools like `get_file` or `puppeteer_screenshot`; the server does the actual work.
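A rough sketch of that configuration, assuming the common `mcpServers` schema used by Cursor and Claude Code. The Figma server additionally needs your personal access token supplied via an `env` entry whose exact variable name depends on the server's documentation:
{
  "mcpServers": {
    "figma": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-figma"]
    },
    "puppeteer": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-puppeteer"]
    }
  }
}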
🧱 Step 3: Implementation via Design System
The AI generates the component using only valid components from your Design System. No `<div>` soup. If the design needs a layout, it uses `<Box>` or `<Stack>`. If it needs text, it uses `<Typography>`.
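A sketch of what the agent should emit under this rule; `InfoCard` is a made-up example, and the prop names on `Box` and `Typography` (plus the `$color-surface` token) stand in for whatever your design system actually exports:
import { Box, Typography } from "@design/system";

// Hypothetical component; the point is the absence of raw <div>s and pixel values.
interface InfoCardProps {
  title: string;
  description: string;
}

export function InfoCard({ title, description }: InfoCardProps) {
  return (
    <Box padding="$spacing-md" background="$color-surface">
      <Typography variant="heading">{title}</Typography>
      <Typography variant="body">{description}</Typography>
    </Box>
  );
}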
🧪 Step 4: The Laboratory (Storybook)
How do you verify a component in isolation? Storybook.
The AI is mandated to write a .stories.tsx file. This isn't just for documentation; it's for verification. The AI must define the 'Default' state, the 'Error' state, and the 'Loading' state.
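A minimal sketch of that stories file using Storybook's Component Story Format; the `state` prop used for the Error and Loading variants is hypothetical and should map to whatever your `Button` actually accepts:
import type { Meta, StoryObj } from "@storybook/react";
import { Button } from "./Button";

const meta: Meta<typeof Button> = {
  title: "Components/Button",
  component: Button,
};
export default meta;

type Story = StoryObj<typeof Button>;

// The three states the rule file mandates.
export const Default: Story = { args: { label: "Submit" } };
export const Error: Story = { args: { label: "Submit", state: "error" } };
export const Loading: Story = { args: { label: "Submit", state: "loading" } };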
📸 Step 5: Visual Verification via Puppeteer
Here is the magic loop. Once the code is written and the story is running, the agent becomes the tester.
The Self-Correction Loop 🔁
Technical Requirement
npx -y @modelcontextprotocol/server-puppeteer
The "eyes" of your agent. This server launches a headless Chrome instance that the agent can control to capture screenshots for verification.
1. Navigate: The agent uses `puppeteer_navigate` to open the Storybook URL.
2. Capture: The agent calls `puppeteer_screenshot` to take a picture of what it built.
3. Compare: The agent compares the screenshot (Actual) with the Figma screenshot (Expected).
4. Fix: "The padding looks 4px too small." The agent rewrites the code and repeats the loop.
// Step 1: Navigate to the Storybook story
await mcp.call("puppeteer_navigate", {
url: "http://localhost:6006/?path=/story/button--primary"
});
// Step 2: Capture the rendered component
const screenshot = await mcp.call("puppeteer_screenshot", {
name: "button-primary-actual",
width: 800,
height: 600
});
// Step 3: Agent analyzes the image vs. expected design
// If a mismatch is detected, the agent edits the code and repeats.
Conclusion
By combining Rule Files, Figma MCP, Storybook, and Puppeteer, we create a closed-loop system where AI agents can safely and autonomously build UI.
The role of the developer shifts from "pixel pusher" to "system architect." You define the constraints; the agents fill the space.
