Engineering

AI-First Component Development: The Ultimate Guide to Automating UI with Agents πŸ€–

β€’8 min read

We are witnessing a fundamental shift in software engineering. We are moving from writing code to orchestrating agents. Tools like Cursor, Antigravity, and Claude Code are not just autocomplete enginesβ€”they are capable junior developers.

But here lies the paradox: To make AI autonomous, you need to be stricter than ever.

If you ask an AI to "make a button," you get chaos. If you give an AI a rigid standard operating procedure (SOP) and the right tools, you get pixel-perfect, testable, production-ready code.

πŸ“œ The Blueprint: The "Rule File"

Context is king. The most powerful file in your repository isn't `index.ts`β€”it's your **Rule File**. Depending on your agent of choice, this lives in .cursorrules (Cursor), .antigravity/rules (Antigravity), or CLAUDE.md (Claude). This file tells the AI how to behave before it writes a single character.

By defining your "Atomic Component Guidelines" in memory, you transform the AI from a guesser into a conformer.

.cursorrules / .antigravity/rules / CLAUDE.md
// 1. Always use functional components
// 2. Map tokens from @design/system
// 3. Must include *.stories.tsx
// 4. Verify with Puppeteer MCP

βš›οΈ Step 1: The Atomic Structure

Ambiguity is the enemy of automation. We define a folder structure so rigid that the AI creates it deterministically every time.

Every component is a universe. It contains its logic, its visual states (stories), its interface, and its tests.

src/components/
└── ComponentName/
    β”œβ”€β”€ ComponentName.tsx
    β”œβ”€β”€ ComponentName.stories.tsx
    β”œβ”€β”€ interfaces.ts
    └── __tests__/
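
To make the contract concrete, here is a minimal sketch of the first file for a hypothetical SubmitButton (the component name and props are illustrative, not prescribed by any standard):

// interfaces.ts (sketch)
// The public contract is written before any JSX exists.
export interface SubmitButtonProps {
  /** Visible button text */
  label: string;
  /** Disables the button and shows the loading state */
  isLoading?: boolean;
  /** Renders the error/destructive variant */
  hasError?: boolean;
  onClick: () => void;
}

ComponentName.tsx imports this interface, the stories file exercises it, and __tests__/ asserts against it, so every file in the folder shares one source of truth.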

🎨 Step 2: Context Extraction via Figma MCP

Historically, developers (and AI) "eyeballed" designs. "That looks like 16px padding."

With the Model Context Protocol (MCP), we can give agents direct access to the source of truth. The AI doesn't see a picture of the design; it reads the design data.

The Workflow

Technical Requirement
npx -y @modelcontextprotocol/server-figma

This server connects your agent to the Figma REST API (requires a PAT). It allows the agent to inspect the node hierarchy and extract exact design tokens.

  1. Connect: Agent calls the get_file tool to fetch the entire Figma file structure.
  2. Inspect: It then calls get_file_nodes with a specific node_id to get granular component data.
  3. Extract: It receives JSON data: { paddingTop: 16, fills: [{ color: { r: 0.2, g: 0.4, b: 1 } }] }.
  4. Map: It maps these raw values to your codebase's tokens (e.g., $spacing-md).
// Agent MCP Tool Call (Conceptual)
// Step 1: Get the file structure
await mcp.call("get_file", { file_key: "abc123XYZ" });

// Step 2: Drill into a specific component node
const nodeData = await mcp.call("get_file_nodes", {
  file_key: "abc123XYZ",
  ids: ["42:1337"] // The Figma node ID
});

// Returns: { nodes: { "42:1337": { document: { ... }, styles: { ... } } } }
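
Step 4 is where those raw values become codebase tokens. Below is a minimal sketch of that mapping; the spacing scale and color list are assumptions about your design system, not part of the MCP response.

// tokenMap.ts (sketch)
const SPACING_TOKENS: Record<number, string> = {
  4: "$spacing-xs",
  8: "$spacing-sm",
  16: "$spacing-md",
  24: "$spacing-lg",
};

const COLOR_TOKENS: Record<string, string> = {
  "#3366FF": "$color-primary",
  "#FF0000": "$color-danger",
};

// Figma returns color channels as 0-1 floats; normalize to #RRGGBB first.
function toHex(c: { r: number; g: number; b: number }): string {
  const channel = (v: number) => Math.round(v * 255).toString(16).padStart(2, "0");
  return `#${channel(c.r)}${channel(c.g)}${channel(c.b)}`.toUpperCase();
}

export function mapPadding(px: number): string {
  return SPACING_TOKENS[px] ?? `${px}px /* no token: flag for review */`;
}

export function mapFill(color: { r: number; g: number; b: number }): string {
  const hex = toHex(color);
  return COLOR_TOKENS[hex] ?? `${hex} /* no token: flag for review */`;
}

With a lookup like this, the paddingTop: 16 and blue fill from the example above resolve to $spacing-md and $color-primary, and anything without a token gets flagged instead of silently hardcoded.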

How it works under the hood

The Figma MCP Server acts as a bridge. It authenticates with the Figma API and exposes tools to the AI agent.

Instead of "viewing" pixels, the agent inspects the Scene Graph. It reads the Frame properties, iterates through Children, and extracts specific STYLE references. This means it can distinguish between a hardcoded #F00 and a semantic Danger/Red token.
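
A sketch of what that inspection looks like when walking the returned node tree (the field names mirror the Figma REST API response but are simplified here):

// sceneGraphAudit.ts (sketch)
type FigmaNode = {
  id: string;
  name: string;
  fills?: { type: string; color?: { r: number; g: number; b: number } }[];
  styles?: { fill?: string };
  children?: FigmaNode[];
};

// stylesIndex is the styles metadata returned alongside the document,
// e.g. { "1:23": { name: "Danger/Red" } }.
function auditFills(node: FigmaNode, stylesIndex: Record<string, { name: string }>): void {
  if (node.fills?.length) {
    const styleId = node.styles?.fill;
    if (styleId && stylesIndex[styleId]) {
      console.log(`${node.name}: uses semantic style "${stylesIndex[styleId].name}"`);
    } else {
      console.log(`${node.name}: hardcoded fill, map or flag for review`);
    }
  }
  node.children?.forEach((child) => auditFills(child, stylesIndex));
}

When a fill resolves to a shared style, the agent maps it to the matching semantic token; when it does not, the node can be flagged as a design inconsistency instead of copying the raw hex into code.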

πŸ”Œ How Agents Connect to MCP Servers

AI agents (like Cursor, Claude Code, or Antigravity) don't natively "know" Figma or Puppeteer. Instead, they communicate with MCP Servers that act as bridges to external tools.

β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚     AI Agent        β”‚  (Cursor, Claude Code, etc.)
β”‚  Your IDE / CLI     β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
          β”‚ MCP Protocol (JSON-RPC)
          β–Ό
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚              MCP Server Layer                   β”‚
β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”‚
β”‚  server-figma      β”‚    server-puppeteer        β”‚
β”‚  (Figma REST API)  β”‚    (Headless Chrome)       β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
          β”‚                       β”‚
          β–Ό                       β–Ό
    Figma Cloud            Local Browser
  • You configure MCP servers in your agent's settings (e.g., mcp.json; see the sketch below this list).
  • Agent spawns/connects to these server processes on startup.
  • Agent calls tools like get_file or puppeteer_screenshotβ€”the server does the actual work.
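
For instance, a minimal mcp.json registering the two servers used in this guide might look like the sketch below; the exact file location, schema, and environment variable names vary by agent and server version, so treat it as illustrative.

mcp.json (illustrative)
{
  "mcpServers": {
    "figma": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-figma"],
      "env": { "FIGMA_API_KEY": "<your-personal-access-token>" }
    },
    "puppeteer": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-puppeteer"]
    }
  }
}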

🧱 Step 3: Implementation via Design System

The AI generates the component using only valid components from your Design System. No <div> soup. If the design needs a layout, it uses <Box> or <Stack>. If it needs text, it uses <Typography>.
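
Continuing the hypothetical SubmitButton from Step 1, the generated markup might look like this sketch; the prop names and the Spinner primitive are assumptions about your design system, not a specific library's API.

// SubmitButton.tsx (sketch): design-system primitives only, no raw <div>
import { Box, Stack, Typography, Spinner } from "@design/system";
import type { SubmitButtonProps } from "./interfaces";

export function SubmitButton({ label, isLoading = false, hasError = false, onClick }: SubmitButtonProps) {
  return (
    <Box
      as="button"
      padding="$spacing-md"
      tone={hasError ? "danger" : "primary"}
      disabled={isLoading}
      onClick={onClick}
    >
      <Stack direction="row" gap="$spacing-sm" align="center">
        {isLoading && <Spinner size="sm" />}
        <Typography variant="label">{label}</Typography>
      </Stack>
    </Box>
  );
}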

πŸ§ͺ Step 4: The Laboratory (Storybook)

How do you verify a component in isolation? Storybook.

The AI is mandated to write a .stories.tsx file. This isn't just for documentationβ€”it's for verification. The AI must define the 'Default' state, the 'Error' state, and the 'Loading' state.
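
A sketch of the mandated stories file, assuming Storybook's CSF 3 format and the SubmitButton example from earlier:

// SubmitButton.stories.tsx (sketch)
import type { Meta, StoryObj } from "@storybook/react";
import { SubmitButton } from "./SubmitButton";

const meta: Meta<typeof SubmitButton> = {
  title: "Atoms/SubmitButton",
  component: SubmitButton,
};
export default meta;

type Story = StoryObj<typeof SubmitButton>;

export const Default: Story = {
  args: { label: "Submit", onClick: () => {} },
};

export const Loading: Story = {
  args: { label: "Submit", isLoading: true, onClick: () => {} },
};

export const Error: Story = {
  args: { label: "Try again", hasError: true, onClick: () => {} },
};

Each exported story becomes an isolated URL in Storybook, which is exactly what the agent opens and screenshots in Step 5.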

πŸ“Έ Step 5: Visual Verification via Puppeteer

Here is the magic loop. Once the code is written and the story is running, the agent becomes the tester.

The Self-Correction Loop πŸ”„

Technical Requirement
npx -y @modelcontextprotocol/server-puppeteer

The "eyes" of your agent. This server launches a headless Chrome instance that the agent can control to capture screenshots for verification.

  1. Navigate: Agent uses puppeteer_navigate to open the Storybook URL.
  2. Capture: Agent calls puppeteer_screenshot to take a picture of what it built.
  3. Compare: Agent compares the screenshot (Actual) with the Figma screenshot (Expected).
  4. Fix: "The padding looks 4px too small." The agent rewrites the code and repeats the loop.
// Agent MCP Tool Call (Conceptual)
// Step 1: Navigate to the Storybook story
await mcp.call("puppeteer_navigate", {
  url: "http://localhost:6006/?path=/story/button--primary"
});

// Step 2: Capture the rendered component
const screenshot = await mcp.call("puppeteer_screenshot", {
  name: "button-primary-actual",
  width: 800,
  height: 600
});

// Step 3: Agent analyzes the image vs. expected design
// If mismatch detected, agent edits code and repeats.

Conclusion

By combining Rule Files, Figma MCP, Storybook, and Puppeteer, we create a closed-loop system where AI agents can safely and autonomously build UI.

The role of the developer shifts from "pixel pusher" to "system architect." You define the constraints; the agents fill the space.


Written by Abhishek Singh
Exploring the future of AI-assisted engineering.