
AI Coding Spec Template That Actually Ships

My AI kept building the wrong thing. Then I wrote specs this way. Template inside that works for any AI coding tool.


Here's a fun experiment: give the same prompt to an AI twice. You'll get two completely different outputs. Sometimes wildly different. And if you're working on anything more complex than a button component, that randomness will absolutely wreck your project.

I learned this after watching my AI coding agent build a user settings page... three times. Each version had different field names, different layouts, different everything. My codebase looked like it was built by three different developers who never talked to each other. Because, technically, it was.

The fix? An AI coding spec template that tells your AI exactly what to build. Not in vague terms. In explicit, detailed, "there's only one way to interpret this" terms.

Key Takeaways:

  • Traditional PRDs don't work for AI—you need specs written for machine consumption
  • Every AI spec needs 6 critical sections (most tutorials only cover 2-3)
  • A good spec reduces "oops, wrong thing" iterations by 70%+


Why Your AI Keeps Building the Wrong Thing

Let me be direct: the problem isn't your AI tool. It's your input.


Most people write prompts like this:

"Build a dashboard with user stats and some charts"

Then they're shocked—SHOCKED—when the AI generates something that looks nothing like what they imagined. The charts are the wrong type. The stats show the wrong data. The layout is all wrong.

But here's the thing: that prompt could describe 10,000 different dashboards. Your AI isn't a mind reader. It made reasonable guesses based on vague instructions.

GitHub's engineering team recently declared that for AI coding agents, "the specification becomes the source of truth." This isn't just philosophy—it's practical reality. The spec IS the product definition now.

The difference between a vague prompt and a proper spec is the difference between saying "make me food" and handing someone a recipe. One produces chaos. The other produces dinner.

This ties directly into what we cover in our AI coding workflow guide—specs are the foundation everything else builds on.

The 6 Sections Every AI Spec Needs

After breaking dozens of projects with incomplete specs, I've landed on six sections that make the difference between "nailed it" and "what even is this."

| Section | Purpose | Example |
|---|---|---|
| Context | What exists, what we're building on | "Existing React 18 app with Tailwind, Zustand for state" |
| Objective | The single goal (one sentence) | "Add a user activity log to the settings page" |
| Requirements | Explicit features, behaviors, constraints | "Show last 50 activities, include timestamp and action type" |
| UI Specification | Visual details, layout, responsive behavior | "Card-based layout, 3 columns desktop, 1 column mobile" |
| Data Contract | Shape of data, types, sources | "ActivityItem: {id: string, action: string, timestamp: Date}" |
| Success Criteria | How we know it's done | "User can view, filter, and paginate activity list" |

This is really just context engineering applied to project planning. You're giving the AI everything it needs to succeed on the first try.
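That machine-consumption idea can be made concrete: a spec is really structured data. The sketch below models the six sections as a plain TypeScript interface; the `FeatureSpec` name and field layout are illustrative, not part of any tool's API:

```typescript
// Illustrative sketch: the six spec sections modeled as data, so nothing is implicit.
interface FeatureSpec {
  context: string[];         // tech stack, existing components, file paths
  objective: string;         // exactly one sentence
  requirements: string[];    // "Must ..." statements
  uiSpecification: string[]; // layout, spacing, responsive rules
  dataContract: string;      // the TypeScript types the AI must use
  successCriteria: string[]; // testable "done" conditions
}

const activityLogSpec: FeatureSpec = {
  context: ["React 18", "Tailwind CSS", "Zustand for state"],
  objective: "Add a user activity log to the settings page",
  requirements: ["Must display the 50 most recent activities"],
  uiSpecification: ["Card-based layout", "3 columns desktop, 1 column mobile"],
  dataContract: "interface ActivityItem { id: string; action: string; timestamp: Date }",
  successCriteria: ["User can view, filter, and paginate the activity list"],
};

// Quick completeness check before prompting: every section must be filled in.
const isComplete = Object.values(activityLogSpec).every(
  (value) => (Array.isArray(value) ? value.length > 0 : value.trim().length > 0)
);
console.log(isComplete); // true
```

Treating the spec as data like this is optional, but it makes the point: if you can't fill in all six fields, the spec isn't ready to hand to an AI.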

Let's break each section down.

1. Context: What Already Exists

Your AI doesn't know about your codebase unless you tell it. This section sets the stage:

  • Tech stack (React, Vue, vanilla JS, etc.)
  • Styling approach (Tailwind, CSS modules, styled-components)
  • State management (Zustand, Redux, Context)
  • Existing components it should use or match
  • Relevant file paths

Bad: "We have a React app."

Good: "React 18 app using Tailwind CSS, shadcn/ui components, and Zustand for global state. User authentication already exists via /contexts/AuthContext."

2. Objective: One Sentence, One Goal

Here's where most specs fall apart. People write five paragraphs explaining the vision. Your AI doesn't need a vision. It needs a mission.

One sentence. What are we building?

Bad: "We want to improve the user experience by giving users more insight into their account activity, which will help with engagement and potentially reduce support tickets."

Good: "Build an activity log component that displays the user's recent actions on their settings page."

Save the "why" for your team meetings. Your AI needs the "what."

3. Requirements: The Non-Negotiables

This is where you get specific. Every feature, every constraint, every behavior.

Write requirements as a checklist. Use the word "must" liberally.

```
REQUIREMENTS:
- Must display the 50 most recent activities
- Must show: action type, timestamp, and related entity (if any)
- Must support infinite scroll pagination
- Must be filterable by action type (all, edits, deletions, settings changes)
- Must display "No activity yet" state for new users
- Must be accessible (keyboard navigable, screen reader labels)
```

If something is optional, mark it explicitly as optional. Otherwise, assume everything is a "must."
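A requirement written this way is precise enough to pin down code before any UI exists. For illustration, here is the "filterable by action type" requirement as a pure function; the type and function names are assumptions for this sketch, not from the article's codebase:

```typescript
type ActionType = 'edit' | 'delete' | 'settings_change';
type ActivityFilter = ActionType | 'all';

interface Activity {
  action: ActionType;
  timestamp: Date;
}

// "Must be filterable by action type (all, edits, deletions, settings changes)"
function filterActivities(items: Activity[], filter: ActivityFilter): Activity[] {
  if (filter === 'all') return items;
  return items.filter((item) => item.action === filter);
}

const sample: Activity[] = [
  { action: 'edit', timestamp: new Date() },
  { action: 'delete', timestamp: new Date() },
];

console.log(filterActivities(sample, 'all').length);    // 2
console.log(filterActivities(sample, 'delete').length); // 1
```

If a requirement can't be expressed this concretely, that's usually a sign it's still too vague for the AI too.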

4. UI Specification: What It Looks Like

This is frontend-specific and critical. Don't make your AI guess the layout.

Include:

  • Layout structure (grid, flexbox, exact columns)
  • Spacing (use your design system tokens or explicit values)
  • Typography hierarchy (what's H2, what's body text)
  • Colors (especially if they indicate state)
  • Responsive behavior (breakpoints, what changes)
  • Animation (if any)

```
UI SPECIFICATION:
Layout: Vertical stack with 16px gap between items
Each activity item:
- Card with white background, subtle border (border-gray-200)
- Padding: 16px
- Left: icon for action type (24x24px)
- Center: action description (text-sm) + timestamp (text-xs, text-gray-500)
- Right: related entity link (if applicable)
Responsive:
- Desktop: max-width 600px, centered
- Mobile: full width with 16px horizontal margin
Empty state: centered text with illustration above
```

5. Data Contract: Shape of the Data

AI tools love to invent data structures. Unless you tell them exactly what the data looks like, they'll hallucinate fields that don't exist.

Define your types:

DATA CONTRACT:

```typescript
interface ActivityItem {
  id: string;
  action: 'edit' | 'delete' | 'settings_change' | 'login';
  description: string;
  timestamp: Date;
  entityId?: string;
  entityType?: 'document' | 'project' | 'team';
  entityName?: string;
}

// Data source: GET /api/user/activity?limit=50&offset=0
// Returns: { items: ActivityItem[], hasMore: boolean }
```

This prevents the nightmare scenario where your AI generates beautiful UI that expects `activity.createdAt` but your API returns `activity.timestamp`.
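You can also enforce the contract at runtime. The sketch below is one way to guard incoming data against the `ActivityItem` shape; note that over JSON the `Date` arrives as a string, so this checks the raw wire shape (the guard itself and the `RawActivityItem` name are assumptions for illustration):

```typescript
// Sketch of a runtime guard for the ActivityItem contract. Over the wire the
// timestamp is an ISO string, so we validate the raw JSON shape and convert
// to Date afterward. Field names mirror the contract above.
interface RawActivityItem {
  id: string;
  action: 'edit' | 'delete' | 'settings_change' | 'login';
  description: string;
  timestamp: string; // ISO string in JSON; convert to Date after validation
}

const ACTIONS = ['edit', 'delete', 'settings_change', 'login'];

function isRawActivityItem(value: unknown): value is RawActivityItem {
  if (typeof value !== 'object' || value === null) return false;
  const v = value as Record<string, unknown>;
  return (
    typeof v.id === 'string' &&
    typeof v.action === 'string' && ACTIONS.includes(v.action) &&
    typeof v.description === 'string' &&
    typeof v.timestamp === 'string'
  );
}

// A response using the wrong field name fails fast instead of rendering blanks:
const good = { id: '1', action: 'edit', description: 'x', timestamp: '2025-01-01T00:00:00Z' };
const bad = { id: '1', action: 'edit', description: 'x', createdAt: '2025-01-01T00:00:00Z' };
console.log(isRawActivityItem(good)); // true
console.log(isRawActivityItem(bad));  // false
```

A guard like this turns the `createdAt` vs `timestamp` mismatch into an immediate, debuggable error instead of a silently broken UI.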

6. Success Criteria: Definition of Done

How do you know the feature is complete? This isn't just for the AI—it's for you.

```
SUCCESS CRITERIA:
□ Activity list renders with mock data
□ Infinite scroll loads more items when scrolling near bottom
□ Filter dropdown filters the list by action type
□ Empty state displays when no activities exist
□ Component is keyboard accessible
□ Component matches existing design system styles
□ No TypeScript errors
□ Renders correctly on mobile (375px) and desktop (1280px)
```

This becomes your testing checklist after the AI delivers.

Frontend-Specific Spec Template

Here's the template I use for every frontend feature. Copy it, fill in the blanks, get better results:


````markdown
# Feature Spec: [Feature Name]

## Context
- **Tech Stack**: [React/Vue/etc], [Tailwind/CSS modules/etc], [State management]
- **Existing Components**: [List relevant existing components to use/match]
- **File Location**: [Where this component should live]
- **Related Files**: [Files this will interact with]

## Objective
[One sentence describing what we're building]

## Requirements
- Must [requirement 1]
- Must [requirement 2]
- Must [requirement 3]
- Optional: [nice-to-have]

## UI Specification
**Layout**: [Describe structure]
**Spacing**: [Use tokens or exact values]
**Colors**: [Reference design system or specify]
**Typography**: [What text styles]
**Responsive**:
- Mobile (< 768px): [behavior]
- Desktop (≥ 768px): [behavior]
**States**:
- Loading: [description]
- Empty: [description]
- Error: [description]

## Data Contract
```typescript
// Types here
```
Data Source: [API endpoint or data source]
Shape: [Expected response structure]

## Success Criteria
- [Criteria 1]
- [Criteria 2]
- [Criteria 3]

## Out of Scope
- [Explicitly list what this feature does NOT include]
````

The "Out of Scope" section is underrated. It prevents your AI from over-engineering or adding features you didn't ask for.

Real Example: Spec for a Dashboard Feature

Let me show you this template in action. Say I'm building a stats overview for a SaaS dashboard:

````markdown
# Feature Spec: Dashboard Stats Overview

## Context
- **Tech Stack**: React 18, Tailwind CSS, shadcn/ui, React Query for data fetching
- **Existing Components**: Card, Badge, Skeleton (from shadcn/ui)
- **File Location**: src/components/dashboard/StatsOverview.tsx
- **Related Files**: src/api/stats.ts, src/types/dashboard.ts

## Objective
Build a stats overview section showing 4 key metrics at the top of the dashboard.

## Requirements
- Must display 4 stat cards in a horizontal row
- Must show: Total Users, Active Projects, Revenue (MTD), Conversion Rate
- Must include trend indicator (up/down arrow + percentage vs last period)
- Must fetch data via React Query with 30-second refresh
- Must show skeleton loaders while loading
- Must handle API errors gracefully with retry option

## UI Specification
**Layout**: CSS Grid, 4 equal columns with 16px gap
**Each Card**:
- shadcn Card component
- Padding: 24px
- Top: Label (text-sm, text-muted-foreground)
- Middle: Value (text-2xl, font-bold)
- Bottom: Trend badge (green for positive, red for negative)
**Responsive**:
- Mobile (< 768px): 2x2 grid
- Tablet (768-1024px): 2x2 grid
- Desktop (≥ 1024px): 1x4 row
**States**:
- Loading: 4 Skeleton cards matching layout
- Error: Error message with "Retry" button replacing cards

## Data Contract
```typescript
interface DashboardStat {
  label: string;
  value: number | string;
  trend: {
    direction: 'up' | 'down' | 'flat';
    percentage: number;
  };
  format: 'number' | 'currency' | 'percentage';
}

interface StatsResponse {
  stats: DashboardStat[];
  lastUpdated: string;
}
```
Data Source: GET /api/dashboard/stats
Refresh: Every 30 seconds via React Query

## Success Criteria
- 4 stat cards render with correct data
- Trend indicators show correct direction and color
- Loading state shows skeletons
- Error state shows message + retry button
- Responsive layout works at all breakpoints
- Data refreshes every 30 seconds without UI flicker

## Out of Scope
- Detailed stat breakdowns (separate feature)
- Date range selector
- Exportable data
````

With this spec, your AI has zero ambiguity. It knows exactly what to build. This approach also helps you avoid [the 70% wall](/blog/tutorials/vibe-coding-70-percent-wall-finish-project)—that frustrating point where vibe-coded projects stall because the AI keeps producing inconsistent output.

Common Spec Mistakes That Derail AI

I've seen these wreck projects over and over:

1. Mixing Multiple Features

One spec = one feature. If you're describing a stats overview AND a notification system in the same spec, break them apart. Your AI will try to do everything at once and do nothing well.

2. Using Ambiguous Adjectives

"Clean design" means nothing. "Good spacing" means nothing. "Modern look" means nothing. Be explicit: "16px padding, 8px border-radius, Inter font, blue-600 for primary actions."

3. Forgetting Error States

Your happy path spec is 90% complete. The loading state, empty state, and error state handle the other 90% of real-world usage. (Yes, that's 180%. Error handling is that important.)

4. Assuming Data Shape

Never assume your AI knows what data looks like. Define every field, every type, every edge case (What if `entityName` is null? What if the array is empty?).

5. Skipping "Out of Scope"

Without explicit boundaries, your AI will add features. It wants to be helpful. Tell it exactly where to stop being helpful.

Spec Checklist Before You Prompt

Before you send your spec to any AI tool, run through this checklist:

```
PRE-FLIGHT CHECKLIST:
□ Context section names specific tech stack and relevant files
□ Objective is ONE sentence
□ Requirements use "must" language (no "might" or "could")
□ UI spec includes exact spacing and breakpoints
□ Data contract includes TypeScript types
□ All states covered (loading, empty, error, success)
□ Success criteria are testable (not "works well")
□ Out of scope explicitly lists what NOT to build
□ No ambiguous adjectives (clean, nice, good, modern)
□ If referencing existing components, include file paths
```

Pass this checklist, and your AI will build what you actually want. Fail it, and you're rolling the dice.

You Might Also Like

  • [Vibe Coding Best Practices Guide](/blog/guides/vibe-coding-best-practices-guide-2025) - Master the fundamentals of AI-assisted development
  • [Context Engineering for AI Coding](/blog/guides/context-engineering-ai-coding-guide-2025) - Give your AI the information it needs to succeed
  • [The 70% Wall](/blog/tutorials/vibe-coding-70-percent-wall-finish-project) - Why vibe projects stall and how to break through

Frequently Asked Questions

What's the difference between a PRD and an AI coding spec?

Traditional PRDs are written for humans—they include business context, user research, timelines. AI coding specs are written for machines—they include technical constraints, exact data shapes, and explicit UI details. A PRD might say "users need to see their activity"; an AI spec says "display 50 ActivityItem objects in a vertical Card layout with 16px spacing."

How long should an AI coding spec be?

For a single component or feature: 1-2 pages. If your spec is longer, you're probably trying to describe multiple features. Split them up. Your AI processes information better in focused chunks anyway.

Should I write specs for small changes too?

For a one-liner like "change the button color to blue-500"? No. For anything that involves layout decisions, data handling, or multiple states? Yes. The rule of thumb: if you'd need more than one sentence to describe the change to a human developer, write a spec.

What tools work best with detailed specs?

Any AI coding tool benefits from good specs—Claude Code, Cursor, 0xMinds, v0, Bolt. The format in this article works across all of them. The key is the *structure* of your spec, not the specific tool consuming it.

How do I iterate on AI output that doesn't match my spec?

Point to the specific spec section that was violated: "The UI Specification says 16px gap, but you used 8px. Please fix." AI tools respond much better to specific corrections that reference documented requirements than to vague feedback like "the spacing feels off."

---

*Written by the 0xMinds Team. We build AI tools for frontend developers. [Try 0xMinds free](https://0xminds.com)*