
Non-Technical to Production in 10 Days with Cursor AI

A non-developer modified 80 files in production in 10 days with Cursor AI. Exact timeline, AI config setup, and what it means for engineering teams in 2026.

10 min read

TL;DR

What            Details
Who             Augustin, product profile, no coding background, used Lovable for prototyping
Goal            Contribute to production codebase at Méthode Aristote
Tool            Cursor (chosen over Claude Code for the visual IDE experience)
Day 10          First PR in production: 80 files modified, 15 bug fixes
Day 21          Multiple PRs per day, at the pace of experienced senior developers
The 3 factors   Adapted AI config (-59% tokens), structured architecture (3-tier, replicable patterns), measured progression
What it means   The bottleneck to contributing isn’t writing code anymore. It’s understanding structure.

Non-technical contributor (2026): someone with product domain knowledge and AI tool proficiency but no prior coding experience, capable of shipping to production inside an AI-assisted, pattern-consistent codebase.

December 2025. Augustin came to me with a direct request: “Train me. I want to ship to production.”

He wasn’t a developer. He’d used Lovable to prototype interfaces, but he was blocked by the actual codebase: no terminal experience, no TypeScript, no understanding of how tRPC routers connect to service layers. I’d been building the platform as AI Founding Engineer for 5 months at that point, with a configuration I’d spent evenings tuning.

My boss asked what I thought. He told me the engineering team at Cartesia (the group we belong to: experienced developers, strict TDD, people who’d been doing this for years) would have said no immediately. “That’s not how this works.”

I said: “Deal. Let’s go.”


The setup

First friction: Windows vs Mac

My entire workflow ran on macOS. Unix commands, shell hooks, zshrc configurations, slash commands: none of it would work on his machine. Before thinking about AI, we had an OS compatibility problem.

We benchmarked Claude Code vs Cursor for his situation. The conclusion was clear: Cursor was better adapted. Less terminal dependency, a more visual IDE experience, a UI that doesn’t require knowing what a shell prompt is.

The AI config problem

My Claude Code configuration had grown over months of iteration. By the time Augustin joined, I had a 703-line generated CLAUDE.md covering my entire setup: all 6 feature modules active, macOS environment, every MCP server configured, direct_brutal communication tone, RTK enforcement rules, the full stack.

Giving him that configuration would have been noise. The MCP server documentation for tools he wasn’t running. The macOS shell setup for a Windows machine. The advanced debugging methodology for someone learning what a file path is.

The solution was a profile system. His YAML profile produced 289 lines. Mine produced 703. The difference: 59% token reduction, achieved by excluding everything irrelevant to his context.

# profiles/augustin.yaml
id: augustin
display_name: "Augustin"
os: windows
tool: cursor
communication:
  tone: ultra_concise
modules: []   # No advanced modules — not relevant yet

Layer                 Florian (703 lines)                          Augustin (289 lines)
Foundation (shared)   Business domain, session types, user roles   Identical (always included)
OS & tooling          macOS, zsh, RTK hooks                        Windows, Cursor UI
Communication tone    Direct/brutal advisor                        Ultra-concise
Feature modules       6 active                                     None
MCP servers           Full stack configured                        Excluded
Advanced debug        Full methodology                             Excluded
Net result            703 lines                                    289 lines (-59%)
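A minimal sketch of how a layered profile system like this could assemble a per-developer config file. Everything here (the `Profile` and `Layer` shapes, `buildConfig`, the layer names) is a hypothetical illustration of the idea, not the actual implementation:

```typescript
// Hypothetical sketch: each config layer declares when it applies,
// and a profile selects layers rather than trimming prose by hand.

interface Profile {
  id: string;
  os: "macos" | "windows";
  tool: "claude-code" | "cursor";
  tone: string;
  modules: string[]; // feature modules to activate
}

interface Layer {
  name: string;
  content: string[];                // lines of generated config markdown
  include: (p: Profile) => boolean; // layer-level filter
}

// Foundation is unconditional: business domain, session types, user roles.
const layers: Layer[] = [
  { name: "foundation", content: ["# Business domain", "..."], include: () => true },
  { name: "macos-tooling", content: ["# zsh hooks", "..."], include: p => p.os === "macos" },
  { name: "windows-cursor", content: ["# Cursor UI notes", "..."], include: p => p.os === "windows" },
  { name: "mcp-servers", content: ["# MCP config", "..."], include: p => p.modules.length > 0 },
];

function buildConfig(profile: Profile): string {
  return layers
    .filter(l => l.include(profile))
    .flatMap(l => l.content)
    .join("\n");
}
```

Excluding whole layers, rather than editing prose per person, is what produces the token reduction: every generated line either applies to the developer's environment or never appears.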

But the token reduction isn’t the interesting part. What mattered more was what was included: the business domain. The AI Augustin worked with already knew that a “session” in our codebase meant either SUPERVISED (1 hour with a tutor) or AUTONOMOUS (30 minutes solo), with a lifecycle of SCHEDULED → STARTED → COMPLETED. It knew the 15-minute tolerance rule before cancellation. It knew the 7 user roles and what permissions each carried.

That business knowledge came from the foundation layer of the config, shared across all profiles. When Augustin asked the AI to help with a bug in the session scheduling component, the AI wasn’t guessing what “session” meant. It already knew.
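As a hedged illustration, the shared foundation layer could look something like this, in the same YAML style as the profile above. The structure and key names are my reconstruction, not the actual file; only the business facts come from the article:

```yaml
# foundation.yaml — shared across ALL profiles (hypothetical structure)
domain:
  session_types:
    SUPERVISED: "1 hour with a tutor"
    AUTONOMOUS: "30 minutes solo"
  session_lifecycle: [SCHEDULED, STARTED, COMPLETED]
  cancellation_tolerance_minutes: 15
  user_roles: 7   # each role carries its own permission set
```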

The AI he interacted with was calibrated for where he was, not where I was, but it carried the same understanding of what the product was actually doing.

The first week: iteration, not training

We didn’t do a week of sessions where I explained the codebase. We iterated daily: hooks that didn’t work on Windows, skills that assumed Unix paths, Cursor compatibility issues that needed workarounds. A lot of back-and-forth. It took time, but it built something: a configuration that actually worked for his setup.


The timeline

Days 1–3    Days 4–6      Days 7–10         Day 10               Day 21
    │            │              │                │                    │
    ▼            ▼              ▼                ▼                    ▼
[Context]   [Safe edits]  [Bug fixes]     [First PR prod]    [Multiple PRs/day]
 0 files     Typos, KB     Isolated        80 files             Senior pace
             Notion specs  tickets         15 fixes
                                           2.5d work, 1d review

What follows is the exact progression. Not smoothed, not optimized for the story.

Days 1–3: Zero code, full context

Augustin queried the codebase using Cursor’s AI features to understand what existed. He improved specs in Notion using AI assistance, not writing code but using AI to write clearer requirements. Zero files modified, and that was intentional.

The 3-tier architecture of the codebase matters here. Every service follows the same pattern: Router handles validation only, Service handles business logic and permissions, Repository handles CRUD only. When the pattern is consistent, an AI can explain it reliably. When it isn’t, the AI gives you the average of contradictory patterns.
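A compressed sketch of that 3-tier shape, with hypothetical names. The real services use tRPC, Zod, and Prisma; this strips those out to show only the layering:

```typescript
// Hypothetical sketch of the 3-tier pattern: each layer does exactly one job.

// Repository: CRUD only — no business rules. (Real version: typed Prisma queries.)
class SessionRepository {
  private rows = new Map<string, { id: string; status: string }>();
  find(id: string) { return this.rows.get(id) ?? null; }
  save(row: { id: string; status: string }) { this.rows.set(row.id, row); }
}

// Service: business logic and permission checks only.
class SessionService {
  constructor(private repo: SessionRepository) {}
  start(id: string, role: string) {
    if (role !== "tutor") throw new Error("FORBIDDEN");  // permissions live here
    const session = this.repo.find(id);
    if (!session || session.status !== "SCHEDULED") {
      throw new Error("INVALID_STATE");                  // lifecycle rule lives here
    }
    this.repo.save({ ...session, status: "STARTED" });
  }
}

// Router: input validation only, then delegate. (Real version: tRPC + Zod.)
function startSessionRoute(input: unknown, service: SessionService, role: string) {
  if (typeof input !== "object" || input === null || typeof (input as any).id !== "string") {
    throw new Error("BAD_INPUT");
  }
  service.start((input as any).id, role);
}
```

Because each kind of logic has exactly one home, the answer to "where does this change go" is unambiguous, for an AI as much as for a new contributor.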

Days 4–6: Safe changes

The changes were small on purpose: typos corrected, Markdown files in the knowledge base updated with business terminology, documentation patches. Nothing that touched the actual application logic.

That last category wasn’t trivial. Augustin knew the business domain better than I did. He’d been working with the founders for months. When he updated the knowledge base, he was feeding the system the business rules the AI would use to help every developer on the team. The safe changes were also useful ones.

The goal wasn’t to build confidence through success. It was to build familiarity with the process: making a change, seeing it in Cursor, understanding what a commit is, what a PR is, what review feedback looks like.

Days 7–10: Isolated bug fixes with Cursor AI

I gave him specific tickets. “The button doesn’t appear at the right moment.” “This click redirects to the wrong URL.” Bugs I’d identified as isolated, well-contained, safe to hand off.

He worked with Cursor’s AI to understand the relevant components, make the change, test it locally. I pointed to the exact file when needed.

Day 10: First PR in production

80 files modified. 15 bug fixes in one PR.

Two and a half days of work on his side. One full day of review on mine.

“Functionally, everything was broken. But it worked.”

That quote is accurate. The PR had bugs in the way it handled edge cases. Some of it wasn’t just CSS and redirects. There was real React hook logic in there, the kind of thing where wrong assumptions about component lifecycle produce subtle and intermittent failures. The review caught those. I used Claude myself to navigate the 80-file diff, asking it to flag the patterns that looked off given what it knew about the codebase architecture. The risk of reviewing AI-generated code you don’t fully understand runs in both directions. Comprehension debt isn’t only a problem when you’re the one shipping.

The final merged version was clean. But what stuck with me was the fact that 80 files could be modified by someone who had never opened a terminal, with a day of review rather than a week, and the result was shippable.

I was more proud of that merge than any of the 49 releases we’d shipped in the previous 7 months. Not because the code was perfect, but because it proved the model worked. Someone else had stepped inside the system I’d built and shipped something real.

Day 21: Multiple PRs per day

Three weeks after discovering Cursor, Augustin was shipping at the pace of senior developers I’d worked with in my best teams. Not every type of work: he doesn’t touch the backend, doesn’t design data models, doesn’t handle the security layer. But within the scope of frontend work and pattern replication, the throughput was real and measurable.


The 3 factors that made it work

1. Adapted AI context (not just smaller config)

The profile system meant his AI understood the project at the right level of detail. Business rules, architecture conventions, the specific patterns used in the codebase, without the advanced tooling noise that was irrelevant to him.

Give it the right context and the output is useful. Too much context confuses it; too little and it hallucinates conventions. 289 lines wasn’t small for its own sake; it was exactly what his workflow needed, plus the full foundation of business and architecture knowledge shared by every profile.

The distinction matters: his AI wasn’t just aware of the code structure. It understood what the product was supposed to do. That’s what let it navigate a component change without treating the codebase like an unfamiliar library.

2. Replicable architecture

The 3-tier architecture isn’t just a quality decision. It’s a transmission decision.

When I create a new service, the entire chain is replicable: tRPC router with Zod validation, service layer with business logic and permission checks, repository with typed Prisma queries. The AI knows the pattern because it’s consistent across 45+ services. It can apply it because there’s no ambiguity about where each piece of logic lives.

Augustin doesn’t write novel architecture. He replicates established patterns, and that distinction is the key. If the codebase were inconsistent (different patterns in different places, business logic scattered across layers), the AI’s output would reflect that inconsistency. Architecture discipline creates the conditions for non-technical contribution. You can’t retrofit this. It either exists when you need it or it doesn’t.

3. Measured progression

Days 1–3 without touching code wasn’t wasted time. It was investment in context before output.

The progression was deliberate: understanding before modifying, safe modifications before consequential ones, isolated bugs before multi-component features. Each step built the foundation for the next. Skipping steps would have produced faster early commits and slower overall progress.

The temptation when someone new joins is to give them something real to do immediately, to prove the model works. The more reliable path is to let the understanding accumulate before the output does. That’s counterintuitive when velocity is the goal, but Augustin’s day 21 pace validated the patience of day 3.


Current state

The division of work is clear:

Florian: Backend, services, data models. He lays the foundations and makes the architecture calls.

Augustin: Frontend, pattern replication. No technical innovation on his end, but he replicates established patterns with real consistency.

When I say “Augustin”, I mean him and his AI assistant. That’s the unit. Not a developer in the traditional sense, not a tool user, but a human-AI pair that ships. The distinction matters because the productivity figures only make sense in that framing. Augustin without Cursor wouldn’t be shipping 80-file PRs in 10 days. Cursor without Augustin’s product judgment and domain knowledge would be generating plausible-sounding code for a business it doesn’t understand.

Contributor   Commits   Share   Role
Florian       902       82%     Backend, architecture, review
Nico          60        5%      Product design
Augustin      53        5%      Frontend, pattern replication

Of Florian’s 902 commits, the majority were co-authored with Claude, making it the second most active contributor when measured by co-authorship. The human-AI pairs are visible in the workflow, not as a metaphor but as a daily operating mode.

Two more people are onboarding now using the same model. Ben (the founder) started with simpler tickets. Matisse, a product profile, began contributing to discovery work. The system Augustin proved out is spreading, one profile YAML at a time.


What this says about recruiting in 2026

The traditional hiring filter is implementation fluency: can you write the code from scratch? That filter made sense when the bottleneck was writing. In 2026, the bottleneck is comprehension: understanding a system well enough to extend it with AI assistance. Those are different filters, and they select for different people.

Augustin can’t write a tRPC router from first principles. He can extend one because the pattern is established, the AI understands it, and the review process catches the gaps. That’s a different capability than traditional development, and it’s genuinely useful for a certain class of work.

This doesn’t mean developers are unnecessary. The 3-tier architecture that made Augustin’s contribution possible required someone who understood why those layers exist. Catching the edge cases in his Day 10 PR review also required someone who could reason about the system end-to-end. The MCP server configurations, the security guardrails, the database optimization: none of that is Augustin’s domain, and none of it will be anytime soon.

This is what structured vibe coding looks like in practice: not prompt-and-pray, but prompt-within-constraints, inside an architecture built to be replicable.

What changed is the entry point. You used to need fluency in the implementation layer to contribute to a codebase. Now you need enough structural understanding to extend existing patterns with AI in the loop, and a review process that catches what the AI misses. Those are different skills. They existed before, but they weren’t useful in this context until now.

The Cartesia team would have said no, and they would have been right by their own model. Their model just doesn’t match what we observed over the 10 days that followed.


The profile system described here (modular YAML profiles producing different CLAUDE.md outputs per developer) is part of the AI configuration architecture built at Méthode Aristote. Documented in the Claude Code Ultimate Guide.