You're Using AI Coding Tools Wrong

TL;DR: Most developers use AI like autocomplete. I tracked 19 days: 7,583 lines/day with 93.5% test coverage. The win isn't speed—it's comprehensive testing, continuous docs, and proactive monitoring baked in.


Most developers use AI coding tools like fancy autocomplete.

Type a function signature, hit tab, get a completion. Maybe ask it to write a utility function. Generate some boilerplate.

That's leaving 90% of the value on the table.

I'm building an AI assistant for executives using Cursor with Claude Sonnet. Over 19 days, I tracked every commit, every line of code, every refactor.

The results surprised me—not because of speed, but because of what speed enabled.

The Numbers

Over 19 days of active development:

- 185,799 lines of code written
- 49,307 lines deleted
- 7,583 lines per day on average
- 93.5% test coverage
- 48,333 lines of documentation

For context: a senior engineer typically writes 50-100 net lines per day.

That's a 76-152x multiplier.

But here's what matters more than speed: the quality.

The Rework Rate Tells the Real Story

Of the 185,799 lines I wrote, I deleted 49,307 lines. That's a 26.5% rework rate.

Is that bad? No. That's exactly normal.

Industry research shows human engineers have a 20-40% rework rate. You write code, you learn something, you refactor. That's healthy software development.

The AI didn't just vomit code that I kept. It helped me iterate rapidly—write something, test it, realize it's wrong, rewrite it better.

That 26.5% rework rate means I was building, not just generating.
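
If you want the same numbers for your own repo, here's a minimal sketch that pulls them from git history. The date range and flags are illustrative, and it uses the same definition as above: rework rate = lines deleted / lines written.

```python
# Sketch: total added/deleted lines and rework rate from git history.
# Run inside the repo; --numstat emits "added<TAB>deleted<TAB>path" per file per commit.
import subprocess

def churn_totals(since: str = "19 days ago") -> tuple[int, int, float]:
    log = subprocess.run(
        ["git", "log", f"--since={since}", "--numstat", "--pretty=format:"],
        capture_output=True, text=True, check=True,
    ).stdout

    added = deleted = 0
    for line in log.splitlines():
        parts = line.split("\t")
        if len(parts) != 3 or not parts[0].isdigit() or not parts[1].isdigit():
            continue  # skips blank separator lines and binary files ("-" columns)
        added += int(parts[0])
        deleted += int(parts[1])

    rework = deleted / added if added else 0.0  # e.g. 49,307 / 185,799 = 26.5%
    return added, deleted, rework

if __name__ == "__main__":
    written, removed, rate = churn_totals()
    print(f"written={written:,}  deleted={removed:,}  rework rate={rate:.1%}")
```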

What Most People Miss

When you use AI as autocomplete, you're optimizing for:

- Typing speed
- Fewer keystrokes
- Faster boilerplate

When you use AI as a development partner, you're optimizing for:

- Comprehensive tests
- Continuous documentation
- Proactive monitoring
- Cheap architecture iteration

The difference is strategic vs tactical.

What I Actually Did

Here's how I used AI differently:

1. Tested Everything, Immediately

Traditional development: Write feature → Ship it → Write tests later (maybe)

My approach: Write feature → Generate comprehensive tests → Run tests → Fix bugs → Ship

Result: 93.5% test coverage. Industry average is 60-80%.

I found 10 production-level bugs during one testing session, before any users saw them. Fixing a bug caught in tests is commonly estimated to be about 10x cheaper than fixing one caught in production.
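
To make "comprehensive tests" concrete, here's the shape of suite I mean. The function under test is a made-up stand-in (a toy thread summarizer), but the pattern is the point: cover the happy path, the edge cases, and the failure modes, not just one assertion.

```python
# Made-up example of the kind of suite to generate; summarize_thread stands in for your feature.
import pytest

def summarize_thread(messages: list[str], max_words: int = 50) -> str:
    """Toy implementation so the tests below actually run."""
    if max_words <= 0:
        raise ValueError("max_words must be positive")
    words = " ".join(messages).split()
    return " ".join(words[:max_words])

def test_happy_path_returns_relevant_summary():
    summary = summarize_thread(["Q3 numbers look strong.", "Board meeting moved to Friday."])
    assert summary
    assert "Q3" in summary

def test_empty_thread_returns_empty_summary():
    assert summarize_thread([]) == ""

def test_summary_respects_word_limit():
    long_thread = ["word " * 500]
    assert len(summarize_thread(long_thread, max_words=50).split()) <= 50

def test_rejects_invalid_word_limit():
    with pytest.raises(ValueError):
        summarize_thread(["hello"], max_words=0)
```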

2. Documented Everything, Continuously

Traditional development: Write code → Hope someone documents it later (they don't)

My approach: Generate docs alongside code. Architecture decisions, implementation plans, API references, troubleshooting guides.

Result: 48,333 lines of documentation. Most projects have minimal or outdated docs.

Why? Because with AI, documentation isn't an afterthought—it's free. Generate it as you go.

3. Built Observability First, Not Last

Traditional development: Ship feature → It breaks → Scramble to add logging

My approach: Generate monitoring dashboard and alerting as core features, not "phase 2."

Result: Production-ready monitoring from day one. I knew when things broke before anyone had to tell me.
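
As a rough sketch of what that can look like in practice (the thresholds, window size, and alert channel here are placeholders, not a description of my actual stack): structured log lines plus a simple failure-rate check that fires before a user has to tell you.

```python
# Sketch: structured logs plus a naive failure-rate alert.
# WINDOW_SECONDS, ALERT_THRESHOLD, and the alert sink are placeholders.
import json
import logging
import time
from collections import deque

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("assistant")

_recent_errors: deque[float] = deque()
WINDOW_SECONDS = 300   # look at the last five minutes
ALERT_THRESHOLD = 5    # fire after five failures inside the window

def record_event(event: str, ok: bool, **fields) -> None:
    """Emit one structured log line and check the failure-rate alert."""
    log.info(json.dumps({"event": event, "ok": ok, "ts": time.time(), **fields}))
    if ok:
        return
    now = time.time()
    _recent_errors.append(now)
    while _recent_errors and now - _recent_errors[0] > WINDOW_SECONDS:
        _recent_errors.popleft()
    if len(_recent_errors) >= ALERT_THRESHOLD:
        alert(f"{len(_recent_errors)} '{event}' failures in the last {WINDOW_SECONDS}s")

def alert(message: str) -> None:
    # Placeholder: wire this to Slack, PagerDuty, email, or whatever you actually use.
    log.error(json.dumps({"event": "alert", "message": message}))

# Example: record_event("gmail_webhook", ok=False, status=502)
```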

4. Iterated on Architecture Rapidly

When a human engineer commits to an architecture, the switching cost is high. You've invested hours writing code—you're anchored.

With AI: I tried four different architecture approaches in the first three days. Prototype rapidly, test the assumptions, throw away what doesn't work.

The final architecture is better because I could afford to explore.

The Files That Changed 20+ Times

Some files went through heavy iteration:

- The monitoring dashboard
- The Gmail webhook handler

Is that thrashing? No. That's refinement.

The dashboard needed UX iteration—I could try variations fast and keep what worked.

The Gmail webhook hit production bugs—I could debug and fix same-day.

High churn on user-facing features is normal when you can iterate cheaply.
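
If you want to spot your own high-churn files, a sketch along these lines does the job; the date range and cutoff are arbitrary.

```python
# Sketch: which files changed most often? Count the commits touching each path.
import subprocess
from collections import Counter

def change_counts(since: str = "19 days ago", top: int = 10):
    log = subprocess.run(
        ["git", "log", f"--since={since}", "--name-only", "--pretty=format:"],
        capture_output=True, text=True, check=True,
    ).stdout
    counts = Counter(line for line in log.splitlines() if line.strip())
    return counts.most_common(top)

if __name__ == "__main__":
    for path, n in change_counts():
        print(f"{n:4d}  {path}")
```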

What You Should Do Differently

If you're using AI for coding, here's what to change:

Stop: Using AI as autocomplete

Start: Using AI as a development partner

Ask it to generate tests. Ask it to write documentation. Ask it to review your architecture and suggest alternatives.

Stop: Accepting first output

Start: Iterating rapidly

The first solution is rarely the best. With AI, iteration is cheap. Generate three approaches, evaluate them, pick the best.

Stop: Skipping tests because they're tedious

Start: Generating comprehensive test suites

You can have 90%+ test coverage without the tedium. Generate tests alongside features. Fix bugs before production.

Stop: Letting docs fall behind

Start: Generating docs continuously

Architecture decisions, API references, troubleshooting guides—generate them as you code. They cost nothing and save hours later.

Stop: Building first, observability later

Start: Monitoring and logging as core features

Generate dashboards, alerting, and runbooks upfront. You'll ship more confidently and debug faster.

The Real Productivity Gain

The 76-152x productivity multiplier isn't about typing speed.

It's about:

- Testing thoroughly, so bugs die before users see them
- Documenting continuously, so the docs match the code
- Monitoring proactively, so you know something broke before anyone tells you
- Iterating on architecture while changing it is still cheap

When you use AI as autocomplete, you save a few minutes typing.

When you use AI as a development partner, you build better systems faster.

The Mindset Shift

The hardest part isn't the tools—it's the mindset.

Most developers think: "AI helps me write code faster."

The better framing: "AI removes the friction from engineering practices I know I should do but usually skip."

You know you should write tests. You know you should document. You know you should build monitoring.

But they're tedious, so you defer them.

With AI, they're no longer tedious. So you actually do them.

That's where the real multiplier comes from.

What Didn't Work

AI as a development partner isn't perfect. Here's what I learned the hard way:

Over-reliance on generation. Early on, I let AI generate entire features without understanding the architecture. Result: 3 rewrites when I realized the foundation was wrong. Now I design first, then use AI to implement.

Test quality varies. AI writes tests that pass but don't always test the right things. I caught this when "100% coverage" still had bugs. Now I review test logic, not just coverage numbers.
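
Here's a caricature of the difference, with a made-up function rather than anything from my codebase: both tests below raise coverage, but only the second one pins down behavior that would actually catch a regression.

```python
# Made-up example: both tests execute apply_discount, only one tests its behavior.
import pytest

def apply_discount(price: float, percent: float) -> float:
    return price * (1 - percent / 100)

def test_discount_runs():
    # Weak: bumps coverage, asserts almost nothing. A broken formula still passes.
    assert apply_discount(100.0, 10) is not None

def test_discount_values():
    # Meaningful: pins down the contract, including the boundaries.
    assert apply_discount(100.0, 10) == pytest.approx(90.0)
    assert apply_discount(100.0, 0) == pytest.approx(100.0)
    assert apply_discount(100.0, 100) == pytest.approx(0.0)
```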

Documentation drift. AI-generated docs are great initially but drift as code changes. I spent a day deleting outdated docs because I didn't enforce "update docs when code changes." Automation creates docs fast; it also creates stale docs fast.
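
One cheap guardrail I'd add in hindsight, sketched here under the assumption of a src/ plus docs/ layout and a CI-provided base branch: fail the build when code changes land without any docs change next to them.

```python
# Sketch of a CI guardrail: fail when source files change but docs/ does not.
# Assumes src/ and docs/ directories and a base ref supplied by the CI environment.
import subprocess
import sys

def changed_files(base: str = "origin/main", head: str = "HEAD") -> list[str]:
    diff = subprocess.run(
        ["git", "diff", "--name-only", f"{base}...{head}"],
        capture_output=True, text=True, check=True,
    ).stdout
    return [line for line in diff.splitlines() if line.strip()]

def main() -> int:
    files = changed_files()
    code_changed = any(f.startswith("src/") for f in files)
    docs_changed = any(f.startswith("docs/") for f in files)
    if code_changed and not docs_changed:
        print("src/ changed but docs/ did not; update the docs or explain why not.")
        return 1
    return 0

if __name__ == "__main__":
    sys.exit(main())
```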

The tool encourages premature optimization. Because AI can refactor quickly, I over-engineered early. I spent 2 days building complex E2E test infrastructure before validating whether I needed it. (I didn't. Integration tests were enough.)

The lesson: AI amplifies your process. If your process is sloppy, AI makes sloppy code faster.

What I'm Still Learning

This isn't a victory lap. There are real tradeoffs and open questions:

I don't know the long-term maintainability yet. Will this code be easy to maintain in 6 months? 2 years? I can tell you it has good test coverage and documentation, which helps. But time will tell.

The rework rate is normal, but it's still 26.5%. That's a lot of deleted code. Some of it was exploration (good). Some of it was mistakes (expected). But I'm watching this closely.

I built this solo. How does this approach work with a team? With code review? With multiple people generating code via AI? Those are open questions.

The tools are evolving fast. What I learned in the past three weeks might be outdated in three months. This is a moving target.

Where I Am Now

Three weeks in, 93.5% test coverage, comprehensive docs, monitoring in place. It looks impressive.

But I'm tracking a different metric: can this run for a week without me touching it?

Not yet. I'm still fixing edge cases, tightening error handling, hardening the foundation.

The AI got me here fast. But "fast" and "ready" aren't the same thing.

That's the real lesson: use AI as a development partner to build systems that endure, not just features that demo. The multiplier isn't in typing faster—it's in testing thoroughly, documenting continuously, and monitoring proactively.

If you're using AI as autocomplete, you're optimizing for the wrong thing.


Next: Vibe-Coded Software