My 12-Person Leadership Team Lives in a Markdown File

TL;DR: I don't have a CTO, CFO, or Legal counsel. But I consult them before every major decision. Built a 12-persona framework that shows tradeoffs—then I make the call. Automated bureaucracy that works.


I'm building an AI assistant for executives. Solo. No co-founder, no advisors, no board.

But I still need perspectives I don't have: security, compliance, GTM, customer success, finance.

So I built them.

The origin: Started as a prompt engineering hack to get design feedback on features. Needed a designer perspective, so I defined one. Then security. Then compliance. Kept adding until I had 12 personas covering every domain.

Now before any major decision, I run it through the team.

Example: What metrics should the monitoring dashboard show?

Two audiences, completely different needs: me, debugging the AI today, and a future Customer Success hire, watching user health.

I could have guessed. Instead, I ran it through my leadership team.

The Solution: Automated Bureaucracy

I built a 12-persona framework in an 887-line markdown file. Each persona represents a domain I need: CTO, CFO, Legal, Security, Product, Customer Success, etc.

Before any major decision, I "bring it to the team" and get feedback from all 12 perspectives.

The insight: You don't need to hire a leadership team to think like one. You just need to systematically embody their perspectives.

The 12 Personas

Technical & Product:

  1. Chief Technology Officer — AI/ML strategy, scalability, vendor selection
  2. VP Product / UX Designer — User experience, emotional impact
  3. Chief Compliance Officer — Regulatory risk, legal compliance
  4. VP Infrastructure / SRE — Reliability, monitoring, incident response
  5. Director of Data Engineering — ML infrastructure, analytics, data quality

Business & GTM:

  6. Chief Revenue Officer — Pricing, competitive positioning, GTM strategy
  7. VP Business Development — Partnerships, integrations, enterprise readiness
  8. VP Customer Success — Onboarding, retention, support

Advisory & Oversight:

  9. General Counsel — Contracts, IP, liability, regulatory compliance
  10. Board Member / Strategic Advisor — Unit economics, TAM, defensibility
  11. Chief Financial Officer — Cash management, burn rate, financial controls
  12. Chief Information Security Officer — Threat modeling, access control, SOC 2

Each persona has a defined role, a thinking lens, key questions they ask, red flags they catch, positive signals they value, and example feedback (the full template is in Step 3).

Real Example 1: Monitoring Dashboard

Decision: What metrics should the monitoring dashboard show?

Instead of a full 12-person review, I focused on 4 personas:

Director of Data Engineering (my needs):

"I need to debug AI: model accuracy over time, confidence distribution, fallback usage rate, cost per email, analysis latency (P50/P90/P99), queue depth. If Claude is degrading, I need to catch it FAST."

Director of Customer Success (future hire needs):

"I need user health: expired Gmail watches, sync gaps, churn risk (no login 7+ days), activation rate, feature adoption. The 'users needing attention' list is CRITICAL—that's my daily action queue."

CTO (quick check):

"These are orthogonal concerns. Don't combine them. Build TABS—separate views for separate audiences. Data Science tab for technical metrics, Users tab for health metrics."

Product Designer (UX input):

"If you combine these on one screen, it's overwhelming. Who's the primary user TODAY? You are. Build Data Science tab first. Stub out Users tab structure for future CS hire."

Decision: Built dashboard with 4 tabs (Users, System, AI/ML, Queue). Both audiences served from day 1.

Outcome: AI/ML tab caught 3 production issues in first week (cost spike, queue depth warning, model degradation). Users tab is ready for when I eventually hire—built it right the first time instead of rebuilding later.

Real Example 2: Encryption at Rest Timing

Decision: Should we build encryption at rest now (2-3 weeks of dev time) or wait until later?

Context: Security architecture is fully designed but not implemented. Under 20 beta users, no enterprise customers yet.

CISO (Security Specialist):

"The design is solid. But encryption at rest protects against DATABASE BREACH. What's our threat model? We're beta with under 20 users. We have RLS policies. No enterprise customers requiring SOC 2. Building now = 2-3 weeks. What are we NOT building instead?"

Legal Counsel:

"Do current or near-term customers require encryption at rest contractually? If not, it's not blocking. But if we're talking to enterprise prospects, they WILL ask. Having it designed but not implemented is worse than not having it—shows we know it's needed but haven't prioritized security."

CTO (Tech Evangelist):

"Build the parachute before you jump. We're not jumping yet (no enterprise customers). Ship personalization (the moat), THEN build security when we're selling to customers who require it."

Board Member:

"What's the strategic milestone? If it's proving product/market fit, you don't need encryption at rest. If it's closing first enterprise deal, you DO. When's that deal happening? If it's 6 months out, you have time."

CFO:

"2-3 weeks of dev time = opportunity cost (features not built). What's the revenue impact? If encryption unblocks an enterprise deal, build it. If we're not selling to enterprise for 6 months, we're building insurance we don't need yet."

Synthesis: The panel converged on defer. Ship the moat now; build encryption when an enterprise deal requires it.

My Decision: Build it now anyway.

The panel made the economically rational argument. But I don't want to earn trust over time on security—I want to be uncompromised from day one. If I'm asking executives to trust me with their email and calendar, encryption at rest isn't negotiable.

The lesson: The framework shows you the tradeoffs. You still make the call. Sometimes the right decision isn't the economically optimal one.

Real Example 3: Drift Detection Complexity

Decision: Build full academic drift detection (4 dimensions: log-ratio divergence, KL divergence, Spearman correlation, Bayesian updates) or start simple?

Data Engineer:

"Do we have ENOUGH DATA? KL divergence needs 50+ samples for statistical significance. Spearman needs 20+ paired observations. Users need 3-4 weeks of data minimum. If we launch this on day 7, it'll throw garbage numbers. Start simple: track response time changes week-over-week."

CTO (Tech Evangelist):

"Four dimensions using KL divergence, Spearman correlation, Bayesian updates—this screams 'overengineered.' Are we writing a paper or shipping a product? Start with: 'Are you responding to recruiting emails faster than product emails despite saying product is priority?' That's ONE metric. Ship it."

Board Member:

"What problem are we solving? User says 'I want to focus on product' but keeps responding to recruiting emails. Do we need four statistical dimensions? Or just track category response times and flag inversions?"

VP Product / Designer:

"How do we SHOW drift to users? 'Your KL divergence is 0.43' means nothing. 'You said focus on product, but you're responding to recruiting 2x faster' is actionable. The simpler the algorithm, the simpler the explanation."

CFO:

"Full 4-dimensional implementation = 2 weeks dev time. Simple version (response time + category tracking) = 2 days. Will the complex version provide 10x more value initially? Probably not. Start simple, iterate based on user feedback."

Synthesis: Every persona converged on the same answer: the four-dimensional version is premature.

Decision: Start simple—track response time by category, flag inversions. But build the data infrastructure to support advanced modeling when we have enough data.

The personas prevented premature optimization while ensuring we don't paint ourselves into a corner. Simple algorithm now, with the pipes ready for complexity later.
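The simple version the team converged on fits in a few lines. This is a hypothetical sketch, not the actual implementation: `flag_inversions` compares average response times per category against a stated priority order and flags any pair where a lower-priority category is getting faster responses.

```python
from statistics import mean

def flag_inversions(response_times, priority_order):
    """Flag category pairs where a lower-priority category gets
    faster responses than a higher-priority one.

    response_times: {category: [response times in hours]}
    priority_order: categories from highest to lowest priority
    """
    averages = {cat: mean(times) for cat, times in response_times.items()}
    inversions = []
    for i, high in enumerate(priority_order):
        for low in priority_order[i + 1:]:
            if high in averages and low in averages and averages[low] < averages[high]:
                # Record the pair and how much faster the low-priority
                # category is being answered.
                inversions.append((high, low, averages[high] / averages[low]))
    return inversions

# A user who says product is the priority but answers recruiting 2x faster:
times = {"product": [8.0, 10.0], "recruiting": [4.0, 5.0]}
flags = flag_inversions(times, ["product", "recruiting"])
# flags -> [("product", "recruiting", 2.0)]
```

One metric, one loop, and the output maps directly to the sentence the user sees: "you're responding to recruiting 2x faster than product."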

Why This Works

1. Forces Multi-Perspective Thinking

Solo builders optimize for what they know (usually tech).

Without personas: "I think about architecture, scalability, performance."
With personas: "What would the CFO say? Legal? Customer Success?"

Result: Catch financial, legal, UX, GTM issues before they're expensive.

2. Provides Structure for Intuition

Intuition exists but is hard to articulate.

Before: "This feels wrong, but I can't explain why."
After: "The Board Member would ask: 'What's defensible here?' And the answer is: Nothing. That's why it feels wrong."

3. Scales Expert Judgment

Can't afford a 12-person leadership team at seed stage. Can afford a markdown file.

How to Build Your Own

Step 1: Identify Your Blind Spots

What domains do you NOT have expertise in?

For me (technical background): finance, legal, GTM, customer success, compliance.

Your personas should cover your blind spots.

Step 2: Define 8-12 Personas

Don't go below 8 (not enough coverage) or above 15 (too many voices).

Recommended core set:

  1. CTO / Tech Lead
  2. Product Manager / Designer
  3. CFO / Finance
  4. CRO / GTM Lead
  5. Customer Success
  6. Legal / Compliance
  7. Security / Infrastructure
  8. Board Member / Strategic Advisor

Add domain-specific personas as needed. For me: VP Infrastructure / SRE, Director of Data Engineering, VP Business Development, and a dedicated Chief Compliance Officer.

Step 3: Document Each Persona

For each persona, write:

1. Role & Expertise — What domain do they own?

2. How They Think — What lens do they view decisions through?

3. Key Questions They Ask — 7-10 specific, actionable questions

4. Red Flags They Catch — What mistakes do they prevent?

5. Positive Signals They Value — What "done right" looks like

6. Example Feedback — 2-3 realistic examples from your domain
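For concreteness, here's a sketch of what one persona's section in the markdown file might look like. The wording is illustrative, loosely drawn from the CFO quotes earlier; it's not the actual file.

```markdown
## Chief Financial Officer

**Role & Expertise:** Cash management, burn rate, financial controls.

**How They Think:** Every feature is an investment; every week of dev
time has an opportunity cost.

**Key Questions:**
- What's the opportunity cost of building this now?
- Does this unblock revenue, or is it insurance?
- What's the cheapest version that tests the assumption?

**Red Flags:** Multi-week builds with no revenue linkage; "we'll need
it eventually" reasoning.

**Positive Signals:** Explicit cost/benefit framing; staged investment
with checkpoints.

**Example Feedback:** "2-3 weeks of dev time = features not built. If
this unblocks a deal, build it. If not, defer."
```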

Step 4: Test with Real Decisions

Pick 3 recent decisions. Run them through retroactively.

Ask: "Would this have changed my decision?"

If yes → Framework is working.
If no → Personas are too generic. Make them domain-specific.

Step 5: Iterate Based on Mistakes

Every time you make a mistake, ask: "Which persona would have caught this?"

Example: Launched feature without considering mobile UX.
→ Add Product Designer question: "How does this work on mobile?"

Your framework should evolve as you learn.

Advanced Pattern: Weight by Domain Relevance

Not every persona matters equally for every decision.

Technical Architecture Decision: weight CTO, SRE, and CISO heavily; CRO and Customer Success lightly.

Pricing Decision: weight CRO, CFO, and Board Member heavily; SRE and Data Engineering lightly.

Don't average opinions equally. Weight by relevance.
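One way to make the weighting mechanical is a small scoring helper. This is a hypothetical sketch; the persona names, scores, and weights are illustrative, not from the actual framework.

```python
def weighted_review(opinions, weights):
    """Combine persona recommendations, weighted by domain relevance.

    opinions: {persona: score in [-1, 1]}  (-1 strongly against, +1 strongly for)
    weights:  {persona: relevance weight for THIS decision}
    """
    total = sum(weights.get(p, 0) for p in opinions)
    if total == 0:
        return 0.0
    # Weighted average: high-relevance personas dominate the score.
    return sum(score * weights.get(p, 0) for p, score in opinions.items()) / total

# Pricing decision: GTM and finance personas dominate; SRE barely weighs in.
opinions = {"CRO": 0.8, "CFO": 0.5, "SRE": -0.9}
weights = {"CRO": 3, "CFO": 3, "SRE": 1}
score = weighted_review(opinions, weights)
# score = (0.8*3 + 0.5*3 + -0.9*1) / 7 ≈ 0.43
```

The number itself matters less than the discipline: you have to decide, per decision, whose opinion counts how much.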

Common Objections

"This sounds like overthinking. Just ship fast."

Personas ENABLE speed by showing you the tradeoffs clearly.

Encryption timing example: the panel said defer the 2-3 weeks of work and build encryption later. I overrode them and built it now, preventing a potentially catastrophic trust issue: asking executives to trust me with their email without encryption. Sometimes the economically rational decision isn't the right one.

"I can't embody perspectives I don't have expertise in."

You don't need to be an expert. You need to know what questions experts ask.

You don't need to know: How to write a data processing agreement.
You DO need to know: "Would Legal ask about data processing agreements?"

The persona prompts you to research, not answer off the top of your head.

"AI can do this. Just ask Claude."

AI needs structure to give good feedback.

Without structure: "Claude, should I build encryption now or later?" → Generic answer

With persona: "Claude, review this as my CFO. Here's the CFO persona: [paste]. What would they say?" → Specific, actionable feedback

AI amplifies the framework. It doesn't replace it.
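The persona-scoped prompt is just string assembly around the markdown file. A minimal sketch, with a hypothetical `build_persona_prompt` helper and illustrative wording:

```python
def build_persona_prompt(persona_name, persona_definition, decision):
    """Assemble a review prompt that scopes the model to one persona.

    persona_definition: that persona's section pasted from the markdown file
    decision: the question under review
    """
    return (
        f"You are reviewing a decision as my {persona_name}.\n\n"
        f"Persona definition:\n{persona_definition}\n\n"
        f"Decision under review:\n{decision}\n\n"
        "Respond in character: key questions, red flags, and a recommendation."
    )

prompt = build_persona_prompt(
    "CFO",
    "Owns cash management, burn rate, opportunity cost.",
    "Build encryption at rest now (2-3 weeks) or defer?",
)
# Send `prompt` to your AI assistant of choice; the structure, not the
# model, is what turns a generic answer into a CFO answer.
```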

Where I Am Now

A few weeks into building this AI assistant, I've used the framework for every major decision: encryption timing, monitoring architecture, drift detection complexity, tool selection, CI enforcement.

The pattern: run it through the personas, see the tradeoffs, make the call.

Sometimes I follow the consensus (monitoring dashboard: build both tabs). Sometimes I override it (encryption: build now anyway, not later).

The value isn't that the personas give me "the answer." It's that they show me what I'm not seeing.

I'm building solo, but I'm not thinking solo. That's what the 887-line markdown file does.

Automated bureaucracy. Systematic perspective-taking. Better decisions without the payroll.


Related: Hands-Off Workflow · Context Management at 1M Tokens