My 12-Person Leadership Team Lives in a Markdown File
TL;DR: I don't have a CTO, a CFO, or legal counsel. But I consult them before every major decision. I built a 12-persona framework that surfaces the tradeoffs; then I make the call. Automated bureaucracy that works.
I'm building an AI assistant for executives. Solo. No co-founder, no advisors, no board.
But I still need perspectives I don't have: security, compliance, GTM, customer success, finance.
So I built them.
The origin: it started as a prompt-engineering hack to get design feedback on features. I needed a designer's perspective, so I defined one. Then security. Then compliance. I kept adding until I had 12 personas covering every domain.
Now before any major decision, I run it through the team.
Example: What metrics should the monitoring dashboard show?
Two audiences, completely different needs:
- Me (debugging AI/ML): Model accuracy, cost tracking, latency, queue health
- Future Customer Success hire: User health, churn risk, activation rate, "users needing attention" list
I could have guessed. Instead, I ran it through my leadership team.
The Solution: Automated Bureaucracy
I built a 12-persona framework in an 887-line markdown file. Each persona represents a domain I need: CTO, CFO, Legal, Security, Product, Customer Success, etc.
Before any major decision, I "bring it to the team" and get feedback from all 12 perspectives.
The insight: You don't need to hire a leadership team to think like one. You just need to systematically embody their perspectives.
The 12 Personas
Technical & Product:
- Chief Technology Officer — AI/ML strategy, scalability, vendor selection
- VP Product / UX Designer — User experience, emotional impact
- Chief Compliance Officer — Regulatory risk, legal compliance
- VP Infrastructure / SRE — Reliability, monitoring, incident response
- Director of Data Engineering — ML infrastructure, analytics, data quality
Business & GTM:
- Chief Revenue Officer — Pricing, competitive positioning, GTM strategy
- VP Business Development — Partnerships, integrations, enterprise readiness
- VP Customer Success — Onboarding, retention, support
Advisory & Oversight:
- General Counsel — Contracts, IP, liability, regulatory compliance
- Board Member / Strategic Advisor — Unit economics, TAM, defensibility
- Chief Financial Officer — Cash management, burn rate, financial controls
- Chief Information Security Officer — Threat modeling, access control, SOC 2
Each persona has:
- Role & expertise
- How they think (mental model)
- 7-10 key questions they always ask
- Red flags they catch
- Example feedback
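For a concrete feel, here's a condensed, hypothetical sketch of one entry (the CFO, reconstructed from the feedback quoted later in this post; the real entries run longer and more domain-specific):

```markdown
## Chief Financial Officer

**Role & expertise:** Cash management, burn rate, financial controls.

**How they think:** Every decision is an allocation of scarce runway.
Dev time spent here is features not built elsewhere.

**Key questions:**
1. What's the opportunity cost of this work?
2. Does this unblock revenue, or is it insurance?
3. What does this do to burn rate and runway?
4. What's the cheapest version that tests the same assumption?
(...7-10 in the real file)

**Red flags:** "We'll need it eventually" with no date attached;
effort estimates quoted without the work they displace.

**Example feedback:** "2-3 weeks of dev time = opportunity cost.
If it unblocks a deal, build it. If not, it's insurance we don't
need yet."
```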
Real Example 1: Monitoring Dashboard
Decision: What metrics should the monitoring dashboard show?
Instead of a full 12-persona review, I focused on 4 personas:
Director of Data Engineering (my needs):
"I need to debug AI: model accuracy over time, confidence distribution, fallback usage rate, cost per email, analysis latency (P50/P90/P99), queue depth. If Claude is degrading, I need to catch it FAST."
VP Customer Success (future hire's needs):
"I need user health: expired Gmail watches, sync gaps, churn risk (no login 7+ days), activation rate, feature adoption. The 'users needing attention' list is CRITICAL—that's my daily action queue."
CTO (quick check):
"These are orthogonal concerns. Don't combine them. Build TABS—separate views for separate audiences. Data Science tab for technical metrics, Users tab for health metrics."
Product Designer (UX input):
"If you combine these on one screen, it's overwhelming. Who's the primary user TODAY? You are. Build Data Science tab first. Stub out Users tab structure for future CS hire."
Decision: Built the dashboard with 4 tabs (Users, System, AI/ML, Queue). Both audiences served from day one.
Outcome: The AI/ML tab caught 3 production issues in the first week (a cost spike, a queue-depth warning, model degradation). The Users tab is ready for when I eventually hire; built right the first time instead of rebuilt later.
Real Example 2: Encryption at Rest Timing
Decision: Should we build encryption at rest now (2-3 weeks of dev time) or defer it?
Context: Security architecture is fully designed but not implemented. Under 20 beta users, no enterprise customers yet.
CISO (Security Specialist):
"The design is solid. But encryption at rest protects against DATABASE BREACH. What's our threat model? We're beta with under 20 users. We have RLS policies. No enterprise customers requiring SOC 2. Building now = 2-3 weeks. What are we NOT building instead?"
Legal Counsel:
"Do current or near-term customers require encryption at rest contractually? If not, it's not blocking. But if we're talking to enterprise prospects, they WILL ask. Having it designed but not implemented is worse than not having it—shows we know it's needed but haven't prioritized security."
CTO (Tech Evangelist):
"Build the parachute before you jump. We're not jumping yet (no enterprise customers). Ship personalization (the moat), THEN build security when we're selling to customers who require it."
Board Member:
"What's the strategic milestone? If it's proving product/market fit, you don't need encryption at rest. If it's closing first enterprise deal, you DO. When's that deal happening? If it's 6 months out, you have time."
CFO:
"2-3 weeks of dev time = opportunity cost (features not built). What's the revenue impact? If encryption unblocks an enterprise deal, build it. If we're not selling to enterprise for 6 months, we're building insurance we don't need yet."
Synthesis:
- Critical: No blocking customer need right now
- Important: Don't want to be caught flat-footed if enterprise deal appears
- Decision weight: CTO + Board + CFO say "wait" (timeline-based)
My Decision: Build it now anyway.
The panel made the economically rational argument. But I don't want to earn trust over time on security—I want to be uncompromised from day one. If I'm asking executives to trust me with their email and calendar, encryption at rest isn't negotiable.
The lesson: The framework shows you the tradeoffs. You still make the call. Sometimes the right decision isn't the economically optimal one.
Real Example 3: Drift Detection Complexity
Decision: Build full academic drift detection (4 dimensions: log-ratio divergence, KL divergence, Spearman correlation, Bayesian updates) or start simple?
Data Engineer:
"Do we have ENOUGH DATA? KL divergence needs 50+ samples for statistical significance. Spearman needs 20+ paired observations. Users need 3-4 weeks of data minimum. If we launch this on day 7, it'll throw garbage numbers. Start simple: track response time changes week-over-week."
CTO (Tech Evangelist):
"Four dimensions using KL divergence, Spearman correlation, Bayesian updates—this screams 'overengineered.' Are we writing a paper or shipping a product? Start with: 'Are you responding to recruiting emails faster than product emails despite saying product is priority?' That's ONE metric. Ship it."
Board Member:
"What problem are we solving? User says 'I want to focus on product' but keeps responding to recruiting emails. Do we need four statistical dimensions? Or just track category response times and flag inversions?"
VP Product / Designer:
"How do we SHOW drift to users? 'Your KL divergence is 0.43' means nothing. 'You said focus on product, but you're responding to recruiting 2x faster' is actionable. The simpler the algorithm, the simpler the explanation."
CFO:
"Full 4-dimensional implementation = 2 weeks dev time. Simple version (response time + category tracking) = 2 days. Will the complex version provide 10x more value initially? Probably not. Start simple, iterate based on user feedback."
Synthesis:
- Data availability: Not enough samples for statistical significance yet
- Value proof: Need to validate simple version works before adding complexity
- User communication: Complex metrics hard to explain
- Decision weight: 6 of 7 personas say "start simple"
Decision: Start simple—track response time by category, flag inversions. But build the data infrastructure to support advanced modeling when we have enough data.
The personas prevented premature optimization while ensuring we didn't paint ourselves into a corner. Simple algorithm now, with the pipes ready for complexity later.
Why This Works
1. Forces Multi-Perspective Thinking
Solo builders optimize for what they know (usually tech).
Without personas: "I think about architecture, scalability, performance."
With personas: "What would the CFO say? Legal? Customer Success?"
Result: Catch financial, legal, UX, GTM issues before they're expensive.
2. Provides Structure for Intuition
Intuition exists but is hard to articulate.
Before: "This feels wrong, but I can't explain why."
After: "The Board Member would ask: 'What's defensible here?' And the answer is: Nothing. That's why it feels wrong."
3. Scales Expert Judgment
Can't afford a 12-person leadership team at seed stage. Can afford a markdown file.
How to Build Your Own
Step 1: Identify Your Blind Spots
What domains do you NOT have expertise in?
For me (technical background):
- ❌ GTM / Sales
- ❌ Legal / Compliance
- ❌ Financial planning
- ❌ Customer Success
- ✅ Engineering / Product (I got this)
Your personas should cover your blind spots.
Step 2: Define 8-12 Personas
Don't go below 8 (not enough coverage) or above 15 (too many voices).
Recommended core set:
- CTO / Tech Lead
- Product Manager / Designer
- CFO / Finance
- CRO / GTM Lead
- Customer Success
- Legal / Compliance
- Security / Infrastructure
- Board Member / Strategic Advisor
Add domain-specific:
- Data Engineer (if AI/ML product)
- VP Engineering (if managing team)
- VP Business Development (if partnerships critical)
Step 3: Document Each Persona
For each persona, write:
1. Role & Expertise — What domain do they own?
2. How They Think — What lens do they view decisions through?
3. Key Questions They Ask — 7-10 specific, actionable questions
4. Red Flags They Catch — What mistakes do they prevent?
5. Positive Signals They Value — What "done right" looks like
6. Example Feedback — 2-3 realistic examples from your domain
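As a bare skeleton, assuming you keep everything in one markdown file as I do (the headings are illustrative; what matters is answering all six parts):

```markdown
## <Persona Name>

**Role & expertise:** <the domain they own>

**How they think:** <the lens they apply to every decision>

**Key questions:**
1. <specific, actionable question>
2. <...7-10 total>

**Red flags:** <the mistakes this persona exists to prevent>

**Positive signals:** <what "done right" looks like to them>

**Example feedback:** <2-3 realistic quotes from your domain>
```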
Step 4: Test with Real Decisions
Pick 3 recent decisions. Run them through retroactively.
Ask: "Would this have changed my decision?"
If yes → Framework is working.
If no → Personas are too generic. Make them domain-specific.
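A lightweight way to record each retro test, kept in the same file (this format is a suggestion, not part of my framework):

```markdown
## Retro: <a decision from last quarter>

**What I actually decided:** <the call I made>
**Persona pass:** <which personas flagged what>
**Would it have changed the decision?** yes / no
**If no:** <which persona was too generic, and how to sharpen it>
```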
Step 5: Iterate Based on Mistakes
Every time you make a mistake, ask: "Which persona would have caught this?"
Example: Launched feature without considering mobile UX.
→ Add Product Designer question: "How does this work on mobile?"
Your framework should evolve as you learn.
Advanced Pattern: Weight by Domain Relevance
Not every persona matters equally for every decision.
Technical Architecture Decision:
- Primary weight: CTO, Infrastructure Engineer (80%)
- Secondary weight: Security, Data Engineer (15%)
- Advisory weight: Everyone else (5%)
Pricing Decision:
- Primary weight: CRO, CFO, Board Member (80%)
- Secondary weight: Customer Success, Product (15%)
- Advisory weight: Everyone else (5%)
Don't average opinions equally. Weight by relevance.
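One way to make the weights explicit is a short header on each decision review. A hypothetical sketch (the topic placeholder and the blocking convention are illustrative, not rules from my file):

```markdown
## Decision Review: <technical architecture decision>

**Primary (80%):** CTO, Infrastructure Engineer
**Secondary (15%):** Security, Data Engineer
**Advisory (5%):** everyone else

Convention: a primary persona's objection blocks the decision;
an advisory persona's objection gets a one-line response.
```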
Common Objections
"This sounds like overthinking. Just ship fast."
Personas ENABLE speed by showing you the tradeoffs clearly.
Encryption timing example: the panel said to defer the 2-3-week build. I overrode them and built it now. That avoided a potentially catastrophic trust gap: asking executives to hand over their email without encryption at rest. Sometimes the economically rational decision isn't the right one.
"I can't embody perspectives I don't have expertise in."
You don't need to be an expert. You need to know what questions experts ask.
You don't need to know: How to write a data processing agreement.
You DO need to know: "Would Legal ask about data processing agreements?"
The persona prompts you to research, not answer off the top of your head.
"AI can do this. Just ask Claude."
AI needs structure to give good feedback.
Without structure: "Claude, should I build encryption now or later?" → Generic answer
With persona: "Claude, review this as my CFO. Here's the CFO persona: [paste]. What would they say?" → Specific, actionable feedback
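Spelled out, that prompt might look something like this (the structure is illustrative; the persona text is pasted verbatim from the markdown file):

```markdown
Review the decision below as my CFO. Stay in character.

PERSONA:
<paste the CFO section from the personas file>

DECISION:
Build encryption at rest now (2-3 weeks of dev time) or defer it?
Context: beta, under 20 users, no enterprise customers yet.

Respond with the key questions the CFO would ask, any red flags,
and a recommendation that states the tradeoff explicitly.
```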
AI amplifies the framework. It doesn't replace it.
Where I Am Now
A few weeks into building this AI assistant, I've used the framework for every major decision: encryption timing, monitoring architecture, drift detection complexity, tool selection, CI enforcement.
The pattern: run it through the personas, see the tradeoffs, make the call.
Sometimes I follow the consensus (monitoring dashboard: build both tabs). Sometimes I override it (encryption: build now anyway, not later).
The value isn't that the personas give me "the answer." It's that they show me what I'm not seeing.
I'm building solo, but I'm not thinking solo. That's what the 887-line markdown file does.
Automated bureaucracy. Systematic perspective-taking. Better decisions without the payroll.
Related: Hands-Off Workflow · Context Management at 1M Tokens