The PM wanted a property called "customer sentiment." I asked her to define it precisely enough for an AI to extract it from a sales call transcript. She couldn't — not because she didn't understand her customers, but because the gap between a human intuition and a machine-readable specification is wider than anyone expects. We spent a full session rewriting that one field description. That session was more valuable than the three discovery calls that preceded it.
That moment captures something I kept running into across two decades of building and implementing solutions: the bottleneck is never the technology. It's the process of translating what people know about their product into something a system can act on.
The process everyone pretends works
If you've done serious AI implementation, you've lived some version of this.
The discovery calls. You book 90 minutes. The client's team shows up — product lead, a senior engineer, maybe a data person. You ask where user data lives. Someone says "Postgres." Someone else says "also Segment." A third person mentions a legacy Mongo instance they're "migrating off of... eventually." You ask about the notification system. Nobody's sure who owns it. You ask for codebase access. Someone creates a Jira ticket. That ticket takes four days.
The codebase sessions. You got access. You're reading code. You find three different user ID formats. You find an event tracking system that fires 47 distinct events, but only 12 are documented. You find a sendEmail() function called from nine places with nine different templates, none of which reference anything about who the user actually is beyond name and email. You call engineering about the 35 undocumented events. Half are deprecated but never removed. The other half are "experimental features from last quarter."
The alignment meeting. Sales wants personalized outreach. Product wants smarter onboarding. CS wants churn signals. Marketing wants campaign intelligence. The VP of Engineering wants to know if this increases page load time. Someone asks "can't we just do all of it?" You explain phasing. Everyone agrees on priorities, but two people plan to relitigate in a side channel later.
The schema debates. You propose entity types. Someone wants to capture 40 fields per entity. You explain that every field the AI extracts needs a precise description or extraction quality falls off a cliff, so let's start with the 15 that matter most. Debate about which 15. This is where the PM tried to define "customer sentiment."
The integration document. Where the new system sits in the stack. Which functions get wrapped. Caching strategy. Rate limits. The phasing: what ships in week one, what's "phase two" — which everyone privately suspects means "never." The CTO has questions that reveal constraints nobody mentioned. You revise.
Total time: 6–10 weeks. 40–80 hours of SA time. 20–30 hours across the client team. And the output — the architecture, the schema, the integration plan — is a document. A good document. But it doesn't execute anything. It doesn't validate itself against the codebase. And by the time implementation starts, two features have shipped that the document doesn't account for.
The pattern was always the same: most of the time wasn't spent architecting. It was spent understanding — digging through code, reconciling what people said with what the system actually does, slowly building a mental model of a product that already existed but nobody had a complete picture of.
The reasoning was transferable. The availability wasn't.
Here's what I couldn't shake: the questions I asked in discovery were the same regardless of industry. The schema design principles were the same. The integration anti-patterns were the same. The production readiness checklist was the same.
The judgment was repeatable. It just lived in my head, and in the heads of a few hundred other experienced SAs around the world.
I decided to encode that process into an AI Skill — a structured reasoning system that installs into any AI-powered IDE. Claude Code, Cursor, Windsurf, others.
I called it "Personize Solution Architect." With Skills, what the AI gets isn't a prompt or a chatbot personality. It's structured knowledge: decision frameworks, constraint systems, integration patterns, industry-specific schemas, and a nine-area production audit built from every mistake we've seen teams make.
The Skill walks through a structured architecture sequence: discovery of product surfaces, filtering real AI opportunities from shallow ones, planning an implementation path, proposing schema and memory models, defining governed generation patterns, suggesting integration strategies, and reviewing production risks. These are the same steps experienced architects already follow. The difference is that the Skill produces a first draft of that thinking in minutes.
That changes the starting point of the conversation. Not by replacing human judgment — the output benefits from review, and it's stronger at some phases than others — but by performing a system-aware first pass that normally takes teams weeks to reach.
What a system-aware first pass actually looks like
You point the Skill at your codebase and ask it to architect governed personalization.
It reads the code and the documentation locally, without sharing anything with Personize or with me; no approval is required, though you can grant fuller access later if you choose. It traces every user.email, every event.track(), every sendEmail() and notify() call. It finds the notification system nobody mentioned. It finds the 35 undocumented events. It finds the three user ID formats and flags the inconsistency. It maps data flows across the application and classifies what it finds: structured data that should be stored directly, unstructured content that needs AI extraction, real-time events that should be memorized as they happen.
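The three-ID-format problem is typical of what this pass surfaces, and of the reconciliation work it sets up. A minimal sketch of a normalizer for that situation (the specific formats here, numeric legacy IDs, UUIDs, and Mongo ObjectIds, are assumptions for illustration, not anyone's actual schema):

```typescript
// Hypothetical sketch: three user ID formats a legacy codebase might mix,
// and a normalizer that canonicalizes them into one tagged shape.
type CanonicalUserId = { source: "legacy" | "uuid" | "mongo"; id: string };

const UUID_RE = /^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$/i;
const OBJECT_ID_RE = /^[0-9a-f]{24}$/i; // Mongo ObjectId: 24 hex characters

function normalizeUserId(raw: string): CanonicalUserId {
  if (/^\d+$/.test(raw)) return { source: "legacy", id: raw }; // e.g. "12345"
  if (UUID_RE.test(raw)) return { source: "uuid", id: raw.toLowerCase() };
  if (OBJECT_ID_RE.test(raw)) return { source: "mongo", id: raw.toLowerCase() };
  throw new Error(`Unrecognized user ID format: ${raw}`);
}
```

The point isn't the regexes; it's that until every store agrees on a canonical ID, nothing downstream (memory, extraction, delivery) can be trusted to describe the same person.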
Then it applies a filter that takes a human SA multiple sessions to develop conviction around: it proposes only the opportunities that actually justify the architecture. Swapping a CTA based on user role? That's template logic — you don't need governed memory for it. Writing a unique CTA that references the visitor's industry, journey stage, open support tickets, and your brand voice? That requires unified memory, governance, and generation working together. The Skill only proposes things that need all three layers.
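The distinction can be sketched in code. Everything below is illustrative: `ctaForRole` is plain template logic, while `generateCta` shows the shape of a call that needs memory, governance, and generation together. The names `generateCta`, `VisitorMemory`, and the `generate` callback are hypothetical, not a real Personize API.

```typescript
// Template logic: a lookup. No governed memory required.
function ctaForRole(role: "admin" | "member"): string {
  return role === "admin" ? "Invite your team" : "Explore your workspace";
}

// Unified memory about the visitor (fields from the example in the text).
interface VisitorMemory {
  industry: string;
  journeyStage: string;
  openTickets: number;
}

// Hypothetical governed-generation entry point: memory feeds the prompt,
// governance constrains the output, generation writes the copy.
async function generateCta(
  memory: VisitorMemory,
  generate: (prompt: string, constraints: { brandVoice: string }) => Promise<string>,
): Promise<string> {
  const prompt =
    `Write a one-line CTA for a ${memory.industry} visitor at the ` +
    `${memory.journeyStage} stage with ${memory.openTickets} open support tickets.`;
  return generate(prompt, { brandVoice: "plainspoken, confident" });
}
```

If your use case fits the first function, you don't need the architecture. The Skill only proposes work that genuinely lives in the second.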
It designs the schema with precise property descriptions — the kind that drive high extraction accuracy, the kind that normally take weeks of iteration because nobody realizes the difference between a weak description and a strong one until they're debugging extraction quality in production.
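To make that difference concrete, here is a weak versus a strong description for the "customer sentiment" field from the opening anecdote. Both versions are illustrative, not the client's actual schema; the strong one just shows the ingredients (a closed value set, a source, judgment criteria, an edge case) that make extraction reliable.

```typescript
// Weak: reads fine to a human, fails in production.
const weakField = {
  name: "customer_sentiment",
  description: "How the customer feels.", // too vague for consistent extraction
};

// Strong: precise enough for an AI to extract the same way every time.
const strongField = {
  name: "customer_sentiment",
  source: "sales call transcript",
  format: "one of: enthusiastic | interested | neutral | skeptical | hostile",
  description:
    "The customer's overall buying disposition in this call. Judge from " +
    "explicit statements of intent, objections raised, and tone toward " +
    "next steps; ignore politeness boilerplate. Edge case: with multiple " +
    "customer speakers, score the most senior decision-maker present.",
  examples: ["interested", "skeptical"],
};
```

Nobody writes the second version on the first try; the Skill's draft gets you most of the way there before the debugging-in-production stage.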
It writes the integration plan using your data models, your field names, your existing functions. Personalization wraps existing delivery — it doesn't replace it. If the personalization layer goes down, your product keeps working. That's a lesson we watched teams learn the hard way. We encoded it.
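A minimal sketch of that wrapping pattern, assuming a generic `sendEmail` delivery function; `personalizeBody` stands in for the personalization layer and is a hypothetical name, not a real API:

```typescript
type Send = (to: string, body: string) => Promise<void>;

// Personalization wraps delivery. If the personalization call throws or
// times out, the original template goes out and the product keeps working.
async function sendPersonalizedEmail(
  sendEmail: Send, // the client's existing delivery function, unchanged
  personalizeBody: (to: string, fallback: string) => Promise<string>,
  to: string,
  templateBody: string,
): Promise<void> {
  let body = templateBody;
  try {
    body = await personalizeBody(to, templateBody);
  } catch {
    body = templateBody; // fall back to the existing template
  }
  await sendEmail(to, body);
}
```

The existing `sendEmail` is never replaced, only wrapped, so the failure mode of the new layer is "less personalized," never "no email."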
The meetings don't disappear. But they change. Instead of "what should we build?" the conversation starts from "is this the right opportunity? Is this schema correct? Should the workflow run here or here? What do we launch first?"
To make initial setup faster, the Skill ships with guidelines and examples for ten industries, and we keep refining them with our users.
What the Skill covers today
- Codebase discovery — maps data flows, flags inconsistencies, classifies what to store vs. extract vs. stream
- Opportunity filtering — proposes only use cases that justify the full memory + governance + generation stack
- Schema design — generates precise field descriptions with source, format, examples, and edge cases
- Governed generation — defines brand voice constraints and compliance rules before any content generates
- Integration planning — writes the wiring plan using your actual data models and function names
- Production audit — nine-area review covering rate limits, fallback behavior, feedback loops, privacy exposure, and more
Schema design and governed generation each deserve their own post — the decisions made at those two phases quietly determine whether a personalization system works in production or just works in demos. More on that soon.
What I think this actually means
Solution architecture is a domain of professional judgment. The kind of work people assume needs a human because it requires reasoning about unfamiliar situations, not just retrieval of known answers.
And we encoded it. Not perfectly. But the "Solution Architect" Skill does real architecture work on products it has never seen, and the output holds up well enough to change the starting point of every conversation that follows.
If this works for solution architecture — and it does, across industries from healthcare to energy management — the same encoding approach works for security audits, compliance reviews, infrastructure planning, data architecture. Any domain where expert reasoning is repeatable but bottlenecked by human availability.
The things the Skill surfaces in the first five minutes will tell you whether the weeks-long process you've been running was discovering complexity, or just rediscovering what the code already knew.
How can you incorporate our Skills into your AI tools?
```shell
npx skills add personizeai/personize-skills
```

That gives you the full set of Personize Skills. The source is open on GitHub — you can browse it, fork it, or point your AI tools directly at it.