The Architecture Pattern That’s Winning in Both Human and Agentic Worlds
The same pattern transforming enterprise talent management is what makes AI agent systems work. Not metaphor—the same principle, different materials.
Enterprise calls it Skills-Based Organization (SBO). The AI world doesn’t have a settled name for it yet, though the Agent Skills standard is a step in that direction. The through-line? Composability beats monoliths when speed matters.
Organizations and AI systems that separate concerns recombine capabilities faster than competitors. Not a prediction—the data already shows it.
What SBO Actually Means (And Why It Matters Now)
Traditional org design starts with roles: you hire a Project Manager, a Data Analyst, a Learning Designer. The role is the container. Skills live inside it—bundled and invisible until you need them elsewhere.
SBO flips this. Map discrete capabilities first, then compose roles dynamically based on what work requires. A “campaign manager” role might require stakeholder synthesis (high), budget modeling (medium), content strategy (low). You assemble based on who has those capabilities—not who has “Campaign Manager” on their resume.
The shift sounds simple. The results aren’t:
Organizations using SBO approaches are 63% more likely to meet business goals
81% of leaders report that skills-based approaches drive organizational agility
Retention improves by 15-25%—because people can see growth pathways that aren’t locked to rigid job titles
That last one matters. When skills are visible and transferable, people understand how their current work builds toward their next capability. The progression is explicit, not opaque.
But here’s the thing—this only works if you can actually separate the concerns. You need to know what the discrete capabilities are, how they combine, and what governance sits around them. You need, in other words, an architecture.
The Three-Layer Pattern (Roles, Skills, Tools)
SBO research breaks the pattern into three layers:
Layer 1: Roles = WHO does the work. Personas, expertise levels, decision-making authority. A senior architect has different judgment boundaries than a junior developer—not because of tools, but because of what they’re trusted to decide.
Layer 2: Skills/Capabilities = WHAT can be done. Discrete, composable techniques. “Stakeholder synthesis” is a skill. “Budget modeling” is a skill. “API integration” is a skill. These exist independently of any specific role.
Layer 3: Tools = Enablers for capability execution. The spreadsheet doesn’t make you good at budget modeling—it’s the substrate skills operate on.
The separation matters because it lets you ask better questions:
Do we have the capability we need? (Layer 2)
Who on the team has it at the level we need? (Layer 1)
What do they need access to in order to execute? (Layer 3)
When these layers blur—when “Project Manager” means person, skills, and tools all at once—you lose the ability to recombine. You’re stuck with monolithic bundles.
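The three-layer separation can be made concrete in a few lines of code. This is a toy sketch, not an SBO tool: the class shapes and the example skills are illustrative assumptions, but they show why the layer boundaries let you ask the three questions independently.

```python
from dataclasses import dataclass

# Layer 2: a skill is a discrete capability, independent of any role.
@dataclass(frozen=True)
class Skill:
    name: str

# Layer 1: a role composes skills at required levels and carries
# decision authority; it is a composition, not a container.
@dataclass
class Role:
    title: str
    requirements: dict  # Skill -> "high" | "medium" | "low"

# Layer 3: a tool enables a skill; it is substrate, not capability.
@dataclass
class Tool:
    name: str
    enables: Skill

synthesis = Skill("stakeholder synthesis")
budgeting = Skill("budget modeling")
content = Skill("content strategy")

campaign_manager = Role(
    title="campaign manager",
    requirements={synthesis: "high", budgeting: "medium", content: "low"},
)
spreadsheet = Tool("spreadsheet", enables=budgeting)

# The three questions become three separate queries:
have_capability = budgeting in campaign_manager.requirements   # Layer 2
required_level = campaign_manager.requirements[budgeting]      # Layer 1
enabler = spreadsheet.name                                     # Layer 3
```

Because skills exist as first-class objects, recombining them into a new role is a new dictionary, not a reorg.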
That’s where the AI parallel gets interesting.
Claude Code’s Skills Architecture
Claude Code has the same three-layer separation. Different materials:
Layer 1: Agents = WHO does the work. Specialized personas with decision-making authority. The voice-writer agent doesn’t just execute—it judges what constitutes “good enough,” when to iterate, when to escalate. The research-agent decides whether to use quick lookup or deep research mode based on context. These are roles with expertise boundaries.
Layer 2: Skills = WHAT can be done. Discrete competencies that agents invoke. analyze-syntax-patterns is a skill. validate-voice-output is a skill. perplexity-research is a skill. Any agent can use them—they’re not locked to a single persona.
Layer 3: MCP (Model Context Protocol) = Tools that enable capability execution. Perplexity API, Braintrust logging, file system access, research indexes. The infrastructure that skills operate on.
The rest of Claude Code’s features fill in the connective tissue (Skills, Hooks, Agents, Rules, Plugins, Commands, Output styles, MCP—SHARPCOM for short):
Hooks: Event-driven triggers (like “when validation score drops below 80%, iterate”)
Rules: Governance and constraints (like “never generate voice content from main agent context—always delegate”)
Plugins: Capability packages that can be added or removed
Commands: User-invokable shortcuts (/write-as, /clone-voice)
Output Styles: Presentation layer adaptation (blog vs LinkedIn vs newsletter)
The architecture separates concerns so you can recombine them. Need a new workflow? Compose existing skills differently. Need a new capability? Add a skill without rebuilding agents. Need a new tool integration? Add it via MCP without touching the skill layer.
This is the same pattern SBO uses for human organizations. And it works for the same reason.
Why Composability Wins (When Speed Matters)
What both patterns enable:
Dynamic team formation. In SBO, you don’t restructure the org chart when requirements shift—you recombine capabilities. In SHARPCOM, you don’t rewrite agents—you invoke different skills.
Clear growth pathways. In SBO, people see how acquiring a skill expands their available roles. In agent systems, adding a skill expands what every agent can do—skills aren’t locked inside agents, they’re shared infrastructure.
Faster response to change. When a new requirement emerges, you’re asking “do we have this capability?” not “do we have this exact role configured this exact way?” The first question has more possible answers.
The agility improvements from SBO research? Conservative. When you see capabilities clearly and compose them flexibly, you’re not starting from scratch. You’re remixing instead of rebuilding.
The velocity advantage compounds. Monolithic systems—org structures or AI architectures—rebuild more surface area when requirements change. Composable systems swap components.
The Governance Layer (Why Rules Matter)
Here’s where both get tricky: separation of concerns only works with clear governance about recombination.
In SBO:
What skill level is required for what decision authority?
How do capabilities need to combine for this work context?
What are the boundaries between roles?
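The first of those questions can be made mechanical. A hedged sketch, with levels and decision names invented for illustration (no SBO framework prescribes these): map proficiency to an ordinal scale, then express authority floors as data a rule engine can check.

```python
# Ordinal proficiency scale so governance rules can be checked
# mechanically rather than renegotiated per decision.
LEVELS = {"low": 1, "medium": 2, "high": 3}

# Hypothetical rule set: each decision requires a minimum skill level.
AUTHORITY_FLOOR = {
    "approve budget": ("budget modeling", "high"),
    "set messaging": ("content strategy", "medium"),
}

def may_decide(person_skills: dict, decision: str) -> bool:
    """Return True if the person's skill level meets the decision's floor."""
    skill, floor = AUTHORITY_FLOOR[decision]
    held = person_skills.get(skill)  # None if the skill is absent
    return held is not None and LEVELS[held] >= LEVELS[floor]

alex = {"budget modeling": "medium", "content strategy": "high"}
approve = may_decide(alex, "approve budget")   # medium < high floor
message = may_decide(alex, "set messaging")    # high >= medium floor
```

The point of the data-driven shape: changing a boundary between roles is an edit to `AUTHORITY_FLOOR`, not a renegotiation of every composition that touches it.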
In SHARPCOM:
Never generate voice content from main agent—delegate to voice-writer
Research goes through research-agent, not ad-hoc calls
Validation thresholds determine iteration vs. escalation
Without governance, composability becomes chaos. You get agents calling tools directly instead of using skills. You get skills duplicated across agents instead of shared. You get the illusion of modularity with none of the benefits.
The rules are what make the separation meaningful. They’re the constraints that enable recombination.
What This Looks Like in Practice
Concrete example: I need a voice-matched LinkedIn post about a research finding.
Monolithic: Write it myself, check against my voice profile manually, revise until it feels right, post.
Composable: Invoke /write-as lr linkedin with a brief. The system:
Routes to voice-writer agent (Layer 1: role with decision authority)
Which loads voice-profiles/lr.yaml and invokes generate-in-voice skill (Layer 2: discrete capability)
Then invokes validate-voice-output skill to score the result (Layer 2: different capability, same agent)
Uses Perplexity API via skill script (or MCP) if research is needed (Layer 3: tool infrastructure)
Iterates if validation score < 80%
Saves with proper formatting
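The orchestration loop above can be sketched in a few lines. The skill and agent names come from this workflow; the Python function bodies, the toy scoring logic, and the three-iteration cap are illustrative assumptions, not the real implementation.

```python
def generate_in_voice(profile: str, brief: str, feedback: str = "") -> str:
    # Stand-in for the generate-in-voice skill (Layer 2).
    return f"[{profile}] {brief} {feedback}".strip()

def validate_voice_output(draft: str) -> int:
    # Stand-in for the validate-voice-output skill (Layer 2).
    # Toy scorer: rewards drafts that incorporated revision feedback.
    return 90 if "revised" in draft else 70

def voice_writer(profile: str, brief: str, threshold: int = 80) -> str:
    # The agent (Layer 1) orchestrates: generate, score, iterate.
    draft = generate_in_voice(profile, brief)
    for _ in range(3):  # bounded iteration instead of an endless loop
        if validate_voice_output(draft) >= threshold:
            return draft
        draft = generate_in_voice(profile, brief, feedback="revised")
    return draft  # max iterations hit: escalate with best effort

post = voice_writer("lr", "research finding, LinkedIn format")
```

Notice that the threshold lives in the agent, the scoring lives in a skill, and neither knows about the other's internals. That is what makes the next paragraph's swap (LinkedIn to newsletter) a parameter change.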
Each piece is doing one thing. The agent orchestrates. The skills execute. The tools enable. The rules govern.
And here’s what matters: tomorrow, when I need a newsletter instead of a LinkedIn post, I don’t rebuild the system. I change the format parameter and invoke a different output style. The voice-matching capability is the same. The validation logic is the same. Only the presentation layer changes.
That’s composability. And it’s the same pattern that lets SBO organizations reassign people to new projects without restructuring the entire team.
The Skills Transfer Nobody Talks About
Here’s what’s interesting for anyone managing teams: the skills you use to design modular codebases transfer directly to designing modular organizations.
If you’ve refactored a monolithic function into composable utilities, you know the pattern:
Identify the discrete responsibilities
Extract them into single-purpose functions
Define clear interfaces between them
Add governance about how they combine
That’s SBO—applied to people and capabilities instead of code and functions.
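The four refactoring steps in toy form. The report-processing example and its function names are invented for illustration; the shape of the transformation is the point.

```python
# Before: one monolithic function bundling three responsibilities.
def process_report_monolithic(raw: str) -> str:
    rows = [r.strip() for r in raw.splitlines() if r.strip()]  # parse
    total = sum(int(r.split(",")[1]) for r in rows)            # aggregate
    return f"total: {total}"                                   # format

# Steps 1-3: identify, extract, and give each responsibility an interface.
def parse_rows(raw: str) -> list:
    return [r.strip() for r in raw.splitlines() if r.strip()]

def total_amount(rows: list) -> int:
    return sum(int(r.split(",")[1]) for r in rows)

def render(total: int) -> str:
    return f"total: {total}"

# Step 4, governance: one sanctioned composition instead of ad-hoc wiring.
def process_report(raw: str) -> str:
    return render(total_amount(parse_rows(raw)))

raw = "widgets,3\ngadgets,4\n"
before = process_report_monolithic(raw)
after = process_report(raw)
```

Same output, but now `total_amount` can be recombined into any other report without dragging parsing and formatting along, which is exactly the role-versus-capability distinction.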
And if you’re coming from org design without that refactoring experience, watching effective AI agent systems compose gives you a working model. SHARPCOM isn’t metaphor. It’s reference implementation.
The cross-domain synthesis isn’t accidental. Modularity principles are substrate-independent. They work on code. They work on organizations. They work on AI systems. The materials change. The architecture doesn’t.
Where Most Implementations Fail
Both SBO and composable agent systems fail in predictable ways:
Fake modularity. You say you’ve separated concerns, but boundaries are porous. Agents that should delegate to skills call tools directly. Roles that should invoke discrete capabilities bundle everything together. Vocabulary of composability without discipline.
Over-engineering. Seventeen micro-skills when three would do. Roles so narrow that recombination becomes its own complexity tax. Modularity should reduce cognitive load, not increase it.
No governance. Discrete capabilities, no rules for how they combine. Every composition is ad-hoc. Flexibility becomes chaos—no shared understanding of how things fit together.
The middle path—the one that actually works—is: separate the concerns that actually need to be separate, define clear interfaces, then add just enough governance to make recombination predictable.
Which brings me to what you can actually do with this.
First Steps (Wherever You’re Starting From)
If you’re designing teams (Team Topologies is a solid starting framework):
Map discrete capabilities your work requires. Not job titles—capabilities. What needs to be done? Start with five. You’ll find more, but five shows the pattern.
Audit who has what, at what level. High/medium/low fidelity is enough. The point is making skills visible, not building a database.
Try composing one team based on capabilities instead of roles. Pick a small project. Assign based on who has the skills the work needs, not who has the “right” job title. Watch what breaks. That’s your governance gap.
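Those three steps can be sketched as data plus one query. Names, levels, and the selection rule are invented for illustration; a real audit would be messier, but the composing-by-capability move is the same.

```python
# Hypothetical capability audit: person -> {skill: level}.
LEVELS = {"low": 1, "medium": 2, "high": 3}

audit = {
    "Sam":   {"stakeholder synthesis": "high", "budget modeling": "low"},
    "Priya": {"budget modeling": "high", "content strategy": "medium"},
    "Lee":   {"content strategy": "high"},
}

def staff(need: dict) -> dict:
    """For each required skill, pick the strongest person who meets the
    floor. Raises LookupError where nobody does: that's a governance gap."""
    team = {}
    for skill, floor in need.items():
        candidates = [
            (LEVELS[levels[skill]], person)
            for person, levels in audit.items()
            if LEVELS.get(levels.get(skill, ""), 0) >= LEVELS[floor]
        ]
        if not candidates:
            raise LookupError(f"governance gap: no one covers {skill!r}")
        team[skill] = max(candidates)[1]
    return team

team = staff({"stakeholder synthesis": "high", "budget modeling": "medium"})
```

The `LookupError` branch is the "watch what breaks" part made explicit: a failed composition tells you which capability, at which level, your audit is missing.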
If you’re building AI workflows:
Separate agent (WHO) from skills (WHAT) from tools (enablers). You probably have agents that do everything inline. Extract one discrete capability—validation, research, formatting—into a standalone skill multiple agents can invoke.
Define one rule for how capabilities combine. Something like “research always goes through the research skill, never direct API calls.” Enforce it. See if it reduces chaos or just adds ceremony.
Watch where you’re tempted to rebuild instead of recombine. That’s your signal something isn’t modular enough.
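A minimal sketch of both AI-workflow steps together, with every name invented for illustration: one capability extracted into a standalone skill, and one rule enforced by routing all invocations through a registry.

```python
def validate_output(text: str, min_len: int = 10) -> bool:
    # The extracted skill: one discrete check, shared by all agents
    # instead of duplicated inline in each of them.
    return len(text.strip()) >= min_len

# Capability registry: the only sanctioned path to a skill.
REGISTRY = {"validate-output": validate_output}

def invoke(skill: str, *args, **kwargs):
    # Governance rule enforced in code: capabilities combine only
    # through the registry; no agent calls skill logic directly.
    if skill not in REGISTRY:
        raise PermissionError(f"unregistered capability: {skill!r}")
    return REGISTRY[skill](*args, **kwargs)

# Two different "agents" invoking the same shared skill:
writer_ok = invoke("validate-output", "a long enough draft")
editor_ok = invoke("validate-output", "short")
```

If routing everything through `invoke` starts to feel like pure ceremony rather than a chaos reducer, that is useful evidence too: the boundary you drew may not be one that needed governing.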
If you’re doing both (and if you’re reading this, you probably are):
Notice the parallels. The refactoring you do in code teaches you how to refactor organizations. The team design principles you learn transfer to agent architecture. This isn’t metaphor—it’s the same pattern.
Organizations that figure this out first—see capabilities clearly, compose flexibly, govern lightly—move faster than competitors. AI systems built this way already do.
Composability is competitive advantage. Are you building for it or against it?
I want to hear what you’re seeing. Team design, agent architecture, or the intersection—what’s working? What’s breaking? What governance actually helps versus what just adds weight?
Implementations matter. We figure this out together.
