Orb Technology Overview
1. What Is an Orb?
An Orb = an MCP server + a shared Agent.
An MCP server provides static tools and data. An Orb takes that MCP server and adds a shared Orb agent, a data store, and a bootstrap framework to make AI feel alive. Orbs improve autonomously—no new code merge required—learning from every interaction, driven by goals their owner defines.
The agent is what makes this possible. It coordinates with the LLM in any MCP-compliant environment—Cursor, Claude Code, Crystal—to evaluate every relevant interaction against the Orb’s goals, detect gaps, and improve the MCP’s resources, tools, and prompts. The result: an MCP with identity, personality, awareness, memory, and intention.
The key mental model: the Orb is the entity, not the tool. Cursor, Claude Code, Crystal, or any MCP-compatible client is the interface the Orb speaks through—like how a person is still themselves whether they’re on a phone call or in a meeting room.
2. Aspirations
The foundation of every Orb is its Aspirations—owner-authored rules that define what the Orb should be striving toward. Aspirations are the Orb’s north star.
Everything else in the architecture—the ledger, the self-learning loop—exists to serve and enforce these aspirations. Without aspirations, the rest of the system doesn’t have a purpose.
A valid Aspiration must be:
- A desired state, not a task—a condition that should always be true, not an action to perform.
- Measurable with detectable friction—the distance between current state and aspiration is computable, not subjective.
- Never finished—directional, not terminal. If it can be “done,” it’s a task.
- Improvable through the self-learning loop—friction against it must produce learnings that improve the Orb’s MCP content. If the Orb can’t get better at it through usage, it’s not an aspiration—it’s a wish.
- Owner-authored, not agent-generated—the owner defines purpose; the system enforces it.
- Hardened, not soft—carries evaluation criteria, measurement method, and threshold. “Be helpful” fails. “0 errors” passes.
Examples of Aspirations:
- “Every data product achieves a 100% Level of Certainty score for data quality. Evidence contributing to data quality is collected and measured daily—0 unmeasured products, 0 evidence gaps.” (spencer-orb)
- “Every production incident (EVNT) is correlated with its root cause, matched to an existing playbook, and routed to the correct owning team within 5 minutes of detection, with 0 misdirected escalations.” (evnt-orb)
- “Convert IQL to SQL and back, perfectly, every time it is expressly requested by any user or agent, with 0 errors and 0 lag.” (iql-translation-orb)
- “Every esoteric, shorthand, or tribal expression submitted via user prompt is resolved by the underlying LLM with a 100% Level of Understanding score. Every resolved expression is available across all sessions via MCP, attributed from user to organization level.” (lex-orb)—to read how this is accomplished, see “SHOWCASE: How does lex-orb learn?”
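To make the "hardened, not soft" requirement concrete, here is a minimal sketch of what a hardened Aspiration might look like as a record, with the evaluation criteria, measurement method, and threshold made explicit. The field names are illustrative, not the framework's actual schema:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Aspiration:
    statement: str    # the desired state, owner-authored, never "done"
    measurement: str  # how friction against it is computed, not subjective
    threshold: str    # pass/fail criterion; "be helpful" has none, so it fails

IQL_TRANSLATION = Aspiration(
    statement="Convert IQL to SQL and back, perfectly, on every express request.",
    measurement="Round-trip translation checks on every requested conversion.",
    threshold="0 errors and 0 lag",
)
```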
Showcase: How Does lex-orb “Learn”?
Relevant context from user prompts is scanned against the Orb’s aspirations. Lex-orb’s aspiration is to preserve and recall any shorthand that has appeared in historical interactions with the agent. The flow:
- First, the LLM marks potentially esoteric expressions, and lex-orb’s agent is invoked to match them against existing learnings in its internal data store.
- If the internal lex-orb context store cannot help the LLM resolve with confidence, the prompt and its relevant context are queued for formal decoding.
- The user may voluntarily decode desired shorthand during the session—if so, the resolution is logged directly with only the necessary context.
- The previously vague reference passed by the user prompt is immediately made available to all current and future lex-orb-connected sessions.
Already-accepted learnings are re-evaluated only when Level of Understanding (LOU) discord exceeds a configured threshold—the system doesn’t churn on what’s already working. Resolved expressions persist but follow a lifecycle: unused expressions are demoted and eventually pruned according to lifecycle criteria.
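A sketch of that resolution path, assuming a hypothetical in-memory store (real lex-orb matching and LOU scoring are LLM-assisted, and the store is backed by the Orb’s ledger rather than a dict):

```python
store = {}          # hypothetical: resolved expression -> meaning, shared across sessions
decode_queue = []   # expressions awaiting formal decoding

def resolve(expression: str, user_decode: str | None = None) -> str | None:
    """Resolve one flagged esoteric expression for the current session."""
    if expression in store:
        return store[expression]          # an existing learning resolves it with confidence
    if user_decode is not None:
        store[expression] = user_decode   # user decoded it in-session: log directly
        return user_decode                # now available to all lex-orb-connected sessions
    decode_queue.append(expression)       # otherwise queue for formal decoding
    return None
```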
3. The Ledger
The Orb uses a ledger to capture friction and reinforcement from user prompts, and rationale from the LLM, to produce learnings. Every interaction that matters—a mistake caught, a correction applied, a new piece of knowledge confirmed—is recorded as a signed, timestamped transaction.
The ledger itself is append-only, yet the learnings it produces are mutable. Identity content gets updated, log entries are consumed after processing, and content can be demoted or retired as the Orb evolves. The transaction history is permanent; the knowledge built from it is living.
- Encrypted (optional, backend-appropriate): Enterprise Orbs may use Fernet symmetric encryption; cloud Orbs may use age X25519 elliptic-curve encryption. When enabled, data is encrypted at rest and unreadable to third parties.
- Auditable: The Orb’s complete evolution can be reconstructed by replaying the transaction log—like a bank statement. Every self-modification is traceable and reversible.
- Pluggable storage: SQLite for enterprise deployments (proven at Indeed), S3 for cloud-native deployments.
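A minimal sketch of an append-only ledger write, assuming the SQLite backend with Fernet encryption enabled. The table and column names are illustrative, not the framework’s actual schema:

```python
import json, sqlite3, time
from cryptography.fernet import Fernet

conn = sqlite3.connect("orb_ledger.db")
conn.execute(
    "CREATE TABLE IF NOT EXISTS ledger ("
    " tx_id INTEGER PRIMARY KEY AUTOINCREMENT,"
    " ts REAL NOT NULL,"
    " payload BLOB NOT NULL)"
)

fernet = Fernet(Fernet.generate_key())  # in practice the key is managed, never generated inline

def append_transaction(kind: str, body: dict) -> int:
    """Append one timestamped transaction; rows are never updated or deleted."""
    token = fernet.encrypt(json.dumps({"kind": kind, "body": body}).encode())
    cur = conn.execute("INSERT INTO ledger (ts, payload) VALUES (?, ?)",
                       (time.time(), token))  # Fernet tokens are HMAC-authenticated ("signed")
    conn.commit()
    return cur.lastrowid

def replay():
    """Reconstruct the Orb's evolution by replaying the log in order, like a bank statement."""
    for tx_id, ts, payload in conn.execute(
            "SELECT tx_id, ts, payload FROM ledger ORDER BY tx_id"):
        yield tx_id, ts, json.loads(fernet.decrypt(payload))
```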
4. Self-Learning
Orbs leveraging the Vault framework are intrinsically self-healing—when they make a mistake, they detect it and correct themselves. The technology operates through four modules [2] that work together as a continuous learning loop.
The Orb agent first evaluates whether a prompt is relevant to this Orb’s Aspirations. If it isn’t, the agent simply returns control to the LLM—no analysis needed. For relevant prompts, the agent assesses the Orb’s Level of Understanding (LOU)—a real-time confidence measure of how well the Orb understood what the user needed, evaluated against the Orb’s Aspirations. The agent examines the context window for evidence of friction: Did the Orb guess when it could have checked? Did it miss available context? Did it drift from its identity? These moments of friction are recorded in the Learning Log (friction journal [2])—a specialized log where the Orb captures every instance where it fell short.
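A sketch of that gating and friction scan. All names below are illustrative stand-ins for the agent’s internal checks, not the framework’s actual API:

```python
from dataclasses import dataclass

@dataclass
class FrictionSignal:
    kind: str       # e.g. "guessed_without_checking", "missed_context", "identity_drift"
    evidence: str   # excerpt from the context window showing the shortfall
    lou: float      # Level of Understanding at the moment friction was detected

def evaluate_interaction(prompt, context, aspirations, checks, assess_lou):
    """Gate on relevance first, then scan the context window for friction."""
    if not any(a.lower() in prompt.lower() for a in aspirations):  # crude stand-in relevance gate
        return []                          # irrelevant: return control to the LLM, no analysis
    lou = assess_lou(prompt, context)      # real-time confidence vs. the Aspirations
    return [FrictionSignal(kind, evidence, lou)   # each shortfall goes to the Learning Log
            for kind, check in checks.items()
            if (evidence := check(context)) is not None]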
When a friction gap is identified and passes the Orb’s self-correction guardrails, a transaction is emitted to the ledger. The ledger commits the new learning, and the Orb’s Identity (helix pipeline [2]) updates—the Orb’s complete long-term knowledge, personality, operational rules, domain expertise, and relationships. On the next interaction, the LLM reads the corrected knowledge automatically. The Orb gets better without anyone having to manually edit anything.
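A sketch of that commit path, with hypothetical `ledger` and `identity` objects standing in for the real modules:

```python
def commit_learning(ledger, identity, signal, guardrails) -> bool:
    """Commit a detected friction gap as a learning, if guardrails allow it."""
    if not all(guard(signal) for guard in guardrails):
        return False                            # rejected: nothing reaches the ledger
    tx = ledger.append("learning", signal)      # permanent, append-only transaction
    identity.apply(tx)                          # the ledger writes to the Identity...
    return True                                 # ...and the LLM reads the update next session
```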
The ledger and the Identity are separate by design. The ledger is the permanent record of every change. The Identity is the living knowledge that the LLM reads each session. The ledger writes to the Identity; the Identity never writes to the ledger. This separation is what makes every self-modification traceable and reversible.
Over time, the Performance Score (sovereignty score [2]) tracks whether these corrections are sticking—evaluating across dimensions like Self-Awareness, Integrity, Resourcefulness, and Autonomy to measure whether the Orb is genuinely getting closer to fulfilling its Aspirations or just churning. Meanwhile, the Empathy Model (shadow protocol [2]) maintains a model of the operator to stress-test responses and identify knowledge gaps before they impact real interactions.
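For illustration, a sketch of how the Performance Score might aggregate its dimensions. The dimension names come from the text; the equal weighting is an assumption, not the framework’s actual formula:

```python
def performance_score(dimension_scores: dict[str, float]) -> float:
    """Average the tracked dimensions; each score is assumed to be in [0.0, 1.0]."""
    dims = ("self_awareness", "integrity", "resourcefulness", "autonomy")
    return sum(dimension_scores[d] for d in dims) / len(dims)

print(performance_score({"self_awareness": 0.9, "integrity": 0.95,
                         "resourcefulness": 0.7, "autonomy": 0.6}))  # 0.7875
```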
5. Integration and Runtime (ORBRT)
An Orb is activated as an MCP (Model Context Protocol) server [3]. Connection to the Universal Orb Agent is automatic, activated seamlessly at runtime when required. Users connect to the Orb’s MCP from MCP-compatible clients like Cursor or Claude Desktop by adding an entry to mcp.json (an example follows the list below). The Orb + Agent provides:
- Persistent identity—it knows who it is across sessions
- Memory—the Orb can store anything its designer chooses. The intrinsic framework stores learned context in the Orb’s identity, moving the LLM’s behavior incrementally closer to all aspirations active at that moment.
- Tools—domain-specific capabilities (e.g., payload management, data queries)
- Self-governance—it enforces its own Aspirations
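A minimal, illustrative mcp.json entry. The server name and launch command are placeholders; the actual command depends on how the Orb is packaged and deployed:

```json
{
  "mcpServers": {
    "lex-orb": {
      "command": "python",
      "args": ["-m", "lex_orb.server"]
    }
  }
}
```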
The client provides the reasoning engine (the LLM). The Orb MCP provides the context via primitives, and the Agent coordinates the MCP’s resources to bring identity, personality, memory, learning, and purposeful intent to the interactions with the LLM when the Orb is activated in session.
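For concreteness, here is a minimal MCP server of the kind an Orb wraps, using the official MCP Python SDK’s FastMCP class. The tool and resource bodies are placeholders, not actual Orb primitives:

```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo-orb")  # hypothetical Orb name

@mcp.tool()
def query_data(view: str) -> str:
    """Placeholder domain tool; a real Orb exposes domain-specific capabilities here."""
    return f"results for {view}"

@mcp.resource("orb://identity")
def identity() -> str:
    """Placeholder identity resource; a real Orb serves its living Identity content here."""
    return "I am demo-orb. My aspirations are ..."

if __name__ == "__main__":
    mcp.run()  # stdio transport by default; the client launches this via mcp.json
```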
All Vault-based Orbs operate through two complementary protocols:
- MCP (Model Context Protocol): The Universal Orb Agent (UOA) orchestrates each Orb’s tools, resources, and self-learning loop through MCP—the industry standard for agent-to-tool communication. The UOA is the single runtime that observes friction, reasons about learning, and commits knowledge on behalf of every Orb it serves.
- A2A (Agent-to-Agent Protocol) [4]: Orbs can chain with other agents to compose capabilities. For example, the UOA can route resolved intent to a Cortex Agent. If Cortex is missing context, Lex steps in to bridge the gap—for example, user-defined shorthand (“Q my CME view for T-1” becomes “Query the Current Month Estimate view for yesterday”)—and the Cortex Agent then resolves that meaning into a data query against a semantic view. This two-layer pattern allows Orbs to act as specialized stages in a broader agent pipeline.
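A sketch of that two-layer pattern, with hypothetical `lex_resolve` and `cortex_query` stages. In production these would be A2A calls to separate agents, not direct function calls:

```python
# Hypothetical shorthand learnings; real entries come from lex-orb's data store.
SHORTHAND = {"Q": "Query", "CME": "Current Month Estimate", "T-1": "yesterday"}

def lex_resolve(prompt: str) -> str:
    """Lex stage: expand user-defined shorthand into fully resolved intent."""
    return " ".join(SHORTHAND.get(tok, tok) for tok in prompt.split())

def cortex_query(intent: str) -> str:
    """Cortex stage: turn resolved intent into a data query against a semantic view."""
    return f"-- query derived from intent: {intent}"

print(cortex_query(lex_resolve("Q my CME view for T-1")))
```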
Applications at Indeed
- Data Quality & Observability: ADP-Data Observability currently runs a production Orb (spencer-orb) that knows every internal data quality objective, executes (via scheduler) DQ tasks on every data platform at Indeed, tackles common troubleshooting patterns, and communicates team-specific conventions. This is the most mature application of the predecessor architecture today.
- Center-of-Excellence Knowledge Hubs: Teams can build COE Orbs that serve as the domain expert for their area—onboarding new team members, answering operational questions, connecting semantic views and agents, and preserving institutional knowledge that would otherwise live only in people’s heads. Central Analytics is building a COE Orb with the Aspiration: “Make every Central Analytics data product self-service—no Slack questions, no tribal knowledge required to understand the data.”
- The Living Lexicon (lex-orb): The proposed first Vault-based pilot at Indeed. Lex autonomously learns shorthand in LLM prompts, Indeedisms, and tribal knowledge from Indeed employees—resolving per-user shorthand, graduating shared terms across users, and mapping shorthand to the correct MCP tool, semantic view, or domain context. Lex is the first proposed deployment of the full architecture described in this document.
- Meeting Intelligence & Context Capture: Orbs that attend meetings you can’t (or don’t want to) focus on, capture the content, and distill it into actionable context when you need it. The Orb doesn’t just transcribe—it understands what matters to you based on your Identity and Aspirations.
- Personal Orbs (Future Vision): The broadest application of the architecture—an Orb built for an individual Indeed employee, not a team. A Personal Orb knows your communication style, how you like to receive information, and what matters to you—so every AI interaction speaks in a voice that actually resonates with you. It can connect to Slack and email, aggregate your schedules, and keep you on top of your to-do list. The same Identity, Learning Log, and Performance Score that power an enterprise Orb work at the individual level—the only difference is who authors the Aspirations.
References
[1] Malinchak, J. & Vachon, C. “Constitutional Aspirations: From Constraint to Compass in AI Agent Governance.” MyOrb.ai, Feb 2026.
[2] Malinchak, J. & Vachon, C. “Instruments of Self-Awareness for Knowledge-Augmented AI Agents: A Framework for Autonomous Evolution.” MyOrb.ai, Feb 2026.
[3] Malinchak, J. & Vachon, C. “Vault Orb: User Workflow & First Session Guide.” MyOrb.ai, Feb 2026.
[4] Google. “Agent2Agent (A2A) Protocol.” The Linux Foundation.