Orb Technology Overview

Justin Malinchak and Charles Vachon
MyOrb.ai
March 2026

1. What Is an Orb?

An Orb = an MCP server + a shared Agent.

An MCP server provides static tools and data. An Orb takes that MCP server and adds a shared Orb agent, a data store, and a bootstrap framework to make AI feel alive. Orbs improve autonomously—without requiring a new code merge—learning from every interaction, driven by goals their owner defines.

The agent is what makes this possible. It coordinates with the LLM in any MCP-compliant environment—Cursor, Claude Code, Crystal—to evaluate every relevant interaction against the Orb’s goals, detect gaps, and improve the MCP’s resources, tools, and prompts. The result: an MCP with identity, personality, awareness, memory, and intention.
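The composition described above can be sketched in a few lines. All class, field, and method names here are hypothetical illustrations, not the MyOrb.ai API:

```python
# Hypothetical sketch of Orb = MCP server + shared agent + data store.
from dataclasses import dataclass, field

@dataclass
class MCPServer:
    """Static layer: the resources, tools, and prompts exposed over MCP."""
    resources: dict = field(default_factory=dict)
    tools: dict = field(default_factory=dict)
    prompts: dict = field(default_factory=dict)

@dataclass
class OrbAgent:
    """Shared agent that evaluates interactions against the Orb's goals."""
    aspirations: list = field(default_factory=list)

    def review(self, interaction: str) -> list:
        # Toy gap detection: flag aspirations the interaction never touches.
        return [a for a in self.aspirations if a.lower() not in interaction.lower()]

@dataclass
class Orb:
    """The entity: an MCP server plus the agent and store that animate it."""
    server: MCPServer
    agent: OrbAgent
    store: dict = field(default_factory=dict)
```

Here `review` is only the hook where gap detection would run; in the real system that evaluation is coordinated with the LLM rather than done by string matching.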

The key mental model: the Orb is the entity, not the tool. Cursor, Claude Code, Crystal, or any MCP-compatible client is the interface the Orb speaks through—like how a person is still themselves whether they’re on a phone call or in a meeting room.

2. Aspirations

The foundation of every Orb is its Aspirations—owner-authored rules that define what the Orb should be striving toward. Aspirations are the Orb’s north star.

Everything else in the architecture—the ledger, the self-learning loop—exists to serve and enforce these aspirations. Without aspirations, the rest of the system doesn’t have a purpose.

A valid Aspiration must be:

  1. A desired state, not a task—a condition that should always be true, not an action to perform.
  2. Measurable with detectable friction—the distance between current state and aspiration is computable, not subjective.
  3. Never finished—directional, not terminal. If it can be “done,” it’s a task.
  4. Improvable through the self-learning loop—friction against it must produce learnings that improve the Orb’s MCP content. If the Orb can’t get better at it through usage, it’s not an aspiration—it’s a wish.
  5. Owner-authored, not agent-generated—the owner defines purpose; the system enforces it.
  6. Hardened, not soft—carries evaluation criteria, measurement method, and threshold. “Be helpful” fails. “0 errors” passes.

An example Aspiration in practice is walked through in the showcase below.
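To make the "hardened" requirement concrete, an Aspiration can be sketched as a small record that carries its metric and threshold. The field names and the sample metric are illustrative assumptions:

```python
# Illustrative sketch of a hardened Aspiration record (not the real schema).
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Aspiration:
    statement: str              # a desired state, not a task
    metric: str                 # how friction against it is measured
    threshold: Optional[float]  # pass/fail boundary for that metric

    def is_hardened(self) -> bool:
        # "Be helpful" fails (no metric, no threshold); "0 errors" passes.
        return bool(self.metric) and self.threshold is not None

soft = Aspiration("Be helpful", metric="", threshold=None)
hard = Aspiration("Preserve and recall all previously used shorthand",
                  metric="unresolved_shorthand_per_session", threshold=0.0)
```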

Showcase: How Does lex-orb “Learn”?

Incoming user prompts are scanned for relevance to the Orb's aspirations. Lex-orb's aspiration is to preserve and recall any shorthand used in previous interactions with the agent. The learning flow proceeds as follows:

  1. The LLM marks potentially esoteric expressions, and lex-orb's agent is invoked to match them against existing learnings in its internal data store.
  2. If the internal context store cannot help the LLM resolve an expression with confidence, the prompt and its relevant context are queued for formal decoding.
  3. The user may voluntarily decode desired shorthand during the session—if so, the resolution is logged directly with only the necessary context.
  4. Once resolved, the previously vague expression is immediately available to all current and future lex-orb-connected sessions.
  5. Already-accepted learnings are re-evaluated only when Level-of-Understanding (LOU) discord exceeds a configured threshold—the system doesn't churn on what's already working.
  6. Resolved expressions persist but follow a lifecycle: unused expressions are demoted and eventually pruned according to lifecycle criteria.
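The resolution flow above can be sketched as follows; the store layout and the confidence floor are assumptions, not lex-orb's actual implementation:

```python
# Sketch of lex-orb's shorthand resolution (store shape and threshold assumed).
from typing import Optional

CONFIDENCE_FLOOR = 0.8  # hypothetical bar for "resolved with confidence"

def resolve_shorthand(expr: str, store: dict, decode_queue: list) -> Optional[str]:
    """Return a known expansion, or queue the expression for formal decoding."""
    entry = store.get(expr)
    if entry and entry["confidence"] >= CONFIDENCE_FLOOR:
        entry["uses"] += 1  # usage keeps a learning from being demoted and pruned
        return entry["expansion"]
    decode_queue.append(expr)  # unresolved: queue for formal decoding
    return None

def log_user_decode(expr: str, expansion: str, store: dict) -> None:
    """The user voluntarily decoded the shorthand: log it with minimal context."""
    store[expr] = {"expansion": expansion, "confidence": 1.0, "uses": 0}
```

Once `log_user_decode` runs, every later `resolve_shorthand` call against the shared store succeeds, mirroring how a resolution becomes available to all current and future sessions.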

3. The Ledger

The Orb uses a ledger to capture friction and reinforcement from user prompts, and rationale from the LLM, to produce learnings. Every interaction that matters—a mistake caught, a correction applied, a new piece of knowledge confirmed—is recorded as a signed, timestamped transaction.

The ledger itself is append-only, yet the learnings it produces are mutable. Identity content gets updated, log entries are consumed after processing, and content can be demoted or retired as the Orb evolves. The transaction history is permanent; the knowledge built from it is living.
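An append-only ledger of timestamped transactions can be sketched as below, with a hash chain standing in for the signing scheme (which this overview does not specify):

```python
# Sketch of an append-only learning ledger; hash-chaining stands in for signing.
import hashlib
import json
import time

class Ledger:
    def __init__(self):
        self._entries = []  # append-only: entries are never mutated or removed

    def append(self, kind: str, payload: dict) -> dict:
        """Record one timestamped transaction, chained to its predecessor."""
        tx = {
            "index": len(self._entries),
            "timestamp": time.time(),
            "kind": kind,  # e.g. "friction", "reinforcement", "learning"
            "payload": payload,
            "prev": self._entries[-1]["digest"] if self._entries else None,
        }
        tx["digest"] = hashlib.sha256(
            json.dumps(tx, sort_keys=True).encode()
        ).hexdigest()
        self._entries.append(tx)
        return tx

    def history(self) -> tuple:
        return tuple(self._entries)  # read-only view of the permanent record
```

Because each digest covers the previous entry's digest, tampering with any historical transaction breaks the chain, which is what makes the record permanent even while the knowledge built from it stays mutable.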

4. Self-Learning

Orbs leveraging the Vault framework are intrinsically self-healing—when they make a mistake, they detect it and correct themselves. The technology operates through four modules [2] that work together as a continuous learning loop.

The Orb agent first evaluates whether a prompt is relevant to this Orb’s Aspirations. If it isn’t, the agent simply returns control to the LLM—no analysis needed. For relevant prompts, the agent assesses the Orb’s Level of Understanding (LOU)—a real-time confidence measure of how well the Orb understood what the user needed, evaluated against the Orb’s Aspirations. The agent examines the context window for evidence of friction: Did the Orb guess when it could have checked? Did it miss available context? Did it drift from its identity? These moments of friction are recorded in the Learning Log (friction journal [2])—a specialized log where the Orb captures every instance where it fell short.
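The relevance gate and friction scan can be sketched as a toy loop; the signal list, the string matching, and the LOU formula are illustrative assumptions:

```python
# Toy sketch of the relevance gate and friction scan (signals and formula assumed).
FRICTION_SIGNALS = ("guessed", "missed context", "identity drift")

def handle_prompt(prompt: str, aspirations: list, learning_log: list) -> str:
    # 1. Relevance gate: irrelevant prompts return control to the LLM untouched.
    if not any(a.lower() in prompt.lower() for a in aspirations):
        return "pass-through"
    # 2. Scan the context for friction and derive a Level of Understanding (LOU).
    frictions = [s for s in FRICTION_SIGNALS if s in prompt.lower()]
    lou = max(0.0, 1.0 - 0.3 * len(frictions))  # toy confidence measure
    # 3. Record each shortfall in the Learning Log (the friction journal).
    for signal in frictions:
        learning_log.append({"signal": signal, "lou": lou})
    return "evaluated"
```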

When a friction gap is identified and passes the Orb’s self-correction guardrails, a transaction is emitted to the ledger. The ledger commits the new learning, and the Orb’s Identity (helix pipeline [2]) updates—the Orb’s complete long-term knowledge, personality, operational rules, domain expertise, and relationships. On the next interaction, the LLM reads the corrected knowledge automatically. The Orb gets better without anyone having to manually edit anything.

The ledger and the Identity are separate by design. The ledger is the permanent record of every change. The Identity is the living knowledge that the LLM reads each session. The ledger writes to the Identity; the Identity never writes to the ledger. This separation is what makes every self-modification traceable and reversible.
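One consequence of this one-way flow is that the Identity can always be rebuilt by replaying the ledger, and stopping the replay early reverses later self-modifications. A minimal sketch, assuming a simple key/value Identity (not the real schema):

```python
# Sketch of rebuilding the Identity from the ledger (key/value shape assumed).
def apply_learning(identity: dict, tx: dict) -> dict:
    """Apply one ledger transaction to an Identity snapshot, non-destructively."""
    updated = dict(identity)
    updated[tx["payload"]["key"]] = tx["payload"]["value"]
    return updated

def rebuild_identity(ledger_entries: list, up_to: int = None) -> dict:
    """Replay the permanent record; a partial replay reverses later changes."""
    identity: dict = {}
    for tx in ledger_entries[:up_to]:
        identity = apply_learning(identity, tx)
    return identity
```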

Over time, the Performance Score (sovereignty score [2]) tracks whether these corrections are sticking—evaluating across dimensions like Self-Awareness, Integrity, Resourcefulness, and Autonomy to measure whether the Orb is genuinely getting closer to fulfilling its Aspirations or just churning. Meanwhile, the Empathy Model (shadow protocol [2]) maintains a model of the operator to stress-test responses and identify knowledge gaps before they impact real interactions.
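A Performance Score aggregated across those dimensions might look like the following; the equal weighting and the [0, 1] rating scale are assumptions:

```python
# Toy Performance Score across the four named dimensions (weighting assumed).
DIMENSIONS = ("self_awareness", "integrity", "resourcefulness", "autonomy")

def performance_score(ratings: dict) -> float:
    """Mean of per-dimension ratings in [0, 1]; unrated dimensions count as 0."""
    return sum(ratings.get(d, 0.0) for d in DIMENSIONS) / len(DIMENSIONS)
```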

5. Integration and Runtime (ORBRT)

An Orb is activated as an MCP (Model Context Protocol) server [3]. Connection to the Universal Orb Agent is automatic, activated seamlessly when required at runtime. Users connect to the Orb's MCP from MCP-compatible clients such as Cursor or Claude Desktop by registering it in mcp.json.
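For reference, an mcp.json entry follows the standard MCP client configuration shape; the server name and launch command here are hypothetical:

```json
{
  "mcpServers": {
    "lex-orb": {
      "command": "npx",
      "args": ["-y", "lex-orb-mcp-server"]
    }
  }
}
```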

The client provides the reasoning engine (the LLM). The Orb's MCP server provides context via MCP primitives (resources, tools, and prompts), and the Agent coordinates those resources to bring identity, personality, memory, learning, and purposeful intent to interactions with the LLM whenever the Orb is active in a session.

All Vault-based Orbs operate through two complementary protocols: MCP, which connects the Orb to its client [3], and the Agent2Agent (A2A) protocol for agent-to-agent communication [4].

Applications at Indeed

References

  1. Malinchak, J. & Vachon, C. “Constitutional Aspirations: From Constraint to Compass in AI Agent Governance.” MyOrb.ai, Feb 2026.
  2. Malinchak, J. & Vachon, C. “Instruments of Self-Awareness for Knowledge-Augmented AI Agents: A Framework for Autonomous Evolution.” MyOrb.ai, Feb 2026.
  3. Malinchak, J. & Vachon, C. “Vault Orb: User Workflow & First Session Guide.” MyOrb.ai, Feb 2026.
  4. Google. “Agent2Agent (A2A) Protocol.” The Linux Foundation.