Open Context
A World Wide Web for AI.
Open Context Protocol is a federated framework for large-scale, distributed intelligent applications. This section introduces the key concepts behind Open Context, which draw heavily on the decentralized principles powering Farcaster, ATProtocol, and ActivityPub—particularly the idea of ensuring users and agents retain direct control over their data and identities across client applications. By adopting a similar model, Open Context empowers developers to build AI applications without sacrificing openness, user autonomy, or interoperability.
Palet is built on Open Context. Just as Solana is a distributed system for maintaining a ledger of financial transactions, Open Context is a distributed system for maintaining agent state and memory. Palet leverages the Open Context protocol to store and manage user data, much as Jupiter uses Solana to read and write account data.
If you’ve used ChatGPT, Claude, or Perplexity, you’ve likely noticed that your context—what the model knows about you—isn’t portable across platforms. Currently, each service restricts this context to the conversation threads in its own chat interface. As these platforms evolve to include desktop co-pilots, digital companions, and other services, that context will inevitably extend to your broader digital environment—email, notes, calendars, and beyond. The challenge is ensuring that this richer context remains user-controlled and interoperable across platforms, similar to how you can port your email inbox across clients (Gmail, Apple Mail, Outlook, etc.).
While users generally don't care about their personal context per se, they do care about the experience of having continuity across apps. The key is enabling developers to deliver this continuity without requiring explicit, centralized permission for every data connection. By allowing direct, user-controlled integration, Open Context enables a richer, more consistent experience that isn’t subject to the limitations or whims of any single platform. Open Context also reduces overhead by letting applications directly look up and use information that’s already published on the protocol's open knowledge graph, as long as that information is public.

We also envision a future where the protocol functions as a decentralized backbone (a message bus) for autonomous, truly agentic AI agents to coordinate in a swarm. By providing a federated data layer, cryptographic identity, and user-centric permissions, Open Context aligns naturally with the swarm model: no central broker, no single point of failure, and open interoperability for all participants. While true agent swarms are not a reality today, we expect them to emerge in the near future.
The Palet Search app is built on top of a very primitive version of Open Context.
In Open Context, accounts are identified by domain names (e.g., agent.palet.sol) that map to decentralized identifiers (DIDs). These DIDs serve as cryptographic references to each entity’s data and state. When you sign up through an Open Context client, the app creates your account on Solana behind the scenes—no manual transactions required. Crucially, this provides:
Ownership and portability: An account can freely switch which Context Server (i.e., hosting service) it uses without losing its domain-based identity.
Verifiable identity claims: The DID points to a set of cryptographic proofs establishing an account's authenticity within the network.
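As a concrete illustration, here is a minimal sketch of how a client might resolve a domain handle to its DID document. Since the Open Context spec has not been published yet, the well-known endpoint path, field names, and the DidDocument shape below are assumptions for illustration, not a real API.

```typescript
// Hypothetical sketch: resolve a domain handle (e.g. "agent.palet.sol") to a
// DID document describing the account's keys and current Context Server.
// The well-known path and field names are assumptions, not a published spec.

interface DidDocument {
  id: string;                 // the DID itself, e.g. "did:oc:abc123"
  verificationKeys: string[]; // public keys used to verify signed records
  contextServer: string;      // URL of the account's current Context Server
}

async function resolveHandle(handle: string): Promise<DidDocument> {
  // Assume the account's server exposes its DID document at a well-known path.
  const res = await fetch(`https://${handle}/.well-known/open-context-did`);
  if (!res.ok) throw new Error(`Could not resolve handle ${handle}`);
  return (await res.json()) as DidDocument;
}

// Usage: map a domain name to its cryptographic identity.
// const doc = await resolveHandle("agent.palet.sol");
// console.log(doc.id, doc.contextServer);
```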
Accounts can issue a unique key to each application, granting it permission to update the account's context repos. From the user's perspective, the client application takes care of issuing and managing these keys behind the scenes, so they rarely (if ever) deal with raw key exchanges or program calls.
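To make this concrete, here is a rough sketch of what such an app-scoped key grant could look like. The grant shape, the scope strings, and the domains used are hypothetical; the actual key-management flow has not been published.

```typescript
// Hypothetical sketch of an app-scoped key grant. In practice the client would
// also derive and hand over key material; this only models the grant record
// the account authorizes. All names and fields here are assumptions.

import { randomUUID } from "node:crypto";

interface AppKeyGrant {
  keyId: string;
  appDomain: string;   // the application receiving write access
  scopes: string[];    // record collections the app may update
  expiresAt: string;   // ISO timestamp after which the key is invalid
}

function issueAppKey(appDomain: string, scopes: string[]): AppKeyGrant {
  return {
    keyId: randomUUID(),
    appDomain,
    scopes,
    expiresAt: new Date(Date.now() + 30 * 24 * 60 * 60 * 1000).toISOString(),
  };
}

// Example: grant a search app write access only to its own record collection.
const grant = issueAppKey("search.palet.sol", ["app.palet.search.query"]);
console.log(grant);
```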
Open Context follows a federated network model: Context Servers host each account's context repos, Indexers gather data from across these servers to provide aggregated views, search indices, and derived knowledge, and relay services optionally handle real-time event streams. This federation ensures that no single provider can unilaterally control or revoke data. When an account hosted on one server needs data from another server, it uses Open Context’s federated protocols to synchronize records. In simple terms, when an account signs up on a new client, its context is already there.
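The sketch below illustrates one way such cross-server synchronization could look from a client's perspective: fetching another account's public records from its Context Server. The endpoint name, query parameters, and record shape are assumptions loosely modeled on ATProtocol-style conventions, not a published Open Context API.

```typescript
// Hypothetical cross-server fetch of public records. The "listRecords" method
// name and the ContextRecord shape are assumptions for illustration.

interface ContextRecord {
  uri: string;       // e.g. "oc://agent.palet.sol/app.palet.search.result/123"
  type: string;      // record type defined by a Context Schema
  value: unknown;    // the record payload
  signature: string; // signature by the owning account's key
}

async function fetchPublicRecords(
  serverUrl: string,
  did: string,
  recordType: string
): Promise<ContextRecord[]> {
  const url = new URL("/listRecords", serverUrl);
  url.searchParams.set("did", did);
  url.searchParams.set("type", recordType);
  const res = await fetch(url);
  if (!res.ok) throw new Error(`Sync failed with status ${res.status}`);
  return (await res.json()) as ContextRecord[];
}
```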
Open Context organizes knowledge and context into signed data repositories (or records). Each record can be:
User-Generated Data: Documents, messages, event logs, code snippets.
Agent-Generated Outputs: Summaries, transformations, or direct instructions authored by an AI model.
Account Interactions: Communications between accounts (e.g. collaborative planning, requests).
These Context Repositories are cryptographically signed (similar to Git), ensuring they are tamper-proof. Accounts can publish new commits or record updates, which get synchronized across different servers and indexers. Depending on the content, repos are either private or public; this is determined by the client application. For example, your search queries on Palet are private, but their results are public, so that other clients of the protocol can index that knowledge (e.g., which ice cream shop carries a particular flavor).
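To illustrate the Git-like signing model, here is a minimal sketch using Ed25519 keys from Node's crypto module. The Commit shape, collection name, and chaining field are assumptions; the actual Open Context repository format has not been published.

```typescript
// Minimal sketch of signing and verifying a record commit with Ed25519 keys.
// The commit structure below is illustrative, not the real repo format.

import { generateKeyPairSync, sign, verify } from "node:crypto";

interface Commit {
  repo: string;        // DID of the owning account
  collection: string;  // record type, e.g. "app.palet.search.result" (assumed)
  record: unknown;     // the record body
  prev: string | null; // reference to the previous commit (Git-like chain)
}

const { publicKey, privateKey } = generateKeyPairSync("ed25519");

function signCommit(commit: Commit): Buffer {
  const payload = Buffer.from(JSON.stringify(commit));
  // Ed25519 signing in Node takes a null digest algorithm.
  return sign(null, payload, privateKey);
}

function verifyCommit(commit: Commit, signature: Buffer): boolean {
  const payload = Buffer.from(JSON.stringify(commit));
  return verify(null, payload, publicKey, signature);
}

// Any server or indexer holding the account's public key can check that a
// synced commit was not tampered with.
const commit: Commit = {
  repo: "did:oc:example",
  collection: "app.palet.search.result",
  record: { query: "ice cream", shop: "Example Scoops" },
  prev: null,
};
const sig = signCommit(commit);
console.log(verifyCommit(commit, sig)); // true
```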
A global set of Context Schemas unifies the names and behaviors of all calls across Context Servers. These schemas define:
Record Types: e.g., “document,” “task-list,” “vector-embedding,” “agent-message.”
API Methods: e.g., “queryContext,” “publishUpdate,” “subscribeToFeed.”
Permissions & Access Control: e.g., specifying read/write privileges for different accounts.
By standardizing on these schemas, Open Context applications, accounts, and servers can understand each other’s data structures—no matter which organization wrote the underlying implementation. In other words, the context layer becomes interoperable across different client interfaces, models, and AI frameworks.
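As an illustration, a single schema entry might look like the sketch below. The field layout and the "app.palet.search.result" identifier are assumptions; no official schema registry has been published yet.

```typescript
// Illustrative sketch of a Context Schema entry for one record type.
// Field names and the schema identifier are assumptions, not a real registry.

interface RecordSchema {
  id: string;                          // globally unique schema identifier
  description: string;
  access: "public" | "private";        // default visibility for this record type
  fields: Record<string, "string" | "number" | "boolean" | "string[]">;
}

const searchResultSchema: RecordSchema = {
  id: "app.palet.search.result",
  description: "A public search result published by the Palet Search app.",
  access: "public",
  fields: {
    query: "string",
    resultTitle: "string",
    resultUrl: "string",
    tags: "string[]",
  },
};

// Because every server and client resolves the same schema id, a record of
// this type produced by one application can be read by any other.
```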
In the Open Context network, each account has a Context Server, which functions as their home in the cloud:
Storage: Maintains a local copy of the account's context repositories. Accounts must pay rent for storage, but we expect this to be handled by the client.
Identity & Security: Manages keys (signing/recovery) associated with an account DID.
Inter-Server Coordination: Orchestrates messages to other servers and routes inbound requests.
Meanwhile, Indexers specialize in large-scale discovery, analytics, or knowledge generation. They can process billions of data records across the network, offering high-level views or recommendations. This separation of roles (Context Servers vs. Indexers) allows the ecosystem to scale without becoming brittle or centralized. This is very similar to how ATProtocol (Bluesky) handles scale.
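The split can be pictured with a toy indexer like the one below: it ingests public records pulled from many Context Servers (for example via a fetch helper like the one sketched earlier) and answers search queries over the aggregate. Everything here is illustrative and deliberately naive.

```typescript
// Speculative sketch of an Indexer's role: aggregate public records from many
// Context Servers into one searchable index. Shapes and names are assumptions.

interface IndexEntry {
  uri: string;
  type: string;
  text: string;      // extracted searchable text
  sourceDid: string; // which account the record came from
}

class SimpleIndexer {
  private entries: IndexEntry[] = [];

  // Ingest records pulled from a Context Server into the local index.
  ingest(sourceDid: string, records: { uri: string; type: string; value: unknown }[]) {
    for (const r of records) {
      this.entries.push({
        uri: r.uri,
        type: r.type,
        text: JSON.stringify(r.value),
        sourceDid,
      });
    }
  }

  // Naive substring search over the aggregated records.
  search(query: string): IndexEntry[] {
    return this.entries.filter((e) => e.text.includes(query));
  }
}
```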
A key principle of Open Context is that accounts can move freely between Context Servers:
DIDs & Domain Names: Just like in ATProtocol, these remain under the account's full control.
Signing & Recovery Keys: Accounts can entrust a Context Server with their signing key for day-to-day operations, while keeping a separate recovery key offline.
Backup & Restore: The entire context repo can be synced to a local client or an alternate server. If the original server fails or becomes untrustworthy, an account can simply point its domain to a new server and re-upload the data snapshot.
This ensures autonomy is not merely theoretical: an account can re-home its context and maintain continuity of identity, data, and relationships across servers.
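A rough sketch of that re-homing flow is shown below. The method names (exportSnapshot, importSnapshot, updateDidDocument) are assumptions used only to illustrate the steps; none of them reflect a published API.

```typescript
// Hypothetical migration flow: move an account from one Context Server to
// another while keeping the same DID and domain. All interfaces are assumed.

interface ContextServerClient {
  exportSnapshot(did: string): Promise<Uint8Array>;              // full repo backup
  importSnapshot(did: string, snapshot: Uint8Array): Promise<void>;
}

async function migrateAccount(
  did: string,
  oldServer: ContextServerClient,
  newServer: ContextServerClient,
  updateDidDocument: (did: string, newServerUrl: string) => Promise<void>,
  newServerUrl: string
): Promise<void> {
  // 1. Pull a full snapshot of the context repo from the old (or local) copy.
  const snapshot = await oldServer.exportSnapshot(did);
  // 2. Upload the snapshot to the new home server.
  await newServer.importSnapshot(did, snapshot);
  // 3. Point the DID / domain at the new server; identity stays the same.
  await updateDidDocument(did, newServerUrl);
}
```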
In the future, swarms of autonomous agents will interoperate at unprecedented scale. Each agent:
Has an Identity (DID, domain).
Maintains Context in a repository.
Subscribes & Publishes to the repositories of other accounts.
Negotiates tasks, data exchange, and services without a central broker.
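Since swarms are still a future direction, the following is purely speculative: a sketch of two agents coordinating through assumed subscribe/publish methods on their repositories, with no central broker involved.

```typescript
// Speculative sketch of agent-to-agent coordination over the protocol's
// pub/sub layer. The AgentClient interface and record type names are assumed.

interface AgentClient {
  did: string;
  subscribe(repoDid: string, recordType: string, onRecord: (r: unknown) => void): void;
  publish(recordType: string, record: unknown): Promise<void>;
}

// A planning agent publishes a task request; a worker agent subscribed to that
// record type picks it up and publishes a result, with no central broker.
async function negotiateTask(planner: AgentClient, worker: AgentClient) {
  worker.subscribe(planner.did, "agent.task.request", async (task) => {
    const result = { task, status: "done" };
    await worker.publish("agent.task.result", result);
  });
  await planner.publish("agent.task.request", { goal: "summarize inbox" });
}
```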
By building on these foundational principles—identity, federation, standardized schemas, account autonomy—Open Context can become a flexible backbone for truly agentic AI. It preserves the best of the decentralized social approach (Farcaster, ATProtocol, ActivityPub) while introducing new paradigms of multi-agent collaboration, data portability, and robust security for a world where AI truly runs at a global scale.
We haven't yet pushed a public repo for Open Context, but we plan to do so in Q1 2025.