You may be using agents to write code, make API calls, and build applications, but most protocol documentation is still written for people who browse websites and dig through code repositories. We've made some improvements that change how you can use LLMs and agents to work in the Atmosphere.

Lexicon Garden is a feature-rich Lexicon schema platform that helps you understand and interact with lexicons through both your browser and your AI tooling. In this post, I'll introduce two new features: an llms.txt endpoint for AI-friendly documentation and the Model Context Protocol (MCP) endpoint, which lets AI assistants explore and use ATProtocol methods.

The llms.txt Standard

The llmstxt.org proposal sets a standard for websites to offer machine-readable documentation that works well for large language models. Lexicon Garden uses this standard in two main ways.

Site-Level Documentation

The main endpoint at GET /llms.txt returns a detailed markdown document with the following:

  • Overview of ATProtocol lexicons and their types (records, queries, procedures, subscriptions)

  • Complete XRPC API reference with parameters, response formats, and example requests

  • MCP tool documentation

  • A dynamic list of all authoritatively-hosted lexicons with links to their individual llms.txt files

curl https://garden.lexicon.garden/llms.txt

This provides agents with all the information they need about Lexicon Garden, so they do not have to read HTML or scrape web pages.

Per-Lexicon Documentation

Each lexicon schema has its own llms.txt endpoint at GET /lexicon/{did}/{nsid}/llms.txt. This endpoint offers:

  • Schema definition with all type information

  • Property tables for records, queries, and procedures

  • Validation constraints (minLength, maxLength, enum values, etc.)

  • Community-contributed examples

  • Raw JSON schema for programmatic consumption

If an AI agent needs to build a valid app.bsky.feed.post record, this endpoint provides all the structured information required. There is no need to guess or make up field names.
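Given the path shape above, fetching a lexicon's documentation is one URL away. A minimal sketch, assuming the garden.lexicon.garden host from the earlier curl example (the function names here are illustrative, not part of any SDK):

```python
from urllib.request import urlopen

BASE = "https://garden.lexicon.garden"  # host from the post's curl example

def lexicon_llms_txt_url(did: str, nsid: str) -> str:
    """Build the per-lexicon docs URL: GET /lexicon/{did}/{nsid}/llms.txt."""
    return f"{BASE}/lexicon/{did}/{nsid}/llms.txt"

def fetch_lexicon_docs(did: str, nsid: str, timeout: float = 10.0) -> str:
    """Fetch the markdown documentation for one lexicon (network call)."""
    with urlopen(lexicon_llms_txt_url(did, nsid), timeout=timeout) as resp:
        return resp.read().decode("utf-8")
```

An agent could feed the returned markdown straight into its context before attempting to construct a record.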

Model Context Protocol Integration

The MCP endpoint at POST /mcp uses JSON-RPC 2.0. This lets AI agents interact with Lexicon Garden through a standard tool interface. Three tools are available.
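All three tools are invoked through the same JSON-RPC 2.0 tools/call envelope. A small helper for producing it might look like this (the helper itself is illustrative, not part of any Lexicon Garden SDK):

```python
import json
from itertools import count

_ids = count(1)  # monotonically increasing JSON-RPC request ids

def tool_call(name: str, arguments: dict) -> str:
    """Serialize an MCP tools/call request as a JSON-RPC 2.0 envelope."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": next(_ids),
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    })
```

An agent-side wrapper would POST the resulting body to /mcp with a Content-Type of application/json.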

describe_lexicon

This read-only tool retrieves detailed schema information for any ATProtocol lexicon:

{
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "describe_lexicon",
        "arguments": {
            "lexicon": "app.bsky.feed.post"
        }
    }
}

The tool accepts an NSID and an optional identity. If no identity is given, it resolves the authoritative source via lexicon resolution.

The response includes the full schema definition, the authority DID and CID, and up to 20 real-world examples. Each response also wraps schema content in randomized safety tags to help prevent prompt injection attacks from malicious schema content.
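The post doesn't pin down the exact tag format, but the usual pattern is to wrap untrusted content in delimiters with an unguessable random component, roughly:

```python
import secrets

def wrap_untrusted(content: str) -> str:
    """Wrap untrusted schema content in delimiter tags with a random suffix.

    Because the tag name is unpredictable, malicious text inside the schema
    cannot close the wrapper and masquerade as trusted instructions.
    """
    tag = f"untrusted-{secrets.token_hex(8)}"
    return f"<{tag}>\n{content}\n</{tag}>"
```

The model is then told that anything between the matching tags is data, not instructions; since the tag name changes per response, the schema text can't forge a closing tag.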

create_record_cid

Agents working with ATProtocol records often need to generate content identifiers:

{
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "create_record_cid",
        "arguments": {
            "record": {
                "$type": "app.bsky.feed.post",
                "text": "Hello from MCP!",
                "createdAt": "2025-01-11T00:00:00.000Z"
            }
        }
    }
}

This process uses DAG-CBOR encoding and SHA-256 hashing to create CIDs that follow ATProtocol rules.
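Producing the DAG-CBOR bytes themselves requires a codec library, but the layering on top of those bytes is plain multiformats plumbing. A sketch of that final step, assuming CIDv1 with the dag-cbor codec (0x71) and a sha2-256 multihash:

```python
import base64
import hashlib

def cid_for_dag_cbor(encoded: bytes) -> str:
    """Build a CIDv1 string for already-DAG-CBOR-encoded record bytes.

    Layout: 0x01 (CIDv1) + 0x71 (dag-cbor codec) +
            0x12 0x20 (sha2-256 multihash, 32-byte digest) + digest,
    then base32-lower multibase with a 'b' prefix.
    """
    digest = hashlib.sha256(encoded).digest()
    raw = bytes([0x01, 0x71, 0x12, 0x20]) + digest
    return "b" + base64.b32encode(raw).decode("ascii").lower().rstrip("=")
```

Every CID produced this way starts with "bafyrei", which is why that prefix is so familiar from ATProtocol record URIs.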

invoke_xrpc

The most advanced tool lets AI agents make authenticated XRPC calls on behalf of users:

{
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "invoke_xrpc",
        "arguments": {
            "method": "com.atproto.repo.listRecords",
            "params": {
                "repo": "did:plc:example",
                "collection": "app.bsky.feed.post",
                "limit": 10
            }
        }
    }
}

On each call, this tool:

  • Retrieves the lexicon schema from its authoritative source

  • Validates all inputs against the schema before making the call

  • Uses GET for queries and POST for procedures

  • Supports service proxying via the atproto_proxy parameter, routing requests through the user's PDS to an AppView or other services
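The query-versus-procedure routing rule fits in a few lines. A sketch, assuming the lexicon's primary definition is available as a plain dict:

```python
def http_method_for(lexicon_def: dict) -> str:
    """Map a lexicon's primary definition type to an HTTP method.

    Per the XRPC convention: 'query' definitions are read-only and use GET;
    'procedure' definitions mutate state and use POST.
    """
    kind = lexicon_def.get("type")
    if kind == "query":
        return "GET"
    if kind == "procedure":
        return "POST"
    raise ValueError(f"not an XRPC method definition: {kind!r}")
```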

If validation fails, the error response includes the expected schema structure, required fields, and what was actually provided. This gives agents the context they need to self-correct.
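A simplified sketch of that kind of self-correction payload, checking only required and unknown fields (the error shape here is illustrative; the real server decides its own format):

```python
def validation_error(schema_props: dict, required: list, provided: dict):
    """Build a self-correction payload when a record fails a shallow check.

    Returns None when the input passes this (deliberately minimal) check.
    """
    missing = [f for f in required if f not in provided]
    unknown = [f for f in provided if f not in schema_props and f != "$type"]
    if not missing and not unknown:
        return None
    return {
        "error": "InvalidRecord",
        "expected": {"properties": sorted(schema_props), "required": required},
        "provided": sorted(provided),
        "missing": missing,
        "unknown": unknown,
    }
```

An agent that receives the "missing" and "unknown" lists alongside the expected schema can usually repair its record on the next attempt without human help.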

The invoke_xrpc_guide Prompt

MCP prompts offer context-sensitive guidance. The invoke_xrpc_guide prompt helps agents understand how to properly use specific XRPC methods:

{
    "jsonrpc": "2.0",
    "id": 1,
    "method": "prompts/get",
    "params": {
        "name": "invoke_xrpc_guide",
        "arguments": {
            "method": "com.atproto.repo.createRecord",
            "atproto_proxy": "true"
        }
    }
}

The prompt generates guidance tailored to its arguments: method-specific advice about required fields and examples, service-proxying documentation when requested, validation context, and best practices such as calling describe_lexicon first and always including $type fields.

ATProtocol OAuth: A Simple and Secure Proxy Approach

To make secure interactions easier for AI agents, Lexicon Garden manages the complex parts of ATProtocol OAuth authentication behind the scenes. Instead of requiring every agent to handle technical details, Lexicon Garden acts as a secure, easy-to-use proxy for authentication.

The Problem

ATProtocol OAuth requires DPoP (Demonstration of Proof-of-Possession), which binds tokens to specific cryptographic keys. This creates real challenges for MCP clients:

  • DPoP keys must be stored securely and used for every request

  • Token refresh requires the same DPoP key

  • Service proxying adds another layer of complexity

Most OAuth libraries do not support DPoP by default. Expecting every AI agent implementation to handle this correctly can lead to security issues and frustrated developers.
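To see why this is a burden, consider what every authenticated request needs. A real DPoP proof (RFC 9449) is an ES256-signed JWT; the Python standard library can't sign ES256, so this sketch assembles only the header and claims, with a placeholder where the signature belongs:

```python
import base64
import json
import time
import uuid

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode("ascii")

def dpop_proof_unsigned(method: str, url: str, public_jwk: dict) -> str:
    """Assemble the header and claims of a DPoP proof JWT (RFC 9449).

    The third segment is a placeholder: a real proof must be signed with
    the private key matching public_jwk (ES256), per request — exactly the
    burden the proxy approach removes from clients.
    """
    header = {"typ": "dpop+jwt", "alg": "ES256", "jwk": public_jwk}
    claims = {
        "jti": str(uuid.uuid4()),   # unique per request
        "htm": method,              # HTTP method the proof covers
        "htu": url,                 # target URI
        "iat": int(time.time()),
    }
    return (f"{b64url(json.dumps(header).encode())}."
            f"{b64url(json.dumps(claims).encode())}.SIGNATURE")
```

A fresh proof like this must accompany every token request and every resource request, which is why "just use an OAuth library" doesn't work here.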

The Approach

Lexicon Garden handles these technical parts of authentication server-side: it manages the DPoP keys and tokens ATProtocol requires, and takes care of login, authorization, and signed requests. AI agents and developers can use familiar, standard OAuth methods without learning new security mechanisms, and Lexicon Garden translates those simple requests into the full protocol exchange in the background.

Security and Trust

The tradeoff is trust. Clients must trust Lexicon Garden with their ATProtocol access tokens. This approach works well for first-party integrations but may not fit every use case.

Several measures protect against misuse:

  • Client-DID binding — once a client authenticates as a specific DID, it cannot authenticate as a different identity

  • Resource validation — tokens are bound to specific resource URIs (RFC 8707)

  • Short-lived tokens — access tokens expire quickly; refresh tokens enable renewal
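The first measure, client-DID binding, amounts to pinning a client to the first identity it authenticates as. A minimal sketch with an in-memory session store (all names here are illustrative):

```python
class DidBindingError(Exception):
    pass

# Session store mapping client_id -> first-authenticated DID (illustrative;
# a real deployment would persist this alongside the OAuth session).
_bound_dids: dict = {}

def bind_or_verify(client_id: str, did: str) -> None:
    """Pin a client to the first DID it authenticates as; reject any switch."""
    bound = _bound_dids.setdefault(client_id, did)
    if bound != did:
        raise DidBindingError(f"client {client_id} is bound to {bound}, not {did}")
```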

An AI agent interacting with Lexicon Garden might:

  • Read /llms.txt to understand available APIs

  • Call describe_lexicon to understand a specific schema

  • Request the invoke_xrpc_guide prompt for method-specific guidance

  • Authenticate through the OAuth flow

  • Use invoke_xrpc to make authenticated calls

Each layer gives the context needed for the next. Agents can interact with ATProtocol services without hardcoded knowledge of every schema. They can discover and learn as they go.

This is what AI-native protocol tooling looks like. The documentation is made for machines, the tools give structured feedback, and the authentication flows don't sacrifice security for convenience. The protocol stays open and federated, and the tooling makes it accessible.