Enabling Fast APIs for Agents: How We Built an MCP Server for Protobuf.ai

Protobuf.ai needed AI agents to lint schemas, detect breaking changes, and enforce governance—without manual CLI workflows. We built an MCP server that makes protobuf tooling agent-native.

The Client Challenge

Protobuf.ai came to us with a clear problem: the Protocol Buffer ecosystem was stuck in a CLI-first world. Developers using gRPC and protobufs had powerful tools for linting, breaking change detection, and validation—but every workflow required manual command-line invocations.

Meanwhile, their users were increasingly working with AI assistants. Developers weren't running lint commands manually—they were asking Claude or Cursor to "check this schema for issues." The AI would hallucinate lint rules or miss breaking changes entirely, because it had no programmatic access to protobuf tooling.

The Gap: Advice vs. Action

Ask any AI assistant "why should I use protobuf?" and you'll get a reasonable answer: performance, type safety, schema evolution. Ask it to help you adopt protobufs and it'll suggest a sensible sequence: start with one service, define schemas, generate clients.

But here's what the AI can't do without tooling access:

  • Validate that your schema actually compiles
  • Check if your field naming follows conventions
  • Detect if your changes will break existing clients
  • Generate working SDK code in your target language
  • Enforce your organization's specific lint rules

Without MCP access, the AI is just talking. It gives advice, but can't take action. Hand it a schema and ask "is this correct?" — it'll say something that sounds right. But it's guessing. It has no way to actually run the validation.

This is the gap Protobuf.ai asked us to close.

The ask was straightforward: build an MCP server that exposes protobuf governance capabilities to AI agents. Make linting, breaking change detection, protovalidate rules, and SDK generation available through the Model Context Protocol.

Why MCP for Protocol Buffers?

The existing protobuf tooling ecosystem is mature. Established tools provide excellent linting, schema registries handle versioning, and protovalidate enforces runtime constraints. But all of these tools assume a human running CLI commands.

That assumption is breaking down. When an AI agent helps a developer:

  • Design a new gRPC service
  • Add fields to an existing message
  • Review a PR that modifies schemas
  • Generate client SDKs in a new language

...the agent needs to know whether those changes are valid. Not "looks valid to me"—actually valid according to lint rules, actually compatible with existing clients, actually enforceable at runtime.

MCP bridges this gap. Instead of the AI guessing at protobuf conventions, it can call tools that definitively answer: "Is this schema valid? What rules does it violate? Will this change break existing clients?"

What We Built

The Protobuf.ai MCP server exposes four core capability sets:

1. Intelligent Linting

Beyond basic syntax checking, the MCP server enforces Google's Protocol Buffer style guide and configurable rule sets. AI agents can request lint checks against a chosen rule set, with optional auto-fixes:

// Agent requests lint check
{
  "tool": "protobuf_lint",
  "input": {
    "schema": "message UserProfile { ... }",
    "ruleSet": "DEFAULT",
    "autoFix": true
  }
}

// MCP server returns actionable results
{
  "issues": [
    {
      "rule": "ENUM_ZERO_VALUE_SUFFIX",
      "severity": "warning",
      "message": "Enum zero value should end with _UNSPECIFIED",
      "suggestion": "USER_STATUS_UNSPECIFIED",
      "fixable": true
    }
  ],
  "fixedSchema": "..."
}

The agent doesn't guess at naming conventions—it gets definitive answers with automated fixes. Rules include MESSAGE_PASCAL_CASE, FIELD_LOWER_SNAKE_CASE, ENUM_VALUE_PREFIX, and the full set familiar to anyone who's configured protobuf linting.
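To make the shape of these checks concrete, here is a minimal sketch of two of the rules in plain Python. This is illustrative only, not the production implementation; the regexes and helper names are assumptions.

```python
import re

def to_snake(name):
    """camelCase / PascalCase -> lower_snake_case."""
    return re.sub(r"(?<!^)(?=[A-Z])", "_", name).lower()

def lint_field_names(fields):
    """FIELD_LOWER_SNAKE_CASE: field names must be lower_snake_case."""
    return [{"rule": "FIELD_LOWER_SNAKE_CASE",
             "message": f"Field '{name}' should be lower_snake_case",
             "suggestion": to_snake(name)}
            for name in fields
            if not re.fullmatch(r"[a-z][a-z0-9_]*", name)]

def lint_enum_zero_value(enum_name, zero_value):
    """ENUM_ZERO_VALUE_SUFFIX: the zero value must end with _UNSPECIFIED."""
    if zero_value.endswith("_UNSPECIFIED"):
        return []
    return [{"rule": "ENUM_ZERO_VALUE_SUFFIX",
             "message": "Enum zero value should end with _UNSPECIFIED",
             "suggestion": to_snake(enum_name).upper() + "_UNSPECIFIED"}]

# One camelCase field and a zero value missing the suffix
print(lint_field_names(["user_id", "displayName"]))
print(lint_enum_zero_value("UserStatus", "USER_STATUS_ACTIVE"))
```

Each rule returns structured issues with a machine-applicable suggestion, which is what lets the server offer `autoFix` rather than just warnings.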

2. Breaking Change Detection

This is where the MCP server provides value that no amount of AI training data can replicate. When an agent proposes schema changes, it can check whether those changes will break existing clients:

// Agent checks proposed change
{
  "tool": "protobuf_breaking_check",
  "input": {
    "current": "message Order { int64 id = 1; }",
    "proposed": "message Order { string id = 1; }"
  }
}

// MCP server detects the break
{
  "breakingChanges": [{
    "type": "type_change",
    "element": "Order.id",
    "severity": "critical",
    "description": "Field type changed from int64 to string",
    "impact": "100% - Complete incompatibility"
  }],
  "evolutionPlan": {
    "steps": ["Add deprecated field", "Dual support period", "Migrate clients"],
    "risk": "high"
  }
}

The server doesn't just flag the problem—it provides an evolution plan. For teams with hundreds of microservices and thousands of schema consumers, this is the difference between a safe deployment and an outage.
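The core of a check like this is a field-by-field comparison keyed on tag numbers. The sketch below shows the idea in Python; the dict-based schema shape and field names are assumptions for illustration, not the server's real data model.

```python
def detect_breaking_changes(current, proposed):
    """Compare two message versions given as {tag: (name, type)} maps."""
    breaks = []
    for tag, (name, ftype) in current.items():
        if tag not in proposed:
            # Removing a tagged field breaks every existing reader
            breaks.append({"type": "field_removed", "element": name,
                           "severity": "critical"})
            continue
        _, new_type = proposed[tag]
        if new_type != ftype:
            # Reusing a tag with a different wire type breaks decoding
            breaks.append({"type": "type_change", "element": name,
                           "severity": "critical",
                           "description": f"Field type changed from {ftype} to {new_type}"})
    return breaks

# The example from above: Order.id changes int64 -> string
print(detect_breaking_changes({1: ("id", "int64")}, {1: ("id", "string")}))
```

New fields in `proposed` that are absent from `current` are deliberately ignored: adding a field with a fresh tag is the canonical backward-compatible change.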

3. Protovalidate Integration

Schema structure is one thing. Runtime validation is another. The MCP server understands protovalidate constraints and can generate them from natural language:

// Agent requests validation rules
{
  "tool": "protobuf_validate",
  "input": {
    "message": "CreateUserRequest",
    "constraints": "email must be valid, age must be 18-120, username 3-50 chars"
  }
}

// MCP server generates validation annotations
{
  "validatedSchema": "message CreateUserRequest { ... }",
  "validationRules": {
    "email": {"format": "email", "required": true},
    "age": {"minimum": 18, "maximum": 120},
    "username": {"minLength": 3, "maxLength": 50}
  }
}

// Expanded, the returned schema carries standard protovalidate options:
message CreateUserRequest {
  string email = 1 [(buf.validate.field).string.email = true];
  int32 age = 2 [(buf.validate.field).int32 = {gte: 18, lte: 120}];
  string username = 3 [(buf.validate.field).string = {min_len: 3, max_len: 50}];
}

Developers describe constraints in plain English. The agent translates them to protovalidate syntax. No more looking up annotation formats.
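The last step of that pipeline, turning the structured rules into protovalidate field options, can be sketched as a simple mapping. Only the three rule shapes from the example are handled here; this is an assumption-laden sketch, not the server's translator.

```python
def to_protovalidate_options(rules):
    """Map structured validation rules to protovalidate option strings."""
    opts = {}
    for field, spec in rules.items():
        if spec.get("format") == "email":
            opts[field] = "[(buf.validate.field).string.email = true]"
        elif "minimum" in spec:
            opts[field] = (f"[(buf.validate.field).int32 = "
                           f"{{gte: {spec['minimum']}, lte: {spec['maximum']}}}]")
        elif "minLength" in spec:
            opts[field] = (f"[(buf.validate.field).string = "
                           f"{{min_len: {spec['minLength']}, max_len: {spec['maxLength']}}}]")
    return opts

rules = {"email": {"format": "email", "required": True},
         "age": {"minimum": 18, "maximum": 120},
         "username": {"minLength": 3, "maxLength": 50}}
print(to_protovalidate_options(rules)["age"])
# [(buf.validate.field).int32 = {gte: 18, lte: 120}]
```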

4. SDK Generation

The final piece: generating client libraries. The MCP server integrates with protobuf.ai's SDK generation pipeline, letting agents produce type-safe clients on demand:

// Agent generates TypeScript client
{
  "tool": "protobuf_generate_sdk",
  "input": {
    "schema": "payment.proto",
    "language": "typescript",
    "framework": "connect-es"
  }
}

// Returns ready-to-use client code
{
  "files": {
    "payment_pb.ts": "...",
    "payment_connect.ts": "..."
  },
  "installCommand": "npm install @connectrpc/connect"
}
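On the agent side, consuming a response like this is mechanical: write each entry in the `files` map to disk, then surface the install command to the developer. A minimal Python sketch, with the response shape mirroring the example and the output directory an assumption:

```python
import os

def write_sdk(response, out_dir="generated"):
    """Write each generated SDK file to out_dir; return the file names."""
    os.makedirs(out_dir, exist_ok=True)
    for name, source in response["files"].items():
        with open(os.path.join(out_dir, name), "w") as f:
            f.write(source)
    return sorted(response["files"])

result = {"files": {"payment_pb.ts": "// generated types",
                    "payment_connect.ts": "// generated client"},
          "installCommand": "npm install @connectrpc/connect"}
print(write_sdk(result))
# ['payment_connect.ts', 'payment_pb.ts']
```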

The Governance Layer

Beyond individual tools, the MCP server enables something more powerful: AI-assisted governance at scale.

Enterprise protobuf deployments have hundreds of services, thousands of schemas, and dozens of teams. Keeping everyone aligned on conventions, preventing breaking changes, and enforcing validation rules is a full-time job for platform teams.

With the MCP server, governance becomes proactive:

Pre-commit validation: AI assistants check schemas before code is even pushed. Issues are caught in the editor, not in CI.

PR review automation: When a PR modifies .proto files, agents can automatically check lint rules, breaking changes, and validation coverage.

Onboarding acceleration: New team members ask their AI assistant how to structure schemas. The assistant consults the MCP server and returns answers consistent with organizational standards.

Cross-team consistency: The same rules apply whether you're on the payments team or the notifications team. The MCP server is the single source of truth.

The Developer Journey to Protobuf

Adopting Protocol Buffers isn't a weekend project. Teams considering the move from JSON/REST to gRPC and protobufs face a steep learning curve: new syntax, new tooling, new mental models for schema evolution. The traditional path looks like this:

  1. Discovery: Developer reads documentation, watches conference talks, skims tutorials
  2. Experimentation: Write first .proto file, figure out protoc flags, debug compilation errors
  3. Integration: Generate code, wire up services, discover runtime issues
  4. Governance: Realize you need lint rules, breaking change detection, validation—start over with proper tooling

This journey takes weeks. Developers bounce between documentation sites, Stack Overflow, and trial-and-error. Many teams abandon the effort midway, deciding protobufs are "too complex" when really the tooling was just inaccessible.

With an MCP server, the journey transforms. Here's what it looks like:

Developer: "Help me define a protobuf schema for a user service with email, name, and age fields."

AI + MCP: Generates schema, calls lint tool, returns validated .proto file with correct naming conventions.

Developer: "Add a phone number field."

AI + MCP: Calls breaking change detector, confirms safe addition, updates schema, regenerates SDK.

Developer: "Actually, change the age field from int to string."

AI + MCP: "That's a breaking change—existing clients expect an integer. Here's a migration plan: add a new field, deprecate the old one, migrate over two releases."

The developer never opened documentation. Never debugged protoc flags. Never guessed at compatibility rules. The AI handled it—not by knowing protobuf conventions from training data, but by calling tools that definitively validate every change.

What took weeks now takes an afternoon. This is the real value proposition: not just making existing protobuf users more productive, but making protobuf adoption accessible to teams who would otherwise never try.

The Competitive Angle

The protobuf tooling market is established. Well-funded incumbents have built excellent CLI tools for linting, breaking change detection, and schema management. But those tools are built for humans running terminal commands.

Protobuf.ai's MCP server doesn't replace those tools. It makes them accessible to AI agents. When developers increasingly work through AI assistants, that accessibility becomes a competitive advantage.

This is the pattern we see across developer tools. The underlying capabilities matter—but so does how AI agents can use them. MCP is the bridge.

Technical Implementation

For teams considering similar integrations, here's what the MCP server architecture looks like:

  • Stateless tools: Each MCP tool (lint, breaking check, validate, generate) is stateless. No session management required.
  • Schema parsing: We use protobuf.js for parsing with custom visitors for rule checking. Full protoc integration for SDK generation.
  • Edge deployment: The server runs on Cloudflare Workers for global low-latency access. AI assistants get sub-50ms responses.
  • Resource exposure: Schema registry contents are exposed as MCP resources. Agents can browse available schemas and their versions.

The implementation follows the patterns we documented in MCP Protocol Explained—tools for actions, resources for data, clean JSON-RPC interfaces.
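The statelessness point is worth making concrete. Because every tool is a pure function of its input, dispatch reduces to a lookup table, and any worker instance can serve any request. The skeleton below is a simplified stand-in (a real server speaks JSON-RPC 2.0 through an MCP SDK); the tool names match this post's examples, and the handler bodies are stubs.

```python
import json

# Each tool is a pure function of its input: no sessions, no shared state.
TOOLS = {
    "protobuf_lint": lambda inp: {"issues": []},
    "protobuf_breaking_check": lambda inp: {"breakingChanges": []},
}

def handle(request_json):
    """Parse a tool request and dispatch it to the matching handler."""
    req = json.loads(request_json)
    tool = TOOLS.get(req.get("tool"))
    if tool is None:
        return {"error": f"unknown tool: {req.get('tool')}"}
    return tool(req.get("input", {}))

print(handle('{"tool": "protobuf_lint", "input": {"schema": "..."}}'))
# {'issues': []}
```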

Make Your Developer Tools Agent-Accessible

We help companies build MCP servers that let AI agents use their products. Whether you're building for protobufs, databases, or any developer tool—the pattern is the same.

Schedule a Demo

Key Takeaways

Building this MCP server reinforced several principles:

  1. Governance tools need agent access. Linting and validation are exactly the kind of "is this correct?" questions AI assistants get asked constantly. If they can't call your tools, they'll guess.
  2. Breaking change detection is high-value. No amount of training data teaches an AI model the specific compatibility rules of your schema format. Programmatic access is the only reliable path.
  3. Generation workflows benefit most. When agents can generate SDKs, validation rules, and boilerplate, they become dramatically more useful. The MCP server turns "here's how you might do it" into "here's working code."
  4. Enterprise governance scales with AI. Platform teams can't review every schema change. AI assistants with MCP access can—applying consistent rules across the organization.

What's Next

Protobuf.ai continues to expand MCP capabilities. On the roadmap:

  • Semantic search: Find schemas by meaning, not just by name. "Show me all schemas related to payments" works even if no schema contains the word "payment."
  • Impact analysis: Before changing a schema, see exactly which services and clients depend on it. AI agents can explain the blast radius.
  • Migration assistance: When breaking changes are necessary, generate migration code and client updates automatically.

For developer tools companies watching the AI transition, this is the playbook: take your CLI capabilities and make them callable by AI agents. The tools that do this become the ones AI recommends.

About DevExp.ai

DevExp.ai helps companies make their products discoverable to AI agents. We build the infrastructure layer between your content and the agent web—llms.txt generation, MCP servers, agent-readable APIs, and trust verification.

Get in Touch →