Multi-Client Compatibility: Claude, Cursor, Copilot—One MCP Server

All Tools Directory Team
January 20, 2025 · 12 min read
MCP · Compatibility · Multi-Client · Integration

The Goal & Mental Model

You want one MCP server that multiple clients can use without forking code. Treat your server like a capabilities provider and each client as a transport + UX adapter. Standardize contracts at the edge, then shim per-client differences with adapters and config, not with custom tool logic.

Key principles

  • Stable contracts (tool names, arg/response schemas).
  • Negotiation (announce versions/features, accept fallbacks).
  • Thin adapters for transport, auth, and streaming quirks.
  • Comprehensive tests that mimic each target client.

Transport & Protocol Differences (What Actually Varies)

Connection Types

  • HTTP/JSON for simple requests
  • WebSocket for real-time streaming
  • SSE for server-sent events

Discovery Methods

  • Static config files
  • /tools endpoint for dynamic discovery
  • Environment variables for configuration

Authentication

  • API key header names vary
  • OIDC bearer token support
  • mTLS for service-to-service

Streaming Styles

  • Chunked JSON lines vs framed messages
  • One-shot replies for simple responses
  • Error handling varies by client

Server Stance

Support both streaming and one-shot, accept Bearer or x-api-key, and document the exact content-type and envelope you emit.
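
A minimal sketch of that stance, assuming a FastAPI server like the discovery endpoint shown later; the /call/files.read route, the stream query flag, and the ACME_MCP_TOKEN variable are illustrative choices, not something any particular client mandates:

import json
import os

from fastapi import Depends, FastAPI, Header, HTTPException
from fastapi.responses import StreamingResponse

app = FastAPI()

def check_auth(
    authorization: str | None = Header(default=None),
    x_api_key: str | None = Header(default=None),
) -> None:
    """Accept either `Authorization: Bearer <token>` or `x-api-key: <token>`."""
    expected = os.environ.get("ACME_MCP_TOKEN", "")
    bearer_ok = authorization == f"Bearer {expected}"
    key_ok = x_api_key == expected
    if not (bearer_ok or key_ok):
        raise HTTPException(status_code=401, detail="missing or invalid credentials")

@app.post("/call/files.read", dependencies=[Depends(check_auth)])
def files_read(body: dict, stream: bool = False):
    with open(body["path"]) as fh:
        content = fh.read(body.get("max_bytes", 65536))
    if not stream:
        # One-shot reply: a single JSON envelope.
        return {"content": content}
    # Streaming reply: newline-delimited JSON chunks (application/x-ndjson).
    def chunks():
        for i in range(0, len(content), 16384):
            yield json.dumps({"chunk": content[i:i + 16384]}) + "\n"
    return StreamingResponse(chunks(), media_type="application/x-ndjson")

The same handler serves both styles, so a client that cannot stream simply omits the flag.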

Capability Matrix (Quick Reference)

  • Tool discovery via /tools: broadly supported, though some clients prefer static config files.
  • One-shot responses: supported across clients.
  • Streaming chunks: supported, but only in limited contexts on some clients.
  • Large output pagination: supported.
  • Default auth header: Claude expects Authorization: Bearer, Cursor takes x-api-key or Bearer, Copilot takes Bearer, and generic agents are configurable.
  • WebSocket support: varies by client.
  • Structured JSON errors with a path field: supported.

(Support is native in some clients and partial or environment-dependent in others; design your server so every capability still works via a fallback path.)

Config Examples (Copy/Paste)

1) Generic JSON config (static)

{
  "name": "acme-mcp",
  "server": "https://mcp.acme.com",
  "auth": { "type": "bearer", "env": "ACME_MCP_TOKEN" },
  "capabilities": ["files.read","repo.search","math.add"],
  "transport": { "mode": "https+streaming", "timeout_ms": 2000 }
}

2) Per-client overrides (YAML)

defaults:
  base_url: https://mcp.acme.com
  timeout_ms: 2000
  auth:
    header: Authorization
    scheme: Bearer
    env: ACME_MCP_TOKEN

clients:
  claude:
    transport: websocket
  cursor:
    auth:
      header: x-api-key
      scheme: none
      env: ACME_MCP_KEY
  copilot:
    transport: https
    streaming: disabled
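
Applying those overrides is a plain configuration merge at startup. A small sketch with PyYAML, assuming the YAML above is saved as clients.yaml; the deep_merge and profile_for helpers are our own names, not part of MCP:

import yaml  # PyYAML

def deep_merge(base: dict, override: dict) -> dict:
    """Recursively overlay `override` on top of `base` without mutating either."""
    merged = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged

def profile_for(client: str, path: str = "clients.yaml") -> dict:
    """Resolve the effective config for one client: defaults plus its overrides."""
    with open(path) as fh:
        cfg = yaml.safe_load(fh)
    return deep_merge(cfg.get("defaults", {}), cfg.get("clients", {}).get(client, {}))

# profile_for("cursor") -> auth.header == "x-api-key", everything else from defaults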

3) Minimal runtime discovery endpoint

from fastapi import FastAPI

app = FastAPI()

@app.get("/tools")
def list_tools():
    """Advertise version, feature flags, and tool contracts so clients can negotiate."""
    return {
      "version": "1.3.0",
      "features": ["streaming", "pagination", "retry-after"],
      "tools": [
        {"name": "files.read", "args": {"path": "string", "max_bytes": "int"}, "returns": {"content": "string"}},
        {"name": "repo.search", "args": {"q": "string", "limit": "int"}, "returns": {"results": "array"}}
      ]
    }

Client Quirks & Gotchas

Common Issues

  • Strict schema validation: treat unknown fields as errors; return error.path to help clients self-correct (see the error envelope sketch after this list).
  • Chunk size: some UIs choke on >64KB chunks—paginate or stream smaller increments.
  • Tool name casing: keep lowercase with dots (repo.search); avoid spaces/uppercase.
  • Binary data: always base64 + explicit encoding.
  • Deadlines: honor deadline_ms and include latency_ms in responses; some clients infer retries from it.
  • Idempotency: mark safe replays; clients may retry on transient failures.
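
For the strict-validation bullet, here is one reasonable error envelope; it matches the error.code and error.path fields the validation runbook below checks for, but the tool_error helper itself is an assumption, not a client requirement:

from fastapi.responses import JSONResponse

def tool_error(code: str, message: str, path: str | None = None, status: int = 400):
    """Structured error envelope: a machine-readable code plus the offending field path."""
    body = {"error": {"code": code, "message": message}}
    if path is not None:
        body["error"]["path"] = path
    return JSONResponse(status_code=status, content=body)

# Example: an unknown argument on files.read
# tool_error("invalid_argument", "unknown field 'maxbytes'", path="args.maxbytes")
# -> 400 {"error": {"code": "invalid_argument", "message": "...", "path": "args.maxbytes"}}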

Versioning, Negotiation & Fallbacks

Version Management

  • Semantic versions for server + contracts
  • Feature flags (streaming, delta, rich-errors)
  • Canary support for testing

Fallback Strategy

  • Client version negotiation
  • Feature detection via /tools
  • Automatic fallbacks (streaming → pagination), as sketched below
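
A caller-side sketch of that negotiation, assuming the /tools payload shown earlier plus a hypothetical /call/<tool> route that accepts a stream flag and a cursor argument; requests is used for brevity:

import json

import requests

def call_with_fallback(base_url: str, tool: str, args: dict, headers: dict) -> list:
    """Prefer streaming when the server advertises it; otherwise page with next_cursor."""
    features = requests.get(f"{base_url}/tools", headers=headers, timeout=5).json().get("features", [])
    if "streaming" in features:
        resp = requests.post(f"{base_url}/call/{tool}", json=args, params={"stream": "true"},
                             headers=headers, stream=True, timeout=30)
        return [json.loads(line) for line in resp.iter_lines() if line]
    # Fallback path: one-shot calls driven by next_cursor pagination.
    pages, cursor = [], None
    while True:
        page = requests.post(f"{base_url}/call/{tool}", json={**args, "cursor": cursor},
                             headers=headers, timeout=30).json()
        pages.append(page)
        cursor = page.get("next_cursor")
        if not cursor:
            return pages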

Security & Permissions Across Clients

Security Considerations

  • Scopes map to capabilities: tools:files.read, tools:repo.search.
  • Per-client tokens/keys; never share across tenants.
  • Rate limits per client to prevent noisy neighbor issues.
  • Egress controls on the server regardless of client trust.
  • Audit fields: client_id, request_id, scope, latency_ms, bytes_out.
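
A server-side sketch that ties scopes to capabilities and emits the audit fields above; the authorize_and_audit helper and its log format are assumptions, only the tools:<name> scope convention and the field names come from this list:

import logging
import time
import uuid

audit_log = logging.getLogger("mcp.audit")

def authorize_and_audit(client_id: str, granted_scopes: set, tool: str):
    """Check the scope for one tool call, then return a callback that logs audit fields."""
    required = f"tools:{tool}"  # e.g. tools:files.read
    if required not in granted_scopes:
        raise PermissionError(f"missing scope {required}")
    request_id = str(uuid.uuid4())
    start = time.monotonic()

    def finish(bytes_out: int) -> None:
        audit_log.info(
            "client_id=%s request_id=%s scope=%s latency_ms=%d bytes_out=%d",
            client_id, request_id, required,
            (time.monotonic() - start) * 1000, bytes_out,
        )

    return finish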

Link to your hardening guide for auth and sandboxing.

Validation Runbook & Test Harness

Smoke Matrix (run on every change)

  • Discovery: /tools returns version + features.
  • One-shot: each tool works with a small payload.
  • Streaming: chunked mode verified (where supported).
  • Pagination: returns next_cursor for large outputs.
  • Auth: both Bearer and x-api-key paths.
  • Errors: structured error with error.code & error.path.
  • Timeouts: server honors deadline_ms; client doesn't hang.
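
A pytest-style harness covering part of this matrix; the MCP_BASE_URL and ACME_MCP_TOKEN environment variables and the /call/files.read route are carried over from the earlier sketches, so adjust them to your server:

import os

import requests

BASE = os.environ["MCP_BASE_URL"]      # e.g. https://mcp.acme.com
TOKEN = os.environ["ACME_MCP_TOKEN"]

def test_discovery_lists_version_and_features():
    body = requests.get(f"{BASE}/tools",
                        headers={"Authorization": f"Bearer {TOKEN}"}, timeout=5).json()
    assert body["version"] and isinstance(body["features"], list)

def test_api_key_header_is_accepted():
    resp = requests.get(f"{BASE}/tools", headers={"x-api-key": TOKEN}, timeout=5)
    assert resp.status_code == 200

def test_errors_carry_code_and_path():
    resp = requests.post(f"{BASE}/call/files.read", json={"bogus": 1},
                         headers={"Authorization": f"Bearer {TOKEN}"}, timeout=5)
    assert resp.status_code >= 400
    err = resp.json()["error"]
    assert err["code"] and err["path"]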

Contract Tests (CI)

  • Golden request/response fixtures per tool and per client profile.
  • Schema diff alerts on breaking changes.
  • Load a "compat profile" via env var to emulate client behaviors.

FAQs

What's the fastest path to "works everywhere"?

Keep contracts simple, support both one-shot and streaming, accept Bearer and API-key auth, and ship per-client config presets.

Do I need separate servers per client?

No—build one server with adapters. Separate servers only for different trust boundaries or radically different latency needs.

How do I keep clients from breaking on updates?

Expose versions and features via /tools, add shims for additive changes, and pin clients to a tested compatibility profile.

Key Takeaways

  • Design philosophy: One server, multiple adapters, stable contracts
  • Transport flexibility: Support HTTP, WebSocket, and SSE with fallbacks
  • Authentication: Accept both Bearer tokens and API keys
  • Client quirks: Handle schema validation, chunk sizes, and error formats
  • Testing: Comprehensive smoke tests and contract validation
  • Versioning: Semantic versions with feature flags and fallbacks