RouterLink exposes three API surfaces. Pick the one that matches the SDK or tool you already use:
Surface                Base URL                                      When to use
OpenAI-compatible      https://router-link.world3.ai/v1              OpenAI SDK, LangChain, anything that speaks the OpenAI Chat Completions schema
Anthropic-compatible   https://router-link.world3.ai/api/anthropic   Anthropic SDK, Claude Code, anything that speaks /v1/messages
RouterLink native      https://router-link.world3.ai/api/v1          Account, billing, key management
The beta environment's base URL is https://router-link-beta.world3.ai. It serves the same paths but draws on a separate credit balance.

Authentication

All endpoints use bearer-token auth. Get a key from the Console and pass it in the Authorization header:
Authorization: Bearer rl-...
For the Anthropic-compatible surface, the SDK sends the key in the x-api-key header instead; both headers are accepted.
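As a minimal sketch, the two header forms look like this ("rl-example-key" is a placeholder, not a real key):

```python
def auth_headers(api_key: str, surface: str = "openai") -> dict:
    """Build the auth header for a given RouterLink API surface."""
    if surface == "anthropic":
        # The Anthropic SDK sends the key as x-api-key; RouterLink accepts it.
        return {"x-api-key": api_key}
    # Default: standard bearer-token auth.
    return {"Authorization": f"Bearer {api_key}"}

print(auth_headers("rl-example-key"))
# {'Authorization': 'Bearer rl-example-key'}
```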

OpenAI-compatible

Chat Completions

POST /v1/chat/completions
Drop-in compatible with OpenAI’s Chat Completions API. Streaming, tool use, vision, JSON mode, and structured outputs all work the same way.
from openai import OpenAI

client = OpenAI(
    base_url="https://router-link.world3.ai/v1",
    api_key="<your-routerlink-key>",
)

stream = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Stream me a haiku."}],
    stream=True,
)
for chunk in stream:
    print(chunk.choices[0].delta.content or "", end="")

Cross-provider routing

The same endpoint accepts any model name from any supported provider — RouterLink dispatches to the right upstream:
{ "model": "gpt-4o" }              // OpenAI
{ "model": "claude-sonnet-4-6" }   // Anthropic
{ "model": "gemini-2.5-pro" }      // Google
For the live model list, call GET /v1/models or check the Models page in the Console.
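A stdlib-only sketch of that GET /v1/models call, with the network round trip kept out of the request builder (the response shape is assumed to follow the OpenAI list schema, with model ids under data):

```python
import json
import urllib.request

BASE_URL = "https://router-link.world3.ai/v1"

def models_request(api_key: str) -> urllib.request.Request:
    """Build the GET /v1/models request; no network I/O here."""
    return urllib.request.Request(
        f"{BASE_URL}/models",
        headers={"Authorization": f"Bearer {api_key}"},
    )

if __name__ == "__main__":
    # Requires a valid key and network access.
    with urllib.request.urlopen(models_request("<your-routerlink-key>")) as resp:
        for model in json.load(resp)["data"]:
            print(model["id"])
```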

Other endpoints

  • POST /v1/embeddings — text embeddings
  • POST /v1/images/generations — image generation
  • POST /v1/audio/transcriptions — speech-to-text
  • POST /v1/audio/speech — text-to-speech
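As one worked example from the list above, a POST /v1/embeddings request body follows the OpenAI embeddings schema (model plus input); the model name below is illustrative only, so check GET /v1/models for what your key can actually use:

```python
import json
import urllib.request

def embeddings_request(api_key: str, texts: list[str]) -> urllib.request.Request:
    """Build a POST /v1/embeddings request following the OpenAI schema."""
    # "text-embedding-3-small" is an example model name, not a guarantee.
    body = json.dumps({"model": "text-embedding-3-small", "input": texts})
    return urllib.request.Request(
        "https://router-link.world3.ai/v1/embeddings",
        data=body.encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
```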

Anthropic-compatible

The Anthropic-compatible surface exposes the native /v1/messages and /v1/messages/count_tokens endpoints under the /api/anthropic prefix:
import anthropic

client = anthropic.Anthropic(
    base_url="https://router-link.world3.ai/api/anthropic",
    api_key="<your-routerlink-key>",
)

msg = client.messages.create(
    model="claude-sonnet-4-6",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Hi!"}],
)
Supported features: streaming, tool use, prompt caching (cache_control), extended thinking, vision, and the Files API (/v1/files).
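Prompt caching works by tagging content blocks with cache_control, as in Anthropic's prompt-caching schema. A sketch of a /v1/messages body that marks a large system prompt as cacheable:

```python
# Mark a long system prompt as cacheable across requests.
payload = {
    "model": "claude-sonnet-4-6",
    "max_tokens": 1024,
    "system": [
        {
            "type": "text",
            "text": "<long reference document goes here>",
            # Ask the upstream to cache this block for reuse.
            "cache_control": {"type": "ephemeral"},
        }
    ],
    "messages": [{"role": "user", "content": "Summarize the document."}],
}
```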

Streaming

Both surfaces support Server-Sent Events. Set stream: true (OpenAI) or use the Anthropic streaming client. SSE event names match the upstream provider exactly, so your existing parsing code keeps working.
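If you are parsing the stream by hand rather than through an SDK, a minimal parser for OpenAI-style data: lines (with the [DONE] end-of-stream sentinel) looks like this:

```python
import json

def iter_sse_data(lines):
    """Yield parsed JSON payloads from OpenAI-style SSE 'data:' lines."""
    for line in lines:
        line = line.strip()
        if not line.startswith("data:"):
            continue  # skip event: lines, comments, and blank keep-alives
        data = line[len("data:"):].strip()
        if data == "[DONE]":  # OpenAI-style end-of-stream sentinel
            return
        yield json.loads(data)

sample = [
    'data: {"choices": [{"delta": {"content": "Hi"}}]}',
    "data: [DONE]",
]
for event in iter_sse_data(sample):
    print(event["choices"][0]["delta"]["content"])
# Hi
```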

Errors

Errors follow the upstream provider’s schema, with a stable RouterLink-specific code in the error.type field:
HTTP   error.type                        Meaning
400    routerlink_invalid_request        Malformed payload or unsupported parameter
401    routerlink_unauthorized           Missing or invalid API key
403    routerlink_forbidden              Key disabled, account suspended, or model not enabled
404    routerlink_not_found              Unknown route or resource
429    routerlink_budget_exceeded        Per-key monthly budget cap hit; raise the cap or add credits
429    routerlink_rate_limited           Per-key request rate limit hit
502    routerlink_upstream_error         Upstream provider returned an error
503    routerlink_upstream_unavailable   Upstream provider unreachable
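A sketch of client-side error classification based on the table above. It assumes the RouterLink code arrives in error.type inside an {"error": {...}} envelope, per the upstream schema; which codes you treat as retryable is a policy choice, shown here as one reasonable default:

```python
import json

# Transient conditions worth retrying; budget and auth errors are not.
RETRYABLE = {
    "routerlink_rate_limited",
    "routerlink_upstream_error",
    "routerlink_upstream_unavailable",
}

def classify_error(body: str) -> tuple[str, bool]:
    """Return (error.type, retryable?) for an error response body."""
    err_type = json.loads(body)["error"]["type"]
    return err_type, err_type in RETRYABLE

print(classify_error('{"error": {"type": "routerlink_rate_limited"}}'))
# ('routerlink_rate_limited', True)
```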

Rate limits & budgets

Each API key has two independent guardrails:
  • Rate limit — requests per minute, configured per key. Returns 429 routerlink_rate_limited with a Retry-After header.
  • Monthly budget — hard cap on USD spend per calendar month. Returns 429 routerlink_budget_exceeded. Email alerts fire at 70 / 90 / 100% of the cap.
Both are configured in the Console under API Keys → Edit.
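When a 429 routerlink_rate_limited response arrives, honor the Retry-After header before falling back to your own backoff; routerlink_budget_exceeded should not be retried at all, since only raising the cap or adding credits clears it. The backoff fallback below is an assumption, not a RouterLink requirement:

```python
def retry_delay(headers: dict, attempt: int) -> float:
    """Seconds to sleep before retrying a rate-limited request.

    Prefer the server's Retry-After header; otherwise fall back to
    simple capped exponential backoff.
    """
    retry_after = headers.get("Retry-After")
    if retry_after is not None:
        return float(retry_after)
    return min(float(2 ** attempt), 30.0)

print(retry_delay({"Retry-After": "5"}, attempt=0))  # 5.0
print(retry_delay({}, attempt=3))                    # 8.0
```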

Webhooks

Coming soon — subscribe to billing events, budget alerts, and key lifecycle changes. Track the roadmap for status.

SDKs and integrations

  • OpenAI SDKs (Python, Node, Go, .NET) work without modification; just override base_url.
  • Anthropic SDKs — Python, TypeScript — set base_url to the /api/anthropic path.
  • LangChain / LlamaIndex — use the OpenAI provider with our base URL.
  • Claude Code — see the dedicated guide.
  • AI SDK (Vercel) — use @ai-sdk/openai-compatible with our base URL.

Need help?

  • Operational status: status.routerlink.ai
  • Issues / feature requests: file via the Console Support menu