
Anthropic Chat Completions (OpenAI Protocol)

Call Claude-family models through an OpenAI-compatible interface.

Dual-Protocol Support

Anthropic models can be called in two ways:

  • OpenAI protocol (/v1/chat/completions) — this page; fully compatible with the OpenAI format
  • Anthropic native protocol (/v1/messages) — see the Messages API

If you already have an OpenAI SDK integration, switching to a Claude model only requires changing the model parameter. If you are more familiar with the Anthropic SDK, use the native protocol instead.

Request

POST https://api.clawdrouter.com/v1/chat/completions

Request Headers

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| Authorization | string | Yes | Bearer YOUR_API_KEY |
| Content-Type | string | Yes | application/json |
| Request-Id | string | No | Unique business identifier generated by the client system, used for request tracing and troubleshooting |
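A Request-Id can be any identifier that is unique within your system. A minimal sketch of generating one (a UUID is used here purely as an illustration; the format is not mandated by the API):

```python
import uuid

# Generate a unique business identifier for tracing. The format is up to your
# system; a random UUID is one common choice.
request_id = str(uuid.uuid4())

# Headers to send alongside the standard ones from the table above.
tracing_headers = {"Request-Id": request_id}

# With the OpenAI Python SDK, per-request headers can be attached via the
# `extra_headers` option on the create() call:
#
#   completion = client.chat.completions.create(
#       model="claude-sonnet-4-6",
#       messages=[{"role": "user", "content": "你好"}],
#       extra_headers=tracing_headers,
#   )
```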

Request Body

The request parameters are the same as OpenAI Chat Completions. The core parameters are listed below:

| Parameter | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| model | string | Yes | — | Model identifier; see the model list |
| messages | array | Yes | — | List of conversation messages |
| stream | boolean | No | false | Whether to enable streaming output |
| temperature | number | No | 1 | Sampling temperature, range 0–2 |
| top_p | number | No | 1 | Nucleus-sampling parameter |
| max_tokens | integer | No | 4096 | Maximum number of tokens to generate |
| stop | string / array | No | — | Stop sequences |
| tools | array | No | — | Tool list (function calling) |
| tool_choice | string / object | No | — | Tool-invocation control |
| response_format | object | No | — | Output-format control |

For the full parameter reference, see the OpenAI Chat Completions parameters.
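The tools and tool_choice parameters follow the standard OpenAI function-calling schema. A minimal sketch of a tool definition (the get_weather tool, its description, and its city parameter are hypothetical examples, not part of this API):

```python
# A hypothetical tool definition in the OpenAI function-calling schema.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a city",
            "parameters": {
                # JSON Schema describing the tool's arguments.
                "type": "object",
                "properties": {
                    "city": {"type": "string", "description": "City name"}
                },
                "required": ["city"],
            },
        },
    }
]

# This list goes in the request body alongside messages, e.g.:
#
#   {"model": "claude-sonnet-4-6", "messages": [...],
#    "tools": tools, "tool_choice": "auto"}
```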

Request Examples

Basic Request

curl https://api.clawdrouter.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "model": "claude-sonnet-4-6",
    "messages": [
      {"role": "user", "content": "你好"}
    ]
  }'

Streaming Request

curl https://api.clawdrouter.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -N \
  -d '{
    "model": "claude-sonnet-4-6",
    "messages": [
      {"role": "user", "content": "写一首关于春天的诗"}
    ],
    "stream": true
  }'

Multimodal Input (Images)

Anthropic models accept two content formats: a plain string and an array. The array format is used for multimodal input:

curl https://api.clawdrouter.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "model": "claude-sonnet-4-6",
    "messages": [
      {
        "role": "user",
        "content": [
          {"type": "text", "text": "描述这张图片"},
          {
            "type": "image_url",
            "image_url": {
              "url": "data:image/png;base64,base64_encoded_data..."
            }
          }
        ]
      }
    ]
  }'
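The image_url.url field above carries a base64 data URL. A minimal sketch of producing one from raw image bytes (the helper name to_data_url is illustrative, not part of any SDK):

```python
import base64


def to_data_url(image_bytes: bytes, mime: str = "image/png") -> str:
    """Encode raw image bytes as the data URL accepted in image_url.url."""
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return f"data:{mime};base64,{b64}"


# Build a multimodal message in the array content format shown above.
# In practice image_bytes would come from open("photo.png", "rb").read().
image_bytes = b"..."  # placeholder for real image data
message = {
    "role": "user",
    "content": [
        {"type": "text", "text": "描述这张图片"},
        {"type": "image_url", "image_url": {"url": to_data_url(image_bytes)}},
    ],
}
```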

Response

Non-Streaming Response

{
  "id": "chatcmpl-93e77518-06fb-410a-afb1-c26449930dfb",
  "created": 1761193047,
  "model": "claude-sonnet-4-6",
  "object": "chat.completion",
  "system_fingerprint": null,
  "choices": [
    {
      "finish_reason": "stop",
      "index": 0,
      "message": {
        "content": "你好!很高兴认识你。请问有什么我可以帮助你的吗?",
        "role": "assistant",
        "tool_calls": null,
        "function_call": null
      }
    }
  ],
  "usage": {
    "completion_tokens": 28,
    "prompt_tokens": 10,
    "total_tokens": 38,
    "prompt_tokens_details": {
      "cached_tokens": 0
    },
    "cache_creation_input_tokens": 0,
    "cache_read_input_tokens": 0
  }
}

Response Fields

| Field | Type | Description |
| --- | --- | --- |
| id | string | Unique identifier for this completion |
| object | string | Always chat.completion |
| created | integer | Unix timestamp of creation |
| model | string | The model version actually used |
| choices | array | List of completion results |
| choices[].message | object | The message generated by the model |
| choices[].finish_reason | string | Why generation stopped: stop (normal completion) / length (length limit reached) / tool_calls (tool invocation) |
| usage | object | Token usage statistics |
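When post-processing a response, each finish_reason value calls for different handling. A minimal sketch (the helper name and messages are illustrative):

```python
def describe_finish(finish_reason: str) -> str:
    """Map a finish_reason value to a suggested follow-up action."""
    return {
        "stop": "model finished naturally",
        "length": "hit max_tokens; consider raising the limit or continuing",
        "tool_calls": "model requested a tool call; run it and send the result back",
    }.get(finish_reason, f"unexpected finish_reason: {finish_reason}")
```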

Anthropic-Specific Token Usage Fields

The usage block for Anthropic models includes extra cache-related statistics:

| Field | Type | Description |
| --- | --- | --- |
| cache_creation_input_tokens | integer | Input tokens consumed creating a cache in this request |
| cache_read_input_tokens | integer | Input tokens read from the cache in this request |
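These fields can be used to gauge how well prompt caching is working. A minimal sketch, assuming (as with OpenAI's prompt_tokens_details.cached_tokens) that cache-read tokens are counted within prompt_tokens; the helper name is illustrative:

```python
def cache_hit_ratio(usage: dict) -> float:
    """Fraction of input tokens served from the cache in one request."""
    read = usage.get("cache_read_input_tokens", 0)
    total = usage.get("prompt_tokens", 0)
    return read / total if total else 0.0
```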

Streaming Response

In streaming mode, the response is returned in SSE (Server-Sent Events) format:

data: {"id":"chatcmpl-xxx","object":"chat.completion.chunk","choices":[{"index":0,"delta":{"role":"assistant","content":"你"},"finish_reason":null}]}

data: {"id":"chatcmpl-xxx","object":"chat.completion.chunk","choices":[{"index":0,"delta":{"content":"好"},"finish_reason":null}]}

data: {"id":"chatcmpl-xxx","object":"chat.completion.chunk","choices":[{"index":0,"delta":{},"finish_reason":"stop"}]}

data: [DONE]
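The official SDKs decode this stream for you, but the chunks above can also be parsed by hand. A minimal sketch of decoding data lines and accumulating delta.content (helper names are illustrative):

```python
import json


def parse_sse_line(line: str):
    """Decode one SSE line from the stream format above.

    Returns the chunk as a dict, or None for non-data lines and the
    final [DONE] sentinel.
    """
    if not line.startswith("data: "):
        return None
    payload = line[len("data: "):]
    if payload.strip() == "[DONE]":
        return None
    return json.loads(payload)


def collect_content(lines) -> str:
    """Concatenate delta.content across all chunks in the stream."""
    parts = []
    for line in lines:
        chunk = parse_sse_line(line)
        if chunk is None:
            continue
        delta = chunk["choices"][0].get("delta", {})
        if delta.get("content"):
            parts.append(delta["content"])
    return "".join(parts)
```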

Using the Python SDK

from openai import OpenAI

client = OpenAI(
    api_key="YOUR_API_KEY",
    base_url="https://api.clawdrouter.com/v1",
)

# Basic call
completion = client.chat.completions.create(
    model="claude-sonnet-4-6",
    messages=[
        {"role": "user", "content": "用简单的话解释量子计算"}
    ],
)

print(completion.choices[0].message.content)

Streaming Call

stream = client.chat.completions.create(
    model="claude-sonnet-4-6",
    messages=[
        {"role": "user", "content": "写一篇关于人工智能的短文"}
    ],
    stream=True,
)

for chunk in stream:
    content = chunk.choices[0].delta.content
    if content is not None:
        print(content, end="", flush=True)

Using the Node.js SDK

import OpenAI from "openai";

const client = new OpenAI({
  apiKey: "YOUR_API_KEY",
  baseURL: "https://api.clawdrouter.com/v1",
});

async function main() {
  const completion = await client.chat.completions.create({
    model: "claude-sonnet-4-6",
    messages: [
      { role: "user", content: "用简单的话解释量子计算" },
    ],
  });

  console.log(completion.choices[0].message.content);
}

main();

Streaming Call

async function streamChat() {
  const stream = await client.chat.completions.create({
    model: "claude-sonnet-4-6",
    messages: [
      { role: "user", content: "写一篇关于人工智能的短文" },
    ],
    stream: true,
  });

  for await (const chunk of stream) {
    const content = chunk.choices[0]?.delta?.content;
    if (content) {
      process.stdout.write(content);
    }
  }
}

streamChat();