Anthropic: Claude Opus 4.1
anthropic/claude-opus-4.1
Claude Opus 4.1 is an updated version of Anthropic’s flagship model, offering improved performance in coding, reasoning, and agentic tasks. It achieves 74.5% on SWE-bench Verified and shows notable gains in multi-file code refactoring, debugging precision, and detail-oriented reasoning. The model supports extended thinking up to 64K tokens and is optimized for tasks involving research, data analysis, and tool-assisted reasoning.
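The description mentions extended thinking; the sketch below shows one way a thinking budget could be requested through ZenMux's anthropic-compatible protocol (listed under Model Protocol Compatibility further down). The base URL for the anthropic protocol, reuse of the same model slug, and the prompt itself are assumptions for illustration, not details confirmed on this page.

python
# Hedged sketch: requesting extended thinking via the anthropic protocol.
# Assumptions: ZenMux exposes an Anthropic-compatible endpoint at this base_url
# and forwards Anthropic's `thinking` parameter to the model unchanged.
import anthropic

client = anthropic.Anthropic(
  base_url="https://zenmux.ai/api/v1",  # assumed endpoint; check ZenMux docs
  api_key="<ZenMux_API_KEY>",
)

response = client.messages.create(
  model="anthropic/claude-opus-4.1",
  max_tokens=16000,  # must exceed the thinking budget
  thinking={"type": "enabled", "budget_tokens": 8000},
  messages=[{"role": "user", "content": "Plan a multi-file refactor of a small web app."}],
)

# The reply interleaves thinking blocks and text blocks; print only the text.
for block in response.content:
  if block.type == "text":
    print(block.text)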
By: anthropic
Recent activity on Claude Opus 4.1 (chart: tokens processed per day)
Throughput (tokens/s) by provider:
Anthropic: min 0.46, max 37.29, avg 6.56
Vertex AI: min 0.55, max 43.05, avg 15.71
Amazon Bedrock: min 0.38, max 19.03, avg 14.19
First Token Latency (ms) by provider:
Anthropic: min 161, max 13232, avg 2300.24
Vertex AI: min 1644, max 32986, avg 7054.29
Amazon Bedrock: min 361, max 26012, avg 4187.60
Providers for Claude Opus 4.1
ZenMux routes your requests to the best providers that are able to handle your prompt size and parameters, with fallbacks to maximize uptime.
Latency: 3.43 s
Throughput: 4.38 tps
Uptime: 100.00%
Recent uptime: Oct 10, 2025 - 3 PM: 100.00%
Price
Input: $15 / M tokens
Output: $75 / M tokens
Cache read: $1.50 / M tokens
Cache write (5 min): $18.75 / M tokens
Cache write (1 h): $30 / M tokens
Web search: $0.01 / request
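As a quick illustration of the pricing above, the snippet below estimates the cost of one request from token counts. Only the per-million rates come from this page; the token counts and the helper function are made up for the example.

python
# Rough cost estimate at the listed rates: $15 / M input tokens,
# $75 / M output tokens, $1.50 / M cached input tokens read.
INPUT_PER_M = 15.00
OUTPUT_PER_M = 75.00
CACHE_READ_PER_M = 1.50

def request_cost(input_tokens: int, output_tokens: int, cached_tokens: int = 0) -> float:
    """Return the estimated USD cost, billing cached input tokens at the cache-read rate."""
    uncached = input_tokens - cached_tokens
    return (
        uncached * INPUT_PER_M / 1_000_000
        + cached_tokens * CACHE_READ_PER_M / 1_000_000
        + output_tokens * OUTPUT_PER_M / 1_000_000
    )

# Hypothetical example: a 20K-token prompt (15K served from cache) and a 2K-token reply.
print(f"${request_cost(20_000, 2_000, cached_tokens=15_000):.4f}")  # -> $0.2475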
Model limitations
Context: 200.00K tokens
Max output: 32.00K tokens
Supported Parameters
Supported: max_completion_tokens, temperature, top_p, stop, tools, tool_choice, parallel_tool_calls
Not supported: frequency_penalty, presence_penalty, seed, logit_bias, logprobs, top_logprobs, response_format
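Since tools, tool_choice, and parallel_tool_calls are listed as supported, a tool-calling request through the OpenAI-compatible endpoint might look like the sketch below. The get_weather tool and its schema are invented for illustration; only the endpoint, model slug, and parameter names come from this page.

python
# Hedged sketch: tool calling through the OpenAI-compatible endpoint.
from openai import OpenAI

client = OpenAI(base_url="https://zenmux.ai/api/v1", api_key="<ZenMux_API_KEY>")

# Hypothetical tool definition used only for this example.
tools = [
  {
    "type": "function",
    "function": {
      "name": "get_weather",
      "description": "Get the current weather for a city.",
      "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
      },
    },
  }
]

completion = client.chat.completions.create(
  model="anthropic/claude-opus-4.1",
  messages=[{"role": "user", "content": "What's the weather in Paris and in Tokyo?"}],
  tools=tools,
  tool_choice="auto",
  parallel_tool_calls=True,  # allow both lookups in a single turn
)

# Print any tool calls the model decided to make.
for call in completion.choices[0].message.tool_calls or []:
  print(call.function.name, call.function.arguments)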
Model Protocol Compatibility: openai, anthropic
Data policy
Prompt training: false
Prompt logging: zero retention
Moderation: responsibility of the developer
Status page
Latency: -
Throughput: -
Uptime: 100.00%
Recent uptime: Oct 10, 2025 - 3 PM: 100.00%
Price
Input: $15 / M tokens
Output: $75 / M tokens
Cache read: $1.50 / M tokens
Cache write (5 min): $18.75 / M tokens
Cache write (1 h): -
Web search: -
Model limitations
Context: 32.00K tokens
Max output: 32.00K tokens
Supported Parameters
Supported: max_completion_tokens, temperature, top_p, stop, tools, tool_choice, parallel_tool_calls
Not supported: frequency_penalty, presence_penalty, seed, logit_bias, logprobs, top_logprobs, response_format
Model Protocol Compatibility: openai, anthropic
Data policy
Prompt training: false
Prompt logging: zero retention
Moderation: responsibility of the developer
Status page
Latency: -
Throughput: -
Uptime: 100.00%
Recent uptime: Oct 10, 2025 - 3 PM: 100.00%
Price
Input: $15 / M tokens
Output: $75 / M tokens
Cache read: $1.50 / M tokens
Cache write (5 min): $18.75 / M tokens
Cache write (1 h): -
Web search: -
Model limitations
Context: 200.00K tokens
Max output: 32.00K tokens
Supported Parameters
Supported: max_completion_tokens, temperature, top_p, stop, tools, tool_choice, parallel_tool_calls
Not supported: frequency_penalty, presence_penalty, seed, logit_bias, logprobs, top_logprobs, response_format
Model Protocol Compatibility: openai, anthropic
Data policy
Prompt training: false
Prompt logging: zero retention
Moderation: responsibility of the developer
Status page
Sample code and API for Claude Opus 4.1
ZenMux normalizes requests and responses across providers for you.
Samples: OpenAI-Python | Python | TypeScript | OpenAI-TypeScript | cURL
python
from openai import OpenAI

client = OpenAI(
  base_url="https://zenmux.ai/api/v1",
  api_key="<ZenMux_API_KEY>",
)

completion = client.chat.completions.create(
  model="anthropic/claude-opus-4.1",
  messages=[
    {
      "role": "user",
      "content": [
        {
          "type": "text",
          "text": "What is in this image?"
        }
      ]
    }
  ]
)
print(completion.choices[0].message.content)
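Because max_completion_tokens is listed among the supported parameters and the model's max output is 32K tokens, a request with an explicit output cap might look like the sketch below. The 8,192 cap and the prompt text are arbitrary example values, not recommendations from this page.

python
# Capping output length with max_completion_tokens (listed as supported above).
# 8192 is an arbitrary example cap; the listed maximum output is 32K tokens.
from openai import OpenAI

client = OpenAI(base_url="https://zenmux.ai/api/v1", api_key="<ZenMux_API_KEY>")

completion = client.chat.completions.create(
  model="anthropic/claude-opus-4.1",
  max_completion_tokens=8192,
  messages=[{"role": "user", "content": "Summarize the tradeoffs of prompt caching."}],
)
print(completion.choices[0].message.content)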