Google: Gemini 2.5 Flash
google/gemini-2.5-flash
Gemini 2.5 Flash is Google's state-of-the-art workhorse model, designed for advanced reasoning, coding, mathematics, and scientific tasks. It includes built-in "thinking" capabilities that let it produce more accurate responses with more nuanced context handling, and the thinking budget is configurable through the "max tokens for reasoning" parameter described in the documentation.
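As a sketch of how that reasoning budget might be passed through an OpenAI-compatible request body (the `reasoning` field name and shape here are assumptions, not confirmed by this page; consult the ZenMux documentation for the exact parameter):

```python
import json

# Hypothetical request body. The "reasoning" field and its shape are
# assumptions for illustration only, not confirmed by this page.
payload = {
    "model": "google/gemini-2.5-flash",
    "messages": [
        {"role": "user", "content": "Summarize the proof that sqrt(2) is irrational."}
    ],
    "reasoning": {"max_tokens": 4096},  # assumed cap on internal "thinking" tokens
}

body = json.dumps(payload)  # what would be POSTed to the chat completions endpoint
```

The budget trades answer quality against latency and cost: thinking tokens are billed as output, so a tighter cap keeps long-reasoning requests cheaper.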
Recent activity on Gemini 2.5 Flash: tokens processed per day (chart)
Throughput (tokens/s)
Provider        Min (tokens/s)  Max (tokens/s)  Avg (tokens/s)
Google Vertex   37.32           241.84          148.82
SkyRouter       25.85           232.23          81.43
First Token Latency (ms)
Provider        Min (ms)  Max (ms)  Avg (ms)
Google Vertex   1575      5894      2996.19
SkyRouter       1523      3883      2388.82
Providers for Gemini 2.5 Flash
ZenMux routes each request to the best providers able to handle your prompt size and parameters, with fallbacks to maximize uptime.
Latency: -
Throughput: -
Uptime: 100.00%
Recent uptime: 100.00% (Oct 10, 2025, 3 PM)
Price
Input: $0.30 / M tokens
Output: $2.50 / M tokens
Cache read: $0.075 / M tokens
Cache write (5 min): -
Cache write (1 h): $1.00 / M tokens
Cache write: -
Web search: $0.035 / request
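Given the prices above, the cost of a single request can be estimated from its token counts. A minimal sketch, with the per-million-token rates hard-coded from this page:

```python
# Per-million-token prices taken from the table above.
PRICE_INPUT = 0.30        # $ per M input tokens
PRICE_OUTPUT = 2.50       # $ per M output tokens
PRICE_CACHE_READ = 0.075  # $ per M cached input tokens read

def request_cost(input_tokens: int, output_tokens: int, cached_tokens: int = 0) -> float:
    """Estimated USD cost of one request.

    Cached input tokens are billed at the cheaper cache-read rate;
    the remaining input tokens are billed at the normal input rate.
    """
    billable_input = input_tokens - cached_tokens
    return (billable_input * PRICE_INPUT
            + cached_tokens * PRICE_CACHE_READ
            + output_tokens * PRICE_OUTPUT) / 1_000_000

# Example: 10,000 input tokens (8,000 of them cached) and 2,000 output tokens.
cost = request_cost(10_000, 2_000, cached_tokens=8_000)  # -> 0.0062 (USD)
```

Note how output tokens dominate: at $2.50/M they cost over 8x as much as uncached input.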
Model limits
Context: 1.05M tokens
Max output: 65.53K tokens
Supported Parameters
Supported: max_completion_tokens, temperature, top_p, seed, stop, tools, tool_choice
Not supported: frequency_penalty, presence_penalty, logit_bias, logprobs, top_logprobs, response_format, parallel_tool_calls
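Since this provider accepts only a subset of the usual chat-completion parameters, a request can be pre-filtered to the supported set before sending. A small sketch (the supported list is transcribed from this page):

```python
# Parameters listed as supported for google/gemini-2.5-flash on this page.
SUPPORTED = {
    "max_completion_tokens", "temperature", "top_p",
    "seed", "stop", "tools", "tool_choice",
}

def filter_params(params: dict) -> dict:
    """Drop request parameters this model/provider does not support."""
    return {k: v for k, v in params.items() if k in SUPPORTED}

request = {"temperature": 0.7, "logprobs": True, "seed": 42}
safe = filter_params(request)  # logprobs is dropped; temperature and seed remain
```

Dropping unsupported keys client-side avoids provider-specific validation errors when the same request template is reused across models.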
Model Protocol Compatibility
openai: supported
anthropic: not supported
Data policy
Prompt training: false
Prompt logging: zero retention
Moderation: responsibility of the developer
Status page
Latency: 1.68 s
Throughput: 36.15 tokens/s
Uptime: 100.00%
Recent uptime: 100.00% (Oct 10, 2025, 3 PM)
Price
Input: $0.30 / M tokens
Output: $2.50 / M tokens
Cache read: $0.075 / M tokens
Cache write (5 min): -
Cache write (1 h): $1.00 / M tokens
Cache write: -
Web search: $0.035 / request
Model limits
Context: 1.05M tokens
Max output: 65.53K tokens
Supported Parameters
Supported: max_completion_tokens, temperature, top_p, seed, stop, tools, tool_choice
Not supported: frequency_penalty, presence_penalty, logit_bias, logprobs, top_logprobs, response_format, parallel_tool_calls
Model Protocol Compatibility
openai: supported
anthropic: not supported
Data policy
Prompt training: false
Prompt logging: zero retention
Moderation: responsibility of the developer
Sample code and API for Gemini 2.5 Flash
ZenMux normalizes requests and responses across providers for you.
python
from openai import OpenAI

client = OpenAI(
  base_url="https://zenmux.ai/api/v1",
  api_key="<ZenMux_API_KEY>",
)

completion = client.chat.completions.create(
  model="google/gemini-2.5-flash",
  messages=[
    {
      "role": "user",
      "content": [
        {
          "type": "text",
          "text": "What is in this image?"
        },
        {
          # Placeholder URL; replace with your own image.
          "type": "image_url",
          "image_url": {
            "url": "https://example.com/image.jpg"
          }
        }
      ]
    }
  ]
)
print(completion.choices[0].message.content)