Google: Gemini 2.5 Flash Lite
google/gemini-2.5-flash-lite
Gemini 2.5 Flash-Lite is a lightweight reasoning model in the Gemini 2.5 family, optimized for ultra-low latency and cost efficiency. It offers improved throughput, faster token generation, and better performance across common benchmarks compared to earlier Flash models. By default, "thinking" (i.e. multi-pass reasoning) is disabled to prioritize speed, but developers can enable it via the Reasoning API parameter to selectively trade off cost for intelligence.
Recent activity on Gemini 2.5 Flash Lite (chart: tokens processed per day)
Throughput (tokens/s)

| Provider      | Min   | Max    | Avg    |
|---------------|-------|--------|--------|
| Google Vertex | 24.43 | 471.41 | 158.93 |
| SkyRouter     | 32.31 | 431.95 | 93.17  |
First Token Latency (ms)

| Provider      | Min  | Max  | Avg     |
|---------------|------|------|---------|
| Google Vertex | 1192 | 2734 | 1765.00 |
| SkyRouter     | 1151 | 2145 | 1493.12 |
Providers for Gemini 2.5 Flash Lite
ZenMux routes each request to the best providers able to handle your prompt size and parameters, with fallbacks to maximize uptime.
Latency: -
Throughput: -
Uptime: 100.00%
Recent uptime (Oct 10, 2025, 3 PM): 100.00%
Price
Input: $0.10 / M tokens
Output: $0.40 / M tokens
Cache read: $0.025 / M tokens
Cache write (5m): -
Cache write (1h): $1.00 / M tokens
Cache write: -
Web search: $0.035 / request
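The per-token prices above make request costs easy to estimate. A small sketch, using the rates from the table (cached input tokens bill at the cheaper cache-read rate):

```python
# Rates from the price table, in USD per million tokens.
INPUT_PER_M = 0.10
OUTPUT_PER_M = 0.40
CACHE_READ_PER_M = 0.025

def estimate_cost(input_tokens: int, output_tokens: int,
                  cached_input_tokens: int = 0) -> float:
    """Estimate the USD cost of one request at the listed rates."""
    fresh_input = input_tokens - cached_input_tokens
    return (fresh_input * INPUT_PER_M
            + cached_input_tokens * CACHE_READ_PER_M
            + output_tokens * OUTPUT_PER_M) / 1_000_000

# Example: 100K input tokens (80K of them cached) plus 2K output tokens
# costs (20_000 * 0.10 + 80_000 * 0.025 + 2_000 * 0.40) / 1e6 = $0.0048.
```

Because cache reads are 4x cheaper than fresh input here, reusing a long shared prefix across requests cuts input cost substantially.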
Model limitations
Context: 1.05M tokens
Max output: 65.53K tokens
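A quick pre-flight check against these limits can catch oversized requests before they hit the API. A minimal sketch, assuming the advertised figures of roughly 1.05M context and 65.53K max output tokens:

```python
CONTEXT_LIMIT = 1_050_000  # ~1.05M tokens, as listed above
MAX_OUTPUT = 65_530        # ~65.53K tokens, as listed above

def fits(prompt_tokens: int, requested_output: int) -> bool:
    """True if a request stays within the model's advertised limits.

    The prompt and the requested completion must together fit in the
    context window, and the completion alone must not exceed max output.
    """
    return (requested_output <= MAX_OUTPUT
            and prompt_tokens + requested_output <= CONTEXT_LIMIT)
```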
Supported Parameters
Supported: max_completion_tokens, temperature, top_p, seed, stop, tools, tool_choice
Not supported: frequency_penalty, presence_penalty, logit_bias, logprobs, top_logprobs, response_format, parallel_tool_calls
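A request should use only the parameters this model accepts. The sketch below builds an illustrative set of sampling kwargs from the supported list; the specific values (temperature, seed, stop sequence) are arbitrary examples, not recommendations from this page.

```python
def build_sampling_kwargs() -> dict:
    """Sampling kwargs restricted to parameters this model supports.

    Unsupported parameters (e.g. logit_bias, response_format,
    parallel_tool_calls) are deliberately left out.
    """
    return {
        "model": "google/gemini-2.5-flash-lite",
        "max_completion_tokens": 1024,
        "temperature": 0.7,
        "top_p": 0.95,
        "seed": 42,         # seed is supported: reproducible sampling
        "stop": ["\n\n"],   # stop sequences are supported
    }
```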
Model Protocol Compatibility
openai: compatible
anthropic: not compatible
Data policy
Prompt training: false
Prompt logging: Zero retention
Moderation: Responsibility of the developer
Status Page
Latency: 1.50 s
Throughput: 46.55 tps
Uptime: 100.00%
Recent uptime (Oct 10, 2025, 3 PM): 100.00%
Price
Input: $0.10 / M tokens
Output: $0.40 / M tokens
Cache read: $0.025 / M tokens
Cache write (5m): -
Cache write (1h): $1.00 / M tokens
Cache write: -
Web search: $0.035 / request
Model limitations
Context: 1.05M tokens
Max output: 65.53K tokens
Supported Parameters
Supported: max_completion_tokens, temperature, top_p, seed, stop, tools, tool_choice
Not supported: frequency_penalty, presence_penalty, logit_bias, logprobs, top_logprobs, response_format, parallel_tool_calls
Model Protocol Compatibility
openai: compatible
anthropic: not compatible
Data policy
Prompt training: false
Prompt logging: Zero retention
Moderation: Responsibility of the developer
Sample code and API for Gemini 2.5 Flash Lite
ZenMux normalizes requests and responses across providers for you.
python
from openai import OpenAI

client = OpenAI(
  base_url="https://zenmux.ai/api/v1",
  api_key="<ZenMux_API_KEY>",
)

completion = client.chat.completions.create(
  model="google/gemini-2.5-flash-lite",
  messages=[
    {
      "role": "user",
      "content": [
        {
          "type": "text",
          "text": "What is in this image?"
        },
        {
          # The original sample asked about an image but attached none;
          # replace this placeholder URL with your own image.
          "type": "image_url",
          "image_url": {"url": "https://example.com/image.png"}
        }
      ]
    }
  ]
)
print(completion.choices[0].message.content)