Google: Gemini 2.0 Flash Lite
google/gemini-2.0-flash-lite-001
Gemini 2.0 Flash Lite offers a significantly faster time to first token (TTFT) compared to Gemini Flash 1.5, while maintaining quality on par with larger models like Gemini Pro 1.5, all at extremely economical token prices.
By: google
Recent activity on Gemini 2.0 Flash Lite
(Chart: tokens processed per day)
Throughput (tokens/s)

| Provider | Min (tokens/s) | Max (tokens/s) | Avg (tokens/s) |
|---|---|---|---|
| Google Vertex | 5.98 | 100.35 | 13.45 |
First Token Latency (ms)

| Provider | Min (ms) | Max (ms) | Avg (ms) |
|---|---|---|---|
| Google Vertex | 607 | 1293 | 1076.03 |
Providers for Gemini 2.0 Flash Lite
ZenMux routes requests to the best providers able to handle your prompt size and parameters, with fallbacks to maximize uptime.
Latency: 0.75 s
Throughput: 10.28 tps
Uptime: 100.00%
Recent uptime (Oct 10, 2025, 3 PM): 100.00%
Price

| Item | Price |
|---|---|
| Input | $0.075 / M tokens |
| Output | $0.30 / M tokens |
| Cache read | $0.01875 / M tokens |
| Cache write (5 min) | - |
| Cache write (1 h) | $1.00 / M tokens |
| Web search | $0.035 / request |
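As a rough illustration of how the per-million-token rates above combine, here is a small cost estimator. The rates are copied from the price table; the helper function itself is just a sketch, not part of the ZenMux API.

```python
# Estimate the dollar cost of a single Gemini 2.0 Flash Lite request,
# using the listed per-million-token prices. Cached input tokens are
# billed at the cheaper cache-read rate instead of the input rate.

PRICES_PER_M = {
    "input": 0.075,         # $ per M input tokens
    "output": 0.30,         # $ per M output tokens
    "cache_read": 0.01875,  # $ per M cached input tokens read
}

def estimate_cost(input_tokens: int, output_tokens: int, cached_tokens: int = 0) -> float:
    """Return the estimated cost in dollars for one request."""
    uncached = input_tokens - cached_tokens
    return (
        uncached * PRICES_PER_M["input"]
        + cached_tokens * PRICES_PER_M["cache_read"]
        + output_tokens * PRICES_PER_M["output"]
    ) / 1_000_000

# Example: 10k input tokens (2k of them cached) and 1k output tokens
print(f"${estimate_cost(10_000, 1_000, cached_tokens=2_000):.6f}")
```

At these rates even a fairly large request costs well under a tenth of a cent, which is the "extremely economical" pricing the description refers to.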
Model limitations

Context: 1.05M tokens
Max output: 8.19K tokens
Supported Parameters

| Parameter | Supported |
|---|---|
| max_completion_tokens | ✓ |
| temperature | ✓ |
| top_p | ✓ |
| frequency_penalty | - |
| presence_penalty | - |
| seed | ✓ |
| logit_bias | - |
| logprobs | - |
| top_logprobs | - |
| response_format | - |
| stop | ✓ |
| tools | ✓ |
| tool_choice | ✓ |
| parallel_tool_calls | - |
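To make the parameter list concrete, here is a sketch of the keyword arguments an OpenAI-style request might carry, using only parameters marked as supported above. The specific values are arbitrary examples, not recommendations.

```python
# Illustrative request options for Gemini 2.0 Flash Lite via an
# OpenAI-compatible client. Unsupported parameters (e.g. presence_penalty,
# logit_bias, response_format) are deliberately omitted.

request_kwargs = {
    "model": "google/gemini-2.0-flash-lite-001",
    "max_completion_tokens": 1024,  # stay well under the 8.19K output limit
    "temperature": 0.7,
    "top_p": 0.95,
    "seed": 42,          # best-effort reproducibility across runs
    "stop": ["\n\n"],    # cut generation at the first blank line
    "messages": [
        {"role": "user", "content": "Summarize the plot of Hamlet in one sentence."}
    ],
}

# These would be passed through unchanged, e.g.:
# completion = client.chat.completions.create(**request_kwargs)
```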
Model Protocol Compatibility

| Protocol | Supported |
|---|---|
| openai | ✓ |
| anthropic | - |
Data policy

Prompt training: false
Prompt logging: Zero retention
Moderation: Responsibility of developer
Status page: status page
Sample code and API for Gemini 2.0 Flash Lite
ZenMux normalizes requests and responses across providers for you.
OpenAI-Python:
```python
from openai import OpenAI

client = OpenAI(
    base_url="https://zenmux.ai/api/v1",
    api_key="<ZenMux_API_KEY>",
)

completion = client.chat.completions.create(
    model="google/gemini-2.0-flash-lite-001",
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": "What is in this image?",
                },
                # To actually ask about an image, add an image part here,
                # e.g. {"type": "image_url", "image_url": {"url": "..."}}.
            ],
        }
    ],
)
print(completion.choices[0].message.content)
```
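Since the model supports the `tools` and `tool_choice` parameters, a tool-calling request can be built the same way. The weather function below is a made-up example for illustration, not a ZenMux feature; the schema follows the OpenAI function-tool format.

```python
# Sketch of a tool definition that could be passed to
# client.chat.completions.create alongside the usual arguments.

tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",  # hypothetical function for this example
            "description": "Get the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {"type": "string", "description": "City name"},
                },
                "required": ["city"],
            },
        },
    }
]

# The list above would be supplied like so:
# completion = client.chat.completions.create(
#     model="google/gemini-2.0-flash-lite-001",
#     messages=[{"role": "user", "content": "What's the weather in Paris?"}],
#     tools=tools,
#     tool_choice="auto",  # let the model decide whether to call the tool
# )
```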