inclusionAI: Ring-flash-2.0
inclusionai/ring-flash-2.0
Trained on over 20 trillion tokens and refined with supervised fine-tuning and multi-stage reinforcement learning, the model demonstrates strong performance against dense models up to 40B parameters. It excels in complex reasoning, code generation, and frontend development.
By inclusionai
Recent activity on Ring-flash-2.0: [chart of tokens processed per day]
Throughput (tokens/s)

Provider   Min (tokens/s)   Max (tokens/s)   Avg (tokens/s)
Theta      39.01            178.11           38.91
First Token Latency (ms)

Provider   Min (ms)   Max (ms)   Avg (ms)
Theta      279        838        405.56
Providers for Ring-flash-2.0
ZenMux routes your request to the best providers that are able to handle your prompt size and parameters, with fallbacks to maximize uptime.
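ZenMux handles provider fallback server-side, but transient network errors can still reach the client. A minimal client-side retry sketch (the `with_retries` helper is hypothetical, not part of any ZenMux SDK):

```python
import time

# Hypothetical retry helper: ZenMux falls back across providers on its side,
# but transient network failures can still surface at the client, so wrapping
# the call in a retry loop with exponential backoff is a common guard.
def with_retries(call, attempts=3, backoff=0.5):
    """Invoke call() up to `attempts` times, doubling the sleep between tries."""
    for i in range(attempts):
        try:
            return call()
        except Exception:
            if i == attempts - 1:
                raise  # out of retries: surface the last error
            time.sleep(backoff * (2 ** i))

# Usage with the OpenAI-compatible client shown later on this page:
# result = with_retries(lambda: client.chat.completions.create(...))
```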
Latency: 0.34 s
Throughput: 184.46 tps
Uptime: 100.00 %
Recent uptime (Oct 10, 2025 - 3 PM): 100.00%
Price
Input: $0.28 / M tokens
Output: $2.80 / M tokens
Cache read: -
Cache write (5m): -
Cache write (1h): -
Cache write: -
Web search: -
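At the listed rates ($0.28 per million input tokens, $2.80 per million output tokens), per-request cost is simple arithmetic; a small sketch (the helper name is illustrative):

```python
# Estimate request cost at this model's listed rates:
# $0.28 per million input tokens, $2.80 per million output tokens.
INPUT_PRICE_PER_M = 0.28
OUTPUT_PRICE_PER_M = 2.80

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated cost in USD for a single request."""
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_M \
         + (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M

# Example: a 10K-token prompt with a 2K-token completion.
# 10_000/1e6 * 0.28 + 2_000/1e6 * 2.80 = 0.0028 + 0.0056 = $0.0084
print(f"${estimate_cost(10_000, 2_000):.4f}")
```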
Model limitations
Context: 128.00K tokens
Max output: 32.00K tokens
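A quick pre-flight check against these limits can avoid rejected requests; a sketch assuming 128.00K means 128,000 tokens and 32.00K means 32,000 (the page does not say whether these are decimal or binary multiples):

```python
# Assumed limits from the listing: 128.00K-token context window and
# 32.00K-token max output, read here as decimal 128,000 and 32,000.
CONTEXT_LIMIT = 128_000
MAX_OUTPUT = 32_000

def fits_in_context(prompt_tokens: int, requested_output: int) -> bool:
    """True if the prompt plus the requested completion fit the model's limits."""
    if requested_output > MAX_OUTPUT:
        return False
    return prompt_tokens + requested_output <= CONTEXT_LIMIT

print(fits_in_context(100_000, 16_000))  # fits: 116,000 <= 128,000
print(fits_in_context(100_000, 40_000))  # rejected: exceeds the 32K output cap
```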
Supported Parameters
max_completion_tokens: supported
temperature: supported
top_p: supported
frequency_penalty: supported
presence_penalty: supported
seed: not supported
logit_bias: not supported
logprobs: supported
top_logprobs: not supported
response_format: supported
stop: supported
tools: not supported
tool_choice: not supported
parallel_tool_calls: not supported
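A request that sticks to the supported parameters might look like the sketch below; the prompt text and the specific values are illustrative, and the unsupported fields (seed, logit_bias, top_logprobs, and the tool-calling family) are deliberately left out:

```python
# Illustrative request payload using only parameters listed as supported
# for this model; values are examples, not recommendations.
request = {
    "model": "inclusionai/ring-flash-2.0",
    "messages": [{"role": "user", "content": "Summarize quicksort in two sentences."}],
    "max_completion_tokens": 256,
    "temperature": 0.7,
    "top_p": 0.9,
    "frequency_penalty": 0.0,
    "presence_penalty": 0.0,
    "stop": ["\n\n"],
}

# Guard: none of the fields listed as unsupported appear in the payload.
UNSUPPORTED = {"seed", "logit_bias", "top_logprobs",
               "tools", "tool_choice", "parallel_tool_calls"}
assert not UNSUPPORTED & request.keys()
```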
Model Protocol Compatibility
openai: supported
anthropic: not supported
Data policy
Prompt training: false
Prompt logging: Zero retention
Moderation: Responsibility of developer
Sample code and API for Ring-flash-2.0
ZenMux normalizes requests and responses across providers for you.
```python
from openai import OpenAI

client = OpenAI(
  base_url="https://zenmux.ai/api/v1",
  api_key="<ZenMux_API_KEY>",
)

completion = client.chat.completions.create(
  model="inclusionai/ring-flash-2.0",
  messages=[
    {
      "role": "user",
      "content": "Explain the difference between breadth-first and depth-first search."
    }
  ]
)
print(completion.choices[0].message.content)
```
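The same client can also stream tokens incrementally; `stream=True` is the standard OpenAI-SDK flag, and the chunk shape below follows the OpenAI streaming format (an assumption for this provider, since the page only shows the non-streaming call). A sketch of accumulating streamed deltas:

```python
# Sketch: with stream=True, the OpenAI SDK yields chunks whose
# choices[0].delta.content carries incremental text (None on some chunks).
def collect(chunks) -> str:
    """Concatenate the text deltas from a stream of chat-completion chunks."""
    parts = []
    for chunk in chunks:
        delta = chunk.choices[0].delta.content
        if delta:
            parts.append(delta)
    return "".join(parts)

# Live usage (requires a real API key; `client` is defined above):
# stream = client.chat.completions.create(
#     model="inclusionai/ring-flash-2.0",
#     messages=[{"role": "user", "content": "Count to five."}],
#     stream=True,
# )
# print(collect(stream))
```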