Inception: Mercury
inception/mercury
Context Window: 128K
Max Output: 32K
Supported Parameters: max_tokens, stop, temperature, tools, tool_choice, response_format, structured_outputs
Status: Online
Mercury is the first diffusion large language model (dLLM). Applying a breakthrough discrete diffusion approach, it runs 5-10x faster than even speed-optimized models like GPT-4.1 Nano and Claude 3.5 Haiku while matching their performance. Mercury's speed lets developers build responsive user experiences, including voice agents, search interfaces, and chatbots. Read more in the [blog post](https://www.inceptionlabs.ai/blog/introducing-mercury).
Capabilities
🔧 Function Calling · Text Generation · Code Generation · Analysis & Reasoning
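A minimal function-calling sketch, assuming the standard OpenAI-compatible tool-calling semantics implied by the tools and tool_choice parameters listed above; the get_weather tool and its schema are hypothetical:

```python
from openai import OpenAI

client = OpenAI(
    base_url="https://api.uniontoken.ai/v1",
    api_key="YOUR_UNIONTOKEN_API_KEY",
)

# Hypothetical tool definition; `tools` and `tool_choice` appear in the
# model's supported parameter list above.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="inception/mercury",
    messages=[{"role": "user", "content": "What's the weather in Tokyo?"}],
    tools=tools,
    tool_choice="auto",
)

# If the model chose to call the tool, the call arrives here.
for call in response.choices[0].message.tool_calls or []:
    print(call.function.name, call.function.arguments)
```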
Technical Specs
Input Modality: Text
Output Modality: Text
Architecture: —
Default Temperature: 0
Pricing
Pay per use, no monthly fees.
Input Tokens: < ¥0.001 / 1K tokens
Output Tokens: < ¥0.001 / 1K tokens
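For a rough upper bound on per-request cost at the listed ceiling price (the token counts below are made-up examples):

```python
# Both input and output are priced below ¥0.001 per 1K tokens.
PRICE_CEILING_PER_1K = 0.001  # ¥, upper bound from the pricing table

def max_cost(input_tokens: int, output_tokens: int) -> float:
    """Worst-case cost in ¥ for one request at the listed ceiling."""
    return (input_tokens + output_tokens) / 1000 * PRICE_CEILING_PER_1K

# e.g. a 2,000-token prompt with a 500-token reply costs under ¥0.0025
print(max_cost(2000, 500))
```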
Quick Start
```python
from openai import OpenAI

# Point the OpenAI SDK at the UnionToken-compatible endpoint.
client = OpenAI(
    base_url="https://api.uniontoken.ai/v1",
    api_key="YOUR_UNIONTOKEN_API_KEY",
)

response = client.chat.completions.create(
    model="inception/mercury",
    messages=[
        {"role": "user", "content": "Hello!"}
    ],
)

print(response.choices[0].message.content)
```
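The model also lists response_format and structured_outputs among its supported parameters. A minimal sketch of schema-constrained output, assuming the OpenAI-style json_schema response format; the prime_answer schema is a made-up example:

```python
from openai import OpenAI

client = OpenAI(
    base_url="https://api.uniontoken.ai/v1",
    api_key="YOUR_UNIONTOKEN_API_KEY",
)

response = client.chat.completions.create(
    model="inception/mercury",
    messages=[
        {"role": "user", "content": "Name one prime number and say why it is prime."}
    ],
    # Listed parameters in action: cap length, pin sampling, constrain shape.
    max_tokens=256,
    temperature=0,  # matches the model's default
    response_format={
        "type": "json_schema",
        "json_schema": {
            "name": "prime_answer",  # hypothetical schema
            "schema": {
                "type": "object",
                "properties": {
                    "prime": {"type": "integer"},
                    "reason": {"type": "string"},
                },
                "required": ["prime", "reason"],
            },
        },
    },
)

print(response.choices[0].message.content)  # JSON matching the schema
```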