LiquidAI: LFM2-24B-A2B
liquid/lfm-2-24b-a2b
Context Window: 33K
Supported Parameters: max_tokens, temperature, top_p, stop, frequency_penalty, presence_penalty, top_k, repetition_penalty, logit_bias, min_p
Status: Online
LFM2-24B-A2B is the largest model in the LFM2 family of hybrid architectures designed for efficient on-device deployment. Built as a 24B parameter Mixture-of-Experts model with only 2B active parameters per token, it delivers high-quality generation while maintaining low inference costs. The model fits within 32 GB of RAM, making it practical to run on consumer laptops and desktops without sacrificing capability.
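The "fits within 32 GB of RAM" claim implies a quantized deployment, since full-precision weights for 24B parameters alone would exceed consumer-laptop memory. A minimal back-of-the-envelope sketch (illustrative arithmetic only; real footprints also include KV cache and runtime overhead, and the precision assumptions are ours, not the vendor's):

```python
# Rough weight-memory arithmetic for a 24B-parameter model at
# different precisions. Ignores KV cache and runtime overhead.

def weight_memory_gb(params_billion: float, bytes_per_param: float) -> float:
    """Approximate weight storage in GB (1 GB = 1e9 bytes)."""
    return params_billion * 1e9 * bytes_per_param / 1e9

for label, bytes_per_param in [("bf16", 2.0), ("int8", 1.0), ("int4", 0.5)]:
    gb = weight_memory_gb(24, bytes_per_param)
    fits = "fits in 32 GB" if gb <= 32 else "exceeds 32 GB"
    print(f"{label}: ~{gb:.0f} GB ({fits})")
```

Note that the 2B *active* parameters per token lower compute per step, not resident memory: all 24B expert weights still need to be loaded.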
Capabilities
Text Generation, Code Generation, Analysis & Reasoning
Technical Specs
Input Modality: Text
Output Modality: Text
Architecture: —
Default Temperature: 0.1
Default Top_K: 50
Pricing
Pay per use, no monthly fees.
Input Tokens: < ¥0.001/1K tokens
Output Tokens: < ¥0.001/1K tokens
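At a flat per-1K-token rate, estimating a request's cost is simple arithmetic. A hedged sketch using the listed upper bound of ¥0.001/1K tokens for both directions (actual rates may be lower; the function name and example token counts are ours):

```python
# Upper-bound cost estimate at a flat per-1K-token rate for both
# input and output tokens.

def request_cost_yuan(input_tokens: int, output_tokens: int,
                      rate_per_1k: float = 0.001) -> float:
    """Cost in yuan for one request, charging input and output
    tokens at the same per-1K rate."""
    return (input_tokens + output_tokens) / 1000 * rate_per_1k

# e.g. a 2,000-token prompt with a 500-token completion:
print(f"~¥{request_cost_yuan(2000, 500):.4f}")
```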
Quick Start
from openai import OpenAI

# Point the OpenAI-compatible SDK at the UnionToken endpoint.
client = OpenAI(
    base_url="https://api.uniontoken.ai/v1",
    api_key="YOUR_UNIONTOKEN_API_KEY",
)

response = client.chat.completions.create(
    model="liquid/lfm-2-24b-a2b",
    messages=[
        {"role": "user", "content": "Hello!"}
    ],
)
print(response.choices[0].message.content)
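Beyond the minimal call above, the sampling knobs listed under Supported Parameters can be set per request. A sketch of a full request payload (the values are illustrative, not recommendations; when using the OpenAI Python SDK, non-standard fields such as top_k, min_p, and repetition_penalty would be passed via its extra_body argument rather than as named parameters):

```python
# Assemble a /v1/chat/completions payload using the supported
# sampling parameters. Values are illustrative only.

def build_payload(prompt: str) -> dict:
    return {
        "model": "liquid/lfm-2-24b-a2b",
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 256,
        "temperature": 0.1,          # listed default
        "top_p": 0.95,
        "top_k": 50,                 # listed default
        "min_p": 0.05,
        "repetition_penalty": 1.05,  # provider extension
    }

payload = build_payload("Hello!")
print(payload["model"])
```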
Ready to get started?
Get 1M free tokens on registration, no monthly fees or minimum spend