
Mistral: Mixtral 8x7B Instruct

mistralai/mixtral-8x7b-instruct
Context Window: 33K
Max Output: 16K
Supported Parameters: max_tokens, temperature, top_p, stop, frequency_penalty, presence_penalty, repetition_penalty, top_k, seed, min_p, response_format, tools, tool_choice
Status: Online

Mixtral 8x7B Instruct is a pretrained generative Sparse Mixture-of-Experts (MoE) model from Mistral AI, built for chat and instruction use. It incorporates 8 experts (feed-forward networks) for a total of roughly 47 billion parameters, and the Instruct variant is fine-tuned by Mistral. #moe

Capabilities

🔧 Function Calling · Text Generation · Code Generation · Analysis & Reasoning
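
Because tools and tool_choice appear in the supported-parameter list above, function calling can be exercised through the standard OpenAI-style tools format. A minimal sketch: the get_weather tool and its schema are hypothetical, the API key is a placeholder, and only the endpoint and model ID come from this page.

from openai import OpenAI

client = OpenAI(
    base_url="https://api.uniontoken.ai/v1",
    api_key="YOUR_UNIONTOKEN_API_KEY",
)

# Hypothetical tool definition: describe the function the model may call.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Look up the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }
]

response = client.chat.completions.create(
    model="mistralai/mixtral-8x7b-instruct",
    messages=[{"role": "user", "content": "What's the weather in Paris right now?"}],
    tools=tools,
    tool_choice="auto",  # let the model decide whether to call the tool
)

# If the model chose to call the tool, its arguments arrive as a JSON string.
message = response.choices[0].message
if message.tool_calls:
    call = message.tool_calls[0]
    print(call.function.name, call.function.arguments)
else:
    print(message.content)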

Technical Specs

Input Modality: Text
Output Modality: Text
Architecture: Sparse Mixture of Experts
Default Temperature: 0.3

Pricing

Pay per use, no monthly fees
Input Tokens: < ¥0.001 / 1K tokens
Output Tokens: < ¥0.001 / 1K tokens
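
Since billing is pay-per-use, a request's cost can be bounded from the token counts returned in the response's usage object. A rough sketch, assuming the listed ceiling of ¥0.001 per 1K tokens applies to both input and output:

def estimate_cost(prompt_tokens: int, completion_tokens: int,
                  rate_per_1k: float = 0.001) -> float:
    """Upper-bound cost in ¥ at the listed < ¥0.001 / 1K-token rate."""
    return (prompt_tokens + completion_tokens) / 1000 * rate_per_1k

# Token counts come back on every response, e.g.
# estimate_cost(response.usage.prompt_tokens, response.usage.completion_tokens)
print(estimate_cost(prompt_tokens=1200, completion_tokens=300))  # 0.0015 (¥)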

Quick Start

from openai import OpenAI

client = OpenAI(
    base_url="https://api.uniontoken.ai/v1",
    api_key="YOUR_UNIONTOKEN_API_KEY",
)

response = client.chat.completions.create(
    model="mistralai/mixtral-8x7b-instruct",
    messages=[
        {"role": "user", "content": "Hello!"}
    ],
)

print(response.choices[0].message.content)
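
The sampling controls from the supported-parameter list (max_tokens, temperature, top_p, stop, seed, and the penalty options) are passed on the same create call. A sketch reusing the client above; the values are purely illustrative, and setting temperature overrides the 0.3 default shown in Technical Specs.

response = client.chat.completions.create(
    model="mistralai/mixtral-8x7b-instruct",
    messages=[
        {"role": "user", "content": "List three unusual uses for a paperclip."}
    ],
    max_tokens=256,     # cap the completion length
    temperature=0.7,    # overrides the 0.3 default
    top_p=0.9,
    seed=42,            # best-effort reproducibility across runs
    stop=["\n\n"],      # stop at the first blank line
)

print(response.choices[0].message.content)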


Ready to get started?

Get 1M free tokens on registration, no monthly fees or minimum spend

Register Now →