Meta: Llama 3.3 70B Instruct
meta-llama/llama-3.3-70b-instruct
Context Window: 131K
Max Output: 16K
Supported Parameters: max_tokens, temperature, top_p, stop, frequency_penalty, presence_penalty, repetition_penalty, top_k, seed, min_p, response_format, tools, tool_choice
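The parameters above map directly onto an OpenAI-compatible request body. A minimal sketch of assembling one; the values shown are arbitrary illustrations, not recommended defaults:

```python
# Sketch: a request payload using several of the supported parameters.
# Values are illustrative only, not recommended defaults.
payload = {
    "model": "meta-llama/llama-3.3-70b-instruct",
    "messages": [{"role": "user", "content": "Summarize Llama 3.3 in one line."}],
    "max_tokens": 256,          # capped by the model's 16K max output
    "temperature": 0.7,
    "top_p": 0.9,
    "frequency_penalty": 0.2,
    "seed": 42,                 # best-effort reproducibility
    "stop": ["\n\n"],
}
# Send it with an OpenAI-compatible client:
#   response = client.chat.completions.create(**payload)
```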
Online
The Meta Llama 3.3 multilingual large language model (LLM) is a pretrained and instruction-tuned generative model with 70B parameters (text in/text out). The Llama 3.3 instruction-tuned, text-only model is optimized for multilingual dialogue use cases and outperforms many available open-source and closed chat models on common industry benchmarks. Supported languages: English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai. [Model Card](https://github.com/meta-llama/llama-models/blob/main/models/llama3_3/MODEL_CARD.md)
Capabilities
🔧 Function Calling · Text Generation · Code Generation · Analysis & Reasoning
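Function calling uses the OpenAI-compatible `tools`/`tool_choice` format listed under Supported Parameters. A sketch of defining a tool and routing the model's tool call back to local code; `get_weather` is a hypothetical tool, and the dispatcher result is stubbed:

```python
import json

# Sketch: a tool definition in the OpenAI-compatible function-calling format.
# `get_weather` is a hypothetical example tool, not part of any real API.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

def dispatch(tool_name, arguments_json):
    """Route a tool call returned by the model to local code (stubbed)."""
    args = json.loads(arguments_json)
    if tool_name == "get_weather":
        return {"city": args["city"], "forecast": "sunny"}  # stubbed result
    raise ValueError(f"unknown tool: {tool_name}")

# Pass `tools` (and optionally `tool_choice`) in the request, e.g.:
#   client.chat.completions.create(model=..., messages=..., tools=tools)
# then append each dispatch(...) result as a "tool" role message and re-send.
```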
Technical Specs
Input Modality
Text
Output Modality
Text
Architecture
—
Pricing
Pay per use, no monthly fees.
Input Tokens: < ¥0.001/1K tokens
Output Tokens: < ¥0.001/1K tokens
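Because the listed rates are ceilings (both directions under ¥0.001 per 1K tokens), an upper-bound cost estimate is simple arithmetic. A sketch, assuming the advertised maximum rate; actual billing may be lower:

```python
# Upper-bound cost at the advertised ceiling of < ¥0.001 / 1K tokens
# for both input and output.
RATE_PER_1K = 0.001  # yuan per 1K tokens (advertised maximum)

def max_cost_yuan(input_tokens, output_tokens):
    """Worst-case cost in yuan for one request at the ceiling rate."""
    return (input_tokens + output_tokens) / 1000 * RATE_PER_1K

# e.g. a 100K-token prompt with a 16K-token completion costs at most
# about ¥0.116 at the ceiling rate.
```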
Quick Start
```python
from openai import OpenAI

client = OpenAI(
    base_url="https://api.uniontoken.ai/v1",
    api_key="YOUR_UNIONTOKEN_API_KEY",
)

response = client.chat.completions.create(
    model="meta-llama/llama-3.3-70b-instruct",
    messages=[
        {"role": "user", "content": "Hello!"}
    ],
)

print(response.choices[0].message.content)
```
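The 131K context window and 16K output cap are enforced server-side, so for long conversations it helps to trim old history client-side before sending a request. A rough sketch using the common ~4-characters-per-token heuristic (an assumption; a real implementation would count with the model's actual tokenizer):

```python
# Rough sketch: drop oldest messages so the conversation fits a token budget.
# Uses a crude ~4 chars/token heuristic, NOT the model's real tokenizer.
CONTEXT_TOKENS = 131_000
RESERVED_FOR_OUTPUT = 16_000  # leave room for the completion

def rough_tokens(message):
    """Very rough token estimate for one chat message."""
    return max(1, len(message["content"]) // 4)

def trim_history(messages, budget=CONTEXT_TOKENS - RESERVED_FOR_OUTPUT):
    """Keep the most recent messages whose rough token total fits the budget."""
    kept, total = [], 0
    for msg in reversed(messages):
        cost = rough_tokens(msg)
        if total + cost > budget:
            break
        kept.append(msg)
        total += cost
    return list(reversed(kept))
```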
Related Models
- Meta: Llama 3.2 90B Vision Instruct (meta-llama/llama-3.2-90b-vision-instruct): < ¥0.001/1K
- Meta: Llama 3.2 3B Instruct (meta-llama/llama-3.2-3b-instruct): < ¥0.001/1K
- Meta: Llama 3.2 1B Instruct (meta-llama/llama-3.2-1b-instruct): < ¥0.001/1K
- Meta: Llama 3.2 11B Vision Instruct (meta-llama/llama-3.2-11b-vision-instruct): < ¥0.001/1K