NVIDIA: Llama 3.1 Nemotron 70B Instruct
nvidia/llama-3.1-nemotron-70b-instruct
Context Window: 131K
Max Output: 16K
Supported Parameters: max_tokens, temperature, top_p, stop, frequency_penalty, presence_penalty, repetition_penalty, top_k, seed, min_p, response_format, tools, tool_choice
Status: Online
NVIDIA's Llama 3.1 Nemotron 70B is a language model designed for generating precise and useful responses. Built on the [Llama 3.1 70B](/models/meta-llama/llama-3.1-70b-instruct) architecture and aligned with Reinforcement Learning from Human Feedback (RLHF), it scores highly on automatic alignment benchmarks. The model is tailored for applications requiring high accuracy in helpfulness and response generation, and suits diverse user queries across many domains. Usage of this model is subject to [Meta's Acceptable Use Policy](https://www.llama.com/llama3/use-policy/).
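As a hedged sketch, the supported sampling parameters listed above map onto a standard OpenAI-compatible request body like the one below. The parameter values are illustrative, not recommendations:

```python
import json

# Example request body for POST {base_url}/chat/completions, exercising
# several of the supported parameters. Values are illustrative only.
request_body = {
    "model": "nvidia/llama-3.1-nemotron-70b-instruct",
    "messages": [{"role": "user", "content": "Summarize RLHF in one sentence."}],
    "max_tokens": 256,         # hard cap on generated tokens (model max: 16K)
    "temperature": 0.7,        # sampling temperature
    "top_p": 0.9,              # nucleus sampling cutoff
    "frequency_penalty": 0.2,  # discourage verbatim repetition
    "seed": 42,                # best-effort reproducibility
    "stop": ["\n\n"],          # stop generation at these sequences
}
print(json.dumps(request_body, indent=2))
```

The same keys can be passed as keyword arguments to `client.chat.completions.create(...)` when using the OpenAI Python SDK.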
Capabilities
🔧 Function Calling · Text Generation · Code Generation · Analysis & Reasoning
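Since the model supports the `tools` and `tool_choice` parameters, function calling follows the usual OpenAI-compatible schema. A minimal sketch, where the `get_weather` function is hypothetical and exists only for illustration:

```python
# Hypothetical tool definition in the OpenAI function-calling schema.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Look up the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }
]

# These kwargs would be passed to client.chat.completions.create(...);
# with tool_choice="auto" the model decides whether to call the tool.
request_kwargs = {
    "model": "nvidia/llama-3.1-nemotron-70b-instruct",
    "messages": [{"role": "user", "content": "What's the weather in Tokyo?"}],
    "tools": tools,
    "tool_choice": "auto",
}
print(request_kwargs["tools"][0]["function"]["name"])
```

When the model decides to call a tool, the response carries `tool_calls` on the assistant message instead of plain text content.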
Technical Specs
Input Modality
Text
Output Modality
Text
Architecture
—
Pricing
Pay per use, no monthly fees
Input Token: < ¥0.001/1K tokens
Output Token: < ¥0.001/1K tokens
Quick Start
from openai import OpenAI

# Point the OpenAI-compatible client at the UnionToken endpoint.
client = OpenAI(
    base_url="https://api.uniontoken.ai/v1",
    api_key="YOUR_UNIONTOKEN_API_KEY",
)

response = client.chat.completions.create(
    model="nvidia/llama-3.1-nemotron-70b-instruct",
    messages=[
        {"role": "user", "content": "Hello!"}
    ],
)
print(response.choices[0].message.content)
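For incremental output, the same call can be made with `stream=True`, assuming the endpoint supports the standard OpenAI streaming protocol. Chunks arrive as deltas that are concatenated to rebuild the full reply; the helper below demonstrates the pattern against simulated chunks shaped like the wire format:

```python
def collect_stream(chunks):
    """Join the text deltas of a chat-completion stream into one string."""
    parts = []
    for chunk in chunks:
        delta = chunk["choices"][0]["delta"].get("content")
        if delta:
            parts.append(delta)
    return "".join(parts)

# With the real client the loop would read attribute-style deltas:
#   stream = client.chat.completions.create(model=..., messages=..., stream=True)
#   for chunk in stream:
#       text = chunk.choices[0].delta.content
# Simulated chunks, for illustration only:
fake_stream = [
    {"choices": [{"delta": {"role": "assistant"}}]},
    {"choices": [{"delta": {"content": "Hello"}}]},
    {"choices": [{"delta": {"content": "!"}}]},
]
print(collect_stream(fake_stream))  # → Hello!
```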
Related Models
View All →
NVIDIA: Nemotron-4 340B Instruct
nvidia/nemotron-4-340b-instruct
< ¥0.001/1K
NVIDIA: Nemotron Nano 9B V2
nvidia/nemotron-nano-9b-v2
< ¥0.001/1K
NVIDIA: Llama 3.1 Nemotron Nano 8B v1
nvidia/llama-3.1-nemotron-nano-8b-v1
< ¥0.001/1K
NVIDIA: Llama 3.3 Nemotron Super 49B v1
nvidia/llama-3.3-nemotron-super-49b-v1
< ¥0.001/1K
Ready to get started?
Get 1M free tokens on registration, no monthly fees or minimum spend