Perplexity: Embed V1 0.6B
perplexity/pplx-embed-v1-0.6b
Context Window: 32K
Supported Parameters: max_tokens, temperature, top_p, top_k, frequency_penalty, presence_penalty, web_search_options
Online
pplx-embed-v1-0.6b is one of Perplexity's state-of-the-art text embedding models, built for real-world, web-scale retrieval. The pplx-embed-v1 family is optimized for standard dense text retrieval, with this 0.6B-parameter model targeting lightweight, low-latency embedding generation.
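To illustrate what "dense text retrieval" means in practice, here is a minimal sketch of ranking documents by cosine similarity between embedding vectors. The helper names (`cosine_similarity`, `rank_documents`) and the toy vectors are hypothetical; in real use the vectors would come from the model's embeddings endpoint.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors of equal length."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def rank_documents(query_vec, doc_vecs):
    """Return document indices sorted by similarity to the query, best first."""
    scores = [cosine_similarity(query_vec, v) for v in doc_vecs]
    return sorted(range(len(doc_vecs)), key=lambda i: scores[i], reverse=True)

# Toy 3-dimensional vectors standing in for real model embeddings.
query = [1.0, 0.0, 0.0]
docs = [
    [0.0, 1.0, 0.0],   # orthogonal to the query
    [0.9, 0.1, 0.0],   # nearly parallel to the query
    [-1.0, 0.0, 0.0],  # opposite direction
]
print(rank_documents(query, docs))  # most similar document first
```

In a retrieval pipeline, the query and each document are embedded once, and this ranking step is what the "low-latency" positioning of the 0.6B model is meant to serve.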
Capabilities
Text Generation · Code Generation · Analysis & Reasoning
Technical Specs
Input Modality
Text
Output Modality
Text
Architecture
—
Pricing
Pay per use, no monthly fees
Input Tokens: < ¥0.001/1K tokens
Output Tokens: < ¥0.001/1K tokens
Quick Start
from openai import OpenAI

# The endpoint is OpenAI-compatible; as an embedding model,
# pplx-embed-v1-0.6b is queried via the embeddings API rather
# than chat completions.
client = OpenAI(
    base_url="https://api.uniontoken.ai/v1",
    api_key="YOUR_UNIONTOKEN_API_KEY",
)

response = client.embeddings.create(
    model="perplexity/pplx-embed-v1-0.6b",
    input="Hello!",
)

print(response.data[0].embedding[:8])  # first few dimensions of the vector
Related Models
View All →
Perplexity: Llama 3.1 Sonar 8B Online
perplexity/llama-3.1-sonar-small-128k-online
< ¥0.001/1K
Perplexity: Llama 3.1 Sonar 70B Online
perplexity/llama-3.1-sonar-large-128k-online
< ¥0.001/1K
Perplexity: Llama3 Sonar 70B
perplexity/llama-3-sonar-large-32k-chat
< ¥0.001/1K
Perplexity: Llama3 Sonar 8B Online
perplexity/llama-3-sonar-small-32k-online
< ¥0.001/1K
Ready to get started?
Get 1M free tokens on registration, no monthly fees or minimum spend
Register Now →