Dolphin 2.6 Mixtral 8x7B 🐬
cognitivecomputations/dolphin-mixtral-8x7b
Context Window: 33K
Online
This is a 16k-context fine-tune of [Mixtral-8x7b](/models/mistralai/mixtral-8x7b). It excels at coding tasks thanks to extensive training on coding data and is known for its obedience, although it lacks DPO tuning. The model is uncensored, with alignment and bias filtering stripped out, and it requires an external alignment layer for ethical use. Users are cautioned to use this highly compliant model responsibly, as detailed in the blog post on uncensored models at [erichartford.com/uncensored-models](https://erichartford.com/uncensored-models). #moe #uncensored
Capabilities
Text Generation · Code Generation · Analysis & Reasoning
Technical Specs
Input Modality
Text
Output Modality
Text
Architecture
Mixture of Experts (MoE)
Pricing
Pay per use, no monthly fees
Input Tokens: < ¥0.001/1K tokens
Output Tokens: < ¥0.001/1K tokens
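Because billing is pay-per-use at a flat per-1K-token rate, the worst-case cost of a request is easy to bound. A minimal sketch, assuming the listed "< ¥0.001/1K" ceiling applies to both input and output tokens; the constant and function names below are illustrative, not part of any API:

```python
# Assumed upper bound taken from the pricing table: "< ¥0.001 / 1K tokens".
# Actual billed rates may be lower; this estimates a ceiling, not the bill.
PRICE_PER_1K_YEN = 0.001

def max_cost_yen(input_tokens: int, output_tokens: int) -> float:
    """Worst-case cost in yen for one request at the listed ceiling rate."""
    return (input_tokens + output_tokens) / 1000 * PRICE_PER_1K_YEN

# A request that fills the full 33K context window plus a 1K-token reply:
print(f"{max_cost_yen(33_000, 1_000):.3f}")  # at most ¥0.034
```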
Quick Start
from openai import OpenAI

client = OpenAI(
    base_url="https://api.uniontoken.ai/v1",
    api_key="YOUR_UNIONTOKEN_API_KEY",
)

response = client.chat.completions.create(
    model="cognitivecomputations/dolphin-mixtral-8x7b",
    messages=[
        {"role": "user", "content": "Hello!"}
    ],
)
print(response.choices[0].message.content)
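The description above notes that this model ships without alignment and expects an external alignment layer. One lightweight approach is to prepend your own system prompt to every request before sending it; a minimal sketch, where the prompt text and helper name are illustrative assumptions, not part of the API:

```python
# Hypothetical alignment layer: prepend a caller-supplied system prompt so
# the uncensored model always sees your alignment instructions first.
SYSTEM_PROMPT = "You are a helpful assistant. Refuse harmful requests."

def with_alignment(messages: list[dict]) -> list[dict]:
    """Return the message list with the alignment system prompt prepended."""
    return [{"role": "system", "content": SYSTEM_PROMPT}] + messages

# Pass the wrapped list to client.chat.completions.create(...) as usual.
payload = with_alignment([{"role": "user", "content": "Hello!"}])
print(payload[0]["role"])  # system
```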
Related Models
View All →
Dolphin Llama 3 70B 🐬
cognitivecomputations/dolphin-llama-3-70b
< ¥0.001/1K
Dolphin 2.9.2 Mixtral 8x22B 🐬
cognitivecomputations/dolphin-mixtral-8x22b
< ¥0.001/1K
Dolphin3.0 R1 Mistral 24B
cognitivecomputations/dolphin3.0-r1-mistral-24b
< ¥0.001/1K
Dolphin3.0 Mistral 24B
cognitivecomputations/dolphin3.0-mistral-24b
< ¥0.001/1K
Ready to get started?
Get 1M free tokens on registration, no monthly fees or minimum spend
Register Now →