Mixtral 8x22b Instruct
Mistral's official instruct fine-tuned version of [Mixtral 8x22B](/models/mistralai/mixtral-8x22b). It uses 39B active parameters out of 141B, offering unparalleled cost efficiency for its size. Its strengths include:

- strong math, coding, and reasoning
- large context length (64k)
- fluency in English, French, Italian, German, and Spanish

See benchmarks in the launch announcement [here](https://mistral.ai/news/mixtral-8x22b/). #moe
Pricing
Quick Start
Just change the base_url to call Mixtral 8x22b Instruct through the BazaarLink API:
```python
from openai import OpenAI

client = OpenAI(
    base_url="https://bazaarlink.ai/api/v1",
    api_key="sk-bl-YOUR_API_KEY",
)

response = client.chat.completions.create(
    model="mistralai/mixtral-8x22b-instruct",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```

```typescript
import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "https://bazaarlink.ai/api/v1",
  apiKey: "sk-bl-YOUR_API_KEY",
});

const response = await client.chat.completions.create({
  model: "mistralai/mixtral-8x22b-instruct",
  messages: [{ role: "user", content: "Hello!" }],
});
console.log(response.choices[0].message.content);
```

Why use Mixtral 8x22b Instruct via BazaarLink?
- ✓ USD billing with TWD quotes and Taiwan uniform invoices (統一發票): no overseas credit card needed for Taiwanese teams
- ✓ OpenAI-compatible API: no code rewrites required
- ✓ Automatic failover: multi-provider redundancy for the same model
- ✓ Chinese-language customer support: real-time help from a local team
Frequently Asked Questions
What is Mixtral 8x22b Instruct?
Mistral's official instruct fine-tuned version of [Mixtral 8x22B](/models/mistralai/mixtral-8x22b). It uses 39B active parameters out of 141B, offering unparalleled cost efficiency for its size. Its strengths include strong math, coding, and reasoning; a large 64k context length; and fluency in English, French, Italian, German, and Spanish. See benchmarks in the launch announcement [here](https://mistral.ai/news/mixtral-8x22b/). #moe
How much does the Mixtral 8x22b Instruct API cost?
Mixtral 8x22b Instruct costs $2.0000 per 1K input tokens and $6.0000 per 1K output tokens when accessed through BazaarLink.
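Using the per-1K rates quoted above, a quick spend estimate can be sketched as follows (a sketch only; your BazaarLink invoice is the source of truth for actual billing):

```python
# Estimate request cost from BazaarLink's listed per-1K-token rates.
INPUT_PRICE_PER_1K = 2.00   # USD per 1K input tokens (rate quoted above)
OUTPUT_PRICE_PER_1K = 6.00  # USD per 1K output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost of a single request."""
    return (input_tokens / 1000) * INPUT_PRICE_PER_1K \
         + (output_tokens / 1000) * OUTPUT_PRICE_PER_1K

# A 500-token prompt with a 200-token reply:
print(f"${estimate_cost(500, 200):.2f}")  # $2.20
```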
How do I use Mixtral 8x22b Instruct with the OpenAI SDK?
Set base_url to "https://bazaarlink.ai/api/v1" and use the model ID "mistralai/mixtral-8x22b-instruct". Standard OpenAI SDK calls (chat.completions, embeddings) and features such as streaming work without code changes.
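Streaming follows the standard OpenAI SDK pattern. A minimal Python sketch, reusing the endpoint and key placeholder from the Quick Start above (the prompt is just an example):

```python
from openai import OpenAI

client = OpenAI(
    base_url="https://bazaarlink.ai/api/v1",
    api_key="sk-bl-YOUR_API_KEY",
)

# stream=True yields chunks as tokens are generated; each chunk's
# delta.content holds the newly generated text (None on the final chunk).
stream = client.chat.completions.create(
    model="mistralai/mixtral-8x22b-instruct",
    messages=[{"role": "user", "content": "Write a haiku about the sea."}],
    stream=True,
)
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
print()
```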
What is the context window for Mixtral 8x22b Instruct?
Mixtral 8x22b Instruct supports a context window of 65,536 tokens.
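That budget is shared by prompt and completion, so long prompts leave less room for output. A rough pre-flight check can be sketched as below; the ~4-characters-per-token heuristic is an assumption for English text, not the model's real tokenizer:

```python
CONTEXT_WINDOW = 65_536  # tokens, shared by prompt and completion

def fits_in_context(prompt: str, max_output_tokens: int = 1024) -> bool:
    """Heuristic check: assume ~4 characters per token on average."""
    estimated_prompt_tokens = len(prompt) // 4
    return estimated_prompt_tokens + max_output_tokens <= CONTEXT_WINDOW

print(fits_in_context("Summarize this report."))  # True
print(fits_in_context("x" * 400_000))             # False (~100K estimated tokens)
```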
Is Mixtral 8x22b Instruct available for free?
Mixtral 8x22b Instruct is a paid model. BazaarLink offers free trial credits on registration so you can test it without a credit card.