BazaarLink

LFM2-24B-A2B

LFM2-24B-A2B is the largest model in the LFM2 family of hybrid architectures designed for efficient on-device deployment. Built as a 24B parameter Mixture-of-Experts model with only 2B active parameters per token, it delivers high-quality generation while maintaining low inference costs. The model fits within 32 GB of RAM, making it practical to run on consumer laptops and desktops without sacrificing capability.

Pricing

Input price: $0.0300 / million tokens
Output price: $0.1200 / million tokens
Context window: 32,768 tokens
Provider: liquid
Modality: text->text
Release date: February 2026
Model ID: liquid/lfm-2-24b-a2b

Quick start

Use LFM2-24B-A2B through the BazaarLink API by changing only the base_url:

Python
from openai import OpenAI

client = OpenAI(
    base_url="https://bazaarlink.ai/api/v1",
    api_key="sk-bl-YOUR_API_KEY",
)

response = client.chat.completions.create(
    model="liquid/lfm-2-24b-a2b",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
TypeScript
import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "https://bazaarlink.ai/api/v1",
  apiKey: "sk-bl-YOUR_API_KEY",
});

const response = await client.chat.completions.create({
  model: "liquid/lfm-2-24b-a2b",
  messages: [{ role: "user", content: "Hello!" }],
});
console.log(response.choices[0].message.content);

Why use LFM2-24B-A2B through BazaarLink?

  • USD billing with TWD quotes and Taiwan uniform invoices (統一發票): no overseas credit card needed for Taiwan teams
  • OpenAI-compatible API: no code rewrites required
  • Automatic failover: multi-provider redundancy for the same model
  • Chinese-language customer support: a local team responds promptly

Frequently Asked Questions

What is LFM2-24B-A2B?

LFM2-24B-A2B is the largest model in the LFM2 family of hybrid architectures designed for efficient on-device deployment. Built as a 24B parameter Mixture-of-Experts model with only 2B active parameters per token, it delivers high-quality generation while maintaining low inference costs. The model fits within 32 GB of RAM, making it practical to run on consumer laptops and desktops without sacrificing capability.

How much does the LFM2-24B-A2B API cost?

LFM2-24B-A2B costs $0.03 per million input tokens and $0.12 per million output tokens when accessed through BazaarLink.
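At those rates, per-request cost is simple arithmetic. A minimal sketch using the rates listed on this page (the token counts in the example are hypothetical):

```python
# Estimate request cost for LFM2-24B-A2B at BazaarLink's listed rates:
# $0.03 per million input tokens, $0.12 per million output tokens.
INPUT_RATE_PER_M = 0.03
OUTPUT_RATE_PER_M = 0.12

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of one request."""
    return (input_tokens * INPUT_RATE_PER_M
            + output_tokens * OUTPUT_RATE_PER_M) / 1_000_000

# Example: a 10,000-token prompt with a 2,000-token completion.
print(f"${estimate_cost(10_000, 2_000):.6f}")  # → $0.000540
```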

How do I use LFM2-24B-A2B with the OpenAI SDK?

Set base_url to "https://bazaarlink.ai/api/v1" and use model ID "liquid/lfm-2-24b-a2b". All OpenAI SDK methods (chat.completions, embeddings, streaming) work without code changes.

What is the context window for LFM2-24B-A2B?

LFM2-24B-A2B supports a context window of 32,768 tokens.
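When requests approach that limit, it helps to budget prompt and completion tokens before sending. A rough sketch; the ~4-characters-per-token ratio is a heuristic assumption, not something this page specifies, so use the model's actual tokenizer for exact counts:

```python
# Rough pre-flight check that prompt + requested completion fit in the
# 32,768-token context window. The 4-chars-per-token estimate is a
# heuristic; exact counts require the model's tokenizer.
CONTEXT_WINDOW = 32_768

def fits_in_context(prompt: str, max_output_tokens: int) -> bool:
    est_prompt_tokens = len(prompt) // 4
    return est_prompt_tokens + max_output_tokens <= CONTEXT_WINDOW

print(fits_in_context("Summarize this report.", 1_024))  # → True
print(fits_in_context("x" * 200_000, 4_096))             # → False
```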

Is LFM2-24B-A2B available for free?

LFM2-24B-A2B is a paid model. BazaarLink offers free trial credits on registration so you can test it without a credit card.
