
lfm2-24b-a2b


LFM2 is a family of hybrid models designed for on-device deployment. LFM2-24B-A2B is the largest model in the family, a 24B MoE model with only 2B active parameters per token, fitting in 32 GB of RAM for deployment on consumer laptops and desktops.


Capabilities

Minimum system memory

14 GB

Tags

24B
lfm2_moe

README

LFM2 24B A2B by Liquid AI


It excels at agentic tool use, document summarization, Q&A, and local RAG pipelines, and supports nine languages.

It supports a context length of 32K tokens.

Parameters

Custom configuration options included with this model

Repeat Penalty: 1.05
Temperature: 0.05
Top K Sampling: 50
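To illustrate how these three settings interact, here is a minimal sampling-loop sketch in plain Python. It is an illustration only, not Liquid AI's implementation: the function name and logit values are made up, and the repeat-penalty convention (divide positive logits, multiply negative ones) follows the common llama.cpp-style approach.

```python
import math
import random

def sample_next_token(logits, prev_tokens, repeat_penalty=1.05,
                      temperature=0.05, top_k=50, rng=None):
    """Apply repeat penalty, temperature, and top-k, then sample one token id.

    Hypothetical helper for illustration; defaults mirror this model card.
    """
    rng = rng or random.Random(0)
    logits = list(logits)
    # Repeat penalty: dampen tokens that already appeared in the output
    # (llama.cpp-style convention: divide positive logits, multiply negative).
    for t in set(prev_tokens):
        logits[t] = (logits[t] / repeat_penalty if logits[t] > 0
                     else logits[t] * repeat_penalty)
    # Temperature: a low value like 0.05 sharpens the distribution,
    # making generation close to greedy.
    logits = [l / temperature for l in logits]
    # Top-k: restrict sampling to the k highest-scoring tokens.
    top = sorted(range(len(logits)), key=lambda i: logits[i],
                 reverse=True)[:top_k]
    # Softmax over the surviving candidates (shifted by the max for stability).
    m = max(logits[i] for i in top)
    weights = [math.exp(logits[i] - m) for i in top]
    return rng.choices(top, weights=weights, k=1)[0]
```

With the card's defaults (temperature 0.05, top-k 50, repeat penalty 1.05), sampling is nearly deterministic, which suits the model's agentic tool-use and summarization focus, where reproducible outputs matter more than creative variety.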