LFM2 is a new generation of hybrid models developed by Liquid AI, specifically designed for edge AI and on-device deployment. It sets a new standard in terms of quality, speed, and memory efficiency.
To run the smallest LFM2, you need at least 220 MB of RAM. The largest one may require up to 700 MB.
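As a rough sanity check, RAM figures of this order can be approximated from the parameter counts in the table below. This is a minimal back-of-envelope sketch, assuming roughly 4-bit quantized weights (~0.5 bytes per parameter) plus a flat runtime overhead; both numbers are assumptions for illustration, not figures published by Liquid.

```python
def approx_ram_mb(n_params: int, bytes_per_param: float = 0.5, overhead_mb: int = 50) -> int:
    """Rough RAM estimate: quantized weight size plus a flat runtime overhead.

    bytes_per_param=0.5 assumes ~4-bit quantization; overhead_mb is a guessed
    allowance for KV cache and runtime buffers (both are assumptions).
    """
    weights_mb = n_params * bytes_per_param / 1_000_000
    return round(weights_mb + overhead_mb)

# Parameter counts from the specification table in this document.
for name, n in [("LFM2-350M", 354_483_968),
                ("LFM2-700M", 742_489_344),
                ("LFM2-1.2B", 1_170_340_608)]:
    print(f"{name}: ~{approx_ram_mb(n)} MB")
```

Actual memory use depends on the quantization scheme, context length, and runtime, so treat the published 220 MB and 700 MB figures as the authoritative bounds.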
LFM2 models are available in GGUF and MLX formats.

LFM2 is released as several post-trained checkpoints with 350M, 700M, and 1.2B parameters, all available in LM Studio for building AI-powered edge applications.
Due to their small size, Liquid recommends fine-tuning LFM2 models on narrow use cases to maximize performance. They are particularly well suited to agentic tasks, data extraction, RAG, creative writing, and multi-turn conversations. However, Liquid does not recommend them for knowledge-intensive tasks or tasks that require programming skills.
| Property | LFM2-350M | LFM2-700M | LFM2-1.2B |
|---|---|---|---|
| Parameters | 354,483,968 | 742,489,344 | 1,170,340,608 |
| Layers | 16 (10 conv + 6 attn) | 16 (10 conv + 6 attn) | 16 (10 conv + 6 attn) |
| Context length | 32,768 tokens | 32,768 tokens | 32,768 tokens |
| Vocabulary size | 65,536 | 65,536 | 65,536 |
| Precision | bfloat16 | bfloat16 | bfloat16 |
| Training budget | 10 trillion tokens | 10 trillion tokens | 10 trillion tokens |
| License | LFM Open License v1.0 | LFM Open License v1.0 | LFM Open License v1.0 |
Supported languages: English, Arabic, Chinese, French, German, Japanese, Korean, and Spanish.
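When prompting these checkpoints directly (for example, through a GGUF runtime that does not apply the embedded chat template), conversation messages must be serialized into the model's expected prompt format. Below is a minimal sketch assuming a ChatML-style template with `<|im_start|>`/`<|im_end|>` role markers; verify the exact special tokens against the checkpoint's tokenizer configuration before relying on it.

```python
def format_chat(messages: list[dict[str, str]]) -> str:
    """Serialize chat messages into a ChatML-style prompt string.

    Assumes <|startoftext|> and <|im_start|>/<|im_end|> markers; check the
    tokenizer config of the checkpoint you downloaded, as templates vary.
    """
    parts = ["<|startoftext|>"]
    for msg in messages:
        parts.append(f"<|im_start|>{msg['role']}\n{msg['content']}<|im_end|>\n")
    parts.append("<|im_start|>assistant\n")  # cue the model to respond
    return "".join(parts)

prompt = format_chat([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize this paragraph in one sentence."},
])
print(prompt)
```

In practice, prefer the template shipped with the checkpoint (e.g. the tokenizer's own chat-template mechanism) over hand-rolled formatting, since a mismatched template degrades multi-turn quality.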
LFM2 models are released under a custom license, the LFM Open License v1.0 (lfm1.0).