Model

lfm2

Public

A hybrid-architecture model by Liquid AI, intended for local use

Use cases

Minimum system memory

700MB

Tags

1.2B
lfm2

README

Liquid LFM2 1.2B

Liquid LFM2 models are blazingly fast and designed for local use. Compared to similarly sized models, LFM2 1.2B excels at mathematics, instruction following, and multilingual understanding.

LFM2 uses a hybrid architecture that performs efficiently on both CPU and GPU.

Supports a context length of 32k tokens.
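
As a rough illustration of local use, the sketch below loads the model with Hugging Face transformers and generates a short completion. The checkpoint name "LiquidAI/LFM2-1.2B" and the availability of LFM2 support in your installed transformers release are assumptions, not something this page guarantees.

```python
# Minimal local-inference sketch, assuming the checkpoint is published as
# "LiquidAI/LFM2-1.2B" and your transformers version recognizes the LFM2
# architecture (both are assumptions).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "LiquidAI/LFM2-1.2B"  # assumed model identifier
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Context is limited to 32k tokens; longer prompts would need truncation.
prompt = "Explain the difference between a stack and a queue."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```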

Parameters

Custom configuration options included with this model

Min P Sampling: 0.15
Repeat Penalty: 1.05
Temperature: 0.3
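
These defaults are bundled with the model, but if you drive it from your own code you can pass the same values explicitly. The sketch below does so with llama-cpp-python against a local GGUF build; the file name "lfm2-1.2b.gguf" is a placeholder for whatever file you actually downloaded, and the use of llama-cpp-python is an assumption about your setup rather than something this page specifies.

```python
# Applying the bundled sampling defaults explicitly with llama-cpp-python.
# "lfm2-1.2b.gguf" is a hypothetical file name; point model_path at your
# actual GGUF file.
from llama_cpp import Llama

llm = Llama(model_path="lfm2-1.2b.gguf", n_ctx=32768)  # 32k-token context

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize the LFM2 architecture."}],
    temperature=0.3,      # matches the default listed above
    min_p=0.15,           # Min P sampling default listed above
    repeat_penalty=1.05,  # repeat penalty default listed above
)
print(response["choices"][0]["message"]["content"])
```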

Sources

The underlying model files this model uses