Liquid LFM2 is a family of blazingly fast models designed for local use. Compared to similarly sized models, LFM2 excels at mathematics, instruction following, and multilingual understanding.
LFM2 uses a hybrid architecture that performs efficiently on both CPU and GPU.
It supports a context length of 32k tokens.
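Below is a minimal local-inference sketch using Hugging Face `transformers`. It assumes a recent `transformers` release with LFM2 support and a checkpoint ID such as `LiquidAI/LFM2-1.2B`; both the exact model ID and the prompt are illustrative, not part of this page.

```python
# Minimal sketch: run LFM2 locally via transformers.
# Assumptions: a recent transformers release with LFM2 support, and the
# checkpoint ID "LiquidAI/LFM2-1.2B" (adjust to the model you actually pulled).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "LiquidAI/LFM2-1.2B"  # assumed ID for illustration

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # keep the checkpoint's native precision
    device_map="auto",    # runs on GPU if available, otherwise CPU
)

messages = [{"role": "user", "content": "Solve 12 * 17 and explain each step."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# The 32k-token context window covers the prompt and the completion combined.
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```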
Parameters
Custom configuration options are included with this model.