Embedding
Generate embeddings for input text. Embeddings are vector representations of text that capture semantic meaning; they are a building block for RAG (Retrieval-Augmented Generation) and other similarity-based tasks.
Prerequisite: Get an Embedding Model
If you don't yet have an embedding model, you can download one such as nomic-ai/nomic-embed-text-v1.5 using the following command:
lms get nomic-ai/nomic-embed-text-v1.5
Create Embeddings
To convert a string to a vector representation, pass it to the embed method on the corresponding embedding model handle.
import lmstudio as lms
model = lms.embedding_model("nomic-embed-text-v1.5")
embedding = model.embed("Hello, world!")
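Once you have embedding vectors, a common next step for similarity-based tasks is to compare them with cosine similarity. The SDK's embed method returns a vector of floats, so standard vector math applies. Below is a minimal sketch; cosine_similarity is a helper defined here for illustration, not part of the lmstudio package, and the query/document strings are hypothetical.

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity: dot product of the vectors divided by the
    # product of their magnitudes. Ranges from -1 to 1; values closer
    # to 1 indicate more semantically similar text.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Example usage (assumes a model handle as in the snippet above):
# query_vec = model.embed("What is the capital of France?")
# doc_vec = model.embed("Paris is the capital of France.")
# score = cosine_similarity(query_vec, doc_vec)
```

A helper like this is enough for small-scale experiments; larger RAG pipelines typically delegate similarity search to a vector store.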