The .act() call
LLMs are largely text-in, text-out programs, so you may ask: "how can an LLM use a tool?" The answer is that some LLMs are trained to ask the human to call the tool for them, and expect the tool output to be provided back in some format.
Imagine you're providing computer support to someone over the phone. You might say things like "run this command for me ... OK what did it output? ... OK now click there and tell me what it says ...". In this case you're the LLM! And you're "calling tools" vicariously through the person on the other end of the line.
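Concretely, a tool-use exchange might look something like this (the exact format is model-specific; the transcript below is purely illustrative):

User:  What is 12345 multiplied by 54321?
Model: [tool request] multiply(a=12345, b=54321)
Tool:  670592745
Model: 12345 multiplied by 54321 is 670,592,745.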
Execution "rounds"
We introduce the concept of execution "rounds" to describe the combined process of running a tool, providing its output to the LLM, and then waiting for the LLM to decide what to do next.
Execution Round

  • run a tool →                              ↑
  • provide the result to the LLM →           │
  • wait for the LLM to generate a response   │
        └─────────────────────────────────────┘
        └➔ (return)
A model might choose to run tools multiple times before returning a final result. For example, if the LLM is writing code, it might choose to compile or run the program, fix errors, and then run it again, rinse and repeat until it gets the desired result.
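To make this concrete, here is a minimal sketch of what such a multi-round loop could look like. This is not the SDK's internal implementation; the generate callback and the types below are hypothetical placeholders for illustration only.

// A minimal multi-round driver, assuming a hypothetical `generate`
// function that performs one LLM prediction over the transcript.
type ToolCall = { name: string; args: unknown };
type TurnResult = { text: string; toolCalls: ToolCall[] };
type ToolFn = (args: unknown) => Promise<unknown>;

async function actSketch(
  generate: (transcript: string[]) => Promise<TurnResult>,
  tools: Record<string, ToolFn>,
  prompt: string,
): Promise<string> {
  const transcript: string[] = [prompt];
  while (true) {
    // A round begins: let the LLM respond to everything so far.
    const turn = await generate(transcript);
    transcript.push(turn.text);
    // No tool requested → the loop ends and the final text is returned.
    if (turn.toolCalls.length === 0) return turn.text;
    // Otherwise run each requested tool, feed its result back,
    // and continue into the next round.
    for (const { name, args } of turn.toolCalls) {
      const result = await tools[name](args);
      transcript.push(`Tool ${name} returned: ${JSON.stringify(result)}`);
    }
  }
}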
With this in mind, we say that the .act() API is an automatic "multi-round" tool calling API.
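Here is a quick example that gives the model a single multiply tool: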
import { LMStudioClient, tool } from "@lmstudio/sdk";
import { z } from "zod";

const client = new LMStudioClient();

// Define a tool the model is allowed to call.
const multiplyTool = tool({
  name: "multiply",
  description: "Given two numbers a and b. Returns the product of them.",
  parameters: { a: z.number(), b: z.number() },
  implementation: ({ a, b }) => a * b,
});

const model = await client.llm.model("qwen2.5-7b-instruct");
await model.act("What is the result of 12345 multiplied by 54321?", [multiplyTool], {
  onMessage: (message) => console.info(message.toString()),
});
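Note that the tool's parameters are declared as zod schemas. This gives the SDK a structured description of the expected arguments to present to the model, and your implementation receives parsed values ({ a, b } above) rather than raw text.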
The model selected for tool use will greatly impact performance. Some general guidance when selecting a model: not all models are capable of intelligent tool use, and larger models generally follow tool-calling formats more reliably than smaller ones. The examples here use Qwen2.5-7B-Instruct, which handles multi-round tool use well.
The following code demonstrates how to provide multiple tools in a single .act() call.
import { LMStudioClient, tool } from "@lmstudio/sdk";
import { z } from "zod";

const client = new LMStudioClient();

const additionTool = tool({
  name: "add",
  description: "Given two numbers a and b. Returns the sum of them.",
  parameters: { a: z.number(), b: z.number() },
  implementation: ({ a, b }) => a + b,
});

const isPrimeTool = tool({
  name: "isPrime",
  description: "Given a number n. Returns true if n is a prime number.",
  parameters: { n: z.number() },
  implementation: ({ n }) => {
    if (n < 2) return false;
    // Trial division up to √n is sufficient to detect a factor.
    const sqrt = Math.sqrt(n);
    for (let i = 2; i <= sqrt; i++) {
      if (n % i === 0) return false;
    }
    return true;
  },
});

const model = await client.llm.model("qwen2.5-7b-instruct");
await model.act(
  "Is the result of 12345 + 45668 a prime? Think step by step.",
  [additionTool, isPrimeTool],
  { onMessage: (message) => console.info(message.toString()) },
);
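Because .act() handles the rounds automatically, the model can call add in one round and then pass the resulting sum to isPrime in a later round before composing its final answer.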
The following code creates a conversation loop with an LLM agent that can create files.
import { Chat, LMStudioClient, tool } from "@lmstudio/sdk";
import { existsSync } from "fs";
import { writeFile } from "fs/promises";
import { createInterface } from "readline/promises";
import { z } from "zod";

const rl = createInterface({ input: process.stdin, output: process.stdout });
const client = new LMStudioClient();
const model = await client.llm.model();
const chat = Chat.empty();

const createFileTool = tool({
  name: "createFile",
  description: "Create a file with the given name and content.",
  parameters: { name: z.string(), content: z.string() },
  implementation: async ({ name, content }) => {
    if (existsSync(name)) {
      // Returning an error string lets the model see the failure and react to it.
      return "Error: File already exists.";
    }
    await writeFile(name, content, "utf-8");
    return "File created.";
  },
});

while (true) {
  const input = await rl.question("You: ");
  // Append the user input to the chat
  chat.append("user", input);

  process.stdout.write("Bot: ");
  await model.act(chat, [createFileTool], {
    // When the model finishes the entire message, push it to the chat
    onMessage: (message) => chat.append(message),
    // Stream each fragment to stdout as it is generated
    onPredictionFragment: ({ content }) => {
      process.stdout.write(content);
    },
  });
  process.stdout.write("\n");
}
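Two details are worth noting. First, createFileTool reports failures by returning a plain string ("Error: File already exists."), which the model receives as the tool's output and can act on in its next round. Second, because every user input and every finished assistant message is appended to the same Chat object, the agent keeps the full conversation context across iterations of the loop.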