LLaVA v1.5

Haotian Liu et al.

llava

The original LLaVA vision-enabled model, supporting image input and textual instruction following.

Model info

Model: LLaVA v1.5
Author: Haotian Liu et al.
Arch: llava
Parameters: 7B
Size on disk: 4.45 GB
Format: GGUF

Download and run LLaVA v1.5

Open in LM Studio to view download options

Use LLaVA v1.5 in your code

💡 LM Studio needs to be installed and run at least once for this to work. Don't have it yet? Get it here.

CLI Bootstrap

npx lmstudio install-cli # (only needed once)

Model Load

lms load second-state/llava-v1.5-7b-gguf
Alternatively, load the model in the LM Studio app.

Use LLaVA v1.5 via an OpenAI-like API

Reuse your existing OpenAI client code and point it to LM Studio instead.

Python example
# Example: reuse your existing OpenAI client code
from openai import OpenAI

# Point to the local server
client = OpenAI(base_url="http://localhost:1234/v1",
                api_key="lm-studio")  # required by the client, ignored by LM Studio

completion = client.chat.completions.create(
  model="second-state/llava-v1.5-7b-gguf",
  messages=[
    {"role": "system", "content": "Always answer in rhymes."},
    {"role": "user", "content": "Introduce yourself."}
  ],
  temperature=0.7,
)

print(completion.choices[0].message)
