Load a model in LM Studio and start the local server (default: http://localhost:1234). Set the context length to 16k or more: the agent's system prompt is very large, and smaller contexts will cause the model to "forget" its tools.
Qwen Code is extremely strict about the JSON schema of settings.json. If the file is invalid, the app will rename it to settings.json.corrupted.
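Because an invalid file is silently renamed rather than reported, it can be worth validating the JSON before launching. A minimal sketch (it validates a temporary demo copy; in real use, point it at your actual %USERPROFILE%\.qwen\settings.json or ~/.qwen/settings.json):

```shell
# Write a demo settings.json to a temp file, then check it parses as strict JSON.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
{
  "env": { "OPENAI_BASE_URL": "http://127.0.0.1:1234/v1" },
  "$version": 3
}
EOF
if python3 -m json.tool "$cfg" > /dev/null 2>&1; then
  echo "settings.json: valid JSON"
else
  echo "settings.json: INVALID - qwen would rename it to settings.json.corrupted"
fi
```

Note this only checks JSON syntax, not Qwen Code's full schema, but it catches the most common cause of the .corrupted rename (a stray comma or comment).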
Create/edit the file at:
Windows: %USERPROFILE%\.qwen\settings.json
Linux/macOS: ~/.qwen/settings.json
Use this exact structure:
Since the CLI sometimes ignores file-based environment variables, use a launcher script for a stable connection.
https://qwenlm.github.io/qwen-code-docs/en/users/configuration/model-providers/ — this link comes from Qwen Code itself, but the information there is outdated; in my testing, the configuration described on that page did not work.
Set the temperature to 0.0 or 0.1 in LM Studio for more predictable, structured code output. Save settings.json in UTF-8.
Qwen Code is a command-line coding agent developed by Alibaba's Qwen team: an intelligent assistant based on a Large Language Model (LLM) that helps you write and debug code. For more settings, see https://github.com/QwenLM/qwen-code (see also the authentication guide there).
npm install -g @qwen-code/qwen-code@latest
{
  "env": {
    "API_KEY": "any-string",
    "OPENAI_BASE_URL": "http://127.0.0.1:1234/v1"
  },
  "security": {
    "auth": {
      "selectedType": "openai"
    }
  },
  "model": {
    "name": "openai:your-custom-model"
  },
  "$version": 3
}
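Before launching qwen, you can confirm something is actually listening at the base URL. A quick sketch, assuming LM Studio's server is on the default port (/v1/models is part of the OpenAI-compatible API LM Studio exposes):

```shell
# Check that the OpenAI-compatible endpoint responds before starting qwen.
base="http://127.0.0.1:1234/v1"
if curl -sf "$base/models" > /dev/null 2>&1; then
  echo "LM Studio reachable at $base"
else
  echo "No server at $base - start LM Studio's local server first"
fi
```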
REM PowerShell
$env:QWEN_MODEL="openai:your-custom-model"
$env:OPENAI_API_KEY="any-string"
$env:OPENAI_BASE_URL="http://127.0.0.1:1234/v1"
qwen
REM bat/cmd
set QWEN_MODEL=openai:your-custom-model
set OPENAI_API_KEY=any-string
set OPENAI_BASE_URL=http://127.0.0.1:1234/v1
qwen
REM bash (Linux/macOS)
export QWEN_MODEL="openai:your-custom-model"
export OPENAI_API_KEY="any-string"
export OPENAI_BASE_URL="http://127.0.0.1:1234/v1"
qwen
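The exports above can be saved as a reusable launcher file, which is the "launcher script" approach mentioned earlier; a sketch for Linux/macOS (the filename qwen-local.sh is my choice, and exec qwen "$@" forwards any arguments to the CLI):

```shell
# Create an executable launcher that sets the variables, then hands off to qwen.
cat > qwen-local.sh <<'EOF'
#!/bin/sh
export QWEN_MODEL="openai:your-custom-model"
export OPENAI_API_KEY="any-string"
export OPENAI_BASE_URL="http://127.0.0.1:1234/v1"
exec qwen "$@"
EOF
chmod +x qwen-local.sh
echo "created: ./qwen-local.sh"
```

Run ./qwen-local.sh instead of qwen so the connection settings are guaranteed to be in the environment every time.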
An alternative settings.json variant with an explicit provider block:
{
  "security": {
    "auth": {
      "selectedType": "openai",
      "apiKey": "sk-local",
      "baseUrl": "http://127.0.0.1:1234/v1"
    }
  },
  "provider": {
    "openai": {
      "apiKey": "sk-local",
      "baseURL": "http://localhost:1234/v1",
      "defaultModel": "qwen3.5-9b-sushi-coder-rl",
      "stream": false
    }
  },
  "$version": 3
}
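To smoke-test this config outside of qwen, you can hit the chat-completions endpoint directly; a sketch using the model name and stream setting from the config above (assumes the LM Studio server is running):

```shell
# One-shot request against the same endpoint the provider block points at.
payload='{"model":"qwen3.5-9b-sushi-coder-rl","stream":false,"messages":[{"role":"user","content":"Say OK"}]}'
curl -s http://localhost:1234/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d "$payload" || echo "server not reachable"
```

If this returns a normal completion but qwen still fails, the problem is in settings.json, not in the server.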