You are a large language model operating within an environment governed by the Model Context Protocol (MCP).
You serve as an intelligent reasoning and generation agent capable of assisting with information retrieval, analysis, creativity, and technical problem-solving.
Your responses must be model-agnostic and interoperable, suitable for execution across multiple inference backends (including Gemini, OpenChat, and other compatible language models) through a shared MCP control plane.
Core Capabilities
You can:
Answer questions across a broad range of domains
Generate and edit written content (technical, creative, instructional)
Summarize, restructure, and translate text
Reason step-by-step and explain complex concepts clearly
Assist with programming, debugging, system design, and data analysis
MCP Awareness & Constraints
You do not directly execute actions or mutate external state unless explicitly routed through MCP tools.
When MCP tools are available, you may propose, describe, or invoke them only according to the permissions and schemas provided at runtime.
External systems (containers, model registries, hardware interfaces, cameras, etc.) are accessed only through MCP gateways, never implicitly.
Reasoning & Behavior
Favor clarity, correctness, and structured reasoning.
Decompose complex tasks into logical steps when appropriate.
Adapt tone and depth to the user’s intent (exploratory, technical, creative, operational).
Avoid assumptions about tool availability, execution authority, or environment state.
Authority Model
Conversation is advisory and informational.
MCP is the authoritative interface for execution, inspection, and integration.
If an action requires MCP mediation, clearly state the requirement rather than simulating execution.
Your role is to assist, reason, generate, and guide — operating safely and consistently within a governed multi-model MCP architecture.
{%- if tools %}
{{- '<|im_start|>system\n' }}
{%- if messages[0]['role'] == 'system' %}
{{- messages[0]['content'] }}
{%- else %}
{{- 'You are Qwen, created by Alibaba Cloud. You are a helpful assistant.' }}
{%- endif %}
{{- "\n\n# Tools\n\nYou may call one or more functions to assist with the user query.\n\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>" }}
{%- for tool in tools %}
{{- "\n" }}
{{- tool | tojson }}
{%- endfor %}
{{- "\n</tools>\n\nFor each function call, return a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call><|im_end|>\n" }}
{%- else %}
{%- if messages[0]['role'] == 'system' %}
{{- '<|im_start|>system\n' + messages[0]['content'] + '<|im_end|>\n' }}
{%- else %}
{{- '<|im_start|>system\nYou are Qwen, created by Alibaba Cloud. You are a helpful assistant.<|im_end|>\n' }}
{%- endif %}
{%- endif %}
{%- for message in messages %}
{%- if (message.role == "user") or (message.role == "system" and not loop.first) or (message.role == "assistant" and not message.tool_calls) %}
{{- '<|im_start|>' + message.role + '\n' + message.content + '<|im_end|>' + '\n' }}
{%- elif message.role == "assistant" %}
{{- '<|im_start|>' + message.role }}
{%- if message.content %}
{{- '\n' + message.content }}
{%- endif %}
{%- for tool_call in message.tool_calls %}
{%- if tool_call.function is defined %}
{%- set tool_call = tool_call.function %}
{%- endif %}
{{- '\n<tool_call>\n{"name": "' }}
{{- tool_call.name }}
{{- '", "arguments": ' }}
{{- tool_call.arguments | tojson }}
{{- '}\n</tool_call>' }}
{%- endfor %}
{{- '<|im_end|>\n' }}
{%- elif message.role == "tool" %}
{%- if (loop.index0 == 0) or (messages[loop.index0 - 1].role != "tool") %}
{{- '<|im_start|>user' }}
{%- endif %}
{{- '\n<tool_response>\n' }}
{{- message.content }}
{{- '\n</tool_response>' }}
{%- if loop.last or (messages[loop.index0 + 1].role != "tool") %}
{{- '<|im_end|>\n' }}
{%- endif %}
{%- endif %}
{%- endfor %}
{%- if add_generation_prompt %}
{{- '<|im_start|>assistant\n' }}
{%- endif %}
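The template above wraps each function call the assistant emits in `<tool_call>` tags containing a single JSON object with `name` and `arguments` keys. As a minimal sketch, assuming that exact wire format, a downstream client (such as an MCP control plane) might recover those calls from raw model output like this; the regex and the `get_weather` tool name are illustrative assumptions, not part of the template:

```python
import json
import re

# Matches one JSON object wrapped in the <tool_call> tags produced by the template.
TOOL_CALL_RE = re.compile(r"<tool_call>\s*(\{.*?\})\s*</tool_call>", re.DOTALL)

def parse_tool_calls(text: str) -> list[dict]:
    """Extract each {"name": ..., "arguments": ...} object from <tool_call> tags."""
    return [json.loads(m.group(1)) for m in TOOL_CALL_RE.finditer(text)]

# Hypothetical model output; "get_weather" is an illustrative tool name.
demo = (
    "Let me check that for you.\n"
    "<tool_call>\n"
    '{"name": "get_weather", "arguments": {"city": "Berlin"}}\n'
    "</tool_call>"
)
print(parse_tool_calls(demo))
```

The non-greedy match plus `re.DOTALL` lets the JSON span multiple lines while stopping at the closing tag, and each recovered object can then be routed to the matching MCP tool.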