glm-4.7-flash

Public

GLM 4.7 Flash is a 30B A3B MoE model from Z.ai. It supports a context length of 128k tokens and achieves strong performance on coding benchmarks among models of similar scale.

1.8K Downloads

4 stars

Capabilities

Reasoning

Minimum system memory

16GB

Tags

30B
glm4_moe_lite

README

GLM 4.7 Flash by Z.ai

GLM 4.7 Flash is a 30B A3B MoE model from Z.ai. It supports a context length of 128k tokens and achieves strong performance on coding benchmarks among models of similar scale.

Custom Fields

Special features defined by the model author

Enable Thinking: boolean (default=true)

Controls whether the model thinks before replying.

Clear Thinking: boolean (default=false)

Controls whether thinking content is cleared from the chat history.
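As an illustration, the two custom fields above could be passed alongside a chat request. The payload shape and the snake_case field names (`enable_thinking`, `clear_thinking`) below are assumptions for illustration, not a documented API:

```python
# Hypothetical chat-request payload showing the model's two custom fields.
# The "custom_fields" key and field names are assumptions, not a documented API.
payload = {
    "model": "glm-4.7-flash",
    "messages": [{"role": "user", "content": "Explain MoE routing briefly."}],
    "custom_fields": {
        "enable_thinking": True,   # default: model thinks before replying
        "clear_thinking": False,   # default: thinking content kept in history
    },
}
```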

Parameters

Custom configuration options included with this model

Repeat Penalty: Disabled
Temperature: 0.2
Top K Sampling: 50
Top P Sampling: 0.95
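The defaults above can be expressed as a request-options dict. This is a minimal sketch; the key names are illustrative and may differ from any particular runtime's parameter names:

```python
# Default sampling parameters shipped with this model, as a hypothetical
# request-options dict (key names are illustrative, values from the listing).
sampling_defaults = {
    "temperature": 0.2,      # low temperature favors more deterministic output
    "top_k": 50,             # sample from the 50 most likely tokens
    "top_p": 0.95,           # nucleus sampling over the top 95% probability mass
    "repeat_penalty": None,  # disabled for this model
}
```

The low default temperature (0.2) fits the model's coding focus, where deterministic completions are usually preferable.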

Sources

The underlying model files this model uses