Model

gpt-oss-safeguard-120b

Public

Use cases

Reasoning

Minimum system memory

65GB

Tags

120B
gpt-oss

README

gpt-oss-safeguard-120b

gpt-oss-safeguard-120b is a safety reasoning model by OpenAI, built upon their original gpt-oss release. With this model, you can classify text content against safety policies that you provide and perform a suite of foundational safety tasks. The model is intended for safety use cases; for other applications, we recommend using gpt-oss.

This 120b variant is designed for production, general-purpose, high-reasoning use cases and fits on a single H100 GPU (117B parameters, 5.1B active).

This model is released under the permissive Apache 2.0 license and features configurable reasoning effort (low, medium, or high), so users can balance output quality and latency based on their needs. The model offers full chain-of-thought visibility to support easier debugging and increased trust, though this output is not intended for end users.

This model supports a context length of 131k tokens.
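The sketch below shows one way to run the policy-based classification described above, assuming the model is served behind an OpenAI-compatible endpoint (the URL, API key, policy text, and labels are illustrative assumptions, not part of this model card).

```python
# Minimal sketch: classify text against a user-provided policy via an
# OpenAI-compatible local server. Endpoint, key, and policy are assumptions.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="not-needed")

policy = """\
Classify the user content against this policy and answer with one label.
- VIOLATES: content that promotes or facilitates self-harm.
- SAFE: everything else.
"""

response = client.chat.completions.create(
    model="gpt-oss-safeguard-120b",
    messages=[
        {"role": "system", "content": policy},  # the safety policy you provide
        {"role": "user", "content": "Text to classify goes here."},
    ],
)

print(response.choices[0].message.content)  # e.g. "SAFE"
```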

Custom Fields

Special features defined by the model author

Reasoning Effort: select (default: low)

Controls how much reasoning the model should perform.
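A minimal sketch of passing a reasoning-effort preference per request, assuming an OpenAI-compatible server that accepts a "reasoning_effort" field; the parameter name and how this custom field is exposed are assumptions, so check your server's documentation.

```python
# Minimal sketch: request higher reasoning effort via extra_body.
# The "reasoning_effort" field name is an assumption; servers differ.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="gpt-oss-safeguard-120b",
    messages=[{"role": "user", "content": "Explain your classification steps."}],
    extra_body={"reasoning_effort": "high"},  # low (default) | medium | high
)

print(response.choices[0].message.content)
```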

Parameters

Custom configuration options included with this model

Min P Sampling: 0.05
Repeat Penalty: 1.1
Temperature: 0.8
Top K Sampling: 40
Top P Sampling: 0.8
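These defaults can typically be overridden per request. In the sketch below, temperature and top_p are standard OpenAI chat-completions parameters; min_p, top_k, and repeat_penalty are passed via extra_body on the assumption that the local server accepts them (parameter names vary between servers).

```python
# Minimal sketch: override the bundled sampling defaults for one request.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="gpt-oss-safeguard-120b",
    messages=[{"role": "user", "content": "Classify: 'hello world'"}],
    temperature=0.8,   # standard parameter
    top_p=0.8,         # standard parameter
    # assumed server-specific fields; names may differ on your server
    extra_body={"min_p": 0.05, "top_k": 40, "repeat_penalty": 1.1},
)

print(response.choices[0].message.content)
```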

Sources

The underlying model files this model uses