Devin for Terminal supports multiple AI models. You can choose the best model for your task — whether you need maximum capability, speed, or cost efficiency.

## Available Models

| Model | Config Name(s) | Description |
| --- | --- | --- |
| Claude Opus 4.6 | `opus`, `claude-opus-4.6` | Most capable; best for complex, multi-step tasks (default) |
| Claude Opus 4.5 | `claude-opus-4.5` | Previous-generation Opus |
| Claude Sonnet 4.5 | `sonnet`, `claude-sonnet-4.5` | Fast and capable; a good balance of speed and quality |
| Claude Sonnet 4 | `claude-sonnet-4` | Previous-generation Sonnet |
| SWE 1.5 | `swe`, `swe-1.5` | Fastest model, at up to 950 tokens/sec |
| SWE 1.5 Free | `swe-1.5-free` | Free tier of SWE 1.5 |
| Codex 5.3 | `codex`, `codex-5.3` | OpenAI Codex model optimized for code |
| Gemini 3 Pro | `gemini`, `gemini-3-pro` | Google's pro-tier model |
| Gemini 3 Flash | `gemini-3-flash` | Google's fast model |
Short names like `opus`, `sonnet`, `swe`, `codex`, and `gemini` always resolve to the latest version in that model family.
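The resolution rule amounts to a simple lookup from family alias to latest model, as in the sketch below. This is illustrative only; the dictionary and function names are hypothetical, not Devin's actual implementation.

```python
# Illustrative sketch: alias-to-latest-model lookup mirroring the table above.
# LATEST and resolve_model are hypothetical names, not Devin's real code.
LATEST = {
    "opus": "claude-opus-4.6",
    "sonnet": "claude-sonnet-4.5",
    "swe": "swe-1.5",
    "codex": "codex-5.3",
    "gemini": "gemini-3-pro",
}

def resolve_model(name: str) -> str:
    """Map a short family alias to its latest model; pass full names through."""
    return LATEST.get(name, name)

print(resolve_model("opus"))             # claude-opus-4.6
print(resolve_model("claude-sonnet-4"))  # claude-sonnet-4 (already a full name)
```

Full config names like `claude-opus-4.5` bypass the alias table, which is how you can pin a previous-generation model.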

## Reasoning / Thinking Levels

Some models support configurable reasoning levels, which control how much compute the model spends "thinking" before responding. You can set the thinking level with `/thinking` during a session.

### Codex 5.3

| Level | Description |
| --- | --- |
| Low | Low reasoning |
| Medium | Medium reasoning (default) |
| High | High reasoning |
| X-High | Maximum reasoning |

### Claude Models

Claude Opus 4.6, Opus 4.5, and Sonnet 4.5 support thinking at Off and High levels.
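Putting the two lists together, the supported levels per model can be sketched as a validation table. Again, this is a hypothetical illustration of the rules stated above, not Devin's API:

```python
# Illustrative sketch: which thinking levels each model supports, per the
# sections above. THINKING_LEVELS and supports_thinking are hypothetical names.
THINKING_LEVELS = {
    "codex-5.3": ["low", "medium", "high", "x-high"],  # default: medium
    "claude-opus-4.6": ["off", "high"],
    "claude-opus-4.5": ["off", "high"],
    "claude-sonnet-4.5": ["off", "high"],
}

def supports_thinking(model: str, level: str) -> bool:
    """Check whether a /thinking level is valid for the given model."""
    return level in THINKING_LEVELS.get(model, [])

print(supports_thinking("codex-5.3", "x-high"))          # True
print(supports_thinking("claude-sonnet-4.5", "medium"))  # False
```

Models absent from the table (e.g. the SWE and Gemini families) have no configurable thinking level listed here.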

## Setting the Model

Pass the `--model` flag when invoking `devin`:

```shell
devin --model opus -- refactor this module
devin --model sonnet -- explain this code
```

## Model Selection Tips

**Complex refactoring**

Use `opus` for multi-file refactors, architecture changes, and tasks requiring deep reasoning.

**Quick edits**

Use `swe` for straightforward edits, bug fixes, and questions; it's the fastest model.

**Cost-sensitive**

Use `swe-1.5-free` when budget matters.

Enterprise teams can restrict which models are available through Team Settings.