Define LLM configurations that agents and teams use for inference.
- Anthropic (Claude)
- OpenAI (GPT)
agno/models/anthropic
Configures an Anthropic Claude model.
Config
| Field | Type | Required | Default | Description |
|---|---|---|---|---|
| id | string | Yes | — | Model identifier (e.g., `claude-sonnet-4-20250514`) |
| api_key | Field[string] | Yes | — | Anthropic API key. Use a FieldReference to inject from a secret |
| max_tokens | integer | No | 8192 | Maximum tokens in responses |
| temperature | float | No | — | Sampling temperature (0.0–1.0) |
| top_p | float | No | — | Nucleus sampling parameter |
| top_k | integer | No | — | Top-k sampling parameter |
| stop_sequences | list[string] | No | — | Sequences that stop generation |
Outputs
| Field | Type | Description |
|---|---|---|
spec | object | Serialized model spec for runtime reconstruction |
Example
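As a hedged sketch only: the fields below follow the Config table above, but the surrounding resource syntax, the `kind`/`name` keys, and the secret path are assumptions, not the tool's confirmed format.

```yaml
# Hypothetical Anthropic model resource; field names come from the Config
# table, everything else (kind, name, $ref path) is illustrative.
kind: agno/models/anthropic
name: claude
config:
  id: claude-sonnet-4-20250514
  api_key:
    $ref: secrets.anthropic-key.value  # FieldReference into a pragma/secret (path is hypothetical)
  max_tokens: 8192
  temperature: 0.7
```

The resulting `spec` output is the serialized form of this configuration, which agents and teams consume to reconstruct the model at runtime.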
Dependencies
Depends on: nothing directly, but `api_key` typically uses a FieldReference to a `pragma/secret` resource.
Depended on by:
- `agno/agent` — as the required model
- `agno/team` — as the optional team lead model
- `agno/memory/manager` — as the optional classification model
Notes
- Both model types are stateless — they wrap configuration without making API calls during creation.
- The `api_key` field supports FieldReferences for injecting secrets from a `pragma/secret` resource.
- The `base_url` field on the OpenAI model allows you to use OpenAI-compatible APIs (e.g., Azure OpenAI, local models).
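To illustrate the `base_url` note, a sketch of pointing the OpenAI model at an OpenAI-compatible endpoint such as a local server; the resource syntax, `kind`/`name` keys, and secret path are assumptions:

```yaml
# Hypothetical OpenAI model resource targeting a local OpenAI-compatible API.
kind: agno/models/openai
name: local-gpt
config:
  id: gpt-4o-mini
  base_url: http://localhost:8000/v1   # any OpenAI-compatible endpoint
  api_key:
    $ref: secrets.openai-key.value     # FieldReference to a pragma/secret (path is hypothetical)
```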