PATCH /settings/llm

Update LLM Settings
curl --request PATCH \
  --url https://api.example.com/settings/llm \
  --header 'Content-Type: application/json' \
  --data '
{
  "provider": "<string>",
  "performance_profile": "fast"
}
'
{
  "organization_id": "<string>",
  "provider": "<string>",
  "performance_profile": "fast",
  "updated_at": "2023-11-07T05:31:56Z"
}

Body

application/json

Patch body for PATCH /settings/llm.

Either field may be omitted, in which case the existing value is preserved. Callers that want to unset either value must delete and recreate the settings row from the engine side.

Attributes:
- provider: New LLM provider resource id, or "platform_default" to fall back to the shared platform-managed provider.
- performance_profile: New performance profile tier.

provider
string | null
performance_profile
enum<string> | null

User-facing LLM performance profile selection.

Identifies which tier of model an organization wants platform agents to use. The profile is chosen by the user; the API resolves it to a concrete model from the selected provider's catalog entries.

Available options:
fast,
balanced,
reasoning
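The omit-to-preserve semantics above can be sketched client-side: a field left out of the payload keeps its current server-side value, so a caller changing one setting should drop the other from the body entirely rather than send null. A minimal sketch (the helper name and the client-side enum check are illustrative, not part of the API):

```python
import json

VALID_PROFILES = ("fast", "balanced", "reasoning")

def build_llm_settings_patch(provider=None, performance_profile=None):
    """Build a PATCH /settings/llm body, omitting unset fields.

    Fields passed as None are dropped from the payload entirely, so the
    server preserves their existing values; these settings cannot be
    unset via this endpoint.
    """
    body = {}
    if provider is not None:
        body["provider"] = provider
    if performance_profile is not None:
        if performance_profile not in VALID_PROFILES:
            raise ValueError(f"unknown performance profile: {performance_profile!r}")
        body["performance_profile"] = performance_profile
    return body

# Change only the profile; the provider setting is preserved server-side.
payload = build_llm_settings_patch(performance_profile="balanced")
print(json.dumps(payload))  # {"performance_profile": "balanced"}
```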

Response

Successful Response

Per-organization LLM settings for platform agents.

Holds the organization's choice of LLM provider resource and performance profile. The API reads these settings whenever a platform agent is invoked to resolve the concrete model and credentials to use.

Attributes:
- organization_id: Owning organization.
- provider: Resource identifier of the LLM provider resource the organization has selected. Opaque string; format matches the rest of the SDK's resource id references (SurrealDB record id shape, e.g. 'resources:anthropic_default_abc123'). Use 'platform_default' to reference the shared platform provider.
- performance_profile: Which tier of catalog model to use for platform agent invocations.
- updated_at: Timestamp of the last settings change.

organization_id
string
required
provider
string
required
performance_profile
enum<string>
required

User-facing LLM performance profile selection.

Identifies which tier of model an organization wants platform agents to use. The profile is chosen by the user; the API resolves it to a concrete model from the selected provider's catalog entries.

Available options:
fast,
balanced,
reasoning

updated_at
string<date-time>
required
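The resolution step described above (user picks a tier, the API maps it to a concrete model from the provider's catalog) can be sketched as a simple lookup. Everything in this snippet is hypothetical: the catalog layout and model names are invented for illustration, and the real catalog entries are internal to the API.

```python
# Hypothetical catalog: provider resource id -> profile tier -> model id.
# The provider id mirrors the SurrealDB record-id shape from the docs;
# the model names are placeholders.
CATALOG = {
    "resources:anthropic_default_abc123": {
        "fast": "small-model-v1",
        "balanced": "medium-model-v1",
        "reasoning": "large-model-v1",
    },
}

def resolve_model(provider: str, performance_profile: str) -> str:
    """Map (provider, performance_profile) to a concrete model id."""
    try:
        return CATALOG[provider][performance_profile]
    except KeyError as exc:
        raise LookupError(
            f"no catalog entry for {provider!r} / {performance_profile!r}"
        ) from exc

print(resolve_model("resources:anthropic_default_abc123", "fast"))
# small-model-v1
```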