Managing Models

The Models page is where you manage the LLM model catalog and provider credentials. Navigate to Models in the sidebar to access it.

Models Catalog

The page has two tabs: Catalog and Provider Credentials.

Catalog

The catalog lists all available LLM models that assistants can use. Models come pre-seeded with common providers (Anthropic, OpenAI, Google, etc.) and can be customized.

Model Table

Each model shows:

| Column | Description |
| --- | --- |
| Model Name & ID | Display name and provider-specific model identifier |
| Provider | LLM provider with a colored badge (e.g., Anthropic, OpenAI, Google) |
| Category | Tier badge: Premium, Standard, or Budget |
| Input Cost | Cost per 1M input tokens |
| Output Cost | Cost per 1M output tokens |
| Context Window | Maximum input context size |
| Max Output | Maximum output tokens |
| Credential | Linked provider credential (if any) |
| Status | Availability toggle, with a warning if the API key is missing |

Filtering

Use the Provider dropdown above the table to filter models by provider (e.g., All Providers, Anthropic, OpenAI).

Adding a Model

Click Add Model to register a new LLM in the catalog.

| Field | Description | Required |
| --- | --- | --- |
| Model ID | Provider-specific model identifier (e.g., claude-sonnet-4-20250514) | Yes |
| Display Name | Human-readable name | Yes |
| Provider | LLM provider | Yes |
| Category | Premium, Standard, or Budget | Yes |
| Input Cost per 1M Tokens | Pricing for input tokens | Yes |
| Output Cost per 1M Tokens | Pricing for output tokens | Yes |
| Context Window | Maximum context size | No |
| Max Output Tokens | Maximum generation length | No |
| Description | Model description and capabilities | No |
| Features | Supported features (e.g., vision, function calling) | No |
| Release Date | When the model was released | No |
| Available | Whether assistants can select this model | Yes |
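
Conceptually, submitting the Add Model form produces a catalog record like the sketch below. The dictionary keys and prices here are illustrative only, not the product's actual schema:

```python
# Hypothetical sketch of a catalog entry built from the Add Model form.
# Key names and prices are illustrative; this is not the real schema.
model = {
    "model_id": "claude-sonnet-4-20250514",  # provider-specific identifier
    "display_name": "Claude Sonnet 4",
    "provider": "anthropic",
    "category": "standard",                  # premium | standard | budget
    "input_cost_per_1m": 3.00,               # USD per 1M input tokens
    "output_cost_per_1m": 15.00,             # USD per 1M output tokens
    "context_window": 200_000,               # optional
    "max_output_tokens": 64_000,             # optional
    "available": True,                       # whether assistants can select it
}

# The fields marked "Yes" in the table above must be present.
required = {"model_id", "display_name", "provider", "category",
            "input_cost_per_1m", "output_cost_per_1m", "available"}
missing = required - model.keys()
```

Optional fields (Description, Features, Release Date) are simply omitted when left blank in the form.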

Editing and Deleting

Click a model row to edit its configuration. Use the delete action to remove a model from the catalog.

Tip: Use the Category field to organize models into tiers. When configuring an assistant's model, models are grouped by category so administrators can easily choose between Premium models (for complex reasoning) and Budget models (for high-volume, simple tasks).

Provider Credentials

The Provider Credentials tab lets you centralize API key management for LLM providers.

Instead of configuring API keys in environment variables or per-model, you can create named credentials and link them to multiple models.

Creating a Credential

Click New Credential to create a provider credential.

| Field | Description | Required |
| --- | --- | --- |
| Name | A descriptive name (e.g., "Production Anthropic Key") | Yes |
| Provider | The LLM provider this credential is for | Yes |
| API Key | The provider's API key | Yes |

Linking to Models

After creating a credential, link it to models in the Catalog tab. Models linked to a credential will use its API key for inference requests. The Credential column in the catalog table shows which credential each model uses.

Benefits

  • Centralized rotation — Update one credential instead of multiple configurations
  • Provider isolation — Use different API keys for different providers
  • Visibility — See at a glance which models have valid credentials

Note: Models without a linked credential fall back to environment-variable-based API keys. Provider credentials are optional but recommended for production deployments.
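
The resolution order described here (linked credential first, then environment variable) can be sketched as a small function. This is a hypothetical illustration, not the application's actual code; the environment-variable naming convention shown is an assumption:

```python
import os

def resolve_api_key(model, credentials):
    """Return the API key a model should use for inference.

    A linked provider credential wins; otherwise fall back to a
    provider-specific environment variable (naming is an assumption,
    e.g. ANTHROPIC_API_KEY for the "anthropic" provider).
    """
    cred_id = model.get("credential_id")
    if cred_id and cred_id in credentials:
        return credentials[cred_id]["api_key"]
    env_var = f"{model['provider'].upper()}_API_KEY"
    return os.environ.get(env_var)  # None -> catalog shows a missing-key warning

# Illustrative data: one named credential shared by linked models.
creds = {"cred-1": {"name": "Production Anthropic Key",
                    "provider": "anthropic",
                    "api_key": "sk-ant-..."}}
linked = {"provider": "anthropic", "credential_id": "cred-1"}
unlinked = {"provider": "openai"}
```

Because many models can point at the same credential ID, rotating a key means updating one credential record rather than every model that uses it.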

Model Categories

Models are organized into three tiers to help with assistant configuration:

| Category | Use Case | Example Models |
| --- | --- | --- |
| Premium | Complex reasoning, coding, analysis | Claude Opus 4, GPT-4o |
| Standard | General-purpose tasks | Claude Sonnet 4, GPT-4o Mini |
| Budget | High-volume, simple tasks | Claude Haiku 3.5 |

When configuring an assistant's model settings, the model selector groups available models by these categories.
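
The selector's grouping behavior can be sketched as follows. The catalog entries are illustrative examples; the real selector reads the live catalog and only offers models marked as available:

```python
from collections import defaultdict

# Illustrative catalog entries, not live data.
catalog = [
    {"name": "Claude Opus 4",    "category": "Premium",  "available": True},
    {"name": "GPT-4o",           "category": "Premium",  "available": True},
    {"name": "Claude Sonnet 4",  "category": "Standard", "available": True},
    {"name": "Claude Haiku 3.5", "category": "Budget",   "available": False},
]

def group_for_selector(models):
    """Group available models by tier, as the assistant model selector does."""
    groups = defaultdict(list)
    for m in models:
        if m["available"]:
            groups[m["category"]].append(m["name"])
    return dict(groups)
```

A model toggled off in the catalog (like the Budget entry above) simply never appears in any group.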

Next Steps