LLM Configuration

Get Current LLM Config

Retrieves the current LLM configuration.

GET /api/llm/current

Responses

Success (200)

{
  "config": {
    "provider": "openai",
    "model": "gpt-4o"
  }
}
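A minimal Python sketch of calling this endpoint with the standard library; the base URL and the helper name are assumptions for illustration, and the parsing step below uses the documented sample response:

```python
import json
import urllib.request

def get_current_llm(base_url: str) -> dict:
    """Fetch the current LLM config via GET /api/llm/current."""
    with urllib.request.urlopen(f"{base_url}/api/llm/current") as resp:
        return json.load(resp)["config"]

# Parsing the documented sample response:
sample = '{"config": {"provider": "openai", "model": "gpt-4o"}}'
config = json.loads(sample)["config"]
print(config["provider"], config["model"])  # openai gpt-4o
```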

List LLM Providers

Lists all available LLM providers and the models each one supports.

GET /api/llm/providers

Responses

Success (200)

{
  "providers": {
    "openai": {
      "name": "Openai",
      "models": ["gpt-4o", "gpt-4-turbo"],
      "supportedRouters": ["in-built", "vercel"],
      "supportsBaseURL": true
    },
    "cohere": {
      "name": "Cohere",
      "models": ["command-r-plus", "command-r", "command", "command-light"],
      "supportedRouters": ["vercel"],
      "supportsBaseURL": false
    }
  }
}
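One common use of this payload is selecting providers that work with a given router. A sketch using the documented sample response; the helper name is hypothetical:

```python
# Sample payload from GET /api/llm/providers, as documented above.
providers = {
    "openai": {
        "name": "Openai",
        "models": ["gpt-4o", "gpt-4-turbo"],
        "supportedRouters": ["in-built", "vercel"],
        "supportsBaseURL": True,
    },
    "cohere": {
        "name": "Cohere",
        "models": ["command-r-plus", "command-r", "command", "command-light"],
        "supportedRouters": ["vercel"],
        "supportsBaseURL": False,
    },
}

def providers_for_router(providers: dict, router: str) -> list:
    """Return provider keys whose supportedRouters include the given router."""
    return [key for key, info in providers.items()
            if router in info["supportedRouters"]]

print(providers_for_router(providers, "in-built"))  # ['openai']
print(providers_for_router(providers, "vercel"))    # ['openai', 'cohere']
```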

Switch LLM

Switches the active LLM configuration. All request-body fields are optional.

POST /api/llm/switch

Request Body

  • provider (string, optional)
  • model (string, optional)
  • router ("vercel" | "in-built", optional)
  • apiKey (string, optional)
  • baseURL (string, optional)
  • maxInputTokens (number, optional)
  • sessionId (string, optional)
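Since every field is optional, a request body should include only the keys being set. A sketch of building such a body; the helper name is hypothetical:

```python
import json

# Fields accepted by POST /api/llm/switch, per the schema above.
ALLOWED_FIELDS = {"provider", "model", "router", "apiKey",
                  "baseURL", "maxInputTokens", "sessionId"}

def build_switch_body(**fields) -> str:
    """Serialize a request body, dropping unknown and unset fields."""
    body = {k: v for k, v in fields.items()
            if k in ALLOWED_FIELDS and v is not None}
    return json.dumps(body)

payload = build_switch_body(provider="openai", model="gpt-4o", router="vercel")
print(payload)
```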

Responses

Success (200)

{
  "ok": true,
  "data": {
    "provider": "openai",
    "model": "gpt-4o",
    "router": "vercel"
  },
  "issues": []
}

Error (400)

{
  "ok": false,
  "issues": [
    {
      "code": "schema_validation",
      "message": "...",
      "path": ["provider"],
      "severity": "error",
      "context": {"field": "provider"}
    }
  ]
}
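A client can branch on the `ok` flag and report each entry in `issues`. A sketch based on the two documented payloads above; the helper name and the summary format are assumptions:

```python
def summarize_switch(resp: dict) -> str:
    """Turn a /api/llm/switch response into a one-line summary."""
    if resp.get("ok"):
        d = resp["data"]
        return f"switched to {d['provider']}/{d['model']} via {d['router']}"
    # On failure, report each issue's code and dotted path.
    return "; ".join(
        f"{issue['code']} at {'.'.join(issue.get('path', []))}"
        for issue in resp.get("issues", [])
    )

ok_resp = {
    "ok": True,
    "data": {"provider": "openai", "model": "gpt-4o", "router": "vercel"},
    "issues": [],
}
err_resp = {
    "ok": False,
    "issues": [{"code": "schema_validation", "message": "...",
                "path": ["provider"], "severity": "error",
                "context": {"field": "provider"}}],
}
print(summarize_switch(ok_resp))   # switched to openai/gpt-4o via vercel
print(summarize_switch(err_resp))  # schema_validation at provider
```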