
Models

The following model providers are supported:

OpenAI

Below is an example OpenAI provider config:

llm:
  models:
    - id: openai                   # unique identifier for this model entry
      openai:                      # provider type
        base_url: https://api.openai.com/v1
        chat_endpoint: /chat/completions
        model: gpt-3.5-turbo
        api_key: <YOUR API KEY>
        default_params:            # defaults applied to chat requests
          temperature: 0.8         # sampling randomness (lower = more deterministic)
          top_p: 1                 # nucleus sampling cutoff
          max_tokens: 100          # cap on generated tokens per response
          n: 1                     # number of completions per request
          frequency_penalty: 0
          presence_penalty: 0
          seed: 42                 # best-effort reproducible sampling
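
Most of the fields above have reasonable defaults. A trimmed-down sketch, assuming base_url, chat_endpoint, and default_params fall back to the values shown when omitted (an assumption — check the configuration schema for your version):

llm:
  models:
    - id: openai
      openai:
        # Only the essentials; omitted fields are assumed to use their defaults
        model: gpt-3.5-turbo
        api_key: <YOUR API KEY>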

Anthropic

Below is an example Anthropic provider config:

llm:
  models:
    - id: anthropic
      anthropic:
        base_url: https://api.anthropic.com/v1
        api_version: "2023-06-01"               # sent as the anthropic-version header
        chat_endpoint: /messages
        model: claude-instant-1.2
        api_key: <YOUR API KEY>
        default_params:
          system: You are a helpful assistant.  # default system prompt
          temperature: 1
          max_tokens: 250                       # the Messages API requires max_tokens
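
Because models is a YAML list, several providers can live side by side in one config. A sketch combining the two examples above, with provider endpoints omitted under the same default-value assumption as before (how the gateway routes between listed models depends on its routing settings, which are configured separately):

llm:
  models:
    - id: openai
      openai:
        model: gpt-3.5-turbo
        api_key: <YOUR OPENAI API KEY>
    - id: anthropic
      anthropic:
        api_version: "2023-06-01"
        model: claude-instant-1.2
        api_key: <YOUR ANTHROPIC API KEY>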

Ollama

Ollama is a great way to serve open-source LLMs locally and beyond.

Here is an example Ollama configuration that goes through the OpenAI client, since Ollama exposes an OpenAI-compatible API:

llm:
  models:
    - id: ollama
      openai:                        # Ollama speaks the OpenAI chat API
        base_url: http://localhost:11434
        chat_endpoint: /v1/chat/completions
        model: llama3.2
        api_key: ollama              # required by the config, but ignored by Ollama
        default_params:
          temperature: 0.8
          top_p: 0.9
          num_ctx: 2048              # Ollama-specific: context window size
          top_k: 40
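
Since this reuses the generic OpenAI client, the same pattern should work for any server that exposes an OpenAI-compatible API, not just Ollama. A sketch pointing the provider at a vLLM server instead (vLLM serves its OpenAI-compatible API on port 8000 by default; the model name below is a placeholder for whatever model the server was started with):

llm:
  models:
    - id: vllm
      openai:
        base_url: http://localhost:8000
        chat_endpoint: /v1/chat/completions
        model: mistralai/Mistral-7B-Instruct-v0.2  # must match the model the server loaded
        api_key: vllm                              # ignored unless the server enforces a key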