Arcade Engine configuration

Arcade Engine's configuration is a YAML file with the following sections:

  • api - Configures the server for specific protocols
  • llm/models - Defines a collection of AI models available for routing
  • tools - Configures tools for AI models to use
  • auth - Configures user authorization providers and token storage
  • telemetry - Configures telemetry and observability
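
The skeleton below sketches how these sections fit together at the top level; the values are illustrative and each section is described in detail later on this page.

api:
  http:
    host: localhost
    port: 9099

llm:
  models:
    - id: primary
      openai:
        api_key: ${env:OPENAI_API_KEY}

tools:
  directors:
    - id: default
      enabled: true
      actors: []   # actors that serve tool calls (see Tools configuration)

auth:
  token_store:
    in_memory:
      max_size: 10000

telemetry:
  environment: local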

Specify a config file

To start the Arcade Engine, pass a config file:

engine --config /path/to/config.yaml

Dotenv files

Arcade Engine automatically loads environment variables from .env files in the directory where it is run. Use the --env flag to specify a different path:

engine --env .env.dev --config config.yaml
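
For example, a .env.dev file might define the secrets referenced later on this page (the values are placeholders):

OPENAI_API_KEY=<your OpenAI API key>
ARCADE_ACTOR_SECRET=<random value>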

Secrets

Arcade Engine supports two ways of passing sensitive information like API keys:

Environment variables:

llm:
  models:
    - id: primary
      openai:
        api_key: ${env:OPENAI_API_KEY}

External files (useful in cloud setups):

llm:
  models:
    - id: primary
      openai:
        api_key: ${file:/path/to/secret}

API configuration

HTTP is the only supported protocol for Arcade Engine's API. The following configuration options are available:

  • api.development (optional, default: false) - Enable development mode, with more logging and simple actor authentication
  • api.http.host (default: localhost) - Address to which Arcade Engine binds its server (e.g., localhost or 0.0.0.0)
  • api.http.read_timeout (optional, default: 30s) - Timeout for reading data from clients
  • api.http.write_timeout (optional, default: 1m) - Timeout for writing data to clients
  • api.http.idle_timeout (optional, default: 30s) - Timeout for idle connections
  • api.http.max_request_body_size (optional, default: 4Mb) - Maximum request body size

Sample configuration:

api:
  development: true
  http:
    host: 0.0.0.0
    port: 9099
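
A fuller sketch that also sets the optional timeouts and request body size limit described above; the values mirror the stated defaults, and the port matches the earlier example:

api:
  development: false
  http:
    host: localhost
    port: 9099
    read_timeout: 30s
    write_timeout: 1m
    idle_timeout: 30s
    max_request_body_size: 4Mb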

LLM (model) configuration

The llm.models section defines the models that the engine can route to, and the API keys or parameters for each model. Each item in the models array must have a unique id and a provider-specific configuration.

This example shows configuration for connecting to OpenAI and Azure OpenAI:

llm:
  models:
    - id: primary
      openai:
        api_key: ${env:OPENAI_API_KEY}
        default_params:
          temperature: 0
    - id: secondary
      azureopenai:
        api_key: ${env:AZURE_OPENAI_API_KEY}
        model: "engine-GPT-35"
        base_url: "https://mydeployment.openai.azure.com/"

For OpenAI, Cohere, and OctoML, only an API key is needed. For Azure OpenAI, specify a model name and base URL.
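
As an illustration, a Cohere model entry would need only an API key; the cohere provider key below is an assumption, so check the model configuration page for the exact field name:

llm:
  models:
    - id: tertiary
      cohere:   # provider key name assumed; see model configuration
        api_key: ${env:COHERE_API_KEY}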

For more details on model configuration and model-specific parameters, see model configuration.

Routing

When client code calls the Arcade LLM API, it specifies a model by name. Arcade Engine matches the model name to a model in the configuration and routes the request to the appropriate provider.

If more than one LLM provider is configured, Arcade Engine attempts to route to the correct provider based on known model names. For example, the following request is routed to the OpenAI provider because gpt-4o is a known OpenAI model:

from openai import OpenAI

# Point the OpenAI client at Arcade Engine's OpenAI-compatible API
client = OpenAI(base_url="http://localhost:9099/v1")
 
response = client.chat.completions.create(
    messages=[
        {"role": "user", "content": "Who is the CEO of Apple?"},
    ],
    model="gpt-4o",
)

If two providers serve the same model names, or a custom model is used, the provider can be specified explicitly by prefixing the model name with the provider's id.

For example, given an llm.models configuration with a provider whose id is primary, this request is routed to that provider with the model gpt-4o:

response = client.chat.completions.create(
    messages=[
        {"role": "user", "content": "Who is the CEO of Apple?"},
    ],
    model="primary/gpt-4o",
)

Tools configuration

Arcade Engine orchestrates tools that AI models can use. Tools are executed by distributed workers called actors, which are grouped into directors.

The tools.directors section configures the actors that are available to service tool calls:

tools:
  directors:
    - id: default
      enabled: true
      actors:
        - id: "localactor"
          enabled: true
          http:
            uri: "http://localhost:8002"
            timeout: 30
            retry: 3
            secret: ${env:ARCADE_ACTOR_SECRET}

When an actor is added to an enabled director, all of the tools hosted by that actor will be available to the LLM and the Arcade API.
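
For illustration, a director can list several actors; the second actor and its URI below are hypothetical, but the tools hosted by both actors would be exposed once the director is enabled:

tools:
  directors:
    - id: default
      enabled: true
      actors:
        - id: "localactor"
          enabled: true
          http:
            uri: "http://localhost:8002"
            timeout: 30
            retry: 3
            secret: ${env:ARCADE_ACTOR_SECRET}
        - id: "cloudactor"       # hypothetical second actor
          enabled: true
          http:
            uri: "https://actor.example.com"
            timeout: 30
            retry: 3
            secret: ${env:ARCADE_ACTOR_SECRET}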

HTTP actor configuration

The http sub-section configures the HTTP client used to call the actor's tools:

  • uri (required) - The base URL of the actor's tools
  • secret (required) - Secret used to authenticate with the actor
  • timeout (required) - Timeout for calling the actor's tools
  • retry (required) - Number of retries to attempt

Each actor must be configured with a secret that the engine uses to authenticate its requests to the actor. This ensures that actors are not exposed to the public internet without protection.

If api.development = true, the secret will default to "dev" for local development only. In production, the secret must be set to a random value.

Auth configuration

Arcade Engine manages auth for tools and agents. This configuration controls what providers are available, and how tokens are stored.

Token store

When users authorize with an external service, their tokens are stored securely in the token store. Tokens are later retrieved from the store when AI models need to perform authorized actions.

Arcade Engine supports in-memory and Redis-based token stores.

In-memory

The in-memory token store is not persistent and is erased when the Engine process shuts down. It's intended for local development and testing.

auth:
  token_store:
    in_memory:
      max_size: 10000

Redis

The Redis-based token store is persistent and can be used in production environments.

auth:
  token_store:
    redis:
      addr: "redis:6379"
      password: ${env:REDIS_PASSWORD}

Auth providers

The auth.providers section defines the providers that users can authorize with. Arcade Engine supports many built-in auth providers, and can also connect to any OAuth 2.0-compatible authorization server.

The providers array contains provider definitions, each with an id and provider-specific configuration:

auth:
  token_store:
    redis:
      addr: "redis:6379"
      password: ${env:REDIS_PASSWORD}

  providers:
    - id: github
      enabled: true
      client_id: ${env:GITHUB_CLIENT_ID}
      client_secret: ${env:GITHUB_CLIENT_SECRET}

    - id: hooli
      enabled: true
      type: oauth2
      client_id: ${env:HOOLI_CLIENT_ID}
      client_secret: ${env:HOOLI_CLIENT_SECRET}
      oauth2:
        # Connection details here...

The id of the provider is used to reference the provider in code, and must be unique.

The auth providers page includes configuration details for each supported provider.

Telemetry configuration

Arcade supports logs, metrics, and traces with OpenTelemetry.

If you are running Arcade Engine locally, you can set the environment field to local, which outputs logs only to the console.

To connect to OpenTelemetry-compatible collectors, set the necessary OpenTelemetry environment variables in the .env file.
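
For example, the standard OTLP exporter variables might look like this in the .env file (the endpoint and header values are placeholders):

OTEL_EXPORTER_OTLP_ENDPOINT=https://otel-collector.example.com:4317
OTEL_EXPORTER_OTLP_PROTOCOL=grpc
OTEL_EXPORTER_OTLP_HEADERS=authorization=Bearer <token>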

The environment and version fields are added as telemetry attributes, which can be filtered on later.

telemetry:
  environment: local
  version: ${env:VERSION}
  logging:
    level: debug # debug, info, warn, error, fatal
    encoding: console

Notes

  • The Engine service name is set to arcade_engine
  • Traces currently cover the /v1/health and /v1/chat/completions endpoints as well as authentication attempts