LLM Proxy (Middleman)

Middleman is Hawk's built-in LLM proxy. It runs on ECS Fargate and routes model API calls to providers (OpenAI, Anthropic, Google Vertex, DeepSeek, Fireworks, and more) with automatic token refresh and access control.

How It Works

When evaluations run on the cluster, Inspect AI sends model API calls through Middleman instead of directly to providers. Middleman:

  1. Authenticates the request using the runner's scoped credentials
  2. Routes the request to the correct provider API
  3. Handles token refresh and retries
  4. Enforces model group permissions
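The four steps above can be sketched as a minimal request pipeline. This is illustrative only, not Middleman's actual implementation: the function, the token set, and the provider URL map are all hypothetical names invented for the sketch.

```python
# Illustrative sketch of Middleman's request flow (hypothetical names;
# not the real implementation).

PROVIDER_BASE_URLS = {
    "openai": "https://api.openai.com/v1",
    "anthropic": "https://api.anthropic.com/v1",
}

# Scoped runner credentials (hypothetical placeholder data).
VALID_RUNNER_TOKENS = {"runner-token-abc"}

def handle_request(token: str, provider: str, model: str,
                   allowed_models: set[str]) -> str:
    # 1. Authenticate the request using the runner's scoped credential.
    if token not in VALID_RUNNER_TOKENS:
        raise PermissionError("unknown runner credential")
    # 4. Enforce model group permissions before forwarding.
    if model not in allowed_models:
        raise PermissionError(f"{model!r} not permitted for this runner")
    # 2. Route to the correct provider API. 3. Token refresh and retries
    # would wrap the outbound HTTP call made against this URL.
    return f"{PROVIDER_BASE_URLS[provider]}/chat/completions"
```

The key property the sketch captures is ordering: authentication and the permission check both happen before any request is forwarded upstream.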

Setting Up API Keys

Store provider API keys in AWS Secrets Manager:

cd hawk
./scripts/dev/set-api-keys.sh <env> OPENAI_API_KEY=sk-...

This stores the key in Secrets Manager and restarts Middleman so it picks up the new value. You can set multiple keys in one invocation:

./scripts/dev/set-api-keys.sh <env> OPENAI_API_KEY=sk-... ANTHROPIC_API_KEY=sk-ant-...

Supported Providers

OPENAI_API_KEY, ANTHROPIC_API_KEY, GEMINI_API_KEY, DEEPINFRA_TOKEN, DEEPSEEK_API_KEY, FIREWORKS_API_KEY, MISTRAL_API_KEY, OPENROUTER_API_KEY, TOGETHER_API_KEY, XAI_API_KEY.
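The variable names above can be captured as a lookup table, for example to check which providers your environment has keys for. The provider labels are my own shorthand; only the environment variable names come from the list above.

```python
import os

# Provider API-key environment variables recognized by Middleman,
# mapped to shorthand provider labels (labels are illustrative).
PROVIDER_KEYS = {
    "OPENAI_API_KEY": "openai",
    "ANTHROPIC_API_KEY": "anthropic",
    "GEMINI_API_KEY": "gemini",
    "DEEPINFRA_TOKEN": "deepinfra",
    "DEEPSEEK_API_KEY": "deepseek",
    "FIREWORKS_API_KEY": "fireworks",
    "MISTRAL_API_KEY": "mistral",
    "OPENROUTER_API_KEY": "openrouter",
    "TOGETHER_API_KEY": "together",
    "XAI_API_KEY": "xai",
}

def configured_providers() -> list[str]:
    """Return labels of providers whose key is set in the environment."""
    return [p for var, p in PROVIDER_KEYS.items() if os.environ.get(var)]
```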

Bypassing the Proxy

To call providers with your own API keys instead of Middleman's, first disable the runner's token refresh by clearing the refresh URL in your eval config:

runner:
  environment:
    INSPECT_ACTION_RUNNER_REFRESH_URL: ""

Then pass your API key as a secret:

hawk eval-set config.yaml --secret OPENAI_API_KEY

Model Configuration

Model configurations are stored in the database. Models are organized into model groups for access control — users must belong to a model's group to use it.
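A minimal sketch of the group-based access check described above (the data layout and function name are hypothetical; the real configuration lives in Middleman's database):

```python
# Hypothetical in-memory stand-in for the database-backed model groups.
MODEL_GROUPS = {
    "premium": {"models": {"gpt-4o"}, "members": {"alice"}},
    "standard": {"models": {"gpt-4o-mini"}, "members": {"alice", "bob"}},
}

def can_use_model(user: str, model: str) -> bool:
    """A user may call a model only if some group contains both."""
    return any(
        model in group["models"] and user in group["members"]
        for group in MODEL_GROUPS.values()
    )
```

With the placeholder data above, `can_use_model("bob", "gpt-4o")` is false because bob is not a member of the only group containing that model.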

Deploying Changes

Middleman runs on ECS Fargate. Deployments are triggered by pushing to the main branch, which builds a new Docker image and updates the ECS service via CI/CD.

Running Locally

cd middleman
# Add API keys to .env (see example.env)
docker compose up --build

Testing the Passthrough API

uv run scripts/exercise_passthrough.py --help

This script tests the passthrough API against multiple providers (Anthropic, OpenAI, OpenRouter).