LLM Proxy (Middleman)¶
Middleman is Hawk's built-in LLM proxy. It runs on ECS Fargate and routes model API calls to providers (OpenAI, Anthropic, Google Vertex, DeepSeek, Fireworks, and more) with automatic token refresh and access control.
How It Works¶
When evaluations run on the cluster, Inspect AI sends model API calls through Middleman instead of directly to providers. Middleman:
- Authenticates the request using the runner's scoped credentials
- Routes the request to the correct provider API
- Handles token refresh and retries
- Enforces model group permissions
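Conceptually, the runner points each provider SDK at Middleman instead of the upstream API, and Inspect AI runs unchanged. A hedged sketch of that wiring (the internal hostname and route prefixes are assumptions, not the actual deployment's URLs):

```shell
# Point provider SDKs at the proxy instead of the upstream APIs.
# Hostname and path prefixes below are illustrative assumptions.
export OPENAI_BASE_URL=https://middleman.internal/openai/v1
export ANTHROPIC_BASE_URL=https://middleman.internal/anthropic

# Inspect AI then routes all model calls through the proxy.
inspect eval my_task.py --model openai/gpt-4o
```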
Setting Up API Keys¶
Store provider API keys in AWS Secrets Manager:
This stores the key and restarts Middleman so it picks up the new value. You can also set multiple keys at once:
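Concretely, with the AWS CLI this might look like the following (the `middleman/` secret-name prefix is an assumption; your deployment's naming convention and restart tooling may differ):

```shell
# Store one provider key (secret name follows a hypothetical convention;
# the secret must already exist in Secrets Manager).
aws secretsmanager put-secret-value \
  --secret-id middleman/OPENAI_API_KEY \
  --secret-string "$OPENAI_API_KEY"

# Set several keys in one pass.
for name in OPENAI_API_KEY ANTHROPIC_API_KEY GEMINI_API_KEY; do
  aws secretsmanager put-secret-value \
    --secret-id "middleman/$name" \
    --secret-string "${!name}"
done
```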
Supported Providers¶
OPENAI_API_KEY, ANTHROPIC_API_KEY, GEMINI_API_KEY, DEEPINFRA_TOKEN, DEEPSEEK_API_KEY, FIREWORKS_API_KEY, MISTRAL_API_KEY, OPENROUTER_API_KEY, TOGETHER_API_KEY, XAI_API_KEY.
Bypassing the Proxy¶
To use your own API keys instead of Middleman, pass them as secrets and disable the proxy's token refresh:
Then pass your API key as a secret:
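The exact mechanism is deployment-specific; a hedged sketch, assuming a CLI that forwards named secrets into the eval environment (the flag names and the `hawk` command are assumptions, not a documented interface):

```shell
# Hypothetical flags -- not necessarily the actual CLI surface.
export OPENAI_API_KEY=sk-your-own-key
hawk eval my_task.py \
  --secret OPENAI_API_KEY \
  --no-token-refresh
```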
Model Configuration¶
Model configurations are stored in the database. Models are organized into model groups for access control — users must belong to a model's group to use it.
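As an illustration only (the actual schema is not documented here; the table and column names below are assumptions), group-based access control might be inspected with a query like:

```shell
# Hypothetical schema: which models can a given user reach?
psql "$DATABASE_URL" -c "
  SELECT m.name
  FROM models m
  JOIN user_model_groups umg ON umg.model_group_id = m.model_group_id
  WHERE umg.user_id = 'alice';"
```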
Deploying Changes¶
Middleman runs on ECS Fargate. Deployments are triggered by pushing to the main branch, which builds a new Docker image and updates the ECS service via CI/CD.
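CI/CD normally handles deployment, but a manual redeploy of an ECS service can be forced with the AWS CLI (the cluster and service names below are assumptions):

```shell
# Force ECS to pull the latest image and roll the service.
aws ecs update-service \
  --cluster hawk \
  --service middleman \
  --force-new-deployment
```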
Running Locally¶
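The repository's local setup is not described here; a generic sketch for running a containerized service locally (the image name, port, and env file are assumptions):

```shell
# Build and run the proxy locally, loading provider keys from a local env file.
docker build -t middleman .
docker run --rm -p 8080:8080 --env-file .env middleman
```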
Testing the Passthrough API¶
This script tests the passthrough API against multiple providers (Anthropic, OpenAI, OpenRouter).
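In lieu of the script itself, here is what such a check looks like conceptually: send a provider-native request through the proxy and confirm a well-formed response comes back. The base URL and per-provider route prefixes below are assumptions:

```shell
# Hypothetical proxy base URL and route prefixes -- adjust for your deployment.
BASE=http://localhost:8080

# Anthropic-native request shape, passed through the proxy.
curl -s "$BASE/anthropic/v1/messages" \
  -H "x-api-key: $ANTHROPIC_API_KEY" \
  -H "anthropic-version: 2023-06-01" \
  -H "content-type: application/json" \
  -d '{"model": "claude-3-5-haiku-latest", "max_tokens": 16,
       "messages": [{"role": "user", "content": "ping"}]}'

# OpenAI-native request shape, passed through the proxy.
curl -s "$BASE/openai/v1/chat/completions" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "content-type: application/json" \
  -d '{"model": "gpt-4o-mini",
       "messages": [{"role": "user", "content": "ping"}]}'
```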