Provider Management

Providers are the LLM services that TokenHub routes requests to. TokenHub ships with adapter support for OpenAI, Anthropic, and vLLM (OpenAI-compatible).

Registration Methods

Credentials File

The ~/.tokenhub/credentials file is a declarative JSON file processed at startup. Providers are persisted to the database, and API keys are stored in the vault (when it is unlocked via TOKENHUB_VAULT_PASSWORD). Processing is idempotent, so the file can remain in place across restarts.

The file must have 0600 permissions and live outside the source tree.

{
  "providers": [
    {
      "id": "openai",
      "type": "openai",
      "base_url": "https://api.openai.com",
      "api_key": "sk-..."
    },
    {
      "id": "anthropic",
      "type": "anthropic",
      "base_url": "https://api.anthropic.com",
      "api_key": "sk-ant-..."
    },
    {
      "id": "ollama-local",
      "type": "openai",
      "base_url": "http://localhost:11434"
    }
  ],
  "models": [
    {
      "id": "gpt-4o",
      "provider_id": "openai",
      "weight": 8,
      "max_context_tokens": 128000,
      "input_per_1k": 0.0025,
      "output_per_1k": 0.01
    }
  ]
}
Fields:

  • id (string, required): Unique provider identifier
  • type (string, required): Provider type; one of openai, anthropic, or vllm
  • base_url (string, required): Provider API base URL
  • api_key (string, optional): API key, stored in the vault when available; omit for keyless providers
  • enabled (bool, optional): Whether the provider is active (default: true)

Override the default path with TOKENHUB_CREDENTIALS_FILE.
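The setup above can be sketched in shell; the path is the default from this page, and the permissions follow the 0600 requirement:

```shell
# Create the credentials file with owner-only (0600) permissions before filling it in.
mkdir -p ~/.tokenhub
install -m 0600 /dev/null ~/.tokenhub/credentials
# Optional: point TokenHub at a non-default location.
export TOKENHUB_CREDENTIALS_FILE="$HOME/.tokenhub/credentials"
```

Creating the file empty and locking down its mode first avoids a window where the key is world-readable.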

Admin API / tokenhubctl

Providers can be registered and managed dynamically via the admin API or tokenhubctl at any time after the service starts.

Admin UI

The setup wizard at /admin walks through adding providers interactively.

API Operations

Create or Update a Provider

curl -X POST http://localhost:8080/admin/v1/providers \
  -H "Content-Type: application/json" \
  -d '{
    "id": "openai-prod",
    "type": "openai",
    "enabled": true,
    "base_url": "https://api.openai.com",
    "cred_store": "vault",
    "api_key": "sk-..."
  }'

Or with tokenhubctl:

tokenhubctl provider add '{"id":"openai-prod","type":"openai","base_url":"https://api.openai.com","api_key":"sk-..."}'
Fields:

  • id (string, required): Unique provider identifier
  • type (string, required): Provider type; one of openai, anthropic, or vllm
  • enabled (bool, optional): Whether the provider is active (default: true)
  • base_url (string, required): Provider API base URL
  • cred_store (string, optional): Where to store credentials; vault or none
  • api_key (string, optional): API key, stored according to cred_store

List Providers

curl http://localhost:8080/admin/v1/providers
tokenhubctl provider list

The tokenhubctl provider list command merges providers from both the persistent store and the runtime engine, showing base URLs derived from adapter health endpoints and indicating whether each provider is store-persisted or runtime-only.

API keys are never returned in list responses.

Edit a Provider

Partial updates via PATCH:

curl -X PATCH http://localhost:8080/admin/v1/providers/openai \
  -H "Content-Type: application/json" \
  -d '{"base_url": "https://api.openai.com", "enabled": true}'

Or:

tokenhubctl provider edit openai '{"base_url":"https://api.openai.com","enabled":true}'

Patchable fields: type, base_url, enabled, api_key, cred_store.
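Because api_key and cred_store are both patchable, a stored key can be rotated in place without recreating the provider. A sketch (the provider id matches the examples above; the key value is a placeholder):

```shell
# Rotate the stored API key for an existing provider (key value is a placeholder).
curl -X PATCH http://localhost:8080/admin/v1/providers/openai \
  -H "Content-Type: application/json" \
  -d '{"cred_store": "vault", "api_key": "sk-..."}'
```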

Delete a Provider

curl -X DELETE http://localhost:8080/admin/v1/providers/openai-staging
tokenhubctl provider delete openai-staging

Discover Models

Query a provider's API to discover available models:

curl http://localhost:8080/admin/v1/providers/openai/discover
tokenhubctl provider discover openai

This calls the provider's /v1/models endpoint (using the stored API key from the vault if available) and returns the list of models with a registered flag indicating which are already configured in TokenHub.
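Assuming the response carries a list of models each with an id and the registered flag described above (the exact shape may differ by TokenHub version), the models not yet configured can be filtered client-side:

```shell
# List models the provider offers that are not yet configured in TokenHub.
# Assumes a response of the form {"models": [{"id": "...", "registered": false}, ...]}.
curl -s http://localhost:8080/admin/v1/providers/openai/discover \
  | python3 -c 'import json,sys; [print(m["id"]) for m in json.load(sys.stdin).get("models", []) if not m.get("registered")]'
```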

Credential Storage Options

Supported cred_store values:

  • vault: API key is encrypted and stored in the vault (default when api_key is provided)
  • none: No credentials needed (e.g., local vLLM/Ollama without auth)

When using vault, the API key is encrypted with AES-256-GCM and only available when the vault is unlocked.
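For keyless local backends, register the provider with cred_store set to none and no api_key. A sketch (the id, port, and backend are illustrative):

```shell
# Register a local vLLM server that requires no credentials (id and URL are examples).
tokenhubctl provider add '{"id":"vllm-local","type":"vllm","base_url":"http://localhost:8000","cred_store":"none"}'
```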

Supported Provider Types

OpenAI (openai)

  • API endpoint: /v1/chat/completions
  • Health probe: GET /v1/models
  • Streaming: SSE (native)
  • Authentication: Authorization: Bearer <key>

Anthropic (anthropic)

  • API endpoint: /v1/messages
  • Health probe: GET /v1/messages (a 405 Method Not Allowed response indicates the endpoint is reachable, so it counts as healthy)
  • Streaming: SSE (native)
  • Authentication: x-api-key: <key>, anthropic-version: 2023-06-01

vLLM (vllm)

  • API endpoint: /v1/chat/completions (OpenAI-compatible)
  • Health probe: GET /health
  • Streaming: SSE (OpenAI-compatible)
  • Authentication: None (or custom header if configured)
  • Multi-endpoint: Supports multiple endpoints with round-robin load balancing

Audit Trail

All provider mutations are logged in the audit trail:

  • provider.upsert — Provider created or updated
  • provider.patch — Provider partially updated
  • provider.delete — Provider removed