Workaround: Adding Unreleased GitHub Copilot Models to OpenClaw
When GitHub Copilot rolls out new models (like GPT-5.4) before OpenClaw adds them to its built-in catalog, you’ll get errors like:
```
FailoverError: Unknown model: github-copilot/gpt-5.4
```
This post documents a workaround using inline model definitions in openclaw.json.
The Problem
OpenClaw resolves models through a catalog. New Copilot models that aren’t in the catalog yet will fail with “Unknown model”, even though they’re available through your Copilot subscription.
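Conceptually, the failure mode looks like this. This is a simplified Python sketch of catalog-first resolution, not OpenClaw's actual code; the catalog entries and class name are hypothetical stand-ins:

```python
# Simplified sketch of catalog-based model resolution (illustrative only;
# not OpenClaw's real implementation). Catalog contents are hypothetical.
BUILTIN_CATALOG = {"github-copilot/gpt-4o", "github-copilot/o1"}

class FailoverError(Exception):
    """Raised when no catalog entry matches the requested model."""

def resolve_model(model_id: str, catalog: set) -> str:
    """Return the model id if the catalog knows it, else fail over."""
    if model_id not in catalog:
        raise FailoverError(f"Unknown model: {model_id}")
    return model_id

# A model that is live on your Copilot subscription but absent from the
# catalog still fails at the resolution step:
try:
    resolve_model("github-copilot/gpt-5.4", BUILTIN_CATALOG)
except FailoverError as e:
    print(e)  # Unknown model: github-copilot/gpt-5.4
```

The point: availability upstream is irrelevant until the local catalog can resolve the id, which is why the fix below injects a definition locally.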
Simply adding a model entry isn't enough; there are two hidden gotchas:

- Missing `api` field → `Error: No API provider registered for api: undefined`
- Missing IDE auth headers → `HTTP 400: missing Editor-Version header for IDE auth`
The Fix
Add the model as an inline provider definition in `openclaw.json` under `models.providers`, with the correct API adapter and IDE authentication headers:
```json
{
  "models": {
    "providers": {
      "github-copilot": {
        "baseUrl": "https://api.enterprise.githubcopilot.com",
        "headers": {
          "Editor-Version": "vscode/1.96.2",
          "Copilot-Integration-Id": "vscode-chat",
          "Editor-Plugin-Version": "copilot-chat/0.26.7"
        },
        "models": [
          {
            "id": "gpt-5.4",
            "name": "GPT-5.4",
            "api": "openai-responses",
            "reasoning": true,
            "input": ["text", "image"],
            "cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 },
            "contextWindow": 128000,
            "maxTokens": 128000
          }
        ]
      }
    }
  }
}
```
You can apply this with the CLI:
```bash
openclaw config patch --raw '<the JSON above>'
```
Or edit `~/.openclaw/openclaw.json` directly and restart the gateway:

```bash
openclaw gateway restart
```
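If you'd rather script the edit than paste JSON into the CLI, a deep merge does the job. This is a sketch under the assumptions above (config path `~/.openclaw/openclaw.json`, schema from this post); `deep_merge` is a helper written here, not an OpenClaw API:

```python
import json

# Patch fragment to merge into openclaw.json (trimmed; use the full
# provider block from the JSON above in practice).
PATCH = {
    "models": {
        "providers": {
            "github-copilot": {
                "baseUrl": "https://api.enterprise.githubcopilot.com",
                # ...plus the "headers" and "models" entries shown above...
            }
        }
    }
}

def deep_merge(base: dict, patch: dict) -> dict:
    """Recursively merge patch into base, preferring patch values."""
    for key, value in patch.items():
        if isinstance(value, dict) and isinstance(base.get(key), dict):
            deep_merge(base[key], value)
        else:
            base[key] = value
    return base

merged = deep_merge({}, PATCH)  # in practice: load ~/.openclaw/openclaw.json as base
print(json.dumps(merged, indent=2))
# Write the result back to ~/.openclaw/openclaw.json, then run:
#   openclaw gateway restart
```

Merging rather than overwriting matters: it preserves whatever else already lives in your config file.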
Key Details
Why `api: "openai-responses"`?
GitHub Copilot’s chat models use the OpenAI Responses API format. Without this field, the inline model has no API adapter and fails immediately.
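To make the failure mode concrete, here is a hypothetical sketch of an adapter registry lookup; the registry and adapter names are invented for illustration and are not OpenClaw's internals:

```python
# Illustrative only: a stand-in for an internal API-adapter registry.
ADAPTERS = {
    "openai-responses": "OpenAIResponsesAdapter",  # hypothetical name
    "openai-chat": "OpenAIChatAdapter",            # hypothetical name
}

def get_adapter(model: dict) -> str:
    """Look up the adapter for a model definition's "api" field."""
    api = model.get("api")  # None when the field is omitted from the config
    if api not in ADAPTERS:
        # The real error reports "undefined" (JavaScript); Python sees None.
        raise RuntimeError(f"No API provider registered for api: {api}")
    return ADAPTERS[api]

print(get_adapter({"id": "gpt-5.4", "api": "openai-responses"}))
```

With the field omitted, the lookup key is missing entirely, which is why the error appears immediately rather than at request time.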
Why the IDE headers?
The Copilot API endpoint requires IDE identification headers. Built-in models get these injected automatically through OpenClaw’s internal Copilot auth flow, but inline models bypass that path. The three required headers are:
| Header | Value | Purpose |
|---|---|---|
| `Editor-Version` | `vscode/1.96.2` | Identifies the "editor" making the request |
| `Copilot-Integration-Id` | `vscode-chat` | Identifies the integration type |
| `Editor-Plugin-Version` | `copilot-chat/0.26.7` | Plugin version for compatibility checks |
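As a quick sanity check before debugging `HTTP 400` responses, you can verify a header set covers all three. This is plain Python written for this post, not an OpenClaw API:

```python
# The three IDE headers the Copilot endpoint expects (values from this post;
# they may need updating over time).
REQUIRED_IDE_HEADERS = {
    "Editor-Version": "vscode/1.96.2",
    "Copilot-Integration-Id": "vscode-chat",
    "Editor-Plugin-Version": "copilot-chat/0.26.7",
}

def missing_ide_headers(headers: dict) -> list:
    """Return the names of required IDE headers absent from `headers`."""
    return [name for name in REQUIRED_IDE_HEADERS if name not in headers]

# An auth-only header set is what an inline model sends by default:
print(missing_ide_headers({"Authorization": "Bearer <token>"}))  # all three are missing
```

If the list is non-empty for the headers your config actually sends, that is the likely cause of the 400.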
Will this break existing models?
No. The inline provider definition only adds new model resolution paths. Existing built-in models resolve first and are unaffected. If OpenClaw later adds the same model to its catalog, the built-in definition takes priority and the inline one is silently skipped.
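The resolution order described above can be sketched as a two-tier lookup. This is an assumption-labeled illustration of the behavior, not OpenClaw's code; both model tables are hypothetical:

```python
# Illustrative two-tier resolution (built-in first, inline as fallback).
# Both tables are hypothetical examples, not real catalog contents.
BUILTIN = {"github-copilot/gpt-4o": {"source": "builtin"}}
INLINE = {"github-copilot/gpt-5.4": {"source": "inline"}}

def resolve(model_id: str) -> dict:
    if model_id in BUILTIN:
        return BUILTIN[model_id]  # built-in definitions always win
    if model_id in INLINE:
        return INLINE[model_id]   # inline definitions only fill gaps
    raise KeyError(f"Unknown model: {model_id}")

print(resolve("github-copilot/gpt-5.4")["source"])  # inline
```

This ordering is what makes the workaround self-retiring: once a catalog update ships the model, the built-in entry shadows yours.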
After Setup
Register the model in your agent config so you can use it:
```json
{
  "agents": {
    "defaults": {
      "models": {
        "github-copilot/gpt-5.4": {
          "alias": "gpt54"
        }
      }
    }
  }
}
```
Then switch to it per-session with `/model gpt54` or set it as your primary model.
Caveats
- Version-dependent: When OpenClaw updates, the built-in catalog may add these models natively, making this workaround unnecessary.
- Header versions: The `Editor-Version` and plugin version values may need updating if GitHub tightens validation. The current values (`vscode/1.96.2`, `copilot-chat/0.26.7`) work as of March 2026.
- No official support: This is a workaround, not a supported configuration path. Use at your own risk.
Related
- Setting Up GitHub Copilot Models in OpenClaw — basic Copilot model configuration
- OpenClaw GitHub Issue #39459 — feature request to support new models natively