Unofficial GPT-5.5 Field Guide
Unofficial OpenAI documentation digest

Everything practical to know about GPT-5.5 in Codex today.

This page condenses the official launch post, Codex model docs, Codex security docs, app settings, in-app browser docs, computer use docs, pricing notes, and the GPT-5.5 system card into one readable web view.

400K: Codex context window for GPT-5.5 on subscription plans.
1.5x: Fast mode token generation speed, at 2.5x the cost.
1M: planned API context window when gpt-5.5 reaches the API.
High: OpenAI's preparedness classification for the bio/chemical and cyber domains; cyber remains below Critical.

Availability and pricing, without the haze.

Three surfaces need to be kept separate: ChatGPT, Codex, and the API.

ChatGPT

GPT-5.5 is rolling out to Plus, Pro, Business, and Enterprise users in ChatGPT. GPT-5.5 Pro is rolling out to Pro, Business, and Enterprise.

Codex

Codex gets GPT-5.5 for Plus, Pro, Business, Enterprise, Edu, and Go plans. OpenAI says it has a 400K context window in Codex.

API

OpenAI's launch post says gpt-5.5 will come to the Responses and Chat Completions APIs soon, at $5 per 1M input tokens and $30 per 1M output tokens.
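As a back-of-envelope check of that announced pricing, the sketch below multiplies the two published rates by illustrative token counts. The request sizes are made-up examples, not from OpenAI:

```shell
# Estimate one request's cost at the announced rates:
# $5 per 1M input tokens, $30 per 1M output tokens.
# Token counts below are illustrative, not an official example.
awk 'BEGIN {
  input_tokens  = 200000
  output_tokens = 20000
  cost = input_tokens / 1e6 * 5 + output_tokens / 1e6 * 30
  printf "estimated cost: $%.2f\n", cost
}'
```

At those example sizes, input dominates the bill even though output is six times more expensive per token.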

GPT-5.5 source boundary

This guide uses the GPT-5.5 launch post and GPT-5.5 system card for model claims. Codex permission, browser, and computer-use sections come from current Codex documentation because those controls are product behavior, not model speculation. The only older-model reference is a benchmark baseline copied from OpenAI's GPT-5.5 launch comparison.

What got better.

OpenAI positions GPT-5.5 as stronger for complex coding, computer use, research, professional work, long context, tool use, and cybersecurity defense.

Comparison values are from OpenAI's GPT-5.5 launch page, which lists GPT-5.5 beside prior-model baselines.
Area | GPT-5.5 | GPT-5.4 | Read it as
SWE-Bench Pro (public) | 58.6% | 57.7% | Small coding benchmark lift, not a magic wand.
Terminal-Bench 2.0 | 82.7% | 75.1% | Material improvement for terminal-heavy agent work.
OSWorld-Verified | 78.7% | 75.0% | Better desktop/computer-use performance.
BrowseComp | 84.4% | 82.7% | Better browsing and tool-use reasoning.
MCP Atlas | 75.3% | 70.6% | Better at operating across tool ecosystems.
Graphwalks BFS 1M (F1) | 45.4% | 9.4% | Large long-context improvement on that eval.
CyberGym | 81.8% | 79.0% | Stronger cyber capability, paired with stricter safeguards.

Best Codex default

The Codex model docs say to start with gpt-5.5 for most tasks once it appears in your model picker, especially for complex coding, computer use, knowledge work, and research.

codex -m gpt-5.5

Fast mode tradeoff

Fast mode is about throughput: OpenAI says GPT-5.5 Fast generates tokens 1.5x faster for 2.5x the cost. Use it when waiting costs more than usage does.
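One way to read that tradeoff is simple arithmetic on the two figures in the launch post, nothing more:

```shell
# Fast mode: 1.5x token speed at 2.5x cost (figures from the launch post).
# Cost multiplier per unit of speedup, i.e. how much extra you pay
# relative to the time you save:
awk 'BEGIN { printf "cost per 1x of speed: %.2fx\n", 2.5 / 1.5 }'
```

You pay roughly two-thirds more per unit of speed, which is only worth it when latency is the bottleneck.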

Permissions and "can it do everything?"

The controls are powerful. They are not a universal bypass switch.

For each mode or flag: what it enables, what still matters, and a verdict.

read-only (verdict: safest)
Enables: planning, inspection, answers. Codex can read permitted files and answer questions.
Still matters: edits, shell commands, network access, and risky actions require different permissions or approval.

workspace-write (verdict: good default)
Enables: the default practical coding mode. Codex can edit files in the workspace and run allowed local commands.
Still matters: network is off by default. Writing outside the workspace or using the network can still require approval.

--ask-for-approval never (verdict: context-dependent)
Enables: non-interactive autonomy. Codex stops prompting for approval, but only within whatever sandbox you selected.
Still matters: this is not the same as full access. In read-only mode, it still only reads.

--dangerously-bypass-approvals-and-sandbox, alias --yolo (verdict: dangerous)
Enables: no sandbox and no approvals for Codex CLI execution.
Still matters: OpenAI labels it elevated risk and not recommended. It does not override macOS admin prompts, app security gates, OpenAI policy, or the fact that wrong commands can damage real files.
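A sketch of how those modes map onto CLI invocations. The flag names come from the Codex docs quoted above, but the exact syntax may differ across Codex CLI versions, so treat this as illustrative rather than copy-paste authoritative:

```shell
# Safest: read and answer only.
codex --sandbox read-only "explain the build system"

# Good default: edit workspace files; network stays off by default.
codex --sandbox workspace-write "fix the failing test"

# Non-interactive, but still confined to the chosen sandbox.
codex --sandbox workspace-write --ask-for-approval never "run the linter"

# Elevated risk; OpenAI does not recommend this:
# codex --dangerously-bypass-approvals-and-sandbox
```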
What you cannot make Codex bypass

Codex cannot approve macOS privacy/security prompts for itself, authenticate as an administrator, automate terminal apps through Computer Use, ignore site rules, bypass OpenAI safety policy, or assist with harmful misuse.

What you can make smoother

You can grant broader workspace, network, browser, MCP, and desktop-app permissions; allowlist trusted websites; set a better default model; and use automatic approval review for eligible approval prompts.

Browser control: two different ideas.

Codex has an in-app preview browser, and it can also use your regular browser through Computer Use.

In-app browser

The in-app browser gives you and Codex a shared rendered web page inside a thread. OpenAI recommends it for local dev servers, file-backed previews, and public pages that do not require sign-in.

It is ideal for testing pages like this one: local, visual, clickable, and safe to inspect.

Browser plugin settings

In the Codex app settings, you can enable the bundled Browser plugin and manage allowlisted or blocklisted websites. Codex asks before using a site unless it is allowlisted.

That setting controls websites, not browsers. Which browser app Codex can operate depends on what is installed and which macOS Screen Recording and Accessibility permissions you grant. For signed-in sites, treat approved browser actions as if they were your own clicks.

Desktop app control with Computer Use.

This is the feature for controlling real apps, with clear boundaries.

What it can do

Computer Use lets Codex view screen content, take screenshots, and interact with windows, menus, keyboard input, and clipboard state in the target app. It can use a browser where you are already signed in, if you approve that flow.

You can stop the task or take over at any time.

What it cannot do

OpenAI's docs say Computer Use cannot automate terminal apps or Codex itself, because that could bypass Codex security policies. It also cannot authenticate as an administrator or approve security and privacy permission prompts on your computer.

File edits and shell commands still follow Codex approval and sandbox settings where applicable.

macOS setup matters

If Codex cannot see or control an app, the docs point you to System Settings > Privacy & Security > Screen Recording and Accessibility for the Codex app. The feature is not available in the EEA, UK, or Switzerland at launch.

API, model strings, and configuration.

The practical distinction today: local Codex can use GPT-5.5 through ChatGPT sign-in; API-key Codex cannot yet.

In Codex, choose gpt-5.5 in the model picker when it appears. The Codex docs say it is strongest for complex coding, computer use, knowledge work, and research workflows.

To set a default local model, add model = "gpt-5.5" to Codex config.toml. If it is not visible, the rollout has not reached that account or sign-in surface yet.
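That one-line config can be appended from the shell. The usual location is ~/.codex/config.toml, but that path is an assumption here, so the demo writes to a temp file and is safe to run as-is:

```shell
# Write the docs' config line to a scratch file; swap "$cfg" for
# ~/.codex/config.toml on a real install.
cfg=$(mktemp)
printf 'model = "gpt-5.5"\n' >> "$cfg"
cat "$cfg"
```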

For one thread, start with codex -m gpt-5.5, or use the /model command inside an active thread.

OpenAI says gpt-5.5 will soon be available in Responses and Chat Completions with a 1M context window. The announced standard price is $5 per 1M input tokens and $30 per 1M output tokens.

API-key authentication note

The Codex model docs say GPT-5.5 is currently available in Codex when signed in with ChatGPT, and is not available with API-key authentication during this rollout.

Safety and trusted cyber access.

GPT-5.5 is more capable in cyber and bio/chemical areas, and OpenAI says it shipped with tighter controls.

High, not Critical

The GPT-5.5 system card says OpenAI treats GPT-5.5 as High capability in biological/chemical and cybersecurity domains. For cybersecurity, it is below Critical: OpenAI says the model did not produce functional critical-severity exploit outcomes against tested real-world targets in standard configurations.

Trusted Access for Cyber

OpenAI says verified defenders can apply for trusted access to reduce unnecessary refusals for verified defensive work. That application process, not jailbreaking or bypassing safeguards, is the legitimate path to lower-friction security assistance.

Bottom line: GPT-5.5 can help more with defensive security, code review, vulnerability research, and remediation. It should not be treated as permission to automate harmful or unauthorized cyber activity.

Primary sources used.

Everything above is grounded in official OpenAI documentation or OpenAI-hosted launch/safety material checked on April 23, 2026.

Introducing GPT-5.5

Launch, availability, Codex context, Fast mode, API pricing plans, benchmark tables.

GPT-5.5 System Card

Preparedness classification, cyber and bio/chemical safety assessment, safeguards.

Codex models

Recommended model choice, ChatGPT sign-in availability, API-key limitation, config examples.

Codex pricing

Subscription usage ranges and API-key availability status for Codex models.

Agent approvals and security

Sandbox modes, approval policy, network access, dangerous bypass flag, recommendations.

Codex app settings

Browser plugin allowlists/blocklists, MCP settings, Computer Use setup notes.

In-app browser

Local dev previews, file-backed previews, public pages, and login caveats.

Computer Use

Desktop-app capabilities, sensitive-flow guidance, terminal/admin/security-prompt limits.
