This page condenses the official launch post, Codex model docs, Codex security docs, app settings, in-app browser docs, computer use docs, pricing notes, and the GPT-5.5 system card into one readable web view.
There are three distinct surfaces to keep separate: ChatGPT, Codex, and the API.
GPT-5.5 is rolling out to Plus, Pro, Business, and Enterprise users in ChatGPT. GPT-5.5 Pro is rolling out to Pro, Business, and Enterprise.
Codex gets GPT-5.5 for Plus, Pro, Business, Enterprise, Edu, and Go plans. OpenAI says it has a 400K context window in Codex.
OpenAI's launch post says gpt-5.5 will come to the Responses and Chat Completions APIs soon, at $5 per 1M input tokens and $30 per 1M output tokens.
This guide uses the GPT-5.5 launch post and GPT-5.5 system card for model claims. Codex permission, browser, and computer-use sections come from current Codex documentation because those controls are product behavior, not model speculation. The only older-model reference is a benchmark baseline copied from OpenAI's GPT-5.5 launch comparison.
OpenAI positions GPT-5.5 as stronger for complex coding, computer use, research, professional work, long context, tool use, and cybersecurity defense.
| Area | GPT-5.5 | GPT-5.4 | Read it as |
|---|---|---|---|
| SWE-Bench Pro public | 58.6% | 57.7% | Small coding benchmark lift, not a magic wand. |
| Terminal-Bench 2.0 | 82.7% | 75.1% | Material improvement for terminal-heavy agent work. |
| OSWorld-Verified | 78.7% | 75.0% | Better desktop/computer-use performance. |
| BrowseComp | 84.4% | 82.7% | Better browsing/tool-use reasoning. |
| MCP Atlas | 75.3% | 70.6% | Better at operating across tool ecosystems. |
| Graphwalks BFS 1M F1 | 45.4% | 9.4% | Large long-context improvement in that eval. |
| CyberGym | 81.8% | 79.0% | Stronger cyber capability, paired with stricter safeguards. |
The Codex model docs say to start with gpt-5.5 for most tasks once it appears in your model picker, especially for complex coding, computer use, knowledge work, and research.
Fast mode is about throughput: OpenAI says GPT-5.5 Fast generates tokens 1.5x faster for 2.5x the cost. Use it when waiting costs more than usage does.
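A rough break-even sketch of that trade-off, assuming illustrative numbers that are not from OpenAI (a task that normally costs $2.00 in tokens, spends 6 minutes generating, and a $120/hour value on your waiting time):

```shell
# Fast mode: 1.5x generation speed for 2.5x cost (so 1.5x EXTRA cost).
# All three inputs below are assumptions for illustration only.
base_cost=2.00       # normal-mode token cost of the task, USD
gen_minutes=6        # normal-mode generation time, minutes
rate_per_hour=120    # value you place on waiting time, USD/hour

awk -v c="$base_cost" -v m="$gen_minutes" -v r="$rate_per_hour" 'BEGIN {
  extra_cost    = c * 1.5          # 2.5x total cost means 1.5x extra
  minutes_saved = m - m / 1.5      # 1.5x faster generation
  time_value    = minutes_saved / 60 * r
  printf "extra cost $%.2f, time saved %.1f min worth $%.2f -> %s\n",
         extra_cost, minutes_saved, time_value,
         (time_value > extra_cost ? "Fast wins" : "standard wins")
}'
# prints: extra cost $3.00, time saved 2.0 min worth $4.00 -> Fast wins
```

With these example numbers the saved two minutes are worth more than the extra $3.00, so Fast mode pays off; with a cheaper hour or a cheaper task, it would not.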
The controls are powerful. They are not a universal bypass switch.
| Mode / flag | What it is for | Behavior | Limits |
|---|---|---|---|
| read-only | Planning, inspection, answers. | Codex can read permitted files and answer questions. | Edits, shell commands, network access, and risky actions require different permissions or approval. |
| workspace-write | Default practical coding mode. | Codex can edit files in the workspace and run allowed local commands. | Network is off by default; writing outside the workspace or using the network can still require approval. |
| --ask-for-approval never | Non-interactive autonomy. | Codex stops asking approval prompts, but only within whatever sandbox you selected. | Not the same as full access: in read-only mode, it still only reads. |
| --dangerously-bypass-approvals-and-sandbox (alias --yolo) | No sandbox and no approvals for Codex CLI execution. | Codex executes without prompts or sandbox restrictions. | OpenAI labels it elevated risk and not recommended. It does not override macOS admin prompts, app security gates, OpenAI policy, or the fact that wrong commands can damage real files. |
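These controls combine on the Codex CLI command line. A sketch, using a run helper that echoes each command instead of executing it so nothing here touches your files (the --sandbox flag name follows current Codex CLI documentation; verify on your install):

```shell
# Hypothetical Codex CLI invocations combining the controls above.
# `run` echoes instead of executing, so this sketch is safe to paste anywhere.
run() { echo "codex $*"; }

run --sandbox read-only "summarize this repo"            # inspection only
run --sandbox workspace-write --ask-for-approval never \
    "fix the failing tests"                              # autonomous, still sandboxed

# Elevated risk, not recommended by OpenAI:
run --dangerously-bypass-approvals-and-sandbox "anything"
```

Drop the run wrapper only once you are sure the sandbox and approval settings match what you intend.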
Codex itself cannot approve macOS privacy/security prompts on its own, authenticate as an administrator, automate terminal apps through Computer Use, ignore site rules, bypass OpenAI safety policy, or help with harmful misuse.
You can grant broader workspace, network, browser, MCP, and desktop-app permissions; allowlist trusted websites; set a better default model; and use automatic approval review for eligible approval prompts.
Codex has an in-app preview browser, and it can also use your regular browser through Computer Use.
The in-app browser gives you and Codex a shared rendered web page inside a thread. OpenAI recommends it for local dev servers, file-backed previews, and public pages that do not require sign-in.
It is ideal for testing pages like this one: local, visual, clickable, and safe to inspect.
In the Codex app settings, you can enable the bundled Browser plugin and manage allowlisted or blocklisted websites. Codex asks before using a site unless it is allowlisted.
That controls websites, not browser brands. Which regular browser app Codex can operate depends on what is installed and what macOS Screen Recording and Accessibility permissions you grant. For signed-in sites, treat approved browser actions as if they are your own clicks.
This is the feature for controlling real apps, with clear boundaries.
Computer Use lets Codex view screen content, take screenshots, and interact with windows, menus, keyboard input, and clipboard state in the target app. It can use a browser where you are already signed in, if you approve that flow.
You can stop the task or take over at any time.
OpenAI's docs say Computer Use cannot automate terminal apps or Codex itself, because that could bypass Codex security policies. It also cannot authenticate as an administrator or approve security and privacy permission prompts on your computer.
File edits and shell commands still follow Codex approval and sandbox settings where applicable.
If Codex cannot see or control an app, the docs point you to System Settings → Privacy & Security → Screen Recording and Accessibility for the Codex app. The feature is not available in the EEA, UK, or Switzerland at launch.
The practical distinction today: local Codex can use GPT-5.5 through ChatGPT sign-in; API-key Codex cannot yet.
In Codex, choose gpt-5.5 in the model picker when it appears. The Codex docs say it is strongest for complex coding, computer use, knowledge work, and research workflows.
To set a default local model, add model = "gpt-5.5" to Codex config.toml. If it is not visible, the rollout has not reached that account or sign-in surface yet.
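Putting that default in place might look like this; the ~/.codex/config.toml path is the conventional Codex CLI config location, but confirm it against your own install:

```toml
# ~/.codex/config.toml -- default model for local Codex sessions
model = "gpt-5.5"
```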
For one thread, start with codex -m gpt-5.5, or use the /model command inside an active thread.
OpenAI says gpt-5.5 will soon be available in Responses and Chat Completions with a 1M context window. The announced standard price is $5 per 1M input tokens and $30 per 1M output tokens.
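As a sanity check on those rates, a hypothetical request with 200K input tokens and 10K output tokens (illustrative counts, not from OpenAI) would cost:

```shell
# Cost at the announced gpt-5.5 API rates:
# $5 per 1M input tokens, $30 per 1M output tokens.
input_tokens=200000   # illustrative
output_tokens=10000   # illustrative
awk -v i="$input_tokens" -v o="$output_tokens" \
  'BEGIN { printf "$%.2f\n", i / 1e6 * 5 + o / 1e6 * 30 }'
# prints: $1.30
```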
The Codex model docs say GPT-5.5 is currently available in Codex when signed in with ChatGPT, and is not available with API-key authentication during this rollout.
GPT-5.5 is more capable in cyber and bio/chemical areas, and OpenAI says it shipped with tighter controls.
The GPT-5.5 system card says OpenAI treats GPT-5.5 as High capability in biological/chemical and cybersecurity domains. For cybersecurity, it is below Critical: OpenAI says the model did not produce functional critical-severity exploit outcomes against tested real-world targets in standard configurations.
OpenAI says verified defenders can apply for trusted access to reduce unnecessary refusals for verified defensive work. This is the legitimate path for lower-friction security assistance, not trying to jailbreak or bypass safeguards.
Everything above is grounded in official OpenAI documentation or OpenAI-hosted launch/safety material checked on April 23, 2026.
- GPT-5.5 launch post: launch, availability, Codex context, Fast mode, API pricing plans, benchmark tables.
- GPT-5.5 system card: Preparedness classification, cyber and bio/chemical safety assessment, safeguards.
- Codex model docs: recommended model choice, ChatGPT sign-in availability, API-key limitation, config examples.
- Codex pricing notes: subscription usage ranges and API-key availability status for Codex models.
- Codex security docs: sandbox modes, approval policy, network access, dangerous bypass flag, recommendations.
- Codex app settings: browser plugin allowlists/blocklists, MCP settings, Computer Use setup notes.
- In-app browser docs: local dev previews, file-backed previews, public pages, and login caveats.
- Computer Use docs: desktop-app capabilities, sensitive-flow guidance, terminal/admin/security-prompt limits.