Storyline
OpenAI launches GPT-5.3-Codex-Spark, a 1,000+ tok/s coding model on Cerebras
OpenAI introduced GPT-5.3-Codex-Spark, a research-preview coding model aimed at real-time, low-latency programming. Multiple reports highlight throughput of 1,000+ tokens per second and emphasize that it runs on Cerebras accelerators—described as OpenAI’s first production model deployed on non-Nvidia hardware.
Evidence trail (top sources)
Top sources shown: 4 (domains are deduped; counts indicate coverage, not truth).
- Score total: 2.57
- Momentum (24h): 9
- Posts: 9
- Origins: 8
- Source types: 3
- Duplicate ratio: 0%
Why now
- OpenAI is rolling out a research preview to ChatGPT Pro and select API users
- Community posts highlight perceived speed gains, amplifying launch impact
- Media frames it as a notable hardware/inference-stack shift for OpenAI
Why it matters
- Signals OpenAI production inference expanding beyond Nvidia to Cerebras hardware
- Low-latency coding agents may hinge on infra choices, not just model quality
- Codex surfaces (CLI/IDE) make speed improvements immediately user-visible
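To put the reported throughput in perspective, here is an illustrative sketch of what 1,000+ tokens per second means for perceived response time. The completion length (500 tokens) is a hypothetical assumption, and the baseline rate is merely implied by the "15x faster" framing in coverage, not a measured figure:

```python
# Illustrative only: perceived latency at the reported 1,000 tok/s,
# versus a baseline implied by the "15x faster" claim in coverage.
response_tokens = 500            # hypothetical completion length (assumption)
spark_tps = 1000                 # reported throughput, tokens/sec
baseline_tps = spark_tps / 15    # implied baseline: ~67 tok/s

spark_seconds = response_tokens / spark_tps
baseline_seconds = response_tokens / baseline_tps

print(f"Spark:    {spark_seconds:.1f}s")     # 0.5s
print(f"Baseline: {baseline_seconds:.1f}s")  # 7.5s
```

At these assumed numbers, a response drops from several seconds to sub-second territory, which is why the speedup is immediately noticeable in interactive Codex surfaces.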
Continuity snapshot
- Trend status: insufficient_history.
- Continuity stage: broad_confirmed.
- Current status: open.
- 9 current source-linked posts are attached to this storyline.
All evidence
OpenAI Taps Cerebras for GPT-5.3 Codex Spark in Bid to Loosen Nvidia’s Grip
TechRepublic AI · techrepublic.com · 2026-02-13 14:33 UTC
ChatGPT 5.3-Codex-Spark has been crazy fast
ChatGPTCoding · reddit.com · 2026-02-13 01:45 UTC
OpenAI Releases a Research Preview of GPT‑5.3-Codex-Spark: A 15x Faster AI Coding Model Delivering Over 1000 Tokens Per Second on Cerebras Hardware
machinelearningresearchnews · marktechpost.com · 2026-02-12 23:31 UTC
OpenAI sidesteps Nvidia with unusually fast coding model on plate-sized chips
arstechnica_all · arstechnica.com · 2026-02-12 22:56 UTC
OpenAI dishes out its first model on a plate of Cerebras silicon
The Register AI + ML (Atom) · go.theregister.com · 2026-02-12 22:32 UTC
OpenAI has yet another new coding model and this time it's really fast
The Decoder AI in practice · the-decoder.com · 2026-02-12 19:24 UTC
Top publishers (this list)
- TechRepublic AI (1)
- ChatGPTCoding (1)
- machinelearningresearchnews (1)
- arstechnica_all (1)
- The Register AI + ML (Atom) (1)
- The Decoder AI in practice (1)
Top origin domains (this list)
- techrepublic.com (1)
- reddit.com (1)
- marktechpost.com (1)
- arstechnica.com (1)
- go.theregister.com (1)
- the-decoder.com (1)