Good morning. It’s Wednesday, March 4th.

On this day in tech history: In 1956, An Wang sold his magnetic core memory patent to IBM for $500,000. Ferrite “donut” cores wired into 3D arrays delivered the first reliable random-access storage, replacing finicky vacuum tubes. This tech powered Whirlwind and enabled Samuel’s checkers learner plus Rosenblatt’s Perceptron hardware, the literal memory substrate for early symbolic AI and neural nets. Without stable, fast RAM, the entire AI stack collapses.

In today’s email:

  • OpenAI upgrades ChatGPT to GPT-5.3 Instant with massive hallucination cuts

  • Google launches Gemini 3.1 Flash-Lite at 1/8th the cost of Pro

  • 5 New AI Tools

  • Latest AI Research Papers

You read. We listen. Let us know what you think by replying to this email.

Free, private email that puts your privacy first

A private inbox doesn’t have to come with a price tag—or a catch. Proton Mail’s free plan gives you the privacy and security you expect, without selling your data or showing you ads.

Built by scientists and privacy advocates, Proton Mail uses end-to-end encryption to keep your conversations secure. No scanning. No targeting. No creepy promotions.

With Proton, you’re not the product — you’re in control.

Start for free. Upgrade anytime. Stay private always.

Today’s trending AI news stories

OpenAI upgrades ChatGPT to GPT-5.3 Instant with massive hallucination cuts

OpenAI has deployed GPT-5.3 Instant, replacing the 5.2 model as the default ChatGPT experience. The primary goal is "de-cringification," removing the stiff, moralizing, and overly cautious tone that has frustrated power users.

  • Hallucination Reduction: 26.8% drop in web-based errors; 19.7% drop in internal knowledge errors.

  • Prose Refinement: Direct answers that bypass "preachy" preambles and excessive safety caveating.

  • Inference Speed: 25% faster than previous Instant iterations.

This marks a shift toward factual precision. By reducing "hallucination noise," OpenAI is making the model safer for critical domains like finance and law where accuracy is more valuable than conversational fluff.

In a move that challenges its primary partner, Microsoft, OpenAI is reportedly building a proprietary software development platform to rival GitHub. The primary drivers are persistent GitHub outages and the need for deeper agentic integration than existing plugins can provide. The platform is designed from the ground up for autonomous repository management and multi-step debugging, letting AI agents operate with full-stack visibility.

OpenAI is also doubling down on its "Agent Command Center" strategy, ensuring its most powerful tools are available where work actually happens.

  • Codex for Windows: Following a successful macOS launch, OpenAI has confirmed the official launch date for Codex for Windows. This brings native, agentic coding power, including the ability to fork sub-agents and control local audio, to the world’s largest developer OS.

  • Internal Data Agent: OpenAI revealed an internal tool, built by just two engineers, that allows 4,000+ employees to query 600 petabytes of data via Slack. This serves as a blueprint for how companies can use the "Codex Enrichment" layer to map corporate data without manual tagging.
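The pattern behind that internal agent (chat question in, data query out) can be sketched in miniature. Everything below — the dataset registry, the keyword routing, the numbers — is invented for illustration; OpenAI's actual tool presumably uses an LLM to plan queries over its 600-petabyte store rather than keyword matching.

```python
# Toy sketch of a chat-to-data agent: route a natural-language question
# to a registered dataset and run a canned aggregation. All names and
# values here are hypothetical, not OpenAI's real system.

DATASETS = {
    "signups": [120, 95, 143],       # daily signup counts (made up)
    "latency_ms": [210, 198, 250],   # daily p50 latency (made up)
}

def answer(question: str) -> str:
    """Match the question to a dataset, then to an aggregation."""
    q = question.lower()
    for name, rows in DATASETS.items():
        if name in q:
            if "total" in q:
                return f"{name} total: {sum(rows)}"
            if "average" in q or "avg" in q:
                return f"{name} average: {sum(rows) / len(rows):.1f}"
            return f"{name}: {rows}"
    return "No matching dataset."

print(answer("What is the total signups this week?"))
# → signups total: 358
```

In a real deployment, the `answer` function would sit behind a Slack bot handler, and the routing step would be replaced by an LLM that generates the query.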

As OpenAI expands into the military sector, it is facing both policy constraints and cultural pushback.

  • Pentagon Guardrails: Following intense criticism, OpenAI added surveillance bans to its DoD contract. The new clauses explicitly prohibit using AI for domestic monitoring of U.S. citizens and bar agencies like the NSA from access without a separate contract.

  • Talent Migration: High-profile departures continue as a key GPT-5 and o1 researcher (Max Schwarzer) has left OpenAI for an RL research role at Anthropic. The move highlights a growing trend of researchers seeking "back-to-basics" individual contributor roles amidst OpenAI’s rapid commercialization. Read more.

Google launches Gemini 3.1 Flash-Lite at 1/8th the cost of Pro

Google has released Gemini 3.1 Flash-Lite, a model designed specifically for high-volume execution where speed and budget are the primary constraints. It serves as the "reflexes" to the "brain" of Gemini 3.1 Pro.

Technical Specs:

  • Latency: 2.5X faster "time to first token" than Gemini 2.5 Flash.

  • Throughput: 363 tokens per second output speed.

  • Intelligence: 12-point jump on the Artificial Analysis Intelligence Index (Score: 34).

  • Context: Maintains a 1-million-token context window.

  • The Cost-Reasoning Ratio: At $0.25/1M input and $1.50/1M output tokens, it is 1/8th the cost of Gemini 3.1 Pro. While input prices are stable, output pricing has tripled compared to the 2.5 version to account for the massive intelligence boost.

This model is the ideal engine for high-volume agentic tasks like real-time translation, content moderation, and structured data extraction that would be cost-prohibitive on flagship models. Read more.
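The pricing and throughput figures above are easy to sanity-check with back-of-the-envelope arithmetic. The Flash-Lite rates come from the article; the Pro rates below are inferred from the stated "1/8th the cost" ratio and are an assumption, not a published price list.

```python
# Rough cost check for the Gemini 3.1 Flash-Lite figures above.
# Flash-Lite rates are from the article; Pro rates are assumed to be
# a flat 8x, per the stated "1/8th the cost" ratio.

FLASH_LITE = {"input": 0.25, "output": 1.50}     # USD per 1M tokens
PRO = {k: v * 8 for k, v in FLASH_LITE.items()}  # assumed, not confirmed

def job_cost(rates, input_tokens, output_tokens):
    """Workload cost in USD, given per-million-token rates."""
    return (rates["input"] * input_tokens
            + rates["output"] * output_tokens) / 1_000_000

# Example workload: 10M input tokens, 2M output tokens of moderation traffic
lite = job_cost(FLASH_LITE, 10_000_000, 2_000_000)
pro = job_cost(PRO, 10_000_000, 2_000_000)
print(f"Flash-Lite: ${lite:.2f} vs Pro (assumed): ${pro:.2f}")
# → Flash-Lite: $5.50 vs Pro (assumed): $44.00

# At the stated 363 tokens/sec, a 500-token reply streams in ~1.4s
TOKENS_PER_SEC = 363
print(f"~{500 / TOKENS_PER_SEC:.2f}s per 500-token reply")
```

At these rates, the 2M output tokens dominate the bill ($3.00 of the $5.50), which is why the tripled output price matters more than the stable input price for chatty, high-volume workloads.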

Want to get the most out of ChatGPT?

ChatGPT is a superpower if you know how to use it correctly.

Discover how HubSpot's guide to AI can elevate both your productivity and creativity to get more things done.

Learn to automate tasks, enhance decision-making, and foster innovation with the power of AI.

5 new AI-powered tools from around the web


Thank you for reading today’s edition.

Your feedback is valuable. Respond to this email and tell us how you think we could add more value to this newsletter.

Interested in reaching smart readers like you? To become an AI Breakfast sponsor, reply to this email or DM us on 𝕏!

Keep Reading