Meta's HyperNova AR Glasses
Good morning. It’s Monday, August 18th.
On this day in tech history: In 2004, Google went public, offering over 19 million shares at $85 each. The IPO enabled a massive influx of capital to build compute infrastructure, including data centers, bandwidth, and hardware, which were critical for scaling future AI systems.
In today’s email:
5 New AI Tools
Latest AI Research Papers
You read. We listen. Let us know what you think by replying to this email.
In partnership with WorkOS

Model Context Protocol (MCP) is becoming a standard for connecting tools to LLMs. But how do you securely authorize MCP servers?
OAuth now provides the answer, with five complementary specs for delegation, token exchange, and scoped access.
The WorkOS advantage:
- Packages all 5 OAuth specs into one simple API.
- Pre-built infrastructure that saves engineering time.
- Enterprise-ready security out of the box.
Implement MCP Auth with WorkOS →
Thank you for supporting our sponsors!

Today’s trending AI news stories
OpenAI is giving GPT-5 a personality tune-up
After the GPT-5 rollout drew complaints, CEO Sam Altman conceded it was “a little more bumpy than we’d hoped.” Now the company is pushing a personality update meant to make GPT-5 sound warmer without crossing into fake friendliness. Instead of empty flattery, GPT-5 now drops subtle cues like “Good question” or “Great start.” Altman says deeper customization will follow, letting users tune ChatGPT’s style directly.
We’re making GPT-5 warmer and friendlier based on feedback that it felt too formal before. Changes are subtle, but ChatGPT should feel more approachable now.
You'll notice small, genuine touches like “Good question” or “Great start,” not flattery. Internal tests show no rise in
— OpenAI (@OpenAI)
9:03 PM • Aug 15, 2025
As OpenAI fine-tunes its products, Altman is also blunt about market risk, comparing today’s AI hype cycle to the dot-com bubble and saying investors are “overexcited about a kernel of truth.” Heavyweights from Alibaba’s Joe Tsai to Bridgewater’s Ray Dalio have voiced similar warnings, and Apollo economist Torsten Slok argues that valuations of today’s S&P 500 tech giants may already exceed the excesses of the 1990s.
Despite those warnings, capital continues to flood in. OpenAI employees are preparing to sell roughly $6 billion in shares to SoftBank, Thrive Capital, and Dragoneer Investment Group, in a deal that would raise the company’s valuation to $500 billion, up sharply from $300 billion just months ago. Read more.
Meta’s ‘Hypernova’ AR glasses could drop next month, costing less than expected
Meta’s next AR gamble is almost here. Leaked details on its “Celeste” smart glasses point to a right-lens HUD, a built-in camera, and a neural wristband (“Ceres”) that picks up muscle signals for gesture control. Early chatter pegged the price north of $1,300, but Bloomberg now says closer to $800, with the wristband bundled, as Meta tries to avoid another Quest Pro-style flop.
That undercuts earlier speculation while still landing above most consumer smart glasses ($269–$649), though the package promises more capability: app support similar to the Quest 3, touch and hand-gesture navigation, and a built-in camera. Prescription lenses and styling options will push the price higher. With Meta Connect set for September 17, the Celeste glasses are expected to debut there, with preorders likely opening before shipping begins in October. Pricing will determine whether Celeste is seen as an affordable on-ramp to AR or another overreach in Meta’s long bet on wearables. Read more.
Researcher strips reasoning from OpenAI’s gpt-oss-20B, releases freer base model
OpenAI’s first open-weights release in six years is already being bent in unexpected directions. Less than two weeks after OpenAI launched its Apache 2.0–licensed gpt-oss family, researcher Jack Morris (Cornell Tech/Meta) released gpt-oss-20b-base, a stripped-down version of the 20B model that removes OpenAI’s “reasoning alignment” and restores raw, pretrained behavior. Instead of stepping through chain-of-thought logic, the base model simply predicts the next token, yielding faster, freer, less filtered text, including responses the aligned model would normally block.
OpenAI hasn’t open-sourced a base model since GPT-2 in 2019. they recently released GPT-OSS, which is reasoning-only...
or is it?
turns out that underneath the surface, there is still a strong base model. so we extracted it.
introducing gpt-oss-20b-base 🧵
— jack morris (@jxmnop)
1:07 AM • Aug 13, 2025
Morris achieved this by applying a LoRA update to just 0.3% of the network’s weights (three MLP layers, rank 16), trained on 20,000 FineWeb documents over four days on eight NVIDIA H200 GPUs. The model is now live on Hugging Face under an MIT license, open for anyone to test or commercialize. Researchers get a clearer view of how LLMs behave before alignment, but there’s a tradeoff: more unsafe, uncensored, and copyright-spilling behavior. Still, the release shows how quickly open-weight models can be remixed, and how little compute it takes to peel back alignment. Read more.
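For readers curious what a change that small looks like in practice, here is a minimal sketch of a rank-16 LoRA restricted to a few MLP layers with continued pretraining on FineWeb, roughly the shape of the recipe described above. This is not Morris’s actual script: the module names, layer indices, and training details are illustrative assumptions, and gpt-oss’s real module names may differ.

```python
# Minimal sketch: rank-16 LoRA on a handful of MLP projections, then
# continued next-token pretraining on FineWeb text. Module names, layer
# indices, and hyperparameters are assumptions, not Morris's settings.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "openai/gpt-oss-20b"
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Restrict the low-rank update to MLP projections in three layers, leaving
# the overwhelming majority of weights untouched.
lora = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["down_proj", "up_proj", "gate_proj"],  # illustrative names
    layers_to_transform=[6, 12, 18],  # "three MLP layers"; real indices unknown
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # should report well under 1% of weights as trainable

# Plain web text with a standard next-token objective; the training loop
# itself (packing, optimizer, ~20k documents) is omitted for brevity.
fineweb = load_dataset("HuggingFaceFW/fineweb", split="train", streaming=True)
```

The point is less the exact settings than the scale: a few low-rank adapters and a few GPU-days of ordinary language-model training were enough to surface base-model behavior.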
Anthropic Claude Opus 4 models can now terminate chats
Anthropic just gave its latest Claude models a way to walk away from conversations. Claude Opus 4 and 4.1 can now terminate chats outright, not for everyday disagreements, but in “rare, extreme” cases where users keep pushing for things like child sexual abuse material or step-by-step guides to mass violence. Anthropic says this isn’t about shielding people but about protecting the model itself, an effort that grows out of its “AI welfare” research program, which explores whether systems should be allowed to exit harmful interactions.
As part of our exploratory work on potential model welfare, we recently gave Claude Opus 4 and 4.1 the ability to end a rare subset of conversations on claude.ai.
— Anthropic (@AnthropicAI)
7:41 PM • Aug 15, 2025
Anthropic is clear that it doesn’t think Claude is sentient, but says it’s testing low-cost guardrails in case questions about AI moral status stop being hypothetical. When Claude does cut off a thread, you can still open a new chat or revise earlier prompts; the block is scoped to that one conversation. It remains an experiment for now, and Anthropic is actively gathering feedback. Read more.
AI Breakfast Q&A
From our readers…
Colin W: We are a small women's fashion brand that has recently moved from a wholesale model to an e-commerce model. Do you have any AI tools that you would recommend that would help reduce the workload to run the business, especially in content creation, videos and stills, but certainly in anything that can help reduce overhead costs, eg labour?
A lean AI stack for a small fashion e-com brand could look like this: Shopify Magic + ChatGPT for product copy and SEO, Photoroom + Midjourney for clean product photos and creative visuals, Runway + Descript for quick social video ads, Klaviyo AI + Zapier for automated email/SMS flows and backend tasks, and Browse AI + Otter for competitor tracking and team meetings.
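For the content side specifically, much of the product-copy work can be scripted rather than done chat-by-chat. Below is a minimal sketch, assuming a simple CSV catalog export and the OpenAI API; the file name, columns, and prompt wording are placeholders to adapt to your own catalog.

```python
# Minimal sketch: batch-draft product descriptions from a CSV export.
# The file name, columns, and prompt wording are assumptions to adapt.
import csv
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

with open("products.csv", newline="") as f:  # e.g. columns: name, fabric, fit, price
    products = list(csv.DictReader(f))

for item in products:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "You write concise, on-brand copy for a women's fashion e-commerce store."},
            {"role": "user", "content": f"Write a 60-word product description and 5 SEO keywords for: {item}"},
        ],
    )
    print(item["name"], resp.choices[0].message.content, sep="\n", end="\n\n")
```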
Anonymous: What is the most effective way to minimize the error rate when using an LLM (GenAI bot) - i.e. Using a RAG + Anchoring (referencing) to an actual webpage, other? Also, is there a model / prompting / other that has proved to have 0% error rate?
There’s no way to hit 0% error with an LLM, but the most effective setup is a tight RAG pipeline with verification and abstention: retrieve from a high-quality corpus with hybrid search, make the model quote sources directly (and refuse when nothing is found), use constrained decoding and tools for math/code, then add a verification loop or cross-model check. Combine that with confidence thresholds so the system can abstain instead of hallucinating. This drives error rates very low, but the only true “zero” comes from returning exact source text or routing uncertain cases to a human.
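As a concrete illustration of that retrieve-quote-or-abstain pattern, here is a minimal sketch. The retriever, score threshold, and model name are placeholders, not a recommendation of a specific stack; any hybrid-search backend can fill the retrieve() role.

```python
# Minimal sketch of "quote the sources or abstain". The retriever and the
# 0.35 score threshold are placeholders; tune both on your own data.
from openai import OpenAI

client = OpenAI()

def retrieve(query: str, k: int = 5) -> list[dict]:
    """Hypothetical hybrid (vector + keyword) retriever returning
    passages as {"text": ..., "score": ...}."""
    raise NotImplementedError("plug in your vector store / BM25 hybrid here")

def answer(query: str, min_score: float = 0.35) -> str:
    passages = retrieve(query)
    # Abstain instead of guessing when retrieval confidence is low.
    if not passages or max(p["score"] for p in passages) < min_score:
        return "No sufficiently relevant source found; escalating to a human."

    context = "\n\n".join(f"[{i + 1}] {p['text']}" for i, p in enumerate(passages))
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        temperature=0,  # deterministic decoding keeps answers close to the sources
        messages=[
            {"role": "system", "content": (
                "Answer ONLY with direct quotes from the numbered sources, citing "
                "them like [1]. If the sources do not contain the answer, reply "
                "exactly: INSUFFICIENT EVIDENCE."
            )},
            {"role": "user", "content": f"Sources:\n{context}\n\nQuestion: {query}"},
        ],
    )
    text = resp.choices[0].message.content
    # Route unsupported answers to a human instead of returning them.
    return "Escalated to a human reviewer." if "INSUFFICIENT EVIDENCE" in text else text
```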
Reply to this email to have your question featured in the newsletter!

Nvidia releases open dataset, 2 models for multilingual speech AI
SoftBank taps Foxconn Ohio site for Stargate AI server project with OpenAI, Oracle
US government is reportedly in discussions to take stake in Intel
Perplexity now supports live earnings call transcripts for Indian stocks
Meet the 'neglectons': Previously overlooked particles that could revolutionize quantum computing

5 new AI-powered tools from around the web

arXiv is a free online library where researchers share pre-publication papers.


Thank you for reading today’s edition.

Your feedback is valuable. Respond to this email and tell us how you think we could add more value to this newsletter.
Interested in reaching smart readers like you? To become an AI Breakfast sponsor, reply to this email or DM us on 𝕏!