ChatGPT's "Juice 200" Mode
Good morning. It’s Monday, September 1st.
On this day in tech history: In 2000, OpenCV was first released, giving developers an open-source library of computer-vision and classic machine-learning algorithms (k-NN, SVMs, decision trees, and more) and laying the groundwork for accessible AI tooling in academia and industry.
In today’s email:
ChatGPT’s “Juice 200”
Microsoft To Distance Itself from OpenAI?
$10k/mo UBI Feasible With AI Growth?
5 New AI Tools
Latest AI Research Papers
You read. We listen. Let us know what you think by replying to this email.
How 433 Investors Unlocked 400X Return Potential
Institutional investors back startups to unlock outsized returns. Regular investors have to wait. But not anymore. Thanks to regulatory updates, some companies are doing things differently.
Take Revolut. In 2016, 433 regular people invested an average of $2,730. Today? They got a 400X buyout offer from the company, as Revolut’s valuation increased 89,900% in the same timeframe.
Now Pacaso is giving regular investors a similar shot. Founded by a former Zillow exec, Pacaso’s co-ownership tech reshapes the $1.3T vacation home market. They’ve earned $110M+ in gross profit to date, including 41% YoY growth in 2024 alone. They even reserved the Nasdaq ticker PCSO.
The same institutional investors behind Uber, Venmo, and eBay backed Pacaso. And you can join them. But not for long. Pacaso’s investment opportunity ends September 18.
Paid advertisement for Pacaso’s Regulation A offering. Read the offering circular at invest.pacaso.com. Reserving a ticker symbol is not a guarantee that the company will go public. Listing on the NASDAQ is subject to approvals.

Today’s trending AI news stories
ChatGPT experiments with a max-thinking mode called Juice 200
OpenAI is experimenting with a new “Thinking Effort” selector in the ChatGPT web app, letting users adjust the AI’s cognitive intensity. Options range from Light thinking (5) to Max thinking (200), with intermediate tiers like Standard (18) and Extended (48). Max thinking, or “Juice 200,” is currently limited to Pro and Enterprise users due to the heavy compute it demands.
The new ChatGPT web app version has an updated (hidden) thinking effort picker - Max thinking (200), Extended thinking (48), Standard thinking (18), Light thinking (5)
And a few other related experiments, including showing models in the plus menu, showing the selected model in…
— Tibor Blaho (@btibor91)
9:54 PM • Aug 29, 2025
The update also includes other experiments: showing the selected model in the composer, a fully collapsed tool menu, and model visibility in the Plus menu. The company wants to give users smarter AI without forcing them to guess which model or power level to pick. Early tests suggest this could let ChatGPT scale from casual chat to heavy reasoning tasks while keeping the experience smooth. Read more.
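For a concrete sense of how a tiered effort setting might work behind the scenes, here is a minimal sketch in Python. The tier names and "juice" values come from the report above; the request structure, field names, and plan-gating logic are assumptions for illustration, not OpenAI's actual implementation.

```python
# Hypothetical sketch of a tiered "thinking effort" setting.
# Tier names and juice values are from the report; everything else is assumed.
from dataclasses import dataclass

THINKING_EFFORT = {
    "light": 5,      # quick, low-latency replies
    "standard": 18,  # default reasoning budget
    "extended": 48,  # longer multi-step reasoning
    "max": 200,      # "Juice 200" -- Pro/Enterprise only, per the report
}

@dataclass
class ChatRequest:
    prompt: str
    thinking_effort: int

def build_request(prompt: str, tier: str = "standard", pro_plan: bool = False) -> ChatRequest:
    """Map a user-facing tier to a numeric effort budget, gating the top tier."""
    if tier == "max" and not pro_plan:
        tier = "extended"  # assumed fallback for non-Pro users
    return ChatRequest(prompt=prompt, thinking_effort=THINKING_EFFORT[tier])

print(build_request("Summarize this paper and check the math.", tier="max", pro_plan=True))
```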
With new in-house models, Microsoft lays the groundwork for independence from OpenAI
Microsoft is no longer content being just OpenAI’s biggest customer. Last week, it rolled out two homegrown AI models designed to show it can build world-class systems in-house while keeping costs under control. MAI-Voice-1 is a speech-generation model tuned for speed and efficiency: it cranks out a full minute of expressive, multi-speaker audio in under a second on a single GPU. It’s already powering Copilot features like Daily and Podcasts, a clear play for voice to become a mainstream interface.
Excited to share our first @MicrosoftAI in-house models: MAI-Voice-1 and MAI-1-preview. Details and how you can test below, with lots more to come⬇️
— Mustafa Suleyman (@mustafasuleyman)
5:00 PM • Aug 28, 2025
The second model, MAI-1-preview, is a large language model trained on around 15,000 Nvidia H100 GPUs, far fewer than rivals like xAI’s Grok, but engineered to run inference on a single GPU. That balance of scale and efficiency is no accident; AI chief Mustafa Suleyman calls it “punching above its weight,” crediting data curation and training discipline over brute-force compute. The model is in public testing now, with a wider Copilot rollout coming. These systems give Microsoft a hedge against overreliance on OpenAI. Read more.
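The headline numbers are easier to appreciate with a quick back-of-envelope check. The audio-length, latency, and GPU figures below come from the story; the ~100,000-GPU scale often cited for xAI's training cluster is an outside assumption used only for comparison.

```python
# Back-of-envelope check on the reported MAI figures.
audio_seconds = 60.0       # MAI-Voice-1: "a full minute of expressive, multi-speaker audio"
generation_seconds = 1.0   # "in under a second on a single GPU" (upper bound)

real_time_factor = audio_seconds / generation_seconds
print(f"MAI-Voice-1 real-time factor: at least {real_time_factor:.0f}x")  # >= 60x

mai1_h100s = 15_000        # MAI-1-preview training run (reported)
rival_h100s = 100_000      # rough scale often cited for xAI's cluster (assumption)
print(f"MAI-1-preview trained on roughly {mai1_h100s / rival_h100s:.0%} of that GPU count")
```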
Ex-OpenAI researcher says $10K UBI payments 'feasible' with AI growth
Miles Brundage, who once led AGI readiness at OpenAI, argues that current UBI pilots offering $500–$1,500 a month are relics of a pre-AI economy. With trillion-dollar data center buildouts and Nvidia’s Blackwell Ultra GPUs already reshaping global output, he says $10,000 monthly stipends could be both economically and politically viable in the near term. The bottleneck isn’t money; it’s whether governments can rewrite policy fast enough to manage a post-work transition without stagnation.
I think that a significantly more generous UBI experiment than has been tried so far (say, $10k/month vs. $1k/month) would show big effects.
But unfortunately that is very expensive, billionaires have moved on, and a bureaucrat would get crucified for this, so it won't happen.
— Miles Brundage (@Miles_Brundage)
12:16 AM • Aug 20, 2025
Nvidia CEO Jensen Huang sees the same forces driving not redistribution but reconfiguration: AI as the long-awaited solution to the “productivity paradox.” By automating drudge work, he argues, companies can finally execute on shelved ideas, making a four-day workweek not just possible but efficient. Four-day-week pilot programs back him up: 24% productivity gains, burnout cut in half, turnover down.
Brundage and Huang sketch two diverging but linked paths: direct redistribution of AI’s economic surplus, or structural redesign of work itself. Read more.
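To put the jump from today's pilots to a $10,000 stipend in perspective, here is a rough cost sketch. The monthly amounts come from the story above; the recipient count is an assumed round figure for the US adult population, used only to show the scale involved.

```python
# Rough annual cost of a flat monthly stipend at the levels discussed above.
US_ADULTS = 260e6  # assumed round figure for the US adult population

def annual_cost(monthly_stipend: float, recipients: float = US_ADULTS) -> float:
    """Total yearly outlay for a universal per-adult stipend."""
    return monthly_stipend * 12 * recipients

for monthly in (500, 1_500, 10_000):
    print(f"${monthly:>6,}/month -> ~${annual_cost(monthly) / 1e12:.1f} trillion per year")
```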

Lovable’s CEO isn’t too worried about the vibe-coding competition
Cohere launches open weights AI model Aya 23 with support for nearly two dozen languages
Forget data labeling: Tencent’s R-Zero shows how LLMs can train themselves
Andrej Karpathy says reinforcement learning isn’t enough, LLMs need experience
Nvidia says two mystery customers accounted for 39% of Q2 revenue
Accenture CEO weighs in on why so many AI projects have failed with 3 red flags to watch out for
DeepConf can greatly reduce computational effort in language model reasoning tasks
The White House apparently ordered federal workers to roll out Grok ‘ASAP’
Alibaba develops a new AI chip for a wide range of inference tasks
Musk's xAI sues engineer for allegedly taking secrets to OpenAI
From drones to robots, tourists flock to China to glimpse a ‘cyberpunk’ future
A firewall for science: AI tool identifies 1,000 'questionable' journals
New method enables AI models to forget private and copyrighted data
ChatGPT-powered dolls are becoming caregivers in South Korea

5 new AI-powered tools from around the web

Latest AI Research Papers
arXiv is a free online library where researchers share pre-publication papers.


Thank you for reading today’s edition.

Your feedback is valuable. Respond to this email and tell us how you think we could add more value to this newsletter.
Interested in reaching smart readers like you? To become an AI Breakfast sponsor, reply to this email or DM us on 𝕏!