Inside Meta's "List of 44"

Good morning. It’s Monday, July 21st.

On this day in tech history: In 2008, NVIDIA released CUDA 2.0, a major update to its platform for writing massively parallel C/C++ code that runs on NVIDIA GPUs. CUDA made it practical to train deep neural networks by exploiting thousands of GPU cores simultaneously, laying the foundation for breakthroughs in large-scale deep learning.

In today’s email:

  • OpenAI Beats Grok, Gemini in Math Olympiad

  • Inside Meta’s “List of 44”

  • DuckDuckGo Allows Hiding AI Image Results

  • 5 New AI Tools

  • Latest AI Research Papers

You read. We listen. Let us know what you think by replying to this email.

In partnership with Adaline

🚀 Build Smarter, Ship Faster with Adaline

Adaline is the end-to-end platform trusted by world-class product and engineering teams to iterate, evaluate, deploy, and monitor large language model applications—all in one place.

🛠 Prompt like a pro: Test across datasets, compare models, and collaborate seamlessly—with automatic versioning and prompt management that actually works.

⚙️ Deploy without drama: Adaline powers 200M+ API calls a day with 99.998% uptime, handling scale effortlessly and securely.

📈 From idea to insight: Move from sketch to live deployment in record time, with real-time logs, analytics, and performance monitoring.

💡 Ready to launch? Adaline is now generally available—with $1M in API credits for new users.

Thank you for supporting our sponsors!

Today’s trending AI news stories

OpenAI’s General LLM Wins IMO Gold, Experts Urge Caution on What It Means

OpenAI’s experimental large language model has reportedly reached gold medal–level performance at the 2025 International Mathematical Olympiad (IMO), correctly solving five of the six official problems. According to OpenAI, the model produced full natural-language proofs under standard contest conditions, graded anonymously by former medalists.

Unlike DeepMind’s AlphaGeometry, which blends neural networks with symbolic search tailored specifically for geometry, OpenAI’s system remains a general-purpose language model, trained mainly through reinforcement learning and extensive test-time compute, without IMO-specific design. This result, led by researcher Alexander Wei’s team, is described as a step toward human-level creative reasoning, with some calling it a breakthrough in abstract mathematical problem solving.

Other models, including Gemini 2.5 Pro and Grok 4, reportedly failed to reach even bronze-level performance. OpenAI researcher Jerry Tworek noted that the breakthrough came from the same reinforcement learning system powering OpenAI’s recent AI agent, though a public release may not arrive until year-end. GPT-5, a separate system, is still planned for release soon.

However, prominent mathematician Terence Tao advises caution. Tao argues that apparent AI success can be heavily influenced by how the evaluation is set up, comparable to letting human students work in teams, use calculators, or take days rather than hours. If an AI submits only its best attempt after massive parallel sampling or benefits from hidden prompt tuning, it can dramatically inflate perceived capability.
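Tao's point about parallel sampling can be made concrete with a little probability. If a model succeeds on a given problem with probability p per attempt, and only the best of n independent attempts is submitted, the apparent solve rate rises to 1 − (1 − p)^n. The sketch below illustrates this with made-up numbers; neither p nor n reflects OpenAI's actual protocol, which has not been disclosed in that detail.

```python
# Illustrative sketch of Tao's evaluation concern: submitting only the best
# of n independent attempts inflates the apparent per-problem solve rate.
# The probability p and sample count n below are hypothetical.

def best_of_n_success(p: float, n: int) -> float:
    """Probability that at least one of n independent attempts succeeds."""
    return 1.0 - (1.0 - p) ** n

single = best_of_n_success(0.05, 1)    # one attempt, like a human contestant
sampled = best_of_n_success(0.05, 256)  # keep only the best of 256 samples

print(f"single attempt: {single:.3f}")   # a 5% solver looks like a 5% solver
print(f"best of 256:    {sampled:.3f}")  # ...or like a near-certain one
```

A model that solves a problem 5% of the time per attempt looks nearly infallible once hundreds of samples are filtered down to one submission, which is why the testing protocol matters as much as the score.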

Without transparent and controlled testing protocols matching real contest conditions, Tao warns, comparisons with human contestants risk being misleading. The company emphasized that this IMO model is separate from GPT-5, built by a small research team exploring general reasoning rather than task-specific approaches. Read more.

Inside Meta’s “List of 44”: leaks reveal the team targets superintelligence beyond GPT‑5

Leaked documents reveal Meta’s “List of 44,” an elite AI team assembled by Mark Zuckerberg to chase AGI and rival OpenAI and DeepMind. Roughly 40% of members are ex‑OpenAI staff, half are of Chinese origin, and over 75% hold PhDs from top institutions. Focused on large language models, multimodal reasoning, and RLHF, this group is backed by rumored compensation reaching $100 million for key hires. Analysts say the list signals Meta’s shift from social media to frontier AI, deepening the global talent war.

Insiders claim the team’s mission targets superintelligence beyond short‑term chatbots, while critics question whether high‑pay “mercenary” teams can match purpose‑driven research. Confirmed names include GPT‑4o contributors and former Gemini and Muse architects, highlighting Meta’s aggressive bid to leapfrog GPT‑5 and Gemini‑Ultra. The leak also stirs geopolitical scrutiny given the team’s demographics and Meta’s global infrastructure footprint. Read more.

DuckDuckGo now lets you hide AI-generated images in search results

DuckDuckGo is adding a new privacy-focused option to its search engine: the ability to hide AI-generated images from search results. Users can activate the filter from the Images tab via a drop-down menu labeled “AI images,” or toggle it in the settings under “Hide AI-Generated Images.”

The feature answers growing complaints about “AI slop”: low-quality synthetic content cluttering search results. The filter draws on curated open-source blocklists, including lists from uBlock Origin and the Huge AI Blocklist project. While DuckDuckGo acknowledges it won’t catch every AI image, it significantly cuts down on visible synthetic content. Read more.
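Blocklist-based filtering of the kind described is conceptually simple: drop any image result whose source host appears on a curated list. The sketch below illustrates the idea; the domain names, result format, and blocklist entries are all invented for illustration and do not reflect DuckDuckGo's internal implementation.

```python
# Hypothetical sketch of blocklist-based image filtering: keep only results
# whose source host is absent from a curated blocklist of AI-image sites.
# All domains and the result structure here are made up for illustration.

from urllib.parse import urlparse

AI_IMAGE_BLOCKLIST = {"example-ai-art.test", "genimages.test"}  # hypothetical

def hide_ai_images(results, blocklist=AI_IMAGE_BLOCKLIST):
    """Return only the results whose host is not on the blocklist."""
    return [r for r in results
            if urlparse(r["url"]).hostname not in blocklist]

results = [
    {"url": "https://example-ai-art.test/cat.png"},   # blocked
    {"url": "https://photos.example.org/cat.jpg"},    # kept
]
print(hide_ai_images(results))
```

As the article notes, this approach is only as good as the lists behind it: AI images hosted on domains the lists don't cover will still slip through.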

5 new AI-powered tools from around the web


Thank you for reading today’s edition.

Your feedback is valuable. Respond to this email and tell us how you think we could add more value to this newsletter.

Interested in reaching smart readers like you? To become an AI Breakfast sponsor, reply to this email or DM us on 𝕏!