AI Breakfast
OpenAI's Project "Strawberry"
Good morning. It’s Monday, July 15th.
Did you know? On this day in 1983, the Nintendo Famicom was released in Japan.
In today’s email:
OpenAI’s New Secret Project
Gemini AI Caught Spying on Docs
DeepMind’s PEER
OpenAI’s Illegal NDAs?
5 New AI Tools
Latest AI Research Papers
You read. We listen. Let us know what you think by replying to this email.
Today’s trending AI news stories
Exclusive: OpenAI working on new reasoning technology under code name ‘Strawberry’
OpenAI is developing a new AI project, "Strawberry," to enhance its models' reasoning capabilities. Formerly known as Q* (Q-Star), Strawberry leverages a specialized form of post-training to adapt pre-trained models for autonomous web searches and "deep research."
Internal documents reveal Strawberry's specialized post-training process, refining AI models after initial training with large datasets. Drawing parallels with Stanford's "Quiet-STaR" method, Strawberry aims to improve AI's logical reasoning by integrating large language models with planning algorithms and reinforcement learning techniques.
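To make the "self-taught reasoning" idea concrete, here is a toy sketch of a STaR-style filtering loop, the approach that Quiet-STaR extends. Nothing here reflects OpenAI's actual Strawberry pipeline; the model is a stand-in function and the problems are arithmetic toys. The core loop is: sample a rationale and answer, keep only rationales whose answers verify, and reuse the kept pairs as fine-tuning data for the next round.

```python
import random

random.seed(0)

# Toy problems: (question, correct answer). A real pipeline would use
# reasoning benchmarks and an actual language model.
problems = [("2+2", 4), ("3*3", 9), ("10-4", 6)]

def sample_rationale(question):
    """Stand-in for an LLM sampling a chain of thought plus a final answer.

    The noise term makes the 'model' occasionally wrong, so the filter below
    has something to reject.
    """
    answer = eval(question) + random.choice([0, 0, 1])
    return f"compute {question} step by step", answer

def star_iteration(problems):
    """One STaR-style iteration: keep only rationales whose answer checks out."""
    kept = []
    for question, gold in problems:
        rationale, answer = sample_rationale(question)
        if answer == gold:  # the outcome check acts as the reward signal
            kept.append((question, rationale))
    return kept  # would become fine-tuning data for the next round

data = star_iteration(problems)
print(len(data), "verified rationales collected")
```

The key design choice is that correctness of the final answer, not human labeling, decides which reasoning traces the model trains on in later rounds.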
This initiative aligns with OpenAI's strategy to empower AI models not only to generate answers but also to plan and navigate the internet autonomously. Elon Musk has acknowledged OpenAI's Strawberry project, expressing bullish optimism for AI's future. Enhancing AI's human-like reasoning is crucial for breakthroughs in scientific research and engineering, addressing common challenges like intuitive problem-solving and logical fallacies.
OpenAI recently signaled the upcoming release of technology with improved reasoning abilities, achieved through these specialized post-training processes. Read more.
Google's Gemini AI caught scanning Google Drive hosted PDF files without permission — user complains feature can't be disabled
Google's Gemini AI has come under scrutiny for allegedly scanning PDF files stored on Google Drive without explicit user permission, prompting concerns raised by Kevin Bankston, a Senior Advisor on AI Governance. Bankston criticized the practice after discovering that Gemini automatically summarized his private documents without prior user interaction.
Despite efforts to disable these features through settings, users like Bankston found the process convoluted and ineffective, highlighting a gap in Google's transparency and user control measures. This incident fuels ongoing discussions about AI privacy and user consent, challenging the effectiveness of current safeguards as AI continues to integrate into daily applications.
Google has not clarified the technical rationale behind Gemini's actions, leaving users wary about the privacy implications of AI-driven functionalities within Google's ecosystem. Read more.
DeepMind’s PEER scales language models with millions of tiny experts
DeepMind has introduced Parameter Efficient Expert Retrieval (PEER) to scale Mixture-of-Experts (MoE) models, addressing current limitations of MoE architectures. MoE enhances large language models (LLMs) by routing data to specialized “expert” modules, thus increasing model capacity without escalating computational costs. However, traditional MoE is restricted to a small number of experts. PEER overcomes this by using a learned index to route input data efficiently to millions of tiny expert modules, each with a single neuron in the hidden layer. This design ensures improved parameter efficiency and knowledge transfer.
PEER's architecture can replace existing transformer feedforward (FFW) layers, optimizing the performance-compute tradeoff. It leverages a multi-head retrieval approach, similar to transformer models' multi-head attention mechanism. Evaluations on various benchmarks show PEER achieves lower perplexity scores and better performance-compute tradeoffs compared to dense FFW layers and other MoE architectures. Read more.
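The retrieval idea above can be sketched in a few lines. This is a toy NumPy illustration, not DeepMind's implementation: PEER actually uses learned product keys with multi-head retrieval over millions of experts, while this sketch uses a plain dot-product lookup to select the top-k single-neuron experts for one token vector. All names and sizes here are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, n_experts, top_k = 16, 1024, 8

# Each "expert" is a single hidden neuron: a down-projection row u_i (d -> 1)
# and an up-projection row v_i (1 -> d).
U = rng.standard_normal((n_experts, d_model)) * 0.02
V = rng.standard_normal((n_experts, d_model)) * 0.02

# Learned keys used to retrieve experts; the query network here is a plain
# linear map (a simplification of PEER's product-key retrieval).
keys = rng.standard_normal((n_experts, d_model))
W_query = rng.standard_normal((d_model, d_model)) * 0.02

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def peer_layer(x):
    """Route a token vector x to its top-k tiny experts and mix their outputs."""
    q = x @ W_query                                  # query for expert retrieval
    scores = keys @ q                                # similarity to every expert key
    idx = np.argpartition(scores, -top_k)[-top_k:]   # indices of the top-k experts
    gates = softmax(scores[idx])                     # router weights over selected experts
    h = np.maximum(U[idx] @ x, 0.0)                  # (top_k,) single-neuron activations
    return (gates * h) @ V[idx]                      # weighted sum of expert outputs

x = rng.standard_normal(d_model)
y = peer_layer(x)
print(y.shape)  # (16,)
```

Note that only the top-k experts' parameters touch the computation, which is why capacity (n_experts) can grow into the millions without the per-token cost growing with it.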
Whistleblowers accuse OpenAI of ‘illegally restrictive’ NDAs
Whistleblowers have accused OpenAI of unlawfully restricting its employees from communicating with government regulators, as reported in a letter obtained by The Washington Post. The letter, addressed to SEC Chair Gary Gensler, raises concerns about OpenAI's severance, non-disparagement, and non-disclosure agreements (NDAs). It alleges that these agreements discourage employees from reporting securities violations to the SEC and force them to waive whistleblower incentives.
Moreover, the whistleblowers claim that previous NDAs violated labor laws by placing overly restrictive conditions on employees seeking employment, severance payments, and other financial benefits. OpenAI has not yet responded to requests for comment. A company spokesperson did highlight that their whistleblower policy is designed to protect employees' rights to make protected disclosures. Read more.
Etcetera: Stories you may have missed
5 new AI-powered tools from around the web
PngMaker.io is a free online tool that converts text into professional PNG images with transparent backgrounds in seconds, ideal for digital content.
AI Web Designer quickly redesigns websites using AI, allowing users to edit and export the results. It democratizes design, simplifying web development.
AyeHigh offers user-friendly generative AI tools for students and professionals, including resume shortlisting, ATS analysis, and content optimization.
Move AI converts 2D video into 3D motion data for lifelike animation using advanced AI, computer vision, biomechanics, and physics technologies.
Phaie AI by Creatr is an open-source tool and Figma plugin for generating, editing, and fixing design systems using AI.
arXiv is a free online library where researchers share pre-publication papers.
Thank you for reading today’s edition.
Your feedback is valuable. Respond to this email and tell us how you think we could add more value to this newsletter.
Interested in reaching smart readers like you? To become an AI Breakfast sponsor, apply here.