OpenAI Researchers Warned Board About "Q-Star"

Good morning. It’s Friday, November 24th.

Did you know: Decoding AI: A Non-Technical Explanation of Artificial Intelligence is on sale for just $2.99 today

In today’s email:

  • Q-Star

  • Latest AI Product Announcements

  • 10 New AI Tools

  • Latest AI Research Papers

You read. We listen. Let us know what you think by replying to this email.

Interested in reaching 44,403 smart readers like you? To become an AI Breakfast sponsor, apply here. 

Today’s trending AI news stories

OpenAI researchers reportedly warned internally about breakthrough AI discovery "Q-Star"

OpenAI researchers reportedly alerted their board of directors to an AI breakthrough referred to as Q-Star, warning that it could pose “risks to humanity.” As reported by Reuters, the warning came in a letter to OpenAI’s board, which was also linked to the firing of CEO Sam Altman. However, it remains unclear whether the board actually received the letter: The Verge cites sources who deny that it did.

Q-Star, an algorithm reportedly capable of autonomously solving elementary math problems not included in its training data, is seen by some at OpenAI as a significant step towards Artificial General Intelligence. The work, attributed to Chief Scientist Ilya Sutskever and further developed by Jakub Pachocki and Szymon Sidor, is said to showcase advanced, human-like reasoning abilities.

Concerns outlined in the alleged letter include the system’s potential to accelerate scientific progress but also question the adequacy of OpenAI’s safety measures. This development is part of a broader initiative by an “AI Scientist Team,” formed by merging “Code Gen” and “Math Gen” teams, focusing on enhancing AI models’ reasoning capabilities for scientific tasks.

Altman's comments at the APEC Summit 2023 alluded to groundbreaking progress in AI, emphasizing the need for a breakthrough beyond scaling existing systems. Adding to the speculation, The Information revealed a project named GPT-Zero, led by Ilya Sutskever in 2021, aiming to advance language models for new academic discoveries, potentially linked to the internal claims of AGI achievement.

We expect a comment from OpenAI on Q-Star after the holiday weekend.

Latest AI Product Announcements

> ElevenLabs introduces an AI Speech-to-Speech Converter, allowing users to transform and control their voice. This tool offers emotional range, nuance preservation, and consistent quality, ensuring clarity and authenticity in voiceovers. It features a wide range of customizable voices, including the ability to create personalized voices. Users can fine-tune settings for stability, clarity, and style, with applications across gaming, audiobooks, and more. The platform supports voice customization for different characters and content, with a user-friendly process for uploading audio, selecting voices, and generating speech.

> Anthropic's new Claude 2.1 chatbot, like OpenAI's GPT-4 Turbo, is advertised as handling large volumes of text. However, both models struggle with the 'lost in the middle' phenomenon, often overlooking information placed in the middle and at the edges of a document. This limitation is evident in tests where Claude 2.1's performance drops significantly in larger context windows, especially beyond 90,000 tokens. Consequently, large context windows in AI models are not yet a fully reliable alternative to more precise and cost-effective vector databases.
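Tests like these typically follow a "needle in a haystack" design: plant one distinctive fact at varying depths in a long filler document and ask the model to retrieve it. A minimal sketch of that setup, with illustrative filler and needle strings rather than the actual protocol used in the Claude 2.1 evaluation:

```python
# Sketch of a "needle in a haystack" long-context test (illustrative;
# not the exact protocol used to evaluate Claude 2.1 or GPT-4 Turbo).

FILLER = "The quick brown fox jumps over the lazy dog."   # padding sentence
NEEDLE = "The secret passphrase is BLUE-HARBOR-42."       # fact to retrieve

def build_haystack(total_sentences: int, depth: float) -> str:
    """Embed NEEDLE at a relative depth (0.0 = start, 1.0 = end)
    inside `total_sentences` filler sentences."""
    sentences = [FILLER] * total_sentences
    position = int(depth * total_sentences)
    sentences.insert(position, NEEDLE)
    return " ".join(sentences)

def make_prompt(depth: float, total_sentences: int = 1000) -> str:
    """Build a retrieval prompt with the needle buried at `depth`."""
    context = build_haystack(total_sentences, depth)
    return (f"{context}\n\nBased only on the text above, "
            f"what is the secret passphrase?")

# Sweep the needle through the document; a model suffering from the
# "lost in the middle" effect tends to fail at depths near 0.5.
for depth in (0.0, 0.25, 0.5, 0.75, 1.0):
    prompt = make_prompt(depth)
    # response = llm_client.complete(prompt)  # hypothetical client call
```

Scoring is then just checking whether each response contains the planted passphrase at each depth.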

> Bard, Google’s AI chatbot, enhances its functionality with a new YouTube integration, enabling users to extract specific information from videos without watching them. This feature, still in experimental stages, can provide detailed summaries and answer queries about video content. While beneficial for users, it raises concerns about its impact on content creators, as it bypasses traditional viewing methods that generate revenue. Google has yet to address these implications.

> Inflection AI, creator of the chatbot Pi, releases Inflection-2, a new AI model that rivals Google and Meta’s offerings and closely approaches OpenAI’s GPT-4 in capability. Inflection-2 outperformed Google’s PaLM 2 Large and Meta’s LLaMA 2 in benchmarks, making it the top model of its size. It will soon be integrated into Pi, offering enhanced conversational capabilities and real-time information access. Inflection AI aims to further scale its models, with ambitious plans for future developments.

> Google Meet introduces a new hand gesture detection feature that can recognize when users physically raise their hand during a video call. This update triggers the hand raise icon, alerting other call participants of the user’s intention to speak, enhancing interaction and communication efficiency.
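Under the hood, raised-hand detection of this kind usually reduces to a geometric check on body-pose landmarks. A minimal sketch, assuming normalized image coordinates and an illustrative margin threshold (not Google Meet's actual model):

```python
# Minimal geometric heuristic for raised-hand detection from pose
# landmarks (illustrative; Google Meet's actual detector is not public).
from typing import NamedTuple

class Landmark(NamedTuple):
    x: float  # normalized horizontal position, 0..1
    y: float  # normalized vertical position, 0 = top of frame, 1 = bottom

def hand_is_raised(wrist: Landmark, shoulder: Landmark,
                   margin: float = 0.05) -> bool:
    """A hand counts as raised when the wrist sits clearly above the
    shoulder in image coordinates (smaller y means higher in frame).
    The margin suppresses flicker from small landmark jitter."""
    return wrist.y < shoulder.y - margin
```

A production system would run this per frame on landmarks from a pose model and debounce over several frames before triggering the hand-raise icon.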

> Off/Script, a new community-powered product creation platform, launched its mobile app for creating and monetizing AI-designed merchandise. Founded by Jonathan Brun and Justine Massicotte, it enables users to design products like clothing and accessories. The app uses AI, including Stable Diffusion and ControlNet, to generate product mock-ups. Popular designs, decided by user votes, are funded, manufactured, and shipped by Off/Script. Creators earn 20% of sales and a $500 fee, and the app features an in-app report function for content authenticity.

> OpenAI's ChatGPT Voice feature is now available to all free users, and iPhone 15 Pro and Pro Max owners can configure the new Action Button (which replaced the mute switch) to launch ChatGPT Voice in place of Siri. The app offers diverse voice options for ChatGPT and allows users to speak directly to the AI for responses. The feature is set up through the Action Button menu in iOS Settings, using the Shortcuts app.

> Formula 1 is set to test an AI system using Computer Vision technology at the Abu Dhabi Grand Prix to determine track limit breaches. The Fédération Internationale de l'Automobile (FIA) aims to reduce the incidents requiring manual review by officials. While not fully automating the process, this tech, previously used in cancer screenings, will help identify clear non-violations, significantly cutting down on the number of potential rule infractions for manual inspection. The FIA anticipates a future shift towards real-time automated policing systems in racing.

In partnership with CODERABBIT AI

Accelerate Your Code Reviews with CodeRabbit AI

CodeRabbit is here to revolutionize your code reviews with its AI-driven platform. With privacy-focused, contextual pull request reviews, CodeRabbit offers line-by-line code suggestions and interactive chat features to make your coding process more efficient and error-free.

Key Features:

Pull Request Summaries: Understand the intent behind changes with clear summaries and automated release notes.

Line-by-Line Code Suggestions: Receive detailed, actionable suggestions for every line of code changed.

Interactive AI Chat: Engage in contextual conversations within your code lines for better coding solutions.

Customizable Reviews: Tailor the AI to suit your specific coding preferences and needs.

Thank you for supporting our sponsors!

10 new AI-powered tools from around the web

  • Trace AI, an innovative tool for building iOS apps, translates plain language into Swift UI code, enabling quick, efficient app development directly in the browser.

  • Mojju introduces unique, powerful custom GPT models for OpenAI, focusing on diverse applications like productivity tools and business aids. Their expert AI team ensures quality, integration with platforms like Zapier, and ongoing support.

  • Deepmark AI, an open-source tool, benchmarks various large language models (LLMs) on task-specific metrics like accuracy and latency using your own data. It ensures predictable, reliable AI application performance, integrating with major Generative AI APIs for a comprehensive assessment.

  • Yep Pro offers a no-code landing page builder with AI copywriting, a vast image library, and features like customizable emails, surveys, A/B testing, and payment integration. It is available in three tailored pricing plans.

  • QuizRise is an AI-driven platform for creating quizzes and flashcards from sources such as PDFs and URLs. It supports multiple question types and offers quick exports, easy sharing, and multi-language support, serving over 1,000 customers.

  • Fork.ai aids in identifying sales leads by analyzing tech stacks in mobile apps, providing detailed contacts for AI and tech sector professionals through advanced search and analysis tools.

  • Code to Flowchart converts code into interactive flowcharts and sequence diagrams, simplifying complex logic comprehension. It visualizes code for easier understanding of open-source projects and tech concepts, enhanced by AI.

  • Arvin offers the first browser-based GPTs extension, enhancing digital productivity. It’s a free, seamless integration for diverse uses, continuously updated to improve work, social, and personal tasks.

  • Dubecos is an AI dubbing app that translates videos into multiple languages using your voice, offering an easy and fast way to reach a global audience while maintaining the original emotion and inflection.

  • HearTheWeb transforms text into engaging podcasts with AI co-hosts in under 5 minutes. It offers customization, multiple co-host options, and packages for different publishing needs.

arXiv is a free online library where researchers share pre-publication papers.

The paper introduces a two-stage latent diffusion text-to-video generation architecture. It combines keyframe synthesis and frame interpolation for smooth dynamics. The study highlights the superiority of separate temporal blocks over mixed layers for video quality and consistency. It also features an efficient interpolation model and a comprehensive evaluation of various MoVQ-based video decoding schemes, achieving top scores in metrics like CLIPSIM and FVD among open-source solutions.

GAIA, a benchmark from FAIR (Meta), HuggingFace, AutoGPT, and GenAI, aims to challenge general AI assistants with real-world questions requiring fundamental abilities like reasoning and multi-modality handling. Its tasks are conceptually simple for humans yet hard for AI: humans score 92%, against GPT-4’s 15%. By focusing on this human-easy, AI-hard gap, GAIA departs from current benchmarks and aims to test robustness comparable to human-level performance.
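Benchmarks of this kind are typically scored by exact-matching a model's final answer against a ground-truth string after light normalization. A minimal sketch of such a scorer; the normalization rules here are assumptions, not GAIA's official implementation:

```python
# Minimal normalized exact-match scorer in the spirit of GAIA-style
# benchmarks (normalization rules are illustrative assumptions).

def normalize(answer: str) -> str:
    """Lowercase, trim, drop a trailing period and a leading article,
    and collapse internal whitespace."""
    text = answer.strip().lower().rstrip(".")
    for article in ("the ", "a ", "an "):
        if text.startswith(article):
            text = text[len(article):]
            break
    return " ".join(text.split())

def score(predictions: list[str], references: list[str]) -> float:
    """Fraction of predictions that exactly match their reference
    after normalization."""
    hits = sum(normalize(p) == normalize(r)
               for p, r in zip(predictions, references))
    return hits / len(references)
```

For example, `score(["The Eiffel Tower."], ["eiffel tower"])` counts as a hit, while any paraphrase that is not string-identical after normalization does not, which is what makes such benchmarks strict.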

PG-Video-LLaVA is the first Large Language Multimodal Model with pixel-level grounding for videos, integrating audio for enhanced understanding. It excels in spatially and temporally localizing objects in videos, outperforming previous models in video-based conversation and grounding tasks. The model’s open-source benchmarks ensure reproducibility and transparency, marking a significant enhancement in multimodal AI research.

Researchers from BIGAI, Peking University, Carnegie Mellon University, and Tsinghua University introduce “LEO”, an embodied multi-modal, multi-task generalist agent adept at perceiving, reasoning, planning, and acting in 3D environments. Utilizing LLM-based architectures, LEO is trained through 3D vision-language alignment and action instruction tuning, with a comprehensive dataset for deep 3D world interaction. LEO marks a significant step towards realizing general intelligence in AI.

Researchers have developed D3PO, a method for fine-tuning diffusion models using human feedback, bypassing the need for a reward model. This new approach addresses the high GPU memory demands of conventional methods, making the process more efficient and cost-effective. D3PO demonstrates its effectiveness in reducing image distortions, generating safer images, and improving prompt-image alignment in diffusion models, marking a significant advancement in model fine-tuning using human feedback.

WildFusion, a new approach for 3D-aware image synthesis, outperforms traditional GAN methods. It models instances in view space, eliminating the need for posed images and learned camera distributions. By leveraging latent diffusion models and modular depth prediction, it synthesizes high-quality, 3D-consistent images from diverse datasets without direct multiview supervision or 3D geometry, offering scalable and robust solutions for 3D content creation from in-the-wild image data.

PF-LRM, a pose-free large reconstruction model, excels at reconstructing 3D objects from sparse, unposed images while simultaneously estimating camera poses with remarkable speed and precision. Built on a transformer architecture, it integrates 3D and 2D tokens, enabling efficient information sharing. Excelling in cross-dataset generalization, PF-LRM outperforms existing methods in pose accuracy and reconstruction quality. Trained on extensive multi-view data, it also supports downstream applications like text/image-to-3D conversion, showcasing its versatility and robustness.

Thank you for reading today’s edition.

Your feedback is valuable. Respond to this email and tell us how you think we could add more value to this newsletter.

Interested in reaching smart readers like you? To become an AI Breakfast sponsor, apply here.