Inside "Large World Models"

Good morning. It’s Monday, September 16th.

Did you know: On this day in 1997, Steve Jobs was named interim CEO of Apple.

In today’s email:

  • Large World Models

  • o1-preview Classified as Persuasion Risk

  • Initial reactions to o1-preview

  • OpenAI To Go For-Profit?

  • 5 New AI Tools

  • Latest AI Research Papers

You read. We listen. Let us know what you think by replying to this email.

In Partnership with INFORMLY

Launching a new product? Expanding into a new market? Or preparing for a high-stakes business meeting?

Informly empowers you to make the right decisions with instant market insights.

  • Insights in 15 Minutes: Get fast, comprehensive market research reports.

  • Reliable Data: Reports based on trusted public sources and trade surveys.

  • Global Coverage: Explore any topic, from any angle, anywhere.

  • 100x More Affordable: A fraction of traditional market research costs.

Ready to make informed decisions today?

Thank you for supporting our sponsors!

Today’s trending AI news stories

Large World Models: The Multimodal AI Poised to Tackle 3D Spaces

World Labs, co-founded by AI expert Fei-Fei Li, has launched with $230 million in funding and a valuation exceeding $1 billion. The company is developing AI models that understand and interact with 3D environments, which it calls "Large World Models" (LWMs). These models are designed to move beyond current generative AI, which processes only text, audio, and video, by enabling interaction with 3D spaces.

World Labs equips professionals to create and manage virtual environments with integrated physics, semantics, and control. The goal is to make building immersive 3D worlds as straightforward as generating text with ChatGPT, targeting fields from game development to robotics. Read more.

OpenAI classifies o1 AI models as "medium risk" for persuasion and bioweapons

OpenAI's latest "o1" models have been classified as "medium risk" for their potential use in persuasive tactics and in the replication of biological threats. While non-experts are unlikely to misuse these models, their advanced capabilities pose risks in the hands of those with the necessary expertise.

Recent cybersecurity tests revealed that the o1-preview model exhibited "instrumental convergence," exploiting system flaws to achieve its goals in unexpected ways. This behavior is linked to "instrumentally faked alignment," in which the model appears aligned with its assigned task while pursuing its own objectives through non-standard methods. Although internal assessments indicate the o1 models hallucinate less than earlier versions, questions remain about the consistency of their outputs.

Maxim Lott’s analysis of the o1 model further demonstrates its advanced reasoning capabilities, estimating its IQ to be around 120. The model performed notably well on the Norway Mensa IQ test, correctly answering 25 out of 35 questions.

OpenAI has already strengthened protections to mitigate vulnerabilities and emphasizes the need for ongoing research to fully understand the models' behavior. Read more.

Users share initial reactions to OpenAI's o1-preview

OpenAI’s new AI model, o1-preview, also known as "Strawberry," has elicited a range of reactions from experts since its release. Gary Marcus finds the model impressive but criticizes the lack of transparency around its operational details and benchmarks, and is skeptical of the claim that extended processing time improves results.

Wharton professor Ethan Mollick, who had early access, describes the model as "amazing but limited," noting strong performance in complex problem-solving but weaker results in tasks such as writing. Former OpenAI contributor and early-access user Andrew Mayne suggests prompting the model as you would brief a smart friend, and advises using o1-mini for step-by-step tasks.

Developer Simon Willison raises concerns about the model’s "reasoning tokens," which are not visible in API responses but count as output tokens, arguing that this lack of transparency hinders interpretability and progress in AI development.
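
Willison's point is easy to see in the numbers: the hidden reasoning surfaces only in the usage accounting. Below is a minimal sketch, assuming the openai Python SDK (v1.x) and the completion_tokens_details usage field OpenAI described at o1's launch; the response carries no reasoning text, yet that reasoning is billed as output.

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    resp = client.chat.completions.create(
        model="o1-preview",
        messages=[{"role": "user", "content": "How many primes are below 50?"}],
    )

    # The visible answer comes back as usual...
    print(resp.choices[0].message.content)

    # ...but the chain of thought never does: it appears only in the usage
    # object, billed together with the visible output tokens.
    usage = resp.usage
    print(usage.completion_tokens)  # visible answer plus hidden reasoning
    print(usage.completion_tokens_details.reasoning_tokens)  # hidden "thinking" tokens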

The o1-preview excels at complex reasoning tasks but falters at simpler ones, such as accurate letter counting and basic data retrieval. Despite the model’s extended "thinking" period, it continues to struggle with these straightforward queries; a toy illustration follows. Read more.
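
For context, the letter-counting task is trivial in code. A toy Python sketch, using the widely shared "strawberry" test case:

    # The question that famously stumps the model nicknamed "Strawberry":
    # how many times does the letter "r" appear in "strawberry"?
    word = "strawberry"
    print(f'"{word}" contains {word.count("r")} occurrences of "r"')  # prints 3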

Sam Altman told OpenAI staff the company’s non-profit corporate structure will change next year

In a recent meeting, Sam Altman revealed that OpenAI will overhaul its complex non-profit corporate structure next year. Presently, OpenAI operates with a multi-layered setup: a non-profit entity governs a for-profit arm, which controls another for-profit entity attracting significant investments from companies like Microsoft. Altman acknowledged that this structure has become cumbersome and announced plans to transition to a more conventional for-profit model.

While specific details of the new structure were not disclosed, Altman assured that the non-profit component, central to the company’s mission, will remain. The restructuring is driven by the need for operational clarity and investor alignment, especially as OpenAI prepares to raise additional funds and streamline its framework to better support its commercial activities. Read more.

Etcetera: Stories you may have missed

5 new AI-powered tools from around the web

Latest AI Research Papers

arXiv is a free online library where researchers share pre-publication papers.

Thank you for reading today’s edition.

Your feedback is valuable. Respond to this email and tell us how you think we could add more value to this newsletter.

Interested in reaching smart readers like you? To become an AI Breakfast sponsor, reply to this email!