OpenAI's o1 Released
Good morning. It’s Friday, September 13th.
Did you know: On this day in 2003, Steam officially launched.
In today’s email:
OpenAI o1
OpenAI’s Fundraiser
Suno Covers
Adobe Firefly Video
3 New AI Tools
Latest AI Research Papers
You read. We listen. Let us know what you think by replying to this email.
In Partnership with GROWTH SCHOOL
Still struggling to achieve work-life balance and manage your time efficiently?
Join this 3 hour Intensive Workshop on AI & ChatGPT tools (usually $399) but FREE for first 100 readers.
Save your free spot here (seats are filling fast!) ⏰
An AI-powered professional will earn 10x more. 💰 An AI-powered founder will build and scale their company 10x faster. 🚀 An AI-first company will grow 50x more! 📊
Want to be one of these people and work smarter?
Free up 3 hours of your time to learn AI strategies and hacks that fewer than 1% of people know! Hurry! Click here to register (FREE for the first 100 people only)
Today’s trending AI news stories
OpenAI's new 'o1' model thinks longer to give smarter answers
OpenAI’s latest release, o1, takes a new approach to AI reasoning by extending its thought process before answering. Unlike its predecessors, which concentrated compute in pre-training, o1 invests in prolonged inference at answer time, sharpening its logical reasoning. It doesn't outperform GPT-4o across the board, but o1 shines in tasks that require deep, multi-step reasoning.
Accompanying this launch are o1-preview and o1-mini. The former is an early version of the full model, released so users can begin refining use cases, while o1-mini, a smaller, cost-effective variant, delivers nearly comparable performance on STEM challenges. Both models are now available to ChatGPT Plus and Team users, with a broader rollout expected. Usage is currently capped at 30 messages per week on o1-preview and 50 per week on o1-mini.
Looking ahead, OpenAI anticipates that o1 models will be capable of extended reasoning times, ranging from seconds to potentially weeks, which could lead to advancements in fields like drug discovery and theoretical mathematics. Read more.
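For readers who want to try the new models programmatically, the o1-preview and o1-mini model names are also exposed through OpenAI's Chat Completions API. A minimal sketch using the official `openai` Python package (the prompt and helper function here are illustrative, not from OpenAI's docs); it only sends a request if an `OPENAI_API_KEY` is configured:

```python
import os

# Model identifiers for the new reasoning models, per OpenAI's launch announcement.
O1_MODELS = ["o1-preview", "o1-mini"]

def build_request(prompt: str, model: str = "o1-preview") -> dict:
    """Assemble a Chat Completions payload for an o1 model.

    Note: at launch, the o1 models reportedly did not support system
    messages or custom sampling parameters, so the payload stays minimal.
    """
    if model not in O1_MODELS:
        raise ValueError(f"unknown o1 model: {model}")
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

payload = build_request("Prove that the square root of 2 is irrational.")

# Only call the API if a key is present (requires `pip install openai`).
if os.environ.get("OPENAI_API_KEY"):
    from openai import OpenAI

    client = OpenAI()
    response = client.chat.completions.create(**payload)
    print(response.choices[0].message.content)
```

Because o1 "thinks" before responding, expect noticeably longer latency per request than with GPT-4o, especially on reasoning-heavy prompts.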
We're releasing a preview of OpenAI o1—a new series of AI models designed to spend more time thinking before they respond.
These models can reason through complex tasks and solve harder problems than previous models in science, coding, and math.
— OpenAI (@OpenAI)
5:09 PM • Sep 12, 2024
In our exclusive one-word interview with OpenAI CEO Sam Altman, he denied the new model’s AGI status.
Related Story: How to prompt o1
OpenAI reportedly seeking $6.5B investment at $150B valuation
OpenAI is reportedly courting $6.5 billion in new funding at a staggering $150 billion valuation. Thrive Capital is expected to lead the round with a $1 billion investment, while Microsoft, which has already invested $13 billion, may also participate. The funding would help support the newly released o1 model, whose extended-inference design demands more hardware and will likely raise operational costs. The capital is also earmarked for AI infrastructure, as indicated in an internal memo about acquiring more compute resources.
Alongside the $6.5 billion funding round, OpenAI is seeking a $5 billion revolving credit facility, a move often seen before a company goes public. While the company’s complex corporate structure, which combines a nonprofit and for-profit arm, might complicate an IPO, sources suggest OpenAI may consider restructuring to allow for more investor returns, potentially easing a path to the stock market. Read more.
Suno releases new "Covers" feature to reimagine music you love
Reimagine the music you love with Covers! Covers can transform anything, from a simple voice recording to a fully-produced track, into an entirely new style all while preserving the original melody that’s uniquely yours. Our newest feature, now available in early-access beta,… x.com/i/web/status/1…
— Suno (@suno_ai_)
8:44 PM • Sep 12, 2024
Suno's new feature, Covers, now in early-access beta, allows users to reimagine their music by transforming it into different styles while maintaining the original melody. This tool supports various audio inputs, such as voice recordings and instrumentals, enabling users to experiment with new genres and add lyrics to instrumental tracks.
To create a cover, users can select a song from the Library or Create page, choose "Cover Song," and pick a new music style. The feature will automatically adapt the original lyrics to fit the selected style, though users can modify the lyrics as desired. This feature is available to Pro/Premier subscribers with an initial allocation of 100 free covers. Suno invites feedback during this beta phase to enhance the tool’s performance. Read more.
Adobe announces Firefly Video Model AI video tool
Adobe is launching Firefly Video Model, an AI-powered video editing tool, with a limited beta version due later this year. This tool, part of Adobe's Firefly suite, marks the company's first step into AI-driven video editing. It allows users to generate five-second video clips from text or image prompts, with capabilities for custom camera angles, pans, and zoom effects. Adobe claims the tool offers superior prompt accuracy and performance compared to competitors like Runway and Pika Labs.
The Firefly Video Model will be trained exclusively on public and licensed content, avoiding Adobe customer data. Alongside this, Adobe will introduce Generative Extend in Premiere Pro, a feature that extends clips by generating two-second inserts. Enthusiasts can join the waiting list for beta access. Read more.
Etcetera: Stories you may have missed
3 new AI-powered tools from around the web
Latest AI Research Papers
arXiv is a free online library where researchers share pre-publication papers.
Thank you for reading today’s edition.
Your feedback is valuable. Respond to this email and tell us how you think we could add more value to this newsletter.
Interested in reaching smart readers like you? To become an AI Breakfast sponsor, reply to this email.