Microsoft CEO: “We have everything” Regarding OpenAI
Good morning. It’s Friday, March 22nd.
Did you know? On this day in 1993, Intel introduced the first Pentium microprocessor.
In today’s email:
Microsoft CEO: “We Have Everything” Regarding OpenAI
Neuralink Advancements
AI Investments
AI Startups and Product Advancements
Quick Links
5 New AI Tools
Latest AI Research Papers
You read. We listen. Let us know what you think by replying to this email.
Today’s trending AI news stories
Microsoft CEO: If OpenAI disappeared tomorrow “we have everything”
Elon Musk chimed in on our post on X about a striking statement Microsoft CEO Satya Nadella made to his board members during the brief ousting of Sam Altman last November.
The quote was taken from page 9 of the Musk v. Altman lawsuit over the for-profit nature of the AI company that Musk helped found in 2015.
See the quote below:
Microsoft CEO Satya Nadella to board members:
"If OpenAl disappeared tomorrow, we have all the IP rights and all the capability. We have the people, we have the compute, we have the data, we have everything. We are below them, above them, around them."
Wow. twitter.com/i/web/status/1…
— AI Breakfast (@AiBreakfast)
3:20 PM • Mar 21, 2024
Neuralink Advancements
> Neuralink Implant Lets Man Play Chess With His Thoughts: In a major milestone, Neuralink debuted its brain-computer interface (BCI) with its first human test subject, Noland Arbaugh, a 29-year-old paralyzed man. During a livestreamed event, Arbaugh demonstrated his ability to play online chess and Civilization using the BCI, which translates the brain signals associated with intended movement into cursor control. The technology, pioneered by Elon Musk's company, offers groundbreaking potential for people with disabilities. Despite past criticism, Neuralink, alongside competitors like Synchron, continues to shape the future of BCIs. While the device still awaits full FDA approval, this progress signals significant hope for individuals living with paralysis. Read more.
> Neuralink Claims Brain Chips Restore Sight in Monkeys: Elon Musk claims Neuralink's brain chips have restored sight in blind monkeys and allowed paralyzed individuals to control computers with their minds. He announced "Blindsight," an experimental vision-restoration product that he says could eventually exceed normal human sight. The first human recipient, Noland Arbaugh, paralyzed from the shoulders down, reportedly gained the ability to control a computer and play video games after implantation. This follows Neuralink's "Telepathy" product for brain-computer interaction. Read more.
Investments in AI
> Beyond the Dollars: Decoding Microsoft's $650 Million Inflection Move: Microsoft's $650 million deal with AI startup Inflection raises questions about its true purpose. Officially focused on integrating Inflection's AI models into Microsoft Azure, the deal also involves hiring most of Inflection's team, including DeepMind co-founder Mustafa Suleyman, who will lead Microsoft's consumer AI division. Despite the hefty price tag, Inflection will remain independent, operating as an "AI studio" that helps other companies build AI models. However, Inflection's flagship model, Inflection 2.5, lags behind rival models from OpenAI and Anthropic. This strategic talent acquisition lets Microsoft bolster its AI expertise while potentially avoiding the regulatory hurdles of a full-blown acquisition. Read more.
> AI Startup Cohere Seeks $5 Billion Valuation in Ambitious Fundraising Round: Cohere is aiming to raise $500 million despite modest annualized revenue of $22 million, up from $13 million in December. Founded by ex-Google researchers, Cohere focuses on building AI models for enterprises and plans to expand beyond its current partnership with Oracle. The valuation jump reflects investor confidence in the future of AI adoption, even with moderate current revenue. The race for funding in the AI space, where Cohere competes with giants like OpenAI, highlights the pressure on these "foundation model" companies to secure resources for development. Read more.
> US Invests $8.5 Billion in Intel to Boost Domestic Chip Manufacturing: The White House proposes up to $8.5 billion to fund Intel's domestic chip manufacturing, aligning with the CHIPS Act to bolster U.S. semiconductor production. Intel's $100 billion investment over five years aims to create over 30,000 jobs, with the funding earmarked for projects in Arizona, New Mexico, Ohio, and Oregon. The proposal includes $50 million for semiconductor and construction workforce development. This initiative underscores efforts to enhance U.S. semiconductor capabilities, promote innovation, and revitalize domestic manufacturing. Read more.
> Astera Labs' IPO Success Gives Amazon an AI Boost: Amazon got a shot in the arm from Astera Labs' stellar IPO. Astera's stock skyrocketed 72% on the Nasdaq, valuing the data center chipmaker at nearly $9.5 billion. This is good news for Amazon, a major customer of Astera's connectivity chips for AI and cloud infrastructure. Amazon holds warrants to buy up to 2.3 million Astera shares (around $144 million at the closing price), but to fully cash in, it will need to purchase $650 million worth of Astera's products. Astera's IPO success reflects strong investor appetite for AI infrastructure, mirroring the anticipated Reddit IPO. Read more.
> AI Agent Disrupts Global Hiring with $27M Seed Round: Borderless AI, armed with $27 million in seed funding, unveils its flagship AI agent, Alberni, to streamline global hiring. Unlike traditional chatbots, Alberni autonomously handles tasks like drafting employment agreements and managing paperwork. Strategic partnerships enhance Alberni’s capabilities, ensuring compliance. Susquehanna and Aglaé Ventures lead the funding round, enabling Borderless to expand geographically. Targeting mid-sized tech firms initially, Borderless aims to cater to larger enterprises seeking automation solutions. Read more.
AI Startups & Product Developments
> Stable Diffusion Developers Exit as Stability AI Struggles: Key researchers behind Stable Diffusion, the influential text-to-image generation model, have departed from Stability AI, deepening the company's challenges. Robin Rombach, Andreas Blattmann, and Dominik Lorenz, instrumental in developing Stable Diffusion, left amid a wave of executive exits and financial struggles. Stability AI, once valued at $1 billion, faces cash-flow issues despite raising significant funds and selling off assets like Clipdrop. The departure of Rombach and his team follows a series of high-profile exits, including vice presidents and research leads. The company also faces legal battles over copyright infringement and was accused by rival Midjourney of botnet-like data scraping that disrupted Midjourney's service. CEO Emad Mostaque, criticized for a lack of financial transparency, remains at the helm amid investor pressure. Read more.
> Image to Talking Video: Alibaba's EMO is a Game-Changer: Alibaba Group introduces EMO, an AI model that turns a single image and an audio track into realistic lip-synced talking-head video. Developed by Alibaba researchers, EMO employs a novel approach based on Stable Diffusion, bypassing the need for complex 3D models or facial markers. Read more.
> Sakana AI's evolutionary algorithm creates capable AI models by merging existing ones: Sakana AI, a Japanese startup founded by former Google AI researchers Llion Jones and David Ha, is pioneering a new method for creating AI models. Their approach, called "Evolutionary Model Merge," draws inspiration from natural selection to combine existing open-source models and automatically generate new ones tailored to specific tasks. The method leverages neuroevolution and collective intelligence, and Sakana AI's recently released Japanese language and vision models have surpassed prior state-of-the-art results on several benchmarks (a rough sketch of the merging idea appears below). Read more.
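To give a flavor of the idea, here is a toy sketch, not Sakana's implementation: it evolves a single interpolation coefficient that merges the weights of two hypothetical "parent" models, keeping whichever candidate scores best on a held-out fitness function. Sakana's actual method searches over far richer merging recipes (parameter space and layer routing) across many models; everything below is synthetic and illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two toy "parent" models: weight vectors of a linear regressor y = x @ w.
w_a = rng.normal(size=8)   # stands in for open-source model A
w_b = rng.normal(size=8)   # stands in for open-source model B

# Synthetic held-out validation data defining the target task.
X_val = rng.normal(size=(200, 8))
y_val = X_val @ (0.3 * w_a + 0.7 * w_b) + 0.05 * rng.normal(size=200)

def fitness(alpha: float) -> float:
    """Negative validation error of the merged model w = alpha*w_a + (1-alpha)*w_b."""
    w = alpha * w_a + (1.0 - alpha) * w_b
    return -np.mean((X_val @ w - y_val) ** 2)

# Simple (1+lambda) evolutionary search over the merge coefficient.
alpha, sigma = 0.5, 0.2
for generation in range(50):
    candidates = np.clip(alpha + sigma * rng.normal(size=16), 0.0, 1.0)  # mutate
    alpha = max(np.append(candidates, alpha), key=fitness)               # select

print(f"evolved merge coefficient alpha = {alpha:.3f}, fitness = {fitness(alpha):.4f}")
```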
Quick Links
Today’s edition is brought to you by:
Our book, Decoding AI: A Non-Technical Explanation of Artificial Intelligence, is on sale for just $2.99 today only!
(with a 100% money-back guarantee)
Decoding AI breaks down the complexities of AI into digestible concepts, walking you through its history, evolution, and real-world applications.
We'll introduce you to the key players in the AI field, as well as explain the underlying algorithms, data, and machine learning concepts that power AI systems. You'll gain a deeper understanding of deep learning, neural networks, and reinforcement learning, and we'll explore various types of AI, from rule-based systems to probabilistic networks and beyond.
Our goal was to make this book an approachable introduction to how AI works.
5 new AI-powered tools from around the web
ZeroTrusted.ai safeguards AI privacy, enabling secure interactions with LLMs. Ensures data integrity and confidentiality through encryption and context-preserving techniques. Launched as SaaS for enhanced digital privacy.
Fotor Video Enhancer is an online tool utilizing AI for effortless video quality enhancement. Supports popular formats, offers targeted adjustments, and features a user-friendly interface.
ThinkAny is an AI search engine employing RAG technology for retrieving and aggregating high-quality content, coupled with intelligent answering features.
Pulse AI offers instant UX analysis for websites and apps, now with image analysis. Tailors recommendations, tracks personas, and optimizes user journeys globally in multiple languages.
Muse Pro is an advanced drawing app for iPhone and iPad integrating real-time AI to augment creativity. Supports Apple Pencil with pressure sensitivity, intuitive controls, and fine-tuned AI collaboration.
Latest AI Research Papers
arXiv is a free online library where researchers share pre-publication papers.
Be-Your-Outpainter presents MOTIA, a diffusion-based approach for video outpainting that leverages source-video patterns and generative priors for high-quality results. MOTIA's input-specific adaptation and pattern-aware outpainting phases ensure inter-frame and intra-frame consistency. Through pseudo-outpainting learning, MOTIA captures essential video patterns and effectively bridges standard generation with outpainting. Strategies like spatial-aware insertion and noise travel further enhance performance. Extensive evaluations confirm MOTIA's superiority, surpassing benchmarks without extensive tuning, though it may struggle with source videos that contain little usable information.
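A minimal sketch of what building "pseudo-outpainting" training pairs during the adaptation phase could look like, under the assumption that the goal is simply to hide a border of frames the model can already see and let it practice filling that border back in. This is an illustration of the concept, not the paper's code; shapes, mask sizes, and names are hypothetical.

```python
import numpy as np

def pseudo_outpaint_pair(frame: np.ndarray, margin_frac: float = 0.25, rng=None):
    """Mask a random left or right margin of a known frame so a model can
    be trained to reconstruct it. Returns (masked_frame, boolean_mask)."""
    rng = rng or np.random.default_rng()
    h, w, _ = frame.shape
    margin = int(w * margin_frac * rng.uniform(0.5, 1.0))
    mask = np.zeros((h, w), dtype=bool)
    if rng.random() < 0.5:
        mask[:, :margin] = True      # hide the left margin
    else:
        mask[:, -margin:] = True     # hide the right margin
    masked = frame.copy()
    masked[mask] = 0.0               # zero out the "unknown" region
    return masked, mask

# Build pseudo-outpainting pairs from a toy 16-frame clip of random pixels.
rng = np.random.default_rng(0)
clip = rng.uniform(size=(16, 128, 128, 3)).astype(np.float32)
pairs = [pseudo_outpaint_pair(frame, rng=rng) for frame in clip]
print(len(pairs), pairs[0][0].shape)
```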
RadSplat introduces a novel method for robust real-time rendering of complex scenes, achieving over 900 frames per second (FPS). While prior approaches like neural fields and Gaussian splatting have limitations in either quality or efficiency, RadSplat combines the strengths of both. By leveraging radiance fields as a prior and supervision signal, RadSplat optimizes point-based scene representations, ensuring improved quality and robust optimization. A novel pruning technique reduces point count while maintaining high quality, leading to smaller and faster scene representations. Additionally, a test-time filtering approach accelerates rendering and enables scalability to larger scenes. RadSplat achieves state-of-the-art synthesis quality, rendering over 3,000 times faster than prior works while maintaining high reconstruction fidelity.
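The pruning idea can be sketched in a few lines if we assume a point's importance is its maximum blending contribution across the training views: points that never contribute much to any rendered pixel can be dropped. The numbers and the contribution model below are toy placeholders, not the paper's actual scoring.

```python
import numpy as np

rng = np.random.default_rng(0)

num_points, num_views = 50_000, 100
# Hypothetical per-point, per-view blending weights in [0, 1]: how much each
# point contributes to rendered pixels in each training view.
contrib = rng.beta(0.3, 5.0, size=(num_points, num_views))

# Importance = maximum contribution of a point across all training views.
importance = contrib.max(axis=1)

# Prune points whose importance never exceeds a threshold.
threshold = 0.1
keep = importance > threshold
print(f"kept {keep.sum()} of {num_points} points ({100 * keep.mean():.1f}%)")
```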
LLAMAFACTORY is a comprehensive framework facilitating efficient fine-tuning of over 100 Large Language Models (LLMs) for various downstream tasks. Addressing the challenge of adapting LLMs to diverse tasks, LLAMAFACTORY integrates state-of-the-art efficient training methods into a unified platform. The framework offers a user-friendly web UI, LLAMABOARD, enabling customization without coding. By minimizing dependencies between models, datasets, and training methods, LLAMAFACTORY streamlines the fine-tuning process. It supports techniques such as freeze-tuning, gradient low-rank projection, flash attention, and mixed precision training. The framework's modular architecture allows for flexible scaling to different models and datasets. Empirical validation demonstrates its efficiency and effectiveness in language modeling and text generation tasks.
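LLAMAFACTORY's own configuration format isn't reproduced here; as a stand-in, the sketch below uses the Hugging Face Transformers and PEFT libraries to show the kind of parameter-efficient fine-tuning (LoRA adapters plus reduced-precision weights) that such frameworks wrap behind a unified interface. The model name and hyperparameters are placeholders.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

# Placeholder base model; swap in any causal LM you have access to.
model_name = "meta-llama/Llama-2-7b-hf"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,   # load weights in reduced precision
)

# LoRA: train small low-rank adapters instead of the full weight matrices.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections (assumed names)
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of all weights
```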
The paper introduces a novel model-stealing attack capable of extracting precise information from black-box production language models such as OpenAI's ChatGPT or Google's PaLM-2 via API access. The attack targets the embedding projection layer of transformer models, revealing hidden dimensions previously undisclosed by model providers. By exploiting the low-rank structure of the final layer's projection, the attack efficiently recovers embedding dimensions and projection matrices. Experimental results demonstrate the attack's effectiveness across various models, with near-perfect extraction success rates. The paper discusses attack implications, potential defenses, and responsible disclosure practices. It also addresses challenges posed by different API capabilities, presenting techniques for attacks under various API configurations.
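The core low-rank observation is easy to demonstrate on a toy model: logit vectors are linear images of hidden states, so stacking enough of them yields a matrix whose rank (and singular-value spectrum) reveals the hidden dimension. The sketch below simulates this with random matrices rather than a real API; the actual attack also has to cope with restricted APIs (top-k logprobs, logit bias), which the paper addresses and this toy does not.

```python
import numpy as np

rng = np.random.default_rng(0)

hidden_dim, vocab_size, num_queries = 256, 8_000, 1_024  # real vocabularies are larger

# Secret final projection W (vocab x hidden) of a hypothetical model.
W = rng.normal(size=(vocab_size, hidden_dim))

def query_model(prompt_seed: int) -> np.ndarray:
    """Stand-in for an API call that returns the full logit vector for a prompt."""
    h = rng.normal(size=hidden_dim)   # hidden state induced by this prompt
    return W @ h                      # logits = W h

# Collect logit vectors for many different prompts.
logits = np.stack([query_model(i) for i in range(num_queries)])  # (queries, vocab)

# The logit matrix has rank ~= hidden_dim; singular values beyond it collapse.
singular_values = np.linalg.svd(logits, compute_uv=False)
estimated_hidden_dim = int(np.sum(singular_values > 1e-6 * singular_values[0]))
print("estimated hidden dimension:", estimated_hidden_dim)   # ~256
```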
The paper addresses the inadequacy of current benchmarks for evaluating the visual mathematical reasoning abilities of Multi-modal Large Language Models (MLLMs). The authors introduce MATHVERSE, a comprehensive benchmark comprising 2,612 high-quality math problems with diagrams, meticulously designed to evaluate MLLMs' understanding of visual elements in mathematical reasoning. By categorizing textual content and varying information modalities, MATHVERSE assesses whether MLLMs can interpret diagrams accurately. The authors also propose a Chain-of-Thought (CoT) evaluation strategy that analyzes the step-by-step reasoning process and provides detailed error analysis. Experimental results reveal that existing MLLMs struggle with visual interpretation, highlighting the need for advances in mathematical visual comprehension.
AI Creates Comics
Thank you for reading today’s edition.
Your feedback is valuable. Respond to this email and tell us how you think we could add more value to this newsletter.
Interested in reaching smart readers like you? To become an AI Breakfast sponsor, apply here.