Could Europe Lose AI Access?

Research shows no AI language models comply with the EU AI Act

Good morning. It’s Wednesday, June 21st.

Did you know: On this day in 2010, iOS 4 was released. It was the first version of Apple's mobile operating system to use the name "iOS" instead of "iPhone OS."

In today’s email:

  • OpenAI's Dual Stance on AI Regulation (Featured Story)

  • Stanford research shows no large AI language models fully comply with the EU AI Act (Subject Line Story)

  • DeepMind's AI project "Bigger, Better, Faster" learns 26 Atari games in two hours

  • Utilizing AI to locate metals critical for electric vehicles

  • Bitcoin community introduces "Spirit of Satoshi," an AI aimed at enhancing access to Bitcoin knowledge

  • University of Cambridge and Oxford warn of "model collapse" risk in AI models trained on AI-generated data

  • 5 New AI Tools

  • Latest AI Research Papers

You read. We listen. Let us know what you think of this edition by replying to this email, or DM us on Twitter.

Today’s edition is brought to you by:

Scribe is your GPT-4-powered process documentation platform that automatically creates SOPs, help centers, new user guides and process overviews for any business process.

Scribe AI auto-generates step-by-step guides complete with screenshots and text by capturing your screen while you click and type.

Using your guides, Scribe AI can create full process documentation (including headings, subheadings, and detailed text) with your guides automatically embedded. No more staring at a blank document thinking, "OK, I have to teach someone how to do this. Where do I start? What are all the steps?"

Today’s trending AI news stories

OpenAI's Dual Stance on AI Regulation: A Tale of Lobbying for More Lenient Rules

OpenAI, arguably the leading player in the field of AI, has emerged at the forefront of discussions regarding AI regulation.

At first glance, it seems to be in line with the growing consensus among tech companies that regulation of artificial intelligence is both necessary and inevitable. The company's CEO, Sam Altman, has been notably vocal about the need for regulation in the industry.

But a closer look at OpenAI's lobbying efforts, especially within the European Union, suggests a more nuanced story.

At the heart of OpenAI's lobbying campaign in the EU is a quest to revise proposed AI regulations that could significantly impact its operations. The company's primary focus appears to be advocating for a more lenient categorization of AI systems like GPT-3, which the proposed regulations would treat as "high risk."

This intriguing duality is evident in a document issued by OpenAI titled "OpenAI's White Paper on the European Union's Artificial Intelligence Act". The paper is not so much a call for universal regulatory measures as it is a plea for specific adjustments to the proposed AI Act, particularly the definitions around what constitutes a "high risk" AI system.

Under the current classification proposed by the European Commission, "high risk" systems are those that present potential threats to health, safety, fundamental rights, or the environment. Such systems would be subject to legally mandated human oversight and transparency requirements. In response, OpenAI contends that while its systems, such as GPT-3, are not inherently high risk, they could be deployed in high-risk use cases. Consequently, OpenAI proposes that regulation should primarily target the companies that employ AI models in potentially harmful ways, rather than the organizations that develop those models.

OpenAI's stance isn’t unique within the tech industry. Other tech giants, including Microsoft and Google, have also lobbied for a relaxation of the EU's AI Act regulations. This alignment suggests a shared industry perspective that current proposed measures might stifle innovation and limit the practical application of AI technologies.

What's noteworthy is that OpenAI's lobbying efforts appear to have borne fruit.

The contentious sections that OpenAI objected to were omitted from the final draft of the AI Act. This successful lobbying could provide some insight into why Altman, who had previously threatened to withdraw OpenAI's operations from the EU due to the AI Act, has now rescinded that threat.

The EU AI Act has the potential to shape how governments around the world regulate AI, and to influence the widespread adoption of the most significant technological leap the world has seen in decades.

Quick News

No Large Language Model Complies Fully with EU AI Act, Stanford Research Reveals: A Stanford University study found that none of the ten foundation AI language models it investigated fully complies with the EU AI Act. The research assessed the models across twelve categories, including transparency, data handling, risk mitigation, and energy usage. Open-source models scored relatively better on compliance than commercial models.

DeepMind’s most recent AI project, "Bigger, Better, Faster," learns 26 Atari games in a mere two hours, matching the learning speed of a human: This accomplishment was made possible by a reinforcement learning algorithm, developed jointly by DeepMind, Mila, and the Université de Montréal, that requires far fewer computational resources than its predecessors. BBF is designed to learn from positive and negative outcomes, bypassing the need to construct a detailed model of the game. While the algorithm doesn't outshine human abilities in every game, it can compete with systems trained on 500 times more data.
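BBF itself is a sophisticated value-based deep RL agent, but the core idea of learning purely from rewards, without building a model of the game, can be sketched with a minimal tabular Q-learning example. The toy corridor environment and hyperparameters below are hypothetical and are not part of BBF:

```python
import random

# Toy illustration of model-free, value-based RL: the agent never builds a
# model of the environment; it only adjusts action values up or down based
# on the rewards it receives. This 5-state corridor is purely illustrative.

N_STATES, ACTIONS = 5, [-1, +1]          # move left / move right
ALPHA, GAMMA, EPSILON = 0.1, 0.99, 0.1   # learning rate, discount, exploration
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for episode in range(500):
    state = 0
    while state != N_STATES - 1:         # rightmost state is the goal
        # Epsilon-greedy action selection.
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        next_state = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if next_state == N_STATES - 1 else -0.01
        # Q-learning update: positive outcomes raise the value estimate,
        # negative outcomes lower it, with no model of the game dynamics.
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = next_state

# Learned greedy policy: every non-goal state should point right (+1).
print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)})
```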

Bill Gates-Backed AI-Powered Mining Firm KoBold Metals Becomes $1 Billion AI Unicorn: KoBold Metals, a mining firm that uses AI to locate metals vital for electric vehicles, has become a billion-dollar company after raising $200 million in a funding round. KoBold uses machine learning to identify deposits of metals such as lithium, nickel, cobalt, and copper, which are increasingly in demand due to the growing EV industry. The company predicts a $12 trillion gap between supply and demand for these metals by 2050.

Bitcoin-Oriented AI "Spirit Of Satoshi" Launched: The Bitcoin community has unveiled the “Spirit of Satoshi,” the first AI focused solely on Bitcoin, with the goal of improving accessibility to Bitcoin knowledge. The AI is based on fundamental principles, trained on the most valuable Bitcoin data, and fine-tuned to preserve communal intelligence.

Airbnb CEO Foresees AI Empowering 'Millions of Startups' Instead of Job Loss: Brian Chesky, the co-founder and CEO of Airbnb, predicts a future where AI creates opportunities for "millions of startups" rather than causing job loss. Chesky posits that while AI may result in fewer employment opportunities, it will empower individuals to establish their own businesses, ultimately bolstering the job market.

AI Models Risk 'Model Collapse' from Relying on AI-Generated Data: According to researchers at the University of Cambridge and the University of Oxford, AI models trained on AI-generated data run into a problem known as "model collapse." This recursive training process gradually causes models to lose track of the true underlying data distribution, which may lead them to misrepresent reality.
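The researchers' experiments involve language models, but the failure mode can be illustrated with a toy sketch: each "generation" of a model is fit only to samples produced by the previous one, and estimation noise compounds until the fit drifts away from the original data. The Gaussian setup and sample sizes below are purely illustrative:

```python
import numpy as np

# Toy illustration of recursive training: each new "model" is fit only to
# samples generated by the previous model, never to the original data.
rng = np.random.default_rng(0)

true_mu, true_sigma = 0.0, 1.0
data = rng.normal(true_mu, true_sigma, size=200)   # original human-generated data
mu, sigma = data.mean(), data.std()

for generation in range(1, 11):
    synthetic = rng.normal(mu, sigma, size=200)    # "AI-generated" training data
    mu, sigma = synthetic.mean(), synthetic.std()  # next model fit to it
    print(f"generation {generation:2d}: mu={mu:+.3f}  sigma={sigma:.3f}")

# Estimation noise compounds across generations, so the fitted parameters
# drift away from the true distribution and the tails of the original data
# are progressively forgotten.
```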

🎧 Did you know AI Breakfast has a podcast read by a human? Join AI Breakfast team member Luke (an actual AI researcher!) as he breaks down the week’s AI news, tools, and research: Listen here

5 new AI-powered tools from around the web

Featured Tool:

CF Spark allows you to create Stable Diffusion AI-generated images from a mobile app on your iPhone. Simply type your text and watch as CF Spark generates a unique, high-quality image based on your input.

Olvy uses GPT-4 to analyze customer feedback and generate actionable insights with an AI Copilot for Product Managers. Olvy integrates with Slack, Zendesk, Intercom, and HubSpot. #1 Product of the Day on Product Hunt.

Leetresume uses AI to rewrite your resume. Users upload a PDF document and Leetresume analyzes it, makes suggestions, and provides a rewrite and redesign of your CV. Free to try.

Fine tune your own AI model with Dioptra, a solution for prompt evaluation, model improvement, and performance tracking. Detect hallucinations, fine-tune models with precise data selection, and monitor performance across versions.

Vimeo has introduced a suite of AI-powered tools to aid video creation. The tools include a script generator that uses generative AI, a teleprompter with customizable display options, and a text-based video editor that removes filler words and pauses. Aimed at entry-level video creators, the suite costs $20/mo and will be available in July.

arXiv is a free online library where scientists share their research papers before they are published. Here are the top AI papers for today.

The HomeRobot OVMM benchmark addresses the challenge of open-vocabulary mobile manipulation, where a robot navigates homes and manipulates a wide range of objects to complete everyday tasks. The benchmark includes a simulation component with diverse environments and object sets, as well as a real-world component using the low-cost Hello Robot Stretch. HomeRobot enables sim-to-real transfer of reinforcement learning and heuristic approaches, and supports manipulation learning, navigation, and object-goal navigation in both simulated and physical environments. The benchmark fills a gap in the field and promotes reproducible research on household robotic assistants.

This paper provides a comprehensive evaluation of the trustworthiness of the GPT-3.5 and GPT-4 language models. The evaluation covers eight perspectives: toxicity, stereotype bias, adversarial robustness, out-of-distribution robustness, robustness to adversarial demonstrations, privacy, machine ethics, and fairness. The findings reveal that while GPT-3.5 and GPT-4 show reduced toxicity and bias in ordinary generation, they can still produce toxic or biased content under adversarial conditions, with GPT-4 exhibiting higher toxicity and greater vulnerability to misleading prompts.

MotionGPT is a framework for text-to-motion generation that produces human motions under multiple control conditions. By fine-tuning LLMs with multimodal control signals, such as text and single-frame poses, MotionGPT generates consecutive human motions with rich patterns. This unified model expands the capabilities of motion generation by allowing precise control and flexibility over motion sequences. MotionGPT is the first method to support such multimodal controls, showcasing its potential in real-world applications.

MAGICBRUSH is a large-scale manually annotated dataset designed to facilitate instruction-guided real image editing. It addresses the limitation of existing methods by providing high-quality data that covers diverse scenarios, including single-turn, multi-turn, mask-provided, and mask-free editing. The dataset comprises over 10,000 manually annotated triples consisting of source images, instructions, and target images. By fine-tuning the InstructPix2Pix model on MAGICBRUSH, significant improvements in image editing quality are achieved, surpassing other baseline methods. The dataset highlights the challenges and the gap between current methods and real-world editing requirements, emphasizing the need for advanced model development.

This study proposes a framework for evaluating superhuman machine learning models through consistency checks. The challenge lies in assessing the correctness of these models’ decisions, as humans cannot serve as reliable proxies for ground truth. By applying logical, human-interpretable rules, the researchers demonstrated the ability to uncover mistakes in the decision-making process of superhuman models. The framework was tested on tasks such as evaluating chess positions, forecasting future events, and making legal judgments, revealing logical inconsistencies in the models’ outputs. These findings highlight the need to address model failures and enhance trust in critical decision-making scenarios.
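As a rough sketch of the approach (not the paper's actual test suite), one such check asks a forecasting model for the probability of an event and of its negation, and flags the model if the two don't sum to roughly 1; no ground truth about the event is needed. The forecast function and tolerance below are hypothetical stand-ins:

```python
# Hypothetical consistency check in the spirit of the paper: a forecaster's
# probabilities for an event and its negation should sum to (roughly) 1,
# even when we cannot verify either forecast against ground truth.

def forecast(question: str) -> float:
    """Placeholder model returning P(question is true); swap in a real model."""
    canned = {
        "Will it rain in Paris tomorrow?": 0.35,
        "Will it NOT rain in Paris tomorrow?": 0.75,  # inconsistent on purpose
    }
    return canned[question]

def negation_check(question: str, negated_question: str, tol: float = 0.05) -> bool:
    """Flag a logical inconsistency without knowing the true outcome."""
    total = forecast(question) + forecast(negated_question)
    return abs(total - 1.0) <= tol

if not negation_check("Will it rain in Paris tomorrow?",
                      "Will it NOT rain in Paris tomorrow?"):
    print("Inconsistent: probabilities of an event and its negation do not sum to 1.")
```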

Thank you for reading today’s edition.

Your feedback is valuable.


Respond to this email and tell us how you think we could add more value to this newsletter.

Attending Ai4 this year? We will be!