Could A New Law Require Disclosure of AI Text?
Plus, Intel's 1 Trillion Parameter Model
Good morning. It’s Monday, June 5th.
Proposed US legislation could require clear disclaimers on content generated by AI, and Intel's new 1 trillion parameter model debuts with chips that rival NVIDIA.
In today’s email:
The AI Disclosure Act?
Intel’s New 1 Trillion Parameter Model
Guidde: Create How-To Guides with AI
News in Brief
AI Tools for Interviews and Charts
Latest AI Research
You read. We listen. Share your feedback by replying to this email, or DM us on Twitter.
🎧 Did you know AI Breakfast has a podcast read by a human? Join AI Breakfast team member Luke (an actual AI researcher!) as he breaks down the week’s AI news, tools, and research: Listen here
A briefing of the latest AI news stories
Disclosure of AI Generated Content May Become Required by US Law
Congressman Ritchie Torres (D-N.Y.) plans to introduce the AI Disclosure Act to address the risks of generative artificial intelligence.
The proposed legislation mandates that all outputs generated by AI carry a disclosure statement stating, "Disclaimer: this output has been generated by artificial intelligence." The Federal Trade Commission (FTC) would be entrusted with the responsibility of enforcing this requirement.
According to a statement by Torres, AI represents the most revolutionary technology of our time, capable of both tremendous advancements and potential harm as a "weapon of disinformation, dislocation, and destruction."
Torres emphasized that disclosure should be the starting point. Irrespective of the type of output generated—whether it be text, images, video, or audio—generative AI systems should be “obligated to disclose their AI-driven nature.”
While acknowledging that disclosure alone may not be a panacea, Torres believes it is a commonsense first step on the long road toward comprehensive regulation.
The bill does not define what counts as “AI generated.” If an article was written by AI and later edited by a human, for instance, it is unclear whether disclosure would be required.
The Federal Trade Commission would be responsible for enforcing the regulation, which raises the question of how violations would be policed.
Several “AI content detectors,” such as Writer and GPTZero, commonly generate false positives in their analysis.
There is currently no concrete way to prove beyond a reasonable doubt that something was generated by a large language model without access to the chat history of the server that produced the result.
A spokesperson from Torres' office stated that they envision the bill's provisions serving as a foundational starting point, with the aim of incorporating them into a broader legislative package. They also expressed their intention to defer to the FTC for guidance on the most effective enforcement mechanisms if the bill is enacted into law.
Read More: The Hill
Sponsored Post
Capture, generate, and share stunning video guides effortlessly.
With Guidde AI's GPT-powered tool, simply click capture on the free browser extension, and watch as step-by-step video guides come to life with visuals, voiceover, and calls to action.
This is a top-tier tool that I highly recommend. Very easy to use.
Thank you for supporting our sponsors!
Intel’s 1 Trillion Parameter AI Model
Intel has announced the development of the Aurora GenAI project, an AI model with a staggering 1 trillion parameters, at the ISC23 event on high-performance computing.
The model will be fine-tuned for scientific research and drug development, and could potentially accelerate the identification of biological processes related to diseases like cancer and offer valuable insights for drug design.
Achieving beyond-human-level performance in AI models requires substantial computational power, with hardware optimization currently one of the main limiting factors. OpenAI's current supercomputer boasts approximately 10,000 GPUs, while Intel's Aurora supercomputer will be equipped with a remarkable 63,744 GPUs.
As a leading chip manufacturer, Intel will be another major AI player alongside NVIDIA, which currently holds a 95%+ market share in generative AI GPUs, according to the BBC.
The demand for high performance GPUs recently sent NVIDIA’s market cap to just shy of $1 trillion, ranking them as the sixth most valuable company in the world, surpassing Tesla, Berkshire Hathaway, and Meta Platforms.
Intel may become a serious contender in this market: Intel's Data Center GPU Max Series 1550 exhibited an impressive 30% average speed improvement over NVIDIA's H100 in scientific workloads, which could tighten the race to build the most capable chips on the market.
This may prompt NVIDIA to introduce new cards earlier than anticipated to maintain its industry-leading position.
Read more: Intel’s Announcement
AI News In Brief
Huawei is set to launch its own AI text reply software, called Pangu Chat, to rival ChatGPT. The software, based on Huawei’s large-scale learning model, is expected to be unveiled at the Huawei Developer Conference in July 2023. While details are limited, it is reported that Pangu Chat will be available for business purposes and sold to government and enterprise customers.
Chirper.ai is a new parody social media platform exclusively designed for AI entities. It allows AI characters called “Chirpers” to interact, form relationships, and discuss real-life events autonomously. Chirpers can also police the content of other Chirpers, showcasing their ability to recognize and regulate violations. Chirper.ai serves as a testing ground for AI algorithms and provides insights into human-like interactions.
In this episode of the Lex Fridman podcast, Chris Lattner, a renowned software and hardware engineer with experience at Apple, Tesla, Google, SiFive, and Modular AI, shares his insights on the future of programming and the development of the programming language Mojo, which is up to 30,000x faster than Python and optimized for AI.
ETF providers rush to meet the soaring demand for investment products related to AI. The Global X Robotics and Artificial Intelligence ETF (ROBO) has amassed over $1 billion in assets, while the Amplify Transformational Data Sharing ETF (BLOK) has exceeded $500 million. The growing popularity of AI, the rise of ETFs, and limited investment options are driving the expansion of the AI ETF market, offering investors new opportunities, but of course, no guarantees.
Logically, an AI firm has been awarded government contracts worth millions of pounds in the UK to monitor and flag false and misleading information on social media. The company analyzes media sources and public posts on major social media platforms using AI to identify potentially “problematic” content. Logically’s “fact-checking team” works in partnership with Facebook, which limits the reach of posts flagged by Logically as false.
New AI-powered tools from around the web
Aspect is an AI-powered service for interview note-taking. It captures and stores key points, ensuring no valuable details are overlooked. It also lets you collaborate and share notes with team members and stakeholders, with real-time feedback.
Graphy is an AI-powered data analysis tool that generates charts, graphs, and AI-powered insights. It enables users to analyze trends and patterns in datasets, customize insights based on specific metrics, and make data-driven decisions.
arXiv is a free online library where scientists share their research papers before they are published. Here are the top AI papers for today.
In this paper, the authors challenge the prevailing belief that large language models require curated, high-quality data for optimal performance. They show that properly filtered and deduplicated web data alone can produce powerful models, even outperforming models trained on curated corpora. The authors introduce REFINEDWEB, a high-quality, web-only English pretraining dataset with five trillion tokens. They also release a 600-billion-token extract of REFINEDWEB and language models trained on it. The findings suggest that web data can be a valuable resource for training LLMs, expanding data availability for scaling advancements in natural language processing.
Researchers have developed an advanced image segmentation model called HQ-SAM that addresses a limitation of the previous model, SAM, which was good at segmenting objects in images but sometimes struggled to accurately capture objects with complex shapes. HQ-SAM improves the quality of the segmentation results without sacrificing the model’s flexibility and ability to work with new objects. The authors achieved this by adding a new component to the model that predicts high-quality masks and by integrating different features from the model to capture finer details. They also created a new dataset of detailed masks to train the model.
GenMM is a generative model that can synthesize diverse and high-quality motions using a small set of example sequences. It is able to generate realistic movements in animations, even with just a few examples as input. Unlike traditional deep learning methods, GenMM does not require lengthy training and avoids visual artifacts. It leverages bidirectional similarity as a generative cost function and operates in a multi-stage framework to progressively refine the synthesized motion.
PolyDiffuse is a new algorithm that uses advanced AI techniques to turn visual sensor data into polygonal shapes. The algorithm uses a special method called Guided Set Diffusion to make sure the reconstructed shapes are accurate and match the sensor data. PolyDiffuse has practical applications in architecture, autonomous driving, urban planning, construction, and computer graphics, enabling accurate floorplan generation, high-definition map reconstruction, and realistic 3D model creation from visual data.
OBJECTFOLDER BENCHMARK, a suite of 10 tasks for multi-sensory object-centric learning, is introduced along with the OBJECTFOLDER REAL dataset, which encompasses multi-sensory measurements of 100 real-world household objects. The results of systematic benchmarking highlight the significance of multi-sensory perception for object-centric learning tasks, uncovering unique contributions of vision, audio, and touch.
Transformer models have been making waves in language processing tasks, but researchers are now focused on making them even better. The authors introduce Brainformers, a new and advanced design that combines different layers to enhance efficiency and quality. By incorporating techniques like sparsely gated feed-forward layers and attention layers, Brainformers outperform existing models: they train faster, produce higher-quality results, and are more efficient. It’s like giving transformers a turbo boost for better language understanding and generation.
3x the information, for less than $2/week
Stay informed, stay ahead: Your premium AI resource.
Need an AI consultation? Premium Members get a 30-minute consult to learn how to integrate AI into their unique businesses or personal workflow.
Email schedule:
Monday: All subscribers
Wednesday: Business Premium
Friday: Business Premium
Business Premium members also receive:
-Discounts on industry conferences like Ai4
-Discounts on AI tools for business (Like Jasper)
-Quarterly AI State of the Industry report
-Free digital download of our upcoming book Decoding AI: A Non-technical Explanation of Artificial Intelligence
Thank you for reading today’s edition.
Your feedback is valuable.
Respond to this email and tell us how you think we could add more value to this newsletter!
Interested in sponsoring AI Breakfast?
Send inquiries to [email protected]