Meta launches own AI code-writing tool: Code Llama
Good morning. It’s Friday, August 25th.
Did you know: AI language models may be running out of data to train on? To address this, researchers are using LLMs themselves to generate training data, known as “synthetic data.” Here’s an interesting post on X showcasing the results.
In today’s email:
AI Hardware and Infrastructure
AI Software and Services
OpenAI News
5 New AI Tools
Latest AI Research Papers
You read. We listen. Let us know what you think of this edition by replying to this email, or DM us on Twitter.
Today’s edition is brought to you by:
Stay Ahead of the Competition with AE
Accelerate your success with AE's elite team of experts!
🚀 Get ahead with swift development of Minimum Viable Products (MVPs).
🚀 Lead the way in innovation with Digital Transformation Initiatives.
🚀 Boost your ROI with tailored AI/ML solutions.
Today’s trending AI news stories
AI Hardware and Infrastructure
Moonwalkers: These AI-powered strap-on shoes can make you walk three times faster American start-up Shift Robotics has developed “Moonwalkers,” AI-powered shoes capable of increasing walking speeds by up to 250%. Resembling skates, these shoes integrate machine learning algorithms to synchronize with users’ walking patterns, achieving speeds of 11 km/h. The company aims to augment walking efficiency rather than supplant it. Priced at $1,399 per pair and available exclusively in the US, the innovation has generated substantial attention on social media platforms such as TikTok, sparking discussions about its visibility and cost.
IBM reports analog AI chips patterned after the human brain IBM Research has developed an analog AI chip inspired by the human brain to enhance efficiency and reduce battery drain in AI projects. The mixed-signal chip contains 64 analog in-memory cores, each hosting an array of synaptic cell units. The chip achieved a 92.81% accuracy rate on the CIFAR-10 dataset and demonstrated improvements in latency and energy consumption. IBM envisions applications in low-power environments like cell phones and cameras, as well as cloud providers aiming to reduce energy costs and carbon footprint.
SDXL Gets Boost from NVIDIA TensorRT — Stability AI Stability AI collaborates with NVIDIA to enhance the speed of its text-to-image generative AI product, Stable Diffusion XL (SDXL). The integration of NVIDIA TensorRT, a performance optimization framework, doubles the performance on NVIDIA H100 chips, generating high-definition images in just 1.47 seconds.
Arm, the Chip Designer, Files for an I.P.O. Expected to Be Among the Largest Arm, owned by SoftBank, has filed for an IPO on the Nasdaq exchange that is expected to be among the largest. The move comes after Nvidia’s $40 billion acquisition offer was abandoned due to regulatory issues. Arm reported $2.68 billion in revenue for the last fiscal year. The IPO will provide SoftBank with capital for further investments in startups and aligns with its focus on AI. Arm designs and licenses microprocessor blueprints, powering products from mobile phones to industrial equipment.
AI Software, Tools, and Services
Meta launches own AI code-writing tool: Code Llama Meta has introduced Code Llama, a tool built on its Llama 2 language model, designed to generate and debug code. Available under the same license as Llama 2, Code Llama can write strings of code from prompts and respond to natural-language instructions. While Meta claims Code Llama performed well on benchmark testing, it didn’t specify which models it was tested against. The tool aims to enhance developer workflows and efficiency. Meta plans to release three sizes of Code Llama, catering to different project needs. Other tech giants like GitHub, AWS, and Google are also exploring similar AI-powered code generators.
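For developers who want to experiment once the weights are available through Hugging Face, here is a minimal sketch of prompting a Code Llama checkpoint for completion with the transformers library; the model id codellama/CodeLlama-7b-hf is used purely for illustration.

```python
# Minimal sketch: code completion with a Code Llama checkpoint via Hugging Face
# transformers. The model id below is illustrative; substitute whichever size
# and license-accepted checkpoint you have access to.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "codellama/CodeLlama-7b-hf"  # illustrative checkpoint name
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "# Python function that returns the n-th Fibonacci number\ndef fibonacci(n):"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```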
Figma introduces Jambot, an AI assistant widget for its whiteboard platform, FigJam Powered by ChatGPT, Jambot assists with tasks like brainstorming, creating mind maps, answering questions, and rewriting content. This enhancement aligns with Figma’s push for AI integration across its platform. The recent acquisitions of Diagram, an AI design startup, and Clover Notes, a creative whiteboard tool, exemplify this strategy.
Microsoft may soon give Windows 11 an AI revamp, according to reports Microsoft is reportedly planning to bring AI enhancements to Windows 11 apps like Photos, Snipping Tool, and Paint. The company is exploring features such as object identification and copying in the Photos app, incorporating optical character recognition (OCR) in the Snipping Tool for text extraction, and experimenting with AI-based art generation in the Paint App. These efforts align with Microsoft’s recent focus on AI advancements, including collaboration with OpenAI and the launch of Bing Chat. The company’s fall event on September 21 is set to reveal more about these AI projects.
Google plans to bring AI-fueled security enhancements to Google Workspace These enhancements will leverage AI to automate security tasks and enhance its zero-trust model. The company aims to automatically classify and label sensitive data, improve data loss prevention controls, and provide context-aware controls for sharing sensitive data. Google will also introduce client-side encryption for mobile versions of Gmail, Calendar, Meet, and other Workspace tools, giving customers control over encryption keys and data sovereignty. These updates are expected to be rolled out later this year and in early 2024.
OpenAI
OpenAI names Scale AI 'preferred partner' to fine-tune GPT-3.5 OpenAI has chosen Scale AI as its “preferred partner” for fine-tuning its GPT-3.5 Turbo large language model (LLM). Scale AI’s expertise in fine-tuning will be used to enhance the performance of GPT-3.5, allowing enterprises to create custom models for their specific needs. Scale’s Data Engine will be leveraged to accelerate model development by generating prompts and ranking model outputs. A case study with fintech company Brex showed that a fine-tuned GPT-3.5 model outperformed the stock model in generating high-quality expense memos.
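For context, fine-tuning GPT-3.5 Turbo is exposed through OpenAI’s fine-tuning API. A minimal sketch with the OpenAI Python library (0.x-style API) is below; expense_memos.jsonl is a hypothetical stand-in for a curated file of chat-formatted examples, for instance one prepared and ranked with Scale’s Data Engine.

```python
# Minimal sketch: launching a GPT-3.5 Turbo fine-tuning job with the OpenAI
# Python library (0.x-style API). "expense_memos.jsonl" is a hypothetical
# training file of chat-formatted prompt/response examples.
import openai

openai.api_key = "sk-..."  # your API key

# Upload the training examples.
training_file = openai.File.create(
    file=open("expense_memos.jsonl", "rb"),
    purpose="fine-tune",
)

# Start the fine-tuning job on top of the stock model.
job = openai.FineTuningJob.create(
    training_file=training_file.id,
    model="gpt-3.5-turbo",
)
print(job.id, job.status)
```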
Major websites like Amazon and the New York Times are increasingly blocking OpenAI's web crawler GPTBot Major websites are taking measures to block GPTBot, the web crawler OpenAI developed to collect training data for its ChatGPT chatbot. Concerns about the indiscriminate scraping of copyrighted data have led publishers including Amazon, The New York Times, and CNN to prevent GPTBot from accessing their content. OpenAI has committed to adhering to the robots.txt standard, which lets websites restrict crawler access.
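Publishers opt out by adding a rule for the GPTBot user agent (User-agent: GPTBot, Disallow: /) to their robots.txt. A quick way to check whether a given site currently blocks the crawler, using only Python’s standard library:

```python
# Check whether a site's robots.txt disallows OpenAI's GPTBot crawler.
# The URL below is just an example; point it at any site you want to check.
from urllib.robotparser import RobotFileParser

rp = RobotFileParser("https://www.nytimes.com/robots.txt")
rp.read()
print(rp.can_fetch("GPTBot", "https://www.nytimes.com/"))  # False if GPTBot is blocked
```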
OpenAI now tries to hide that ChatGPT was trained on copyrighted books, including J.K. Rowling's Harry Potter series New research suggests OpenAI is trying to conceal that ChatGPT was trained on copyrighted material, most visibly through the chatbot’s avoidance of verbatim responses drawn from copyrighted works. The study finds that ChatGPT now interrupts outputs when users attempt to extract copyrighted content, a step toward addressing potential copyright infringement. However, the model still reproduced copyrighted material, notably in responses related to J.K. Rowling’s Harry Potter series, underlining the ongoing challenge of preventing leakage of copyrighted text in AI-generated content.
🎧 Did you know AI Breakfast has a podcast read by a human? Join AI Breakfast team member Luke (an actual AI researcher!) as he breaks down the week’s AI news, tools, and research: Listen here
5 new AI-powered tools from around the web
Featured Tool:
Eightify extracts key insights from lengthy YouTube videos. It uses GPT together with its own technology to improve summary quality and to support videos of up to 32 hours. Plus, the average generation time is just 8 seconds!
Alta AI provides an AI chatbot builder that integrates with Google Drive and Slack. Users can easily create branded AI chatbots, sync documents from Google Drive, add preset questions and answers, and connect to Slack. The chatbots can be embedded in websites or mobile apps with a single line of code.
AI-enhanced Forms for Legal Intake Tonkean’s AI-enhanced forms transform legal intake processes by offering dynamic and responsive workflows. With real-time adaptability based on user input, the platform creates personalized journeys for stakeholders, streamlining submissions. The drag-and-drop editor and seamless integrations facilitate effortless form editing and data transfer.
Cursor is an AI-driven code editor that accelerates software development. It supports easy migration from VSCode, offers local security choices, and allows project chat, documentation browsing, code generation, bug spotting, debugging, and more.
Playbook x Midjourney Integrate Playbook with Midjourney for effortless organization of creative iterations. Link your Discord server with Midjourney and Playbook to seamlessly import outputs. Benefit from Playbook’s workspace advantages: store prompts, group variations, auto-tag images, and collaborate in real-time. Elevate brainstorming and project exploration with unified creative workflows.
arXiv is a free online library where scientists share their research papers before they are published. Here are the top AI papers for today.
The Prompt2Model framework automates the process of creating deployable machine-learning models from natural language prompts. It addresses the gap between proof-of-concept prototyping with large language models (LLMs) and practical deployment. It combines dataset retrieval, dataset generation using LLMs, and model retrieval, producing compact models that can outperform LLMs despite being far smaller. The framework supports various tasks and offers an extensible platform for research in model distillation, dataset generation, evaluation, dataset retrieval, and model retrieval. It aims to simplify NLP system construction and offers potential solutions to challenges associated with LLMs.
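To make the pipeline concrete, here is a conceptual, heavily simplified sketch of how the stages described above compose. Every function name and body is a hypothetical placeholder, not the paper’s actual API.

```python
# Conceptual sketch of the Prompt2Model pipeline stages (hypothetical
# placeholders, not the paper's real interface).
from typing import Dict, List

def parse_prompt(prompt: str) -> Dict:
    # The real system uses an LLM to extract the instruction and demonstrations.
    return {"instruction": prompt, "examples": []}

def retrieve_datasets(spec: Dict) -> List[Dict]:
    # Search existing datasets that match the task description.
    return []

def generate_dataset(spec: Dict, n: int = 100) -> List[Dict]:
    # LLM-synthesized training examples for the task.
    return [{"input": f"synthetic input {i}", "output": "..."} for i in range(n)]

def retrieve_and_finetune(spec: Dict, data: List[Dict]) -> str:
    # Pick a small pretrained backbone, fine-tune it on the data,
    # and return a deployable artifact.
    return "small-finetuned-model"

spec = parse_prompt("Answer questions about company expense policies.")
data = retrieve_datasets(spec) + generate_dataset(spec)
print(retrieve_and_finetune(spec, data), len(data))
```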
📄 Enhancing NeRF akin to Enhancing LLMs: Generalizable NeRF Transformer with Mixture-of-View-Experts
The paper introduces a novel approach, termed GNT-MOVE, to enhance the cross-scene generalization ability of Neural Radiance Field (NeRF) models for synthesizing novel views of unseen scenes. Drawing inspiration from LLMs, the authors incorporate Mixture-of-Experts (MoE) transformers into the NeRF framework. By addressing the balance between generality and specialization, they achieve improved performance in zero-shot and few-shot settings. The introduced enhancements, including shared permanent expert and spatial consistency objectives, lead to state-of-the-art results. GNT-MOVE demonstrates the potential of MoE in advancing the field of generalizable view synthesis.
The paper presents DenseDiffusion, a training-free method to improve the quality of text-to-image synthesis while enabling control over the image’s layout. The method modulates attention maps in pre-trained models according to dense captions and layout conditions, encouraging specific objects to appear in their corresponding regions. Cross-attention layers are guided by text-conditioned segmentation maps, while self-attention layers focus on tokens within the same segment. The approach adapts original attention scores, respects their value ranges, and considers segment sizes for optimal modulation. Experimental results show improved image quality and layout control compared to existing methods. While dependent on the base model’s capacity, DenseDiffusion offers a promising training-free solution for improved text-to-image synthesis.
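As a rough illustration of mask-guided attention modulation (not the paper’s exact formulation), the toy snippet below boosts cross-attention logits at image positions that a layout mask assigns to a text token and suppresses them elsewhere, before the softmax.

```python
# Toy illustration of mask-guided attention modulation, in the spirit of
# DenseDiffusion but not its exact formulation.
import torch

def modulate_attention(scores, segment_mask, strength=1.0):
    """
    scores:       (num_image_positions, num_text_tokens) raw attention logits
    segment_mask: same shape, 1.0 where the text token should appear, else 0.0
    """
    # Push logits up inside the target segment and down outside it.
    bias = strength * (2.0 * segment_mask - 1.0)
    return torch.softmax(scores + bias, dim=-1)

# 4 image positions, 2 text tokens; token 0 "owns" positions 0-1, token 1 owns 2-3.
scores = torch.zeros(4, 2)
mask = torch.tensor([[1., 0.], [1., 0.], [0., 1.], [0., 1.]])
print(modulate_attention(scores, mask, strength=2.0))
```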
The paper presents “Topical-Chat,” a knowledge-grounded human-human conversation dataset spanning 8 topics. Unlike previous datasets, it lacks predefined roles for conversation partners and explores broader topical coverage. The authors train Transformer-based models for response generation and evaluate them using automated metrics and human assessment. The dataset’s versatility and realistic nature allow modeling of partner roles, enabling research in open-domain conversational AI. The study aims to advance conversational skills, including world knowledge utilization, reasoning, and smooth topic transitions. The dataset release encourages data-driven research for knowledge-grounded conversational AI.
The authors introduce two methods to enhance the specialized capabilities of large language models (LLMs) in machine translation. The first method, SWIE (Segment-Weighted Instruction Embedding), enhances model instruction understanding by incorporating global instruction representations into the input and response. The second method, OVERMISS, addresses translation faithfulness by creating a contrastive instruction-tuning dataset to detect over-translation and mis-translation errors. These methods are applied to open-source LLMs, demonstrating significant improvements in translation performance and faithfulness metrics. The proposed techniques show promise in improving the quality and reliability of LLM-generated translations.
Thank you for reading today’s edition.
Your feedback is valuable.
Respond to this email and tell us how you think we could add more value to this newsletter.