- AI Breakfast
- Interview With Harvard Professor on AI in Medicine
Good morning. It’s Monday, April 10th.
We interview Harvard Medical School professor Dr. Isaac Kohane about the role of GPT-4 in medicine and explore Facebook’s new image segmentation model, which may be the most advanced image recognition model yet.
In today’s email:
AI Breakfast Interviews Dr. Kohane on AI in medicine
Edge Impulse: Pioneers of Edge AI
Facebook’s “Segment Anything” model for images
Top 5 AI Research Papers of the week
You read. We listen. Share your feedback by replying to this email, or DM us on Twitter.
Harvard’s Dr. Kohane on AI and Medicine
Dr. Isaac Kohane, a renowned physician, computer scientist, and professor at Harvard Medical School, has recently collaborated with colleagues to evaluate the performance of GPT-4 in a medical context.
The authors reveal that GPT-4 outperforms many doctors in certain areas, answering US medical licensing exam questions with over 90% accuracy.
GPT-4's capabilities extend far beyond impressive test-taking skills and fact-finding. The AI model also excels at translation, converting complex medical jargon into language understandable by a 6th-grade audience and even translating patient discharge information for non-English speakers.
We spoke with Dr. Kohane about the future of AI and medicine.
AI Breakfast: How do you think medical education and training should adapt to better prepare future doctors for working alongside AI technologies like GPT-4?
Dr. Kohane: I teach a third-year medical school class at Harvard Medical School on computationally enabled medicine.
I've already had the students experimenting with large language models (LLMs) such as ChatGPT and GPT-4. In the coming years, I will be explicitly asking the students to leverage skills in using these (and other) large language models to address problems in communicating with patients, medical workflow, diagnosis, therapeutic decision-making, and patient management.
I do not see what else we can do other than embrace this new and important learning tool, all the while repeatedly reminding our students of the importance of double-checking the content to make sure that these large language models have not confabulated their responses.
It's also clear that students are using these tools whether we guide them or not, and furthermore, many of our patients are doing the same. Therefore, embracing and understanding this new player in the patient-doctor relationship seems important and worthwhile.
AI Breakfast: In the context of global health, do you think GPT-4 could play a significant role in addressing healthcare disparities and improving access to medical expertise in underserved regions?
Dr. Kohane: There is an opportunity for GPT-4 to help reduce some of the disparities from a global perspective. This is probably not the case for the most resource-poor countries. Unfortunately, transmitting knowledge where food and water are scarce and medicines are rarely available will not have a large impact. But for those countries which have some resources and a sizable literate fraction of their citizenry, LLMs can augment the ability of healthcare paraprofessionals (e.g. village medics), as well as doctors, to deliver healthcare closer to the state of the art.
This would come through the rapid diffusion of medical knowledge via LLMs at the point of care, in the local language, with understanding of local resource constraints. However, field testing and educational programs in the use of these LLMs will be needed to ensure that users are just as careful to check for LLM confabulations as their developed-country counterparts.
AI Breakfast: As AI technologies like GPT-4 continue to advance, how can we ensure that they remain transparent and accountable in terms of their decision-making processes, particularly in a high-stakes field like healthcare?
Dr. Kohane: There is an urgent need to define what our society, writ large, expects and needs in terms of transparency of these large language models. Is it in their operation, the data that they use for the training sets, or the reinforcement learning that modifies their outputs? This conversation has to happen soon if we are to avoid unpleasant surprises.
AI Breakfast: How do you think AI can be incorporated into medical practice in a way that enhances the doctor-patient relationship, rather than diminishing the human connection in healthcare?
Dr. Kohane: One important goal is to reduce the paperwork of doctors so that we return to facing the patient rather than our computers, and so that documentation is mostly written as a result of our interactions with our patients.
AI Breakfast: Looking beyond GPT-4, what developments in the field of AI and medicine do you anticipate over the next 5-10 years, and how might they revolutionize healthcare?
Dr. Kohane: Developments are so rapid at present that the 5-10 year horizon seems opaque. The technology will surprise us, but I do wonder in which ways the medical-payor establishment will slow progress because of the potentially huge financial shifts that these technologies could enable.
Edge Impulse: A Pioneer in Edge AI
Edge Impulse has streamlined the development and deployment of lightweight machine learning (ML) models on a wide array of devices, catering to industries like health and industrial sectors.
Edge Impulse Studio Platform: A web-based platform that allows users to create datasets, train models, extract features, and deploy ML models across diverse hardware.
Bring Your Own Model (BYOM): Enables users to import their trained models into Edge Impulse for optimization and deployment on edge devices.
Python SDK: Provides ML engineers with model profiling, optimization, and C++ library generation within their Python notebooks.
Edge Impulse's innovative features have attracted large enterprises, including NASA, Oura, and Lexmark, along with over 70,000 developers.
The platform empowers ML engineers to seamlessly transition from data to ML model and device deployment, reducing time to market and making edge AI more accessible and efficient for various industries.
Facebook’s Powerful New Image Model
The Segment Anything Model (SAM)
Facebook’s SAM can detect nearly all objects in complex images
Facebook’s Segment Anything Model (SAM) is an open-source tool that allows users to identify and separate objects within images, making it easier to analyze and process them.
In simpler terms, it helps to recognize different elements in a picture and create individual "masks" for them, which are like digital outlines or layers of the objects.
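To make the idea of a mask concrete, here is a toy sketch in plain Python (this is not the SAM API; the image, mask values, and `apply_mask` helper are all hypothetical) showing how a binary mask acts as a digital outline that selects one object's pixels:

```python
# Toy illustration of a segmentation mask (NOT the actual SAM API).
# A mask is a grid of 0/1 values the same shape as the image,
# where 1 marks the pixels belonging to one object.

image = [
    [10, 10, 200, 200],
    [10, 10, 200, 200],
    [10, 10,  10,  10],
]

# Hypothetical mask outlining the bright object in the top-right corner.
mask = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 0, 0],
]

def apply_mask(image, mask, background=0):
    """Keep only the pixels the mask selects; blank out the rest."""
    return [
        [pix if keep else background for pix, keep in zip(img_row, mask_row)]
        for img_row, mask_row in zip(image, mask)
    ]

cutout = apply_mask(image, mask)
print(cutout)  # -> [[0, 0, 200, 200], [0, 0, 200, 200], [0, 0, 0, 0]]
```

A real segmentation model produces one such mask per detected object, so downstream tools can edit, recolor, or remove each object independently.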
This model has been trained on a massive dataset of 11 million images and 1.1 billion masks, which enables it to perform extremely well in various object segmentation tasks.
The significance of SAM lies in its ability to process images and identify objects in them with minimal input, making it highly efficient and versatile.
This has a huge potential in various applications, such as:
Image editing: SAM can be used in photo editing software to help users easily separate and manipulate different objects within images. For example, changing the color of a specific object or removing it entirely from the image.
Surveillance and security: In video monitoring systems, SAM can identify and track different objects or individuals, enhancing security measures and providing real-time insights.
Retail and marketing: SAM can be used to analyze shopper behavior by identifying and tracking different products and customers within a store, allowing retailers to optimize store layouts and marketing strategies.
Medical imaging: SAM can assist in processing medical images, like X-rays or MRIs, by accurately identifying and isolating different anatomical structures or abnormalities, which can help medical professionals in diagnosis and treatment planning.
Robotics and automation: In robotics applications, SAM can help robots recognize and interact with objects in their environment, allowing them to perform tasks more efficiently and adapt to new situations.
A unique aspect of SAM is that Facebook open-sourced the code for it. Meta’s competitive approach to AI seems to be making its code publicly available, which may well become its greatest strength in the AI race.
Check out the segmentation demo in the link below to see how it works.
Decoding AI: A Non-Technical Explanation of Artificial Intelligence
Our new book is available April 18th
This was a fun book to write.
Decoding AI breaks down the complexities of AI into digestible concepts, walking you through its history, evolution, and real-world applications.
We'll introduce you to the key players in the AI field, as well as explain the underlying algorithms, data, and machine learning concepts that power AI systems. You'll gain a deeper understanding of deep learning, neural networks, and reinforcement learning, and we'll explore various types of AI, from rule-based systems to probabilistic networks and beyond.
The goal was to make this book an approachable discovery of how AI works.
It discusses a wide range of applications AI has in areas like natural language processing, computer vision, robotics, and predictive analytics. It also delves into the regulatory landscape and policy issues surrounding AI, as well as the potential future developments in AI, such as its applications in healthcare, education, transportation, and even space exploration.
You'll also learn the difference between narrow AI and Artificial General Intelligence (AGI), and how to get started with using AI through tips and resources.
The link below is a 33%-off discounted pre-order for readers of the newsletter (Premium Subscribers receive a free copy).
The ebook will be delivered to your email on April 18th. It’s a fun, easy read for the endlessly curious.
5 Most Viewed Papers from the past 7 days
Note: arXiv is a free online library where scientists share their research papers before they are published. These are the 5 most viewed papers in the last week.
Do the Rewards Justify the Means? Measuring Trade-Offs Between Rewards and Ethical Behavior in the MACHIAVELLI Benchmark
MACHIAVELLI is a new benchmark that measures AI models' social decision-making abilities in diverse scenarios, aiming to find a balance between maximizing rewards and behaving ethically, ultimately making progress in creating safer and more capable AI agents.
A Survey of Large Language Models
This research explores the development and impact of large language models (LLMs), advanced AI algorithms that have evolved over time and now demonstrate remarkable abilities in understanding and generating language, with potential to revolutionize how we use AI technology.
Self-Refine: Iterative Refinement with Self-Feedback
Self-Refine is a new method that improves AI-generated text by having the AI model provide feedback on and refine its own output, leading to better results in a variety of tasks without needing extra training data or reinforcement learning.
Summary of ChatGPT/GPT-4 Research and Perspective Towards the Future of Large Language Models
This study provides an extensive overview of ChatGPT and GPT-4, highlighting their improvements and their potential applications across a wide range of fields, while also addressing ethical concerns and future developments.
AnnoLLM: Making Large Language Models to Be Better Crowdsourced Annotators
This study shows that GPT-3.5 can be used as an efficient annotator for labeling data by providing it with guidance and examples, achieving results comparable to or better than crowdsourced annotations in various language tasks.
3x the information, for less than $2/week
Stay informed, stay ahead: Your premium AI resource.
Thank you for reading today’s edition.
Your feedback is valuable.
Respond to this email and tell us how you think we could add more value to this newsletter.