Applications of Machine Learning (ML) have become pervasive in today's clinical literature. Yet while the promise of ML as a tool to aid clinical decision making is widely recognized, it has yet to be embraced by the clinical community. So what constitutes a good machine learning model for clinical applications? Certainly, a necessary condition for the success of any machine learning model is that it achieve accuracy superior to that of pre-existing methods. In healthcare, however, accuracy alone does not, and should not, ensure that a model will gain clinical acceptance. Because no model in practice achieves 100% accuracy, understanding when a given model is likely to fail should form an important part of the evaluation of any machine learning model destined for clinical use. Moreover, the most useful clinical models are explainable, in the sense that it is possible to describe, in clearly understandable language, why the model arrives at a particular result for a given set of inputs. In this talk I will expand upon these challenges, which make the creation of clinically useful machine learning tools particularly difficult, and discuss ways in which they can be overcome.
HAPPENING NOW!
Monday 12/9 2pm
2nd annual Gilbert S. Omenn Lecture (+ reception 3pm)
Featuring Collin Stultz, MD, PhD (@RLEatMIT @MIT_IMES @MITEECS @MassGeneralNews)
"Machine Learning Models for Clinical Medicine—the Holy Grail or a Pandora’s Box?" #ML
— DBMI at Harvard Med (@HarvardDBMI) December 9, 2019
Very pleased to hear Prof. Stultz at @HarvardDBMI annual Gil Omenn lecture discussing why ML has not been embraced more widely by clinical medicine and what it will take
— Isaac Kohane (@zakkohane) December 9, 2019
Great #omennlecture @HarvardDBMI by @CollinStultz from @MIT on #MachineLearning for clinical medicine that addressed important questions about trust, consistency, and explainability. Key takeaway: what’s an explanation or what can be interpreted strongly depends on the audience.
— Nils Gehlenborg (@ngehlenborg) December 9, 2019