Arnaub is a Senior Expert/Associate Partner in the Pharmaceutical and Medical Products group at McKinsey & Company. Additionally, he serves as a Teaching Associate in the Department of Health Care Policy at Harvard Medical School and a Lecturer in the Department of Policy Analysis and Management at Cornell University. At McKinsey, he advises pharmaceutical and biotech companies on a range of topics, including digital transformation, utilizing novel data and analytics for product positioning, and preparing for launch. He also works with leading technology companies and startups that are entering the healthcare sector on a range of market strategy and entry issues.
Prior to McKinsey, he served as a Director within Merck's Data Science and Insights group, where he led ventures and partnerships with academic medical centers, payers, and technology companies to power R&D and commercial activities. He previously served in the Obama Administration as an advisor to former Chief Technology Officers Todd Park and Bryan Sivak at the U.S. Department of Health and Human Services (HHS). At HHS, he co-led efforts for the Health Data Initiative, designed the www.healthdata.gov platform, and launched the Innovation Fellows Program. He also worked in the Secretary's Office as a lead policy analyst on healthcare fraud and abuse policy initiatives around the Affordable Care Act. Prior to government service, he spent a number of years as a strategy consultant at Deloitte Consulting, supporting pharmaceutical companies on M&A strategy.
He holds graduate degrees in health administration (MHA) and public administration (MPA) from Cornell University and received his undergraduate degree from the University of Michigan.
Hugo is a data scientist, educator, writer, and podcaster at DataCamp. His main interests are promoting data & AI literacy, helping to spread data skills through organizations and society, and doing amateur stand-up comedy in NYC.
Hugo: Arnaub, I'm really excited to have you here today to talk about the role of AI, data science, and machine learning in health care, what has and hasn't worked, but before we get there, I'd love for you to set the scene of your own journey and let us know how you got into data science originally.
Arnaub: Thanks, Hugo, and thanks to DataCamp for having me today. How I got into data science is quite serendipitous and, as with a lot of things in life, a bit of being in the right place at the right time. There's also been a concurrent movement within health care where data science has really taken off within the last 10 years, so as a lot of perfect storms work, all these factors aligned. My career's been a bit of a zig-zag: I've held roles in consulting, worked for the previous administration in both the technology and policy worlds, then moved into pharma, and now I'm back in consulting. But the central theme, the ethos, of how I've worked has always been data science; that's been the common thread.
Arnaub: Just to give you the quick background, I started my career after grad school as a consultant, originally focusing on pharma M&A, but then helping build health data infrastructure in the pre-Affordable Care Act (ACA) era. That actually led me to go work for the Obama administration, where I initially transitioned into some data science efforts around health care fraud and abuse, thinking not only about the policy but also about the data behind it.
Arnaub: Things transitioned into more of a technology bent from there. I then had the opportunity to work with a few technology officers within HHS. At the time, Todd Park and Bryan Sivak were creating a new movement around open data, building APIs and platforms to access the tremendous amount of data the government sat on, so I was very, very fortunate to have run into two Silicon Valley guys who brought their DNA over to government and started a number of initiatives like the Health Data Initiative. We were building platforms around fda.gov, clinicaltrials.gov, and open APIs. A lot of that was also right time, right place, and being able to learn from folks who'd done it in the private sector with a technology mindset. As for the pharma aspect, I actually ended up transitioning over to pharma because of government: I followed some folks who had spent time working in government and started a team focused on data science at Merck. What we were doing at Merck was very much about identifying and utilizing novel data sets, which could include real-world data like claims, but also clinical, genomic, social media, and sensor data, and thinking about how we demonstrate the clinical and economic value of Merck products. This was a whole new way of positioning a drug, using the advent of new methodologies within data science to support and bolster its value.
Arnaub: I did that for a number of years and worked with a variety of academic institutions on different machine learning methodologies and data science methods. All of that eventually led me to McKinsey, where I am right now. In that capacity, I do a lot of work not only with pharma clients but also with technology clients on how they're entering the health care sector, so I've been fortunate to be on the front lines of how different companies are deploying machine learning and data science in a variety of settings. Hopefully we'll cover a lot of that today.
Hugo: Absolutely. As you say, it took a lot of moving parts, almost a perfect storm of your interests, serendipity, the emerging data science stack, and all the data available, to make this career path what it's become. We'll see that the same thing happened in the broader space as well: it actually took a lot of moving parts, the availability and generation of large-scale data, computational power, and statistical insights, to allow data science, in health and otherwise, to emerge as it has.
How did you learn your data skills?
Hugo: I'm also interested in the fact that you started thinking about this type of stuff at grad school, and that's when you started working with data. The skills you needed then and need now to work with data, are these skills that you learned on the job, or were you trained specifically to do this type of work?
Arnaub: It's an interesting question. In grad school I spent some time in biostatistics and epidemiology. My background's in health care on the business side, with a health policy bent as well. The funny thing about data science is that if you ask a lot of folks in healthcare, they'll tell you a data scientist is a statistician in California, but they'll also say that that notion and term have changed a lot as new methods and the advent of machine learning have really warped what the definition means.
Arnaub: In some capacity, I think people who have worked with traditional claims data and have skill sets like epidemiology and biostatistics are, in their own right, data scientists. What's changed now is, like you mentioned, greater volumes and different types of data, and different ways of processing and understanding how we use it. My training started there, and now, like a lot of people, I'm having to evolve and learn as a lot of disciplines converge. Computer science, and the fact that there are actual data science degrees now, programs teaching different ways of using data, are converging with a lot of the old-school ways people have used health care data. That's where I find myself right now.
Hugo: Great, and I love that you mentioned different types of data, because heterogeneity of data is so abundant in health care. Actually, when people ask me where heterogeneous data occurs, the clinical setting is one I mention first: you can have tabular data from experiments and controls, in addition to imaging data from scans, which we'll get to, in addition to natural language from doctors' notes on patient files, and that type of thing.
AI in Healthcare
Hugo: I think that's a great time to move into this conversation about the use of AI in health care. As we know, there's a great deal of hype around the use of machine learning and AI in health care. I'm wondering, from your point of view, what has actually worked in this space?
Arnaub: Yeah, it's a really important question, and when we talk about what's actually worked, it's important to know that this is very much an evolving space, so in some cases the jury is still out. In other cases, we're starting to see very promising signs. Just to acknowledge the hype: in my personal opinion, we're in a golden era of funding for AI in health care. The statistic I saw most recently was that since 2016, 300 startups in AI in health care alone have emerged. These companies span the gamut from reducing administrative scut work within insurance and billing companies, to actually creating new drugs and empowering compound development. What's important about that is we're starting to see where a lot of the attention and media coverage for AI is being directed. Venture dollars are flowing to companies that are streamlining operational and administrative-efficiency tasks in health care, and also to companies with bold aspirations of upending processes that have been in place for decades.
Arnaub: What we're trying to discern is what a successful outcome looks like: what is the bigger aspiration, where do we see tangible improvements in health care, and how do we think about the advancement of patient outcomes? This conversation isn't meant to splash cold water or throw a wet blanket on the great work being done. It's just meant to understand, as you mentioned at the very beginning of the conversation, where amid all the talk we've actually seen the promise.
Arnaub: Let me go ahead and take on a few examples of spaces where I think this has worked, and we can have a deeper conversation about them. The very first thing you mentioned was imaging, and diagnostic imaging in particular is a bedrock of health care. It's really important to remember that pioneering use cases of AI in other industries actually started with the ability to read pictures and recognize faces, patterns, and objects in photographs. It's very much similar in health care, where a lot of our lighthouse use cases, the ones where we've seen great success, are starting to take place.
Arnaub: To give you a little bit of context on the growth of this market, AI-facilitated diagnostic imaging is projected to be a two-billion-dollar industry by 2023. That is just a fraction of the whole medical imaging market, which is 57 billion dollars; that figure includes equipment, software, and services, so it's a massive, massive marketplace that has been in health care for quite some time. What we're seeing now is a consensus among a number of parties, whether hospitals or technology companies, that AI is going to transform the diagnostic imaging industry: enhanced productivity, improved accuracy, personalized treatment planning, all of these functions are up for grabs.
Arnaub: Just to build a little more on why imaging is the first place we're seeing improvement: for one thing, hospitals are producing around 50 petabytes of data per year, and 90% of that data is from medical imaging. We're talking about MRI scans, PET scans, and CT scans, and all of these are also embedded within EHRs. So one reason is the availability and ubiquity of this data.
Arnaub: I think the second reason is that there are a number of compelling use cases already there within health care. To pick on some great work that Google has done: Google Brain published a really powerful paper in JAMA, where they worked with roughly 130,000 patients from an eye institute and looked at retinal fundus images. What they were able to do was come up with a sophisticated convolutional neural network that could predict diabetic retinopathy, which is one of the leading causes of blindness globally; around 400 million people have this disorder.
Arnaub: What they effectively did was come up with a more refined version using a subset of those 130,000 images, and they outperformed a panel of eight ophthalmologists in grading where the retinopathy took place and characterizing the contextual clues. Their F-score was 0.95, they reported a strong AUC, and it was published in JAMA. There's quite a strong clinical argument to be made that if we get access to more of this type of data and can build it into the tools and processes ophthalmologists use to see their patients, that's just the beginning. DeepMind has a very similar study in the retinal space. Those two examples alone, I think, are pretty compelling. And it's not only ophthalmology: you're also starting to see this within dermatology and pathology as the next set of lighthouse use cases.
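To make the reported metrics concrete, here is a minimal sketch of how an F-score and an AUC are computed from a classifier's outputs. Everything below (the confusion counts, labels, and scores) is made up for illustration; none of it comes from the JAMA study.

```python
def f_score(tp, fp, fn):
    """F1 score: harmonic mean of precision and recall."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def auc(labels, scores):
    """AUC: probability a random positive outranks a random negative."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical model outputs: 1 = retinopathy present
labels = [1, 1, 1, 0, 0, 0, 1, 0]
scores = [0.9, 0.8, 0.75, 0.78, 0.2, 0.6, 0.85, 0.1]

print(round(f_score(tp=9, fp=1, fn=0), 2))  # 0.95 with these made-up counts
print(round(auc(labels, scores), 2))        # 0.94
```

An F-score of 0.95 therefore says the model's precision and recall are jointly very high; AUC instead measures ranking quality across all thresholds, which is why papers typically report both.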
What types of companies are making the advancements?
Hugo: It strikes me as interesting, and I'm wondering if it's interesting to you, that it's companies such as Google in this case, companies not traditionally known in the health care space, that are making such advancements.
Arnaub: Yeah, I think you're starting to see a lot of interesting collaborations between companies that have world-class machine learning and health care institutions. On the other side of Silicon Valley you have Facebook, which just announced a collaboration with the NYU School of Medicine to use its AI to speed up MRI scans. The project, for those of you who are interested, is called fastMRI. It's initially looking at around three million images of the knee, the brain, and the liver, across around 10,000 patient cases. That was just recently announced, so we'll see what the fruits of that labor are.
Arnaub: I don't think it's all that surprising. Google has the computational power, and it's now incumbent upon them to figure out where the applications and use cases are. That's why you're seeing imaging as their first lighthouse: they've done this compellingly in other industries, and now they're able to do it with health care data as well.
Use Cases for AI in Healthcare
Hugo: So you were about to go on and tell us a bit more about use cases in ophthalmology and dermatology, for example.
Arnaub: Yeah. Ophthalmology obviously has the retinal disease example, and we've started to see different cases with breast and cervical cancer. There's a great example of a partnership between Kaggle, Intel, and a company called MobileODT, where participants developed an algorithm that accurately identified a woman's cervix type, with the aim of better screening and treating women for cervical cancer. The data consisted of around 10,000 labeled cervix images spanning cervix types one, two, and three. The model was a 50-layer convolutional neural network, a deep learning model that accurately segmented different parts of the image for cervix type identification. It's another example where, just by leveraging the power of the crowd, not even academically or clinically trained folks, they were able to accurately identify the cervix type 75% of the time.
Arnaub: Something really important to note is that these CNNs are reproducible: you don't have to rebuild and reinvent the wheel every single time. That's where you're going to start to see great refinement, and a lot of enhancement in how we do image recognition and reproduce these algorithms.
Arnaub: I think the second thing is these major partnerships where you're starting to see tech companies partner with eye institutes and large corporations that have imaging data. That's going to be very compelling and powerful.
Hugo: And so when you say reproducible, do you also mean reusable in the transfer learning sense?
Arnaub: Yeah, and I think we'll get to this later on, hopefully, but one of the big challenges with AI in health care is making it reproducible. The big hurdle is that data differs across the health care system: the patient you're seeing in California will be very different from the one you're seeing in Texas or South Carolina or Boston. What we're trying to better understand is how you establish the generalizability of an algorithm that was developed on a certain subsection of the US or global population. Being able to consistently validate those algorithms is the challenge, because there are also different ways of characterizing this; I'll spend some time talking about that later.
Arnaub: For radiology specifically, the outcomes you're looking for differ. You might look at the probability of a lesion, a feature of a tumor, or the location of a tumor. You have to repeat that exercise consistently with different types of imaging data, over and over again, in order to say the algorithm is reproducible. That's where we're starting to see that we have to continuously prove an algorithm remains accurate across other data settings.
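One way to picture that re-validation exercise is to take a single frozen model and re-compute its AUC on cohorts from different sites. The sketch below is purely illustrative: the sites, cohorts, and score distributions are synthetic, and the "model" is just the score column itself; the point is only the repeated per-cohort check.

```python
import random
random.seed(0)

def auc(labels, scores):
    """AUC: probability a random positive outranks a random negative."""
    pos = [s for l, s in zip(labels, scores) if l]
    neg = [s for l, s in zip(labels, scores) if not l]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def make_cohort(n, shift):
    """Synthetic cohort: positives score higher on average;
    `shift` degrades the separation to mimic a population change."""
    labels = [random.random() < 0.3 for _ in range(n)]
    scores = [random.gauss(0.7 - shift if l else 0.4, 0.15) for l in labels]
    return labels, scores

# Re-validate the same frozen model on each site's cohort
for site, shift in [("Site A", 0.0), ("Site B", 0.1), ("Site C", 0.2)]:
    labels, scores = make_cohort(500, shift)
    print(site, round(auc(labels, scores), 2))
```

As the population shifts away from the one the model was built on, the per-site AUC degrades, which is exactly why an algorithm validated in one setting can't automatically be called reproducible elsewhere.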
Hugo: Look, in all honesty, having this conversation makes me realize even more how important it is to de-mystify these things, particularly because there are so many touch points where AI can have an impact in health, as you've mentioned: everything from administrative tasks and scut work in the insurance industry to all of these diagnostics. Before we move on, I'm wondering if there are any other super interesting use cases that come to mind?
Arnaub: Yeah, absolutely. The second bedrock, or lighthouse, that we're seeing a lot of is diagnosis prediction: how do you find new variables, ones you haven't yet unearthed within the data, that might contribute to the progression of a disease or its treatment?
Arnaub: We're actually working with several clients in this space right now on coming up with novel prognostic variables that could signal the progression of a disease, or perhaps predict earlier onset. What makes this compelling is that there's still a great amount of misunderstanding, and a great amount of unmet need that we're not characterizing within our patient populations. If we can use machine learning methods to better understand who those patients might be, we might be able to do incredible things in getting them in and out of the hospital and in front of a provider quicker.
Arnaub: A great example of this: Emory just released a study on sepsis where they looked at 42,000 patients and 65 different measures that might be predictive of the onset of sepsis. They looked at different time intervals: within four, six, eight, and 12 hours. What was cool about this is that the model matched a physician's level of accuracy in predicting sepsis; in a validation cohort, the doctor's predictions and the algorithm's were basically indistinguishable. This is not an example of machine versus physician. It's that we now have the ability to confirm and corroborate what physicians are finding, and if we continuously refine this, we might find measures that are even more predictive of sepsis.
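As a rough illustration of this kind of modeling, here is a toy sketch: a logistic regression fitted by plain gradient descent to predict sepsis onset from two synthetic vitals. Everything here (the features, their distributions, and the cohort) is fabricated for the example; the real study used 65 measures, 42,000 patients, and several time windows.

```python
import math
import random
random.seed(42)

def synth_patient():
    """Synthetic patient: a standardized vitals vector plus a sepsis label."""
    septic = random.random() < 0.3
    hr = random.gauss(110 if septic else 85, 12)         # heart rate
    lactate = random.gauss(3.0 if septic else 1.2, 0.6)  # serum lactate
    # bias term + roughly standardized features
    return [1.0, (hr - 85) / 25, lactate - 1.2], int(septic)

data = [synth_patient() for _ in range(2000)]

def predict(w, x):
    """Logistic model: probability of sepsis onset for feature vector x."""
    return 1 / (1 + math.exp(-sum(wi * xi for wi, xi in zip(w, x))))

# Fit by plain full-batch gradient descent on the log loss
w = [0.0, 0.0, 0.0]
for _ in range(500):
    grad = [0.0, 0.0, 0.0]
    for x, y in data:
        err = predict(w, x) - y
        for i in range(3):
            grad[i] += err * x[i] / len(data)
    for i in range(3):
        w[i] -= 0.5 * grad[i]

acc = sum((predict(w, x) > 0.5) == y for x, y in data) / len(data)
print("training accuracy:", round(acc, 2))
```

Comparing a model like this against physician judgment on a held-out validation cohort, rather than on the training data as above, is the step that let the Emory team claim the two were indistinguishable.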
Arnaub: The one other example I want to share actually just came out last week in the Journal of the American Medical Association, a pretty top-tier publication. It was a randomized trial of 500-some patients with staph infections, followed over a six-year period, and it found that an algorithm did just as well as doctors in suggesting how to treat them with antibiotics. What makes this really compelling is showing that patients on certain antibiotic protocols were on a drug for a certain number of days, and might have been on fewer drugs for fewer days. You're essentially looking at antibiotic protocols and what best practice is for keeping patients out of the hospital. That's where you're starting to see a lot of compelling evidence, and given that this is now appearing in top-tier medical journals, it's not something in the future. These things are happening right now.
Hugo: And something you've hinted at several times already is that there's a false dichotomy between humans and machines, essentially, right?
Future of AI in Health Care
Hugo: And what is more interesting, I think, is the idea of human algorithmic interaction. The idea of having artificial intelligence and machine learning models with humans in the loop. Do you see this being part of the future of AI in health care?
Arnaub: I guess this is your robots versus physicians conversation?
Arnaub: Yeah, I'll give you two funny anecdotes that are perpetuating this theory. We hear a lot of statements about whether physicians are going to be replaced by machines. One example: medical students are reportedly not specializing in radiology because they fear the job market won't even exist within 10 years.
Arnaub: The other example is a really interesting company in China called iFlytek, a pretty substantial Chinese AI company, whose machine was the first to pass a medical licensing exam, and it scored substantially higher than the student population. When you hear these kinds of statements, and then you see all the JAMA or New England Journal evidence of machines matching doctors, there's going to be a lot of conversation. It also demonstrates how far machine learning has actually come.
Arnaub: There are a few things that make me believe we're nowhere close to physicians being replaced. One is that a lot of these AI systems, taking the radiology example, perform what's called narrow AI. These are single tasks: the deep learning models are programmed and tuned for specific image recognition tasks, such as detecting a nodule or looking for a hemorrhage in a chest CT. These are N-of-one, binary tasks, either yes or no. If we keep tasks in this narrow detection focus, we're going to find a lot of interesting things, but what that means is that these are augmenting tools. They're going to help physicians improve diagnostic accuracy, but as we all know, physicians do a wide variety of tasks, and a substantial amount of mental labor goes into how they diagnose patients.
Arnaub: In the short term, I think we're looking for AI to power solutions that reduce costs, improve accuracy, and augment physicians' decision making. I don't see it replacing the tremendous work that our docs and providers do any time soon.
Hugo: Yeah, and I love that you mention narrow AI, which, as you said, means algorithms and AI models built to solve specific tasks. I think in the cultural consciousness, when people hear AI, they don't think of narrow, weak AI. They think of strong AI, which reflects human cognition in some sense, and that isn't even necessarily what we want in most places or what we're working towards. Right?
Arnaub: Mm-hmm (affirmative). Yeah, that's right, exactly. It has to be more expansive. The other thing worth mentioning, and we've touched on this already, is that the consistency and portability of the models has to be there. We're still a long way from integrating this into physician decision making. Different vendors are focused on different deep learning algorithms and a variety of use cases. Even the things being approved by the FDA, and we'll get to this, have completely different focal points. Until we can start to standardize a lot of that, it's going to take some time. So your point about narrow versus much more expansive AI is part of the equation, and then there's how we actually make this reproducible.
How has data science, ML, and AI evolved in the health care space?
Hugo: Something that you've mentioned several times is that a lot of the power we see now comes from deep learning, and you mentioned the use of convolutional neural networks. I'll just step back a bit for those people who want some de-mystification of deep learning. Deep learning, and correct me if I'm wrong at any point, is a subclass of machine learning, used mostly in supervised learning, where you're trying to predict something. This particular type of model is inspired loosely by the networks of neurons in our physiological systems, in our brains.
Hugo: Convolutional neural networks are ones that are quite good at picking out patterns in images, essentially; they use a convolution technique to do so. Of course, AI pre-dates convolutional neural networks, and although they're very strong at the moment, I'm sure you've seen trends emerge and dissolve. I'm just wondering if you could speak to how the moving parts of data science, ML, and AI have evolved in the health care space since you've been working in it?
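For readers who want to see the convolution technique in miniature: the sketch below slides a tiny hand-picked kernel over a toy image and sums elementwise products at each position. A real CNN learns many such kernels from data; this fixed vertical-edge detector is just for intuition.

```python
def convolve2d(image, kernel):
    """Valid 2D convolution (no padding): slide the kernel over the image
    and return the sum of elementwise products at each position."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            row.append(sum(kernel[a][b] * image[i + a][j + b]
                           for a in range(kh) for b in range(kw)))
        out.append(row)
    return out

# 4x4 "image" with a bright right half: a vertical edge down the middle
image = [[0, 0, 1, 1]] * 4
edge_kernel = [[-1, 1]]  # responds where intensity jumps left to right

for row in convolve2d(image, edge_kernel):
    print(row)  # each row prints [0, 1, 0]: the response peaks at the edge
```

Stacking many learned kernels, nonlinearities, and pooling layers on top of this one operation is what turns the idea into the image classifiers discussed above.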
Arnaub: Yeah, and if we generalize to health care, there's a whole variety of models, at different levels of sophistication, being thrown at different problems. At the very basic level, a lot of early applications of AI in health care focused on the relationships between diagnoses and medications. Some of the more basic techniques, such as association rule mining or simple supervised learning, were meant to find and extract important associations. There are a lot of limitations to those methods: they only looked at item-level co-occurrences, they couldn't reach abstractions at a higher level, and there wasn't a lot of usefulness for data exploration or clinical decision support.
Arnaub: If you look at supervised learning techniques, they tackle these problems from a prediction perspective, and with the right level of data we can also come up with non-predictive applications: things like disease group identification or patient stratification. Those things can happen as data becomes much more usable, for lack of a better word. That's where we'll see supervised learning become much more applicable, going from small data sets with few observations to much more massive examples. There's great work being done at Stanford and UCSF, for example, where they've looked at hundreds of thousands of patients with 10 years of longitudinality, billions of observations, and built sophisticated deep neural networks. That's where you're starting to see the far-reaching applications of AI.
Arnaub: In other cases, we're still working on the data problem: we're getting enough data to make this interesting, but the sophistication of certain models or methods may not be there because the data, quite frankly, isn't that good.
What does the future of data science, ML, and AI in health care look like to you?
Hugo: All that being said, what does the future of data science, ML, and AI in health care look like to you?
Arnaub: Yeah, there are a lot of applications we haven't talked about yet. We've picked on the two easy ones in terms of what's happened and what's been working: disease diagnosis prediction and imaging. There's quite a bit of work happening in drug development. There are exciting startups focused on things like drug repurposing, using real-world data and machine learning algorithms to explore the relationships between drug molecules and disease. That is extremely compelling, and it's where you're starting to see a lot of funding go, especially from biotech and pharma. Companies like BenevolentAI and Numerate are using deep learning to mine vast amounts of data, everything from scientific papers to clinical trials, effectively trying to understand which compounds can be most effective at targeting a disease.
Arnaub: These are the types of things getting quite a bit of investment, though we haven't seen the fruits of the labor yet. I mentioned BenevolentAI: they've started identifying hypotheses around ALS treatments, and while this is just a start, it's beginning to narrow down which drug targets or compounds to go after. That not only saves a tremendous amount of time for biotech and pharma, it also expedites the drug development process. That's one example.
Arnaub: There are also really interesting and powerful examples around genomic data that we haven't talked about yet. To go back to Google for one second: DeepVariant is an open-source tool that came out of about two years of work between Google Brain and Verily, Google's life sciences arm. What they were effectively able to do is come up with a more sophisticated statistical approach to spotting mutations and filtering out errors. DeepVariant changes the whole task of variant calling, which is trying to figure out which base pairs are really part of you and which are artifacts of the sequencing process, by turning it into an image classification problem. DeepVariant is starting to replace and outperform the standard bioinformatics tools, like GATK and SAMtools, reducing error rates by up to 10 times.
Arnaub: This is just the beginning stages, I think. Even Google will tell you that their genomics work is a couple of years out, considering this one tool took two years to build, but I'm extremely excited about that potential. There are other examples around physician burnout and the advent of voice technology in health care. We're starting to understand that doctors spend enormous amounts of time on EHR (electronic health record) data entry, and if we can use machine learning, natural language processing, and voice technology to auto-populate structured fields within records, we can make the doctor's job less burdensome and reduce physician documentation burden. Those are three use cases I think are on the frontier: areas where there's a lot of hype and interest and really amazing work happening. That's a short list of where I see the future going.
Questions from the Audience
Hugo: Great. Before I go on, there are actually a few interesting questions from the crowd. Gamal has asked: what about liability? I want to frame that in terms of the future of data science, ML, and AI in healthcare, and in particular the fact that a lot of the algorithms we've discussed are essentially black-box algorithms: it's difficult to see why they make the predictions they do. So in terms of interpretability versus black boxes, maybe you can discuss that with respect to liability for the models that we as data scientists build.
Arnaub: Yeah, I think that's an incredibly important question. One thing I want to talk about is the policy space in terms of where the future is. This notion of FDA-approved algorithms is actually starting to happen. What we're seeing right now is a lack of consistency and transferability in the current models, because they focus on different endpoints and they're built in a black box setting where it's data in, and we're not really sure what comes out. I think what that means is that regulatory bodies are going to intervene, albeit in a positive way.
Arnaub: As an example, the American College of Radiology is actually helping to provide methodologies for vendors to verify the effectiveness of an algorithm before it's taken to market. I think that's one example. The other example is the FDA accepting and approving algorithms as part of diagnosis. They gave a positive vote to a decision support tool that uses an algorithm for neurovascular disease. They did the same thing for diabetic retinopathy in April, and then they did something for a computer-aided tool that helps with wrist fractures in adult patients. These are all examples of the FDA permitting the marketplace to be open. They are allowing algorithms to actually help providers in a regulated way.
Arnaub: There's actually really cool stuff happening within the White House and the House Oversight Information Technology Committee. If you guys are extremely bored, you should read this really ominous report called "Rise of the Machines" that the House Oversight Committee just put out. It basically asks how the NIH is going to ensure that there's standardization within algorithms. The same thing with the White House: they put out a really interesting plan from the government to build up AI in an ethical way. I think the black box problem is going to continue to happen. We've already seen it be problematic for some large companies. We need to be able to address that, and although we don't love government intervention, I think this is one instance where we're actually seeing a lot of positive things come out of it.
Do ethical questions in data science have a much bigger impact on AI in health care, and are there moves towards developing ethical guidelines for researchers in this space?
Hugo: Following up on this, we've actually got a great question from someone in the audience called Daniel about ethical questions in general: Do ethical questions in data science have a much bigger impact on AI in health care, and are there moves towards developing ethical guidelines for researchers in this space? You've spoken to that with respect to top-down regulation. I'm also wondering about practice from within the data science community. What type of stakeholders will hold data scientists accountable? And also, in marketing or advertising, for example, if you show someone the wrong ad, that doesn't have as serious an impact as giving someone the wrong diagnosis, right? Are there guidelines that are particularly valuable in the health space?
Arnaub: Yeah, so I think what we're seeing is the question of how we standardize an ontology for a disease, and that's an evolving question. There are academic consortiums that are focused on reproducing these phenotypes. A phenotype is basically how we characterize a patient and their respective disease. If academic groups and organizations come together to say this is a generally accepted algorithm, this is how we can avoid erroneous cancer treatment advice, or this is how we recognize an unsafe or incorrect treatment recommendation, I think that will actually compel more folks to work within certain parameters and build algorithms that are within guidelines and practice. Otherwise, it's incredibly easy to find signal within a lot of health data noise. I think there are companies that have suffered some hard lessons that way. As long as we are working with organizations that are attempting to do this, that's one way of tackling it.
Arnaub: I think the other thing is that health data is incredibly inconsistent, and there is a national standards committee called HL7, which is a health data standards organization. They are really making a strong push for something called FHIR, which is Fast Healthcare Interoperability Resources. It's trying to create a standard where data is no longer somebody's competitive advantage, but something that everyone can use and that is standardized for everyone. You're not just looking at inconsistent standards.
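To make the interoperability point concrete, here is a minimal, hand-written FHIR Patient resource. This is an illustrative fragment only; real FHIR resources carry many more fields and are validated against the full specification, and the name and dates here are invented.

```python
import json

# A minimal FHIR Patient resource (illustrative; real resources are richer
# and validated against the FHIR specification).
patient = {
    "resourceType": "Patient",
    "id": "example",
    "name": [{"family": "Smith", "given": ["Jane"]}],
    "gender": "female",
    "birthDate": "1980-04-01",
}

# Because every FHIR-conformant system exchanges the same resource shapes,
# a consumer can parse this without knowing which vendor's EHR produced it.
parsed = json.loads(json.dumps(patient))
print(parsed["name"][0]["family"])  # Smith
```

That shared schema is exactly what Arnaub means by data ceasing to be a competitive advantage: once every system speaks the same resource format, the moat moves from hoarding data to using it well.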
Arnaub: The Centers for Medicare and Medicaid Services are really trying to push standard ontologies. I think FHIR and these other organizations are trying to create consistency behind all the chaos and the noise.
Hugo: Fantastic. That actually answers the next question we had from the audience, from David, which concerns research into the policy implications of AI in health care, and in particular whether FHIR will have any impact on AI implementation. It's great that you were able to answer that question even before I framed it.
Hugo: Another question. I'll just remind everyone listening and watching today to send through questions in the chat as soon as they come to mind. I've got a really interesting question from William, which we hinted at earlier. William says, "What I notice is that a big part of the hype is focused on the R&D side of health care, for example, image analysis and drug discovery. What are the hypes, and more importantly, the promising applications on the manufacturing side?"
Arnaub: Yeah, so that's a good question. I assume that's in relation to drug development, sorry, within pharmacos?
Hugo: Yeah, exactly.
Arnaub: Yeah, so I think we're starting to see a lot of activity in this space. It's a little bit more nuanced, but in terms of how manufacturing is trying to tackle this, I think we're now in a place where we can standardize and build a better understanding of how drug cycling takes place and where the supply chain can be optimized. There are companies like BERG, for example, that are using AI not only for research applications but for manufacturing. It's something I come across less, but it's still popular. I think there are ways to apply unsupervised learning methods to understand drug circulation and where we can improve our supply chain efforts.
Arnaub: There is actually a cool effort from the UK Royal Society that is looking at the role of machine learning in bio-manufacturing. Can we actually help optimize the time factor here, like helping manufacturers reduce the time to produce drugs, lower costs, improve replication? Yes, still very much a popular topic. It's not something that we've avoided, but discovery is where I've seen a lot of the money and interest flow to right now.
Given the black box nature of AI and the proprietary nature of industry, how will external validation and reproducibility of algorithms be assessed?
Hugo: We've got several questions coming through about, I suppose, the ethical nature of using AI and ML and data science in health care. The first one I want to ask you is from a listener called James. James writes: given the black box nature of AI and the proprietary nature of industry, how will external validation and reproducibility of algorithms be assessed? He also asks, basically, where does open science fit in for the commercial AI space?
Arnaub: Yeah, that's a good question. I think what we need to do is come up with a more interdisciplinary, multi-stakeholder way of evaluating the different algorithms that are entering the marketplace. You have large top-down institutions like the FDA that are evaluating the ability of a physician to utilize an algorithm in practice. I think there are other organizations at the academic level that are more interdisciplinary. A great one is OHDSI, the Observational Health Data Sciences and Informatics group, which is actually a collaboration between pharma, academics, and startups. One thing they've done that I think is really important is create a common data model for health care. They've looked at disparate observational health care databases and recognized that EMRs are really important for supporting clinical care, while databases like claims data are important for reimbursement; the two serve different purposes. We need a common data model that accommodates both of these different types of data.
Arnaub: This common data model, which grew out of a partnership called OMOP, the Observational Medical Outcomes Partnership, basically tries to take all the noise from random coding systems and create one standardized ontology and vocabulary. That's one way of getting buy-in from multiple, interdisciplinary players. That's, I think, something that helps with the ethical challenge.
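The core move of a common data model can be sketched simply: many source coding systems map onto one standard concept. In this toy, E11.9 (ICD-10-CM) and 44054006 (SNOMED CT) both describe type 2 diabetes; the numeric concept IDs below are made up for the example and are not real OMOP identifiers.

```python
# Illustrative sketch of the OMOP idea: disparate source vocabularies are
# mapped to a single standard concept ID (IDs here are hypothetical).
CONCEPT_MAP = {
    ("ICD10CM", "E11.9"): 1001,    # type 2 diabetes mellitus
    ("SNOMED", "44054006"): 1001,  # same clinical idea, different vocabulary
    ("ICD10CM", "I10"): 2002,      # essential hypertension
}

def to_standard_concept(vocabulary, code):
    """Resolve a (vocabulary, code) pair to its standard concept ID."""
    return CONCEPT_MAP.get((vocabulary, code))

# Records coded in different systems resolve to the same concept, so
# downstream analytics can treat them as the same condition.
same = to_standard_concept("ICD10CM", "E11.9") == to_standard_concept("SNOMED", "44054006")
print(same)  # True
```

This is why a common data model matters for reproducibility: an analysis written against standard concepts runs unchanged across hospitals whose raw records use entirely different coding systems.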
Arnaub: OHDSI is an organization that actually works on reproducing and publishing all this research. All of it's open source. They've created a number of software tools, like Atlas and Achilles, that are standardized tools for profiling different databases and their data quality. This is not something we're going to solve overnight. I think regulatory bodies are going to be very judicious about what they approve and what they don't. What tends to happen in health care is that as soon as there's some sort of adverse event, or some kind of clinical error, the entire industry gets slapped. Nobody wants that to happen, and I think that's what's stifling for innovation.
Arnaub: We are, hopefully, trying to ... It's a weird world that we sit in right now, where we're trying to get as much AI-related work out there as possible while also being very mindful that successful deployment, once it enters the physician's hands or starts to be part of patient care, is incredibly challenging. That's the next evolution of all of this: the very careful implementation of these algorithms into a clinical decision model.
Is it, or will it be, enough to prove the efficacy of an algorithm or model, or is a full functional description required?
Hugo: I'm glad that you mentioned regulation there, because we have a related question from Stephen. The question is: from a regulatory perspective, is it, or will it be, enough to prove the efficacy of an algorithm or model, or is a full functional description required?
Arnaub: Yeah, I don't know if we have proper guidance around that. I think there are a lot of organizations trying to demystify how the FDA should be thinking about this. A few things are becoming standard in terms of how much you have to prove. One is your benchmark data: are you using ground truth data? Meaning, is it a trusted claims data source? Is it a standardized ontology for EHR data? I think companies grapple with choosing among a whole variety of different data sources and collection methods, and then realize their algorithm isn't as good as they hoped when they go for approval. There are good definitions of what ground truth is, and using it is one way of creating a really strong model.
Arnaub: I think there are other considerations too: what's the intended use of the algorithm? How do we think about it interfacing with a physician? Does it have any downstream patient implications? Is there going to be algorithm bias, meaning are you going to deny care to a certain patient population? That's something the FDA weighs in deciding whether an algorithm is ethical or not.
Arnaub: Then I think there's a whole regulatory approach around refitting models so that they're continuously learning. Scott Gottlieb, who runs the FDA, has talked a lot about how patients will evolve, when companies have to refit a model, and how it should be validated. What is the right refitting cadence? Is it every hour? Every six months? Annually? I feel like a few organizations have taken a stab at creating these lists of guiding-light questions that can help distinguish a good model from a subpar one, and a model that's likely to be implemented and accepted by the clinical community from one that has a great finding but may have some holes in it.
Given that health care is a regulated area, do you feel that AI in health care will be a focus area only for big companies like Google and Facebook, or do small companies have a space?
Hugo: In terms of regulation, we've got a great question from Harsh. Harsh asks: given that health care is a regulated area, do you feel that AI in health care will be a focus area only for big companies like Google and Facebook to be able to make advances, or do small companies have a space? You mentioned earlier that there have been several hundred startups in this space in particular. Maybe you can speak a bit more about it?
Arnaub: Sure. I think it's a really interesting question. I'll give you one example that I think raised a lot of eyebrows, to your point about big tech companies like Google. Facebook actually announced earlier this year that they are looking at using AI to monitor suicide risk and understand which of their users might be more likely to have a suicidal event. That is a highly ethically challenging place for a tech company to play in.
Arnaub: I think that large tech companies, while they have great applications and great computer science, are treading very carefully because they realize that this isn't something that you can just wade into. I think they actually have a tremendous amount of organizational risk as they enter this market. One, because patient care is totally different than selling ads. I think the ability for them to use their science and use the computational power comes at a great risk. They have to evaluate whether it's worth it or not, but they all want to make the world a better place. That's their aspiration.
Arnaub: Beyond the giants, there are so many powerful tech companies. I mentioned voice technologies in this space. Virtual assistant and voice technology companies are going to be major players, and I don't just mean Amazon. Companies like Orbita and Robin AI are doing incredible work, basically trying to help reduce physician documentation burden. These are well-funded, well-capitalized, strongly supported startups doing great things. There's also patient data and risk analytics: companies like NarrativeDx are doing really compelling work, partnering directly with health care systems to do this in a safe and compliant way. So I don't think large tech companies are the only ones that can play in this space.
Arnaub: I think if you are very calculated, methodical about how you enter, if you engage in the right partnerships, a startup ... I mean, there are quite a few startups that have made a lot of headway in this space, in the drug discovery world. These are startups that have raised hundreds of millions of dollars, literally, that are now functioning quite well and successfully. Pharma companies are making multi-year bets on companies like Numerate and BenevolentAI and investing quite a bit of money. This is not just a large tech company space anymore.
What are the main limitations on the rate of adoption of AI in health care?
Hugo: There are actually an overwhelming number of fantastic questions coming through, but we're going to need to wrap up in the next 10 minutes or so. I've got two more questions. The first is from Christopher, and I'm very interested in it. Christopher asks: what are the main limitations on the rate of adoption of AI in health care?
Arnaub: Yeah. That's a good question. We talked a little bit about the policy hurdles and about how we think about approaches. I think the biggest rate-limiting step in health care is going to be the ubiquity of good quality data. This is the challenge that has been plaguing health care for decades now: as soon as a novel data set is released into the health care world, everybody gets really excited about it. As soon as the government set standards for EHRs, electronic health records, the EHR became a competitive domain for any organization that held the records, and the challenge was getting access to that data. It's the same thing with genomics now. We're starting to see biobanks and genomically sequenced data; that is the next domain people are trying to get to. But none of this matters unless the data is linkable, unless there is a standard, unless there is labeled data. We talked a lot about imaging today, but radiologists suffer from the fact that imaging data is stored within these PACS warehouses (PACS is the picture archiving and communication system) and the images aren't labeled, so we don't necessarily know what we're looking at. What that all goes to show is that the biggest hurdle for AI adoption in health care is good quality data, which is why I mentioned standards like FHIR and others trying to create some kind of harmony and consistency in the data in a very chaotic world.
Arnaub: I think the other thing is that hospitals and other organizations that have the data are very much willing to work with outside players, but there's quite a bit of overlap in terms of what companies are promising. We're starting to see a lot of companies tread into different spaces and say they're doing compound development, or molecule identification, or target validation. They're trying to be jacks of all trades, and I think that obfuscates what a company actually does.
Arnaub: My suggestion is to have a very clear, refined focus on what you're doing and what you're good at, rather than wading into a lot of other murky waters. With that said, the market is so hot right now that you will see a tremendous amount of partnership and opportunity for startups. The biggest limiting steps are access to the data, finding the right partner, being able to demonstrate a use case, and then the application of that algorithm within clinical practice.
Do you think artificial intelligence will democratize medicine?
Hugo: I've got one more question from one of our listeners, and then I'm going ask you one final question. This is from Gamal, and it's a relatively general question that I'd like you to interpret in whatever way you see fit. Do you think artificial intelligence will democratize medicine?
Arnaub: Oh, interesting. I think we will get to a place where ... I'm going to be liberal about the use of the word democratize. I think what it means here is enabling access to care; perhaps that's the definition we'll choose to use. Patients are increasingly interfacing with the health system in different ways, and the vast majority of patients, almost 90% now, go online to look up health information. There are also a lot of ways for patients to share their data now, with physicians and with technology companies. We all know the work Apple is doing with HealthKit and ResearchKit to try to get more access to data. I think there is going to be a greater role for AI, and for technology generally, in helping with access to care. Hopefully I'm addressing your question, but feel free to rephrase.
Arnaub: At the same time the US is suffering from tremendous endemic health policy challenges that I don't think AI will fix. I think AI will enable and help certain things. It will maybe power diagnosis. Maybe it will improve better health outcomes over time. There's still a good portion of the population that will never be AI enabled, for lack of a better word, or have access to health care resources. I think that's the greatest hurdle in our system.
Call to Action
Hugo: That definitely does answer the question. My final question for you is: for all our listeners out there, do you have a final call to action?
Arnaub: Yeah. I think we've talked a lot about challenges, and we've also talked a lot about promise and where the industry is going. There's this concept of fixing a lot of low-hanging-fruit problems. We've picked on a lot of the sexier things like drug development, but our health care system suffers from tremendous waste. These are enormous problems, and AI can fix a lot of them, like insurance and billing claims. My old mentors used to say that a lot of the most lucrative work in health care is also the least sexy, the back-office application type of stuff.
Arnaub: If we're able to better predict waste or fraud, or improve billing and documentation processes, those are incredibly important problems to go after, and I think there's something there. You should use AI and your powers to go fix them.
Arnaub: I think the other thing is that these problems shouldn't be tackled, or fixed, in isolation. You are going to see a lot of different and perhaps unique partnerships take place in health care: hospitals and tech companies and patient groups working with startups. That whole model has flipped on its head. I encourage everyone to be very inventive about how they want to work with different parties. There are a lot of non-traditional folks getting into the health care space, so think about where the intersections lie and where the cross-functionality lies. That's where you usually find more inventive solutions, as opposed to working through the same channels.
Hugo: Thanks, Arnaub.
Arnaub: Yeah, thank you, Hugo. Thanks to DataCamp for the time. I really appreciate the opportunity.
Hugo: Absolutely. All right.
Arnaub: Thank you.