
Using AI to Improve Data Quality in Healthcare

In this episode, we speak with Nate Fox, CTO and Co-Founder at Ribbon Health, and Sunna Jo, resident data scientist at Ribbon Health, about how AI is improving data quality in healthcare.
Updated Feb 2, 2023

Guest
Nate Fox

Nate Fox is the Co-Founder and CTO at Ribbon. Ribbon's mission is to build the infrastructure to transform billions of care decisions. 


Guest
Sunna Jo, MD

Sunna Jo is a medical doctor and data scientist at Ribbon. 


Host
Richie Cotton

Richie helps individuals and organizations get better at using data and AI. He's been a data scientist since before it was called data science, and has written two books and created many DataCamp courses on the subject. He is a host of the DataFramed podcast, and runs DataCamp's webinar program.

Key Quotes

I leverage my clinical experience daily, which is both amazing and motivating. Because of my clinical experience, I am able to provide an additional lens on the data from the perspective of a healthcare provider, and give my team the context for the data so they can interpret and translate it in a way that makes sense. For example, for one of our provider performance products, we work really closely with medical codes. These are designated codes that define certain diagnoses and procedures. My team is cleaning and building a model on these same codes that I used to bill for my own visits as a provider. Being able to recognize and understand the insights that we can get from these codes has just been a great reminder of the value of my experience.

Data engineering is a huge part of making this data usable. I think it requires a lot of creativity to think about "How can you scalably ingest thousands of schemas?" For example, address data can be formatted a number of different ways, so we need to standardize that data across all the different formats we see across different data sources. We built a tool that helps with onboarding new data sources by mapping all the different fields to our own standard fields. Before, it would take us 20-30 minutes in Python to code up just one new data source, so imagine the mountain of work that's created when you have hundreds of sources. Now, we have a simple UI that even starts to guess some initial mappings for you, reducing a 20-to-30-minute data mapping process per new data source to just 10-15 seconds, which makes a lot of our operations and our data adjustment processes a lot smoother and far more scalable.
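To make the field-mapping idea in that quote concrete, here is a minimal Python sketch of auto-suggesting initial mappings from a new source's column names to a set of standard fields using simple string similarity. The standard fields, example columns, and similarity threshold are hypothetical illustrations, not Ribbon's actual tool or schema.

```python
# Minimal sketch: guess initial field mappings for a new data source.
# The standard fields, source columns, and threshold are hypothetical examples.
from difflib import SequenceMatcher

STANDARD_FIELDS = ["first_name", "last_name", "npi", "address_line_1",
                   "city", "state", "zip_code", "phone"]

def normalize(name: str) -> str:
    """Lowercase and strip separators so 'Zip Code' and 'zip_code' compare equal."""
    return "".join(ch for ch in name.lower() if ch.isalnum())

def suggest_mappings(source_columns, standard_fields=STANDARD_FIELDS, threshold=0.5):
    """Return {source_column: best-matching standard field, or None if no good match}."""
    suggestions = {}
    for col in source_columns:
        best_field, best_score = None, 0.0
        for field in standard_fields:
            score = SequenceMatcher(None, normalize(col), normalize(field)).ratio()
            if score > best_score:
                best_field, best_score = field, score
        suggestions[col] = best_field if best_score >= threshold else None
    return suggestions

if __name__ == "__main__":
    new_source = ["Provider First Name", "ProviderLastName", "NPI Number", "Addr1", "ZIP"]
    for col, field in suggest_mappings(new_source).items():
        print(f"{col!r} -> {field}")
```

A human would still confirm or correct each suggestion in the UI; the point is only to show how even simple similarity scoring can turn a per-source coding task into a quick review step.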

Key Takeaways

1

Data engineering is very valuable when it comes to the scalability of data cleaning. It’s essential to think creatively about how to solve data quality challenges so that your solutions work reliably at scale.

2

It's helpful to understand the context of the data: why the data was produced in the first place, who sits behind it, and what their intentions are. That context can change the entire process, from how you clean and analyze the data to how you consider anomalies and edge cases.

3

Having a strong and clear operating definition for what is considered good quality data can help you more effectively work with messy data, transform it into usable data, and draw meaningful insights from it.
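One way to act on that last takeaway is to write the operating definition down as code. The Python sketch below expresses a hypothetical definition of a "good" provider record as a few explicit completeness and validity checks; the fields and rules are illustrative assumptions, not Ribbon's actual quality criteria.

```python
# Minimal sketch: codify an operating definition of "good quality" provider data
# as testable checks. Fields and rules here are hypothetical examples only.
import re

def quality_issues(record: dict) -> list[str]:
    """Return a list of human-readable quality issues for one provider record."""
    issues = []
    # Completeness: required fields must be present and non-empty.
    for field in ("npi", "name", "phone", "address"):
        if not record.get(field):
            issues.append(f"missing {field}")
    # Validity: an NPI is a 10-digit identifier.
    if record.get("npi") and not re.fullmatch(r"\d{10}", str(record["npi"])):
        issues.append("npi is not a 10-digit number")
    # Validity: a US phone number should have 10 digits once formatting is stripped.
    if record.get("phone"):
        digits = re.sub(r"\D", "", record["phone"])
        if len(digits) != 10:
            issues.append("phone does not have 10 digits")
    return issues

if __name__ == "__main__":
    print(quality_issues({"npi": "1234567890", "name": "Dr. Example",
                          "phone": "(555) 010-1234"}))
    # -> ['missing address']
```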

Related

blog

How Data Science is Transforming Healthcare

The integrated use of data science and machine learning in healthcare has many applications for improving patient care, business processes and operations, and pharmaceuticals. But the healthcare industry faces considerable challenges in data quality and infrastructure.

Matthew Przybyla

9 min

podcast

Data & AI for Improving Patient Outcomes with Terry Myerson, CEO at Truveta

Richie and Terry explore the current state of health records, data challenges including privacy and accessibility, data silos and fragmentation, AI and NLP for fragmented data, regulatory grade AI, the future of healthcare and much more.

Richie Cotton

39 min

podcast

AI in Healthcare, an Insider's Account

Arnaub Chatterjee, a Senior Expert and Associate Partner in the Pharmaceutical and Medical Products group at McKinsey & Company, discusses cutting through the hype about artificial intelligence (AI) and machine learning (ML) in healthcare.

Hugo Bowne-Anderson

62 min

podcast

[Radar Recap] Scaling Data Quality in the Age of Generative AI

Barr Moses, CEO of Monte Carlo Data, Prukalpa Sankar, Cofounder at Atlan, and George Fraser, CEO at Fivetran, discuss the nuances of scaling data quality for generative AI applications, highlighting the unique challenges and considerations that come into play.

Adel Nehme

41 min

podcast

How Data and AI are Changing Data Management with Jamie Lerner, CEO, President, and Chairman at Quantum

Richie and Jamie explore AI in the movie industry, AI in sports, business and scientific research, AI ethics, infrastructure and data management, challenges of working with AI in video, excitement vs fear in AI and much more.

Richie Cotton

48 min

podcast

Monetizing Data & AI with Vin Vashishta, Founder & AI Advisor at V Squared, & Tiffany Perkins-Munn, MD & Head of Data & Analytics at JPMC

Richie, Vin, and Tiffany explore the challenges of monetizing data and AI projects, the importance of aligning technical and business objectives to keep outputs focused on core business goals, how to assess your organization's data and AI maturity, why long-term vision and strategy matter, and much more.

Richie Cotton

61 min
