Letting ML algorithms dictate healthcare can be dangerous

I recently read an article on WIRED that gave me pause. I currently work at a company where we use machine learning models for symptom checking. Our goal is to help the patient understand the cause of their symptoms and where they can find appropriate care. We want to make the healthcare experience better for everyone!

But the WIRED article details a case where an ML algorithm was used to determine a patient’s risk of opioid abuse. On the face of it, this seems like a good thing – we don’t want to exacerbate the opioid crisis. But if doctors rely on a black-box machine learning model that doesn’t reveal how opioid abuse risk is calculated, this can backfire easily. Patients who depend on painkillers to manage their health conditions can suddenly face hurdles to their care. Lack of insight into why medication was denied leads to confusion, and restricting access to proper care can actually increase the risk of overdose. Even pets with health problems can be cut off from their meds if their caregiver is labeled “risky.” Having recently adopted a senior cat, I know my heart would break if she were in pain – especially because of a machine learning model that won’t tell me why I can’t help her.

It’s never a good idea to depend wholly on an ML model without knowing what the input data are and how they are being used to make predictions. I’ve written about this previously with student test scores – AI can become opaque and non-generalizable quickly. We need more explainability and transparency in ML algorithms before we place our faith in them. In healthcare especially, AI predictions should be taken not with a grain of salt but with a spoonful. I care greatly about making the patient journey more accessible, and that comes with humans staying in the loop and using data wisely.
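
To make that concrete, here’s a minimal sketch of the kind of transparency I mean. This isn’t our code or any real risk-scoring tool – the feature names, data, and labels below are entirely made up – but with an interpretable model you can at least see which inputs push a “risk” score up or down for a given patient, which is exactly what a black box denies you.

```python
# A minimal, illustrative sketch: an interpretable model lets you inspect
# how each input contributes to a prediction. All features and data here
# are fabricated for demonstration purposes only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical features a risk model might consume (purely illustrative)
feature_names = ["num_prescribers", "num_pharmacies", "days_supplied", "prior_er_visits"]
X = rng.poisson(lam=[2, 1, 30, 1], size=(500, 4)).astype(float)

# Fabricated labels just so the model has something to fit
y = (X[:, 0] + X[:, 1] + rng.normal(0, 1, 500) > 4).astype(int)

model = LogisticRegression().fit(X, y)

# For one patient, show how each feature pushes the score up or down
patient = X[0]
contributions = model.coef_[0] * patient
for name, value, contrib in zip(feature_names, patient, contributions):
    print(f"{name}={value:.0f} contributes {contrib:+.2f} to the log-odds")
print(f"intercept: {model.intercept_[0]:+.2f}")
```

A patient (or their doctor) could look at output like this and ask, “wait, why is my cat’s prescription counting against me?” – a question you simply can’t ask of a model that only returns a score.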