DocWire News spoke with Dr. Gidi Stein, a practicing physician, computer scientist and CEO and Co-Founder of MedAware, a company that has developed a platform using artificial intelligence (AI) and machine learning to accurately identify potential medication-related errors, evolving adverse drug events and even opioid addiction before a patient can be harmed.
Dr. Stein addressed how AI can be used by health systems to reduce alert fatigue and physician burnout and to enhance patient care. He also discussed how the COVID-19 pandemic has affected physician burnout and how this impacts patient safety.
DocWire News: Can you give us background on yourself and the company, MedAware?
Dr. Gidi Stein: Sure. So I started off as a software engineer many years ago and was co-founder and CTO of several failed startups in the early ’90s. At some point I had enough of the startup world, vowed never to do it again, and went to medical school. I was the oldest medical student at Tel Aviv University at the time. I graduated, specialized in internal medicine, and later held executive roles in one of Israel’s leading hospitals. I also have what you could call a PhD in computational biology. I teach medicine, I practice medicine. These are the things I really like. And for the last few years, I’ve been CEO of a company named MedAware.
The idea behind MedAware came purely by chance. I never expected to go back to the startup business after I vowed never to do it again. But a few years ago, here in Israel, a nine-year-old boy died simply because his primary care physician clicked on the wrong entry in the pull-down list of medications and prescribed the wrong drug by mistake. There were no contraindications, no dosage issues, no interactions, just simply the wrong drug for the wrong patient. One would have thought there would be some kind of a spellchecker to prevent these typos from happening, but apparently there wasn’t, and the boy died a few days after taking the medication. Now, when you think about the tragedy, it wasn’t bad judgment, it was a typo. And in medical practice, typos can kill you. So I decided I should do something about it. And here we are today.
DocWire News: Can you talk to us about this AI-based platform that identifies potential medication-related errors?
Dr. Gidi Stein: So going back to the story that actually initiated the company: just the wrong drug for the wrong patient. How would you identify that? I mean, you could write rules that say, don’t give this with that, but it would be endless, right? There are something like 2,000 different medications, and just the combinations run into the millions. So how would you know what is normal, what is not normal, or what is an outlier in terms of the relevance of that medication to that specific patient? With the use of artificial intelligence and statistical methodologies it is suddenly possible, because we can harvest millions and millions of electronic medical records, analyze them, and let the machine, the AI engine, basically learn what the normal patterns of prescribing are and identify outliers as potential risks. So in the example of the boy, he received anticoagulation, a blood thinner, instead of his asthma medication.
Now, if you look at nine-year-old patients at a pediatric clinic and ask what the likelihood is of their being prescribed anticoagulation, the chances are very, very, very low. Look at the flip side and ask what the characteristics are of the patients who are likely to receive anticoagulation: mostly older people with cardiovascular problems, where we have lab tests or specialty visits… clues that give us a hint that they may be, or are likely to be, receiving anticoagulants or blood thinners. And this nine-year-old boy, who is practically healthy, is again an outlier relative to that population. So we can see, along a different dimension, that that medication is an outlier relative to the profile of that patient, not even clinically but statistically. And this is again based on analysis of millions and millions of patients similar to that boy.
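To make the outlier idea above concrete, here is a minimal sketch in Python of prescription outlier detection based on how often a drug appears for patients with a similar profile. It is not MedAware’s actual model; the profile features, the threshold, and the `load_historical_orders` loader are illustrative assumptions only.

```python
# Illustrative sketch only -- not MedAware's actual model. It flags a
# prescription as a statistical outlier when the drug is (almost) never
# prescribed to patients with a similar profile.
from collections import Counter, defaultdict

class PrescriptionOutlierModel:
    def __init__(self, outlier_threshold=1e-4):
        self.outlier_threshold = outlier_threshold
        self.drug_counts = defaultdict(Counter)  # profile key -> Counter of drugs
        self.profile_totals = Counter()          # profile key -> total prescriptions

    @staticmethod
    def profile_key(patient):
        # Coarse patient profile: age band plus chronic-condition flags (assumed features).
        return (patient["age"] // 10, frozenset(patient.get("conditions", [])))

    def fit(self, historical_orders):
        # historical_orders: iterable of (patient, drug) pairs from past records.
        for patient, drug in historical_orders:
            key = self.profile_key(patient)
            self.drug_counts[key][drug] += 1
            self.profile_totals[key] += 1

    def is_outlier(self, patient, drug):
        # Flag the order if this drug is rarely seen for similar patients.
        key = self.profile_key(patient)
        total = self.profile_totals[key]
        if total == 0:
            return False  # no data for this profile; stay silent rather than guess
        return self.drug_counts[key][drug] / total < self.outlier_threshold

# Hypothetical usage: a healthy nine-year-old ordered an anticoagulant by mistake.
# model = PrescriptionOutlierModel()
# model.fit(load_historical_orders())                                 # assumed loader
# model.is_outlier({"age": 9, "conditions": ["asthma"]}, "warfarin")  # -> True
```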
And this was the foundation of the company, and this is what we started with. Along the way, we have developed more and more models that are able to detect different kinds of errors. So we basically started by asking, how do clinicians make errors? What is the mechanism of an error? One mechanism is choosing the wrong drug in the pull-down menu, just clicking the wrong button, so we built a model for that. And then we said, okay, but today in the EMR world, the physician can give the right drug to the wrong patient just by being in the wrong file by mistake, so we built a model around that. And then we dove into identifying adverse drug events, contraindications, monitoring the patients throughout the duration of treatment, and trying to identify different risks along the way.
So suddenly we went from a company that develops decision support at the point of prescribing to one that basically monitors the patient’s profile throughout the duration of treatment, or even their lifespan, looking continuously and actively for errors or for different kinds of risks that may emerge later on. An example would be a physician prescribing a medication that was perfectly good for that patient, with the prescription written for a month. Maybe two weeks later, the patient underwent a lab test, and that lab test result made one of the medications contraindicated for that patient. Today there is no mechanism to connect the two, because the patient is out of context. He is not in the physician’s office. And the lab test may have been ordered by another physician, not the one who prescribed the medication. But we’re able to match all these data points and understand that we now have a risk. The question is, what workflows are needed to make sure that risk is mitigated? But that’s a second-level problem. The first-level problem is how to identify that risk, and this is basically what we’re doing.
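As a rough illustration of the monitoring idea Dr. Stein describes, the sketch below re-checks a patient’s active prescriptions whenever a new lab result arrives. The rule table, record shapes, and thresholds are hypothetical placeholders, not MedAware’s logic.

```python
# Hedged sketch: re-check active prescriptions when a new lab result arrives.
# The rules and data shapes below are invented for illustration only.
from datetime import date

# Hypothetical rule table: drug -> (lab test name, predicate over the lab value).
CONTRAINDICATION_RULES = {
    "metformin": ("eGFR", lambda value: value < 30),             # renal impairment
    "spironolactone": ("potassium", lambda value: value > 5.5),  # hyperkalemia
}

def check_new_lab_result(active_prescriptions, lab_test, lab_value):
    """Return alerts for active prescriptions contraindicated by a new lab result."""
    alerts = []
    for rx in active_prescriptions:
        rule = CONTRAINDICATION_RULES.get(rx["drug"])
        if rule is None:
            continue
        test_name, is_contraindicated = rule
        if test_name == lab_test and is_contraindicated(lab_value):
            alerts.append(
                f"{rx['drug']} (started {rx['start']}) may now be "
                f"contraindicated: {lab_test} = {lab_value}"
            )
    return alerts

# Example: a lab result two weeks after prescribing triggers an alert,
# even though the original order was perfectly appropriate at the time.
active = [{"drug": "metformin", "start": date(2021, 3, 1)}]
print(check_new_lab_result(active, "eGFR", 24.0))
```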
DocWire News: What impact has the COVID-19 pandemic had on physician burnout, and in turn, how does physician burnout impact patient safety?
Dr. Gidi Stein: So let’s start with the second part of the question. This is something that we actually examined in an academic publication, published I think four or five months ago in JAMIA, the Journal of the American Medical Informatics Association. There we looked at thousands of physicians who wrote 1.6 million prescriptions at [inaudible 00:07:13], a very large academic medical center. We basically used our system as a surrogate marker for errors, because we know it is very, very accurate in a retrospective cohort. And then we measured the work burden on that physician, the sleep deprivation, and the lack of experience. So, not surprisingly, we found that the more continuous shifts physicians are working, and the more workload there is in a specific shift, the more likely these physicians are to prescribe erroneously. Not only that, but we found that when physicians prescribe medications they are not used to prescribing, for the first time, that also triples the likelihood of erroneous prescribing.
And this is in line with other published data, so this is not new, but we’re the first to publish it at scale, across millions of prescriptions. Now, anyone who went through residency, or has walked into the ER at three o’clock in the morning and seen the doctor, definitely understands the situation. You don’t want the doctor treating you at three o’clock in the morning after he hasn’t slept for two days. But then came COVID. And with COVID, two things emerged. First of all, the workload was tremendous, especially last year, during the escalation of the pandemic. ERs were overflowing, ICUs were overflowing. The nurses, physicians, technicians, and pharmacists were basically working at their limits around the clock, all the catastrophes waiting to happen and then some. And then you put on top of it the repositioning of physicians into places where they are not used to working, prescribing medications they are not used to prescribing, for COVID and other conditions, just because of a lack of manpower, and you get the perfect recipe for catastrophe. And we don’t have data on the magnitude of that catastrophe yet, but I’m sure it will come out, because anyone who has been there can tell you a lot about it.
DocWire News: How can MedAware’s AI-based system be used to reduce physician burnout and enhance patient care?
Dr. Gidi Stein: So there are two components to it. One, again well documented, is that many of the current decision support tools, not only the electronic medical records themselves, but the decision support tools that were built to help physicians and save them from making mistakes, are in many cases doing the opposite. The alert burden is high, with 20% or more of prescriptions being flagged by drug-interaction databases, allergy checkers, et cetera, while more than 95% of these alerts are false alarms that are completely disregarded by clinicians. This is what causes alert fatigue. And if you take a physician who hasn’t slept for a few days, at two o’clock in the morning, he would never listen to it. So basically these tools are not saving the physicians; they are adding to the fatigue and basically preventing them from any ability to really capture the ills [inaudible 00:10:53].
Now, if we eliminate that, if we take this out of the equation, well, it won’t give them hours of sleep and it won’t replace the electronic medical records themselves. But taking that buzz, that noise in the back of their heads, the hundreds and hundreds of false alarms every night, out of their ears, and putting up high-quality alerts only when needed… once a day, twice a day, only when some real catastrophe is about to happen… then suddenly the stress, the anxiety, and the disbelief are significantly diminished. So that’s one part, regarding the fatigue of the physician. The second part is reducing erroneous prescribing. We have seen that once we reduce the number of alerts by an order of magnitude and enhance the clinical relevance, suddenly it’s not white noise. These are blips that physicians and clinicians actually respond to. And by that we can prevent errors, because even if you’re giving perfect alerts that nobody listens to, you’ve done nothing.
The only real measurement is how you change the practice of the clinicians following the intervention you made. And we know that in almost half of the cases, physicians change their prescribing following the alerts we provide to them, even in the middle of the night. By taking these two approaches, reducing the number of alerts and the fatigue of the clinicians on one hand, and enhancing the clinical relevance and accuracy of the intervention so physicians will actually act upon it on the other, you get a win-win situation. And this is how we approach that problem.
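As a simple illustration of the trade-off between alert volume and clinical relevance described above, here is a hedged sketch of gating candidate alerts on an estimated relevance score so that only high-confidence ones reach the clinician. The scoring values and threshold are assumptions for illustration, not MedAware’s actual pipeline.

```python
# Hedged sketch: surface only high-relevance alerts to reduce alert fatigue.
# The relevance scores and threshold are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Alert:
    message: str
    relevance: float  # estimated probability the alert is clinically actionable (0-1)

def filter_alerts(candidate_alerts, min_relevance=0.8):
    """Keep only alerts likely enough to be acted on; drop the white noise."""
    return [a for a in candidate_alerts if a.relevance >= min_relevance]

candidates = [
    Alert("Possible minor interaction, rarely significant", relevance=0.05),
    Alert("Duplicate therapy class", relevance=0.30),
    Alert("Drug is a statistical outlier for this patient profile", relevance=0.92),
]

for alert in filter_alerts(candidates):
    print("ALERT:", alert.message)  # only the high-relevance outlier is shown
```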
DocWire News: Closing thoughts?
Dr. Gidi Stein: So I just want to touch on opioid dependency, which is a huge problem in the U.S. and also around the world. We see that although we are mostly concentrated on the patients who are at high risk of overdose, really trying to identify them and wrap healthcare services around them, they are basically only the tip of the iceberg. These are the ones who are going to collapse soon. But for every patient with diagnosed opioid use disorder, there are at least three more who are undiagnosed, and many more who are on the verge: not at risk of dying tomorrow, but the levels of opioids they are taking are driving them to lose their families, lose their jobs, and get involved in car accidents. These are the patients we’re trying to target, and whose dependency we’re trying to prevent.
The secret is identifying the patient’s future risk of dependency before the first opioid prescription is provided, and, by doing so, providing the prescriber, who may be an innocent bystander, an ER doc who just finished school or something like that, with the insight to say: hey, this specific individual is at high risk of future dependency if you give him this prescription now. So either choose a non-opioid medication, maybe reduce the dose, maybe call him in for a follow-up next week instead of next month, just to make sure that he’s well. And by providing these tools to clinicians, we’re empowering them to reduce the overall number of dependencies and battle this terrible epidemic while it’s around us.
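To make the “risk before the first prescription” idea concrete, below is a minimal sketch of a risk-scoring gate consulted at prescribing time. The risk factors, weights, and threshold are invented for illustration; a real model would be learned from outcome data and is not described in the interview.

```python
# Hedged sketch: estimate dependency risk before the first opioid prescription
# and suggest safer options when the risk is high. All factors, weights, and
# the threshold are illustrative assumptions, not a validated model.
def opioid_dependency_risk(patient):
    """Return a crude 0-1 risk score from a few illustrative risk factors."""
    score = 0.0
    if patient.get("history_of_substance_use"):
        score += 0.4
    if patient.get("prior_benzodiazepine_use"):
        score += 0.2
    if patient.get("mental_health_diagnosis"):
        score += 0.2
    if patient.get("age", 99) < 30:
        score += 0.1
    return min(score, 1.0)

def prescribing_guidance(patient, risk_threshold=0.5):
    """Surface a suggestion to the prescriber when the estimated risk is high."""
    if opioid_dependency_risk(patient) >= risk_threshold:
        return ("High estimated dependency risk: consider a non-opioid alternative, "
                "a lower dose, or an earlier follow-up.")
    return "No elevated dependency risk flagged."

# Example: a young patient with a history of substance use gets flagged.
print(prescribing_guidance({"age": 24, "history_of_substance_use": True}))
```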