
AI in the ICU: OLVG and Maasstad turn to Robo Doc for a 2nd opinion

A so-called artificial intelligence from software company Pacmed is going to assist Amsterdam ICU doctors. No reason to panic just yet: the “AI” will not be deciding whether you go under the knife or get written off. The idea is that the system acts as an extra check when a doctor decides whether a patient can be discharged from the ICU.

AI calculates probability of death or return to ICU

That is basically all the system does. When a doctor thinks it is time to make a decision, they ask the system how likely it is that the patient would return to the ICU or die within two weeks if discharged now. That percentage is based on the patient’s electronic record, compared against those of all other recently admitted patients in that hospital.

For the time being, the doctor therefore remains in charge of the ICU. On average, that probability sits somewhere between 5 and 10 percent for patients discharged from the ICU. If the “AI” suddenly comes up with a much higher percentage, that can be a reason for the doctor to reconsider the decision.
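To make that concrete: a risk score like this is typically a classifier trained on features taken from the electronic record, which then outputs a probability for the new patient. Below is a minimal sketch in Python with entirely hypothetical features (age, length of stay, heart rate, lactate) and synthetic data; it illustrates the general technique, not Pacmed’s actual model.

```python
# Illustrative sketch only: a generic "death or readmission within two weeks"
# classifier. The features, data, and model choice are hypothetical;
# Pacmed's actual model is not described here.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Hypothetical features per previously admitted patient:
# age (years), ICU stay (days), heart rate (bpm), lactate (mmol/L).
n_patients = 500
X = np.column_stack([
    rng.normal(65, 15, n_patients),
    rng.exponential(4, n_patients),
    rng.normal(85, 12, n_patients),
    rng.normal(1.5, 0.8, n_patients),
])
# Synthetic outcome: 1 = died or returned to the ICU within two weeks.
y = rng.binomial(1, 0.08, n_patients)

model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X, y)

# Risk estimate for the patient the doctor is considering discharging.
new_patient = np.array([[72, 6.0, 95, 2.4]])
risk = model.predict_proba(new_patient)[0, 1]
print(f"Estimated probability of death or readmission: {risk:.1%}")
```

The doctor then sees that single percentage next to their own judgement; the model itself takes no decision.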

An “AI” can process far more data than a human brain ever will. Computer software is also better at recognizing patterns, provided it knows how to “look” at the data. The idea is that the software can flag a problem at an early stage that a flesh-and-blood doctor misses.

Shorter ICU visits and less manpower

The software is not a panacea, then. A well-trained eye remains necessary to make the right call. The software can, however, help shorten the average ICU stay. Such a stay is not only drastic for the patient; it is also very intensive, both financially and in terms of manpower. The latter weighs especially heavily given the acute staff shortages across the entire medical world, and in ICUs in particular.

But is it safe?

One of the most worried voices in the “AI” world comes from scientific researchers. It is nice that an “AI” model can convince your teacher that you did your homework yourself, but not much hinges on it. If the model gets it wrong, usually no great harm is done. In such cases it is not particularly relevant to examine why the “AI” made that specific mistake.

When it comes to medical decision-making, the situation is different. Fortunately, the choice of whether or not someone can be discharged from the ICU ultimately rests with the doctor. But when an “AI” weighs in on that decision, it is crucial that we fully understand how the model arrived at its answer. And that is exactly where things often go wrong right now.

Black box models

What we call artificial intelligences are nothing more than algorithms that predict, based on their input (training data and a prompt), what the next word, pixel, and so on should probably be. As current models become more complex, researchers and developers find it increasingly difficult to understand how such an algorithm arrives at an answer. The focus is mainly on generating the desired answer.
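To make that slightly more tangible: “predicting the next word” boils down to assigning a probability to every possible continuation and then picking (or sampling) one. A toy sketch in Python, with a tiny bigram model standing in for a real, vastly larger network:

```python
# Toy illustration of next-word prediction: a bigram model built from a tiny
# corpus. Real models are enormously larger, but the principle is the same:
# given the input so far, output a probability for each candidate next token.
from collections import Counter, defaultdict

corpus = ("the patient can be discharged "
          "the patient must stay "
          "the patient can go home").split()

# Count which word follows which in the training text.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def next_word_probabilities(word):
    counts = following[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_word_probabilities("patient"))
# {'can': 0.666..., 'must': 0.333...}: the model can only reproduce
# patterns that were present in its training data.
```

The only thing this toy model “knows” is which word followed which in its training text, which is also why any bias in that text ends up straight in its answers.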

Usually, the only check is whether an answer is in line with expectations, an answer that is by definition limited by the biases present in the dataset. It is not without reason that more and more voices are calling for the use of “AI” by, for example, governments to meet strict requirements.

Not getting every bias out of your dataset is fine when you are dealing with a human brain that is capable of reasoning. At least, that is what we assume. (Let’s not get too philosophical!) But that is not what an “AI” does. An “AI” just keeps guessing until a human says, “That’ll do, pig, that’ll do.” Nice for generating an abstract artwork, but you may ask yourself whether something like this really belongs in an ICU.
