Explainability in Health Solutions, or Taking It from Benchmarking to an Acceptable Reality

  • Explainability is of utmost importance in health: it is needed to explain the benefit to the patient… and, not least, to gain his or her trust.
  • Explainability concerns the algorithmic solution and the data itself:

Known Data

  • Concerning the data, as much of it as possible should be labelled, meaning that a tag is added to each image or piece of clinical data. This is a tedious, labor-intensive task.
    • It enables a machine learning method called supervised learning.
  • However, there is a way to label as much data as possible through a technique called pseudo-labelling: after part of the data has been labelled by hand, a model trained on that labelled subset is used to assign labels to the remaining, unlabelled data.
    • The labelled and pseudo-labelled datasets are then combined and used to train the neural network. This technique is called semi-supervised learning; a minimal sketch follows this list.
  • Learning on unlabelled data is useful for finding patterns, but it complicates explainability: since no specific question has been posed, no rational explanation can be tied to one. Even supervised learning raises explainability problems, but it does go one step further in the explainability paradigm.
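
The pseudo-labelling workflow described above can be summarised in a few lines of code. The following is a minimal sketch, not an actual clinical pipeline: it uses scikit-learn, a toy tabular dataset in place of clinical images, and a simple logistic regression instead of a neural network; the 0.95 confidence threshold is an arbitrary assumption.

```python
# Minimal pseudo-labelling sketch (illustrative only, not a clinical pipeline).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Toy data: pretend only the first 200 of 2000 samples were hand-labelled.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_labelled, y_labelled = X[:200], y[:200]
X_unlabelled = X[200:]

# 1. Train a first model on the small labelled set (supervised learning).
model = LogisticRegression(max_iter=1000).fit(X_labelled, y_labelled)

# 2. Predict labels for the unlabelled data and keep only confident predictions.
probs = model.predict_proba(X_unlabelled)
confident = probs.max(axis=1) >= 0.95          # confidence threshold is an assumption
pseudo_labels = probs.argmax(axis=1)[confident]

# 3. Combine labelled and pseudo-labelled data and retrain (semi-supervised learning).
X_combined = np.vstack([X_labelled, X_unlabelled[confident]])
y_combined = np.concatenate([y_labelled, pseudo_labels])
final_model = LogisticRegression(max_iter=1000).fit(X_combined, y_combined)
```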

Data which makes sense to resolve a given problem: Bias

  • An algorithm can only train and find solutions based on the data it receives.
  • This means that if there is not enough data on, say, dark skin tones or women, the model will likely underperform on Black women (see the subgroup check sketched after this list).
  • This is an inherent problem: our world is rich in data, but, as with everything, it is unevenly distributed, with most of it covering Caucasian and Chinese populations.
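
One practical way to surface this kind of bias is to evaluate a trained model separately for each demographic group. The snippet below is a minimal, hypothetical sketch: the group names, labels and predictions are made up for illustration, and in practice they would come from a held-out test set.

```python
# Per-subgroup performance check (illustrative only; all values are hypothetical).
import pandas as pd

results = pd.DataFrame({
    "group":  ["Caucasian", "Caucasian", "Black", "Black", "Chinese", "Chinese"],
    "y_true": [1, 0, 1, 0, 1, 0],
    "y_pred": [1, 0, 0, 1, 1, 0],
})

# Accuracy computed separately for each demographic group: a large gap between
# groups is a warning sign that the training data under-represented some of them.
per_group_accuracy = (
    results.assign(correct=results["y_true"] == results["y_pred"])
           .groupby("group")["correct"]
           .mean()
)
print(per_group_accuracy)
```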

Explainable algorithms

Below we explain why artificial neural networks lack a logical, human-readable explanation:

  • The infamous “black box”
    • This is a problem for AI-developed solutions in health.
    • Engineers would like us to believe that only the correct solution obtained through the algorithm matters. [The word "algorithm" derives from al-Khwarizmi, a Persian scholar, so the term has been around since the 9th century.] Teach your child to multiply 3 by 4 and the answer will be 12; enter it on a calculator and you will get the same answer. However, the mechanism by which these answers were obtained will not be known.
    • However, in medicine it's not that simple. It is not heads or tails, as if you were flipping a coin: there is a gray zone, and that gray zone constitutes the art of medicine. A sketch of one common way to probe a black-box model follows this list.
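
The black box can nevertheless be probed from the outside. The sketch below shows one common model-agnostic technique, permutation feature importance, applied to a toy random forest; it is a generic illustration of how such probing works, not a method proposed in this article.

```python
# Probing a "black box" with permutation feature importance (illustrative only).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A non-linear model whose internal reasoning is hard to read directly.
black_box = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much test accuracy drops:
# features whose shuffling hurts most are the ones the model relies on.
result = permutation_importance(black_box, X_test, y_test, n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature {i}: importance {importance:.3f}")
```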

Conclusion

  • Explainability is thus a must in healthcare. Whatever the technique, explainability must be sought, while keeping in mind that explanation is not causation.
  • Doctors take the Hippocratic oath; algorithms do not. As is being done in hospitals all over the world, it is up to Healthcare Providers (HCPs), who deal with reality, to determine what moves from a research lab into a clinical solution… with plenty of gathering and publishing of evidence in between!
