Artificial intelligence is already widespread across many fields, and it also has the potential to be useful in health care. A new study shows that doctors who use AI when examining tissue samples can work faster without compromising the quality of their work.
Does AI inspire enough trust to be used in medicine?
A major challenge for those developing AI solutions for health care is that artificial intelligence does not always perform well. The trial-and-error process can be tedious and even frustrating.
Martin Lindvall, who holds a Ph.D. in AI software and autonomous systems from Linköping University, Sweden, has been investigating potential uses of AI in health care and comments on these processes.
“We have learned to expect AI to make mistakes. However, we also know that we can improve it over time by telling it when it is wrong or right. While we are aware of this flawed nature of AI, we must ensure that these systems are efficient and effective for users. It is also important that users feel that machine learning adds something positive,” he commented.
Approaches like machine learning involve training an AI to find patterns in large amounts of data. An AI of this kind can eventually be trained to find cancer or other tissue abnormalities in medical images, for example. Yet such solutions still meet with skepticism. “Computer programs using machine learning will inevitably make mistakes in ways that are difficult to anticipate,” says Martin Lindvall.
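To make the idea concrete, here is a minimal, hypothetical sketch in Python with scikit-learn and synthetic data of what “training to find patterns” means: a simple classifier is fitted on labeled image patches and then asked to flag unseen patches as normal or abnormal. None of this reflects the actual systems studied at Linköping; real digital-pathology models are far larger and trained on whole-slide images.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Hypothetical stand-in for a labeled dataset: each "patch" is a flattened
# 16x16 grayscale tissue image, labeled 1 (abnormal) or 0 (normal).
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 16 * 16))
y = rng.integers(0, 2, size=2000)
# Give abnormal patches a slightly brighter texture so there is a pattern to learn.
X[y == 1] += 0.25

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# "Training" means fitting the model to labeled examples so it can apply
# the learned pattern to patches it has never seen before.
model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

print(f"Held-out accuracy: {accuracy_score(y_test, model.predict(X_test)):.2f}")
```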
Models trained with machine learning are sensitive and easily thrown off by small factors, such as a change in the manufacturer of the chemicals used to stain the tissue sections, how thick the sections are cut, or whether there is dust on the scanner glass. Small alterations in any of these conditions can cause the AI model to malfunction.
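The brittleness can be illustrated with the same kind of toy setup: shift every test image by a small constant, a crude stand-in for a new stain batch or a differently calibrated scanner, and the classifier's accuracy collapses even though a human would see essentially the same tissue. Again, this is an illustrative sketch with synthetic data, not the behavior of any specific clinical model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Same hypothetical setup as before: flattened 16x16 "patches" where
# abnormal tissue (label 1) is on average slightly brighter than normal.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 256))
y = rng.integers(0, 2, size=2000)
X[y == 1] += 0.25

model = LogisticRegression(max_iter=1000).fit(X[:1500], y[:1500])
X_test, y_test = X[1500:], y[1500:]

# Simulate a small real-world change: every image from a "new lab" comes out
# a bit brighter (different stain batch, scanner calibration, dust, etc.).
X_shifted = X_test + 0.5

print(f"Accuracy on familiar data:  {model.score(X_test, y_test):.2f}")
print(f"Accuracy after small shift: {model.score(X_shifted, y_test):.2f}")
```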
Despite constant improvement, AI systems are still being adapted for integration into clinical environments; so far, their deployment has taken place mainly at the research level.
One of the main remaining challenges is conveying the necessary confidence to the users who stand to benefit. It has been said many times, and backed by statistics rather than mere slogans, that AI can even outperform a doctor at certain tasks. However, as long as patients do not trust these systems, their reach will remain limited for the time being.