Recently, an artificial intelligence-guided robot successfully performed laparoscopic surgery to connect two ends of an intestine in four pigs, without any human intervention. According to the researchers involved, the robot surgeon produced “significantly better” results than humans. Though such an accomplishment is astonishing and signals the eventual rise of fully autonomous medical robots, it is important to remember that the results generated by artificial intelligence are only as good as the data used to train it.
Over the years, researchers have found that systematic biases in clinical data collection can affect the patterns artificial intelligence recognizes and the predictions it makes. Certain minority groups are especially vulnerable to these effects because of their longstanding underrepresentation in clinical trial and patient registry populations. In one instance, for example, when algorithms were trained to detect melanoma using images of white skin, researchers found that the artificial intelligence struggled to detect cancerous moles on darker skin.
Going forward, as we inch closer to the mainstream adoption of fully autonomous medical robots, manufacturers will need to keep in mind that if the underlying clinical data informing their artificial intelligence is flawed, then the robots relying on that artificial intelligence will perform suboptimally. Meaningful steps should be taken early in the development cycle to identify and mitigate clinical data-related biases when creating artificial intelligence. Failure to take such steps could expose manufacturers to design defect claims.