Would you trust an algorithm to help you with a medical diagnosis? As hospitals seek out new tools to assist them in triaging the patients most in need, technologies driven by machine learning are expected to make a big impact in the medical sector.
The latest company to throw its hat into the ring is General Electric, which is investing big in software and is already known for its medical imaging equipment. The manufacturing giant exclusively shared with Fast Company that it is partnering with UC San Francisco for the next three years to develop a set of algorithms to help its radiologists distinguish between a normal result and one that requires further attention.
“There’s tremendous opportunity to look at large datasets, like medical images, to predict how patients will do,” says Michael Blum, director of UCSF’s Center for Digital Health Innovation.
It’s early days, but machine learning and deep learning technologies are already making their way into a small number of medical specialties, including primary care, pathology, and radiology. Radiologists are a particularly hot target for technology companies as they already use advanced software, such as computer-aided detection, to help them spot diseases like cancer from patients’ medical images.
GE is hoping that this partnership is just the beginning. “The end result will really be a library of machine and deep learning algorithms that can be applied to imaging,” says Christopher Austin, GE Healthcare’s global director of imaging analytics, in an interview.
The partnership will initially focus on two specific applications: The first is to create an algorithm to detect pneumothorax, or unwanted air or gas in the cavity between the lungs and the chest wall. The goal for that project is to speed up the turnaround time from interpreting an X-ray to placing a potentially lifesaving chest tube. The second project involves alerting radiologists to patients whose nasogastric tube—a gastric intubation via the nasal passage—has been placed incorrectly. Once the radiologist checks the image and confirms the problem, a care team can step in to rectify the situation.
If these projects are successful, Blum is open to exploring others. One area he’s particularly interested in is trauma: How do you identify the patients who are most at risk for bad outcomes?
Both UCSF and GE stressed that these algorithms are not intended to make radiologists redundant. “There is a lot of concern from the public and from clinicians that we’ll be developing things to replace doctors,” says Blum. “These developments will be focused on supporting clinicians and in developing safer workflows.” Moreover, the regulatory framework in the U.S. is predicated on a human specialist being present to make a diagnosis.
GE’s Austin says the company will be releasing some research in the coming years to demonstrate the efficacy of its technology. That will be key to convincing the medical community. Technology vendors will need to invest in clinical trials to demonstrate that the technology is improving patient outcomes.
“Technology companies are still in the process of thinking about how to deploy and commercialize these technologies, and how to bring them to the health care community,” says Blum.