
Developing artificial intelligence tools for health care
A new study from researchers at Weill Cornell Medical College and The Rockefeller University finds that reinforcement learning, an artificial intelligence method, has the potential to guide doctors in designing sequential treatment strategies for better patient outcomes, but significant improvements are needed before it can be applied in clinical settings.
Reinforcement learning (RL) is a class of machine learning algorithms that make a series of decisions over time. Responsible for recent advances in artificial intelligence, including superhuman performance at chess and Go, RL can leverage changing patient conditions, test results, and previous treatment responses to recommend the next best step in personalized patient care. This approach holds particular promise for decision-making in the management of chronic or psychiatric illnesses.
The research, published in Proceedings of the Neural Information Processing Systems Conference (NeurIPS) and presented on December 13, introduces “Episodes of Care” (EpiCare), the first RL benchmark for health care.
“Benchmarks have driven improvements in machine learning applications, including computer vision, natural language processing, speech recognition, and self-driving cars,” said Logan Grosenick, Ph.D., assistant professor of neuroscience in psychiatry. “We hope they will now drive progress in applying reinforcement learning to health care.”
Reinforcement learning agents refine their actions based on the feedback they receive, gradually learning strategies that improve their decision-making. “However, our results show that while current methods are promising, they are exceedingly data-hungry,” Dr. Grosenick added.
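This learn-from-feedback loop can be illustrated with a minimal sketch. The toy two-state “patient” environment and tabular Q-learning agent below are hypothetical and are not the EpiCare benchmark or the methods tested in the paper; they only show how repeated reward feedback refines an agent's action values over time.

```python
import random

# Hypothetical toy environment (not EpiCare): state 0 = "unwell", state 1 = "well".
# Treatment action 1 in state 0 usually improves the patient; action 0 never does.
random.seed(0)

N_STATES, N_ACTIONS = 2, 2
Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]  # action-value estimates
alpha, gamma, eps = 0.1, 0.9, 0.1                 # learning rate, discount, exploration

def step(state, action):
    """Toy dynamics: reward 1 whenever the patient ends up in the well state."""
    if state == 0:
        next_state = 1 if (action == 1 and random.random() < 0.8) else 0
    else:
        next_state = 1 if random.random() < 0.9 else 0
    return next_state, (1.0 if next_state == 1 else 0.0)

state = 0
for _ in range(5000):
    # Epsilon-greedy: mostly exploit current estimates, occasionally explore.
    if random.random() < eps:
        action = random.randrange(N_ACTIONS)
    else:
        action = max(range(N_ACTIONS), key=lambda a: Q[state][a])
    next_state, reward = step(state, action)
    # Temporal-difference update: nudge Q toward reward + discounted future value.
    Q[state][action] += alpha * (reward + gamma * max(Q[next_state]) - Q[state][action])
    state = next_state

# After training, the agent prefers the effective treatment in the unwell state.
print(Q[0][1] > Q[0][0])
```

Note that even this tiny problem needs thousands of interactions to learn a good policy, which echoes the paper's point about how data-hungry current RL methods are.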
The researchers first tested the performance of five state-of-the-art online reinforcement learning models on EpiCare. All five beat a standard-of-care baseline, but only after training on thousands or tens of thousands of simulated treatment episodes. In the real world, RL methods would never be trained directly on patients, so the researchers next evaluated five common “off-policy evaluation” (OPE) methods: popular approaches designed to use historical data (such as data from clinical trials) to circumvent the need for online data collection. Using EpiCare, they found that state-of-the-art OPE methods consistently failed to perform accurately on health care data.
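To make the idea of off-policy evaluation concrete, here is a minimal, hypothetical sketch of one classic OPE technique, importance sampling, in a simple one-decision setting. It is not one of the five methods benchmarked in the paper; it only illustrates how a new policy's value can be estimated from decisions logged under an old policy, with no new patient interaction.

```python
import random

# Hypothetical two-treatment example: estimate the value of a new "target"
# policy using only a historical log collected under a "behavior" policy.
random.seed(1)

behavior = [0.5, 0.5]     # logging policy: picked each treatment 50/50
target   = [0.1, 0.9]     # new policy we want to evaluate offline
true_reward = [0.2, 0.8]  # (unknown in practice) success rate of each treatment

# Historical log of (action, observed outcome) pairs under the behavior policy.
log = []
for _ in range(20000):
    a = 0 if random.random() < behavior[0] else 1
    r = 1.0 if random.random() < true_reward[a] else 0.0
    log.append((a, r))

# Importance-sampling estimate: reweight each logged outcome by how much more
# (or less) likely the target policy was to choose that action.
estimate = sum(target[a] / behavior[a] * r for a, r in log) / len(log)

true_value = sum(p * r for p, r in zip(target, true_reward))
print(estimate, true_value)
```

In longer treatment sequences these importance weights multiply across every decision, and the resulting variance is one reason OPE becomes unreliable in longitudinal settings like those EpiCare simulates.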
“Our results demonstrate that current state-of-the-art OPE methods cannot accurately predict reinforcement learning performance in longitudinal health care scenarios,” said first author Dr. Mason Hargrave, a researcher at The Rockefeller University. As OPE methods are increasingly discussed for health care applications, this finding highlights the need for more accurate benchmarking tools, such as EpiCare, to audit existing RL methods and provide metrics for measuring improvement.
“We hope this work will promote more reliable evaluation of reinforcement learning in healthcare settings and help accelerate the development of better reinforcement learning algorithms and training protocols suitable for medical applications,” said Dr. Grosenick.
Using convolutional neural networks to interpret graph data
In a second NeurIPS paper presented on the same day, Dr. Grosenick shared his research on adapting convolutional neural networks (CNNs), which are widely used to process images, to work for more general graph-structured data such as brain, gene, or protein networks. In the early 2010s, CNNs achieved widespread success in image recognition tasks, laying the foundation for the modern era of “deep learning” and neural-network-driven artificial intelligence applications. CNNs are used in many applications, including facial recognition, self-driving cars, and medical image analysis.
“We were often interested in analyzing neuroimaging data, which are more like graphs, with vertices and edges, than like images. But we realized that for graph-structured data, there was nothing really equivalent to CNNs and deep CNNs,” Dr. Grosenick said.
Brain networks are typically represented as graphs in which brain regions (represented as vertices) propagate information to other brain regions (vertices) along “edges” that connect them and represent the strength of those connections. The same is true of gene and protein networks, human and animal behavioral data, and the geometry of chemical compounds such as drugs. By analyzing such graphs directly, we can model the dependencies and patterns between both local and more distant connections more accurately.
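The core operation that generalizes a convolution from images to graphs can be sketched in a few lines. The example below is a standard GCN-style propagation step on a hypothetical four-node graph, not the QuantNets method from the paper: each vertex's features are replaced by a degree-normalized mix of its own and its neighbors' features, then linearly transformed, which is the graph analogue of sliding a filter over an image.

```python
import numpy as np

# Hypothetical 4-node graph (e.g., four brain regions) given by its adjacency matrix.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

np.random.seed(0)
A_hat = A + np.eye(4)                        # add self-loops so each vertex keeps its own signal
D_inv_sqrt = np.diag(A_hat.sum(1) ** -0.5)   # symmetric degree normalization
X = np.random.randn(4, 3)                    # 3 input features per vertex
W = np.random.randn(3, 2)                    # weight matrix (learned in practice, random here)

# One graph-convolution step: H = relu(D^{-1/2} (A + I) D^{-1/2} X W)
H = np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W, 0.0)
print(H.shape)  # (4, 2): two mixed features per vertex
```

Stacking such layers lets information flow between progressively more distant vertices, which is how graph networks capture both local and long-range dependencies.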
Isaac Osafo Nkansah, a researcher in the Grosenick lab and first author on the paper, helped develop the Quantified Graph Convolutional Networks (QuantNets) framework, which generalizes CNNs to graphs. “We're now using it to model electroencephalography (electrical activity in the brain) data from patients. We can have a network of 256 sensors on the scalp taking readings of neuronal activity; that's a graph,” Dr. Grosenick said. “We're reducing those large graphs down to more interpretable components to better understand how dynamic brain connectivity changes as patients undergo treatment for depression or obsessive-compulsive disorder.”
The researchers foresee broad applicability for QuantNets. For example, they also hope to model graph-structured pose data to track behavior in mouse models and in human facial expressions extracted using computer vision.
“While we are still exploring how to safely apply cutting-edge artificial intelligence methods to patient care, each step forward, whether a new benchmarking framework or a more accurate model, brings us incrementally closer to personalized treatment strategies that have the potential to profoundly improve patient health outcomes,” Dr. Grosenick concluded.