Interpretable Machine Learning to Deconstruct the Neural Basis of Psychiatric Disorders
October 2, 2017 - 12:00pm to 1:00pm
David Carlson, Asst. Professor, Duke University
There is an extensive literature in machine learning demonstrating an extraordinary ability to predict labels from abundant data, as in object and voice recognition. Multiple scientific domains are poised to go through a data revolution, in which the quantity and quality of data will increase dramatically over the next several years. In psychiatric animal studies, novel devices are currently collecting data orders of magnitude larger than conventional techniques. Yet alongside this "big data" complexity, the limited number of animals in such studies makes it simultaneously a "small data" problem. Standard machine learning approaches can adapt to the data complexity to give state-of-the-art predictions. However, in many studies we want methods that blend predictive power with interpretability, so that their results can guide effective future experimental design under this "small data" regime.
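As a simple illustration of the prediction-versus-interpretability trade-off the abstract raises (this example is not from the talk itself), an L1-regularized logistic regression keeps predictive accuracy while driving the weights of uninformative features toward zero, so the surviving weights can be read as the features that matter. The sketch below uses only the standard library and plain subgradient descent; the data, penalty strength, and learning rate are all illustrative assumptions.

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_l1_logistic(X, y, lam=0.1, lr=0.5, epochs=500):
    """Minimize logistic loss + lam * ||w||_1 by subgradient descent.

    The L1 penalty shrinks weights of irrelevant features toward zero,
    which is what makes the fitted model easy to interpret.
    """
    n, d = len(X), len(X[0])
    w = [0.0] * d
    for _ in range(epochs):
        grad = [0.0] * d
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)))
            for j in range(d):
                grad[j] += (p - yi) * xi[j] / n
        for j in range(d):
            # Add the subgradient of the L1 penalty (sign of the weight).
            sub = 1.0 if w[j] > 0 else (-1.0 if w[j] < 0 else 0.0)
            w[j] -= lr * (grad[j] + lam * sub)
    return w

# Hypothetical toy data: feature 0 determines the label, feature 1 is noise.
random.seed(0)
X = [[random.gauss(0, 1), random.gauss(0, 1)] for _ in range(200)]
y = [1 if xi[0] > 0 else 0 for xi in X]

w = fit_l1_logistic(X, y)
# The informative feature's weight should dominate the noise feature's.
```

Here the model is both a predictor and a summary of the evidence: inspecting `w` shows which inputs the classifier actually relies on, which is the kind of interpretable output that can inform the design of a follow-up experiment.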