How do we quantify behavior?

Computer vision and machine learning tools for deep behavioral analysis in animal models, healthy humans, and patients.

Large datasets and deep convolutional neural networks have driven rapid advances in computer vision, opening new avenues for analyzing behavior and phenotyping pathologies. Multi-perspective, high-resolution video of human or animal behavior can now be processed with unprecedented fidelity and throughput, yielding rich datasets from which dense quantitative classifications of behavior can be obtained across different timescales. With such approaches, the exact geometric configuration of multiple, arbitrarily chosen or anatomically constrained body parts can readily be extracted, and postural epochs or disease phenotypes classified, often surpassing human annotators and reducing inherent biases.
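As a minimal sketch of what "extracting the geometric configuration of body parts" means in practice, the snippet below computes a joint angle from tracked keypoint coordinates, the kind of posture feature that downstream classifiers operate on. The keypoint values here are hypothetical, not taken from any dataset mentioned above.

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle in degrees at joint b, formed by the segments b->a and b->c."""
    v1, v2 = a - b, c - b
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    # clip guards against floating-point values slightly outside [-1, 1]
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

# hypothetical 2D keypoints for one video frame (image coordinates)
hip = np.array([0.0, 0.0])
knee = np.array([1.0, -1.0])
ankle = np.array([1.0, -2.0])

print(round(joint_angle(hip, knee, ankle), 1))  # → 135.0
```

Applied frame by frame, such angle (or inter-keypoint distance) time series form the postural feature vectors from which behavioral epochs can be segmented and classified.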

Virtual reality view, camera recording, and pose reconstruction of a human participant engaged in foraging under virtual threat. Courtesy of Dr Ulises Serratos.

Video of a walking fruit fly on an air-supported ball, under optogenetic control of a walking-initiating brain neuron. Joint angles are tracked with advanced computer vision methods in DeepLabCut. Combined with five additional multi-angle camera views, accurate and robust reconstruction of all leg joint angle kinematics is possible at high temporal resolution. Courtesy of Moritz Haustein, Büschges Lab, UoC.
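Reconstructing 3D joint positions from several camera views, as described above, typically relies on triangulation from calibrated cameras. The sketch below shows generic linear (DLT) triangulation of one point from two views; it is not DeepLabCut's own API, and the projection matrices and point are illustrative assumptions.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two camera views.

    P1, P2: 3x4 camera projection matrices; x1, x2: 2D image points.
    """
    # each view contributes two linear constraints on the homogeneous point X
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # the solution is the right singular vector with the smallest singular value
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # dehomogenize

# two hypothetical pinhole cameras: one at the origin, one shifted along x
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

# project a known 3D point into both views, then recover it
X_true = np.array([0.5, 0.2, 2.0])
x1 = P1 @ np.append(X_true, 1.0); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1.0); x2 = x2[:2] / x2[2]

print(np.allclose(triangulate(P1, P2, x1, x2), X_true))  # → True
```

With more than two views (as in the six-camera setup described above), the same linear system simply gains two rows per additional camera, making the reconstruction more robust to occlusion and tracking noise.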

3D tracking of mouse hindlimb kinematics while walking on a runway. Stick diagram constructed for the hip, knee, ankle, and paw.