Empowering the Future of Neuroscience

Gall Group

The Computer Vision Group, headed by Prof. Dr. Jürgen Gall, has long-standing research expertise in analyzing human behavior from video data, including human pose estimation and tracking, action recognition and temporal action segmentation, anticipation of human behavior, and weakly supervised and unsupervised learning. Together with other partners, the group developed the PoseTrack dataset, a large-scale benchmark for estimating and tracking the poses of multiple persons in videos. The group also developed Multi-Stage Temporal Convolutional Networks (MS-TCN), state-of-the-art networks for temporally segmenting actions in videos, which can be used to analyze the behavior of humans or animals. To reduce the annotation effort required for training, the group has developed several techniques for learning temporal action segmentation from weakly annotated videos, including an unsupervised approach that infers behavior patterns directly from video data. Jürgen Gall has received several awards, including an Emmy Noether Grant, an ERC Starting Grant, a Marr Prize Honorable Mention for his work on open set domain adaptation, and the German Pattern Recognition Award. He is also the spokesperson of the DFG-funded research unit FOR 2535 on Anticipating Human Behavior (https://for2535.cv-uni-bonn.de).
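To illustrate the multi-stage idea behind MS-TCN, the following is a minimal NumPy sketch, not the published implementation (see the source code linked below for that): each stage is a stack of dilated temporal convolutions over per-frame features, and later stages refine the per-frame class probabilities produced by the previous stage. All layer sizes, the random weights, and the function names here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def dilated_conv1d(x, w, dilation):
    # x: (T, C_in) per-frame features; w: (k, C_in, C_out) kernel.
    # Zero-pads so the output keeps the full temporal length T.
    k, c_in, c_out = w.shape
    T = x.shape[0]
    pad = dilation * (k - 1) // 2
    xp = np.pad(x, ((pad, pad), (0, 0)))
    out = np.zeros((T, c_out))
    for i in range(k):  # sum contributions of each dilated kernel tap
        out += xp[i * dilation : i * dilation + T] @ w[i]
    return out

def relu(x):
    return np.maximum(x, 0.0)

def tcn_stage(x, num_layers=4, num_f=16, num_classes=5):
    # One stage: 1x1 input projection, dilated residual layers with
    # doubling dilation (growing temporal receptive field), 1x1 classifier.
    h = x @ rng.normal(0, 0.1, (x.shape[1], num_f))
    for l in range(num_layers):
        w = rng.normal(0, 0.1, (3, num_f, num_f))
        h = h + relu(dilated_conv1d(h, w, dilation=2 ** l))
    return h @ rng.normal(0, 0.1, (num_f, num_classes))  # per-frame logits

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def ms_tcn(x, num_stages=3, num_classes=5):
    # Stage 1 sees the frame features; each later stage refines the
    # previous stage's per-frame class probabilities.
    outputs, inp = [], x
    for _ in range(num_stages):
        logits = tcn_stage(inp, num_classes=num_classes)
        outputs.append(logits)
        inp = softmax(logits)
    return outputs  # one (T, num_classes) prediction per stage

T, C = 100, 32
feats = rng.normal(size=(T, C))          # stand-in for video frame features
preds = ms_tcn(feats)
labels = preds[-1].argmax(axis=1)        # per-frame action labels
```

In training, a loss is applied to every stage's output, while only the last stage's labels are used at inference; the refinement stages are what smooth out over-segmentation errors.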

Methods

5 selected publications

  1. Li Z., Abu Farha Y., and Gall J., Temporal Action Segmentation from Timestamp Supervision. IEEE Conference on Computer Vision and Pattern Recognition, 8361-8370, 2021.
    PDF: https://pages.iai.uni-bonn.de/gall_juergen/download/jgall_timestamp_cvpr21.pdf
    Source Code: https://github.com/ZheLi2020/TimestampActionSeg
  2. Li S., Abu Farha Y., Liu Y., Cheng M.-M., and Gall J., MS-TCN++: Multi-Stage Temporal Convolutional Network for Action Segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2020.
    PDF: https://pages.iai.uni-bonn.de/gall_juergen/download/jgall_MSTCN_pami2020.pdf
    Source Code: https://github.com/sj-li/MS-TCN2
  3. Kuehne H., Richard A., and Gall J., A Hybrid RNN-HMM Approach for Weakly Supervised Temporal Action Segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 42, No. 4, 765-779, 2020.
    PDF: https://pages.iai.uni-bonn.de/gall_juergen/download/jgall_RNN-HMM_pami18.pdf
    Source Code: https://github.com/alexanderrichard/weakly-sup-action-learning
  4. Kukleva A., Kuehne H., Sener F., and Gall J., Unsupervised Learning of Action Classes with Continuous Temporal Embedding. IEEE Conference on Computer Vision and Pattern Recognition, 12058-12066, 2019.
    PDF: https://pages.iai.uni-bonn.de/gall_juergen/download/jgall_unsupervised_cvpr19.pdf
    Source Code: https://github.com/annusha/unsup_temp_embed
  5. Andriluka M., Iqbal U., Insafutdinov E., Pishchulin L., Milan A., Gall J., and Schiele B., PoseTrack: A Benchmark for Human Pose Estimation and Tracking. IEEE Conference on Computer Vision and Pattern Recognition, 5167-5176, 2018.
    PDF: https://pages.iai.uni-bonn.de/gall_juergen/download/jgall_posetrack_cvpr18.pdf
    Benchmark: https://posetrack.net/