Action Tubelet Detector for Spatio-Temporal Action Localization

Vicky Kalogeiton    Philippe Weinzaepfel    Vittorio Ferrari    Cordelia Schmid




Abstract

Current state-of-the-art approaches for spatio-temporal action detection rely on detections at the frame level that are then linked or tracked across time. In this work, we leverage the temporal continuity of videos instead of operating at the frame level. We propose the ACtion Tubelet detector (ACT-detector) that takes as input a sequence of frames and outputs tubelets, i.e., sequences of bounding boxes with associated scores. In the same way that state-of-the-art object detectors rely on anchor boxes, our ACT-detector is based on anchor cuboids. We build upon the state-of-the-art SSD framework. Convolutional features are extracted for each frame, while scores and regressions are based on the temporal stacking of these features, thus exploiting information from a sequence. Our experimental results show that leveraging sequences of frames significantly improves detection performance over using individual frames. The gain of our tubelet detector can be explained by both more relevant scores and more precise localization. Our ACT-detector outperforms state-of-the-art methods for frame-mAP and video-mAP on the J-HMDB and UCF-101 datasets, in particular at high overlap thresholds.
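To make the mechanism concrete, here is a minimal numpy sketch of the tubelet prediction heads described above: per-frame convolutional features are stacked along the channel axis, and from that stack each anchor cuboid receives a single class-score vector for the whole sequence plus one regressed box per frame (a tubelet). All sizes and the 1x1-conv weight matrices are toy, hypothetical values for illustration, not the actual ACT-detector architecture or weights.

```python
import numpy as np

rng = np.random.default_rng(0)

K = 6               # sequence length (frames per tubelet); toy value
C, H, W = 8, 4, 4   # toy per-frame feature map: channels x height x width
A = 2               # anchor cuboids per spatial cell; toy value
NUM_CLASSES = 3     # toy number of action classes

# Per-frame convolutional features (one map per frame of the sequence).
frame_feats = [rng.standard_normal((C, H, W)) for _ in range(K)]

# Temporal stacking: concatenate the K feature maps along the channel
# axis, so every spatial cell sees information from the whole sequence.
stacked = np.concatenate(frame_feats, axis=0)        # (K*C, H, W)

# Hypothetical 1x1-conv heads, written as matrix multiplies:
# - classification: one score per class per anchor cuboid (whole sequence)
# - regression: 4 box coordinates per frame per anchor cuboid (a tubelet)
W_cls = rng.standard_normal((A * NUM_CLASSES, K * C))
W_reg = rng.standard_normal((A * K * 4, K * C))

flat = stacked.reshape(K * C, H * W)                 # (K*C, H*W)
cls_scores = (W_cls @ flat).reshape(A, NUM_CLASSES, H, W)
tubelet_regs = (W_reg @ flat).reshape(A, K, 4, H, W)

print(cls_scores.shape)    # one score vector per anchor cuboid per cell
print(tubelet_regs.shape)  # K regressed boxes per anchor cuboid per cell
```

The key point of the design is visible in the shapes: scores are predicted once per anchor cuboid (from all K frames jointly), while regression outputs K boxes, so the anchor cuboid is refined into a per-frame tubelet.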

You can find the paper here, an extended version here, and the poster here.

Code

You can find the code here.

Examples

green: ground truth, yellow: correct detections, red: wrong detections

Citation

If you use our code, please cite our paper:

@inproceedings{kalogeiton17iccv:hal-01519812,
  TITLE = {{Action Tubelet Detector for Spatio-Temporal Action Localization}},
  AUTHOR = {Kalogeiton, Vicky and Weinzaepfel, Philippe and Ferrari, Vittorio and Schmid, Cordelia},
  YEAR = {2017},
  MONTH = oct,
  BOOKTITLE = {{ICCV 2017 - IEEE International Conference on Computer Vision}},
  ADDRESS = {Venice, Italy},
}