In this paper, we propose a new framework for action localization that tracks
people in videos and extracts full-body human tubes, i.e., spatio-temporal
regions localizing actions, even in the case of occlusions or truncations.
This is achieved by training a novel human part detector that scores visible
parts while regressing full-body bounding boxes. The core of our method is a
convolutional neural network that learns proposals specific to individual
body parts, which are then combined to detect people robustly in each frame.
Our tracking algorithm connects the image detections temporally to extract
full-body human tubes. We apply this tube extraction method to human action
localization on the popular JHMDB dataset and on the recent, challenging
DALY dataset (Daily Action Localization in YouTube), achieving
state-of-the-art results.
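The temporal linking step described above, connecting per-frame detections into a tube, can be illustrated with a minimal greedy sketch. This is a generic illustration of IoU-based linking, not the paper's exact tracking algorithm; the function names and the greedy criterion are assumptions for exposition.

```python
# Hypothetical sketch: greedily link per-frame person detections into a
# spatio-temporal tube by choosing, in each frame, the detection whose box
# best overlaps the tube's most recent box. This illustrates the generic
# "connect image detections temporally" idea only.

def iou(a, b):
    # Intersection-over-union of two boxes given as (x1, y1, x2, y2).
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def link_tube(detections_per_frame, start_box):
    """Extend a tube frame by frame, picking the detection with highest
    IoU against the tube's last box; stop if a frame has no detections."""
    tube = [start_box]
    for dets in detections_per_frame:
        if not dets:
            break
        tube.append(max(dets, key=lambda d: iou(tube[-1], d)))
    return tube
```

In practice such linkers also weight detection confidence and allow short gaps, but the greedy IoU criterion conveys the basic tube-building mechanism.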