VideoCube is a high-quality, large-scale benchmark that creates a challenging real-world experimental environment for Global Instance Tracking (GIT). MGIT is a high-quality, multi-modal benchmark built on VideoCube-Tiny to fully represent the complex spatio-temporal and causal relationships coupled in long narrative content.
The Global Instance Tracking (GIT) task aims to model the fundamental human visual function of motion perception without any assumptions about camera or motion consistency.
VideoCube contains 500 video segments of real-world moving objects and over 7.4 million labeled bounding boxes. Each video is guaranteed to contain at least 4,008 frames, and the average video length in VideoCube is around 14,920 frames.
VideoCube is collected along six dimensions that describe the spatio-temporal and causal relationships of film narrative, providing an extensive dataset for the novel GIT task.
VideoCube provides 12 attributes for each frame to reflect challenging situations in real applications and to offer a more elaborate reference for performance analysis.
VideoCube provides both classical and novel metrics for evaluating algorithms. In addition, the benchmark provides a human baseline to measure the intelligence level of existing methods.
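As a rough illustration of the classical metrics (this is a hedged sketch, not the toolkit's actual API; the function names and the [x, y, w, h] box convention are assumptions), per-frame success and precision curves can be computed from predicted and ground-truth boxes as follows:

```python
import numpy as np

def iou(pred, gt):
    """IoU between axis-aligned boxes given as [x, y, w, h] arrays of shape (N, 4)."""
    x1 = np.maximum(pred[:, 0], gt[:, 0])
    y1 = np.maximum(pred[:, 1], gt[:, 1])
    x2 = np.minimum(pred[:, 0] + pred[:, 2], gt[:, 0] + gt[:, 2])
    y2 = np.minimum(pred[:, 1] + pred[:, 3], gt[:, 1] + gt[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    union = pred[:, 2] * pred[:, 3] + gt[:, 2] * gt[:, 3] - inter
    return inter / np.maximum(union, 1e-12)

def success_curve(pred, gt, thresholds=np.linspace(0, 1, 51)):
    """Fraction of frames whose IoU exceeds each threshold; its mean gives an AUC-style score."""
    overlaps = iou(pred, gt)
    return np.array([(overlaps > t).mean() for t in thresholds])

def precision_curve(pred, gt, thresholds=np.arange(0, 51)):
    """Fraction of frames whose center error (in pixels) falls below each threshold."""
    pred_c = pred[:, :2] + pred[:, 2:] / 2
    gt_c = gt[:, :2] + gt[:, 2:] / 2
    errors = np.linalg.norm(pred_c - gt_c, axis=1)
    return np.array([(errors <= t).mean() for t in thresholds])
```

The same curves can be restricted to frames carrying a given attribute to obtain the per-attribute performance analysis mentioned above.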
MGIT designs a hierarchical multi-granular semantic annotation strategy to provide scientific natural language information. Video content is annotated at three granularities (i.e., action, activity, and story), as sketched below.
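A minimal sketch of how the three granularities nest (the class and field names are assumptions for illustration, not MGIT's released annotation schema):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Action:
    # Finest granularity: a short clip-level description, e.g. a single movement.
    start_frame: int
    end_frame: int
    description: str

@dataclass
class Activity:
    # Mid-level granularity summarizing several consecutive actions.
    description: str
    actions: List[Action] = field(default_factory=list)

@dataclass
class Story:
    # Coarsest granularity: the narrative annotation covering the whole video.
    description: str
    activities: List[Activity] = field(default_factory=list)
```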
MGIT expands the evaluation mechanism by conducting experiments under both traditional evaluation settings (multi-modal single-granularity, single visual modality) and an evaluation setting adapted to MGIT (multi-modal multi-granularity).
Please cite our IEEE TPAMI paper if VideoCube helps your research.
Please cite our NeurIPS paper if MGIT helps your research.
Please contact us if you have any problems or suggestions.