VideoCube / MGIT / DTVLT

A general benchmark for visual tracking and visual language tracking intelligence evaluation

VideoCube is a high-quality, large-scale benchmark that creates a challenging real-world experimental environment for Global Instance Tracking (GIT). MGIT is a high-quality, multi-modal benchmark built on VideoCube-Tiny to fully represent the complex spatio-temporal and causal relationships coupled in longer narrative content. DTVLT is a new visual language tracking benchmark with diverse texts, built on five prominent VLT and SOT benchmarks and covering three sub-tasks: short-term tracking, long-term tracking, and global instance tracking.

Task

The Global Instance Tracking (GIT) task aims to model the fundamental human visual function of motion perception, without any assumptions about camera or motion consistency.
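
In practice, this definition implies a simple evaluation protocol: a tracker is initialized with the target box in the first frame and must then report one box for every subsequent frame, even across shot cuts or after the target leaves and re-enters the view. The sketch below illustrates this protocol; the Tracker interface and types are hypothetical and do not reflect the official toolkit API.

    # Minimal sketch of the GIT evaluation protocol (hypothetical interface,
    # not the official VideoCube toolkit API).
    from typing import List, Tuple

    Box = Tuple[float, float, float, float]  # (x, y, w, h) in pixels


    class Tracker:
        """Any GIT tracker: initialized once, then queried frame by frame."""

        def init(self, frame, box: Box) -> None:
            raise NotImplementedError

        def update(self, frame) -> Box:
            # Must return a box for every frame, even after occlusion,
            # shot cuts, or the target leaving and re-entering the view.
            raise NotImplementedError


    def run_git_sequence(tracker: Tracker, frames: List, first_box: Box) -> List[Box]:
        """Run a tracker over one sequence and collect per-frame predictions."""
        tracker.init(frames[0], first_box)
        predictions = [first_box]              # frame 0 uses the given box
        for frame in frames[1:]:
            predictions.append(tracker.update(frame))
        return predictions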

Key Features

Large-Scale

VideoCube contains 500 video segments of real-world moving objects and over 7.4 million labeled bounding boxes. Each video contains at least 4,008 frames, and the average video length in VideoCube is around 14,920 frames.

Multiple Collection Dimensions

The collection of VideoCube is based on six dimensions that describe the spatio-temporal and causal relationships of film narrative, providing an extensive dataset for the novel GIT task.

Comprehensive Attribute Selection

VideoCube provides 12 attributes for each frame to reflect the challenging situations found in real applications and to support a more detailed performance analysis.
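
As a sketch of how such per-frame attributes can support fine-grained analysis, the snippet below averages per-frame IoU scores over the frames flagged with each attribute. The binary attribute matrix and attribute names are assumptions for illustration, not the released annotation schema.

    # Sketch of per-attribute performance analysis (the per-frame attribute
    # matrix format and attribute names are assumptions, not the official schema).
    import numpy as np

    def mean_iou_per_attribute(ious: np.ndarray,
                               attributes: np.ndarray,
                               names: list) -> dict:
        """ious: (N,) per-frame IoU; attributes: (N, 12) binary flags per frame."""
        report = {}
        for j, name in enumerate(names):
            mask = attributes[:, j] > 0
            report[name] = float(ious[mask].mean()) if mask.any() else float("nan")
        return report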

Scientific Evaluation

VideoCube provides both classical and novel metrics to evaluate algorithms. In addition, the benchmark provides a human baseline to measure the intelligence level of existing methods.
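
For reference, the sketch below computes one of the classical metrics used in tracking evaluation, the success plot and its area under the curve (AUC), from per-frame IoU between predicted and ground-truth boxes. It is illustrative only; please refer to the official toolkit for the exact evaluation code.

    # Classical tracking metric: success plot AUC from per-frame IoU.
    import numpy as np

    def iou(pred: np.ndarray, gt: np.ndarray) -> np.ndarray:
        """Per-frame IoU for boxes in (x, y, w, h) format, shape (N, 4)."""
        x1 = np.maximum(pred[:, 0], gt[:, 0])
        y1 = np.maximum(pred[:, 1], gt[:, 1])
        x2 = np.minimum(pred[:, 0] + pred[:, 2], gt[:, 0] + gt[:, 2])
        y2 = np.minimum(pred[:, 1] + pred[:, 3], gt[:, 1] + gt[:, 3])
        inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
        union = pred[:, 2] * pred[:, 3] + gt[:, 2] * gt[:, 3] - inter
        return inter / np.maximum(union, 1e-12)

    def success_auc(pred: np.ndarray, gt: np.ndarray) -> float:
        """Area under the success curve over IoU thresholds in [0, 1]."""
        overlaps = iou(pred, gt)
        thresholds = np.linspace(0, 1, 101)
        success = [(overlaps > t).mean() for t in thresholds]
        return float(np.mean(success))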

Multi-granularity Semantic Annotation

MGIT designs a hierarchical multi-granularity semantic annotation strategy to provide scientific natural language information. Video content is annotated at three granularities (i.e., action, activity, and story).
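
A hierarchical annotation of this kind can be pictured as nested text spans over the video timeline, from short actions up to a full story. The structure below is only an illustrative example with hypothetical field names and texts, not the released annotation format.

    # Illustrative structure of a hierarchical annotation (field names and
    # values are hypothetical examples, not taken from the released files).
    annotation = {
        "video_id": "example_000",
        "story": "A person searches a building for a lost pet and finds it.",
        "activities": [
            {"frames": [0, 1500], "text": "the person walks through the corridor"},
        ],
        "actions": [
            {"frames": [0, 300], "text": "the person opens a door"},
        ],
    }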

Evaluation Mechanism for Multi-modal Tracking

MGIT expands the evaluation mechanism by conducting experiments under both traditional evaluation mechanisms (multi-modal single-granularity, single visual modality) and an evaluation mechanism adapted to MGIT (multi-modal multi-granularity).

Publications

Publication

Global Instance Tracking: Locating Target More Like Humans.
S. Hu, X. Zhao*, L. Huang and K. Huang (*corresponding author)
IEEE Transactions on Pattern Analysis and Machine Intelligence
[PDF] [BibTex]

Please cite our IEEE TPAMI paper if VideoCube helps your research.

Publication

A Multi-modal Global Instance Tracking Benchmark (MGIT):
Better Locating Target in Complex Spatio-temporal and Causal Relationship.
S. Hu, D. Zhang, M. Wu, X. Feng, X. Li, X. Zhao and K. Huang
Advances in Neural Information Processing Systems
[PDF] [BibTex]

Please cite our NeurIPS paper if MGIT helps your research.

Publication

DTLLM-VLT: Diverse Text Generation for Visual Language Tracking Based on LLM.
X. Li, X. Feng, S. Hu, M. Wu, D. Zhang, J. Zhang and K. Huang
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops
Best Paper Honorable Mention Award
[PDF] [BibTex]

Please cite our CVPRW paper if DTLLM-VLT helps your research.

Publication

DTVLT: A Multi-modal Diverse Text Benchmark for Visual Language Tracking Based on LLM.
X. Li, S. Hu, X. Feng, D. Zhang, M. Wu, J. Zhang and K. Huang
ArXiv Preprint
[PDF] [BibTex]

Please cite our paper if DTVLT helps your research.

Publication

Visual Language Tracking with Multi-modal Interaction: A Robust Benchmark.
X. Li, S. Hu, X. Feng, D. Zhang, M. Wu, J. Zhang and K. Huang
ArXiv Preprint
[PDF] [BibTex]

Please cite our paper if VLT-MI helps your research.

Demo

VideoCube Benchmark

MGIT Benchmark

Organizers

  • Shiyu Hu, Center for Research on Intelligent System and Engineering (CRISE), CASIA.
  • Xin Zhao, Center for Research on Intelligent System and Engineering (CRISE), CASIA.
  • Lianghua Huang, Center for Research on Intelligent System and Engineering (CRISE), CASIA.
  • Kaiqi Huang, Center for Research on Intelligent System and Engineering (CRISE), CASIA.

Maintainer

  • Xuchen Li, Center for Research on Intelligent System and Engineering (CRISE), CASIA.

Contact

Please contact us if you have any problems or suggestions.


Copyright © 2022-2024.