OmniLabel Challenge 2023
In conjunction with our Workshop at CVPR 2023, we are hosting a challenge with the OmniLabel dataset.
OmniLabel benchmark: A novel dataset with complex, free-form text descriptions of objects. Check out our paper for details.
Train on public datasets and evaluate on our benchmark! We define three tracks, distinguished by the training data they allow.
Test and validation sets are available (v0.1.3). Use our evaluation server to participate in the challenge, and use our toolkit to evaluate on the validation set yourself.
A prize money of $10,000 will be distributed among the participants of the challenge!
Download the data
Check out the task definitions
Compete on the evaluation servers
Tracks
Track A
Allowed training datasets:
Allowed pre-trained models: Publicly available models only (e.g., CLIP)
Track B
Allowed training datasets:
Object detection: same as Track A + OpenImagesV5-train
Referring/Grounding: same as Track A + Localized Narratives
Captions: same as Track A + CC-12M + 15M subset of YFCC100M
Others: same as Track A + Kinetics-600, ActivityNet
Allowed pre-trained models: Publicly available models only (e.g., CLIP)
Track C
Allowed training datasets: Any
Allowed pre-trained models: Any
Results
Below are the results of each track. Congratulations to all participants! The first three places in each track will receive a share of the $10,000 prize money. Details about the competition will follow soon.
Track A
Winners
l_harold: Liunian Harold Li, Zi-Yi Dou, Nanyun Peng, Kai-Wei Chang (UCLA)
tuyentx: Xuan-Tuyen Tran (Applied Artificial Intelligence Institute, Deakin University, Australia)
abcaaa: Xiaowen Zhang, Zitao Wang, Yi Zuo, Yuting Yang, Licheng Jiao (Key Laboratory of Intelligent Perception and Image Understanding of Ministry of Education, Xidian University, Xi'an, Shaanxi, China)
Track B
Winners
l_harold: Liunian Harold Li, Zi-Yi Dou, Nanyun Peng, Kai-Wei Chang (UCLA)
tuyentx: Xuan-Tuyen Tran (Applied Artificial Intelligence Institute, Deakin University, Australia)
abcaaa: Xiaowen Zhang, Zitao Wang, Yi Zuo, Yuting Yang, Licheng Jiao (Key Laboratory of Intelligent Perception and Image Understanding of Ministry of Education, Xidian University, Xi'an, Shaanxi, China)
Track C
Winners
l_harold: Liunian Harold Li, Zi-Yi Dou, Nanyun Peng, Kai-Wei Chang (UCLA)
tuyentx: Xuan-Tuyen Tran (Applied Artificial Intelligence Institute, Deakin University, Australia)
sanghyeokchu: Sanghyeok Chu, Bohyung Han (ECE & ASRI, Seoul National University)
Timeline
02/07/2023 - Public release of benchmark data (validation set and evaluation toolkit)
03/28/2023 - Evaluation server goes online with the validation set
05/03/2023 - Test set release - evaluation server accepts submissions on the test set
05/26/2023 - Challenge closes
05/31/2023 - Deadline for submitting the report (all challenge participants are required to provide a brief report about their method)
06/02/2023 - Challenge winners will be informed
06/18/2023 - Workshop at CVPR 2023 (half-day, morning session)
06/19/2023 - Benchmark opens to the public (submit results without participating in the challenge)