The next generation of perception systems should understand complex, free-form object descriptions rather than a fixed set of categories. To accelerate this vision, we propose a novel and challenging benchmark. Check out our task description and paper for more details.
How do you evaluate your method? We provide a simple Python toolkit that lets you interact with the data, visualize samples, compute statistics, and evaluate your method.
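As a rough sketch of the box-overlap computation that underlies detection-style evaluation in benchmarks like this one (the function name, box format, and printed example below are our own illustration, not the toolkit's actual API):

```python
def box_iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    # Intersection rectangle (empty if the boxes do not overlap)
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    # Union = sum of areas minus the intersection
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# A predicted box is typically counted as a match for a ground-truth box
# when its IoU exceeds a threshold (e.g. 0.5).
print(box_iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25 / 175 ≈ 0.1429
```

For the benchmark's actual metric and matching rules, please refer to the toolkit and the paper.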
We are organizing a challenge on the OmniLabel benchmark as part of our CVPR 2023 workshop. Participate and compare your method against others!
[05/04/23] The test set of the OmniLabel challenge has been released! Download instructions are here.
[04/28/23] Our paper describing the benchmark is now online on arXiv.
[04/05/23] IMPORTANT UPDATE: We changed the track definitions to better match the training dataset settings of existing works like GLIP and MDETR.
[03/29/23] The evaluation server for the workshop challenge is online! We have also updated the validation set with cleaner annotations (see the download site). So get the new annotations and the updated code from GitHub, and participate in the challenge.
[02/07/23] Initial release of our novel benchmark and the corresponding dataset. Please explore the dataset through some samples and download the full dataset to evaluate your own model. Along with the dataset, we also released a Python toolkit to work with the data (visualization, evaluation, statistics, ...).
[12/15/22] The OmniLabel workshop was accepted to CVPR 2023! The workshop will use this benchmark for an exciting new challenge ... stay tuned for more details soon.