[06/27/23] The leaderboard is open again - anyone can evaluate their models on the test set now!
[06/06/23] The challenge has ended! We thank all participants for their efforts in pushing the state of the art in language-based detection. Please find the results here!
[04/05/23] IMPORTANT UPDATE: We changed the track definitions to better match training dataset settings from existing works like GLIP or MDETR.
[03/29/23] The evaluation server for the workshop challenge is online! We also updated the validation set with cleaner annotations (see the download site). Please get the new annotations and the updated code from GitHub, and participate in the challenge.
[02/07/23] Initial release of our novel benchmark and corresponding dataset. Please explore the dataset with some samples and download the full dataset to evaluate your own model. Along with the dataset, we also released a Python toolkit for working with the data (visualization, evaluation, statistics, ...).
[12/15/22] The OmniLabel workshop was accepted to CVPR 2023! The workshop will use this benchmark for an exciting new challenge ... stay tuned for more details soon.