Rules Acceptance Deadline: 2 months
Teams: 85 · Competitors: 90 · Entries: 281
Points: This competition awards standard ranking points.
Tiers: This competition counts towards tiers.
Tags: image data (data type) · computer vision (technique) · custom metric
In this competition, you are asked to take test images and recognize which landmarks (if any) are depicted in them. The training set is available in the `train/` folder, with corresponding landmark labels in `train.csv`. The test set images are listed in the `test/` folder. Each image has a unique `id`. Since there are a large number of images, each image is placed within three subfolders according to the first three characters of the image `id` (i.e., image `abcdef.jpg` is placed in `a/b/c/abcdef.jpg`).
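The id-to-path scheme above can be sketched as a small helper. This is a minimal illustration; the function name and the `root` argument are my own, not part of the competition code:

```python
def image_path(root: str, image_id: str) -> str:
    """Map an image id to its nested location: the first three
    characters of the id become three levels of subfolders."""
    return f"{root}/{image_id[0]}/{image_id[1]}/{image_id[2]}/{image_id}.jpg"

# The example from the description: image abcdef.jpg lives at a/b/c/abcdef.jpg
print(image_path("train", "abcdef"))  # train/a/b/c/abcdef.jpg
```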
This is a synchronous rerun code competition. The provided test set is a representative set of files to demonstrate the format of the private test set. When you submit your notebook, Kaggle will rerun your code on the private dataset. In addition, this competition has two unique characteristics:
- To facilitate recognition-by-retrieval approaches, the private training set contains only a 100k subset of the total public training set. This 100k subset contains all of the training set images associated with the landmarks in the private test set. You may still attach the full training set as an external data set if you wish.
- Submissions are given 12 hours to run, compared to the site-wide session limit of 9 hours. While your commit must still finish within the 9-hour limit in order to be eligible to submit, the rerun may take the full 12 hours.
GLDv2
The training data for this competition comes from a cleaned version of the Google Landmarks Dataset v2 (GLDv2), which is available here. Please refer to the paper for more details on the dataset construction and how to use it. See this code example for an example of a pretrained model.
If you make use of this dataset in your research, please consider citing:
"Google Landmarks Dataset v2 - A Large-Scale Benchmark for Instance-Level Recognition and Retrieval", T. Weyand, A. Araujo, B. Cao and J. Sim, Proc. CVPR'20
Data Explorer
- test/
- train/
- sample_submission.csv (292.99 KB)
- train.csv

Summary: 1.59m files, 4 columns