Comparing results against other people's on a common dataset is a quantitative way to evaluate vision work. The following lists some popular datasets excerpted from the book.
CUReT: Columbia-Utrecht Reflectance and Texture Database, http://www1.cs.columbia.edu/CAVE/software/curet/ (Dana, van Ginneken, Nayar et al. 1999).
Middlebury Color Datasets: color images taken with different cameras, used to study how cameras transform gamut and color, http://vision.middlebury.edu/color/data/ (Chakrabarti, Scharstein, and Zickler 2009).
Middlebury test datasets for evaluating MRF minimization/inference algorithms, http://vision.middlebury.edu/MRF/results/ (Szeliski, Zabih, Scharstein et al. 2008).
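The MRF benchmark above ranks algorithms by the energy of the labelings they produce. As a minimal illustration (not the benchmark's own code), the following Python sketch evaluates a standard pairwise energy, a per-pixel data term plus a Potts smoothness term on a 4-connected grid; the function and array names are hypothetical.

    import numpy as np

    def mrf_energy(labels, data_cost, lam=1.0):
        """Pairwise MRF energy: sum of data costs plus Potts smoothness.

        labels:    (H, W) integer label image
        data_cost: (H, W, L) cost of assigning each of L labels at each pixel
        lam:       weight of the smoothness term
        """
        h, w = labels.shape
        # Data term: cost of the label actually chosen at every pixel.
        data = data_cost[np.arange(h)[:, None], np.arange(w)[None, :], labels].sum()
        # Potts smoothness: count label discontinuities between 4-neighbors.
        smooth = (labels[:, 1:] != labels[:, :-1]).sum() + \
                 (labels[1:, :] != labels[:-1, :]).sum()
        return data + lam * smooth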
Chapter 4: Feature detection and matching
Affine Covariant Features database for evaluating feature detector and descriptor matching quality and repeatability, http://www.robots.ox.ac.uk/~vgg/research/affine/ (Mikolajczyk and Schmid 2005; Mikolajczyk, Tuytelaars, Schmid et al. 2005); a simplified repeatability score is sketched after this chapter's entries.
Database of matched image patches for learning and feature descriptor evaluation, http://cvlab.epfl.ch/~brown/patchdata/patchdata.html (Winder and Brown 2007; Hua, Brown, and Winder 2007).
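The full repeatability protocol of Mikolajczyk, Tuytelaars, Schmid et al. (2005) scores the overlap of affine-covariant elliptical regions; the sketch below is a deliberately simplified, point-based stand-in: keypoints from the first image are mapped through the dataset's ground-truth homography and counted as repeated when a detection in the second image falls within a pixel tolerance. All names here are hypothetical.

    import numpy as np

    def point_repeatability(kp1, kp2, H, tol=1.5):
        """Fraction of kp1 keypoints that reappear in kp2 after warping by H.

        kp1, kp2: (N, 2) and (M, 2) arrays of (x, y) keypoint locations
        H:        3x3 ground-truth homography from image 1 to image 2
        tol:      match tolerance in pixels (simplifies region overlap)
        """
        pts = np.hstack([kp1, np.ones((len(kp1), 1))])  # homogeneous coords
        proj = pts @ H.T
        proj = proj[:, :2] / proj[:, 2:3]               # back to pixels
        # Distance from each projected point to its nearest detection.
        d = np.linalg.norm(proj[:, None, :] - kp2[None, :, :], axis=2).min(axis=1)
        return float((d < tol).mean())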
Chapter 5: Segmentation
Berkeley Segmentation Dataset and Benchmark of 1000 images labeled by 30 humans, along with an evaluation (a rough version of its boundary matching is sketched after this chapter's entries), http://www.eecs.berkeley.edu/Research/Projects/CS/vision/grouping/segbench/ (Martin, Fowlkes, Tal et al. 2001).
Weizmann segmentation evaluation database of 100 grayscale images with ground truth segmentations, http://www.wisdom.weizmann.ac.il/~vision/SegEvaluationDB/index.html (Alpert, Galun, Basri et al. 2007).
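The BSDS evaluation of Martin, Fowlkes, Tal et al. (2001) matches boundary pixels between machine output and each human labeling via bipartite matching; the sketch below is a cruder distance-transform approximation of boundary precision, recall, and F-measure, assuming binary boundary maps as input.

    import numpy as np
    from scipy.ndimage import distance_transform_edt

    def boundary_f_measure(pred, gt, tol=2.0):
        """Approximate boundary precision/recall/F against one human labeling.

        pred, gt: (H, W) boolean boundary maps
        tol:      distance in pixels within which a boundary pixel matches
        """
        # Distance from every pixel to the nearest boundary pixel of each map.
        d_to_gt = distance_transform_edt(~gt)
        d_to_pred = distance_transform_edt(~pred)
        precision = (d_to_gt[pred] <= tol).mean() if pred.any() else 0.0
        recall = (d_to_pred[gt] <= tol).mean() if gt.any() else 0.0
        if precision + recall == 0:
            return 0.0
        return 2 * precision * recall / (precision + recall)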
Chapter 8: Dense motion estimation
The Middlebury optical flow evaluation Web site, http://vision.middlebury.edu/flow/data (Baker, Scharstein, Lewis et al. 2009); a minimal reader for its .flo format and the endpoint-error metric are sketched after this chapter's entries.
The Human-Assisted Motion Annotation database, http://people.csail.mit.edu/celiu/motionAnnotation/ (Liu, Freeman, Adelson et al. 2008).
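Middlebury distributes ground-truth flow in its .flo format: a float32 sanity tag of 202021.25, an int32 width and height, then interleaved (u, v) float32 components. A minimal reader and the benchmark's average endpoint error, sketched in Python (in a real evaluation, pixels with unknown flow, marked by very large sentinel values, would also need masking):

    import numpy as np

    def read_flo(path):
        """Read a Middlebury .flo file into an (H, W, 2) float32 array."""
        with open(path, 'rb') as f:
            tag = np.fromfile(f, np.float32, count=1)[0]
            assert tag == 202021.25, 'not a valid .flo file'
            w = int(np.fromfile(f, np.int32, count=1)[0])
            h = int(np.fromfile(f, np.int32, count=1)[0])
            data = np.fromfile(f, np.float32, count=2 * w * h)
        return data.reshape(h, w, 2)  # [..., 0] is u, [..., 1] is v

    def average_endpoint_error(flow, gt):
        """Mean Euclidean distance between estimated and true flow vectors."""
        return float(np.sqrt(((flow - gt) ** 2).sum(axis=2)).mean())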
Chapter 10: Computational photography
High Dynamic Range radiance maps, http://www.debevec.org/Research/HDR/ (Debevec and Malik 1997); a sketch of the radiance-map merge appears after this chapter's entries.
Alpha matting evaluation Web site, http://alphamatting.com/ (Rhemann, Rother, Wang et al. 2009).
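Debevec and Malik (1997) recover a radiance map as a weighted average over exposures, ln E_i = sum_j w(Z_ij)(g(Z_ij) - ln t_j) / sum_j w(Z_ij), where g is the recovered camera response and w a hat function that discounts under- and over-exposed pixels. Assuming, for brevity, already-linearized images (so g reduces to the identity), a minimal merge might look like this; the function name is hypothetical.

    import numpy as np

    def merge_hdr(images, exposure_times):
        """Merge linearized exposures into a radiance map (Debevec-Malik style).

        images:         list of (H, W) float arrays in [0, 1], assumed linear
        exposure_times: matching list of exposure durations in seconds
        """
        num = np.zeros_like(images[0])
        den = np.zeros_like(images[0])
        for img, t in zip(images, exposure_times):
            w = 1.0 - np.abs(2.0 * img - 1.0)  # hat weight favors mid-range
            num += w * img / t                 # this exposure's radiance estimate
            den += w
        return num / np.maximum(den, 1e-8)     # avoid division by zero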
Chapter 11: Stereo correspondence
Middlebury Stereo Datasets and Evaluation, http://vision.middlebury.edu/stereo/ (Scharstein and Szeliski 2002); the benchmark's bad-pixel statistic is sketched after this chapter's entries.
Stereo Classification and Performance Evaluation of different aggregation costs for stereo matching, http://www.vision.deis.unibo.it/spe/SPEHome.aspx (Tombari, Mattoccia, Di Stefano et al. 2008).
Middlebury Multi-View Stereo Datasets, http://vision.middlebury.edu/mview/data/ (Seitz, Curless, Diebel et al. 2006).
Multi-view and Oxford Colleges building reconstructions, http://www.robots.ox.ac.uk/~vgg/data/data-mview.html.
Multi-View Stereo Datasets, http://cvlab.epfl.ch/data/strechamvs/ (Strecha, Fransens, and Van Gool 2006).
Multi-View Evaluation, http://cvlab.epfl.ch/~strecha/multiview/ (Strecha, von Hansen, Van Gool et al. 2008).
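The two-view Middlebury evaluation of Scharstein and Szeliski (2002) ranks algorithms chiefly by the percentage of "bad pixels": pixels whose computed disparity differs from ground truth by more than a threshold (classically 1.0). A minimal version of that statistic, assuming a sentinel value marks pixels without ground truth:

    import numpy as np

    def bad_pixel_rate(disp, gt, thresh=1.0, unknown=0):
        """Fraction of pixels with ground truth whose error exceeds thresh."""
        valid = gt != unknown                  # mask out unlabeled pixels
        return float((np.abs(disp - gt)[valid] > thresh).mean())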
Chapter 12: 3D reconstruction
HumanEva: synchronized video and motion capture dataset for evaluation of articulated human motion, http://vision.cs.brown.edu/humaneva/ (Sigal, Balan, and Black 2010).
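HumanEva scores pose estimates by the average Euclidean distance between estimated and motion-capture 3D joint (marker) positions. The core of that error measure is a one-liner; the array layout assumed here is illustrative.

    import numpy as np

    def mean_joint_error(pred, gt):
        """Average 3D distance over joints; pred and gt are (J, 3) arrays."""
        return float(np.linalg.norm(pred - gt, axis=1).mean())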
Chapter 13: Image-based rendering
The (New) Stanford Light Field Archive, http://lightfield.stanford.edu/ (Wilburn, Joshi, Vaish et al. 2005).
Virtual Viewpoint Video: multi-viewpoint video with per-frame depth maps, http://research.microsoft.com/en-us/um/redmond/groups/ivm/vvv/ (Zitnick, Kang, Uyttendaele et al. 2004).
Chapter 14: Recognition
For a list of visual recognition datasets, see Tables 14.1–14.2. In addition to those, there are also:
Buffy pose classes, http://www.robots.ox.ac.uk/~vgg/data/buffyposeclasses/ and Buffy stickmen V2.1, http://www.robots.ox.ac.uk/~vgg/data/stickmen/index.html (Ferrari, Marin-Jimenez, and Zisserman 2009; Eichner and Ferrari 2009).
H3D database of pose/joint annotated photographs of humans, http://www.eecs.berkeley.edu/~lbourdev/h3d/ (Bourdev and Malik 2009).
Action Recognition Datasets, http://www.cs.berkeley.edu/projects/vision/action, has pointers to several datasets for action and activity recognition, as well as some papers. The human action database at http://www.nada.kth.se/cvap/actions/ contains more action sequences.