Yale Face Database
Contains 165 grayscale images in GIF format of 15 individuals. There are 11 images per subject, one for each facial expression or configuration: center-light, w/glasses, happy, left-light, w/no glasses, normal, right-light, sad, sleepy, surprised, and wink.
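Given the regular 15 x 11 structure, the images can be indexed by subject and expression tag. The sketch below assumes a filename convention like subject01.centerlight.gif and the tag spellings listed in EXPRESSIONS; both are assumptions to verify against the actual distribution.

```python
# Minimal sketch: index the 165 Yale images by subject and expression tag.
# The filename pattern ("subject01.centerlight.gif", ...) and tag spellings
# are assumptions; adjust them to match the files you actually download.
from collections import defaultdict
from pathlib import Path

EXPRESSIONS = ["centerlight", "glasses", "happy", "leftlight", "noglasses",
               "normal", "rightlight", "sad", "sleepy", "surprised", "wink"]

def index_yale(root):
    """Map subject id -> {expression tag: image path}."""
    index = defaultdict(dict)
    for path in Path(root).glob("subject*"):
        subject, _, rest = path.name.partition(".")  # e.g. "subject01", "centerlight.gif"
        tag = rest.removesuffix(".gif")
        if tag in EXPRESSIONS:
            index[subject][tag] = path
    return index
```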
CMU Pose, Illumination, and Expression (PIE) Database, Carnegie Mellon University
A database of 41,368 images of 68 people, each person photographed under 13 different poses, 43 different illumination conditions, and with 4 different expressions.
AT&T "The Database of Faces" (formerly "The ORL Database of Faces")
Ten different images of each of 40 distinct subjects. For some subjects, the images were taken at different times, varying the lighting, facial expressions (open / closed eyes, smiling / not smiling) and facial details (glasses / no glasses). All the images were taken against a dark homogeneous background with the subjects in an upright, frontal position (with tolerance for some side movement).
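This database is also distributed, rescaled to 64x64 grayscale, as the "Olivetti faces" dataset bundled with scikit-learn, which gives a quick way to experiment with it:

```python
# Load the AT&T/ORL faces via scikit-learn's copy (the "Olivetti faces"):
# 40 subjects x 10 images each, rescaled to 64x64 grayscale in [0, 1].
from sklearn.datasets import fetch_olivetti_faces

faces = fetch_olivetti_faces()   # downloads the data on first call
print(faces.images.shape)        # (400, 64, 64)
print(faces.target[:12])         # subject labels 0..39, ten images per subject
```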
Frontal Face Dataset, California Institute of Technology (Caltech)
450 face images, 896 x 592 pixels, JPEG format. About 27 unique people under different lighting, expressions, and backgrounds.
The AR Face Database, The Ohio State University, USA
4,000 color images corresponding to 126 people's faces (70 men and 56 women). Images feature frontal-view faces with different facial expressions, illumination conditions, and occlusions (sunglasses and scarf).
CAS-PEAL Face Database
The CAS-PEAL face database was constructed under the sponsorship of the National Hi-Tech Program and ISVISION. The goals of creating the PEAL face database include: providing the worldwide face recognition (FR) research community with a large-scale Chinese face database for training and evaluating algorithms; facilitating the development of FR by providing large-scale face images with different sources of variation, especially Pose, Expression, Accessories, and Lighting (PEAL); and advancing state-of-the-art face recognition technology aimed at practical applications, especially for East Asian faces.
Japanese Female Facial Expression (JAFFE) Database
The database contains 213 images of 7 facial expressions (6 basic facial expressions + 1 neutral) posed by 10 Japanese female models. Each image has been rated on 6 emotion adjectives by 60 Japanese subjects.
The UCD Colour Face Image Database for Face Detection
The database has two parts. Part one contains colour pictures of faces with a high degree of variability in scale, location, orientation, pose, facial expression, and lighting conditions, while part two has manually segmented results for each of the images in part one. These images were acquired from a wide variety of sources, such as digital cameras, pictures scanned with a photo scanner, other face databases, and the World Wide Web. The database is intended for distribution to researchers.
Georgia Tech Face Database
The database contains images of 50 people stored in JPEG format. For each individual, there are 15 color images captured between 06/01/99 and 11/15/99. Most of the images were taken in two different sessions to account for variations in illumination conditions, facial expression, and appearance. In addition, the faces were captured at different scales and orientations.
Indian Face Database, IIT Kanpur
The database contains a set of face images taken in February 2002 on the IIT Kanpur campus. There are eleven different images of each of 40 distinct subjects; for some subjects, additional photographs are included. All the images were taken against a bright homogeneous background with the subjects in an upright, frontal position. The files are in JPEG format, and each image is 640x480 pixels with 256 grey levels per pixel. The images are organized in two main directories, males and females. Each of these contains one directory per individual, named with a serial number, holding that subject's eleven images with names of the form abc.jpg, where abc is the image number for that subject. The following orientations of the face are included: looking front, looking left, looking right, looking up, looking up towards left, looking up towards right, and looking down. Available emotions are: neutral, smile, laughter, and sad/disgust.
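The directory layout described above maps directly onto a small enumeration routine. This is a sketch based only on that description (directory names males/females, serial-numbered subject folders, numbered JPEGs); verify the exact names against the actual distribution.

```python
# Sketch: walk the males/females -> serial-number -> numbered-JPEG layout
# described above and yield one record per image.
from pathlib import Path

def iter_faces(root):
    """Yield (gender, subject_id, image_number, path) for every image."""
    for gender in ("males", "females"):
        gender_dir = Path(root) / gender
        for subject_dir in sorted(p for p in gender_dir.iterdir() if p.is_dir()):
            for img in sorted(subject_dir.glob("*.jpg")):
                yield gender, subject_dir.name, img.stem, img

for record in iter_faces("indian_face_db"):   # hypothetical root directory
    print(record)
```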
GavabDB: 3D face database, GAVAB research group, Universidad Rey Juan Carlos, Spain
GavabDB is a 3D face database. It contains 549 three-dimensional images of facial surfaces. These meshes correspond to 61 different individuals (45 male and 16 female), with 9 images per person. All of the individuals are Caucasian, aged between 18 and 40. Each image is given by a mesh of connected 3D points of the facial surface without texture. The database provides systematic variations with respect to pose and facial expression. In particular, the 9 images corresponding to each individual are: 2 frontal views with neutral expression, 2 x-rotated views (±30°, looking up and looking down respectively) with neutral expression, 2 y-rotated views (±90°, left and right profiles respectively) with neutral expression, and 3 frontal gesture images (laugh, smile, and a random gesture chosen by the user, respectively).
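Since each scan is an untextured mesh of connected 3D points, a general-purpose mesh library is enough to inspect one. The sketch below uses the trimesh library and assumes the scan has been converted to a common format such as PLY or OBJ; the filename and format are assumptions, as the native distribution format may differ.

```python
# Sketch: inspect one GavabDB facial scan with trimesh, assuming the mesh
# has been converted to PLY/OBJ. The filename below is hypothetical.
import trimesh

mesh = trimesh.load("gavabdb/cara1_frontal1.ply")
print(mesh.vertices.shape)   # (N, 3) facial surface points (no texture)
print(mesh.faces.shape)      # (M, 3) triangle connectivity
```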
Natural Visible and Infrared facial Expression database (USTC-NVIE)
The database contains both spontaneous and posed expressions of more than 100 subjects, recorded simultaneously by a visible and an infrared thermal camera, with illumination provided from three different directions. The posed database also includes expression images with and without glasses. A paper describing the database is also available.
CAFE - The Child Affective Face Set
Although there is a large amount of research examining the perception of emotional facial expressions, almost all of it has focused on the perception of adult facial expressions. There are several excellent stimulus sets of adult facial expressions that can be easily obtained and used in scientific research (e.g., NimStim, Ekman faces). However, there is no complete stimulus set of child affective facial expressions, and thus research on the perception of children making affective facial expressions is sparse. In order to fully understand how humans respond to and process affective facial expressions, it is important to have this understanding across development, not only for adult faces. The Child Affective Facial Expressions Set (CAFE) is the first attempt to create a large and representative set of children making a variety of affective facial expressions for scientific research in this area. The set is made up of 1200 photographs of over 100 child models (ages 2-8) making 7 different facial expressions: happy, angry, sad, fearful, surprised, neutral, and disgusted.
Senthil IRTT Face Database Version 1.1
This database contains colour and grayscale facial images of students at IRTT (Institute of Road and Transport Technology). There are 317 facial images of 13 IRTT students, all around 23 to 24 years old. The images, along with background, were captured with a 14.1-megapixel Canon digital camera. The cropped faces are 550x780 pixels and are further downscaled by a factor of 5. Of the 13 subjects, 12 are male and one is female. Each subject appears with a variety of facial expressions and poses, as well as accessories such as light makeup, a scarf, and a hat.
Virginia Tech - Arab Academy for Science & Technology (VT-AAST) Benchmarking Dataset
The VT-AAST image database is a color face image database for benchmarking automatic face detection algorithms and human skin segmentation techniques. It is divided into four parts. Part one is a set of 286 color photographs that include a total of 1027 faces in the original format given by the digital cameras, offering a wide range of differences in orientation, pose, environment, illumination, facial expression, and race. Part two contains the same set in a different file format. The third part is a set of corresponding image files that contain human skin regions resulting from a manual segmentation procedure. The fourth part of the database has the same regions converted into grayscale. The database is available online for noncommercial use.
Facial Expression Research Group Database (FERG-DB)
Facial Expression Research Group Database (FERG-DB) is a database of stylized characters with annotated facial expressions. It contains multiple face images of six stylized characters, modelled in Maya and rendered in 2D. The images for each character are grouped into seven types of expression: anger, disgust, fear, joy, neutral, sadness, and surprise.
Specs on Faces (SoF) Dataset
The SoF dataset is a collection of 42,592 (2,662×16) images of 112 persons (66 males and 46 females) who wear glasses under different illumination conditions. The dataset is free for reasonable academic fair use. It presents a new challenge for face detection and recognition, and is devoted to two problems that affect face detection, recognition, and classification: harsh illumination environments and face occlusions. Glasses are the common natural occlusion in all images of the dataset, but they are not the sole facial occlusion: two synthetic occlusions (nose and mouth) are added to each image. Moreover, three image filters that may evade face detectors and facial recognition systems were applied to each image. All generated images are categorized into three levels of difficulty (easy, medium, and hard), which enlarges the dataset to 42,592 images (26,112 male images and 16,480 female images). Furthermore, the dataset comes with metadata that describes each subject from different aspects. The original images (without filters or synthetic occlusions) were captured in different countries over a long period. Usage: gender classification, face detection, facial landmark estimation, emotion recognition, eyeglasses detection, and age classification.
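With 16 variants per original image and per-subject metadata, a typical first step is to filter images by difficulty level. The sketch below assumes the metadata has been exported to a CSV with hypothetical columns "filename" and "difficulty"; the real metadata layout may differ.

```python
# Sketch: select images at one difficulty level (easy / medium / hard)
# from a CSV export of the SoF metadata. Column names are hypothetical.
import csv

def by_difficulty(metadata_csv, level):
    """Return the filenames whose difficulty column matches `level`."""
    with open(metadata_csv, newline="") as f:
        return [row["filename"] for row in csv.DictReader(f)
                if row["difficulty"] == level]

hard_images = by_difficulty("sof_metadata.csv", "hard")   # hypothetical file
print(len(hard_images))
```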
The IST-EURECOM Light Field Face Database
The IST-EURECOM Light Field Face Database includes data from 100 subjects, captured by a Lytro ILLUM camera in two sessions separated by 1 to 6 months, with 20 samples per person per session. To simulate multiple scenarios, the images are captured with several facial variations, covering a range of emotions, actions, poses, illuminations, and occlusions. The database includes the raw light field images, 2D rendered images, and associated depth maps, along with a rich set of metadata. The first part of the database, captured at Instituto de Telecomunicações - Instituto Superior Técnico, Lisbon, Portugal, can be accessed at http://www.img.lx.it.pt/LFFD/. The second part, captured at EURECOM, SophiaTech Campus, Nice, France, can be accessed at http://lffd.eurecom.fr/