Datasets

SNAP-2DFE Synchronized NAtural and Posed 2D Facial Expressions

In order to quantify the impact of free head movements on expression recognition performance, we propose an innovative acquisition system that collects data simultaneously in the presence and absence of head movement.

Each facial expression is recorded simultaneously by a two-camera system: one camera is fixed on a helmet, while the other is placed in front of the user at near range. The database makes it possible to measure the impact of head movements by comparing the information returned by the frontal camera with that of the helmet camera. In each sequence, the user follows a specific movement pattern corresponding to one of the following animations: a translation along the Ox axis, combined with three rotations (roll, yaw, pitch).

Facial landmarks are provided as annotations for both cameras. Sequences are also annotated with the type of acted expression (Neutral, Happiness, Fear, Anger, Disgust, Sadness, Surprise). A 3-axis gyroscope provides head orientation information throughout each sequence.
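As an illustration of how the gyroscope data can be used, the sketch below integrates 3-axis angular velocities into cumulative head orientation angles. The sample format (roll/pitch/yaw rates in deg/s at a fixed sampling rate) is an assumption made for this example, not the dataset's actual file format.

```python
# Sketch: estimating cumulative head orientation from 3-axis gyroscope
# samples. The (roll, pitch, yaw) deg/s tuple format and fixed sampling
# rate are assumptions for illustration; the SNAP-2DFE files may differ.

def integrate_gyro(samples, rate_hz):
    """Integrate (roll, pitch, yaw) angular velocities (deg/s) into
    cumulative orientation angles (deg), one tuple per sample."""
    dt = 1.0 / rate_hz
    roll = pitch = yaw = 0.0
    trajectory = []
    for wr, wp, wy in samples:
        roll += wr * dt
        pitch += wp * dt
        yaw += wy * dt
        trajectory.append((roll, pitch, yaw))
    return trajectory

# Example: a constant 30 deg/s yaw rotation held for 1 s at 10 Hz
# accumulates to roughly 30 degrees of yaw.
traj = integrate_gyro([(0.0, 0.0, 30.0)] * 10, rate_hz=10)
print(tuple(round(a, 3) for a in traj[-1]))  # (0.0, 0.0, 30.0)
```

Such per-frame orientation estimates can then be aligned with the video frames to separate expression frames recorded with and without significant head motion.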


Dataset size: 1260 videos collected from 15 subjects

Download: User agreement to be signed and sent to the owner - this agreement states the general terms of use.

Contact: I.M. Bilasco - Marius[dot]Bilasco[at]univ-lille1.fr

Multimodal Identification and Head Orientation Dataset

This dataset contains images of faces acquired in a lab environment under different:

  • head poses;
  • lighting settings;
  • facial expressions;
  • modalities: time-of-flight camera, stereoscopic camera, Kinect.


Stereoscopic capture.


Time-of-flight capture.


Kinect capture.

Dataset size: 2624 images - 64 different persons

Download: User agreement to be signed and sent to the owner - this agreement states the general terms of use.

Contact: Jean Martinet - Jean[dot]Martinet[at]univ-lille1.fr

Web Emotion Dataset

This dataset contains face images with various facial expressions (happiness, sadness, surprise, anger, neutral). The images were gathered by querying Google and Flickr with emotion-related terms. False positives were filtered out first using a face detector, then manually.

Dataset size: 5800 images - 5 classes

Download: this data is not public. Please contact the person in charge of it for access conditions.

Contact: Marius Bilasco - Marius[dot]Bilasco[at]univ-lille1.fr


Web Gender Dataset

This dataset contains images of male and female faces gathered by querying Google with gender-related terms in various languages (French, English, German, Chinese, Turkish...). False positives were filtered out first using an automatic face detector, then manually.

Dataset size: 4700 images - 2 classes

Download: this data is not public. Please contact the person in charge of it for use terms.

Contact: Marius Bilasco - Marius[dot]Bilasco[at]univ-lille1.fr

Eye Center Annotations

This dataset provides annotations of the positions of eye centers for two datasets of face images: Caltech Faces and YouTube Faces. All annotations were performed manually.
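Eye-center annotations like these are typically used to evaluate eye localization methods. The sketch below implements the widely used normalized error of Jesorsky et al. (worst eye-center distance divided by the ground-truth interocular distance); this is a common convention, not necessarily the evaluation protocol intended by the dataset authors.

```python
import math

def normalized_eye_error(pred_left, pred_right, gt_left, gt_right):
    """Normalized localization error: the worse of the two eye-center
    distances, divided by the ground-truth interocular distance
    (the criterion of Jesorsky et al.)."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    interocular = dist(gt_left, gt_right)
    return max(dist(pred_left, gt_left),
               dist(pred_right, gt_right)) / interocular

# A prediction 5 px off on one eye, with a 100 px interocular distance,
# yields an error of 0.05. Errors below 0.25 roughly correspond to a
# detection landing within the eye region.
err = normalized_eye_error((105, 50), (200, 50), (100, 50), (200, 50))
print(err)  # 0.05
```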

Dataset size: Annotations for 450 images (Caltech Faces) and 5,000 images (YouTube Faces)

Download: this data is not public. Please contact the person in charge of it for use terms.

Contact: Marius Bilasco - Marius[dot]Bilasco[at]univ-lille1.fr


GENKI4K Gender-Emotion Annotations

This dataset provides gender and emotion annotations for the GENKI4K face image dataset. All annotations were performed manually.

Dataset size: Annotations for 4000 images

Download: this data is not public. Please contact the person in charge of it for use terms.

Contact: Marius Bilasco - Marius[dot]Bilasco[at]univ-lille1.fr


Gender LFW Pro Annotations

This dataset provides gender annotations for the face image dataset LFW Pro.

Dataset size: Annotations for 7895 images

Download: this data is not public. Please contact the person in charge of it for use terms.

Contact: Marius Bilasco - Marius[dot]Bilasco[at]univ-lille1.fr


FOX Persontracks

This dataset is adapted from the dataset developed for the REPERE challenge (identification of individuals in videos), funded by the ANR (French research funding agency). It consists of short video shots from TV broadcast shows, each showing a single individual, together with the name of the individual appearing in each video. These shots were obtained from the original video ground truth using automatic shot segmentation followed by manual filtering of erroneous shots. The videos are also available after automatic background subtraction.


From left to right: initial image, face detection, head segmentation.

Dataset size: 4,604 persontracks featuring 266 persons

Contents:
  1. Persontracks
  2. Ground truth assigning an identity to each persontrack
  3. Ground truth providing face locations (manual annotation) on 2,081 key frames
  4. Face locations (automatic detection) for every frame
  5. Persontracks after (automatic) background subtraction
  6. Evaluation software
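Since the dataset provides both manually annotated face locations and automatic detections, a natural check is to match the two by bounding-box overlap. The sketch below computes the standard intersection-over-union measure; the (x, y, w, h) box format is an assumption for illustration, not the dataset's actual annotation format.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes given as
    (x, y, w, h). Note: this box format is an assumption made for
    illustration, not necessarily the dataset's own."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))  # overlap width
    iy = max(0, min(ay + ah, by + bh) - max(ay, by))  # overlap height
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union else 0.0

# Identical boxes overlap fully; disjoint boxes not at all.
print(iou((10, 10, 50, 50), (10, 10, 50, 50)))  # 1.0
print(iou((0, 0, 10, 10), (100, 100, 10, 10)))  # 0.0
```

An automatic detection is then commonly counted as correct when its IoU with the manual annotation exceeds a threshold such as 0.5.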
Download: this dataset is available through ELDA under ISLRN 168-132-570-218-1 (free of charge for non-commercial use)

References: please cite the following paper in any publication using this dataset:

Introducing FoxPersonTracks: a Benchmark for Person Re-Identification from TV Broadcast Shows
Rémi Auguste, Pierre Tirilly and Jean Martinet
IEEE Workshop on Content-Based Multimedia Indexing (CBMI) 2015
Prague, Czech Republic


Contact: Jean Martinet - Jean[dot]Martinet[at]univ-lille1.fr


TV Broadcast Talking Heads dataset

This dataset is adapted from the dataset developed for the REPERE challenge (identification of individuals in videos), funded by the ANR (French research funding agency). It provides video shots in which individual faces were annotated as speaking or not speaking. The dataset was obtained by combining the audio and video ground truth of the original data and manually filtering out erroneous shots.

Dataset size: 334 videos - faces annotated into two categories (speaking / not speaking)

Download: this data is not public. Please contact the person in charge of it for access conditions.

Contact: Pierre Tirilly - Pierre[dot]Tirilly[at]lifl.fr