LILiR Twotalk Corpus
The LILiR Twotalk corpus consists of four two-person (dyadic) conversations recorded with minimal constraints on participant behavior. Each of the four conversations lasted 12 minutes and was recorded with two PAL progressive scan cameras and one microphone; eight subjects took part. Annotation was performed by multiple annotators from various cultures on 527 clips extracted from the longer videos. The conversation participants were only instructed to be seated and to talk; the topic of conversation was not constrained. All videos have tracking data supplied using linear predictor (LP) flock trackers with shape constraints (Ong and Bowden, 2008).
Annotation data was collected from paid and volunteer Internet annotators. The annotation focused on four non-verbal communication categories:
| Question category | Minimum score | Maximum score |
| --- | --- | --- |
| Does this person disagree or agree with what is being said? | Strong disagreement | Strong agreement |
| Is this person thinking hard? | No indication | In deep thought |
| Is this person asking a question? | No indication | Definitely asking a question |
| Is this person indicating they understand what is being said to them? | No indication or N/A | Strongly indicating understanding |
Analysis of the collected annotation data is presented in our paper (see below).
Manually selected NVC clips
- Archive of 527 files (161 MB), MPEG-4 encoded, 720×576 pixels.
- Time offset of clips within the longer conversation
Entire conversation videos
- Low quality videos (523 MB) (downloadable on request)
- High quality videos - these are available on DVD upon request
Multi-Cultural NVC Annotation
The main annotation data set includes 79,130 individual clip ratings by 711 different annotators. The data was collected from Mechanical Turk, Samasource and our own annotation web site; some of the annotators were paid to provide this annotation service. Because some annotators were uncooperative, this data is noisy, and it is referred to as the "unfiltered data". We also make the filtered data available; the filtering method is described in our 2011 paper. The filtered data contains annotations from 388 annotators based in three areas (26 Great Britain, 167 India, 195 Kenya). The archive contains documentation that describes the data set in more detail.
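The actual filtering method is described in the 2011 paper; purely as an illustration of the general idea of screening out uncooperative annotators, the sketch below (not the paper's method, and using a made-up data layout) drops annotators whose ratings correlate poorly with the per-clip mean of the remaining annotators:

```python
import numpy as np

def filter_annotators(ratings, min_corr=0.3):
    """Illustrative annotator screening, NOT the method from the 2011 paper.

    ratings: 2D float array, rows = annotators, cols = clips, NaN = unrated.
    Returns indices of annotators whose ratings correlate with the
    per-clip mean rating of all *other* annotators by at least min_corr.
    """
    keep = []
    for i in range(ratings.shape[0]):
        others = np.delete(ratings, i, axis=0)
        clip_means = np.nanmean(others, axis=0)          # consensus per clip
        rated = ~np.isnan(ratings[i]) & ~np.isnan(clip_means)
        if rated.sum() >= 2:                             # need >=2 points for r
            r = np.corrcoef(ratings[i][rated], clip_means[rated])[0, 1]
            if r >= min_corr:
                keep.append(i)
    return keep

# Three broadly consistent annotators and one erratic one (synthetic data):
ratings = np.array([
    [1, 2, 3, 4, 5],
    [1, 2, 3, 4, 5],
    [2, 2, 3, 4, 4],
    [5, 1, 4, 1, 3],   # erratic annotator, anti-correlated with the rest
], dtype=float)
print(filter_annotators(ratings))  # [0, 1, 2]
```

A correlation-with-consensus screen is just one plausible heuristic; the released "filtered data" was produced by the procedure documented in the paper and the archive's documentation.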
A subset of frames have been manually annotated for the purposes of tracking evaluation.
Automatic tracking was performed on 46 facial features using linear predictor tracking. This tracking was used as the basis of the 2009 and 2011 NVC papers.
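At its core, a linear predictor of the kind used in this style of tracking maps intensity differences at a set of support pixels to a 2D feature displacement via a matrix learned offline. The sketch below illustrates only that prediction step, with made-up values (it is not the trackers used for the corpus):

```python
import numpy as np

# Minimal sketch of one linear predictor's update step (hypothetical values).
# A learned matrix P maps the difference between observed support-pixel
# intensities and a reference template to a 2D displacement of the feature:
#     delta = P @ (observed - template)
rng = np.random.default_rng(0)

n_support = 20                                   # number of support pixels (assumption)
P = rng.standard_normal((2, n_support)) * 0.1    # in practice learned from training data
template = rng.random(n_support)                 # reference intensities at the feature

observed = template.copy()                       # unchanged appearance...
delta = P @ (observed - template)                # ...predicts zero displacement
print(delta)                                     # [0. 0.]
```

In the full trackers, many such predictors are grouped into rigid flocks per feature and combined with shape constraints, as described in Ong and Bowden's paper.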
Volunteer NVC Annotation used in Earlier Work
These annotation data sets are subsets of the above multi-cultural annotation data set, but are included because of their use in our earlier published work.
- 31 annotator version, primarily by UK based volunteers.
- A subset of the above annotators, which was used in our 2009 ICCV workshop paper (21 annotators in total).
- Demographics for annotators, in full and simplified versions.
Screenshots of Annotation Questions
For further information, contact Tim Sheerman-Chase. We are still collecting annotation data for further research into non-verbal communication. You can assist our research by annotating video clips for NVC signals. This work was supported by the EPSRC funded project LILiR.
If you publish research based on this data, please cite: Tim Sheerman-Chase, Eng-Jon Ong, Richard Bowden. Cultural Factors in the Regression of Non-verbal Communication Perception. In Workshop on Human Interaction in Computer Vision, Barcelona, 2011.
Other Papers that Use the LILiR Twotalk Corpus
- Hatice Çinar Akakin, Bülent Sankur: Robust classification of face and head gestures in video. Image Vision Comput. 29(7): 470-483 (2011). DOI
- Tim Sheerman-Chase, Eng-Jon Ong, Richard Bowden. Feature Selection of Facial Displays for Detection of Non Verbal Communication in Natural Conversation. In IEEE International Workshop on Human-Computer Interaction, Kyoto, 2009.
- Eng-Jon Ong, Richard Bowden. Robust Lip-Tracking using Rigid Flocks of Selected Linear Predictors. In 8th IEEE International Conference on Automatic Face and Gesture Recognition, Amsterdam, September 2008.