Machine learning


Multimodal annotation is time-consuming, and Red Hen is involved in a series of projects that recruit the help of computers to simplify the task. Computer scientists are actively developing tools that train new classifiers to encode the regularities and patterns in a particular set of manual annotations. Such classifiers can in turn be used to propagate the manual annotations automatically to a larger dataset. Automated gesture recognition of this kind can then generate new metadata that makes new forms of communication research possible.

The methods are imperfect, and the types of manual annotations are rich and varied, so high-quality classifiers typically need feedback from the user in a recursive learning process. One of Red Hen's goals is to integrate Elan into such semi-supervised machine learning systems.
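The propagation step described above can be illustrated with a minimal sketch. This is not Red Hen's actual pipeline; it assumes each video clip has already been reduced to a hypothetical feature vector, and simply spreads labels from manually annotated clips to their nearest unannotated neighbours over several rounds, a crude stand-in for the semi-supervised methods the projects use.

```python
import numpy as np

def propagate_labels(features, labels, n_rounds=5):
    """Spread annotations from labeled to unlabeled examples.

    features : (n, d) array of per-clip feature vectors (hypothetical)
    labels   : (n,) int array; -1 marks clips with no manual annotation

    Each round, every unlabeled clip adopts the label of its nearest
    labeled neighbour, so manual annotations ripple outward through
    the dataset. Real systems would add a confidence threshold and
    ask the user to review uncertain cases before accepting them.
    """
    labels = labels.copy()
    for _ in range(n_rounds):
        unlabeled = np.where(labels == -1)[0]
        labeled = np.where(labels != -1)[0]
        if len(unlabeled) == 0 or len(labeled) == 0:
            break
        for i in unlabeled:
            # Euclidean distance to every currently labeled clip
            dists = np.linalg.norm(features[labeled] - features[i], axis=1)
            labels[i] = labels[labeled[np.argmin(dists)]]
    return labels

# Toy example: two clusters, one manual annotation in each
feats = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
manual = np.array([0, -1, 1, -1])
print(propagate_labels(feats, manual))  # → [0 0 1 1]
```

The recursive element Red Hen aims for would close the loop: the researcher corrects the propagated labels in Elan, and the corrected set becomes the training data for the next round.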

Related pages

Machine learning projects

Red Hen is involved in a series of machine learning projects, and is rapidly developing new capacities in this area.
  1. Social trait analysis with Jungseock Joo and Song-Chun Zhu (since 2014). Weixin Li has set up a facial analysis pipeline on the Case HPC that uses torch7.
  2. iMotion project with Heiko Schuldt's team at the University of Basel (since Spring 2015).
  3. Soumya Ray's graduate course project on timeline gestures (since Fall 2015). See the page for Automatic Detection of Timeline Gestures. See also the page for Tagging for Likelihood of Gesture Data, which involves not a classifier but a motion detector.
Professor Ray's project aims to develop the ability to recognize timeline gestures in videos, based on the manual annotations of 140 instances contributed by Red Hen researchers Javier Valenzuela and Cristóbal Pagán Cánovas.

Red Hen is proposing several projects for Google Summer of Code 2016 focused on machine learning; see the Ideas page.


Deep learning frameworks