Key Laboratory of Computer Network and Information Integration, Ministry of Education (Southeast University)




2011 Academic Talks


Animating faces from speech

Time: May 5, 2011    Location: Room 401, Computer Science Building, Jiulonghu Campus

Abstract:

    In this work, I describe a model of the correspondence between facial motion and speech. The face and the sound are modelled separately, with phonemes serving as the link between the two. We propose a sequential model and evaluate its suitability for generating facial animation from a sequence of phonemes, which we obtain from speech. We evaluate the results both by computing the error between generated sequences and real video, and with a rigorous double-blind test with human subjects. Experiments show that our model compares favourably to other existing methods and that the generated sequences are comparable to real video sequences.
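To make the phoneme-to-animation idea concrete, here is a minimal illustrative sketch, not the speaker's actual probabilistic model: each phoneme maps to a target mouth-shape ("viseme") parameter, and a sequential pass interpolates between consecutive targets so the generated trajectory varies smoothly over time, loosely mimicking coarticulation. The phoneme table and its values are invented for illustration.

```python
# Hypothetical phoneme -> mouth-opening lookup; values in [0, 1] are made up.
VISEME = {"p": 0.0, "b": 0.0, "a": 1.0, "o": 0.7, "s": 0.2, "sil": 0.1}

def animate(phonemes, frames_per_phoneme=4):
    """Return a per-frame mouth-opening trajectory for a phoneme sequence.

    Each phoneme contributes `frames_per_phoneme` frames that cross-fade
    linearly toward the next phoneme's target, so successive visemes
    blend into each other instead of switching abruptly.
    """
    targets = [VISEME.get(p, 0.3) for p in phonemes]  # 0.3 = default shape
    frames = []
    for i, cur in enumerate(targets):
        nxt = targets[i + 1] if i + 1 < len(targets) else cur
        for f in range(frames_per_phoneme):
            t = f / frames_per_phoneme
            frames.append((1 - t) * cur + t * nxt)  # linear cross-fade
    return frames

traj = animate(["sil", "b", "a", "sil"])
```

A real system would replace the lookup table and linear blend with a learned sequential model over full facial-motion parameters, but the pipeline shape — phonemes in, smooth per-frame animation parameters out — is the same.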

Speaker bio:

    Gwenn Englebienne is currently a postdoctoral researcher at the University of Amsterdam. He received his PhD from the Computer Science Department of the University of Manchester in 2009 for his work on modelling facial motion with probabilistic models. He has since focused on developing probabilistic models of human behaviour, using both simple sensors and video cameras. He has collaborated intensively with Philips Research on machine-vision projects. Most recently, he has worked on monitoring the health of elderly people with non-intrusive sensors.
   

Copyright © Key Laboratory of Computer Network and Information Integration, Ministry of Education, Southeast University. All rights reserved.