

Computer Graphics 2016: Co-articulation and speech synchronization for MPEG-4 based facial animation - Abdennour El Rhalibi - Liverpool John Moores University

Abstract

In this talk, Professor Abdennour El Rhalibi will present a summary of his research in games technologies at LJMU. He will cover some recent projects developed with BBC R&D on games middleware development and facial animation. In particular, he will introduce a novel co-articulation and speech synchronization framework based on MPEG-4 facial animation. The system, known as Charisma, enables the creation, editing, and playback of high-resolution 3D models; supports MPEG-4 animation streams; and is compatible with well-known related systems such as Greta and Xface. It supports text-to-speech for dynamic speech synchronization, and the framework also allows real-time model simplification using quadric-based surfaces. The co-articulation method provides realistic and high-performance lip-sync animation, based on a Cohen-Massaro co-articulation model adapted to the MPEG-4 Facial Animation (FA) specification. He will also discuss experiments showing that the co-articulation technique compares favourably with related state-of-the-art techniques.

Facial animation has attracted immense fascination and interest from both the research community and industry. Its uses are immensely broad: from the movie industry, where it is used to generate CGI characters or even to recreate real actors in CGI animation, to the games industry, where, coupled with several advances in both hardware and software, it has permitted the creation of realistic characters that immerse players like never before. More recently, facial animation has also reached other sectors and applications, such as virtual presence and medical research. A few toolkits and frameworks are dedicated to facial animation, such as FaceGen, among other tools. Such broad potential and fascination around this subject has generated intense and dedicated research over the past three decades. This has led to the creation of several branches that focus on key aspects, such as modelling (e.g. wrinkles, skin, and hair), and embodied agent systems that enhance the user's experience by endowing virtual characters with subsystems involving personality, emotions, and speech moods. These subsystems contribute to mimicking our behaviours and physical appearance more realistically, and all play an important role in communication, either directly through the use of speech, or indirectly through body gestures, gaze, moods, and expressions.
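The "quadric-based surfaces" mentioned above presumably refer to simplification in the spirit of quadric error metrics: each vertex accumulates a 4x4 quadric Q from the planes of its incident triangles, and a candidate position v is scored by the error v^T Q v. A minimal, hypothetical sketch of that error computation (the plane data and function names are illustrative, not from the Charisma framework):

```python
import numpy as np

def plane_quadric(a, b, c, d):
    """Fundamental quadric K_p = p p^T for the plane ax + by + cz + d = 0
    (coefficients assumed normalized so (a, b, c) is a unit normal)."""
    p = np.array([a, b, c, d], dtype=float)
    return np.outer(p, p)

def vertex_error(Q, v):
    """Quadric error v^T Q v for the homogeneous vertex (x, y, z, 1):
    the sum of squared distances to all planes folded into Q."""
    vh = np.append(np.asarray(v, dtype=float), 1.0)
    return float(vh @ Q @ vh)

# A vertex shared by faces lying on the planes z = 0 and x = 0:
Q = plane_quadric(0, 0, 1, 0) + plane_quadric(1, 0, 0, 0)
err_on = vertex_error(Q, (0, 0, 0))    # on both planes, zero error
err_off = vertex_error(Q, (1, 0, 2))   # distances 1 and 2, error 1 + 4 = 5
```

During simplification, the edge collapse with the smallest such error is performed first, which is what keeps the low-resolution face model close to the original surface.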
However, speech is the principal direct means of communication between embodied agents and the user. This justifies the efforts made by the research community over the last thirty years toward the creation of realistic synthetic speech and lip movements for virtual characters. In the past two decades, several advances have permitted the creation of audio-visual synthetic speech for virtual characters, involving speech processing enabled by the creation of text-to-speech engines such as MaryTTS (Schröder, 2001). With the creation of these speech engines, coarticulation also saw several advances (Massaro, 1993; Pelachaud, 2002; Sumedha and Nadia, 2003; Terry and Katsaggelos, 2008), which have been further complemented by advances in facial animation. Initial lip-sync studies attempted to concatenate phonetic segments; however, it was found that phonemes do not always achieve their ideal target shape, due to the influence of consecutive phonetic segments on each other. This phenomenon is known as coarticulation. Coarticulation refers to the phonetic overlap, or changes in articulation, that occur between segments and prevent a segment from reaching its perfect target shape. Coarticulation effects can be divided into two main phenomena: perseverative coarticulation, in which segments are affected by the preceding ones; and anticipatory coarticulation, in which segments are affected by the upcoming ones.
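The Cohen-Massaro model mentioned in the abstract handles these effects by giving each phonetic segment a dominance function (typically a negative exponential centred on the segment) and computing each articulatory parameter as a dominance-weighted average of the segments' ideal targets, so neighbouring phonemes pull a segment away from its full target shape. A minimal sketch under those assumptions (the constants and the lip-opening targets are hypothetical, not values from the talk):

```python
import math

def dominance(t, center, alpha=1.0, theta=10.0, c=1.0):
    """Negative-exponential dominance of a segment at time t (seconds)."""
    return alpha * math.exp(-theta * abs(t - center) ** c)

def blended_parameter(t, segments):
    """Dominance-weighted average of segment targets (Cohen-Massaro style).

    segments: list of (center_time, target_value) pairs, e.g. lip-opening
    targets for consecutive phonemes. Because every segment's dominance is
    nonzero everywhere, neighbours always dilute a phoneme's ideal target,
    which is exactly the coarticulation effect described above.
    """
    num = sum(dominance(t, c0) * target for c0, target in segments)
    den = sum(dominance(t, c0) for c0, _ in segments)
    return num / den

# Hypothetical lip-opening targets for a closed-open-closed sequence /m a m/:
segments = [(0.0, 0.0), (0.1, 1.0), (0.2, 0.0)]
mid = blended_parameter(0.1, segments)  # ~0.58: /a/ never reaches its full target of 1.0
```

Sampling `blended_parameter` over time yields a smooth articulatory trajectory, which in the MPEG-4 FA setting would then drive the corresponding facial animation parameters.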

Biography:

Professor Abdennour El Rhalibi is Professor of Entertainment Computing and Head of Strategic Projects at Liverpool John Moores University. He is head of the Computer Games Research Lab at the PROTECT Research Centre. He has over 22 years of experience in computer science research and teaching. He has served as lead researcher in three EU projects in France and the UK. His current research involves games technologies and applied artificial intelligence. For six years he has led several projects in entertainment computing funded by the BBC and UK games companies, including cross-platform game development tools, 3D web-based game middleware development, state synchronization in multiplayer online games, peer-to-peer MMOGs, and 3D character animation. He has published over 150 publications in these areas. He serves on the editorial boards of many journals, including ACM Computers in Entertainment and the International Journal of Computer Games Technology. He has served as chair and IPC member for over 100 conferences on computer entertainment, artificial intelligence, and virtual reality. He is a member of many international research committees in AI and entertainment computing, including the IEEE MMTC Interest Group on 3D Rendering, Processing and Communications (3DRPCIG), the IEEE Task Force on Computational Intelligence in Video Games, and IFIP WG 14.4 on Games and Entertainment Computing.

Abdennour El Rhalibi
