
Institute of Information Science, Academia Sinica

Events


Academic Talks


TIGP (SNHCC) -- Virtual Musician: An Automated System for Generating Expressive Virtual Violin Performances from Music

  • Speaker: Dr. 林鼎崴 (Institute of Information Science, Academia Sinica)
    Host: TIGP (SNHCC)
  • Time: 2025-10-13 (Mon.) 14:00 ~ 16:00
  • Venue: Auditorium 106, New IIS Building
Abstract
Motion-capture (MOCAP)-free music-to-performance generation using deep generative models has emerged as a promising solution for the next generation of animation technologies, enabling the creation of animated musical performances without relying on motion capture. However, building such systems presents substantial challenges, particularly in integrating multiple independent models responsible for different aspects of avatar control, such as facial expression generation for emotive dynamics and fingering generation for instrumental articulation. Moreover, most existing approaches primarily focus on human-only performance generation, overlooking the critical role of human-instrument interactions in achieving expressive and realistic musical performances.
To address these limitations, this dissertation proposes a comprehensive system for generating expressive virtual violin performances. The system integrates five key modules—expressive music synthesis, facial expression generation, fingering generation, body movement generation, and video shot generation—into a unified framework. By eliminating the need for MOCAP and explicitly modeling human-instrument interactions, this work advances the field of MOCAP-free content-to-performance generation. Extensive experiments, including quantitative analyses and user studies, demonstrate the system's ability to produce realistic, expressive, and synchronized virtual performances, paving the way for interactive applications such as VTubing, music education, and virtual concerts.
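The abstract describes five modules composed into a unified framework. A minimal sketch of how such a pipeline might be wired together is shown below; all module names, interfaces, and the ordering of dependencies are illustrative assumptions, not the actual system's API.

```python
# Hypothetical sketch of a five-module music-to-performance pipeline,
# loosely following the modules named in the abstract. Each "module"
# here is a stub standing in for a deep generative model.

from dataclasses import dataclass


@dataclass
class Performance:
    """Collects the outputs of each generation module."""
    audio: str = ""      # expressive music synthesis
    face: str = ""       # facial expression generation
    fingering: str = ""  # fingering generation
    body: str = ""       # body movement generation
    shots: str = ""      # video shot generation


# Stub modules (assumed interfaces; real models would return tensors/animations).
def synthesize_music(score: str) -> str:
    return f"audio({score})"

def generate_face(audio: str) -> str:
    return f"face({audio})"

def generate_fingering(score: str) -> str:
    return f"fingering({score})"

def generate_body(audio: str, fingering: str) -> str:
    # Conditioning body motion on both audio and fingering is one way to
    # model the human-instrument interaction emphasized in the abstract.
    return f"body({audio},{fingering})"

def generate_shots(body: str) -> str:
    return f"shots({body})"


def render_performance(score: str) -> Performance:
    """Run the five modules in dependency order on a symbolic score."""
    p = Performance()
    p.audio = synthesize_music(score)
    p.face = generate_face(p.audio)
    p.fingering = generate_fingering(score)
    p.body = generate_body(p.audio, p.fingering)
    p.shots = generate_shots(p.body)
    return p
```

The key design point the abstract highlights is that these stages share conditioning signals (here, body motion depends on both audio and fingering) rather than running as fully independent models.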