Dynamic 3D Scene Modeling from 3D Geometry to 2D Videos

Posted by: Cao Lingling    Date: 2025-05-09

Speaker: Junhui Hou, Associate Professor, City University of Hong Kong

Host: Yuheng Jia

Time: 16:30, Monday, May 12, 2025

Venue: Lecture Hall 513, Computer Science Building, Jiulonghu Campus, Southeast University

Abstract: In this talk, I will showcase our recent advances in dynamic 3D scene modeling from either 3D geometry data or monocular 2D videos. First, I will introduce DynoSurf, a framework for reconstructing topologically consistent dynamic 3D meshes from continuous sequences of 3D point clouds with unknown temporal correspondences. I will then present two 3D Gaussian Splatting (GS)-based frameworks: RigGS, which models articulated objects captured in monocular videos to enable novel view synthesis while remaining easily editable, drivable, and re-posable; and MoDGS, a pipeline for rendering novel views of dynamic scenes from casually captured monocular videos. Finally, I will describe a new novel view synthesis paradigm that operates without any training, leveraging the powerful generative capabilities of pre-trained large video diffusion models.

Speaker Bio: Junhui Hou is an Associate Professor in the Department of Computer Science, City University of Hong Kong. His research focuses on multi-dimensional visual computing. Dr. Hou received the Early Career Award from the Hong Kong Research Grants Council in 2018 and the NSFC Excellent Young Scientists Fund in 2024. He has served or is serving as an Associate Editor for IEEE Transactions on Visualization and Computer Graphics, IEEE Transactions on Image Processing, IEEE Transactions on Multimedia, and IEEE Transactions on Circuits and Systems for Video Technology.

