Speaker: Dr. Tongtong Wu, Monash University
Time: Friday, December 13, 2024, 10:00–11:00 AM
Venue: Room 513, Computer Science Building, Jiulonghu Campus, Southeast University
Abstract: Continual learning with large language models (LLMs) is crucial for enabling AI systems to adapt and evolve in real time, maintaining and extending knowledge without succumbing to catastrophic forgetting, thereby ensuring sustained operational efficiency and relevance. This talk explores the integration of continual learning with LLMs across multi-modal information sources. We begin by reviewing traditional continual learning, illustrating its application to text, image, and speech extraction and to multi-modal knowledge graph construction. We then redefine continual learning for LLMs, focusing on overcoming catastrophic forgetting and enhancing knowledge retention through continual pre-training, instruction tuning, and alignment. Looking ahead, we discuss challenges such as data evolution and contamination, and propose innovations in architectures and learning paradigms, including the evolution of language agents and proactive continual learning.
Bio: Dr. Tongtong Wu is a postdoctoral researcher at Monash University and received his Ph.D. through the joint Southeast University–Monash University training program. His research focuses on the co-evolution of large language models (LLMs), data, and knowledge, and has been supported by funding including a Monash Seed Grant, eBay Research, and ByteDance Research. He has published more than twenty papers at venues such as ICLR, ACL, EMNLP, AAAI, and IJCAI, and serves as a program committee member for major conferences including ICML, ICLR, NeurIPS, ACL ARR, ACM MM, and AAAI.