Speaker: Dr. Tongtong Wu, Monash University
Time: 10:00-11:00 a.m., Friday, December 13, 2024
Venue: Room 513, Computer Science Building, Jiulonghu Campus, Southeast University
Abstract: Continual learning with large language models (LLMs) is crucial for enabling AI systems to adapt and evolve in real time, maintaining and enhancing knowledge without succumbing to catastrophic forgetting, thereby ensuring sustained operational efficiency and relevance. This talk explores the integration of continual learning with large language models across multi-modal information sources. We begin by reviewing traditional continual learning, illustrating its application to text, image, and speech extraction and to multi-modal knowledge graph construction. We then redefine continual learning for LLMs, focusing on overcoming catastrophic forgetting and enhancing knowledge retention through continual pre-training, instruction tuning, and alignment. Looking ahead, we discuss challenges such as data evolution and contamination, and propose innovations in architectures and learning paradigms, including the evolution of language agents and proactive continual learning.
Speaker bio: Dr. Tongtong Wu is a postdoctoral research fellow at Monash University and received his PhD through the Southeast University-Monash joint doctoral program. His research focuses on the co-evolution of large language models (LLMs), data, and knowledge, and has been supported by grants including a Monash Seed Grant, eBay Research, and ByteDance Research. He has published more than twenty papers at conferences such as ICLR, ACL, EMNLP, AAAI, and IJCAI, and serves as a program committee member for major conferences including ICML, ICLR, NeurIPS, ACL ARR, ACM MM, and AAAI.