DSA Seminar

Decision Making with Multimodal Learning

With the advent of multimodal large (language) models (MMLMs), a new paradigm in decision-making research has emerged, in which multimodal learning can potentially enhance an AI agent's robustness, generality, and learning efficiency. This presentation will first introduce my past research achievements, covering deep reinforcement learning (DRL) and multi-agent systems. We will then discuss my current research agenda, which focuses on developing multimodal decision-making systems that leverage the strengths of these advanced technologies. By bridging my past and present research and highlighting the synergy between these technologies, the presentation aims to showcase novel methodologies for building decision-making systems capable of operating in complex, dynamic, and interactive environments.

Zheng TIAN

Assistant Professor

ShanghaiTech University

Zheng Tian obtained his PhD from University College London (UCL). During his PhD studies, Zheng primarily researched multi-agent systems, reinforcement learning, and generative models, publishing multiple works in top international conferences such as NeurIPS, ICLR, AAAI, IJCAI, and CoRL. Zheng was also a key member of the UCL team on the ExIt project, which was developed independently of and concurrently with Google DeepMind's AlphaGo algorithm. Zheng has served as a Program Committee (PC) member for renowned international conferences including NeurIPS, ICLR, AAAI, and IJCAI, and as a reviewer for several prestigious international journals and conferences, including ACM Computing Surveys and Frontiers of Computer Science.


30 May 2024


10:00 - 11:00


W2-2F-201, HKUST(GZ) & Online


Zoom Meeting ID:
829 0430 5958

Passcode: dsat

Event Organizer

Data Science and Analytics Thrust