Applied Mathematics Young Scholars Seminar (Lunch Meeting) -- Theoretical Understanding of Adversarial Examples: Expressive Power and Training Dynamics
Speaker(s): Binghui Li (Peking University)
Time: 11:45-13:00 April 2, 2025
Venue: Siyuan Hall 225, Zhihua Building (智华楼四元厅225)
Abstract:
In recent years, machine learning methods—especially deep learning—have shown exceptional performance in domains like computer vision, natural language processing, speech recognition, and game playing. However, deep neural networks still face fundamental limitations in robustness and reliability. A key issue is their vulnerability to adversarial examples—small perturbations that cause incorrect predictions while being imperceptible to humans. This poses significant concerns for deploying deep models in safety-critical applications, such as autonomous driving. In this talk, we aim to provide a theoretical account of adversarial examples in deep learning. Our analysis is grounded in two key perspectives: the expressive power of neural networks and the underlying principles of feature learning. By connecting these theoretical foundations, we seek to shed light on the mechanisms that give rise to adversarial vulnerability and offer insights into potential pathways for improving model robustness.
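To make the notion of an adversarial example concrete, here is a minimal, purely illustrative sketch (not from the talk) of a fast-gradient-sign-style attack on a toy linear classifier: a bounded per-coordinate perturbation, aligned against the weight vector, flips the model's prediction. All names, weights, and the perturbation budget `eps` below are hypothetical.

```python
import random

random.seed(0)
# Toy linear classifier: predict sign(dot(w, x)). Weights are hypothetical.
w = [random.gauss(0, 1) for _ in range(50)]

sign = lambda v: 1.0 if v > 0 else -1.0
dot = lambda a, b: sum(ai * bi for ai, bi in zip(a, b))

# A point the model classifies as positive: each coordinate agrees with w.
x = [0.1 * sign(wi) for wi in w]

# FGSM-style step: for a linear model the input gradient is w itself, so
# move each coordinate by eps against sign(w), staying in an L_inf ball.
eps = 0.2
x_adv = [xi - eps * sign(wi) for xi, wi in zip(x, w)]

print(sign(dot(w, x)), sign(dot(w, x_adv)))  # 1.0 -1.0: the label flips
```

The per-coordinate change is only 0.2, yet the prediction reverses; for high-dimensional inputs such as images, the analogous perturbation can be visually imperceptible while still changing the output.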
Speaker bio:
Binghui Li is a Ph.D. student at the Center for Machine Learning Research, Academy for Advanced Interdisciplinary Studies, Peking University. His research focuses on the theoretical foundations of deep learning and on applications of AI methods to problems in mathematics.
Everyone is welcome to join the lunch meeting on April 2. The talk runs 12:00-13:00, with lunch served starting at 11:45. Faculty and students who plan to attend are asked to fill out the following survey by 15:00 on April 1: https://www.wjx.cn/vm/r0SiKQY.aspx#.