Feb. 5, 2026

Physics for AI for Physics

In just the past decade, neural networks have made stunning progress on tasks long thought to be exclusive to humans, but the "hard problem" of artificial intelligence remains: why does a trained neural network give the output it does? In this talk, I will show that an approach to studying neural networks that borrows techniques and perspectives from physics can make quantitative progress on at least three important facets of this problem: what happens during training, why performance scales predictably and robustly with the amount of training data, and how the structure of data affects both training and performance. I will argue that physics provides a suite of theoretical tools naturally suited to studying neural networks, and show how the topology and geometry of collider physics data can serve as a testbed for theories of machine learning relevant to data "in the wild". Armed with this improved understanding beyond the "black box" of AI, we can put AI tools to better use to discover more about the physics of our universe.

Host: Michael Luke
Event series: Physics Colloquium