ACM SIGMETRICS 2025
Stony Brook, New York, USA
June 9-13, 2025
Theater -- Tuesday, June 10, 2025, 9:15 AM - 10:15 AM
The objective of matrix completion is to estimate or complete an unknown matrix from partial, noisy observations of its entries. Since its introduction as a model for recommendation systems in the early 1990s, it has been central to advances in machine learning, statistics, and applied probability. In this talk, I will discuss a few incarnations of it that arise in the context of time-series analysis, causal inference, reinforcement learning, and empirical risk minimization. Time permitting, I will discuss directions for future investigation.
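As an illustrative aside (a standard convex formulation of the problem, not necessarily the one used in the talk): given noisy observations X_{ij} of an unknown low-rank matrix on an index set Omega, one common estimator solves

    \min_{M} \; \sum_{(i,j) \in \Omega} \left( M_{ij} - X_{ij} \right)^2 \;+\; \lambda \, \| M \|_{*}

where \| M \|_{*} is the nuclear norm, which encourages low rank, and \lambda > 0 is a regularization parameter.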
Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science, MIT.
Devavrat Shah is the Andrew (1956) and Erna Viterbi Professor of Electrical Engineering and Computer Science at MIT, where he has been teaching since 2005. He was the faculty director of the Deshpande Center for Technological Innovation and the founding director of the Statistics and Data Science Center at MIT. His current research interests include algorithms for causal inference, social data processing, and stochastic networks. He is a distinguished alumnus of his alma mater, IIT Bombay. He co-founded Celect (later acquired by Nike) and Ikigai Labs, which enables enterprises to transform their forecasting and planning with AI.
Session Chair: Mohammad Hajiesmaili
Theater -- Wednesday, June 11, 2025, 9:00 AM - 10:15 AM
The past 50 years have seen a dramatic increase in the amount of compute capability per person, in particular capabilities enabled by AI. It is essential that AI, the twenty-first century’s most important technology, be developed with sustainability in mind. I will highlight key efficiency optimization opportunities for cutting-edge AI technologies, from deep learning recommendation models to multi-modal generative AI tasks. To scale AI sustainably, we must also go beyond efficiency. I will talk about optimization opportunities across the life cycle of computing infrastructure, from hardware manufacturing to datacenter operations and end-of-life processing of the hardware, capturing both the operational and the manufacturing carbon footprint of AI computing. Based on industry experience and lessons learned, I will share key challenges and discuss what at-scale optimization can target and how it can help reduce the overall carbon footprint of AI and computing. The talk will conclude with important development and research directions to advance computing sustainably.
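As a rough illustration of the accounting behind these two terms (a simplified model, not the exact methodology presented in the talk): the total carbon footprint of an AI workload can be decomposed as

    C_{\mathrm{total}} \;=\; C_{\mathrm{operational}} + C_{\mathrm{embodied}} \;\approx\; E_{\mathrm{use}} \cdot CI_{\mathrm{grid}} \;+\; \frac{T_{\mathrm{use}}}{T_{\mathrm{lifetime}}} \cdot C_{\mathrm{manufacturing}}

where E_{\mathrm{use}} is the energy consumed during training and inference, CI_{\mathrm{grid}} is the carbon intensity of the supplying grid, and the embodied (manufacturing) carbon is amortized over the hardware's service lifetime.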
Director of AI Research, Meta.
Carole-Jean Wu is a Director of AI Research at Meta, leading the Systems and Machine Learning Research team. She is a founding member and a Vice President of MLCommons, a non-profit organization that aims to accelerate machine learning innovations for everyone. Dr. Wu's expertise sits at the intersection of computer architecture and machine learning, with a focus on performance, energy efficiency, and sustainability. She is passionate about pathfinding and tackling system challenges to enable efficient, scalable, and environmentally sustainable AI technologies. Her work has been recognized with several IEEE Micro Top Picks and ACM/IEEE Best Paper Awards. She is in the Hall of Fame of ISCA, HPCA, and IISWC, and serves on the study committee of the National Academies. Prior to Meta, Dr. Wu was a tenured professor at ASU. She earned her M.A. and Ph.D. from Princeton University and her B.Sc. from Cornell University.
Session Chair: Anshul Gandhi
Theater -- Thursday, June 12, 2025, 9:00 AM - 10:15 AM
The massive success of LLMs has revolutionized the field of machine learning, but a core tenet remains: AI systems need to be built and tuned using high-quality data from the right domain. As these systems increasingly touch our daily lives, the relevant training data is more and more often privacy sensitive. We begin with a framework that helps bring precision to discussions of privacy and AI by highlighting a number of important general principles. We then dive into practice, exploring three case studies, each focusing on technologies that serve these principles: 1) data minimization via cross-device federated learning; 2) robust anonymization for ML models via differential privacy; and 3) external verifiability of privacy protections via trusted computing and secure multi-party computation (SMPC). Join us to see how principled approaches can unlock the power of AI while safeguarding user trust.
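As a toy illustration of the first case study, the sketch below shows federated averaging on synthetic data in Python. It is a minimal example under stated assumptions (a linear model, simulated clients, and arbitrary hyperparameters chosen for demonstration), not the production federated learning system discussed in the talk.

    # Minimal federated averaging (FedAvg) sketch on synthetic data.
    # All models, client data, and hyperparameters here are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(0)

    def local_sgd(w, X, y, lr=0.1, epochs=5):
        """Run a few epochs of gradient descent for linear regression on one client's local data."""
        for _ in range(epochs):
            grad = 2 * X.T @ (X @ w - y) / len(y)
            w = w - lr * grad
        return w

    # Synthetic clients: each holds its own private data shard that never leaves the "device".
    d, num_clients = 5, 10
    w_true = rng.normal(size=d)
    clients = []
    for _ in range(num_clients):
        X = rng.normal(size=(20, d))
        y = X @ w_true + 0.01 * rng.normal(size=20)
        clients.append((X, y))

    # Server loop: broadcast the global model, collect locally trained models,
    # and average them weighted by client data size.
    w_global = np.zeros(d)
    for _ in range(20):
        updates, sizes = [], []
        for X, y in clients:
            updates.append(local_sgd(w_global.copy(), X, y))
            sizes.append(len(y))
        w_global = np.average(updates, axis=0, weights=sizes)

    print("error:", np.linalg.norm(w_global - w_true))

In this sketch only model updates reach the server while raw data stays with each client, which is the data-minimization principle the first case study highlights.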
Principal Research Scientist, Google.
Brendan McMahan is a principal research scientist at Google, where he leads efforts on decentralized and privacy-preserving machine learning. His team pioneered the concept of federated learning, and continues to push the boundaries of what is possible when working with centralized and decentralized data using privacy-preserving techniques. Previously, he has worked in the fields of online learning, large-scale convex optimization, and reinforcement learning. Brendan received his Ph.D. in computer science from Carnegie Mellon University.
Session Chair: Mor Harchol-Balter