ACM SIGMETRICS 2026
Ann Arbor, Michigan, USA
June 8-12, 2026
Vishal Misra
Columbia University
Keynote title and abstract forthcoming
Bio: Vishal Misra is the RKS Family Professor of Computer Science and the Vice Dean for Computing and AI in the School of Engineering at Columbia University. He is an ACM and IEEE Fellow, and his research emphasizes mathematical modeling of systems, bridging the gap between practice and analysis. As a graduate student, he co-founded CricInfo, which was acquired by ESPN in 2007. In 2021 he developed one of the world's first commercial applications built on top of GPT-3 for ESPNCricinfo, and has since been modeling the behavior of LLMs. He also played an active part in the Net Neutrality regulation process in India, where his definition of Net Neutrality was adopted both by the citizens' movement and by the regulators. He has been awarded a Distinguished Alumnus Award by IIT Bombay (2019) and a Distinguished Young Alumnus Award by the UMass Amherst College of Engineering (2014).
Steve Teig
Amazon
Keynote title and abstract forthcoming
Bio: Steve Teig is an American technology executive, entrepreneur, and computer engineer. He earned a B.S.E. in electrical engineering and computer science from Princeton University in 1982, co-founded Simplex in 1998, and served as chief scientist at Cadence after its acquisition, before going on to co-found Tabula as CTO. He subsequently served as CTO of Tessera Technologies, which became Xperi, and then became CEO of Perceive, a semiconductor company focused on machine learning hardware for mobile devices. After Amazon acquired Perceive in 2024, he became a Vice President and Distinguished Engineer at Amazon. He holds more than 390 patents.
Adam Tauman Kalai
OpenAI
Evaluating large language models for accuracy incentivizes hallucinations
Abstract: Large language models sometimes produce confident, plausible falsehoods (“hallucinations”), limiting their reliability. Prior work has offered numerous explanations and effective mitigations, such as retrieval and tool use, consistency-based self-verification, and reinforcement learning from human feedback. Nonetheless, the problem persists even in state-of-the-art language models. Here we show how next-word prediction and accuracy-based evaluations inadvertently reward unwarranted guessing. Initially, next-word pretraining creates statistical pressure toward hallucination even with idealized error-free data: using learning theory, we show that facts lacking repeated support in training data, such as one-off details, yield unavoidable errors, while recurring regularities, such as grammar, do not. Subsequent training stages aim to correct such errors. However, dominant headline metrics like accuracy systematically reward guessing over admitting uncertainty. To align incentives, we suggest two refinements to the classic approach of penalizing errors in evaluations to control abstention. First, we propose “open-rubric” evaluations that explicitly state how errors are penalized, if at all, which test whether a model modulates its abstentions according to the stated stakes while optimizing accuracy. Second, since hallucination-specific benchmarks rarely make leaderboards, we suggest using open-rubric variants of existing evaluations to reverse their guessing incentives. Reframing hallucination as an incentive problem opens a practical path toward more reliable language models.
Joint work with: Santosh Vempala, Ofir Nachum, and Edwin Zhang.
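The incentive argument in the abstract can be made concrete with a small hypothetical sketch (the function names, penalty values, and threshold below are illustrative assumptions, not from the talk): under accuracy-only grading, a guess with any nonzero chance of being right has positive expected score while abstaining scores zero, so guessing always wins; an explicit wrong-answer penalty makes abstention optimal below a confidence threshold.

```python
# Illustrative sketch (not from the talk): expected score of a model that
# either answers with probability p of being correct, or abstains (score 0).

def expected_score(p, wrong_penalty=0.0):
    """Expected score when answering: p * 1 + (1 - p) * (-wrong_penalty)."""
    return p - (1 - p) * wrong_penalty

def best_action(p, wrong_penalty=0.0):
    """Answer only when the expected score beats abstaining (score 0)."""
    return "answer" if expected_score(p, wrong_penalty) > 0 else "abstain"

# Plain accuracy (no penalty): guessing beats abstaining for any p > 0,
# so a model optimizing the metric never admits uncertainty.
assert best_action(0.1) == "answer"

# With a stated penalty t for wrong answers (here t = 2), a low-confidence
# guess scores worse in expectation than abstaining ...
assert best_action(0.1, wrong_penalty=2.0) == "abstain"
# ... and the break-even confidence is t / (1 + t) = 2/3: answer above it.
assert best_action(0.8, wrong_penalty=2.0) == "answer"
```

A rubric that states the penalty up front, as the proposed open-rubric evaluations do, lets one check whether a model actually shifts between "answer" and "abstain" as the stakes change.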
Bio: Adam Tauman Kalai is a Research Scientist at OpenAI whose work spans AI safety and ethics, algorithms, fairness, AI theory, game theory, and crowdsourcing. He earned his BA from Harvard University and his PhD from Carnegie Mellon University, and has held research positions across academia and industry, including at MIT, TTIC, Georgia Tech, and Microsoft Research New England. His work has received numerous honors, including the Majulook Prize.
Additional keynote speakers may be announced as the program is finalized.