
ACM SIGMETRICS / IFIP PERFORMANCE 2022

Mumbai, India
June 6-10, 2022

Tutorials

Tutorial 1: Machine learning for solving optimal power flow problems

The optimal power flow (OPF) problem is to determine the least-cost generator dispatch that meets the load in a power network, subject to physical and operational constraints. It is central to power grid operations and underpins various applications, including real-time market clearing, unit commitment, demand response, reliability assessment, and grid modernization endeavors pursuing carbon neutrality and climate change mitigation. Globally, OPF concerns billions of US dollars each year; a 5% difference in operating cost amounts to 36 billion US dollars.
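
For concreteness, a schematic AC OPF formulation looks as follows; the notation here is a generic textbook sketch, not taken from the tutorial materials:

    % Buses \mathcal{N}, lines \mathcal{E}; bus admittance matrix Y = [Y_{ij}].
    % p_i^g, q_i^g: active/reactive generation; p_i^d, q_i^d: demands;
    % V_i: complex bus voltage; c_i(\cdot): generation cost at bus i.
    \begin{align*}
    \min_{p^g,\,q^g,\,V}\quad & \sum_{i\in\mathcal{N}} c_i(p_i^g) \\
    \text{s.t.}\quad & (p_i^g - p_i^d) + \mathbf{j}\,(q_i^g - q_i^d)
        = V_i \sum_{j\in\mathcal{N}} \overline{Y_{ij}}\,\overline{V_j},
        && i\in\mathcal{N}, \\
    & \underline{p}_i \le p_i^g \le \overline{p}_i,\quad
      \underline{q}_i \le q_i^g \le \overline{q}_i, && i\in\mathcal{N}, \\
    & \underline{v}_i \le |V_i| \le \overline{v}_i, && i\in\mathcal{N}, \\
    & |S_{ij}(V)| \le \overline{s}_{ij}, && (i,j)\in\mathcal{E}.
    \end{align*}

The quadratic voltage products in the power-balance equalities are what make the feasible set non-convex.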

With increasing uncertainty from intermittent renewables, distributed generation, and flexible loads, the optimal operating point of the electrical power system may change rapidly during real-time operations. As such, grid operators now need to solve OPF problems more frequently to track the optimal operating points. Increasing uncertainty will also demand more frequent reliability and risk assessments. However, the OPF problem with a full AC power flow formulation is non-convex and NP-hard, making it difficult to solve efficiently with iterative solvers, especially for large-scale instances.

Recently, employing machine learning techniques to solve OPF problems in a fraction of the time needed by iterative solvers has become an active area of research. To date, various studies have applied learning techniques to generate high-quality solutions for popular OPF formulations, achieving speedups of a few orders of magnitude over iterative solvers on power networks with realistic topologies and load profiles. This tutorial will provide an overview of ongoing research on neural-network and reinforcement-learning schemes that identify active and inactive constraints in OPF problems, provide warm-start points for iterative OPF solvers, learn iterative strategies for solving OPF problems, or learn the OPF load-solution mapping and then use it to obtain quality OPF solutions directly. We will cover the background and fundamentals, state-of-the-art results, open issues, and potential future directions. We will also discuss applications of these machine learning approaches to general constrained optimization problems.
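
To illustrate the load-to-solution mapping idea in its simplest form, the sketch below trains a small feedforward network on pre-solved OPF instances; the dimensions, architecture, and normalization are illustrative assumptions, not code from the tutorial.

    import torch
    import torch.nn as nn

    # Illustrative dimensions (assumed): 118 bus loads in, 54 generator set-points out.
    N_LOAD, N_GEN = 118, 54

    # A small feedforward network approximating the OPF load-to-solution mapping.
    model = nn.Sequential(
        nn.Linear(N_LOAD, 256), nn.ReLU(),
        nn.Linear(256, 256), nn.ReLU(),
        nn.Linear(256, N_GEN), nn.Sigmoid(),  # outputs in [0, 1]
    )
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    def train_step(loads, setpoints):
        """One supervised step on a batch of (load, solver-solution) pairs.

        loads:     (B, N_LOAD) tensor of bus loads.
        setpoints: (B, N_GEN) tensor of generator outputs, normalized to
                   [0, 1] by each generator's limits so Sigmoid can match them.
        """
        opt.zero_grad()
        loss = loss_fn(model(loads), setpoints)
        loss.backward()
        opt.step()
        return loss.item()

At run time, the predicted set-points can serve as a warm start for an iterative solver, or be used directly after restoring feasibility (e.g., by solving the power flow equations for the remaining variables).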

Speakers:

Minghua Chen

Minghua Chen received his B.Eng. and M.S. degrees from the Department of Electronic Engineering at Tsinghua University, and his Ph.D. degree from the Department of Electrical Engineering and Computer Sciences at the University of California, Berkeley. He is a Professor in the School of Data Science, City University of Hong Kong. He received the Eli Jury Award from UC Berkeley (presented to a graduate student or recent alumnus for outstanding achievement in the area of systems, communications, control, or signal processing) and The Chinese University of Hong Kong Young Researcher Award. He has also received several best paper awards, including the IEEE ICME Best Paper Award, the IEEE Transactions on Multimedia Prize Paper Award, and the ACM Multimedia Best Paper Award. Minghua’s recent research interests include online optimization and algorithms, machine learning in power system operations, intelligent transportation systems, distributed optimization, delay-constrained network coding, and capitalizing on the benefits of data-driven prediction in algorithm/system design. Minghua is an IEEE Fellow and an ACM Distinguished Scientist.


Steven Low

Steven Low is the F. J. Gilloon Professor in the Department of Computing & Mathematical Sciences and the Department of Electrical Engineering at Caltech, and an Honorary Professor of the University of Melbourne, Australia. Before that, he was with AT&T Bell Laboratories, Murray Hill, NJ, and the University of Melbourne, Australia. He has received the IEEE INFOCOM Achievement Award and the ACM SIGMETRICS Test of Time Award, and is a Fellow of the IEEE, ACM, and CSEE. He is well known for his work on Internet congestion control and on semidefinite relaxation of optimal power flow problems in smart grids. His research on networks has been accelerating more than 1 TB of Internet traffic every second since 2014. His research on smart grids is providing large-scale, cost-effective electric vehicle charging to workplaces. He received his B.S. from Cornell and his Ph.D. from Berkeley, both in EE.


Tutorial 2: Mean field interacting particle systems: Limit laws and large deviations

Mean-field interacting particle systems can be used to model various networked systems, e.g., load balancing networks, loss networks, and wireless local area networks. In a mean-field model, the states of the particles evolve over time in a Markovian fashion; the transition rates of a particle depend on the states of the other particles only through the empirical measure. From the point of view of performance analysis, one is often interested in studying the empirical measure process of the system of particles. This tutorial aims to provide an introduction to limit laws and large deviations associated with the empirical measure process of mean-field interacting particle systems, with a view towards the performance analysis of networked systems.
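
In symbols, a standard finite-state setup (the notation here is assumed for illustration) is:

    % N particles with states X_n^N(t) in a finite set \mathcal{Z};
    % \delta_x denotes the Dirac mass at x.
    \mu_t^N = \frac{1}{N}\sum_{n=1}^{N} \delta_{X_n^N(t)},
    \qquad
    X_n^N:\ i \to j \ \text{ at rate } \lambda_{i,j}(\mu_t^N),

so each particle jumps at rates modulated by the current empirical measure, and it is the process $(\mu_t^N)_{t \ge 0}$ whose limit laws and large deviations the tutorial studies.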

This tutorial is divided into three parts. In the first part, we present the convergence of the empirical measure process to the solution of the McKean-Vlasov equation, an ODE on the probability simplex. We also outline the convergence of the invariant measure when this ODE possesses a unique global attractor. In the second part, we present a process-level large deviation principle (LDP) for the empirical measure process. As an application of this LDP, we quantify various large-time phenomena for the empirical measure process, such as the mean exit time from a domain and the time required for equilibration. In the third part, we introduce two extensions of the mean-field model: (i) a mean-field model with time scale separation, wherein the particles are subject to a randomly varying fast environment, and (ii) a countable-state mean-field model. For the model with time scale separation, we present a process-level LDP for the joint law of the empirical measure process of the particles and the occupation measure process of the fast environment. For the countable-state model, we present the large deviations of the invariant measure and highlight some issues involved in transferring the finite-horizon LDP to the stationary regime.
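
For reference, in the notation of the sketch above, the McKean-Vlasov equation is the ODE on the probability simplex

    \dot{\mu}_t(j)
      = \sum_{i\neq j} \mu_t(i)\,\lambda_{i,j}(\mu_t)
        - \mu_t(j)\sum_{i\neq j} \lambda_{j,i}(\mu_t),
      \qquad j\in\mathcal{Z},

equivalently $\dot{\mu}_t = \Lambda(\mu_t)^{\mathsf{T}}\mu_t$, where $\Lambda(\mu)$ is the particle rate matrix evaluated at empirical measure $\mu$.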

Speakers:

Rajesh Sundaresan

Rajesh Sundaresan received the B.Tech. degree in electronics and communication from IIT Madras and the M.A. and Ph.D. degrees in electrical engineering from Princeton University in 1996 and 1999, respectively. From 1999 to 2005, he worked at Qualcomm Inc. on the design of communication algorithms for wireless modems. Since 2005, he has been with the Indian Institute of Science, where he is currently a Professor in the Department of Electrical Communication Engineering and an Associate Faculty with the Robert Bosch Centre for Cyber-Physical Systems. He is also the current Dean of the Division of EECS at IISc. His research interests include communication, computation, and control over networks. He was an Associate Editor of the IEEE Transactions on Information Theory from 2012 to 2015 and is currently on the editorial board of the Journal of the Indian Institute of Science, a multidisciplinary reviews journal.


Sarath Yasodharan

Sarath Yasodharan is a Ph.D. candidate in the Department of Electrical Communication Engineering at the Indian Institute of Science (IISc), Bengaluru, India. His research interests are broadly in applied probability. His Ph.D. research focused on the study of large-time behavior and metastability in weakly interacting Markov processes with jumps. He is a recipient of the Cisco research fellowship from the Centre for Networked Intelligence at IISc.


Tutorial 3: Optimization and learning with Markovian data

Simple streaming algorithms such as stochastic gradient descent (SGD) are the workhorses of modern machine learning, due to their ability to deal effectively with large amounts of data. While SGD and related algorithms have been thoroughly analyzed for independent data, and tight finite-time guarantees are known there, their finite-sample performance with dependent data has not been analyzed as thoroughly. Yet temporally dependent data arises often in practice, in the analysis of time series, in system identification, and in reinforcement learning (RL), among others. In these settings, the data is modeled as coming from a mixing Markov process, and it is important to learn on the go by designing effective SGD-style streaming algorithms. Several ad hoc techniques have been deployed in these situations, with varying success.

In this tutorial, we will describe some recent exciting progress on providing theoretically grounded techniques for learning in this setting. We first establish the statistical limits of learning with data coming from a Markov process, under various assumptions on the noise, and provide nearly optimal streaming algorithms in each setting. This allows us to identify settings where i.i.d.-like statistical efficiency can be achieved, that is, without dependence on the mixing time in the dominant error term. We show that the key to achieving these rates is clever algorithm design that effectively unravels the dependency structure present in the data. In particular, we introduce a simple and elegant variant of experience replay, called reverse experience replay, and show how it deals with dependencies by bringing out certain natural martingale structures present in the problem.
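
To convey the flavor of the technique, here is a minimal sketch of SGD with reverse experience replay for streaming least squares on Markovian data; the buffer size and step size are arbitrary, and the analyzed algorithms additionally insert gaps between buffers and average iterates, details omitted here.

    import numpy as np

    def sgd_reverse_er(stream, dim, buffer_size=100, lr=0.01):
        """SGD with reverse experience replay (RER) for least squares.

        `stream` yields (x, y) pairs generated by a mixing Markov process.
        Samples are collected into consecutive buffers; within each buffer,
        gradient updates are applied in *reverse* temporal order, which
        decouples the iterate from the data it is applied to and exposes
        a martingale structure in the update sequence.
        """
        w = np.zeros(dim)
        buffer = []
        for x, y in stream:
            buffer.append((x, y))
            if len(buffer) == buffer_size:
                for x_t, y_t in reversed(buffer):   # the key step
                    grad = (w @ x_t - y_t) * x_t    # squared-loss gradient
                    w -= lr * grad
                buffer.clear()
        return w

Running the same updates in forward order would correlate each gradient with the iterate it is applied to; reversing within a buffer is what brings out the martingale structure mentioned above.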

We apply reverse experience replay to popular problems such as linear/nonlinear system identification and offline reinforcement learning with well-specified linear function approximation. In each of these cases, we show that SGD/Q-learning with reverse experience replay achieves near-optimal convergence rates. This establishes the wide applicability of our techniques for designing one-pass, streaming algorithms that learn from Markovian data.

Speakers:

Dheeraj Nagaraj

Dheeraj Nagaraj is a Research Scientist at Google Research India, Bengaluru. Prior to this, he obtained his PhD in Electrical Engineering and Computer Science from MIT, advised by Prof. Guy Bresler. His research interests revolve around applied probability, statistics and machine learning. He is currently interested in designing computationally efficient algorithms to learn with dependent data, stochastic optimization algorithms, random graphs and theory of neural networks.


Praneeth Netrapalli

Praneeth Netrapalli is a Research Scientist at Google Research India, Bengaluru. He is also an adjunct professor at TIFR, Mumbai and a faculty associate of ICTS, Bengaluru. Prior to this, he was a researcher at Microsoft Research India and did his postdoc at Microsoft Research New England in Cambridge, MA. He obtained his MS and PhD degrees in Electrical and Computer Engineering from The University of Texas at Austin. His research interests are broadly in designing reliable and robust machine learning (ML) algorithms, representation learning, black-box optimization, time series modeling, quantum optimization, and minimax/game-theoretic optimization, and he has published extensively in these areas. He has been, or currently is, a PC member for the AISTATS, ICML, ICLR, COLT, and NeurIPS conferences. He has given several invited talks at colloquia and workshops, and has been invited to teach a short course on “Optimization in Machine Learning” at IIT Madras. Relevant tutorial lecture experience: Praneeth gave a tutorial at ISIT 2019 as well as multiple invited overview talks at NIPS and ICML workshops focused on optimization, and also at university workshops. He is a recipient of the IEEE Signal Processing Society Best Paper Award (2019) and the Indian National Science Academy (INSA) Medal for Young Scientists (2021).


Tutorial 4: Data plane algorithms in programmable networks

Computer networks are transforming into infrastructure with end-to-end programmability. Over the past decade, significant efforts have been made to design and manufacture a variety of programmable data plane devices, such as ASIC-based programmable switches, smart NICs, and FPGAs, and to design domain-specific languages such as P4. This data plane programmability has made it possible to design and implement fine-grained data plane algorithms that give various types of networked systems better performance, security, and reliability.

Alongside this emerging trend, we have seen a range of recent efforts in data plane algorithms (DPA) for network telemetry, load balancing, caching, and network security, among others. DPA research now attracts researchers from both academia and industry worldwide to explore what should be done in the data plane with this newfound capability. The question is a prominent one, and it requires the whole programmable-network ecosystem to join the discussion: (1) network operators, who are the users of these capabilities; (2) algorithm designers, who design and analyze DPA; and (3) platform vendors and developers, who provide software/hardware primitives and use these primitives to realize DPA. SIGMETRICS is the right venue at which to raise this question to a diverse group of researchers and practitioners and to explore this state-of-the-art topic.

This tutorial aims to bring data plane algorithm research to the broader SIGMETRICS community. We will explain some representative data plane algorithms and provide hands-on implementation and evaluation experiences. The organizers have significant experience in data plane algorithms for networking, systems, and security.

Speakers:


Tutorial 5: Internet Data Streaming and Sketches

This tutorial covers the fundamental concepts, data structures, and algorithms for extracting information from packet streams on the Internet in real time, with applications in network security, traffic engineering, e-commerce, and big data analytics. It consists of three lectures of 45 minutes each. The first lecture provides an introduction to big Internet data: the practical need to make big data small, different statistics of interest, flow models, how to summarize big network data, challenges in performing summarization, some basic sketch designs, etc. The second lecture focuses on data structures and algorithms for flow-size measurement, including various hash tables (such as cuckoo and d-left), CountMin sketches and their variants, and virtual counters. The third lecture focuses on data structures and algorithms for flow-spread measurement, including single-flow sketches (such as bitmap and HLL) and multi-flow sketches (such as generalized CountMin, virtual data structures, and sampling solutions).
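
As a concrete taste of the second lecture, here is a minimal CountMin sketch for per-flow size measurement; the width/depth values and the salted-hash construction are illustrative choices, not the tutorial's reference implementation.

    import random

    class CountMinSketch:
        """CountMin sketch: fixed memory, one-sided (over-)estimation.

        `depth` hash rows of `width` counters each. A key's estimate is the
        minimum of its counters across rows, so it never undercounts.
        """
        def __init__(self, width=2048, depth=4, seed=0):
            rng = random.Random(seed)
            self.width, self.depth = width, depth
            self.table = [[0] * width for _ in range(depth)]
            self.salts = [rng.getrandbits(64) for _ in range(depth)]  # per-row salts

        def _index(self, row, key):
            return hash((self.salts[row], key)) % self.width

        def update(self, key, count=1):
            for r in range(self.depth):
                self.table[r][self._index(r, key)] += count

        def query(self, key):
            return min(self.table[r][self._index(r, key)]
                       for r in range(self.depth))

Updating the sketch with each packet's flow key (e.g., the 5-tuple) and querying at the end of a measurement epoch yields per-flow size estimates in fixed memory.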

Speaker:

Shigang Chen

Shigang Chen is a professor in the Department of Computer and Information Science and Engineering at the University of Florida. He received his B.S. degree in computer science from the University of Science and Technology of China in 1993, and his M.S. and Ph.D. degrees in computer science from the University of Illinois at Urbana-Champaign in 1996 and 1999, respectively. After graduation, he worked at Cisco Systems for three years before joining the University of Florida in 2002. His research interests include the Internet of things, big data, cybersecurity, data privacy, edge-cloud computing, intelligent cyber-transportation systems, and wireless systems. He has published over 200 peer-reviewed journal/conference papers. He received the NSF CAREER Award and other research/best paper awards. He holds 13 US patents, many of which have been used in software products. He has served as an associate editor for IEEE Transactions on Mobile Computing, IEEE/ACM Transactions on Networking, and a number of other journals, and in various chair positions or as a committee member for numerous conferences. He holds the University of Florida Research Foundation Professorship and the University of Florida Term Professorship. He is a Fellow of IEEE and an ACM Distinguished Scientist.