Venue of ACM SIGMETRICS 2024

ACM SIGMETRICS / IFIP PERFORMANCE 2024

Venice, Italy
June 10-14, 2024

Tutorials

Tentative Program: Monday June 10, 2024

There are 11 tutorials, presented in 4 parallel tracks. The program starts at 9.15 a.m. and ends at 5.30 p.m.

              9.15 am -   10.45 am -  11.15 am -  12.45 pm -  2 pm -     3.30 pm -  4 pm -
              10.45 am    11.15 am    12.45 pm    2 pm        3.30 pm    4 pm       5.30 pm

  Track 1     T1.A        Break       T1.A        Lunch       T1.B       Break      T1.B
  Track 2     T2.A        Break       T2.A        Lunch       T2.B       Break      T2.C
  Track 3     T3.A        Break       T3.A        Lunch       T3.B       Break      T3.C
  Track 4     T4.A        Break       T4.A        Lunch       T4.B       Break      T4.C


9.15 am - 12.45 pm

Track 1

T1.A "Network clustering: 50 years and still going! Information-theoretic criteria and efficient algorithms for a problem that thrives"

by Maximilien Dreveton and Daniel R. Figueiredo
Room 1G

Track 2

T2.A "Transform Method for Stochastic Processing and Matching Networks"

by Prakirt Jhunjhunwala, Sushil Mahavir Varma, and Siva Theja Maguluri
Room 5X

Track 3

T3.A "Games in Product Form"

by Michel de Lara
Room 9C

Track 4

T4.A "Unveiling privacy and data protection issues from the use of social media and online advertising platforms"

by Ángel Merino, Francisco Caravaca, Ángel Cuevas, and Rubén Cuevas
Room 9D

2 pm - 3.30 pm

Track 1

T1.B "Fundamental Limits and Efficient Algorithms for Random Graph Matching"

by Jiaming Xu and Sophie H. Yu
Room 1G

Track 2

T2.B "An introduction to pyMarmote and pyMarmoteMDP for Markovian modeling"

by Alain Jean-Marie and Emmanuel Hyon
Room 5X

Track 3

T3.B "Optimistic Learning for Resource Management in Communication Systems"

by George Iosifidis and Naram Mhaisen
Room 9C

Track 4

T4.B "A Comprehensive Overview of Network Tomography: Inverse Methods for Network State Monitoring from End-to-End Measurements"

by Ting He
Room 9D

4 pm - 5.30 pm

Track 1

T1.B "Fundamental Limits and Efficient Algorithms for Random Graph Matching"

by Jiaming Xu and Sophie H. Yu (shortened tutorial)

Track 2

T2.C "Analyzing Queues with Markovian Arrivals and Markovian Service"

by Isaac Grosof
Room 5X

Track 3

T3.C "Distributional Analysis of Stochastic Algorithms"

by Qiaomin Xie and Yudong Chen (canceled)

Track 4

T4.C "Causal Inference in Complex Systems with Network Interference and Temporal Dynamics"

by Christina Lee Yu
Room 9D

Track 1

Tutorial T1.A: Network clustering: 50 years and still going!

Speakers: Maximilien Dreveton (EPFL, Switzerland), Daniel R. Figueiredo (UFRJ, Brazil)

Duration: 3 hours
Slides available here.

Abstract: Networks are arguably the most fundamental model for representing relationships between objects and have been adopted by different disciplines to represent various systems and phenomena: from social networks that encode different interactions among people, to the power grid and the Internet that represent how electrons and bits are transferred, to the connectome and protein interaction networks that encode neural and protein activity in the brain and the cell, respectively. Finding sets of similar nodes in a network, known as network clustering or community detection, is one of the most fundamental problems in network analysis. In particular, node clusters often reveal latent information such as node function or node type, with applications in various domains. Not surprisingly, network clustering has been studied for over 50 years across various scenarios and approaches.

Most advances in network clustering can be broadly categorized as either information-theoretic or algorithmic. The former is concerned with establishing conditions under which cluster detection is possible. This often requires a clean mathematical model for generating networks with clusters, and provides thresholds (as a function of model parameters) for partial and exact recovery of the network clusters as the network grows large. Algorithmic advances are concerned with the design of algorithms that recover network clusters from data. To be useful, such algorithms must be efficient (in terms of computational complexity) and accurate (in terms of identifying network clusters).

Moreover, driven by the increasing amounts of available data, the nodes and edges of modern networks often carry various attributes that significantly enhance the information encoded by the network (much beyond the network structure alone). This relatively recent observation has led to novel information-theoretic and algorithmic advances, further broadening the problem of network clustering.

This tutorial provides a comprehensive overview of network clustering over the past 50 years, covering fundamental information-theoretic and algorithmic results. Theoretical thresholds for partial and exact recovery of clusters in the classic Stochastic Block Model and in the more recent Contextual Stochastic Block Model (which handles attributes) will be presented. Classic algorithms based on spectral methods, statistical inference, and modularity maximization, as well as more recent algorithms that leverage Graph Neural Networks, will also be presented. The tutorial will also provide an outlook on topics not covered, such as overlapping communities and evolving communities.
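The recovery task above can be illustrated in a few lines of standard-library Python. This is a toy sketch of ours, not one of the tutorial's algorithms: it generates a two-block Stochastic Block Model and separates the blocks by counting common neighbours with a reference node (all names and parameters below are invented for illustration).

```python
import random

def sbm_two_blocks(n, p, q, seed=0):
    """Two-community SBM: nodes 0..n-1 form block A, n..2n-1 form block B.
    Edges appear with probability p inside a block and q across blocks."""
    rng = random.Random(seed)
    adj = {v: set() for v in range(2 * n)}
    for u in range(2 * n):
        for v in range(u + 1, 2 * n):
            same_block = (u < n) == (v < n)
            if rng.random() < (p if same_block else q):
                adj[u].add(v)
                adj[v].add(u)
    return adj

def common_neighbour_cluster(adj, ref=0):
    """Group with `ref` every node sharing an above-average number of
    neighbours with it -- a crude proxy for real clustering algorithms."""
    counts = {v: len(adj[ref] & adj[v]) for v in adj if v != ref}
    cut = sum(counts.values()) / len(counts)  # threshold at the mean count
    return {ref} | {v for v, c in counts.items() if c > cut}

n = 60
adj = sbm_two_blocks(n, p=0.6, q=0.05)
found = common_neighbour_cluster(adj)
accuracy = len(found & set(range(n))) / n   # fraction of block A recovered
```

With well-separated intra- and inter-block densities the common-neighbour counts concentrate around distinct means, so even this naive rule recovers most of the block; the tutorial's spectral and inference methods handle the much harder regimes near the recovery thresholds.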

Speakers biographies:

Maximilien Dreveton received bachelor's and master's degrees in physics from the École Normale Supérieure de Lyon (ENS Lyon), France, in 2013 and 2015, respectively, and a Ph.D. degree in computer science from Inria Sophia Antipolis, France, in 2022. He is currently a Postdoctoral Fellow at the École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland. His research interests include the statistical analysis of random graphs, and particularly community detection in networks. He co-authored the book "Statistical Analysis of Networks" with Konstantin Avrachenkov and taught a Master-level course on this topic.

Daniel R. Figueiredo received a BS cum laude degree in Computer Science and an MSc degree in Computer and Systems Engineering from the Federal University of Rio de Janeiro (UFRJ), Brazil, in 1996 and 1999, respectively. He received MSc and PhD degrees in Computer Science from the University of Massachusetts Amherst (UMass) in 2005. He worked as a post-doc researcher at the Swiss Federal Institute of Technology, Lausanne (EPFL). In 2007, he joined the Department of Computer and Systems Engineering (PESC/COPPE) at the Federal University of Rio de Janeiro (UFRJ), Brazil, where he currently holds an Associate Professor position. He has held a Research Productivity Fellowship granted by CNPq (since 2009), was a member of the Young Scientists Program (from 2010 to 2018), and is a member of the Scientists Program (since 2023), both granted by FAPERJ. He was a Visiting Fulbright Scholar at Columbia University (6 months in 2017), and is a Visiting Professor at EPFL with a fellowship from CNPq (PDE) (8 months in 2024). His current main interests are in Network Science and Graph Learning.

Tutorial T1.B: Fundamental Limits and Efficient Algorithms for Random Graph Matching

Speakers: Jiaming Xu (Duke University, USA) and Sophie H. Yu (Stanford University, USA)

Jiaming Xu will conduct the tutorial alone, as Sophie H. Yu is not able to travel to Italy; the duration of the tutorial has been reduced accordingly. Sorry for the inconvenience.
Duration: 1.5 hours (reduced from 3 hours)
Slides available here.

Abstract: Random graph matching aims to recover the hidden vertex correspondence between two random graphs whose edges are correlated. This is a ubiquitous problem arising in applications across diverse fields such as network privacy, computational biology, computer vision, and natural language processing. The problem is also deep and rich in theory, involving the delicate interplay of information-theoretic limits, algorithms, and computational complexity. Recently, exciting progress has been achieved in matching two correlated Erdős–Rényi graphs, thanks to collective efforts from multiple research communities including information theory, probability, and optimization. This tutorial will present an overview of the information-theoretic recovery thresholds for matching correlated Erdős–Rényi graphs, together with efficient algorithms that are near-optimal both statistically and computationally. Various open problems and important future directions will be discussed along the way.

Speakers biographies:

Jiaming Xu is an associate professor at the Fuqua School of Business at Duke University. He received a Ph.D. degree from UIUC in 2014, an M.S. degree from UT-Austin in 2011, and a B.E. degree from Tsinghua University in 2009, all in Electrical and Computer Engineering. His research interests include high-dimensional statistics, networks, information theory, convex and non-convex optimization, and queueing theory. He received a Simons-Berkeley Fellowship in 2016 and an NSF Career Award in 2022.

Sophie H. Yu is a postdoctoral scholar in the Department of Management Science and Engineering at Stanford University and an incoming assistant professor at The Wharton School of Business, University of Pennsylvania. She received her Ph.D. in Decision Sciences from the Fuqua School of Business at Duke University, an M.S. degree in statistical and economic modeling from Duke University in 2017, and a B.S. degree in Economics from Renmin University of China in 2015. Her research interests focus on high-dimensional statistics, algorithm design, and performance evaluation in large-scale networks and stochastic systems.



Track 2

Tutorial T2.A: Transform Method for Stochastic Processing and Matching Networks

Speakers: Prakirt Jhunjhunwala (Columbia Business School, USA), Sushil Mahavir Varma (Georgia Tech, USA), Siva Theja Maguluri (Georgia Tech, USA)

Prakirt Jhunjhunwala and Sushil Mahavir Varma will conduct the tutorial as Siva Theja Maguluri is not able to travel to Italy.
Duration: 3 hours
Slides available here (zip archive).

Abstract: Motivated by real-world applications in data centers and online marketplaces, this tutorial delves into the design and performance of stochastic processing and matching networks. Our focus centers on congestion or queue length behavior, e.g., jobs waiting for processing at a data center or customers waiting for a taxi in ride-hailing systems. In this tutorial, we present the recently developed Transform method, a methodology to systematically characterize the steady-state queue length distribution across various asymptotic and non-asymptotic regimes.

The idea of the transform method is to analyze the drift of a carefully constructed exponential test function in the steady state. The resulting equation can then be solved to obtain an approximate closed-form expression for the transform of the queue lengths. Such a result allows us to obtain the queue length distribution in various asymptotic regimes, as well as tight bounds on queue length statistics such as the mean and tail in non-asymptotic regimes. While the framework of the transform method remains invariant to the application, careful adaptation is warranted: for example, the test function needs to be tailored to capture the underlying system dynamics, and solving the resulting equation requires novel ideas. To this end, we demonstrate the versatility of the method by analyzing a wide variety of stochastic processing and matching networks, such as load balancing systems, input-queued switches, and matching queues, endowed with features like customer abandonment, multiple bottlenecks, and surge pricing, respectively. In doing so, the transform method opens new frontiers in the analysis of stochastic networks, and we hope this tutorial will inspire fellow researchers to adopt and further develop this methodology.
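As a minimal single-queue illustration of the drift idea (far simpler than the processing and matching networks the tutorial targets, and not the tutorial's own material), the sketch below checks numerically that the steady-state drift of the exponential test function z^Q vanishes for an M/M/1 queue, and that the resulting transform matches the closed form (1-rho)/(1-rho*z); all variable names are ours.

```python
# M/M/1 queue, arrival rate lam < service rate mu; the stationary queue
# length is geometric: pi_k = (1 - rho) * rho**k with rho = lam / mu.
lam, mu, z = 0.3, 0.5, 0.4
rho = lam / mu
N = 400                                  # truncation (rho**400 is negligible)
pi = [(1 - rho) * rho**k for k in range(N)]

# Steady-state drift of the test function f(q) = z**q under the M/M/1
# generator: lam*(z**(k+1) - z**k) + mu*1{k>0}*(z**(k-1) - z**k).
drift = sum(
    p * (lam * (z**(k + 1) - z**k)
         + (mu * (z**(k - 1) - z**k) if k > 0 else 0.0))
    for k, p in enumerate(pi)
)

# Setting the drift to zero and solving yields the closed-form transform.
transform = sum(p * z**k for k, p in enumerate(pi))
closed_form = (1 - rho) / (1 - rho * z)
```

In the single-queue case the zero-drift equation can be solved exactly; the tutorial's contribution is making this step systematic for networks where the equation is far harder to solve.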

Speakers biographies:

Prakirt Jhunjhunwala is a Post-Doctoral Fellow at Columbia Business School. He received his Ph.D. in Operations Research at ISyE, Georgia Institute of Technology, in August 2023. Before coming to Georgia Tech, Prakirt obtained his B.Tech. with Honors in Electrical Engineering from IIT Bombay, where he also received the Undergraduate Research Award. His research interest lies in the design and analysis of stochastic networks with applications in data centers and quantum networks. Prakirt received an Honorable Mention for the SIGMETRICS Doctoral Dissertation Award 2023. He also received the Best Paper Award at SPCOM 2018 and, in 2022, the Ed Iacobucci Fellowship for Excellence in Research in the area of Applied Probability and Simulation at ISyE, Georgia Tech. Prakirt has given multiple talks at INFORMS, and has also given a tutorial at ISyE, Georgia Tech on transform methods.

Sushil Varma is an incoming tenure-track assistant professor in the Industrial and Operations Engineering Department of the University of Michigan, Ann Arbor. He is currently a Ph.D. student at the Georgia Institute of Technology. His research interests include queueing theory and revenue management with applications in online marketplaces, load balancing, and stochastic processing/matching networks. Sushil won the Stephen S. Lavenberg Best Student Paper Award at IFIP Performance 2021 and the Outstanding Graduate Teaching Assistant Award 2022. He was also a finalist for the INFORMS Transportation Science and Logistics (TSL) Best Student Paper Award. Sushil delivered a 3-hour tutorial on stochastic matching networks at IFIP Performance 2023.

Siva Theja Maguluri is the Fouts Family Early Career Professor and an Associate Professor in the H. Milton Stewart School of Industrial and Systems Engineering at Georgia Tech. His research interests span the areas of control, optimization, algorithms, and applied probability. His research and teaching have been recognized through several awards, including the Best Publication in Applied Probability Award, the NSF CAREER Award, a second-place award in the INFORMS JFIG best paper competition, the CTL/BP Junior Faculty Teaching Excellence Award, and the Student Recognition of Excellence in Teaching: Class of 1934 CIOS Award. He has conducted several tutorials on topics such as stochastic networks, stochastic approximation, and reinforcement learning at the INFORMS Annual Meeting 2023, IFIP Performance 2023, the Simons Institute 2022, ACM SIGMETRICS 2021, EC 2019, and IFIP Performance 2017.

Tutorial T2.B: An introduction to pyMarmote and pyMarmoteMDP for Markovian modeling

Speakers: Alain Jean-Marie (Inria, France) and Emmanuel Hyon (Paris Nanterre University, France)

Duration: 1.5 hours
Slides available here.
Marmote tool is available here.

Abstract: Markov Chain modeling and optimization with Markov Decision Processes are classical activities in the SIGMETRICS and PERFORMANCE communities and in several fields of science: Stochastic Operations Research, Performance Evaluation, Performability, Machine Learning, but also Statistical Physics, BioInformatics etc. Many packages for solving Markov chains (or higher-level descriptions involving them, like Queueing Networks, Petri Nets, Stochastic Algebras) have been proposed in the past. Yet, modelers still resort mostly to direct programming in their favorite language. Programming the solution of Markov models from scratch is typically a time-consuming and error-prone activity.

The intention of Marmote is to ease the work of Markov modelers by providing an easy-to-use yet numerically efficient modeling environment. For this reason, it has been programmed in C++ and endowed with a Python user interface. The tutorial will provide an introduction to the capabilities of the Marmote software environment via its Python interface: pyMarmote/pyMarmoteMDP.

Marmote is a programming library for modeling with Markov chains, and for analyzing and "solving" these chains. It provides the objects for building continuous-time and discrete-time Markov chains on discrete but possibly complicated state spaces. Once defined, Markov chains can be analyzed with a variety of methods, including structural analysis, Monte Carlo simulation, and numerical solution for criteria such as transient and stationary distributions or average hitting times. The extension MarmoteMDP provides a library for modeling with Markov Decision Processes. It provides algorithms for numerically determining optimal policies for all classical optimization criteria. It also features capabilities for the structural analysis of the resulting policies and value functions.
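Without assuming anything about Marmote's actual API, the following plain-Python sketch shows the kind of computation such a library automates: obtaining the stationary distribution of a small discrete-time chain by repeated vector-matrix multiplication.

```python
# NOT Marmote's API -- a plain-Python sketch of the kind of computation
# Marmote automates: the stationary distribution of a small discrete-time
# birth-death chain, obtained by repeated vector-matrix multiplication.
P = [
    [0.50, 0.50, 0.00],
    [0.25, 0.50, 0.25],
    [0.00, 0.50, 0.50],
]

def step(dist, P):
    """One transition: dist <- dist @ P."""
    n = len(P)
    return [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]

dist = [1.0, 0.0, 0.0]        # start deterministically in state 0
for _ in range(200):          # this chain mixes quickly; 200 steps suffice
    dist = step(dist, P)
# dist now approximates the stationary distribution (0.25, 0.5, 0.25)
```

Writing even this small solver by hand illustrates the abstract's point: direct programming is time-consuming and error-prone, which is exactly the work a dedicated library takes over.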

The presentation will consist of showing how to use Marmote in a series of thematic Python notebooks. These notebooks will be shared so as to allow motivated attendees to practice on their own.

Speakers biographies:

Alain Jean-Marie is a senior researcher at Inria, the French National Institute for Research in Digital Science and Technology. He has been working in Queueing Theory and Stochastic Optimal Control for more than 35 years. From 2013 on, he has been leading the Marmote project, initially with funding from the French National Research Agency. Applications of the software now include the modeling of smart telecom base stations and semiconductor lasers.

Emmanuel Hyon is an Associate Professor of Computer Science at Paris Nanterre University and a member of the LIP6 laboratory, the computer science laboratory of Sorbonne Université. He has been working in the Queueing Theory and Stochastic Optimal Control fields for almost 20 years. He was a member of the Marmote project and the lead developer of the MarmoteMDP package dedicated to Markov Decision Processes. The software has been used for the control of queues with impatient customers, for energy minimization under QoS constraints in cloud systems, and for supply chain problems.

Tutorial T2.C: Analyzing Queues with Markovian Arrivals and Markovian Service

Speaker: Isaac Grosof (University of Illinois, Urbana-Champaign, USA)

Duration: 1.5 hours
Slides available here.
Video recording available here.

Abstract: Queueing analysis typically focuses on settings with i.i.d. interarrival times and service durations, but many real systems are not captured by such models: rates vary and, more generally, arrival and service patterns are correlated. Components fail, demand surges, and networks experience congestion.

This tutorial will introduce a highly general model for fluctuating arrival and service rates known as the MAMS queue: a single-server queue whose arrival rate and service rate each fluctuate according to a general finite-state Markov chain. This model is flexible enough to capture a wide range of applications that may be of interest to attendees.

The tutorial will then introduce the Relative Arrivals for Drift (RAD) method for analyzing the MAMS system. Attendees will learn how to use RAD analysis to prove closed-form, explicit bounds on performance measures, such as the mean queue length, in MAMS models that arise from practical applications. These bounds also provide tight characterizations of the heavy-traffic behavior of the system. RAD overcomes limitations of prior approaches to analyzing the MAMS system, whose results were too complicated or too computational to provide analytic insight. RAD captures the effects of fluctuations in arrival and service rates in a concise, analytically tractable fashion, providing a new way to think about the MAMS system.
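A tiny simulation conveys why Markov-modulated arrivals matter (this illustrates the model only, not the RAD analysis; the parameters and function names below are ours): a slotted single-server queue whose arrival probability is driven by a two-state environment chain builds a longer queue than a constant-rate queue with the same average load.

```python
import random

def mean_queue(arrival_rates, switch_p, mu, slots=200_000, seed=1):
    """Slotted single-server queue: the per-slot arrival probability is
    set by a two-state environment chain (a tiny MAMS-style instance)."""
    rng = random.Random(seed)
    q = state = total = 0
    for _ in range(slots):
        if rng.random() < switch_p:               # environment switches
            state = 1 - state
        if rng.random() < arrival_rates[state]:   # modulated arrival
            q += 1
        if q > 0 and rng.random() < mu:           # service completion
            q -= 1
        total += q
    return total / slots

modulated = mean_queue((0.05, 0.65), switch_p=0.01, mu=0.5)  # bursty
constant = mean_queue((0.35, 0.35), switch_p=0.01, mu=0.5)   # same mean rate
```

Both queues see an average arrival rate of 0.35 per slot, but in the bursty high-rate state the modulated queue is temporarily overloaded (0.65 > 0.5), so its time-average queue length is noticeably larger; quantifying exactly this effect in closed form is what the RAD bounds are for.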

The tutorial will be highly interactive, helping attendees to engage with the material and internalize RAD analysis.

Speakers biography:

Isaac Grosof is currently a postdoctoral researcher studying queueing theory in the ECE department at the University of Illinois, Urbana-Champaign, mentored by Prof. R. Srikant. They were previously a postdoc at Georgia Tech, mentored by Prof. Siva Theja Maguluri, and received their PhD from Carnegie Mellon University, advised by Prof. Mor Harchol-Balter. They will be starting as a tenure-track professor at Northwestern University in Fall 2024, in the IEMS department. They have given dozens of research talks, both as seminars at a wide variety of universities and at a wide variety of conferences; in particular, they were an invited speaker at Young Europeans in Queueing Theory 2021, at the Algorithms with Predictions workshop of the Symposium on Theory of Computing 2022, and at the Stochastic Networks, Applied Probability, and Performance (SNAPP) 2023 seminar series. Their research has received many awards, including the ACM SIGMETRICS 2023 Doctoral Dissertation Award, the INFORMS 2022 George Nicholson Prize, the ACM SIGMETRICS 2021 Best Paper Award, the ACM SIGMETRICS 2019 Best Student Paper Award, and the IFIP Performance 2018 Best Student Paper Award.

Track 3

Tutorial T3.A: Games in Product Form

Speaker: Michel De Lara (École des Ponts ParisTech, France)

Duration: 3 hours
Slides available here.

Abstract: Game theory offers a mathematical formalism to study strategic interactions. In such models of competition, information (who knows what, and before whom) plays a crucial role. In the fifties, H. W. Kuhn used trees to define a game in extensive form (GEF); in a GEF, the information of a player is represented by a partition of that player's decision nodes. In the seventies, H. S. Witsenhausen used agents, a product set, and a product sigma-field to define the so-called intrinsic model (IM) in multi-agent stochastic control problems; in an IM, the information of an agent is represented by a subfield of the product sigma-field. In this tutorial, we introduce games in product form (GPF) as an alternative (based on the IM) to GEF. In the first hour, we present the Witsenhausen intrinsic model, with many illustrations. We also discuss the classification of information structures and the potential for decomposition, be it hierarchical or parallel, with respect to subgroups of agents. In the second hour, we advocate the relevance of GPF for game theory by providing a case of a game form that is playable but cannot be written on a tree, illustrating with examples the ease and flexibility of GPF for modeling games (Alice and Bob, principal-agent models), and discussing how to represent stochastic and Bayesian games inside the GPF formalism. In the third hour, we provide a definition of perfect recall and of behavioral strategies "à la Aumann", and we prove a Kuhn's Equivalence Theorem for GPF.

Speakers biography:

Michel De Lara is an applied mathematician, trained in stochastic processes and control theory, who graduated as an engineer from the École Polytechnique and the École nationale des ponts et chaussées, where he presently works. He started his career in the environment research center, while working part time at the French Ministry of the Environment. He is now at the mathematics research center, Cermics. Regarding theory, Michel De Lara's interests fall into control theory, stochastic optimization, convex analysis, and game theory. As to applications, he specializes in developing mathematical methods for the sustainable management of natural resources, with particular focus on renewable energy and biodiversity. Michel De Lara has published five books and more than seventy papers in biology, economics, and mathematics journals.

Tutorial T3.B: Optimistic Learning for Resource Management in Communication Systems

Speakers: George Iosifidis (TU Delft, The Netherlands; Amazon), Naram Mhaisen (TU Delft, The Netherlands)

Duration: 1.5 hours
Slides available here.

Abstract: The ever-growing complexity of communication systems, coupled with the increasingly stringent KPIs of the offered services, calls for a new generation of resource management (RM) tools that are both principled and practical. Today, a vast range of such tools are built using Deep Learning (DL) and have proved extremely effective when sufficient representative training data are available. At the same time, a parallel research thrust relies on online learning algorithms that turn runtime observations into resource control decisions and promise robust, albeit conservative, performance guarantees. These two orthogonal approaches come with their own advantages and disadvantages, and as such fail to serve as an effective and versatile resource management toolbox.

In this tutorial we will present a best-of-both-worlds solution based on optimistic learning (OL). The premise of OL is simple and intuitive: leverage any available predictions about the future system state, environment conditions, or user demands to improve the performance of online learning algorithms. The ultimate goal of OL is to expedite learning convergence when the predictions are accurate, while maintaining the worst-case bounds of the legacy learning algorithms when the predictions fail. OL is particularly promising for communication systems, where one typically has access to past observations without, however, knowing in advance their relevance to the yet-to-be-seen conditions/scenarios. Therefore, OL has the potential to mitigate the lack of guarantees and the (often) detrimental performance of DL-based tools, while overcoming the unnecessarily slow convergence of legacy online learning algorithms, eventually enabling robust and interpretable performance of communication systems.

The tutorial will start by tracing the origins of OL in machine learning optimization and introducing the basic principles and main results of OL for convex optimization problems. Next, it will focus on commonly-encountered non-convex problems and explain how OL can be extended to these more intricate scenarios. The last part will present how OL can be used to optimize stateful systems and systems/KPIs with memory effects. The theoretical results will be tied to representative, classical and new, resource management (RM) problems in caching, edge computing, slicing and load-balancing.
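The optimistic premise can be sketched in a few lines (a toy example of ours, not from the tutorial): on a stationary sequence of quadratic losses, using the previous gradient as a hint for the next one reduces cumulative loss relative to plain online gradient descent, while the update degrades gracefully to plain OGD when the hints are useless.

```python
# Toy comparison of plain vs optimistic online gradient descent on the
# repeated loss f_t(x) = (x - 1)^2. The "prediction" of the next gradient
# is simply the previous gradient, which is accurate here because the
# loss sequence is stationary. All names and parameters are invented.
T, eta = 100, 0.1
grad = lambda x: 2.0 * (x - 1.0)

# Plain OGD: x_{t+1} = x_t - eta * g_t
x, plain_loss = 0.0, 0.0
for _ in range(T):
    plain_loss += (x - 1.0) ** 2
    x -= eta * grad(x)

# Optimistic OGD: play x_t = y_t - eta * m_t with hint m_t = g_{t-1},
# then update the base point y_{t+1} = y_t - eta * g_t.
y, m, optimistic_loss = 0.0, 0.0, 0.0
for _ in range(T):
    x = y - eta * m
    optimistic_loss += (x - 1.0) ** 2
    g = grad(x)
    y -= eta * g
    m = g   # the last gradient becomes the next prediction
```

When predictions are accurate the optimistic iterate moves toward the minimizer faster; when they are arbitrary, the extra step is bounded and the standard regret guarantees of OGD are recovered, which is the best-of-both-worlds behavior the abstract describes.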

Speakers biographies:

George Iosifidis: Associate Professor, TU Delft; Research Scientist, Amazon.

Dr. Iosifidis received the Diploma in communications engineering from the Greek Air Force in 2000, and the Ph.D. in 2012 from the Dept. of Electrical Engineering, University of Thessaly, Greece. He was a Post-Doc with CERTH, Greece (2012-14), an Associate Research Scientist with Yale University (2014-17), and an Assistant Professor with Trinity College Dublin (2016-2020), before joining TU Delft. Since 2021, he has also worked as a part-time Research Scientist with Amazon (Visiting Academic). His interests lie in the area of network optimization and economics, with a recent focus on online learning techniques for mobile networks and edge computing systems. He is currently an Editor for IEEE/ACM Trans. on Networking and has served as an Editor for IEEE JSAC, IEEE Trans. on Communications, and IEEE Trans. on Network Science & Engineering. He received the SFI Career Development Award in 2018, and Best Paper Awards at WiOPT 2013, IEEE INFOCOM 2017, and EuCNC 2022. His work on cooperative networks has appeared in Nature Communications (2019), PNAS (2019, 2020), and Nature Human Behavior (2018).

Naram Mhaisen is a fourth-year PhD student in the Software Technology Department at TU Delft, working on data-driven and optimistic learning techniques for resource management problems in communication and computing systems. He graduated first in class from Qatar University in 2017, majoring in Computer Engineering, and obtained an MSc degree with Distinction in Computing in 2020. His work has appeared in top-tier venues such as IEEE Surveys & Tutorials, IEEE Trans. on Mobile Computing, IEEE Trans. on Network Science & Engineering, IEEE IoT Journal, and ACM SIGMETRICS. His work has also received awards from industry labs, namely the Microsoft Imagine Cup 2017 and the NEC Europe Research Fellowship 2022. Finally, Mr. Mhaisen has extensive pre-doctoral research experience at the intersection of learning frameworks and IoT networks, where he has published more than 10 journal papers and has delivered lectures at conferences, in seminars, and as a TA in courses.

Tutorial T3.C: Distributional Analysis of Stochastic Algorithms

Speakers: Qiaomin Xie (University of Wisconsin-Madison), Yudong Chen (University of Wisconsin-Madison)

The speakers are not able to travel to Italy; the tutorial is therefore canceled. Sorry for the inconvenience.

Duration: 1.5 hours

Abstract: Stochastic algorithms power modern machine learning and optimization. They instantiate as stochastic gradient descent for loss minimization, stochastic descent-ascent for min-max optimization, TD/Q-learning for reinforcement learning, stochastic approximation for fixed-point problems, and methods for stochastic variational inequalities. Together with their many variants, these algorithms are increasingly vital in today's large-scale problems with finite noisy data. There is thus pressing interest in obtaining fine-grained characterizations of stochastic algorithms and enhancing their sample and computational efficiency.

Traditionally, stochastic algorithms are treated as the noisy versions of their deterministic counterparts, viewing their stochasticity as a nuisance. Prior work thus focuses on controlling the stochastic fluctuation of the iterates, both in terms of algorithm design (e.g., use of diminishing stepsizes) and analysis (e.g., bounding mean squared errors). A recent line of work deviates from the above paradigm and embraces the probabilistic behavior of stochastic algorithms. By viewing the iterate sequence as a stochastic process, its distribution can be studied using modern tools from Markov chain and stochastic analysis, which provides fine-grained characterization of the behaviors of the algorithms.

In this tutorial, we will present an overview of these new techniques and results for the distributional analysis of stochastic algorithms. Three main ingredients of this approach will be covered: (1) establishing finite-time distributional convergence of the iterates and relating their ergodicity properties to the characteristics of the problem instance, algorithm, and data; (2) characterizing the steady-state distribution of the iterates using the techniques of coupling and the basic adjoint relationship; (3) leveraging this precise probabilistic characterization for stepsize scheduling, variance reduction, bias refinement, and efficient statistical inference. Our focus will be on the constant-stepsize paradigm popular among practitioners, emphasizing the disentanglement of the deterministic and stochastic behaviors of the algorithms, as well as of their transient and long-run behaviors. We will cover background material, fundamental state-of-the-art results, and applications to concrete optimization and RL problems. Open issues and potential future directions will also be discussed.
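A minimal example of the "iterates as a Markov chain" viewpoint (ours, with made-up parameters): constant-stepsize SGD on a quadratic with ±1 gradient noise converges not to a point but to a stationary distribution, whose variance (alpha/(2 - alpha) for this instance) shrinks with the stepsize alpha.

```python
import random

def sgd_stationary_variance(alpha, steps=200_000, seed=0):
    """Constant-stepsize SGD on f(x) = E[(x - xi)^2] / 2 with xi = +/-1.
    The iterates x_t form a Markov chain; estimate its stationary variance."""
    rng = random.Random(seed)
    x, burn = 0.0, 1_000
    samples = []
    for t in range(steps + burn):
        xi = rng.choice((-1.0, 1.0))
        x = x - alpha * (x - xi)      # stochastic gradient step
        if t >= burn:                 # discard the transient phase
            samples.append(x)
    mean = sum(samples) / len(samples)
    return sum((s - mean) ** 2 for s in samples) / len(samples)

v_big = sgd_stationary_variance(0.5)    # closed form: 0.5/(2-0.5) = 1/3
v_small = sgd_stationary_variance(0.1)  # closed form: 0.1/1.9 ~= 0.0526
```

Here the update x' = (1 - alpha) x + alpha xi is an AR(1) chain, so the stationary variance solves v = (1 - alpha)^2 v + alpha^2, giving alpha/(2 - alpha): exactly the kind of closed-form steady-state characterization the tutorial develops for far more general algorithms.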

Speakers biographies:

Qiaomin Xie is an assistant professor in the Department of Industrial and Systems Engineering at the University of Wisconsin-Madison. Her research interests lie in the fields of reinforcement learning, applied probability, game theory, and queueing theory, with applications to computer and communication networks. She was previously a visiting assistant professor at the School of Operations Research and Information Engineering at Cornell University (2019-2021). Prior to that, she was a postdoctoral researcher with LIDS at MIT. Qiaomin received her Ph.D. in Electrical and Computer Engineering from the University of Illinois Urbana-Champaign in 2016. She received her B.S. in Electronic Engineering from Tsinghua University. She is a recipient of the NSF CAREER Award, the JPMorgan Faculty Research Award, the Google Systems Research Award, and the UIUC CSL Ph.D. Thesis Award.

Yudong Chen is currently an Associate Professor in the Department of Computer Sciences, University of Wisconsin-Madison. Previously, he was an Associate Professor in the School of Operations Research and Information Engineering, Cornell University. He received B.S. and M.S. degrees in control science and engineering from Tsinghua University and a Ph.D. degree in electrical and computer engineering from The University of Texas at Austin. His research lies in machine learning, reinforcement learning, high-dimensional statistics, and optimization, with applications in network scheduling, wireless communication, and financial systems. He has received a National Science Foundation CAREER Award, and the courses he has developed have won him multiple teaching awards at Cornell University. Work by his group has been recognized by the Best Student Paper Award from ACM SIGMETRICS and 2nd place in the INFORMS George Nicholson Student Paper Competition.

Track 4

Tutorial T4.A: Unveiling privacy and data protection issues from the use of social media and online advertising platforms

Speakers: Ángel Merino (Universidad Carlos III de Madrid, Spain), Francisco Caravaca (Universidad Carlos III de Madrid, Spain), Ángel Cuevas (Universidad Carlos III de Madrid, Spain), Rubén Cuevas (Universidad Carlos III de Madrid, Spain)

Duration: 3 hours
Slides available here.

Social media platforms on the Internet are a huge source of information that is often very useful for research purposes. An important part of these platforms is their advertising ecosystem: the platforms rely on collecting information from their users in order to offer advertisers accurate user targeting based on that information. We have leveraged the automated collection of such data to conduct several research works on user privacy in these platforms, covering topics such as user uniqueness and overprofiling. In this tutorial, you will learn how to approach the process of obtaining information from different types of online services by automated means, and how this information can be used for research. We will combine a presentation of the contents, context, and our research with practical parts in which the audience will learn how we extract the specific data needed for our research analyses in an automated way, using their own Facebook and LinkedIn accounts and the Python programming language to build the scripts. We will provide code snippets in a Jupyter Notebook to work with. The intended audience is therefore people with knowledge of some programming language, preferably Python, and with an interest in social media, user privacy, and automated data extraction.
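The kind of automated collection loop the hands-on part revolves around can be sketched, in spirit, as follows. This is a purely illustrative example: the paginated responses and their fields are hypothetical stand-ins, not the real Facebook or LinkedIn interfaces covered in the tutorial:

```python
import json

# Hypothetical paginated responses, standing in for an ad platform's
# audience-estimate endpoint (the platforms' real interfaces differ).
pages = [
    json.dumps({"results": [{"interest": "running", "audience": 120000},
                            {"interest": "chess", "audience": 45000}],
                "next": 1}),
    json.dumps({"results": [{"interest": "jazz", "audience": 80000}],
                "next": None}),
]

def collect(fetch_page):
    """Follow 'next' cursors until exhausted, accumulating one row per item."""
    rows, cursor = [], 0
    while cursor is not None:
        page = json.loads(fetch_page(cursor))
        rows.extend(page["results"])
        cursor = page["next"]
    return rows

rows = collect(lambda cursor: pages[cursor])
print(len(rows), sum(r["audience"] for r in rows))
```

In practice `fetch_page` would issue an authenticated request from the participant's own account; the cursor-following structure is what makes the extraction scriptable end to end.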

Speakers' biographies:

Ángel Merino is a Ph.D. student and researcher in the Department of Telematic Engineering at UC3M, where he earned his bachelor's degree in Engineering in Telecommunication Technologies (2021) and his master's degree in Cybersecurity (2022). His research focuses on leveraging data from social networks to address different issues, in particular identifying and analyzing problems related to user privacy.

Francisco Caravaca is a researcher and Ph.D. student at UC3M, where he earned his bachelor's degree in Telematics Engineering (2022), with an award for the best academic record. He also completed his master's degree in Cybersecurity (2023) at UC3M. His research focuses on social media platforms and privacy.

Ángel Cuevas received the M.Sc. (2007) and Ph.D. (2011) degrees in Telematic Engineering from the University Carlos III of Madrid. He is currently an Associate Professor in the Department of Telematic Engineering, University Carlos III of Madrid. He is a co-author of more than 70 papers in prestigious international journals and conferences, such as the IEEE/ACM TRANSACTIONS ON NETWORKING, the ACM Transactions on Sensor Networks, Computer Networks (Elsevier), the IEEE NETWORK, the IEEE Communications Magazine, USENIX Security, WWW, ACM CoNEXT, and ACM CHI. His research interests focus on Internet measurements, web transparency, privacy, and P2P networks. He was a recipient of the Best Paper Award at ACM MSWiM 2010.

Rubén Cuevas is an Associate Professor in the Telematic Engineering Department at Universidad Carlos III de Madrid, Spain, and Deputy Director of the UC3M-Santander Big Data Institute. He has co-authored over 70 papers in prestigious international journals and conferences such as ACM CoNEXT, WWW, USENIX Security, ACM HotNets, IEEE INFOCOM, ACM CHI, IEEE/ACM ToN, IEEE TPDS, PNAS, Nature Scientific Reports, PLOS ONE, and Communications of the ACM. He has been the PI of 10 research projects and has participated in more than 25. His research work has been featured in major international media such as The Financial Times, BBC, The Guardian, New Scientist, Wired, Corriere della Sera, Le Figaro, and El País. His main research interests include online advertising, web transparency, and Internet measurements.

Tutorial T4.B: A Comprehensive Overview of Network Tomography: Inverse Methods for Network State Monitoring from End-to-End Measurements

Speaker: Ting He (Pennsylvania State University, USA)

Duration: 1.5 hours
Slides available here.

Abstract: Network state information (e.g., topology, link delays/loss rates) is needed for a number of management functions at the network layer and above. However, the traditional ways of obtaining this information require either administrative access to the control plane (e.g., via SNMP or OpenFlow) or cooperation from internal nodes (e.g., via traceroute). In this tutorial, I will give an overview of an inference-based approach, known as network tomography, for obtaining network state information from end-to-end measurements in the data plane. As a field, network tomography dates back to 1999, and has evolved into a broad family of inversion problems including (i) network performance tomography, which aims at inferring the performance states of links from the performance measurements on selected paths, (ii) network topology tomography, which aims at jointly inferring the (routing) topology and the performance states of links from the performance measurements on paths between selected vantage points, and (iii) traffic matrix tomography, which aims at inferring the flow rates between given ingress/egress points from the aggregate rates measured at selected links. I will review all these problems, with a focus on network performance tomography and network topology tomography, both of which concern inferring the network's internal state from end-to-end measurements. The tutorial will cover the basic ideas for addressing each problem, the state-of-the-art solutions, the theoretical conditions for guaranteed accuracy, and the limitations of each approach. I will conclude the tutorial with an example of how the inferred information can facilitate upper-layer applications in the context of overlay networks, and an outlook on directions of future research.
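The linear-inversion core of network performance tomography can be illustrated with a small hedged sketch (a toy example, not from the tutorial): stack the monitored paths into a routing matrix, measure end-to-end path delays, and solve for the per-link delays, which succeeds exactly when the routing matrix has full column rank:

```python
import numpy as np

# Hypothetical network with 4 links measured over 4 end-to-end paths.
# Row i of the routing matrix R marks which links path i traverses.
R = np.array([[1, 1, 0, 0],
              [0, 1, 1, 0],
              [1, 0, 1, 1],
              [0, 0, 0, 1]], dtype=float)

link_delays = np.array([2.0, 5.0, 1.0, 3.0])  # unknown ground truth
path_delays = R @ link_delays                 # what end-to-end probes observe

# Invert the path measurements back to link delays; the links are
# identifiable here because R has full column rank.
est, *_ = np.linalg.lstsq(R, path_delays, rcond=None)
print(np.round(est, 6))
```

When R is rank-deficient, only certain combinations of link metrics are identifiable, which is exactly the measurement-design question the tutorial's identifiability conditions address.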

Speaker's biography:

Ting He has been an Associate Professor in the School of Electrical Engineering and Computer Science at the Pennsylvania State University since 2016. She received the Ph.D. degree in Electrical and Computer Engineering from Cornell University in 2007. Her interests reside at the intersection of computer networking, performance evaluation, and machine learning. Dr. He is an IEEE Senior Member and an ACM Member. She has served as Associate Editor for IEEE/ACM Transactions on Networking (2017-present) and IEEE Transactions on Communications (2017-2020), General Co-Chair for IEEE RTCSA (2023), TPC Co-Chair for IEEE ICCCN (2022), Area TPC Chair for IEEE INFOCOM (2021), Corporate Sponsorship Co-Chair for ACM MobiHoc (2019), and TPC member for many communication/networking conferences such as INFOCOM, ICDCS, ICNP, IWQoS, WiOpt, Networking, and SECON, as well as on the Steering Committee for the ICCCN workshop on Edge Computing and Networking (ECN). Dr. He has published over 100 papers in refereed journals/conferences with over 7000 citations, 2 books, and 13 patents, including the first comprehensive book on network tomography, "Network Tomography: Identifiability, Measurement Design, and Network State Inference", published by Cambridge University Press. Her work received the 2021 IEEE Communications Society Leonard G. Abraham Prize, the Kenneth C. Sevcik Outstanding Student Paper Award at SIGMETRICS'15, the Best Paper Award at ICDCS'13, the Best Student Paper Award at ICASSP'05, and Best Paper Nominations at SmartGridComm'20 and IMC'13. Dr. He also received the Awards for "Military Impact and Commercial Prosperity of Coreset Research" and "Most Collaboratively Complete Publications" from the International Technology Alliance (ITA) in 2015 and 2021, the IBM Research Division Award for "Scientific Advances in Quality of Information" in 2016, multiple Outstanding Contributor Awards and Invention Achievement Awards from IBM in 2007-2016, and multiple Distinguished TPC Member Awards from INFOCOM in 2016-2020.

Tutorial T4.C: Causal Inference in Complex Systems with Network Interference and Temporal Dynamics

Speaker: Christina Lee Yu (Cornell University, USA)

Duration: 1.5 hours
Slides available here.

Abstract: Randomized experiments are widely used to evaluate new policies or proposed "treatments" in domains spanning the physical and biological sciences, social sciences, engineering, medicine and health, as well as in public service domains and the technology industry. However, classical approaches to experimental design rely on critical independence assumptions that are violated in systems with network interactions or temporal dynamics. In this tutorial, we will introduce the challenges of causal inference under network interference and temporal dynamics and survey the different approaches proposed in the literature to account for such dependencies, highlighting ongoing open questions in the field.
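The way network interference breaks classical experimental design can be seen in a small hedged simulation (an illustrative toy model, not the tutorial's own example): when each unit's outcome also depends on how many of its cluster-mates are treated, the naive difference-in-means under unit-level randomization is biased for the global treatment effect, while cluster-level randomization recovers it:

```python
import numpy as np

rng = np.random.default_rng(1)
n_clusters, size = 500, 10
direct, spill = 1.0, 2.0      # direct effect + within-cluster spillover
gte = direct + spill          # global treatment effect (all vs. none) = 3.0

def outcomes(z):
    # Outcome = direct effect of own treatment + spillover from the
    # fraction of one's cluster that is treated.
    z = z.reshape(n_clusters, size)
    frac = z.mean(axis=1, keepdims=True)
    return (direct * z + spill * frac).ravel()

# Unit-level (Bernoulli) randomization: spillovers leak into both arms,
# so the naive estimator lands near direct + spill/size, far from gte.
z_unit = rng.integers(0, 2, n_clusters * size)
y = outcomes(z_unit)
est_unit = y[z_unit == 1].mean() - y[z_unit == 0].mean()

# Cluster-level randomization: each cluster is all-treated or all-control,
# so each unit experiences the "all vs. none" counterfactual.
z_cl = np.repeat(rng.integers(0, 2, n_clusters), size)
y = outcomes(z_cl)
est_cluster = y[z_cl == 1].mean() - y[z_cl == 0].mean()

print(est_unit, est_cluster)
```

Cluster (graph-cluster) randomization is one of the standard designs surveyed in this literature; the open questions concern interference patterns that do not decompose into neat clusters and settings with temporal carryover.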

Speaker's biography:

Christina Lee Yu is an Assistant Professor at Cornell University in the School of Operations Research and Information Engineering. Prior to Cornell, she was a postdoc at Microsoft Research New England. She received her PhD in 2017 and MS in 2013 in EECS from MIT, and her BS in CS from Caltech in 2011. She received honorable mention for the 2018 INFORMS Dantzig Dissertation Award. She is a recipient of the 2021 Intel Rising Stars Award, 2021 JPMorgan Faculty Research Award, and 2024 NSF CAREER award. Her research interests include algorithm design and analysis, high dimensional statistics, inference over networks, sequential decision making under uncertainty, online learning, and network causal inference. Her research is supported by NSF and AFOSR awards. She gave a tutorial on matrix completion at the 2018 International Symposium on Information Theory, and she has presented tutorials on causal inference in the presence of network interference at the 2022 CORS/INFORMS International Conference, 2023 North American School of Information Theory, and 2023 SIGMETRICS workshop on Causal Inference for Engineers. She received the 2022 Ralph S. Watts Excellence in Teaching Award at Cornell.
