
ACM SIGMETRICS 2021

Beijing, China
June 14-18, 2021

Tutorials

Friday, June 18, 2021

Each tutorial consists of two or three sessions. Each session is a one-hour pre-recorded presentation followed by a half-hour live Q&A.

Tutorial 1: Similarity Caching

Similarity search is a key operation in multimedia retrieval systems and recommender systems, and it will also play an important role in future machine learning and augmented reality applications. The search for similar objects is based on a k-nearest neighbor search in a suitable metric space. When these systems need to serve large objects under tight delay constraints, edge servers close to the end user can operate as similarity caches to speed up retrieval. When answering a request, a classic cache provides a subset of the objects the server would provide. A similarity cache differs from a classic one in that it may reply to a request using a local subset of objects that are potentially (hopefully only slightly) farther than the true closest neighbors, in order to reduce the fetching cost. The term was coined in [1, 2], where similarity caches were envisaged for content-based image retrieval and contextual advertising, respectively. But the idea has been rediscovered a number of times under different names for different applications: semantic caches for object recognition [3, 4, 5], approximate caches for fast machine learning inference [6], and soft caches for recommender systems [7]. The purpose of this tutorial is to present the general similarity caching model, its applications, current results, and open problems, and more generally to motivate additional research on this new modeling framework.


[1] Fabrizio Falchi, Claudio Lucchese, Salvatore Orlando, Raffaele Perego, and Fausto Rabitti. A metric cache for similarity search. In Proc. of ACM LSDS-IR, pages 43–50, 2008.
[2] Sandeep Pandey, Andrei Broder, Flavio Chierichetti, Vanja Josifovski, Ravi Kumar, and Sergei Vassilvitskii. Nearest-neighbor caching for content-match applications. In Proc. of WWW, pages 441–450, 2009.
[3] Utsav Drolia, Katherine Guo, Jiaqi Tan, Rajeev Gandhi, and Priya Narasimhan. Cachier: Edge-caching for recognition applications. In Proc. of IEEE ICDCS, pages 276–286, 2017.
[4] Peizhen Guo, Bo Hu, Rui Li, and Wenjun Hu. FoggyCache: Cross-device approximate computation reuse. In Proc. of ACM MobiCom, pages 19–34, 2018.
[5] Srikumar Venugopal, Michele Gazzetti, Yiannis Gkoufas, and Kostas Katrinis. Shadow puppets: Cloud-level accurate AI inference at the speed and economy of edge. In USENIX HotEdge, 2018.
[6] Daniel Crankshaw, Xin Wang, Giulio Zhou, Michael J. Franklin, Joseph E. Gonzalez, and Ion Stoica. Clipper: A low-latency online prediction serving system. In 14th USENIX Symposium on Networked Systems Design and Implementation (NSDI 17), pages 613–627, 2017.
[7] P. Sermpezis, T. Giannakas, T. Spyropoulos, and L. Vigneri. Soft cache hits: Improving performance through recommendation and delivery of related content. IEEE Journal on Selected Areas in Communications, 36(6):1300–1313, 2018.
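
As a rough illustration of the idea, the sketch below (in Python) implements a threshold-based similarity cache with LRU eviction. The class name, the dissimilarity threshold, and the fetch_from_server callback are hypothetical choices made for this example, not part of the tutorial material.

    # Minimal similarity-cache sketch (illustrative assumptions only).
    # Objects live in a metric space (here: points with Euclidean distance).
    # On a request, the cache returns its closest stored object if it lies
    # within a dissimilarity threshold; otherwise it pays the fetching cost,
    # asks the remote server, and caches the answer, evicting in LRU order.

    from collections import OrderedDict
    import math

    def euclidean(x, y):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

    class SimilarityCache:
        def __init__(self, capacity, threshold, fetch_from_server):
            self.capacity = capacity        # max number of cached objects
            self.threshold = threshold      # max acceptable dissimilarity
            self.fetch = fetch_from_server  # costly exact lookup at the server
            self.store = OrderedDict()      # key -> object, in LRU order

        def get(self, query):
            # Approximate hit: closest cached object within the threshold.
            best_key, best_dist = None, float("inf")
            for key, obj in self.store.items():
                d = euclidean(query, obj)
                if d < best_dist:
                    best_key, best_dist = key, d
            if best_key is not None and best_dist <= self.threshold:
                self.store.move_to_end(best_key)    # refresh LRU position
                return self.store[best_key], "approx-hit"
            # Miss: fetch the exact answer, cache it, evict if over capacity.
            obj = self.fetch(query)
            self.store[tuple(query)] = obj
            if len(self.store) > self.capacity:
                self.store.popitem(last=False)      # evict least recently used
            return obj, "miss"

    # Hypothetical usage: the server callback stands in for an exact lookup.
    cache = SimilarityCache(capacity=100, threshold=0.3,
                            fetch_from_server=lambda q: tuple(q))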

Speakers:

Emilio Leonardi

Emilio Leonardi is currently a Professor at the Dipartimento di Elettronica e Telecomunicazioni of Politecnico di Torino. His research interests are in the fields of performance evaluation of computer networks and distributed systems, network science, social networks, and human-centric computation.

He has co-authored over 200 papers published in international journals and presented at leading international conferences. He has participated in several national and European projects, and has also been involved in several consulting and research projects with industry. He was the scientific coordinator of the European FP7 STREP project “NAPA-WINE” on P2P streaming applications.

He has served on the editorial boards and technical program committees of several journals and conferences, including IEEE Transactions on Parallel and Distributed Systems, IEEE Journal on Selected Areas in Communications, ACM SIGMETRICS, IEEE INFOCOM, and ACM MobiHoc.


Giovanni Neglia

Giovanni Neglia has been a researcher at Inria Sophia Antipolis Méditerranée since 2008. He received his Habilitation from the Université Côte d’Azur in 2017, and his PhD (2005) and degree in electronic engineering (2001) from the University of Palermo, Italy. Before joining Inria as a permanent researcher, he was a research scholar at the University of Massachusetts Amherst (2005) and a postdoc at Inria (2006-2007). His main interests are currently networks of caches, distributed optimization for ML, and complex networks. He has co-authored more than 100 conference and journal papers, and his research has received 7 best paper awards.


Thrasyvoulos Spyropoulos

Thrasyvoulos Spyropoulos is a Full Professor in the Communication Systems Department at EURECOM. He holds a PhD degree from the University of Southern California (USC), Los Angeles, US. Before joining EURECOM he spent one year as a post-doctoral researcher at INRIA Sophia-Antipolis and three years as a Senior Researcher and Lecturer at ETH Zurich. His research interests include content-centric networks, 5G/6G cellular system modelling and optimization, applied machine learning, recommendation systems, and social networks. He has co-authored more than 100 publications in international conferences and journals. He has served on the TPCs of numerous top-tier conferences (ACM MobiHoc, IEEE INFOCOM, ACM SIGMETRICS) and is an associate editor of IEEE/ACM Transactions on Networking. He has co-chaired ACM CHANTS 2013, IEEE NetSciCom 2014, and IEEE CCDWN 2019. He is also a co-recipient of best paper awards at IEEE SECON 2008 and IEEE WoWMoM 2012, and of best paper runner-up awards at ACM MobiHoc 2011 and IEEE WoWMoM 2015. He is the recipient of the prestigious French young investigator award (ANR JCJC) for work on cooperative edge caching, communication, and recommendation, which is directly related to the topic of this tutorial.


Tutorial 2: Predicting Time-to-Event Outcomes - A Tour of Survival Analysis from Classical to Modern

Predicting time-to-event outcomes arises in all sorts of applications, whether we want to know how long until a device fails, how long a patient will stay in the hospital, or when a convicted criminal is likely to re-offend. These time-to-event prediction problems have been studied for decades, largely in the statistics and medical communities, within the field of survival analysis. Only recently has survival analysis been explored more by machine learning researchers, with a number of significant methodological advances that take advantage of neural nets. This tutorial aims to cover the fundamentals of survival analysis, including its basic problem formulation and how it is commonly used, primarily in the medical literature, before highlighting some of the recent machine learning innovations and open challenges.
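
To make the basic problem formulation concrete, here is a minimal sketch, with made-up toy data, of right-censored time-to-event observations and the classical Kaplan-Meier estimator of the survival function S(t) = P(T > t). The function name and the toy data are illustrative assumptions, not taken from the tutorial.

    # Each subject i has an observed time t_i and an event indicator d_i
    # (1 = the event occurred at t_i, 0 = right-censored at t_i).

    import numpy as np

    def kaplan_meier(times, events):
        """Return distinct event times and the estimated survival curve."""
        times = np.asarray(times, dtype=float)
        events = np.asarray(events, dtype=int)

        survival = 1.0
        curve_t, curve_s = [], []
        for t in np.unique(times[events == 1]):       # distinct event times
            at_risk = np.sum(times >= t)              # still under observation
            died = np.sum((times == t) & (events == 1))
            survival *= 1.0 - died / at_risk          # product-limit update
            curve_t.append(t)
            curve_s.append(survival)
        return np.array(curve_t), np.array(curve_s)

    # Toy data: times in days; event = 0 means the subject was censored.
    t, s = kaplan_meier([2, 3, 3, 5, 8, 8, 12], [1, 1, 0, 1, 0, 1, 1])
    print(list(zip(t, s)))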

Speakers:

George H. Chen

George H. Chen is an Assistant Professor of Information Systems at Carnegie Mellon University's Heinz College of Information Systems and Public Policy, and an affiliated faculty member of the Machine Learning Department. He primarily works on machine learning for healthcare, with an emphasis on forecasting problems involving survival analysis as well as time series data. A recurring theme in his work is the use of nonparametric prediction methods that aim to make few assumptions on the underlying data. Since these methods inform interventions that can be costly and affect people’s well-being, ensuring that predictions are reliable is essential. To this end, in addition to developing nonparametric predictors, George also produces theory to understand when and why they work, and identifies the evidence behind forecasts to help practitioners make decisions. George received his PhD from MIT in Electrical Engineering and Computer Science, where he won the George Sprowls Award for outstanding computer science thesis as well as the top graduate student teaching award, the Goodwin Medal.



Tutorial 3: Security and Privacy for Distributed Optimization & Distributed Machine Learning

The tutorial will include an introduction to (i) distributed optimization and distributed machine learning, (ii) security/fault-tolerance for distributed optimization/learning, and (iii) privacy in distributed optimization/learning. The presentation will cover the basic principles and some representative solutions, and is designed to be accessible to attendees without any background in these topics. Both server-based and peer-to-peer (decentralized) solutions will be discussed. A small illustrative sketch of a fault-tolerant aggregation rule follows the outline below.

Tutorial outline:

  • Motivation, applications of distributed optimization/learning
  • Solutions for distributed optimization/learning, including federated and decentralized approaches
  • Variations on the theme: (i) Asynchronous, (ii) Stochastic optimization
  • Security/Fault-tolerance: Motivation, Space of solutions and trade-offs, Example solutions
  • Privacy: Motivation, Space of solutions, Example solutions
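
The following minimal sketch is illustrative only; the update vectors and the choice of trimmed mean as the robust rule are assumptions for this example, not the tutorial's specific solutions. It contrasts plain averaging of worker updates, as in federated averaging, with a simple fault-tolerant aggregation rule, the coordinate-wise trimmed mean.

    # In server-based distributed learning, each worker sends a gradient or
    # model update and the server aggregates them. Plain averaging is fragile:
    # a single Byzantine worker can shift the result arbitrarily. The
    # coordinate-wise trimmed mean discards the f largest and f smallest
    # values in every coordinate before averaging.

    import numpy as np

    def average(updates):
        return np.mean(updates, axis=0)

    def trimmed_mean(updates, f):
        """Coordinate-wise trimmed mean, tolerating up to f Byzantine workers."""
        u = np.sort(np.asarray(updates), axis=0)       # sort each coordinate
        return np.mean(u[f:len(u) - f], axis=0)        # drop f extremes per side

    honest = [np.array([1.0, 2.0]) + 0.1 * np.random.randn(2) for _ in range(8)]
    byzantine = [np.array([1e6, -1e6])]                # one adversarial update
    updates = honest + byzantine

    print("plain average :", average(updates))             # ruined by the outlier
    print("trimmed mean  :", trimmed_mean(updates, f=1))   # close to [1, 2]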

Speakers:

Nitin Vaidya

Nitin Vaidya is the Robert L. McDevitt, K.S.G., K.C.H.S. and Catherine H. McDevitt L.C.H.S. Chair of Computer Science at Georgetown University. He has held faculty positions in the Department of Electrical and Computer Engineering at the University of Illinois at Urbana-Champaign (2001-2018) and in the Department of Computer Science at Texas A&M University (1992-2001). His current research interests are in distributed computing, including distributed learning, privacy-preserving distributed algorithms, and fault-tolerant algorithms. He has also performed extensive research in wireless networking. Nitin has co-authored papers that received awards at several conferences, including ACM MobiCom, ACM MobiHoc, SSS, and ICDCN. He is a fellow of the IEEE. Nitin has served as the Chair of the Steering Committee for the ACM PODC conference, as the Editor-in-Chief of the IEEE Transactions on Mobile Computing, and as the Editor-in-Chief of the ACM SIGMOBILE publication MC2R. He received his Ph.D. from the University of Massachusetts at Amherst, his Master's degree from the Indian Institute of Science, Bangalore, and his Bachelor's degree from the Birla Institute of Technology and Science, Pilani.



Tutorial 4: Coupling Techniques for Complex Control Problems

Online decision-making and stochastic control is one of the most active areas of applied probability, and a core topic of the SIGMETRICS community. Complex control problems show up in a wide variety of applications and, thanks to the ever-present “curse of dimensionality”, are typically impossible to solve optimally and often even hard to approximate. The huge diversity in underlying models and methodologies also means that these settings are sometimes hard to reason about, and the available algorithms and guarantees are hard to understand and compare.

Stochastic coupling techniques are a powerful tool for obtaining new control policies and performance bounds in such complex settings. These approaches have several desirable features.

  • Coupling provides performance bounds in terms of interpretable “benchmark” policies.
  • Coupling often requires minimal assumptions on the underlying process.
  • Coupling sometimes leads to elegant elementary arguments, even in complex settings.

Moreover, recent lines of work in the SIGMETRICS community have used these techniques to spectacular effect, providing breakthroughs on several long-standing open questions in the field.

Our aim in this tutorial is to give an introduction to these coupling approaches to control problems. To this end, we will first give a survey of different coupling approaches, classifying them based on common themes. Next, we will dive into two particular examples of recent successes using these techniques: one on online resource allocation, and another on multiserver scheduling. Finally, we will highlight some open avenues for further research.
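
As a toy illustration of the coupling idea, and not one of the tutorial's case studies, the sketch below runs the same discrete-time single-server queue from two different initial backlogs while feeding both copies the same arrival and service randomness. Under this coupling the two sample paths remain ordered and eventually coalesce, which is the kind of elementary pathwise comparison the techniques build on. All parameter values are made up.

    import random

    def step(q, a, s):
        # Slotted single-server queue: q' = max(q + arrival - service, 0).
        return max(q + a - s, 0)

    def run_coupled(x0, y0, horizon=2000, p_arrival=0.4, p_service=0.6, seed=1):
        rng = random.Random(seed)
        x, y = x0, y0
        for t in range(horizon):
            a = 1 if rng.random() < p_arrival else 0   # shared arrival
            s = 1 if rng.random() < p_service else 0   # shared service token
            x, y = step(x, a, s), step(y, a, s)
            assert x <= y, "monotone coupling: the ordering never breaks"
            if x == y:
                return t                               # paths have coalesced
        return None

    print("coalesced at step", run_coupled(x0=0, y0=20))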

Speakers:

Ziv Scully

Ziv Scully is a fifth-year graduate student in Computer Science at CMU, advised by Mor Harchol-Balter and Guy Blelloch. He graduated from MIT in 2016 with a BS in Mathematics with Computer Science. He is the recipient of an NSF Graduate Fellowship and an ARCS Foundation scholarship. Ziv’s research focus is optimizing and analyzing computer systems and algorithms from a stochastic perspective, including job scheduling, load balancing, combinatorial optimization under uncertainty, and parallel algorithms. Recent publications of his have been recognized with awards from the INFORMS Applied Probability Society (Best Student Paper Prize finalist, 2018), IFIP PERFORMANCE (Best Student Paper Award winner, 2018), and ACM SIGMETRICS (Outstanding Student Paper Award winner, 2019; Best Video Award winner, 2020).


Sid Banerjee

Sid Banerjee is an assistant professor in the School of Operations Research and Information Engineering at Cornell, working on topics at the intersection of data-driven decision-making, network algorithms, and market design. His research is supported by an NSF CAREER award and grants from the NSF, ARL, and Engaged Cornell. He received his PhD in 2013 from the ECE Department at UT Austin and was a postdoctoral researcher in the Social Algorithms Lab at Stanford from 2013 to 2015. He has also served as a technical consultant with the research science group at Lyft.


Tutorial 5: Lyapunov Drift Methods for Stochastic Recursions: Optimization, Reinforcement Learning and Resource Allocation

Stochastic iterative algorithms appear in a variety of domains; examples include the evolution of queues in a stochastic network and stochastic-gradient-descent-type algorithms for optimization and learning. This tutorial presents a Lyapunov drift framework that can be used to study the convergence and steady-state performance of such stochastic iterative algorithms in general. In contrast to the traditional approach based on fluid or diffusion limits and process-level convergence, such a drift-based approach is much simpler and more powerful. The tutorial is in three parts. In the first part, we present a drift-based approach to obtain finite-time convergence bounds for stochastic gradient descent and stochastic approximation. In the second part, we build upon these results to obtain finite-sample error bounds for various reinforcement learning algorithms. In the third part, we present the drift method for heavy-traffic analysis of stochastic processing networks, with a focus on load balancing systems and the input-queued switch. In spite of the simplicity of a drift-based approach, a major challenge is finding the right Lyapunov functions. This tutorial presents intuition and rules of thumb for picking Lyapunov functions in various problems.
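
As a small numerical companion to the first theme, and only as an illustrative assumption rather than the tutorial's own development, the sketch below estimates the one-step drift of the quadratic Lyapunov function V(q) = q^2 for a stable slotted queue. The drift is positive near the origin but becomes strongly negative for large backlogs, which is the property drift-based bounds exploit. Parameter values and function names are made up.

    # For the slotted queue q_{t+1} = max(q_t + a_t - s_t, 0) with
    # E[a_t] < E[s_t], the quadratic Lyapunov function V(q) = q^2 satisfies
    #     E[ V(q_{t+1}) - V(q_t) | q_t = q ] <= 2 q (E[a] - E[s]) + C,
    # so the drift is negative once q is large enough. We check this by
    # averaging the one-step change of V at a few fixed backlogs q.

    import random

    def estimated_drift(q, p_arrival=0.4, p_service=0.6, samples=200_000, seed=0):
        rng = random.Random(seed)
        total = 0.0
        for _ in range(samples):
            a = 1 if rng.random() < p_arrival else 0
            s = 1 if rng.random() < p_service else 0
            q_next = max(q + a - s, 0)
            total += q_next ** 2 - q ** 2          # V(q_{t+1}) - V(q_t)
        return total / samples

    for q in [0, 1, 5, 20]:
        print(f"q = {q:2d}   estimated drift of V = {estimated_drift(q):+.3f}")
    # For large q the drift is roughly 2*q*(0.4 - 0.6) + const, i.e. strongly
    # negative, which is the key ingredient in drift-based performance bounds.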

Speakers:

Siva Theja Maguluri

Siva Theja Maguluri is the Fouts Family Early Career Professor and an Assistant Professor in the School of Industrial and Systems Engineering at Georgia Tech. Earlier, he was a Research Staff Member in the Mathematical Sciences Department at the IBM T. J. Watson Research Center. He obtained his Ph.D. and M.S. in ECE as well as an M.S. in Applied Mathematics, all from UIUC, and a B.Tech. in Electrical Engineering from IIT Madras. His research interests span the areas of control, optimization, algorithms, and applied probability. In particular, he works on reinforcement learning theory, scheduling, resource allocation, and revenue optimization problems that arise in a variety of systems, including data centers, cloud computing, wireless networks, blockchains, and ride-hailing systems. He is a recipient of the biennial “Best Publication in Applied Probability” award in 2017, the second-place award at the INFORMS JFIG best paper competition in 2020, the “CTL/BP Junior Faculty Teaching Excellence Award” in 2020, and the “Student Recognition of Excellence in Teaching: Class of 1934 CIOS Award” in 2020.


Zaiwei Chen

Zaiwei Chen is a fifth-year Ph.D. student in Machine Learning at Georgia Tech. He received a B.S. degree in Electrical Engineering from Zhejiang University, China, and an M.S. degree in Operations Research and an M.S. degree in Mathematics, both from Georgia Tech. His research focuses on developing reinforcement learning algorithms with theoretical performance guarantees. Zaiwei was a recipient of the ARC-TRIAD Student Fellowship at Georgia Tech for Spring 2021.