ABSTRACTS OF TUTORIALS

Monday, June 22, 1:30-5:00, Track 1
Performance Evaluation 101: Basic Modeling Techniques
Larry Dowdy, Vanderbilt University

This tutorial is aimed at novice performance modelers. Very little background knowledge is assumed. The context and motivating examples will focus on performance prediction. Given a computer system with several options for possible improvement, a performance prediction model can be used to evaluate each option; the option with the best predicted performance, relative to its implementation cost, is judged to be the best alternative. Topics include: measurement, workload characterization, model construction, model solution, calibration, prediction, and validation. Each of these topics involves difficult problems that are often ignored, yet the practitioner cannot afford to ignore them. The presentation will be example-oriented and intuition-driven. Basic modeling techniques using Markovian analysis, convolution, mean value analysis, Petri net models, bounding techniques, decomposition, and approximate analysis will be covered. Applications to scheduling problems on large parallel multiprocessor systems and case studies will be given. Active participation by the audience will be required via self-assessment exercises.
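
For a taste of one technique on this list, here is a minimal sketch of exact mean value analysis (MVA) for a closed, single-class, product-form queueing network; the two service centers, their demands, and the population are illustrative values chosen for this example, not material from the tutorial.

```python
# Exact MVA for a closed, single-class, product-form queueing network.
# service_demands[k] = mean total service demand at center k (seconds);
# all values below are illustrative.

def mva(service_demands, population, think_time=0.0):
    K = len(service_demands)
    queue_len = [0.0] * K                # mean queue lengths at population n-1
    for n in range(1, population + 1):
        # Residence time at each center: demand inflated by customers already there.
        resid = [service_demands[k] * (1.0 + queue_len[k]) for k in range(K)]
        resp = sum(resid)                              # system response time
        tput = n / (resp + think_time)                 # throughput (Little's law)
        queue_len = [tput * resid[k] for k in range(K)]  # Little's law per center
    return resp, tput, queue_len

# Example: a CPU and one disk, 20 interactive users with 5 s think time.
resp, tput, q = mva([0.050, 0.080], population=20, think_time=5.0)
print(f"response time {resp:.3f} s, throughput {tput:.2f} jobs/s")
```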

Monday, June 22, 1:30-5:00, Track 2
Performance Characteristics of WWW Information Systems
Mark Crovella and Azer Bestavros, Boston University

This tutorial will provide an overview of techniques for performance characterization of the World Wide Web. It focuses on what makes the Web different from previous distributed information systems; in particular, the two differences we concentrate on are variability and scale. Briefly, performance evaluation of the Web must consider conditions of higher variability and larger scale than have been common in the past. We start with introductory background on the mathematics of high variability (alpha-stable distributions) and on the implications of large scale in the Web. We then cover evidence of high variability in client requests, aspects of request arrivals at servers, and the statistical properties of network traffic due to the Web. The implications of the measured effects of high variability for performance modeling of Web components (clients and servers) are then discussed. Next we describe the implications of large scale for the design of Web components. These motivate opportunities for object caching at both client and server, resource discovery, and prefetching, all of which represent potential performance improvements to the Web.
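
To see why high variability bites, the hedged sketch below samples Pareto-distributed object sizes, a common heavy-tailed model for Web files; the shape parameter 1.2 and the 1000-byte minimum are illustrative choices, not values from the tutorial. With such a distribution the sample mean converges very slowly, so short measurements of Web workloads can badly mislead.

```python
# Illustration of high variability: for a heavy-tailed (Pareto) distribution
# with shape alpha close to 1, the sample mean settles down very slowly.
# alpha=1.2 and x_min=1000 bytes are illustrative choices.
import random

def pareto_sample(alpha, x_min):
    u = 1.0 - random.random()            # u in (0, 1], avoids division by zero
    return x_min / u ** (1.0 / alpha)    # inverse-transform sampling

random.seed(0)
alpha, x_min = 1.2, 1000.0
true_mean = alpha * x_min / (alpha - 1.0)   # finite only because alpha > 1
running_sum = 0.0
for n in range(1, 1_000_001):
    running_sum += pareto_sample(alpha, x_min)
    if n in (100, 10_000, 1_000_000):
        print(f"n={n:>9}: sample mean {running_sum / n:,.0f}"
              f"  (true mean {true_mean:,.0f})")
```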

Monday, June 22, 1:30-5:00, Track 3
Kronecker Operators for the Description and Solution of Large Markov Models
Gianfranco Ciardo, College of William and Mary, and Susanna Donatelli, Università di Torino

Initial work on the use of Kronecker (or tensor) operators for Markov modeling assumed an overall model obtained as the parallel composition of Markov submodels, expressed as otherwise independent and ergodic stochastic automata: dependencies among automata were either synchronous (simultaneous jump) or functional (rate dependencies). Many additional results have appeared since then, extending the applicability and efficiency of the approach:
(1) extension of the approach to higher-level models such as hierarchical queueing networks and stochastic Petri nets; (2) additional interaction mechanisms; (3) alternative algorithms that exploit the sparsity of the matrices; and (4) ways to manage the case where the overall (actual) state space is much smaller than the (potential) cross-product of the state spaces of the individual submodels. This tutorial presents a thorough discussion of how Kronecker operators can be used to efficiently describe and solve very large Markov chains, from the initial results of B. Plateau to the more recent advances. The tutorial is self-contained. The only required background is a basic understanding of linear algebra, iterative solution of linear systems, and discrete- and continuous-time Markov chains. The target audience consists of researchers and practitioners interested in the numerical solution of large stochastic models for performance, reliability, or performability analysis.
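
As a minimal, self-contained illustration of the underlying idea (an editorial sketch with made-up rates, not code from the tutorial): for two independent continuous-time Markov chains, the generator of their parallel composition is the Kronecker sum of the component generators, so the overall matrix never has to be built by hand.

```python
# For independent CTMCs with generators Q1 and Q2, the parallel composition
# has generator  Q = kron(Q1, I2) + kron(I1, Q2)  (a Kronecker sum); in the
# actual methods, Q is kept in this "descriptor" form rather than expanded.
# The rates below are made up for illustration.
import numpy as np

Q1 = np.array([[-2.0,  2.0],
               [ 1.0, -1.0]])           # 2-state automaton
Q2 = np.array([[-3.0,  3.0,  0.0],
               [ 0.0, -4.0,  4.0],
               [ 5.0,  0.0, -5.0]])     # 3-state automaton

Q = np.kron(Q1, np.eye(3)) + np.kron(np.eye(2), Q2)   # 6x6 overall generator
assert np.allclose(Q.sum(axis=1), 0.0)  # generator rows must sum to zero
print(Q)
```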

Monday, June 22, 1:30-5:00, Track 4
Performance Modeling and Measurement of Distributed Object Systems
Pankaj Garg, HP Labs, and Jerome Rolia, Carleton University

Obtaining good performance from software systems has always been a challenge. With the introduction of multi-tiered distributed component systems, however, overcoming this challenge is much more complicated. The main reasons for this complexity are concurrent object interactions (both synchronous and asynchronous), highly variable network delays, and rapidly evolving workloads. In this tutorial, we describe several techniques to overcome the performance challenge of multi-tiered distributed object systems. The methods are derived from traditional systems performance engineering methodologies. We start at the requirements engineering stage to collect performance requirements early in the software life cycle. For all software life-cycle stages, we exploit appropriate performance evaluation techniques to assess design and configuration trade-offs and the scalability of the distributed object system. Our analysis approach identifies aspects of a design that are critical for high performance and enables scalability comparisons of design and deployment alternatives. Our measurement method uses a combination of existing performance monitoring tools and a new standards-based architecture, Application Response Measurement (ARM), for instrumenting distributed applications. Performance evaluation techniques include analysis based on queueing network models (QNM) and layered queueing models, simulation, and measurement.
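
As a flavor of the queueing-network style of analysis (a back-of-the-envelope editorial sketch, not the tutorial's method): each tier of a hypothetical three-tier system is approximated below as an M/M/1 server visited once per request; all tier names and service demands are invented for illustration.

```python
# Back-of-the-envelope QNM for a multi-tiered system: each tier is treated
# as an independent M/M/1 server visited once per request. Service demands
# (seconds per request) are hypothetical.

def open_qnm_response_time(arrival_rate, demands):
    total = 0.0
    for name, demand in demands.items():
        util = arrival_rate * demand
        if util >= 1.0:
            raise ValueError(f"tier '{name}' saturated (utilization {util:.2f})")
        total += demand / (1.0 - util)        # M/M/1 residence time
    return total

tiers = {"web": 0.005, "app": 0.020, "db": 0.030}   # hypothetical demands
for rate in (5, 15, 25):                            # requests per second
    ms = open_qnm_response_time(rate, tiers) * 1000
    print(f"{rate:>3} req/s -> {ms:.1f} ms end-to-end")
```

Even this crude model exposes the scalability question: the tier with the largest demand (here the database) saturates first and bounds throughput.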

Tuesday, June 23, 8:30-12:00, Track 1
Continuous Media Storage Servers
Dick Muntz, UCLA, Leana Golubchik, Univ. of Maryland, and John Lui, Chinese University of Hong Kong

This tutorial will present a taxonomy of the methods that have been developed in recent years for storing and delivering video. Topics will include: [1] workload characterization (e.g., relative frequency of access to videos, full-length film vs. news on demand, constant bit rate vs. variable bit rate compression, layered compression, etc.), [2] data layout and scheduling (striping granularity, cycle-based scheduling, etc.), [3] system configuration issues (single disk systems, parallel disk systems, heterogeneous parallel disk configurations, cluster-based systems), [4] batching of requests to reduce stream requirements, [5] fault tolerance issues, [6] operating system scheduling and resource management issues, [7] mixed workload systems (MPEG-1 video, MPEG-2 video, H.261 video, audio, 3D interactive, etc.). Industry and university prototypes will be discussed as examples. Performance models to evaluate design alternatives will be stressed.
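
To illustrate the kind of cycle-based reasoning involved (a simplified editorial sketch; the disk parameters are hypothetical, not from the tutorial): a single disk can admit a new stream only if, in each service round, one block per active stream can be fetched within the playback duration of a block.

```python
# Sketch of cycle-based admission control for a single-disk video server:
# each round the disk fetches one block per active stream, and the round
# must finish before any stream's current block runs out. All device
# parameters below are hypothetical.

def max_streams(block_bytes, playback_rate, seek_s, transfer_rate):
    round_budget = block_bytes / playback_rate         # playback time of a block
    per_stream = seek_s + block_bytes / transfer_rate  # worst-case fetch time
    return int(round_budget // per_stream)

# Hypothetical disk: 15 ms worst-case seek+rotation, 10 MB/s transfer;
# 256 KB blocks of 1.5 Mbit/s (MPEG-1) video.
n = max_streams(256 * 1024, 1.5e6 / 8, 0.015, 10e6)
print(f"supports about {n} concurrent MPEG-1 streams")
```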

Tuesday, June 23, 1:30-5:00, Track 1
Performance Evaluation of RAID5 Disk Arrays
Alex Thomasian, IBM Thomas J. Watson Research

The Redundant Arrays of Inexpensive Disks (RAID) paradigm of replacing large expensive disks with commodity disks is a multibillion dollar industry. We start this tutorial by reviewing different RAID technologies. We discuss techniques to reduce the parity update penalty and methods to tolerate multiple disk failures. Dedicated, distributed, and parity sparing approaches for rebuild are considered and various rebuild options are described. We analyze the performance of RAID5 using an M/G/1 queueing model. The response time for fork-join requests to reconstruct data blocks is obtained via a tight upper bound, and a vacationing server model is used to estimate rebuild mode performance. Caching reduces read access times and allows fast writes to non-volatile storage, such that the destaging of dirty blocks to disk is carried out periodically when the cache is full. We allow reads to have a preemptive or nonpreemptive priority with respect to destages. This allows repeated updates of dirty blocks and higher destage efficiency by grouping them. We discuss results of trace-driven simulation studies which consider various cache management options. The performance tradeoffs of various options in the design, configuration, and operation of RAID5 are discussed. The performance analysis can be used as a starting point for developing a capacity planning tool. Open problems will be introduced.
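
For orientation, the sketch below computes the Pollaczek-Khinchine mean response time for an M/G/1 queue, the building block of such an analysis; the RAID5 small-write penalty is modeled crudely here as four disk operations per logical write (read old data, read old parity, write data, write parity), and all parameter values are illustrative rather than taken from the tutorial.

```python
# Pollaczek-Khinchine mean response time for an M/G/1 queue, with the RAID5
# small-write penalty modeled crudely as 4 disk operations per logical write.
# All parameter values are illustrative.

def mg1_response_time(lam, mean_s, second_moment_s):
    rho = lam * mean_s
    assert rho < 1.0, "queue is unstable"
    wait = lam * second_moment_s / (2.0 * (1.0 - rho))  # P-K waiting time
    return wait + mean_s

mean_s, m2 = 0.010, 1.5e-4     # hypothetical disk service time moments
lam_logical = 20.0             # logical writes/s directed at one disk
lam_disk = 4 * lam_logical     # small-write penalty: 4 disk ops each
print(f"disk response time {mg1_response_time(lam_disk, mean_s, m2)*1000:.1f} ms")
```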

Tuesday, June 23, 8:30-12:00, Track 2
Application of Parallel Simulation to Large (Wireless) Networks
R. Bagrodia and M. Gerla, UCLA

A number of protocols and algorithms are being designed for large networks, including wireless, wireline, and integrated networks. As is well known, such protocols are complex to design, evaluate, and implement. Their performance depends on a combination of factors that include traffic patterns, mobility models, application objectives (e.g., maximize throughput, minimize noise/loss), processor characteristics (CPU speed, load), and radio characteristics (bandwidth, power). Evaluation of a protocol as a function of these diverse parameters is analytically intractable. Given the complexity of the radio environment, sequential simulation of networks with thousands of nodes requires several days, perhaps even weeks. To make the design process more interactive, it is imperative to reduce the turnaround time for the models. The goal of this tutorial is to describe a simulation environment for very large mobile wireless networks and present some representative case studies. The primary emphasis of the tutorial is on presenting the use of Maisie for parallel simulation of large network models. The primary sources of overhead in the parallel execution of these models will be discussed together with methods to reduce their impact. Common pitfalls encountered in the design of parallel simulation models will be discussed. We will also describe techniques to port simulation models to protocol implementations. Finally, a number of case studies will be presented to highlight the lessons that have been learned.
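
For context, the sketch below is a toy sequential discrete-event simulation loop, the baseline that parallel simulators such as Maisie aim to speed up: a single global event list serializes all event handling, which is what makes thousand-node models slow. The model itself is a placeholder, not a wireless protocol.

```python
# Minimal sequential discrete-event simulation loop. Every event in the
# whole network flows through one global priority queue, so there is no
# parallelism to exploit; the toy "node" model is illustrative only.
import heapq, random

random.seed(1)
events = [(random.expovariate(1.0), node) for node in range(1000)]
heapq.heapify(events)                    # (timestamp, node) pairs
now, handled = 0.0, 0
while events and now < 100.0:
    now, node = heapq.heappop(events)    # always the globally earliest event
    handled += 1
    # Handle a toy transmission: schedule this node's next packet.
    heapq.heappush(events, (now + random.expovariate(1.0), node))
print(f"handled {handled} events up to simulated time t={now:.1f}")
```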

Tuesday, June 23, 1:30-5:00, Track 2
QoS Management and Control of B-ISDN Networks
Herman G. de Meer, University of Hamburg

Current concepts of quality-of-service (QoS) architectures and their respective management systems are tailored to provide users with end-to-end service quality guarantees for advanced applications of the multimedia type. QoS management constitutes an important part of these concepts. It should provide the prerequisites for QoS specification, mapping, negotiation, renegotiation, monitoring, adaptation, and reconfiguration. Ultimately, QoS management functions rely on lower-level QoS mechanisms such as resource reservation schemes, congestion control techniques like admission control or policing, different techniques of error control, and priority-based scheduling techniques. Typically, these mechanisms operate at the network level of a distributed system and apply directly to network and computing resources. Due to the huge amount of multimedia data to be transported and processed in B-ISDN networks, efficiency is of similar importance to QoS guarantees. While QoS guarantees are based on more or less exclusive reservation schemes, efficiency is achieved by resource sharing strategies. Trading off these adverse effects in a cost-effective manner is a challenging task for QoS management. In this tutorial, the main properties of B-ISDN networks, based on technologies like ATM, and the requirements of new (multimedia) services and applications are sketched. The major emphasis, though, is on methods relevant to QoS-oriented management functions and control of networks and applications. Essentials of quantitative QoS modeling and control are introduced, the main issues concerning QoS provisioning are discussed, and an overview is given of the approaches that have been followed so far in the literature. Timeliness, performance, and dependability properties of the network and system components are equally vital issues for the applications. Quantitative traffic source modeling and prediction is another crucial issue to be solved for an effective QoS management scheme. Since QoS modeling is an active and still growing field of research, some open problems that remain to be solved are identified.
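
As a concrete example of one low-level mechanism named above, here is a minimal token-bucket policer (a generic editorial sketch, not a B-ISDN implementation; the rate and bucket depth are illustrative):

```python
# Token-bucket policer: conforming cells/packets consume tokens that refill
# at the contracted sustained rate; the bucket depth bounds the burst size.
# Nonconforming arrivals are dropped (they could equally be marked).

class TokenBucket:
    def __init__(self, rate, depth):
        self.rate, self.depth = rate, depth   # tokens/s, max tokens
        self.tokens, self.last = depth, 0.0

    def conforms(self, t, size=1.0):
        # Refill for the elapsed time, then try to spend `size` tokens.
        self.tokens = min(self.depth, self.tokens + (t - self.last) * self.rate)
        self.last = t
        if self.tokens >= size:
            self.tokens -= size
            return True
        return False

tb = TokenBucket(rate=100.0, depth=10.0)    # 100 cells/s sustained, burst of 10
arrivals = [i * 0.001 for i in range(50)]   # a 1000 cells/s burst of 50 cells
accepted = sum(tb.conforms(t) for t in arrivals)
print(f"accepted {accepted} of {len(arrivals)} cells")
```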

Tuesday, June 23, 8:30-10:00, Track 3
The Spectral Expansion Solution Method (CANCELLED)
Isi Mitrani, University of Newcastle 

There are many systems whose behavior is modeled by two-dimensional Markov processes on lattice strips. The system state is described by a pair of integer random variables, I and J; one of these has a finite range, and the other can take any non-negative value. Often these models are cast in the framework of a Markov-modulated queue. Then I represents the state of a Markovian environment, while J is the number of jobs in the queue. Under some rather general assumptions, such models can be solved exactly and efficiently by a method called Spectral Expansion. The joint equilibrium distribution of I and J is expressed in terms of the eigenvalues and left eigenvectors of a certain matrix polynomial. Various performance measures, such as percentiles, expectations, and higher moments, are then easily computed. The main objective of this tutorial is to present the spectral expansion method to a wider audience, and to discuss and explain its advantages and disadvantages. Illustrative examples of applications will include the modeling of breakdowns in multiprocessors and networks, finite buffers with blocking, and data replication with read and write accesses. Issues of computational complexity and numerical robustness will be addressed. A secondary objective is to compare the spectral expansion solution to other existing approaches (e.g., generating functions and the matrix-geometric solution) with respect to efficiency and ease of implementation. A good case can be made for spectral expansion on both counts.
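
For orientation, the general shape of the solution is sketched below (a standard statement of the result, reproduced in the notation of the abstract):

```latex
% Shape of the spectral expansion solution: for all j beyond a small
% boundary level, the joint equilibrium probabilities
%   p_{i,j} = P(I = i, J = j)
% are a linear combination of geometric terms, built from the eigenvalues
% \lambda_k with |\lambda_k| < 1 and left eigenvectors \psi_k of a
% characteristic matrix polynomial; the constants a_k follow from the
% boundary equations and normalization.
\[
  p_{i,j} \;=\; \sum_{k} a_k \, \psi_k(i) \, \lambda_k^{\,j},
  \qquad |\lambda_k| < 1 .
\]
```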

Tuesday, June 23, 10:30-12:00, Track 3
An Introduction to Change Point Detection
Joseph L. Hellerstein, IBM T. J. Watson Research Center and Fan Zhang, Columbia University

Change-point detection algorithms estimate the times at which there are changes in performance characteristics. Such knowledge is extremely valuable in practice. It provides an early warning of performance problems in production systems, and it can improve scheduling algorithms by identifying new regimes of system operation. This tutorial provides an introduction to off-line and on-line change-point detection. Off-line change-point detection identifies clusters of stationary observations based on statistical hypothesis testing. On-line change-point detection employs a collection of statistical techniques based on sequential probability tests. A good on-line algorithm raises an alarm soon after system parameters change while maintaining long intervals between false alarms. The tutorial covers the basics of statistical hypothesis testing, likelihood ratios, and average run lengths. The change-point algorithms considered include: Shewhart control charts, geometric moving averages, CUSUM algorithms, and generalized likelihood ratios. Throughout, the algorithms are applied to different kinds of data to provide qualitative insights into their effectiveness in practice.
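
As a small, self-contained example of the on-line case, here is a one-sided CUSUM detector for an upward mean shift; the in-control mean, reference value, and threshold below are illustrative choices, not tuned values from the tutorial.

```python
# One-sided CUSUM detector for an upward shift in the mean of a roughly
# Gaussian metric. mu0 is the in-control mean, k the reference value
# (about half the shift to detect), h the decision threshold.
import random

def cusum(samples, mu0, k, h):
    s = 0.0
    for t, x in enumerate(samples):
        s = max(0.0, s + (x - mu0 - k))   # accumulate evidence of an upward shift
        if s > h:
            return t                      # index at which the alarm is raised
    return None

random.seed(2)
data = [random.gauss(10.0, 1.0) for _ in range(200)]    # in control
data += [random.gauss(11.5, 1.0) for _ in range(200)]   # mean shifts at t=200
print("alarm at sample", cusum(data, mu0=10.0, k=0.75, h=5.0))
```

Raising h lengthens the average run to a false alarm but delays detection; that trade-off is exactly what the average-run-length analysis mentioned above quantifies.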

Tuesday, June 23, 1:30-5:00, Track 3
Trends in Stochastic Process Algebras
U. Herzog and Joost-Pieter Katoen, University of Erlangen-Nuernberg, Germany

Accurate performance-oriented design of parallel and distributed systems is possible only if the performance model captures the functional dependencies between correlated tasks. The need for combined performance and functional modeling techniques was already recognized in the seventies; the most successful examples are stochastic Petri nets, stochastic task graphs, and stochastic automata networks. Stochastic process algebras offer a basic design technique, called constructivity, that allows one to systematically specify (and analyze) complex systems from smaller ones: mechanisms are available for the composition of behaviors as well as for abstraction from internal behavior. In addition, process algebras offer an algebraic characterization of equivalent system behavior. These features are very attractive for the performance comparison of different system designs as well as for hierarchical modeling. After a summary of the basic principles and features, we present the most important (performance) results obtained so far with stochastic process algebras, both for exponential and more general distributions. We also discuss ongoing research directions and trends at length. The fundamental parts are accompanied by several examples.
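
To give a flavor of the notation (a generic Markovian process-algebra fragment in PEPA-like syntax, used purely for illustration and not drawn from the tutorial):

```latex
% The prefix (a, \lambda).P performs action a after an exponentially
% distributed delay with rate \lambda. Two independent components
% interleave, and the race between their first actions makes the holding
% time of the composed state exponential with the summed rate:
\[
  (a,\lambda).P \parallel (b,\mu).Q
  \;\xrightarrow{\,a,\;\lambda\,}\; P \parallel (b,\mu).Q ,
  \qquad
  \text{holding time} \sim \mathrm{Exp}(\lambda + \mu).
\]
```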

Tuesday, June 23, 8:30-12:00, Track 4
Techniques for Improving Protocol Performance
George Varghese, Washington University

Techniques for efficient protocol implementation, e.g., proper structuring techniques (such as upcalls) and methods of avoiding data-touching overhead, will be briefly reviewed. Then, we will concentrate on techniques for efficient implementation of common protocol implementation tasks. These tasks include checksums, buffer management, marshalling, striping, demultiplexing and packet filters, lookups, timers, sequence number bookkeeping, fair queueing, message digests, reassembly, and the latest results in best matching prefix lookup in routers. We will also survey generic techniques such as header prediction and caching based on flow ID. As link speeds move to Gigabit speeds and higher, a number of optimizations will be necessary above and beyond the initial dramatic optimizations gained by restructuring poor implementations. The tutorial covers new and old techniques such as PATHFINDER and other filter fusion methods, timing wheels, summary trees for sequence number bookkeeping, and McKenney's buffer stealing. Wherever possible, we will describe quantitative studies of the performance improvements these techniques yield. A feature of the tutorial is a set of 15 principles that we will use consistently throughout the class, and a set of real-life problems that the students will get a chance to solve using these principles. The tutorial should be of interest to all those who are interested in protocol performance (e.g., router and end-system implementors in academia and industry) and also to members of the SIGMETRICS community interested in learning about new problems in protocol performance.
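
As one concrete example from this toolbox, the sketch below implements a single-level timing wheel, which makes timer start and expiry processing O(1) amortized rather than the O(log n) of a heap; the slot count and the toy retransmission timer are illustrative, and real implementations add hierarchy and timer cancellation.

```python
# Single-level timing wheel: timers are hashed into slots by expiry tick;
# each entry remembers how many full wheel revolutions remain before firing.

class TimingWheel:
    def __init__(self, slots=256):
        self.slots = [[] for _ in range(slots)]
        self.now = 0                               # current tick

    def start(self, ticks_from_now, callback):
        slot = (self.now + ticks_from_now) % len(self.slots)
        rounds = ticks_from_now // len(self.slots)  # full revolutions remaining
        self.slots[slot].append([rounds, callback])

    def tick(self):
        self.now += 1
        slot = self.slots[self.now % len(self.slots)]
        due = [e for e in slot if e[0] == 0]        # fire entries on round 0
        slot[:] = [e for e in slot if e[0] > 0]
        for e in slot:
            e[0] -= 1                               # one revolution closer
        for _, cb in due:
            cb()

w = TimingWheel()
w.start(3, lambda: print("retransmit timer fired"))
for _ in range(4):
    w.tick()
```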

Tuesday, June 23, 1:30-5:00, Track 4
Large Deviation Theory and the Estimation of Bandwidth Requirements
John T. Lewis, Raymond Russell, and Fergal Toomey, Dublin Institute for Advanced Studies

The aim of this tutorial is to supply the intuition needed by teletraffic engineers and computer scientists so that they can understand recent developments in the area of bandwidth requirements in high-speed networks and operating systems. It will cover the following topics: (1) Background: the phenomenology of queuing systems - asymptotics of queue-length distributions; the Hurst phenomenon; long-range dependence and level shifts. (2) Review of the elements of Large Deviation theory; the intuition to be gained from considering a simple coin-tossing experiment; when we can expect random variables to display Large Deviation behavior and how to calculate their rate-functions when they do; Cramér's theorem and its extensions. (3) Transformations of Large Deviation behavior; the Contraction principle and the Time-Change formula. (4) Asymptotics for the queue-length distribution; large-buffer asymptotics and their extension to long-range dependent traffic arrivals (Duffield-O'Connell asymptotics); large-time asymptotics for the queue-length distribution; asymptotics in the number of sources (Botvich-Duffield asymptotics) and the shape-function. (5) Introduction to max-plus algebra; queuing systems as discrete-event systems; the min-max-plus description of queuing systems; analysis of general monotone discrete-event systems using Large Deviations; applications to queues with finite buffers; large-time asymptotics for such systems. (6) The bandwidth requirement of a traffic source in a network; the CPU requirement of a multimedia application on a workstation; using Large Deviation theory to approximate resource requirements. (7) Estimating resource requirements from empirical observations; applications to measurement-based network management; performance of estimation schemes in the presence of long-range dependence and non-stationarity.
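
Two standard formulas at the heart of this material, reproduced here for orientation (the notation is generic and not necessarily the presenters'):

```latex
% Cramer's theorem (i.i.d. case): with scaled cumulant generating function
%   \lambda(\theta) = \log \mathbb{E}[e^{\theta X_1}],
% sample means satisfy a large deviation principle with rate function I.
% For a stationary source feeding a queue served at constant rate c, the
% large-buffer asymptotics give an exponential tail for the queue length Q,
% which is the basis of effective-bandwidth estimates of the kind covered
% in topics (4), (6), and (7).
\[
  I(x) = \sup_{\theta} \bigl\{ \theta x - \lambda(\theta) \bigr\},
  \qquad
  P(Q > b) \approx e^{-\delta b},
  \quad
  \delta = \sup \{ \theta > 0 : \lambda(\theta) \le \theta c \}.
\]
```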

Workshop on Internet Server Performance
Tuesday, June 23, 8:30-5:00, Track 5
Organizers: Pei Cao, Univ. of Wisconsin, and Sekhar Sarukkai, HP Labs

Today, Web servers are not simple HTTP servers. A typical end-user access to an Internet site may touch web servers, database servers, transaction servers, multimedia servers, advertisement servers, and proxy servers. Needless to say, this complicates analyzing the performance, and improving the quality of service (QoS), perceived by the end user. This full-day workshop addresses many of these issues in the form of invited presentations and more than a dozen selected papers that touch upon QoS, operating system issues, load balancing, web caching policies, electronic commerce, Java's impact on web-enabled database access, and performance monitoring, among other topics. The workshop should stimulate lively discussions and shed some light on state-of-the-art research in one of the most rapidly evolving arenas today.