1 Introduction
In the present paper we review scalable load balancing algorithms (LBAs) which achieve excellent delay performance in large-scale systems and yet only involve low implementation overhead. LBAs play a critical role in distributing service requests or tasks (e.g. compute jobs, database lookups, file transfers) among servers or distributed resources in parallel-processing systems. The analysis and design of LBAs has attracted strong attention in recent years, mainly spurred by crucial scalability challenges arising in cloud networks and data centers with massive numbers of servers.
LBAs can be broadly categorized as static, dynamic, or some intermediate blend, depending on the amount of feedback or state information (e.g. congestion levels) that is used in allocating tasks. The use of state information naturally allows dynamic policies to achieve better delay performance, but also involves higher implementation complexity and a substantial communication burden. The latter issue is particularly pertinent in cloud networks and data centers with immense numbers of servers handling a huge influx of service requests. In order to capture the large-scale context, we examine scalability properties through the prism of asymptotic scalings where the system size grows large, and identify LBAs which strike an optimal balance between delay performance and implementation overhead in that regime.
The most basic load balancing scenario consists of N identical parallel servers and a dispatcher where tasks arrive that must immediately be forwarded to one of the servers. Tasks are assumed to have unit-mean exponentially distributed service requirements, and the service discipline at each server is supposed to be oblivious to the actual service requirements. In this canonical setup, the celebrated Join-the-Shortest-Queue (JSQ) policy has several strong stochastic optimality properties. In particular, the JSQ policy achieves the minimum mean overall delay among all non-anticipating policies that do not have any advance knowledge of the service requirements
[9, 45]. In order to implement the JSQ policy, however, a dispatcher requires instantaneous knowledge of all the queue lengths, which may involve a prohibitive communication burden with a large number of servers N. This poor scalability has motivated consideration of JSQ(d) policies, where an incoming task is assigned to a server with the shortest queue among d servers selected uniformly at random. Note that this involves an exchange of 2d messages per task, irrespective of the number of servers N. Results in Mitzenmacher [26] and Vvedenskaya et al. [43] indicate that even sampling as few as d = 2 servers yields significant performance enhancements over purely random assignment (d = 1) as N grows large, which is commonly referred to as the “power-of-two” or “power-of-choice” effect. Specifically, when tasks arrive at rate λN with λ < 1, the queue length distribution at each individual server exhibits superexponential decay for any fixed d ≥ 2 as N grows large, compared to exponential decay for purely random assignment.
As illustrated by the above, the diversity parameter d induces a fundamental trade-off between the amount of communication overhead and the delay performance. Specifically, a random assignment policy (d = 1) does not entail any communication burden, but the mean waiting time remains constant as N grows large for any fixed λ > 0. In contrast, a nominal implementation of the JSQ policy (without maintaining state information at the dispatcher) involves 2N messages per task, but the mean waiting time vanishes as N grows large for any fixed λ < 1. Although JSQ(d) policies with d ≥ 2 yield major performance improvements over purely random assignment while reducing the communication burden by a factor O(N/d) compared to the JSQ policy, the mean waiting time does not vanish in the limit. Thus, no fixed value of d will provide asymptotically optimal delay performance. This is evidenced by results of Gamarnik et al. [14] indicating that in the absence of any memory at the dispatcher the communication overhead per task must increase with N in order for any scheme to achieve a zero mean waiting time in the limit.
We will explore the intrinsic trade-off between delay performance and communication overhead as governed by the diversity parameter d, in conjunction with the relative load. The latter trade-off is examined in an asymptotic regime where not only the overall task arrival rate is assumed to grow with N, but also the diversity parameter is allowed to depend on N. We write λ(N) and d(N), respectively, to explicitly reflect that, and investigate what growth rate of d(N) is required, depending on the scaling behavior of λ(N), in order to achieve a zero mean waiting time in the limit. We establish that the fluid-scale and diffusion-scale limiting processes are insensitive to the exact growth rate of d(N), as long as the latter is sufficiently fast, and in particular coincide with the limiting processes for the JSQ policy. This reflects a remarkable universality property and demonstrates that the optimality of the JSQ policy can asymptotically be preserved while dramatically lowering the communication overhead.
We will extend the above-mentioned universality properties to network scenarios where the N servers are assumed to be interconnected by some underlying graph topology G_N. Tasks arrive at the various servers as independent Poisson processes of rate λ, and each incoming task is assigned to whichever server has the shortest queue among the one where it appears and its neighbors in G_N. In case G_N is a clique, each incoming task is assigned to the server with the shortest queue across the entire system, and the behavior is equivalent to that under the JSQ policy. The above-mentioned stochastic optimality properties of the JSQ policy thus imply that the queue length process in a clique will be ‘better’ than in an arbitrary graph G_N. We will establish sufficient conditions for the fluid-scaled and diffusion-scaled versions of the queue length process in an arbitrary graph to be equivalent to the limiting processes in a clique as N → ∞. The conditions reflect similar universality properties as described above, and in particular demonstrate that the optimality of a clique can asymptotically be preserved while markedly reducing the number of connections, provided the graph G_N is suitably random.
While a zero waiting time can be achieved in the limit by sampling only d(N) = o(N) servers, the amount of communication overhead in terms of d(N) must still grow with N. This may be explained from the fact that a large number of servers need to be sampled for each incoming task to ensure that at least one of them is found idle with high probability. As alluded to above, this can be avoided by introducing memory at the dispatcher, in particular maintaining a record of vacant servers, and assigning tasks to idle servers, if there are any. This so-called Join-the-Idle-Queue (JIQ) scheme
[5, 22] has gained huge popularity recently, and can be implemented through a simple token-based mechanism generating at most one message per task. As established by Stolyar [37], the fluid-scaled queue length process under the JIQ scheme is equivalent to that under the JSQ policy as N → ∞, and this result can be shown to extend to the diffusion-scaled queue length process. Thus, the use of memory allows the JIQ scheme to achieve asymptotically optimal delay performance with minimal communication overhead. In particular, ensuring that tasks are assigned to idle servers whenever available is sufficient to achieve asymptotic optimality, and using any additional queue length information yields no meaningful performance benefits on the fluid or diffusion levels.

Stochastic coupling techniques play an instrumental role in the proofs of the above-described universality and asymptotic optimality properties. A direct analysis of the queue length processes under a JSQ(d(N)) policy, in a load balancing graph G_N, or under the JIQ scheme is confronted with insurmountable obstacles. As an alternative route, we leverage novel stochastic coupling constructions to relate the relevant queue length processes to the corresponding processes under a JSQ policy, and show that the deviation between these two is asymptotically negligible under mild assumptions on d(N) or G_N.
While the stochastic coupling schemes provide a remarkably effective and overarching approach, they defy a systematic recipe and involve some degree of ingenuity and customization. Indeed, the specific coupling arguments that we develop are not only different from those that were originally used in establishing the stochastic optimality properties of the JSQ policy, but also differ in critical ways between a JSQ(d(N)) policy, a load balancing graph G_N, and the JIQ scheme. Yet different coupling constructions are devised for model variants with infinite-server dynamics that we will discuss in Section 4.
The remainder of the paper is organized as follows. In Section 2 we discuss a wide spectrum of LBAs and evaluate their scalability properties. In Section 3 we introduce some useful preliminaries, review fluid and diffusion limits for the JSQ policy as well as JSQ(d) policies with a fixed value of d, and explore the trade-off between delay performance and communication overhead as a function of the diversity parameter d. In particular, we establish asymptotic universality properties for JSQ(d(N)) policies, which are extended to systems with server pools and network scenarios in Sections 4 and 5, respectively. In Section 6 we establish asymptotic optimality properties for the JIQ scheme. We discuss somewhat related redundancy policies and alternative scaling regimes and performance metrics in Section 7.
2 Scalability spectrum
In this section we review a wide spectrum of LBAs and examine their scalability properties in terms of the delay performance vis-à-vis the associated implementation overhead in large-scale systems.
2.1 Basic model
Throughout this section and most of the paper, we focus on a basic scenario with N parallel single-server infinite-buffer queues and a single dispatcher where tasks arrive as a Poisson process of rate λ(N), as depicted in Figure 2. Arriving tasks cannot be queued at the dispatcher, and must immediately be forwarded to one of the servers. This canonical setup is commonly dubbed the supermarket model. Tasks are assumed to have unit-mean exponentially distributed service requirements, and the service discipline at each server is supposed to be oblivious to the actual service requirements.
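This basic model is straightforward to experiment with numerically. The sketch below (all function and policy names are ours, not from the paper) runs a small event-driven simulation of the supermarket model with a pluggable dispatching rule, and estimates the probability that a task incurs a nonzero waiting time; the same harness covers the policies discussed in the following subsections.

```python
import heapq
import random

def simulate(num_servers, lam, horizon, choose, seed=0):
    """Event-driven simulation of the supermarket model: Poisson arrivals
    of rate lam, unit-mean exponential services, and a dispatching rule
    `choose(queues, rng)` applied at each arrival. Returns the fraction
    of tasks whose assigned server is busy on arrival, i.e. an estimate
    of the probability of a nonzero waiting time."""
    rng = random.Random(seed)
    queues = [0] * num_servers                 # tasks present at each server
    events = [(rng.expovariate(lam), 0, "arr", -1)]
    seq, waited, total = 1, 0, 0
    while events:
        t, _, kind, s = heapq.heappop(events)
        if t > horizon:
            break
        if kind == "arr":
            i = choose(queues, rng)            # dispatching decision
            total += 1
            waited += queues[i] > 0            # nonzero wait iff server busy
            queues[i] += 1
            if queues[i] == 1:                 # server was idle: begin service
                heapq.heappush(events, (t + rng.expovariate(1.0), seq, "dep", i))
                seq += 1
            heapq.heappush(events, (t + rng.expovariate(lam), seq, "arr", -1))
            seq += 1
        else:
            queues[s] -= 1
            if queues[s] > 0:                  # next task at this server starts
                heapq.heappush(events, (t + rng.expovariate(1.0), seq, "dep", s))
                seq += 1
    return waited / max(total, 1)

def random_choice(queues, rng):
    return rng.randrange(len(queues))

def jsq(queues, rng):
    return min(range(len(queues)), key=lambda i: queues[i])

def jsq_d(d):
    def choose(queues, rng):
        sample = rng.sample(range(len(queues)), d)
        return min(sample, key=lambda i: queues[i])
    return choose
```

With, say, N = 50 and λ(N) = 0.9N, the estimated nonzero-wait probabilities under JSQ, JSQ(2) and random assignment order exactly as the analysis in the following subsections predicts.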
2.2 Asymptotic scaling regimes
An exact analysis of the delay performance is quite involved, if not intractable, for all but the simplest LBAs. Numerical evaluation or simulation is not straightforward either, especially for high load levels and large system sizes. A common approach is therefore to consider various limit regimes, which not only provide mathematical tractability and illuminate the fundamental behavior, but are also natural in view of the typical conditions in which cloud networks and data centers operate. One can distinguish several asymptotic scalings that have been used for these purposes: (i) In the classical heavy-traffic regime, the number of servers N is fixed and the relative load λ(N)/N tends to one in the limit. (ii) In the conventional large-capacity or many-server regime, the relative load λ(N)/N approaches a constant λ < 1 as the number of servers N grows large. (iii) The popular Halfin-Whitt regime [17] combines heavy traffic with a large capacity, with
(N − λ(N)) / √N → β > 0 as N → ∞,   (2.1)
so the relative capacity slack behaves as β/√N as the number of servers N grows large. (iv) The so-called nondegenerate slowdown regime [2] involves N − λ(N) → γ > 0, so the relative capacity slack shrinks as γ/N as the number of servers N grows large.
The term nondegenerate slowdown refers to the fact that in the context of a centralized multi-server queue, the mean waiting time in regime (iv) tends to a strictly positive constant as N → ∞, and is thus of similar magnitude as the mean service requirement. In contrast, in regimes (ii) and (iii), the mean waiting time decays exponentially fast in N or is of the order 1/√N, respectively, as N → ∞, while in regime (i) the mean waiting time grows arbitrarily large relative to the mean service requirement.
In the present paper we will focus on scalings (ii) and (iii), and occasionally also refer to these as fluid and diffusion scalings, since it is natural to analyze the relevant queue length process on fluid scale (1/N) and diffusion scale (1/√N) in these regimes, respectively. We will not provide a detailed account of scalings (i) and (iv), which do not capture the large-scale perspective and do not allow for low delays, respectively, but we will briefly revisit these regimes in Section 7.
2.3 Random assignment: N independent M/M/1 queues
One of the most basic LBAs is to assign each arriving task to a server selected uniformly at random. In that case, the various queues collectively behave as N independent M/M/1 queues, each with arrival rate λ(N)/N and unit service rate. In particular, at each of the queues, the total number of tasks in stationarity has a geometric distribution with parameter λ(N)/N. By virtue of the PASTA property, the probability that an arriving task incurs a nonzero waiting time is λ(N)/N. The mean number of waiting tasks (excluding the possible task in service) at each of the queues is (λ(N)/N)²/(1 − λ(N)/N), so the total mean number of waiting tasks is λ(N)²/(N − λ(N)), which by Little’s law implies that the mean waiting time of a task is λ(N)/(N − λ(N)). In particular, when λ(N)/N = λ, the probability that a task incurs a nonzero waiting time is λ, and the mean waiting time of a task is λ/(1 − λ), independent of N, reflecting the independence of the various queues.

A slightly better LBA is to assign tasks to the servers in a Round-Robin manner, dispatching every Nth task to the same server. In the large-capacity regime where λ(N)/N → λ, the interarrival time of tasks at each given queue will then converge to a constant 1/λ as N → ∞. Thus each of the queues will behave as a D/M/1 queue in the limit, and the probability of a nonzero waiting time and the mean waiting time will be somewhat lower than under purely random assignment. However, both the probability of a nonzero waiting time and the mean waiting time will still tend to strictly positive values and not vanish as N → ∞.
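The closed-form M/M/1 quantities above are easy to collect in a small helper (the function name is ours):

```python
def random_assignment_stats(lam):
    """Per-queue statistics under purely random assignment, where each
    queue behaves as an M/M/1 queue with load lam < 1 (arrival rate lam,
    unit service rate)."""
    assert 0 < lam < 1
    p_wait = lam                        # PASTA: arriving task finds server busy
    mean_waiting = lam ** 2 / (1 - lam)  # mean number waiting (excl. in service)
    mean_wait_time = lam / (1 - lam)     # via Little's law, per task
    return p_wait, mean_waiting, mean_wait_time
```

For instance, at load λ = 0.5 a task waits with probability 0.5 and the mean waiting time is exactly one mean service requirement, independent of N.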
2.4 Join-the-Shortest-Queue (JSQ)
Under the Join-the-Shortest-Queue (JSQ) policy, each arriving task is assigned to the server with the currently shortest queue (ties are broken arbitrarily). In the basic model described above, the JSQ policy has several strong stochastic optimality properties, and yields the ‘most balanced and smallest’ queue process among all non-anticipating policies that do not have any advance knowledge of the service requirements [9, 45]. Specifically, the JSQ policy minimizes the joint queue length vector in a stochastic majorization sense, and in particular stochastically minimizes the total number of tasks in the system, and hence the mean overall delay. In order to implement the JSQ policy, however, a dispatcher requires instantaneous knowledge of the queue lengths at all the servers. A nominal implementation would involve an exchange of 2N messages per task, and thus yield a prohibitive communication burden in large-scale systems.

2.5 Join-the-Smallest-Workload (JSW): centralized M/M/N queue
Under the Join-the-Smallest-Workload (JSW) policy, each arriving task is assigned to the server with the currently smallest workload. Note that this is an anticipating policy, since it requires advance knowledge of the service requirements of all the tasks in the system. Further observe that this policy (myopically) minimizes the waiting time for each incoming task, and mimics the operation of a centralized N-server queue with a FCFS discipline. The equivalence with a centralized N-server queue yields a strong optimality property of the JSW policy: the vector of joint workloads at the various servers observed by each incoming task is smaller in the Schur convex sense than under any alternative admissible policy [13].
The equivalence with a centralized FCFS queue means that there cannot be any idle servers while tasks are waiting. In our setting with Poisson arrivals and exponential service requirements, it can therefore be shown that the total number of tasks under the JSW policy is stochastically smaller than under the JSQ policy. At the same time, it means that the total number of tasks under the JSW policy behaves as a birth-death process, which renders it far more tractable than the JSQ policy. Specifically, given that all the servers are busy, the total number of waiting tasks is geometrically distributed with parameter λ(N)/N. Thus the total mean number of waiting tasks is Π_W λ(N)/(N − λ(N)), and the mean waiting time is Π_W/(N − λ(N)), with Π_W denoting the probability of all servers being occupied and a task incurring a nonzero waiting time. This immediately shows that the mean waiting time is smaller by at least a factor λ(N) than for the random assignment policy considered in Subsection 2.3.
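The probability Π_W of all servers being occupied is the classical Erlang C probability of the equivalent M/M/N queue. A minimal sketch (function names ours) computes it via the standard numerically stable Erlang B recursion:

```python
def erlang_c(n, lam):
    """Erlang C: probability that all n unit-rate servers are busy in an
    M/M/n queue with total arrival rate lam < n. Uses the stable
    recursion 1/B(k) = 1 + (k/a) * 1/B(k-1) for the Erlang B formula."""
    a = lam                         # offered load
    inv_b = 1.0                     # 1/B(0) = 1
    for k in range(1, n + 1):
        inv_b = 1.0 + inv_b * k / a
    b = 1.0 / inv_b                 # Erlang B blocking probability
    rho = a / n
    return b / (1.0 - rho + rho * b)

def jsw_mean_wait(n, lam):
    """Mean waiting time under JSW, i.e. in the equivalent M/M/n queue:
    Pi_W / (n - lam)."""
    return erlang_c(n, lam) / (n - lam)
```

As a sanity check, for n = 1 the Erlang C probability reduces to the load, recovering the M/M/1 values of Subsection 2.3.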
In the large-capacity regime λ(N)/N → λ < 1, it can be shown that the probability Π_W of a nonzero waiting time decays exponentially fast in N, and hence so does the mean waiting time. In the Halfin-Whitt heavy-traffic regime (2.1), the probability Π_W of a nonzero waiting time converges to a finite constant Π_W* ∈ (0, 1), implying that the mean waiting time of a task is of the order 1/√N, and thus vanishes as N → ∞.
2.6 Power-of-d load balancing (JSQ(d))
As mentioned above, the Achilles’ heel of the JSQ policy is its excessive communication overhead in large-scale systems. This poor scalability has motivated consideration of so-called JSQ(d) policies, where an incoming task is assigned to a server with the shortest queue among d servers selected uniformly at random. Results in Mitzenmacher [26] and Vvedenskaya et al. [43] indicate that even sampling as few as d = 2 servers yields significant performance enhancements over purely random assignment (d = 1) as N → ∞. Specifically, in the fluid regime where λ(N) = λN, the probability that there are i or more tasks at a given queue is proportional to λ^((d^i − 1)/(d − 1)) as N → ∞, and thus exhibits superexponential decay as opposed to exponential decay for the random assignment policy considered in Subsection 2.3.
As illustrated by the above, the diversity parameter d induces a fundamental trade-off between the amount of communication overhead and the performance in terms of queue lengths and delays. A rudimentary implementation of the JSQ policy (d = N, without replacement) involves a communication overhead of 2N messages per task, but it can be shown that the probability of a nonzero waiting time and the mean waiting time vanish as N → ∞, just like in a centralized M/M/N queue. Although JSQ(d) policies with a fixed parameter d ≥ 2 yield major performance improvements over purely random assignment while reducing the communication burden by a factor O(N/d) compared to the JSQ policy, the probability of a nonzero waiting time and the mean waiting time do not vanish as N → ∞.
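The overhead comparison can be made concrete with the nominal per-task message counts used in this section (two messages, a query and a reply, per probed server; the helper name is ours):

```python
def messages_per_task(policy, n, d=None):
    """Nominal per-task communication overhead of the LBAs discussed here:
    sampling-based policies exchange two messages per probed server, JIQ
    uses at most one token message per task."""
    if policy == "random":
        return 0
    if policy == "jsq(d)":
        return 2 * d
    if policy == "jsq":
        return 2 * n          # probe all n servers
    if policy == "jiq":
        return 1
    raise ValueError(policy)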
In Subsection 3.5 we will explore the intrinsic trade-off between delay performance and communication overhead as a function of the diversity parameter d, in conjunction with the relative load. We will examine an asymptotic regime where not only the total task arrival rate λ(N) is assumed to grow with N, but also the diversity parameter d(N) is allowed to depend on N. As will be demonstrated, the optimality of the JSQ policy (d(N) = N) can be preserved, and in particular a vanishing waiting time can be achieved in the limit as N → ∞, even when d(N) = o(N), thus dramatically lowering the communication overhead.
2.7 Token-based strategies: Join-the-Idle-Queue (JIQ)
While a zero waiting time can be achieved in the limit by sampling only d(N) = o(N) servers, the amount of communication overhead in terms of d(N) must still grow with N. This can be countered by introducing memory at the dispatcher, in particular maintaining a record of vacant servers, and assigning tasks to idle servers as long as there are any, or to a uniformly at random selected server otherwise. This so-called Join-the-Idle-Queue (JIQ) scheme [5, 22] has received keen interest recently, and can be implemented through a simple token-based mechanism. Specifically, idle servers send tokens to the dispatcher to advertise their availability, and when a task arrives and the dispatcher has tokens available, it assigns the task to one of the corresponding servers (and disposes of the token). Note that a server only issues a token when a task completion leaves its queue empty, thus generating at most one message per task. Surprisingly, the mean waiting time and the probability of a nonzero waiting time vanish under the JIQ scheme in both the fluid and diffusion regimes, as we will further discuss in Section 6. Thus, the use of memory allows the JIQ scheme to achieve asymptotically optimal delay performance with minimal communication overhead.
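The token mechanism just described can be sketched in a few lines (the class and method names are ours, not from the paper):

```python
import random

class JIQDispatcher:
    """Sketch of the token-based JIQ mechanism: idle servers deposit a
    token; an arriving task consumes a token if one is available, and is
    otherwise sent to a uniformly random server."""

    def __init__(self, num_servers, seed=0):
        self.n = num_servers
        self.idle_tokens = set(range(num_servers))  # all servers start idle
        self.rng = random.Random(seed)

    def report_idle(self, server):
        """Called by a server whose task completion leaves its queue empty;
        this is the (at most) one message generated per task."""
        self.idle_tokens.add(server)

    def assign(self):
        """Dispatch one arriving task; returns the chosen server index."""
        if self.idle_tokens:
            return self.idle_tokens.pop()           # consume one token
        return self.rng.randrange(self.n)           # no tokens: pick at random
```

Note that the dispatcher never queries queue lengths; the only state it holds is the token set.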
2.8 Performance comparison
We now present some simulation experiments that we have conducted to compare the above-described LBAs in terms of delay performance. Specifically, we evaluate the mean waiting time and the probability of a nonzero waiting time in both a fluid regime and a diffusion regime. The results are shown in Figure 1. We are especially interested in distinguishing two classes of LBAs – ones delivering a mean waiting time and probability of a nonzero waiting time that vanish asymptotically, and ones that fail to do so – and relating that dichotomy to the associated overhead.
JSQ, JIQ, and JSW.
JSQ, JIQ and JSW evidently have a vanishing waiting time in both the fluid and the diffusion regime, as discussed in Subsections 2.4, 2.5 and 2.7. The optimality of JSW as mentioned in Subsection 2.5 can also be clearly observed.
However, there is a significant difference between JSW and JSQ/JIQ in the diffusion regime. We observe that the probability of a nonzero waiting time approaches a positive constant for JSW, while it vanishes for JSQ/JIQ. In other words, the mean of all positive waiting times is of a larger order of magnitude in JSQ/JIQ compared to JSW. Intuitively, this is clear since in JSQ/JIQ, when a task is placed in a queue, it waits for at least a residual service time. In JSW, which is equivalent to the M/M/N queue, a task that cannot start service immediately joins a queue that is collectively drained by all the N servers.
Random and Round-Robin.
The mean waiting time does not vanish for Random and Round-Robin in the fluid regime, as already mentioned in Subsection 2.3. Moreover, the mean waiting time grows without bound in the diffusion regime for these two schemes. This is because the system can still be decomposed, and the loads of the individual M/M/1 and D/M/1 queues tend to 1.
JSQ(d) policies.
Three versions of JSQ(d) are included in the figures: one with a fixed value of d, and two for which the diversity parameter d(N) grows with N. Note that the graphs for the growing variants show sudden jumps when the integer-valued d(N) increases by 1. The variants for which d(N) → ∞ have a vanishing waiting time in the fluid regime, while the fixed-d variant does not. The latter observation is a manifestation of the results of Gamarnik et al. [14] mentioned in the introduction, since JSQ(d) with fixed d uses no memory and the overhead per task does not increase with N. Furthermore, it follows that JSQ(d) policies outperform Random and Round-Robin, while JSQ/JIQ/JSW are better in terms of mean waiting time.
In order to succinctly capture the results and observed dichotomy in Figure 1, we provide an overview of the delay performance of the various LBAs and the associated overhead in Table 1, where q_i denotes the stationary fraction of servers with i or more tasks.
Scheme | Queue length | Waiting time (fixed λ < 1) | Waiting time (1 − λ ∼ 1/√N) | Overhead per task
Random | q_i = λ^i | λ/(1 − λ) | Θ(√N) | 0
JSQ(d) | q_i = λ^((d^i − 1)/(d − 1)) | Θ(1) | Ω(log N) | 2d
JSQ(d(N)), d(N) → ∞ | same as JSQ | same as JSQ | ?? | 2d(N)
JSQ(d(N)), d(N)/(√N log N) → ∞ | same as JSQ | same as JSQ | same as JSQ | 2d(N)
JSQ | q_1 = λ, q_2 = o(1) | o(1) | Θ(1/√N) | 2N
JIQ | same as JSQ | same as JSQ | same as JSQ | ≤ 1
3 JSQ(d) policies and universality properties
In this section we first introduce some useful preliminary concepts, then review fluid and diffusion limits for the JSQ policy as well as JSQ(d) policies with a fixed value of d, and finally discuss universality properties when the diversity parameter d(N) is being scaled with N.
As described in the previous section, we focus on a basic scenario where all the servers are homogeneous, the service requirements are exponentially distributed, and the service discipline at each server is oblivious of the actual service requirements. In order to obtain a Markovian state description, it therefore suffices to only track the number of tasks, and in fact we do not need to keep record of the number of tasks at each individual server, but only count the number of servers with a given number of tasks. Specifically, we represent the state of the system by a vector Q(t) = (Q_1(t), Q_2(t), ...), with Q_i(t) denoting the number of servers with i or more tasks at time t, including the possible task in service, i = 1, 2, .... Note that if we represent the queues at the various servers as (vertical) stacks, and arrange these from left to right in non-descending order, then the value of Q_i corresponds to the width of the ith (horizontal) row, as depicted in the schematic diagram in Figure 3.
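The occupancy representation is easily computed from the per-server queue lengths; a minimal helper (name ours) makes the mapping explicit:

```python
def occupancy(queue_lengths):
    """Map per-server queue lengths to the occupancy vector (Q_1, Q_2, ...),
    where Q_i is the number of servers with i or more tasks."""
    if not queue_lengths:
        return []
    top = max(queue_lengths)
    return [sum(1 for q in queue_lengths if q >= i) for i in range(1, top + 1)]
```

For example, four servers with queue lengths (0, 1, 1, 3) yield (Q_1, Q_2, Q_3) = (3, 1, 1); by construction the vector is always non-increasing, matching the stack picture of Figure 3.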
In order to examine the asymptotic behavior when the number of servers N grows large, we consider a sequence of systems indexed by N, and attach a superscript N to the associated state variables.
The fluid-scaled occupancy state is denoted by q^N(t) = (q_1^N(t), q_2^N(t), ...), with q_i^N(t) = Q_i^N(t)/N representing the fraction of servers in the Nth system with i or more tasks at time t, i = 1, 2, .... Let S = {q ∈ [0, 1]^∞ : q_i ≤ q_{i−1} for all i = 2, 3, ...} be the set of all possible fluid-scaled states. Whenever we consider fluid limits, we assume the sequence of initial states is such that q^N(0) → q^∞ ∈ S as N → ∞.
The diffusion-scaled occupancy state is defined as Q̄^N(t) = (Q̄_1^N(t), Q̄_2^N(t), ...), with

Q̄_1^N(t) = −(N − Q_1^N(t))/√N,   Q̄_i^N(t) = Q_i^N(t)/√N,  i = 2, 3, ....   (3.1)

Note that −Q̄_1^N(t) corresponds to the number of vacant servers, normalized by √N. The reason why Q_1^N(t) is centered around N while Q_i^N(t), i = 2, 3, ..., are not, is because for the scalable LBAs that we pursue, the fraction of servers with exactly one task tends to one, whereas the fraction of servers with two or more tasks tends to zero as N → ∞.
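As a quick illustration (the helper name is ours), the scaling (3.1) applied to a raw occupancy vector reads:

```python
from math import sqrt

def diffusion_scale(Q, n):
    """Apply the diffusion scaling (3.1): the first coordinate is centered
    at n, and all coordinates are normalized by sqrt(n)."""
    s = sqrt(n)
    return [-(n - Q[0]) / s] + [q / s for q in Q[1:]]
```

For instance, with N = 100 and occupancy (95, 10, 2), five servers are vacant and the scaled state is (−0.5, 1.0, 0.2).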
3.1 Fluid limit for JSQ(d) policies
We first consider the fluid limit for JSQ(d) policies with an arbitrary but fixed value of d, as characterized by Mitzenmacher [26] and Vvedenskaya et al. [43].
The sequence of processes {q^N(t)} has a weak limit q(t) that satisfies the system of differential equations

dq_i(t)/dt = λ (q_{i−1}(t)^d − q_i(t)^d) − (q_i(t) − q_{i+1}(t)),  i = 1, 2, ...,   (3.2)

with the convention that q_0(t) ≡ 1.
The fluid-limit equations may be interpreted as follows. The first term represents the rate of increase in the fraction of servers with i or more tasks due to arriving tasks that are assigned to a server with exactly i − 1 tasks. Note that the latter occurs in fluid state q with probability q_{i−1}^d − q_i^d, i.e., the probability that all d sampled servers have i − 1 or more tasks, but not all of them have i or more tasks. The second term corresponds to the rate of decrease in the fraction of servers with i or more tasks due to service completions from servers with exactly i tasks, and the latter rate is given by q_i − q_{i+1}.
The unique fixed point of (3.2) for any d ≥ 2 is obtained as

q_i* = λ^((d^i − 1)/(d − 1)),  i = 1, 2, ....   (3.3)
It can be shown that the fixed point is asymptotically stable in the sense that q(t) → q* as t → ∞ for any initial fluid state q(0) with Σ_i q_i(0) < ∞. The fixed point reveals that the stationary queue length distribution at each individual server exhibits superexponential decay as N → ∞, as opposed to exponential decay for a random assignment policy. It is worth observing that this involves an interchange of the many-server (N → ∞) and stationary (t → ∞) limits. The justification is provided by the asymptotic stability of the fixed point along with a few further technical conditions.
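The convergence of the fluid dynamics (3.2) to the fixed point (3.3) is easy to check numerically. The sketch below (truncation level, step size and function names are ours) integrates the ODE system with forward Euler from an empty system and compares against the closed form:

```python
def jsq_d_fixed_point(lam, d, levels):
    """Fixed point (3.3): q_i* = lam ** ((d**i - 1)/(d - 1)), i = 1..levels."""
    return [lam ** ((d ** i - 1) / (d - 1)) for i in range(1, levels + 1)]

def integrate_fluid(lam, d, levels=10, dt=0.01, steps=20000):
    """Forward-Euler integration of the fluid limit (3.2), truncated after
    `levels` coordinates (q_{levels+1} treated as 0), starting empty."""
    q = [0.0] * levels
    for _ in range(steps):
        nxt = []
        for i in range(levels):
            prev = 1.0 if i == 0 else q[i - 1]      # convention q_0 = 1
            succ = q[i + 1] if i + 1 < levels else 0.0
            drift = lam * (prev ** d - q[i] ** d) - (q[i] - succ)
            nxt.append(q[i] + dt * drift)
        q = nxt
    return q
```

With λ = 0.9 and d = 2 the integrated trajectory settles at q_1* = 0.9, q_2* = 0.9³, q_3* = 0.9⁷, ..., illustrating the superexponential decay.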
3.2 Fluid limit for JSQ policy
We now turn to the fluid limit for the ordinary JSQ policy, which rather surprisingly was not rigorously established until fairly recently in [31], leveraging martingale functional limit theorems and time-scale separation arguments [18].
In order to state the fluid limit starting from an arbitrary fluid-scaled occupancy state, we first introduce some additional notation. For any fluid state q ∈ S, denote by m(q) = min{i : q_{i+1} < 1} the minimum queue length among all servers. Now if m(q) = 0, then define p_0(q) = 1 and p_i(q) = 0 for all i = 1, 2, .... Otherwise, in case m(q) > 0, define

p_{m(q)−1}(q) = min{(1 − q_{m(q)+1})/λ, 1},   p_{m(q)}(q) = 1 − p_{m(q)−1}(q),   (3.4)

and p_i(q) = 0 otherwise. The coefficient p_i(q) represents the instantaneous fraction of incoming tasks assigned to servers with a queue length of exactly i in the fluid state q.
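The case distinction in (3.4) translates directly into code; the sketch below (function name ours, with the fluid state given as a zero-indexed list, so q[0] is q_1) computes m(q) and the allocation coefficients:

```python
def assignment_probs(q, lam):
    """Compute m(q) and the arrival-allocation coefficients p_i(q) of (3.4).

    q is the fluid state (q_1, q_2, ...) as a list; returns (m, p) with p
    indexed from 0, so p[i] is the fraction of arrivals sent to servers
    holding exactly i tasks."""
    ext = list(q) + [0.0, 0.0]          # pad so q_{m+1} always exists
    m = 0
    while ext[m] == 1.0:                # m(q) = min{i : q_{i+1} < 1}
        m += 1
    p = [0.0] * (m + 2)
    if m == 0:
        p[0] = 1.0                      # all arrivals go to idle servers
    else:
        p[m - 1] = min((1.0 - ext[m]) / lam, 1.0)
        p[m] = 1.0 - p[m - 1]
    return m, p
```

For instance, if some servers are idle (q_1 < 1) then every arrival is absorbed at queue length 0; if instead q_1 = 1 and q_2 = 0.6 with λ = 0.5, then p_0 = 0.8 and p_1 = 0.2.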
Any weak limit q(t) of the sequence of processes {q^N(t)} is given by the deterministic system satisfying the following system of differential equations

d⁺q_i(t)/dt = λ p_{i−1}(q(t)) − (q_i(t) − q_{i+1}(t)),  i = 1, 2, ...,   (3.5)

where d⁺/dt denotes the right-derivative.
3.3 Diffusion limit for JSQ policy
We next describe the diffusion limit for the JSQ policy in the HalfinWhitt heavytraffic regime (2.1), as recently derived by Eschenfeldt & Gamarnik [10].
For suitable initial conditions, the sequence of processes {Q̄^N(t)} as in (3.1) converges weakly to the limit {Q̄(t)} = (Q̄_1(t), Q̄_2(t), ...), where (Q̄_1(t), Q̄_2(t), ...) is the unique solution to the following system of SDEs

dQ̄_1(t) = √2 dW(t) − β dt − (Q̄_1(t) − Q̄_2(t)) dt − dU_1(t),
dQ̄_2(t) = dU_1(t) − (Q̄_2(t) − Q̄_3(t)) dt,
dQ̄_i(t) = −(Q̄_i(t) − Q̄_{i+1}(t)) dt,  i = 3, 4, ...,   (3.7)

for t ≥ 0, where W(t) is the standard Brownian motion and U_1(t) is the unique nondecreasing nonnegative process satisfying ∫_0^∞ 1[Q̄_1(t) < 0] dU_1(t) = 0.
The above diffusion limit implies that the mean waiting time under the JSQ policy is of a similar order 1/√N as in the corresponding centralized M/M/N queue. Hence, we conclude that despite the distributed queueing operation a suitable load balancing policy can deliver a similar combination of excellent service quality and high resource utilization in the Halfin-Whitt regime (2.1) as in a centralized queueing arrangement. It is important though to observe a subtle but fundamental difference in the distributional properties due to the distributed versus centralized queueing operation. In the ordinary M/M/N queue a strictly positive fraction of the customers incur a nonzero waiting time as N → ∞, but a nonzero waiting time is only of length 1/√N in expectation. In contrast, under the JSQ policy, the fraction of tasks that experience a nonzero waiting time is only of the order 1/√N. However, such tasks will have to wait for the duration of a residual service time, yielding a waiting time of the order 1.
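Sample paths of the diffusion limit can be generated with a simple Euler-Maruyama scheme. The sketch below is our own discretization (truncation level and step size are arbitrary choices): the reflection term U_1 is approximated by clipping Q̄_1 at zero and crediting the overshoot to Q̄_2, mirroring the role of dU_1 in (3.7).

```python
import math
import random

def simulate_diffusion(beta, horizon=10.0, dt=1e-3, levels=5, seed=0):
    """Euler-Maruyama sketch of the diffusion limit (3.7), truncated at
    `levels` coordinates and started from the all-zero state. The
    reflection term U_1 keeps Q_1 <= 0; its increment feeds Q_2."""
    rng = random.Random(seed)
    Q = [0.0] * levels                      # Q[0] = Q_1 <= 0; Q[i] >= 0, i >= 1
    for _ in range(int(horizon / dt)):
        dW = rng.gauss(0.0, math.sqrt(dt))
        free = Q[0] + math.sqrt(2.0) * dW - beta * dt - (Q[0] - Q[1]) * dt
        dU = max(free, 0.0)                 # push needed to keep Q_1 <= 0
        new = list(Q)
        new[0] = free - dU
        succ1 = Q[2] if levels > 2 else 0.0
        new[1] = Q[1] + dU - (Q[1] - succ1) * dt
        for i in range(2, levels):
            succ = Q[i + 1] if i + 1 < levels else 0.0
            new[i] = Q[i] - (Q[i] - succ) * dt
        Q = new
    return Q
```

Along any simulated path, Q̄_1 stays nonpositive (its magnitude tracks the scaled number of idle servers) while the higher coordinates remain nonnegative.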
3.4 Heavy-traffic limits for JSQ(d) policies
Finally, we briefly discuss the behavior of JSQ(d) policies for fixed d in a heavy-traffic regime where (N − λ(N))/g(N) → β > 0 as N → ∞, with g(N) a positive function diverging to infinity. Note that the case g(N) = √N corresponds to the Halfin-Whitt heavy-traffic regime (2.1). While a complete characterization of the occupancy process for fixed d has remained elusive so far, significant partial results were recently obtained by Eschenfeldt & Gamarnik [11]. In order to describe the transient asymptotics, we introduce the following rescaled processes Q̄_i^N(t) = (N − Q_i^N(t))/g(N), i = 1, 2, ....
Then, for suitable initial states, on any finite time interval, the sequence of processes {Q̄^N(t)} converges weakly to a deterministic system {Q̄(t)} that satisfies the following system of ODEs
(3.8) 
with the convention that .
It is noteworthy that the scaled occupancy process loses its diffusive behavior for fixed d. It is further shown in [11] that with high probability the steady-state fraction of queues with length at least of the order log N tasks approaches unity, which in turn implies that with high probability the steady-state delay is at least of the order log N as N → ∞. The diffusion approximation of the JSQ(d) policy in the Halfin-Whitt regime (2.1), starting from a different initial scaling, has been studied by Budhiraja & Friedlander [8]. Recently, Ying [47] introduced a broad framework involving Stein’s method to analyze the rate of convergence of the scaled steady-state occupancy process of the JSQ(d) policy when the relative capacity slack decays polynomially with N. The results in [47] establish that in steady state, most of the queues are of size log_d(1/(1 − λ(N)/N)) and thus the steady-state delay is of order log_d(1/(1 − λ(N)/N)).
3.5 Universality properties
We now further explore the trade-off between delay performance and communication overhead as a function of the diversity parameter d, in conjunction with the relative load. The latter trade-off will be examined in an asymptotic regime where not only the total task arrival rate λ(N) grows with N, but also the diversity parameter d(N) depends on N, and we write JSQ(d(N)) to explicitly reflect that. We will specifically investigate what growth rate of d(N) is required, depending on the scaling behavior of λ(N), in order to asymptotically match the optimal performance of the JSQ policy and achieve a zero mean waiting time in the limit. The results presented in this subsection are based on [31], unless specified otherwise.
Theorem 3.1.
(Universality fluid limit for JSQ(d(N))) If d(N) → ∞ as N → ∞, then for suitable initial conditions the weak limit of the sequence of processes {q^N(t)} under the JSQ(d(N)) scheme coincides with that of the ordinary JSQ policy, and in particular is given by the system of differential equations in (3.5).
Theorem 3.2.
(Universality diffusion limit for JSQ(d(N))) If d(N)/(√N log N) → ∞ as N → ∞, then for suitable initial conditions the weak limit of the sequence of processes {Q̄^N(t)} under the JSQ(d(N)) scheme coincides with that of the ordinary JSQ policy, and in particular is given by the system of SDEs in (3.7).
The above universality properties indicate that the JSQ overhead can be lowered by almost a factor O(N) and O(√N/log N) while retaining fluid-level and diffusion-level optimality, respectively. In other words, Theorems 3.1 and 3.2 thus reveal that it is sufficient for d(N) to grow at any rate and faster than √N log N in order to observe similar scaling benefits as in a corresponding centralized M/M/N queue on fluid scale and diffusion scale, respectively. The stated conditions are in fact close to necessary, in the sense that if d(N) is uniformly bounded or d(N)/(√N log N) → 0, then the fluid-limit and diffusion-limit paths of the system occupancy process under the JSQ(d(N)) scheme differ from those under the ordinary JSQ policy, respectively. In particular, if d(N) is uniformly bounded, the mean steady-state delay does not vanish asymptotically as N → ∞.
High-level proof idea.
The proofs of both Theorems 3.1 and 3.2 rely on a stochastic coupling construction to bound the difference in the queue length processes between the JSQ policy and a scheme with an arbitrary value of d. This S-coupling ('S' stands for server-based) is then exploited to obtain the fluid and diffusion limits of the JSQ(d(N)) scheme under the conditions stated in Theorems 3.1 and 3.2.
A direct comparison between the JSQ(d(N)) scheme and the ordinary JSQ policy is not straightforward, which is why the class CJSQ(n(N)) of schemes is introduced as an intermediate scenario to establish the universality result. Just like the JSQ(d(N)) scheme, the schemes in the class CJSQ(n(N)) may be thought of as "sloppy" versions of the JSQ policy, in the sense that tasks are not necessarily assigned to a server with the shortest queue length but to one of the n(N)+1 lowest ordered servers, as graphically illustrated in Figure 3(a). In particular, for n(N) = 0, the class only includes the ordinary JSQ policy. Note that the JSQ(d(N)) scheme is guaranteed to identify the lowest ordered server, but only among a randomly sampled subset of d(N) servers. In contrast, a scheme in the CJSQ(n(N)) class only guarantees that one of the n(N)+1 lowest ordered servers is selected, but across the entire pool of N servers. It may be shown that for sufficiently small n(N), any scheme from the class CJSQ(n(N)) is still 'close' to the ordinary JSQ policy. It can further be proved that for sufficiently large d(N) relative to n(N) we can construct a scheme, belonging to the class CJSQ(n(N)), which differs 'negligibly' from the JSQ(d(N)) scheme. Therefore, for a 'suitable' choice of d(N) the idea is to produce a 'suitable' intermediate scheme. This proof strategy is schematically represented in Figure 3(b).
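A minimal sketch of the "sloppiness" idea (the helper name and slack parameter n are ours; the rigorous version is the coupling argument of [31]): a CJSQ-style scheme with slack n may assign a task to any of the n+1 lowest ordered servers, so n = 0 collapses to JSQ:

```python
import random

def cjsq_choice(queues, n, rng):
    """Assign the task to one of the n+1 lowest ordered servers (servers
    ranked by ascending queue length); n = 0 recovers the JSQ policy."""
    order = sorted(range(len(queues)), key=lambda i: queues[i])
    return rng.choice(order[:n + 1])
```

The interest of the class is precisely that it sandwiches both JSQ (n = 0) and, for suitable n relative to d, a scheme negligibly different from JSQ(d).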
In order to prove the stochastic comparisons among the various schemes, the many-server system is described as an ensemble of stacks, in a way that two different ensembles can be ordered. This stack formulation has also been considered in the literature for establishing the stochastic optimality properties of the JSQ policy [36, 39, 40]. However, it is only through the stack arguments developed in [31] that the comparison results can be extended to any scheme from the class CJSQ.
4 Blocking and infinite-server dynamics
The basic scenario that we have focused on so far involved single-server queues. In this section we turn attention to a system with N parallel server pools, each with B servers, where B can possibly be infinite. As before, tasks must immediately be forwarded to one of the server pools, but now tasks also directly start execution or are discarded otherwise. The execution times are assumed to be exponentially distributed, and do not depend on the number of other tasks receiving service simultaneously. The current scenario will be referred to as 'infinite-server dynamics', in contrast to the earlier single-server queueing dynamics.
As it turns out, the JSQ policy has similar stochastic optimality properties as in the case of single-server queues, and in particular stochastically minimizes the cumulative number of discarded tasks [35, 19, 24, 25]. However, the JSQ policy also suffers from a similar scalability issue due to the excessive communication overhead in large-scale systems, which can be mitigated through JSQ(d) policies. Results of Turner [41] and recent papers by Karthik et al. [20], Mukhopadhyay et al. [32, 33], and Xie et al. [46] indicate that JSQ(d) policies provide similar "power-of-choice" gains for loss probabilities. It may be shown though that the optimal performance of the JSQ policy cannot be matched for any fixed value of d.
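The blocking behavior can be sketched as follows (a deliberately stripped-down toy of our own that ignores departures; all names are assumptions): each arriving task samples d pools, joins the least-loaded sampled pool, and is discarded if that pool is already full:

```python
import random

def simulate_loss(num_pools, capacity, arrivals, d, seed=0):
    """Count admitted and blocked tasks under JSQ(d) dispatching to
    finite-capacity pools; departures are ignored in this sketch."""
    rng = random.Random(seed)
    active = [0] * num_pools
    blocked = 0
    for _ in range(arrivals):
        target = min(rng.sample(range(num_pools), d),
                     key=lambda i: active[i])
        if active[target] < capacity:
            active[target] += 1
        else:
            blocked += 1
    return active, blocked
```

Larger d spreads the admitted tasks more evenly over the pools, which is the mechanism behind the "power-of-choice" gains for loss probabilities mentioned above.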
Motivated by these observations, we explore the trade-off between performance and communication overhead for infinite-server dynamics. We will demonstrate that the optimal performance of the JSQ policy can be asymptotically retained while drastically reducing the communication burden, mirroring the universality properties described in Section 3.5 for single-server queues. The results presented in the remainder of the section are extracted from [29], unless indicated otherwise.
4.1 Fluid limit for JSQ policy
As in Subsection 3.2, for any fluid state x, denote by m(x) = min{i : x_{i+1} < 1} the minimum queue length among all server pools. Now if m(x) = 0, then define p_0(x) = 1 and p_i(x) = 0 for all i ≥ 1. Otherwise, in case m(x) ≥ 1, define
(4.1) p_{m(x)−1}(x) = min{ m(x)(1 − x_{m(x)+1})/λ, 1 },  p_{m(x)}(x) = 1 − p_{m(x)−1}(x),
and p_i(x) = 0 otherwise. As before, the coefficient p_i(x) represents the instantaneous fraction of incoming tasks assigned to server pools with a queue length of exactly i in the fluid state x.
Any weak limit of the sequence of fluid-scaled processes is given by the deterministic system x(·) satisfying the following set of differential equations
(4.2) (d⁺/dt) x_i(t) = λ p_{i−1}(x(t)) − i (x_i(t) − x_{i+1}(t)),  i = 1, 2, …,
where d⁺/dt denotes the right-derivative.
Equations (4.1) and (4.2) are to be contrasted with Equations (3.4) and (3.5). While the forms of (4.1) and of the evolution equations (4.2) of the limiting dynamical system remain similar to those of (3.4) and (3.5), respectively, an additional factor m(x) appears in (4.1) and the rate of decrease in (4.2) now becomes i(x_i − x_{i+1}), reflecting the infinite-server dynamics.
Let K = ⌊λ⌋ and f = λ − K denote the integral and fractional parts of λ, respectively. It is easily verified that, assuming K < B, the unique fixed point of the dynamical system in (4.2) is given by
(4.3) x_i* = 1 for i = 1, …, K,  x_{K+1}* = f,  x_i* = 0 for i ≥ K + 2,
and thus the fixed-point fractions sum to λ. This is consistent with the results in Mukhopadhyay et al. [32, 33] and Xie et al. [46] for fixed d, where taking d → ∞ yields the same fixed point. The fixed point in (4.3), in conjunction with an interchange of limits argument, indicates that in stationarity the fraction of server pools with at least K + 2 or at most K − 1 active tasks is negligible as N → ∞.
4.2 Diffusion limit for JSQ policy
As it turns out, the diffusion-limit results may be qualitatively different, depending on whether f = 0 or f > 0, and we will distinguish between these two cases accordingly. Observe that for any assignment scheme, in the absence of overflow events, the total number of active tasks evolves as the number of jobs in an M/M/∞ system, for which the diffusion limit is well-known. For the JSQ policy, it can be established that the total number of server pools with K − 2 or fewer and K + 2 or more tasks is negligible on the diffusion scale. If f > 0, the number of server pools with K − 1 tasks is negligible as well, and the dynamics of the number of server pools with K or K + 1 tasks can then be derived from the known diffusion limit of the total number of tasks mentioned above. In contrast, if f = 0, the number of server pools with K − 1 tasks is not negligible on the diffusion scale, and the limiting behavior is qualitatively different, but can still be characterized. We refer to [29] for further details.
4.3 Universality of JSQ(d) policies in infinite-server scenario
As in Subsection 3.5, we now further explore the trade-off between performance and communication overhead as a function of the diversity parameter d, in conjunction with the relative load. We will specifically investigate what growth rate of d(N) is required, depending on the scaling behavior of λ(N), in order to asymptotically match the optimal performance of the JSQ policy.
Theorem 4.1.
(Universality fluid limit for JSQ(d(N))) If d(N) → ∞ as N → ∞, then for suitable initial conditions the weak limit of the sequence of fluid-scaled occupancy processes under the JSQ(d(N)) scheme coincides with that of the ordinary JSQ policy, given by the dynamical system in (4.2).
In order to state the universality result on the diffusion scale, define, in case f > 0, the centered and scaled variables
Q̄_i^N(t) = (Q_i^N(t) − N)/√N for i ≤ K,  Q̄_{K+1}^N(t) = (Q_{K+1}^N(t) − fN)/√N,  Q̄_i^N(t) = Q_i^N(t)/√N for i ≥ K + 2,
and otherwise, if f = 0, assume (KN − λ(N))/√N → β ∈ ℝ as N → ∞, and define Q̄_i^N(t) = (Q_i^N(t) − N)/√N for i ≤ K and Q̄_i^N(t) = Q_i^N(t)/√N for i ≥ K + 1.
Theorem 4.2 (Universality diffusion limit for JSQ(d(N))).
Assume d(N)/(√N log N) → ∞ as N → ∞.
Under suitable initial conditions
(i)
If f > 0, then Q̄_i^N(·) converges weakly to the zero process for i ≠ K + 1, and Q̄_{K+1}^N(·) converges weakly to the Ornstein-Uhlenbeck process Q̄_{K+1}(·) satisfying the SDE
dQ̄_{K+1}(t) = −Q̄_{K+1}(t) dt + √(2λ) dW(t),
where W(·) is the standard Brownian motion.
(ii)
If f = 0, then Q̄_i^N(·) converges weakly to the zero process for i ≤ K − 1 and i ≥ K + 2, and (Q̄_K^N(·), Q̄_{K+1}^N(·)) converges weakly to a process (Q̄_K(·), Q̄_{K+1}(·)), described by the unique solution of a system of SDEs driven by a standard Brownian motion W(·) together with a unique nondecreasing reflection-type process; we refer to [29] for the precise expressions.
Given the asymptotic results for the JSQ policy in Subsections 4.1 and 4.2, the proofs of the asymptotic results for the JSQ(d(N)) scheme in Theorems 4.1 and 4.2 involve establishing a universality result which shows that the limiting processes for the JSQ(d(N)) scheme are 'alike' to those for the ordinary JSQ policy for suitably large d(N). Loosely speaking, if two schemes are alike on g(N) scale, then in some sense, the associated system occupancy states are indistinguishable on g(N) scale.
The next theorem states a sufficient criterion for the JSQ(d(N)) scheme and the ordinary JSQ policy to be alike, and thus provides the key vehicle in establishing the universality result.
Theorem 4.3.
Let g : ℕ → ℝ₊ be a function diverging to infinity.
Then the JSQ policy and the JSQ(d(N)) scheme are alike,
on g(N) scale, if
(i) d(N) → ∞, for g(N) = Θ(N),
(ii) d(N) g(N)/(N log(N/g(N))) → ∞, for g(N) = o(N).
The proof of Theorem 4.3 relies on a novel coupling construction, called T-coupling ('T' stands for task-based), which is used to bound, from below and above, the difference of the occupancy states of two arbitrary schemes. This T-coupling [29] is distinct from and inherently stronger than the S-coupling used in Subsection 3.5 in the single-server queueing scenario. Note that in the current infinite-server scenario, the departures of the ordered server pools cannot be coupled, mainly since the departure rate at the m-th ordered server pool, for any m, depends on its number of active tasks. The T-coupling is also fundamentally different from the coupling constructions used in establishing the weak majorization results in [45, 36, 39, 40, 44] in the context of the ordinary JSQ policy in the single-server queueing scenario, and in [35, 19, 24, 25] in the scenario of state-dependent service rates.
5 Universality of load balancing in networks
In this section we return to the single-server queueing dynamics, and extend the universality properties to network scenarios, where the servers are assumed to be interconnected by some underlying graph topology G_N. Tasks arrive at the various servers as independent Poisson processes of rate λ, and each incoming task is assigned to whichever server has the smallest number of tasks among the server where it arrives and its neighbors in G_N. Thus, in case G_N is a clique, each incoming task is assigned to the server with the shortest queue across the entire system, and the behavior is equivalent to that under the JSQ policy. The stochastic optimality properties of the JSQ policy thus imply that the queue length process in a clique will be better balanced and smaller (in a majorization sense) than in an arbitrary graph G_N.
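The network variant of the assignment rule is a one-liner; the toy below (with a hypothetical adjacency-list encoding of our own) applies it on a 5-server ring and on a clique, where the clique case coincides with JSQ:

```python
def assign_on_graph(adj, queues, arrival_server):
    """Send the task to the shortest queue among the arrival server and its
    neighbors in the graph; on a clique this is exactly the JSQ policy."""
    candidates = [arrival_server] + adj[arrival_server]
    return min(candidates, key=lambda i: queues[i])

# two toy topologies on 5 servers
ring = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}
clique = {i: [j for j in range(5) if j != i] for i in range(5)}
```

With queues [0, 4, 3, 3, 2], a task arriving at server 2 stays at server 2 on the ring, since its neighborhood {1, 2, 3} contains no shorter queue, whereas on the clique it reaches the idle server 0 — a small instance of why sparser topologies can fall short of clique performance.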
Besides the prohibitive communication overhead discussed earlier, a further scalability issue of the JSQ policy arises when the execution of a task involves the use of some data. Storing such data for all possible tasks on all servers would typically require an excessive amount of storage capacity. These two burdens can be effectively mitigated in sparser graph topologies where tasks that arrive at a specific server i are only allowed to be forwarded to a subset of servers N_i. For the tasks that arrive at server i, queue length information then only needs to be obtained from the servers in N_i, and it suffices to store replicas of the required data on the servers in N_i. The subset N_i containing the peers of server i can be naturally viewed as its neighborhood in some graph topology G_N. In this section we focus on the results in [28] for the case of undirected graphs, but most of the analysis can be extended to directed graphs.
The above model has been studied in [15, 41], focusing on certain fixed-degree graphs and in particular ring topologies. The results demonstrate that the flexibility to forward tasks to a few neighbors, or even just one, with possibly shorter queues significantly improves the performance in terms of the waiting time and the tail distribution of the queue length. This resembles the "power-of-choice" gains observed for JSQ(d) policies in complete graphs. However, the results in [15, 41] also establish that the performance sensitively depends on the underlying graph topology, and that selecting from a fixed set of neighbors typically does not match the performance of resampling d alternate servers for each incoming task from the entire population, as in the power-of-d scheme in a complete graph.
If tasks do not get served and never depart but simply accumulate, then the scenario described above amounts to a so-called balls-and-bins problem on a graph. Viewed from that angle, a close counterpart of our setup is studied in Kenthapadi & Panigrahy [21], where in our terminology each arriving task is routed to the shortest of d ≥ 2 randomly selected neighboring queues.
The key challenge in the analysis of load balancing on arbitrary graph topologies is that one needs to keep track of the evolution of the number of tasks at each vertex along with the corresponding neighborhood relationships. This creates a major problem in constructing a tractable Markovian state descriptor, and renders a direct analysis of such processes highly intractable. Consequently, even asymptotic results for load balancing processes on arbitrary graphs have remained scarce so far. The approach in [28] is radically different, and aims at comparing the load balancing process on an arbitrary graph with that on a clique. Specifically, rather than analyzing the behavior for a given class of graphs or degree values, the analysis explores for what types of topologies and degree properties the performance is asymptotically similar to that in a clique. The proof arguments in [28] build on the stochastic coupling constructions developed in Subsection 3.5 for JSQ(d) policies. Specifically, the load balancing process on an arbitrary graph is viewed as a 'sloppy' version of that on a clique, and several other intermediate sloppy versions are constructed.
Let Q_i^N(t) denote the number of servers with queue length at least i at time t, i = 1, 2, …, and let the fluid-scaled variables q_i^N(t) = Q_i^N(t)/N be the corresponding fractions. Also, in the Halfin-Whitt heavy-traffic regime (2.1), define the centered and diffusion-scaled variables Q̄_1^N(t) = (Q_1^N(t) − N)/√N and Q̄_i^N(t) = Q_i^N(t)/√N for i ≥ 2, analogous to (3.1).
The next definition introduces two notions of asymptotic optimality.
Definition 5.1 (Asymptotic optimality).
A graph sequence {G_N} is called 'asymptotically optimal on N-scale' or 'N-optimal', if for any λ < 1, the scaled occupancy process (q_1^N(·), q_2^N(·), …) converges weakly, on any finite time interval, to the process given by (3.5). Moreover, {G_N} is called 'asymptotically optimal on √N-scale' or '√N-optimal', if in the Halfin-Whitt heavy-traffic regime (2.1) the diffusion-scaled occupancy process (Q̄_1^N(·), Q̄_2^N(·), …) converges weakly, on any finite time interval, to the process given by (3.7).
Intuitively speaking, if a graph sequence is N-optimal or √N-optimal, then in some sense, the associated occupancy processes are indistinguishable from those of the sequence of cliques on N-scale or √N-scale. In other words, on any finite time interval their occupancy processes can differ from those in cliques by at most o(N) or o(√N), respectively.
5.1 Asymptotic optimality criteria for deterministic graph sequences
We now develop a criterion for asymptotic optimality of an arbitrary deterministic graph sequence on different scales. We first introduce some useful notation, and two measures of well-connectedness. Let G = (V, E) be any graph. For a subset U ⊆ V, define com(U) ⊆ V to be the set of all vertices that are disjoint from U, i.e., that are neither in U nor share an edge with any vertex in U. For any fixed ε > 0 define
(5.1) dis₁(G_N, ε) = sup { |com(U)| : U ⊆ V_N, |U| ≥ εN },  dis₂(G_N, ε) = sup { |com(U)| : U ⊆ V_N, |U| ≥ ε√N }.
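The set com(U) is straightforward to compute on an adjacency-list representation; the helper below (our own naming, on a toy 5-cycle) returns the vertices disjoint from U:

```python
def com(adj, U):
    """Vertices that are neither in U nor share an edge with any vertex
    in U (the 'disjoint' vertices of Section 5.1)."""
    U = set(U)
    return {v for v in adj if v not in U and not U & set(adj[v])}

# 5-cycle: every vertex has exactly two neighbors (toy example)
ring = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}
```

On a clique, com(U) is empty for every nonempty U, the extreme case of well-connectedness; the measures in (5.1) quantify how far a given topology deviates from that ideal.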
The next theorem provides sufficient conditions for asymptotic optimality on N-scale and √N-scale in terms of the above two well-connectedness measures.
Theorem 5.2.
For any graph sequence {G_N}: (i) {G_N} is N-optimal if for any ε > 0, dis₁(G_N, ε)/N → 0 as N → ∞. (ii) {G_N} is √N-optimal if for any ε > 0, dis₂(G_N, ε)/√N → 0 as N → ∞.
The next corollary is an immediate consequence of Theorem 5.2.
Corollary 5.3.
Let {G_N} be any graph sequence, and let d_min(G_N) denote the minimum degree of G_N. Then (i) if d_min(G_N) = N − o(N), then {G_N} is N-optimal, and (ii) if d_min(G_N) = N − o(√N), then {G_N} is √N-optimal.
We now provide a sketch of the main proof arguments for Theorem 5.2 as used in [28], focusing on the proof of √N-optimality. The proof of N-optimality follows along similar lines. First of all, it can be established that if a system is able to assign each task to a server in the set of the n(N) nodes with the shortest queues, where n(N) is o(√N), then it is √N-optimal. Since the underlying graph is not a clique, however (otherwise there is nothing to prove), not every arriving task can be assigned to a server in that set. Hence, a further stochastic comparison property is proved in [28], implying that if on any finite time interval the number of tasks that are not assigned to a server in that set is o(√N), then the system is √N-optimal as well. The √N-optimality can then be concluded when the number of such tasks is o(√N), which is demonstrated in [28] under the condition that dis₂(G_N, ε)/√N → 0 as N → ∞, as stated in Theorem 5.2.
5.2 Asymptotic optimality of random graph sequences
Next we investigate how the load balancing process behaves on random graph topologies. Specifically, we aim to understand what types of graphs are asymptotically optimal in the presence of randomness (i.e., in an averagecase sense). Theorem 5.4 below establishes sufficient conditions for asymptotic optimality of a sequence of inhomogeneous random graphs. Recall that a graph is called a supergraph of if and .
Theorem 5.4.
Let {G_N} be a graph sequence such that for each N, G_N = (V_N, E_N) is a supergraph of the inhomogeneous random graph G_N' where any two vertices u, v ∈ V_N share an edge with probability p_{uv}^N.

(i) If for each ε > 0, there exist subsets of vertices V_N' ⊆ V_N with |V_N'| < εN, such that the minimum edge probability among vertex pairs outside V_N' is ω(1/N), then {G_N} is N-optimal.

(ii) If for each ε > 0, there exist subsets of vertices V_N' ⊆ V_N with |V_N'| < ε√N, such that the minimum edge probability among vertex pairs outside V_N' is ω(log(N)/√N), then {G_N} is √N-optimal.
The proof of Theorem 5.4 relies on Theorem 5.2. Specifically, if {G_N} satisfies conditions (i) and (ii) in Theorem 5.4, then the corresponding conditions (i) and (ii) in Theorem 5.2 hold.
As an immediate corollary to Theorem 5.4 we obtain an optimality result for the sequence of Erdős-Rényi random graphs.
Corollary 5.5.
Let {G_N} be a graph sequence such that for each N, G_N is a supergraph of the Erdős-Rényi random graph ER(N, p(N)). Then (i) if Np(N)/log(N) → ∞ as N → ∞, then {G_N} is N-optimal, and (ii) if Np(N)/(√N log(N)) → ∞ as N → ∞, then {G_N} is √N-optimal.