Longest-processing-time-first scheduling

Longest-processing-time-first (LPT) is a greedy algorithm for job scheduling. The input to the algorithm is a set of jobs, each of which has a specific processing-time. There is also a number m specifying the number of machines that can process the jobs. The LPT algorithm works as follows:

  1. Order the jobs in descending order of their processing-time, so that the job with the longest processing time comes first.
  2. Schedule each job in this sequence on the machine whose current load (= total processing-time of the jobs already scheduled on it) is smallest.

Step 2 of the algorithm is essentially the list-scheduling (LS) algorithm. The difference is that LS loops over the jobs in an arbitrary order, while LPT pre-orders them by descending processing time.
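
These two steps translate directly into code. Below is a minimal Python sketch (the function name lpt_schedule and the list-of-lists output format are illustrative, not taken from any particular library); looping over the jobs in their given order instead of sorting them first would give plain list scheduling.

```python
import heapq

def lpt_schedule(processing_times, m):
    """Greedy LPT: assign each job (longest first) to the currently least-loaded machine.

    Returns a list of m lists, one per machine, containing the jobs assigned to it.
    """
    # Step 1: order the jobs by descending processing time.
    jobs = sorted(processing_times, reverse=True)

    # Min-heap of (current load, machine index); the root is always the least-loaded machine.
    heap = [(0, i) for i in range(m)]
    heapq.heapify(heap)
    machines = [[] for _ in range(m)]

    # Step 2: put each job on the machine with the smallest current load.
    for job in jobs:
        load, i = heapq.heappop(heap)
        machines[i].append(job)
        heapq.heappush(heap, (load + job, i))
    return machines
```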

LPT was first analyzed by Ronald Graham in the 1960s in the context of the identical-machines scheduling problem.[1] Later, it was applied to many other variants of the problem.

LPT can also be described in a more abstract way, as an algorithm for multiway number partitioning. The input is a set S of numbers, and a positive integer m; the output is a partition of S into m subsets. LPT orders the input from largest to smallest, and puts each input in turn into the part with the smallest sum so far.

Examples

If the input set is S = {4, 5, 6, 7, 8} and m = 2, then the resulting partition is {8, 5, 4}, {7, 6}, with sums 17 and 13. If m = 3, then the resulting 3-way partition is {8}, {7, 4}, {6, 5}, with sums 8, 11 and 11.
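
Running the illustrative lpt_schedule sketch from the previous section on this instance reproduces these partitions (a different tie-breaking rule between equally loaded machines may shift items between parts of equal sum):

```python
# Reusing the illustrative lpt_schedule sketch from above.
print(lpt_schedule([4, 5, 6, 7, 8], m=2))  # [[8, 5, 4], [7, 6]]   -> sums 17 and 13
print(lpt_schedule([4, 5, 6, 7, 8], m=3))  # [[8], [7, 4], [6, 5]] -> sums 8, 11, 11
```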

Properties

LPT might not find the optimal partition. For example, in the above instance with m = 2, the optimal partition is {8, 7}, {6, 5, 4}, where both sums are equal to 15. However, its suboptimality is bounded both in the worst case and in the average case; see Performance guarantees below.

The running time of LPT is dominated by the sorting, which takes O(n log n) time, where n is the number of inputs.

LPT is monotone in the sense that, if one of the input numbers increases, the objective function (the largest sum or the smallest sum of a subset in the output) weakly increases. This is in contrast to the Multifit algorithm.

Performance guarantees: identical machines

When used for identical-machines scheduling, LPT attains the following approximation ratios.

Worst-case maximum sum

In the worst case, the largest sum in the greedy partition is at most 4/3 times the optimal (minimum) largest sum.[2]

A more detailed analysis yields a factor of (4m-1)/(3m) = 4/3 - 1/(3m) times the optimal (minimum) largest sum.[1][3] For example, when m = 2 this ratio is 7/6 ≈ 1.167.

The factor (4m-1)/(3m) is tight. Suppose there are 2m+1 inputs (where m is even): 2m-1, 2m-1, 2m-2, 2m-2, ..., m+1, m+1, m, m, m. Then the greedy algorithm returns the partition

{2m-1, m, m}, {2m-1, m}, {2m-2, m+1}, {2m-2, m+1}, ..., {3m/2, 3m/2-1}, {3m/2, 3m/2-1}

with a maximum of 4m-1, but the optimal partition is

{m, m, m}, {2m-1, m+1}, {2m-1, m+1}, {2m-2, m+2}, {2m-2, m+2}, ..., {3m/2, 3m/2}

with a maximum of 3m.
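
This construction can be checked numerically. The sketch below reuses the illustrative lpt_schedule from the introduction, builds the instance for an even m, and compares the LPT maximum 4m-1 with the optimal maximum 3m:

```python
def worst_case_instance(m):
    """Two copies each of 2m-1, 2m-2, ..., m+1, plus three copies of m (m must be even)."""
    return [v for v in range(2 * m - 1, m, -1) for _ in range(2)] + [m, m, m]

m = 6
loads = [sum(part) for part in lpt_schedule(worst_case_instance(m), m)]
print(max(loads), 3 * m)       # 23 18  (LPT maximum vs. optimal maximum)
print(max(loads) / (3 * m))    # 1.2777... = (4m-1)/(3m) = 4/3 - 1/(3m)
```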

Input consideration

An even more detailed analysis takes into account the number of inputs in the max-sum part.

  1. In each part of the greedy partition, the j-th highest number is at most OPT/j, where OPT denotes the optimal (minimum) largest sum.[4]
  2. Suppose that the greedy part P with the maximum sum contains L inputs. Then the approximation ratio of the greedy algorithm is (L+1)/L - 1/(Lm) = 1 + 1/L - 1/(Lm). It is tight for L ≥ 3 (for L = 3, we recover the general factor 4/3 - 1/(3m)). Proof: denote the numbers in P by x1, ..., xL. Just before xL was inserted into P, the sum of P was the smallest, so every part had sum at least x1 + ... + x(L-1). Therefore, the average sum per part is at least (x1 + ... + x(L-1)) + xL/m, and the optimal largest sum is at least this average. In contrast, the greedy largest sum is (x1 + ... + x(L-1)) + xL. So the difference is at most (1 - 1/m)xL, which by property 1 is at most (1 - 1/m)OPT/L. Hence the ratio is at most 1 + 1/L - 1/(Lm); the chain of inequalities is written out below.
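
For reference, the chain of inequalities in this proof can be written out as a short LaTeX sketch (OPT denotes the optimal largest sum, as above):

```latex
% P = {x_1,...,x_L} is the max-sum part of the greedy partition; OPT is the optimal largest sum.
\begin{align*}
\mathrm{OPT} &\ge \frac{\text{total sum}}{m}
  \ge \frac{m\,(x_1+\cdots+x_{L-1}) + x_L}{m}
  = (x_1+\cdots+x_{L-1}) + \frac{x_L}{m}
  && \text{(every part had sum at least $x_1+\cdots+x_{L-1}$ when $x_L$ arrived)}\\
\text{greedy max-sum} &= (x_1+\cdots+x_{L-1}) + x_L\\
\text{greedy max-sum} - \mathrm{OPT} &\le \Bigl(1-\tfrac{1}{m}\Bigr)x_L
  \le \Bigl(1-\tfrac{1}{m}\Bigr)\frac{\mathrm{OPT}}{L}
  && \text{(by property 1, $x_L \le \mathrm{OPT}/L$)}\\
\frac{\text{greedy max-sum}}{\mathrm{OPT}} &\le 1 + \frac{1}{L} - \frac{1}{Lm}.
\end{align*}
```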

Worst-case minimum sum

In the worst case, the smallest sum in the returned partition is at least 3/4 times the optimal (maximum) smallest sum.[5]

Proof

The proof is by contradiction. One considers a minimal counterexample, that is, a counterexample with the smallest m and the fewest input numbers, denotes the greedy partition by P1,...,Pm and the optimal partition by Q1,...,Qm, and derives properties of such a minimal counterexample that together yield a contradiction.

Average-case maximum sum

In the average case, if the inputs are drawn independently and uniformly at random from [0,1], then the expected largest sum of the LPT schedule with m = 2 machines is at least n/4 + 1/(4n+4) and at most n/4 + e/(2n+2), where n is the number of inputs.[6]

More generally, the largest sum of LPT is at most 1 + O(log log n / n) times the optimum almost surely, and 1 + O(1/n) times the optimum in expectation, where n is the number of inputs.[7]

The difference between the largest sum of LPT and the optimal largest sum is at most O(log n / n) almost surely (for uniform or negative-exponential distributions), and at most O(m²/n) in expectation (for the uniform distribution). These results hold also for machines with different speeds.[8]

General objectives

Let Ci (for i between 1 and m) be the sum of subset i in a given partition. Instead of minimizing the objective function max(Ci), one can minimize the objective function max(f(Ci)), where f is any fixed function. Similarly, one can minimize the objective function sum(f(Ci)). Alon, Azar, Woeginger and Yadid[9] prove that, if f satisfies the following two conditions:

  1. A strong continuity condition called Condition F*: for every ε>0 there exists δ>0 such that, if |y-x|<δx, then |f(y)-f(x)|<εf(x).
  2. Convexity.

then the LPT rule has a finite approximation ratio for minimizing sum(f(Ci)); a small example of evaluating these objectives is sketched below.
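
As an illustration of these objectives (this is not the approximation scheme of Alon, Azar, Woeginger and Yadid), the sketch below evaluates an LPT partition under max(f(Ci)) and sum(f(Ci)) for the convex function f(x) = x²; evaluate_partition is a hypothetical helper, and lpt_schedule is the sketch from the introduction.

```python
def evaluate_partition(parts, f=lambda c: c * c):
    """Return (max f(C_i), sum f(C_i)) for a partition; f(c) = c**2 is convex."""
    sums = [sum(part) for part in parts]
    return max(f(c) for c in sums), sum(f(c) for c in sums)

parts = lpt_schedule([4, 5, 6, 7, 8], m=2)   # [[8, 5, 4], [7, 6]] with sums 17 and 13
print(evaluate_partition(parts))             # (289, 458) = (17**2, 17**2 + 13**2)
```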

Performance with divisible item sizes

An important special case is that the item sizes form a divisible sequence (also called factored). A special case of divisible item sizes occurs in memory allocation in computer systems, where the item sizes are all powers of 2. If the item sizes are divisible, and in addition the largest item size divides the bin size, then LPT always finds a schedule that minimizes the maximum size[10] and maximizes the minimum size.
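
As a small sanity check with assumed power-of-two sizes (an illustration, not a proof of the cited result), the sketch below compares LPT against a brute-force optimum; optimal_makespan is a hypothetical helper that is only feasible for tiny instances, and lpt_schedule is the sketch from the introduction.

```python
from itertools import product

def optimal_makespan(jobs, m):
    """Brute-force minimum largest sum over all m**len(jobs) assignments."""
    best = float("inf")
    for assignment in product(range(m), repeat=len(jobs)):
        loads = [0] * m
        for job, machine in zip(jobs, assignment):
            loads[machine] += job
        best = min(best, max(loads))
    return best

jobs = [8, 8, 4, 4, 2, 2, 2, 1, 1]   # divisible (power-of-two) item sizes
m = 2
lpt_max = max(sum(part) for part in lpt_schedule(jobs, m))
print(lpt_max, optimal_makespan(jobs, m))   # 16 16 -- LPT is optimal on this instance
```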

Adaptations to other settings

Besides the simple case of identical-machines scheduling, LPT has been adapted to more general settings.

Uniform machines

In uniform-machines scheduling, different machines may have different speeds. The LPT rule assigns each job to the machine on which its completion time will be earliest (that is, LPT may assign a job to a machine with a larger current load, if this machine is so fast that it would finish that job earlier than all other machines).[11]

The approximation ratio of LPT in this setting is at most 2m/(m+1). It is not tight, but there is an asymptotic lower bound of 1.5 when m approaches infinity. For the special case of m = 2 machines, the approximation ratio is at most (1+√17)/4 ≈ 1.281 and it is tight.[12] A modified version of LPT for two uniform machines attains an approximation ratio of √1.5 ≈ 1.2247, and it is tight.[13]
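
A minimal Python sketch of this rule (the function name lpt_uniform and the example speeds are illustrative, not taken from any library): each job is placed on the machine where it would finish earliest, which may be a fast machine that already carries more work.

```python
def lpt_uniform(processing_times, speeds):
    """LPT for uniform machines: each job goes where its completion time would be earliest."""
    jobs = sorted(processing_times, reverse=True)
    work = [0.0] * len(speeds)                 # total processing time assigned to each machine
    assignment = [[] for _ in speeds]
    for job in jobs:
        # completion time on machine k if the job were added there: (work[k] + job) / speeds[k]
        i = min(range(len(speeds)), key=lambda k: (work[k] + job) / speeds[k])
        work[i] += job
        assignment[i].append(job)
    finish_times = [work[i] / speeds[i] for i in range(len(speeds))]
    return assignment, finish_times

print(lpt_uniform([7, 6, 5, 4], speeds=[3.0, 1.0]))
# ([[7, 6, 4], [5]], [5.666..., 5.0]) -- the fast machine takes most of the work
```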

Cardinality constraints

In the balanced partition problem, there are constraints on the number of jobs that can be assigned to each machine. A simple constraint is that each machine can process at most c jobs. The LPT rule assigns each job to the machine with the smallest load from among those with fewer than c jobs. This rule is called modified LPT or MLPT.
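
A minimal sketch of the MLPT rule, with mlpt_schedule as an illustrative name: the next job goes to the least-loaded machine among those that currently hold fewer than c jobs.

```python
def mlpt_schedule(processing_times, m, c):
    """Modified LPT: least-loaded machine among those with fewer than c jobs."""
    jobs = sorted(processing_times, reverse=True)
    if len(jobs) > m * c:
        raise ValueError("infeasible: more than m*c jobs")
    loads, counts = [0] * m, [0] * m
    assignment = [[] for _ in range(m)]
    for job in jobs:
        eligible = [i for i in range(m) if counts[i] < c]   # machines with room left
        i = min(eligible, key=lambda k: loads[k])
        loads[i] += job
        counts[i] += 1
        assignment[i].append(job)
    return assignment
```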

For c = 3 (at most three jobs per machine), MLPT attains at most (4m-1)/(3m) of the minimum largest sum, which is the same approximation ratio that LPT attains for the unconstrained problem.[14] The bound is tight for MLPT. It is conjectured[15] that MLPT has the same approximation ratio for more general cardinality constraints (c > 3). Currently, it is known that the approximation ratio of MLPT for general c > 3 is at most 2.[16]

For the objective of maximizing the smallest sum with c = 3, an analogous LPT-based rule attains at least (3m-1)/(4m-2) of the maximum smallest sum, which is again the same ratio that LPT attains for the unconstrained problem.[17]

Another constraint is that the number of jobs on each machine should be n/m rounded either up or down. In an adaptation of LPT called restricted LPT or RLPT, inputs are assigned in pairs, one to each machine (for m = 2 machines). The resulting partition is balanced by design.
Its expected largest sum, when the inputs are distributed uniformly at random in [0,1], is n/4 + 1/(2n+2), where n is the number of inputs, and the expected difference between the largest and smallest sum is Θ(1/n).[18]
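
One plausible reading of the RLPT pairing rule for m = 2 is sketched below (the exact pairing and tie-breaking used in the cited analysis may differ): sort the inputs in descending order, take them two at a time, and give the larger element of each pair to the machine that is currently behind.

```python
def rlpt_two_machines(processing_times):
    """Restricted LPT sketch for m = 2: assign the sorted inputs in pairs, one per machine."""
    jobs = sorted(processing_times, reverse=True)
    if len(jobs) % 2 == 1:
        jobs.append(0)                                   # pad with a dummy job of length zero
    parts, loads = ([], []), [0, 0]
    for a, b in zip(jobs[0::2], jobs[1::2]):
        behind = 0 if loads[0] <= loads[1] else 1        # machine with the smaller load
        parts[behind].append(a)
        loads[behind] += a
        parts[1 - behind].append(b)
        loads[1 - behind] += b
    return parts, loads
```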

Kernel constraints - non-simultaneous availability

In the kernel partitioning problem, there are some m pre-specified jobs called kernels, and each kernel must be scheduled to a unique machine. An equivalent problem is scheduling when machines become available at different times: each machine i becomes available at some time ti ≥ 0 (the time ti can be thought of as the length of the kernel job).

A simple heuristic algorithm, called SLPT,[19] assigns each kernel to a different subset, and then runs the LPT algorithm.
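
A minimal sketch of SLPT under the machine-availability view (the function name and tie handling are illustrative): machine i starts with an initial load equal to its kernel length, or equivalently its ready time ti, and the remaining jobs are then placed by ordinary LPT.

```python
def slpt_schedule(kernel_times, processing_times):
    """SLPT sketch: pin one kernel per machine, then run LPT on the remaining jobs."""
    m = len(kernel_times)
    loads = list(kernel_times)                # machine i starts with its kernel / ready time
    assignment = [[k] for k in kernel_times]  # record the kernel as the machine's first job
    for job in sorted(processing_times, reverse=True):
        i = min(range(m), key=lambda k: loads[k])
        loads[i] += job
        assignment[i].append(job)
    return assignment, loads
```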

Lee[20] proves that SLPT attains an approximation ratio of (3m-1)/(2m) for the minimum largest sum. He then suggests running, in the second step, a modified version of LPT, and proves that it attains an approximation ratio of 4/3. For the objective of maximizing the smallest sum, an analogous LPT-based heuristic attains an approximation ratio of (2m-1)/(3m-2).[21]

Online settings

Often, the inputs come online, and their sizes become known only when they arrive. In this case, it is not possible to sort them in advance. List scheduling is a similar algorithm that takes a list in any order, not necessarily sorted. Its approximation ratio is (2m-1)/m = 2 - 1/m.

A more sophisticated adaptation of LPT to an online setting attains an approximation ratio of 3/2.[23]

Notes and References

  1. Graham, R. L. (1969). "Bounds on Multiprocessing Timing Anomalies". SIAM Journal on Applied Mathematics. 17 (2): 416–429. doi:10.1137/0117039.
  2. Xiao, Xin (2017). "A Direct Proof of the 4/3 Bound of LPT Scheduling Rule". Proceedings of the 2017 5th International Conference on Frontiers of Manufacturing Science and Measuring Technology (FMSMT 2017). Atlantis Press. pp. 486–489. doi:10.2991/fmsmt-17.2017.102. ISBN 978-94-6252-331-9. https://www.atlantis-press.com/proceedings/fmsmt-17/25875520
  3. Coffman, E. G.; Sethi, Ravi (1976). "A generalized bound on LPT sequencing". Proceedings of the 1976 ACM SIGMETRICS Conference on Computer Performance Modeling, Measurement and Evaluation (SIGMETRICS '76). Association for Computing Machinery. pp. 306–310. doi:10.1145/800200.806205.
  4. Chen, Bo (1993). "A note on LPT scheduling". Operations Research Letters. 14 (3): 139–142. doi:10.1016/0167-6377(93)90024-B.
  5. Deuermeyer, Bryan L.; Friesen, Donald K.; Langston, Michael A. (1982). "Scheduling to Maximize the Minimum Processor Finish Time in a Multiprocessor System". SIAM Journal on Algebraic and Discrete Methods. 3 (2): 190–196. doi:10.1137/0603019.
  6. Coffman, E. G.; Frederickson, G. N.; Lueker, G. S. (1984). "A Note on Expected Makespans for Largest-First Sequences of Independent Tasks on Two Processors". Mathematics of Operations Research. 9 (2): 260–266. doi:10.1287/moor.9.2.260.
  7. Frenk, J. B. G.; Rinnooy Kan, A. H. G. (1986). "The rate of convergence to optimality of the LPT rule". Discrete Applied Mathematics. 14 (2): 187–197. doi:10.1016/0166-218X(86)90060-0.
  8. Frenk, J. B. G.; Rinnooy Kan, A. H. G. (1987). "The Asymptotic Optimality of the LPT Rule". Mathematics of Operations Research. 12 (2): 241–254. doi:10.1287/moor.12.2.241.
  9. Alon, Noga; Azar, Yossi; Woeginger, Gerhard J.; Yadid, Tal (1998). "Approximation schemes for scheduling on parallel machines". Journal of Scheduling. 1 (1): 55–66. doi:10.1002/(SICI)1099-1425(199806)1:1<55::AID-JOS2>3.0.CO;2-J.
  10. Coffman, E. G.; Garey, M. R.; Johnson, D. S. (1987). "Bin packing with divisible item sizes". Journal of Complexity. 3 (4): 406–428. doi:10.1016/0885-064X(87)90009-4.
  11. Gonzalez, Teofilo; Ibarra, Oscar H.; Sahni, Sartaj (1977). "Bounds for LPT Schedules on Uniform Processors". SIAM Journal on Computing. 6 (1): 155–166. doi:10.1137/0206013.
  12. Mireault, Paul; Orlin, James B.; Vohra, Rakesh V. (1997). "A Parametric Worst Case Analysis of the LPT Heuristic for Two Uniform Machines". Operations Research. 45 (1): 116–125. doi:10.1287/opre.45.1.116.
  13. Koulamas, Christos; Kyparisis, George J. (2009). "A modified LPT algorithm for the two uniform parallel machine makespan minimization problem". European Journal of Operational Research. 196 (1): 61–68. doi:10.1016/j.ejor.2008.02.008.
  14. Kellerer, Hans; Woeginger, Gerhard (1993). "A tight bound for 3-partitioning". Discrete Applied Mathematics. 45 (3): 249–259. doi:10.1016/0166-218X(93)90013-E.
  15. Babel, Luitpold; Kellerer, Hans; Kotov, Vladimir (1998). "The m-partitioning problem". Mathematical Methods of Operations Research. 47 (1): 59–82. doi:10.1007/BF01193837.
  16. Dell'Amico, Mauro; Martello, Silvano (2001). "Bounds for the cardinality constrained P∥Cmax problem". Journal of Scheduling. 4 (3): 123–138. doi:10.1002/jos.68.
  17. Chen, Shi Ping; He, Yong; Lin, Guohui (2002). "3-Partitioning Problems for Maximizing the Minimum Load". Journal of Combinatorial Optimization. 6 (1): 67–80. doi:10.1023/A:1013370208101.
  18. Tsai, Li-Hui (1992). "Asymptotic Analysis of an Algorithm for Balanced Parallel Processor Scheduling". SIAM Journal on Computing. 21 (1): 59–64. doi:10.1137/0221007.
  19. Chen, S.-P.; He, Y.; Yao, E.-Y. (1996). "Three-partitioning containing kernels: Complexity and heuristic". Computing. 57 (3): 255–271. doi:10.1007/BF02247409.
  20. Lee, Chung-Yee (1991). "Parallel machines scheduling with nonsimultaneous machine available time". Discrete Applied Mathematics. 30 (1): 53–61. doi:10.1016/0166-218X(91)90013-M.
  21. Lin, Guo-Hui; Yao, En-Yu; He, Yong (1998). "Parallel machine scheduling to maximize the minimum load with nonsimultaneous machine available times". Operations Research Letters. 22 (2): 75–81. doi:10.1016/S0167-6377(97)00053-9.
  22. Shen, Lixin; Wang, Dan; Wang, Xiao-Yuan (2013). "Parallel-machine scheduling with non-simultaneous machine available time". Applied Mathematical Modelling. 37 (7): 5227–5232. doi:10.1016/j.apm.2012.09.053.
  23. Chen, Bo; Vestjens, Arjen P. A. (1997). "Scheduling on identical machines: How good is LPT in an on-line setting?". Operations Research Letters. 21 (4): 165–169. doi:10.1016/S0167-6377(97)00040-0.