Single-machine scheduling explained
Single-machine scheduling, or single-resource scheduling, is an optimization problem in computer science and operations research. We are given n jobs J_1, J_2, ..., J_n of varying processing times, which need to be scheduled on a single machine in a way that optimizes a certain objective, such as the throughput.
Single-machine scheduling is a special case of identical-machines scheduling, which is itself a special case of optimal job scheduling. Many problems, which are NP-hard in general, can be solved in polynomial time in the single-machine case.[1]
In the standard three-field notation for optimal job scheduling problems, the single-machine variant is denoted by 1 in the first field. For example, "1||∑C_j" is a single-machine scheduling problem with no constraints, where the goal is to minimize the sum of completion times.
The makespan-minimization problem 1||C_max, which is a common objective with multiple machines, is trivial with a single machine, since the makespan equals the sum of processing times regardless of the job order. Therefore, other objectives have been studied.[2]
Minimizing the sum of completion times
The problem 1||∑C_j aims to minimize the sum of completion times. It can be solved optimally by the Shortest Processing Time First rule (SPT): the jobs are scheduled by ascending order of their processing time p_j.
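As an illustrative sketch (not part of the original article), the SPT rule amounts to a single sort; the helper below is hypothetical and takes a plain list of processing times:

```python
def spt_sum_completion(times):
    """Schedule jobs in ascending order of processing time (SPT)
    and return the resulting sum of completion times."""
    total, clock = 0, 0
    for p in sorted(times):
        clock += p        # completion time of the job just scheduled
        total += clock
    return total

# Times 3, 1, 2 -> SPT order 1, 2, 3 -> completions 1, 3, 6 -> sum 10
print(spt_sum_completion([3, 1, 2]))  # -> 10
```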
The problem 1||∑w_j C_j aims to minimize the weighted sum of completion times. It can be solved optimally by the Weighted Shortest Processing Time First rule (WSPT): the jobs are scheduled by ascending order of the ratio p_j / w_j.
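A similar illustrative sketch for the WSPT rule (the helper name is hypothetical); jobs are (processing time, weight) pairs, sorted by the ratio p_j / w_j:

```python
def wspt_weighted_completion(jobs):
    """jobs: list of (p_j, w_j) pairs. Schedule by ascending p_j / w_j
    (WSPT) and return the weighted sum of completion times."""
    clock, total = 0, 0
    for p, w in sorted(jobs, key=lambda pw: pw[0] / pw[1]):
        clock += p
        total += w * clock
    return total

# Ratios: 3/1 = 3, 1/4 = 0.25, 2/2 = 1 -> order (1,4), (2,2), (3,1)
print(wspt_weighted_completion([(3, 1), (1, 4), (2, 2)]))  # -> 16
```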
The problem 1|chains|∑w_j C_j is a generalization of the above problem for jobs with dependencies in the form of chains. It can also be solved optimally by a suitable generalization of WSPT.
Minimizing the cost of lateness
The problem 1||L_max aims to minimize the maximum lateness. For each job j, there is a due date d_j. If the job is completed after its due date, it suffers lateness, defined as L_j = C_j − d_j.
1||L_max can be solved optimally by the Earliest Due Date First rule (EDD): the jobs are scheduled by ascending order of their deadline d_j.
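An illustrative sketch of the EDD rule (hypothetical helper; jobs are (processing time, due date) pairs):

```python
def edd_max_lateness(jobs):
    """jobs: list of (p_j, d_j) pairs. Schedule by Earliest Due Date
    (EDD) and return the maximum lateness max_j (C_j - d_j)."""
    clock, worst = 0, float("-inf")
    for p, d in sorted(jobs, key=lambda pd: pd[1]):
        clock += p
        worst = max(worst, clock - d)
    return worst

# EDD order by due date: (1,2), (2,6), (3,9); latenesses -1, -3, -3
print(edd_max_lateness([(2, 6), (1, 2), (3, 9)]))  # -> -1
```

A negative result means every job finishes before its due date.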
The problem 1|prec|h_max generalizes 1||L_max in two ways: first, it allows arbitrary precedence constraints on the jobs; second, it allows each job to have an arbitrary cost function h_j, which is a function of its completion time (lateness is a special case of a cost function). The maximum cost can be minimized by a greedy algorithm known as Lawler's algorithm.
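A sketch of Lawler's algorithm (the data structures and names are illustrative): the schedule is built from the back; at each step, among the jobs whose successors have all been scheduled, the job with the smallest cost at the current end time is placed last.

```python
def lawler(times, cost_fns, succ):
    """Minimize the maximum cost h_j(C_j) subject to precedence
    constraints (Lawler's algorithm).
    times: {job: p_j}; cost_fns: {job: h_j}, each h_j a function of
    the completion time; succ: {job: set of successor jobs}."""
    unscheduled = set(times)
    end = sum(times.values())     # completion time of the last position
    schedule = []
    while unscheduled:
        # A job may be placed last only if none of its successors remain.
        eligible = [j for j in unscheduled
                    if not (succ.get(j, set()) & unscheduled)]
        best = min(eligible, key=lambda j: cost_fns[j](end))
        schedule.append(best)
        unscheduled.remove(best)
        end -= times[best]
    schedule.reverse()
    return schedule

# Lateness costs h_j(C) = C - d_j; job 'a' must precede job 'b'.
times = {'a': 2, 'b': 1, 'c': 3}
costs = {'a': lambda C: C - 3, 'b': lambda C: C - 6, 'c': lambda C: C - 4}
print(lawler(times, costs, {'a': {'b'}}))  # -> ['a', 'c', 'b']
```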
The problem 1|r_j|L_max generalizes 1||L_max by allowing each job to have a release time r_j at which it becomes available for processing. The presence of release times means that, in some cases, it may be optimal to leave the machine idle, in order to wait for an important job that is not released yet. Minimizing maximum lateness in this setting is NP-hard, but in practice it can be solved using a branch-and-bound algorithm.
Maximizing the profit of earliness
In settings with deadlines, it is possible that, if the job is completed by its deadline, there is a profit p_j; otherwise, there is no profit. The goal is to maximize the total profit. Single-machine scheduling with deadlines is NP-hard; Sahni[3] presents both exact exponential-time algorithms and a polynomial-time approximation algorithm.
Maximizing the throughput
The problem 1||∑U_j aims to minimize the number of late jobs, regardless of the amount of lateness. It can be solved optimally by the Hodgson-Moore algorithm.[4] It can also be interpreted as maximizing the number of jobs that complete on time; this number is called the throughput.
The problem 1||∑w_j U_j aims to minimize the total weight of late jobs. It is NP-hard, since the special case in which all jobs have the same deadline (denoted by 1|d_j=d|∑w_j U_j) is equivalent to the Knapsack problem.
The problem 1|r_j|∑w_j U_j generalizes 1||∑w_j U_j
by allowing different jobs to have different release times. The problem is NP-hard. However, when all job lengths are equal, the problem can be solved in polynomial time. It has several variants:
- The weighted optimization variant, 1|r_j, p_j=p|∑w_j U_j, can be solved in time O(n^7).[5]
- The unweighted optimization variant, maximizing the number of jobs that finish on time, denoted 1|r_j, p_j=p|∑U_j, can be solved in time O(n^5) using dynamic programming, when all release times and deadlines are integers.[6] [7]
- The decision variant - deciding whether it is possible that all given jobs complete on time - can be solved by several algorithms,[8] the fastest of which runs in time O(n log n).[9]
Jobs can have execution intervals. For each job j, there is a processing time t_j and a start-time s_j, so it must be executed in the interval [s_j, s_j+t_j]. Since some of the intervals overlap, not all jobs can be completed. The goal is to maximize the number of completed jobs, that is, the throughput. More generally, each job may have several possible intervals, and each interval may be associated with a different profit. The goal is to choose at most one interval for each job, such that the total profit is maximized. For more details, see the page on interval scheduling.
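The Hodgson-Moore algorithm for 1||∑U_j mentioned above can be sketched as follows (an illustrative implementation, not from the article): scan jobs in due-date order and, whenever a due date is missed, discard the longest job scheduled so far.

```python
import heapq

def hodgson_moore(jobs):
    """jobs: list of (p_j, d_j) pairs. Return the maximum number of
    jobs that can complete by their due dates (the throughput)."""
    kept = []          # max-heap (values negated) of kept processing times
    clock = 0
    for p, d in sorted(jobs, key=lambda pd: pd[1]):
        heapq.heappush(kept, -p)
        clock += p
        if clock > d:                      # due date missed:
            clock += heapq.heappop(kept)   # drop the longest kept job
    return len(kept)

print(hodgson_moore([(1, 1), (2, 4), (2, 5), (4, 6)]))  # -> 3
```

Dropping the longest kept job frees the most time, which is why the greedy exchange preserves optimality.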
More generally, jobs can have time-windows, with both start-times and deadlines, which may be larger than the job length. Each job can be scheduled anywhere within its time-window. Bar-Noy, Bar-Yehuda, Freund, Naor and Schieber[10] present a (1-ε)/2 approximation.
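For the fixed-interval throughput problem described above (each job must run exactly in [s_j, s_j+t_j]), the classic earliest-finish-time greedy selects a maximum set of non-overlapping intervals; the sketch below is illustrative:

```python
def interval_throughput(intervals):
    """intervals: list of (start, end) execution windows.
    Pick intervals greedily by earliest finish time; this yields a
    maximum set of pairwise non-overlapping intervals."""
    count, last_end = 0, float("-inf")
    for start, end in sorted(intervals, key=lambda se: se[1]):
        if start >= last_end:   # fits after the last chosen interval
            count += 1
            last_end = end
    return count

print(interval_throughput([(0, 2), (1, 4), (2, 4), (4, 6)]))  # -> 3
```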
Jobs with non-constant length
Workers and machines often become tired after working for a certain amount of time, and this makes them slower when processing future jobs. On the other hand, workers and machines may learn how to work better, and this makes them faster when processing future jobs. In both cases, the length (processing-time) of a job is not constant, but depends on the jobs processed before it. In this setting, even minimizing the maximum completion time becomes non-trivial. There are two common ways to model the change in job length.
- The job length may depend on the start time of the job.[11] When the length is a weakly-increasing function of the start-time, it is called a deterioration effect; when it is weakly-decreasing, it is called a learning effect.
- The job length may depend on the sum of the normal processing times of previously-processed jobs. When the length is a weakly-increasing function of this sum, it is often called an aging effect.[12]
Start-time-based length
Cheng and Ding studied makespan minimization and maximum-lateness minimization when the actual length of job j scheduled at time s_j is given by
p̂_j(s_j) = p_j − b ⋅ s_j,
where p_j is the normal length of j.
They proved the following results:
- When jobs can have arbitrary deadlines, the problems are strongly NP-hard, by reduction from 3-partition.[13]
- When jobs can have one of two deadlines, the problems are NP-complete, by reduction from partition.[14]
- When jobs can have arbitrary release times, the problems are strongly NP-hard, by reduction from the problem with arbitrary deadlines.[15]
- When jobs can have one of two release times, either 0 or R, the problems are NP-complete.
Kubiak and van de Velde[16] studied makespan minimization when the fatigue starts only after a common due-date d. That is, the actual length of job j scheduled at time s_j is given by
p̂_j(s_j) = max(p_j, p_j + b_j ⋅ (s_j − d)).
So, if the job starts before d, its length does not change; if it starts after d, its length grows by a job-dependent rate. They show that the problem is NP-hard, give a pseudopolynomial-time algorithm, and give a branch-and-bound algorithm that solves instances with up to 100 jobs in reasonable time. They also study bounded deterioration, in which the job length stops growing if the job starts after a common maximum deterioration date D > d. For this case, they give two pseudopolynomial-time algorithms.
Cheng, Ding and Lin surveyed several studies of a deterioration effect, where the length of job j scheduled at time sj is either linear or piecewise linear, and the change rate can be positive or negative.
Sum-of-processing-times-based length
The aging effect has two types:
- In the position-based aging model, the processing time of a job depends on the number of jobs processed before it, that is, on its position in the sequence.[17]
- In the sum-of-processing-time-based aging model, the processing time of a job is a weakly-increasing function of the sum of the normal (i.e., unaffected by aging) processing times of the jobs processed before it.[18]
Wang, Wang, Wang and Wang[19] studied a sum-of-processing-time-based aging model, where the processing time of job j scheduled at position v is given by
p̂_j(π,v) = p_j ⋅ (1 + Σ_{l=1}^{v−1} p_{π(l)})^α,
where π(l) is the job scheduled at position l, and α is the "aging characteristic" of the machine. In this model, the maximum completion time (makespan) of the permutation π is:
C_max(π) = Σ_{l=1}^{n} p̂_{π(l)}(π,l).
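As an illustrative numeric check (assuming the aging model exactly as written above), the makespan under this model can be evaluated for a given permutation of normal processing times:

```python
def aged_makespan(perm, alpha):
    """perm: normal processing times p_j listed in schedule order.
    The actual length of the v-th job is p_j * (1 + sum of the
    earlier normal lengths) ** alpha; returns the makespan C_max."""
    makespan, prefix = 0.0, 0.0
    for p in perm:
        makespan += p * (1 + prefix) ** alpha   # actual (aged) length
        prefix += p                             # sum of normal lengths
    return makespan

# With alpha = 0 there is no aging, so the makespan is just the sum.
print(aged_makespan([2, 1, 3], 0))  # -> 6.0
```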
Rudek[20] generalized the model in two ways: allowing the fatigue to be different from the processing time, and allowing a job-dependent aging characteristic:
p̂_j(π,v) = p_j ⋅ (1 + Σ_{l=1}^{v−1} f(p_{π(l)}))^{α_j}.
Here, f is an increasing function that describes the dependence of the fatigue on the processing time, and α_j is the aging characteristic of job j. For this model, he proved the following results:
- Minimizing the maximum completion time and minimizing the maximum lateness are polynomial-time solvable.
- Minimizing the maximum completion time and minimizing the maximum lateness are strongly NP-hard if some jobs have deadlines.
See also
Many solution techniques have been applied to solving single-machine scheduling problems.
Notes and References
- Lawler, E. L.; Lenstra, J. K.; Rinnooy Kan, A. H. G.; Shmoys, D. B. (1993). "Chapter 9: Sequencing and scheduling: Algorithms and complexity". Handbooks in Operations Research and Management Science 4: 445–522. doi:10.1016/S0927-0507(05)80189-6.
- Grinshpoun, Tal (2020). "Subjects in Scheduling". www.youtube.com. Retrieved 2021-09-12.
- Sahni, Sartaj K. (1976). "Algorithms for Scheduling Independent Tasks". Journal of the ACM 23(1): 116–127. doi:10.1145/321921.321934.
- Lawler, E. L. (1994). "Knapsack-like scheduling problems, the Moore-Hodgson algorithm and the 'tower of sets' property". Mathematical and Computer Modelling 20(2): 91–106. doi:10.1016/0895-7177(94)90209-7.
- Baptiste, P. (1999). "Polynomial time algorithms for minimizing the weighted number of late jobs on a single machine with equal processing times". Journal of Scheduling 2(6): 245–252. doi:10.1002/(SICI)1099-1425(199911/12)2:6<245::AID-JOS28>3.0.CO;2-5.
- Chrobak, M.; Dürr, C.; Jawor, W.; Kowalik, Ł.; Kurowski, M. (2006). "A Note on Scheduling Equal-Length Jobs to Maximize Throughput". Journal of Scheduling 9(1): 71–73. doi:10.1007/s10951-006-5595-4.
- Chrobak, M.; Dürr, C.; Jawor, W.; Kowalik, Ł.; Kurowski, M. (2021). "A Note on Scheduling Equal-Length Jobs to Maximize Throughput". arXiv:cs/0410046.
- Simons, B. (1978). "A fast algorithm for single processor scheduling". Proceedings of the 19th Annual Symposium on Foundations of Computer Science (SFCS '78): 246–252. doi:10.1109/SFCS.1978.4.
- Garey, M. R.; Johnson, D. S.; Simons, B. B.; Tarjan, R. E. (1981). "Scheduling Unit-Time Tasks with Arbitrary Release Times and Deadlines". SIAM Journal on Computing 10(2): 256–269. doi:10.1137/0210018.
- Bar-Noy, A.; Bar-Yehuda, R.; Freund, A.; Naor, J.; Schieber, B. (2001). "A unified approach to approximating resource allocation and scheduling". Journal of the ACM 48(5): 1069–1090. doi:10.1145/502102.502107.
- Cheng, T. C. E.; Ding, Q.; Lin, B. M. T. (2004). "A concise survey of scheduling with time-dependent processing times". European Journal of Operational Research 152(1): 1–13. doi:10.1016/S0377-2217(02)00909-8.
- Chang, P.-C.; Chen, S.-H.; Mani, V. (2009). "A note on due-date assignment and single machine scheduling with a learning/aging effect". International Journal of Production Economics 117(1): 142–149. doi:10.1016/j.ijpe.2008.10.004.
- Cheng, T. C. E.; Ding, Q. (1999). "The time dependent machine makespan problem is strongly NP-complete". Computers & Operations Research 26(8): 749–754. doi:10.1016/S0305-0548(98)00093-8.
- Cheng, T. C. E.; Ding, Q. (1998). "The complexity of single machine scheduling with two distinct deadlines and identical decreasing rates of processing times". Computers & Mathematics with Applications 35(12): 95–100. doi:10.1016/S0898-1221(98)00099-6.
- Cheng, T. C. E.; Ding, Q. (1998). "The complexity of scheduling starting time dependent tasks with release times". Information Processing Letters 65(2): 75–79. doi:10.1016/S0020-0190(97)00195-6.
- Kubiak, W.; van de Velde, S. (1998). "Scheduling deteriorating jobs to minimize makespan". Naval Research Logistics 45(5): 511–523. doi:10.1002/(SICI)1520-6750(199808)45:5<511::AID-NAV5>3.0.CO;2-6.
- Gawiejnowicz, S. (1996). "A note on scheduling on a single processor with speed dependent on a number of executed jobs". Information Processing Letters 57(6): 297–300. doi:10.1016/0020-0190(96)00021-X.
- Gordon, V. S.; Potts, C. N.; Strusevich, V. A.; Whitehead, J. D. (2008). "Single machine scheduling models with deterioration and learning: handling precedence constraints via priority generation". Journal of Scheduling 11(5): 357–370. doi:10.1007/s10951-008-0064-x.
- Wang, J.-B.; Wang, L.-Y.; Wang, D.; Wang, X.-Y. (2009). "Single-machine scheduling with a time-dependent deterioration". The International Journal of Advanced Manufacturing Technology 43(7): 805–809. doi:10.1007/s00170-008-1760-6.
- Rudek, R. (2012). "Some single-machine scheduling problems with the extended sum-of-processing-time-based aging effect". The International Journal of Advanced Manufacturing Technology 59(1): 299–309. doi:10.1007/s00170-011-3481-5.