Parallel task scheduling
Parallel task scheduling (also called parallel job scheduling or parallel processing scheduling) is an optimization problem in computer science and operations research. It is a variant of optimal job scheduling. In a general job scheduling problem, we are given n jobs J1, J2, ..., Jn of varying processing times, which need to be scheduled on m machines while trying to minimize the makespan - the total length of the schedule (that is, when all the jobs have finished processing). In the specific variant known as parallel-task scheduling, all machines are identical. Each job j has a length parameter pj and a size parameter qj, and it must run for exactly pj time-steps on exactly qj machines in parallel.
Veltman et al.[1] and Drozdowski[2] denote this problem by P|sizej|Cmax in the three-field notation introduced by Graham et al.[3] Here P means that there are several identical machines running in parallel; sizej means that each job has a size parameter; Cmax means that the goal is to minimize the maximum completion time. Some authors use P|mj|Cmax instead. Note that the problem of parallel-machines scheduling is a special case of parallel-task scheduling in which qj = 1 for all j, that is, each job runs on a single machine.
The origins of this problem formulation can be traced back to 1960.[4] For this problem, there exists no polynomial-time approximation algorithm with a ratio smaller than 3/2 unless P = NP.
Definition
There is a set J of n jobs and m identical machines. Each job j has a processing time pj (also called the length of j), and requires the simultaneous use of qj machines during its execution (also called the size or the width of j).
A schedule assigns each job j a starting time sj and a set Mj of qj machines on which it is processed. A schedule is feasible if each machine executes at most one job at any given time. The objective of the problem denoted by P|sizej|Cmax is to find a schedule of minimum length Cmax = max_j (sj + pj), also called the makespan of the schedule.
A sufficient condition for the feasibility of a schedule is the following:
$\sum_{j \,:\, s_j \le t < s_j + p_j} q_j \;\le\; m \qquad \forall t \in \{s_1, \dots, s_n\}.$
If this property is satisfied for all starting times, a feasible schedule can be generated by assigning free machines to the jobs at each time, starting with time t = 0.[5] [6] Furthermore, the number of machine intervals used by jobs and idle intervals at each time step can be bounded polynomially in the number of jobs. Here a machine interval is a set of consecutive machines of maximal cardinality such that all machines in this set are processing the same job; it is completely specified by the indices of its first and last machine. Therefore, it is possible to obtain a compact way of encoding the output with polynomial size.
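The sufficient condition above can be checked directly. The sketch below assumes jobs are given as (sj, pj, qj) triples; the function name and input format are illustrative, not from any standard library.

```python
# Sketch: check the sufficient feasibility condition for a parallel-task
# schedule. Each job is a triple (s_j, p_j, q_j): start, length, size.

def is_feasible(jobs, m):
    """At every starting time t, the total size of the jobs running
    at t must not exceed the number of machines m."""
    for t, _, _ in jobs:                  # t ranges over {s_1, ..., s_n}
        demand = sum(q for s, p, q in jobs if s <= t < s + p)
        if demand > m:
            return False
    return True

jobs = [(0, 2, 2), (0, 3, 1), (2, 1, 3)]  # (start, length, size)
print(is_feasible(jobs, 4))               # True: demands at the starts are 3, 3, 4
```

With only 3 machines the same schedule is infeasible, since at time 2 the running jobs together demand 4 machines.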
Computational hardness
This problem is NP-hard even when there are only two machines and the sizes of all jobs are qj = 1 (i.e., each job needs to run on only a single machine). This special case, denoted by P2||Cmax, is a variant of the partition problem, which is known to be NP-hard.
When the number of machines m is at most 3, that is, for the variants P2|sizej|Cmax and P3|sizej|Cmax, there exists a pseudo-polynomial time algorithm which solves the problem exactly.[7] In contrast, when the number of machines is at least 4, that is, for the variants Pm|sizej|Cmax with m >= 4, the problem is strongly NP-hard[8] (this result improved a previous result showing strong NP-hardness for m >= 5).
If the number of machines is not bounded by a constant, then there can be no approximation algorithm with an approximation ratio smaller than 3/2 unless P = NP. This holds even for the special case in which the processing time of all jobs is pj = 1, since this special case is equivalent to the bin packing problem: each time step corresponds to a bin, m is the bin size, each job corresponds to an item of size qj, and minimizing the makespan corresponds to minimizing the number of bins.
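The bin packing view of the unit-time case can be illustrated with a first-fit heuristic (a sketch only; first-fit is not an optimal bin-packing solver, so the resulting makespan is merely an upper bound on the optimum).

```python
# Sketch of the reduction for unit-time jobs (p_j = 1): each time step is a
# bin of capacity m and each job is an item of size q_j, so the makespan
# equals the number of bins used.

def first_fit_makespan(sizes, m):
    bins = []                      # bins[t] = machine load at time step t
    for q in sizes:
        for t, load in enumerate(bins):
            if load + q <= m:      # job fits into time step t
                bins[t] += q
                break
        else:
            bins.append(q)         # open a new time step (bin)
    return len(bins)

print(first_fit_makespan([2, 2, 2, 2, 2, 2], m=4))  # 3 time steps
```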
Variants
Several variants of this problem have been studied. The following variants have also been considered in combination with each other.
Contiguous jobs: In this variant, the machines have a fixed linear order M1, ..., Mm. Instead of assigning a job to an arbitrary subset of qj machines, each job has to be assigned to a contiguous interval of machines. This problem corresponds to the problem formulation of the strip packing problem.
Multiple platforms: In this variant, the set of machines is partitioned into independent platforms. A scheduled job can only use the machines of one platform and is not allowed to span over multiple platforms when processed.
Moldable jobs: In this variant, each job j has a set Dj of feasible machine-counts. For each count d in Dj, the job can be processed on d machines in parallel, and in that case its processing time is pj(d). To schedule a job j, an algorithm has to choose a machine count d in Dj and assign j to a starting time sj and to d machines during the time interval [sj, sj + pj(d)). A usual assumption for this kind of problem is that the total workload of a job, defined as d * pj(d), is non-decreasing in the number of machines (while the processing time pj(d) itself is non-increasing).
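This monotony convention can be made concrete with a minimal sketch, assuming the processing times of one job are given as a hypothetical dict mapping machine counts d to pj(d).

```python
# Sketch: check the usual monotony assumption for a moldable job.
# p is a hypothetical dict {d: p_j(d)} of feasible machine counts.

def is_monotone(p):
    ds = sorted(p)
    # processing time p_j(d) should not increase with more machines
    times_ok = all(p[a] >= p[b] for a, b in zip(ds, ds[1:]))
    # total work d * p_j(d) should not decrease (sublinear speedup)
    work_ok = all(a * p[a] <= b * p[b] for a, b in zip(ds, ds[1:]))
    return times_ok and work_ok

p = {1: 10, 2: 6, 4: 4}    # work: 10, 12, 16 -- non-decreasing
print(is_monotone(p))      # True
```

A job with a superlinear speedup, e.g. {1: 10, 2: 4}, violates the work condition and would be rejected.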
Release dates: In this variant, denoted by P|rj,sizej|Cmax, not all jobs are available at time 0; each job j becomes available at a fixed and known release time rj and must be scheduled after that time.
Preemption: In this variant, denoted by P|sizej,pmtn|Cmax, it is possible to interrupt jobs that are already running and schedule other jobs that become available at that time.
Algorithms
The list scheduling algorithm by Garey and Graham[9] has an absolute approximation ratio of 2, as pointed out by Turek et al.[10] and Ludwig and Tiwari.[11] Feldmann, Sgall and Teng[12] observed that the length of a non-preemptive schedule produced by the list scheduling algorithm is actually at most (2 - 1/m) times the optimum preemptive makespan.
A polynomial-time approximation scheme (PTAS) for the case when the number m of processors is constant, denoted by Pm|sizej|Cmax, was presented by Amoura et al.[13] and Jansen et al.[14] Later, Jansen and Thöle found a PTAS for the case where the number of processors is polynomially bounded in the number of jobs. In this algorithm, the number of machines appears polynomially in the time complexity. Since, in general, the number of machines appears only logarithmically in the size of the instance, this algorithm is a pseudo-polynomial time approximation scheme as well. A (3/2 + ε)-approximation was later given by Jansen,[15] which closes the gap to the lower bound of 3/2, except for an arbitrarily small ε.
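List scheduling can be sketched as follows: a simplified greedy in the spirit of Garey and Graham's algorithm (not their exact formulation), which starts every job in list order that fits on the currently free machines, and advances time at each completion event.

```python
import heapq

def list_schedule(jobs, m):
    """Greedy list scheduling for parallel tasks.
    jobs: list of (p_j, q_j) pairs; returns the greedy makespan."""
    free = m                        # currently idle machines
    t = 0.0
    events = []                     # min-heap of (finish_time, q_j)
    pending = list(jobs)
    while pending or events:
        # start every pending job (in list order) that currently fits
        still = []
        for p, q in pending:
            if q <= free:
                free -= q
                heapq.heappush(events, (t + p, q))
            else:
                still.append((p, q))
        pending = still
        # advance time to the next completion and release its machines
        t, q = heapq.heappop(events)
        free += q
    return t

print(list_schedule([(3, 2), (2, 3), (2, 2)], m=4))  # 5.0
```

In the example, jobs 1 and 3 run first (using all 4 machines), and the size-3 job can only start once both have finished, giving makespan 5.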
Differences between contiguous and non-contiguous jobs
Given an instance of the parallel task scheduling problem, the optimal makespan can differ depending on whether the machines assigned to a job must be contiguous. If the jobs can be scheduled on non-contiguous machines, the optimal makespan can be smaller than in the case that they have to be scheduled on contiguous ones. The difference between contiguous and non-contiguous schedules was first demonstrated in 1992[16] on a small instance for which the optimal non-contiguous makespan is strictly smaller than the optimal contiguous one.
Błądek et al.
[17] studied these so-called c/nc-differences and proved the following points:
- For a c/nc-difference to arise, there must be at least three tasks with qj >= 2.
- For a c/nc-difference to arise, there must be at least three tasks with pj >= 2.
- For a c/nc-difference to arise, at least m >= 4 processors are required (and there exists an instance with a c/nc-difference with m = 4).
- For a c/nc-difference to arise, the length of the optimal non-contiguous schedule must be at least 4.
- The maximal c/nc-difference (the ratio between the optimal contiguous and the optimal non-contiguous makespan) is at least 5/4 and at most 2.
- Deciding whether there is a c/nc-difference in a given instance is NP-complete.
Furthermore, they proposed the following two conjectures, which remain unproven:
- For a c/nc-difference to arise, at least 7 tasks are required.
- The maximal c/nc-difference is exactly 5/4.
Related problems
There are related scheduling problems in which each job consists of several operations, which must be executed in sequence (rather than in parallel). These are the problems of open shop scheduling, flow shop scheduling and job shop scheduling.
References
- Veltman, B.; Lageweg, B. J.; Lenstra, J. K. (1990). "Multiprocessor scheduling with communication delays". Parallel Computing. 16 (2): 173–182. doi:10.1016/0167-8191(90)90056-F.
- Drozdowski, Maciej (2009). Scheduling for Parallel Processing. Computer Communications and Networks. doi:10.1007/978-1-84882-310-5. ISBN 978-1-84882-309-9.
- Graham, R. L.; Lawler, E. L.; Lenstra, J. K.; Rinnooy Kan, A. H. G. (1979). "Optimization and Approximation in Deterministic Sequencing and Scheduling: a Survey". Proceedings of the Advanced Research Institute on Discrete Optimization and Systems Applications. Elsevier. (5): 287–326.
- Codd, E. F. (1960). "Multiprogram scheduling". Communications of the ACM. 3 (6): 347–350. doi:10.1145/367297.367317.
- Johannes, Berit (2006). "Scheduling parallel jobs to minimize the makespan". Journal of Scheduling. 9 (5): 433–452. doi:10.1007/s10951-006-8497-6.
- Jansen, Klaus; Thöle, Ralf (2010). "Approximation Algorithms for Scheduling Parallel Jobs". SIAM Journal on Computing. 39 (8): 3571–3615. doi:10.1137/080736491.
- Du, Jianzhong; Leung, Joseph Y.-T. (1989). "Complexity of Scheduling Parallel Task Systems". SIAM Journal on Discrete Mathematics. 2 (4): 473–487. doi:10.1137/0402042.
- Henning, Sören; Jansen, Klaus; Rau, Malin; Schmarje, Lars (2020). "Complexity and Inapproximability Results for Parallel Task Scheduling and Strip Packing". Theory of Computing Systems. 64 (1): 120–140. arXiv:1705.04587. doi:10.1007/s00224-019-09910-6.
- Garey, M. R.; Graham, R. L. (1975). "Bounds for Multiprocessor Scheduling with Resource Constraints". SIAM Journal on Computing. 4 (2): 187–200. doi:10.1137/0204015.
- Turek, John; Wolf, Joel L.; Yu, Philip S. (1992). "Approximate algorithms scheduling parallelizable tasks". Proceedings of the Fourth Annual ACM Symposium on Parallel Algorithms and Architectures. doi:10.1145/140901.141909.
- Ludwig, Walter; Tiwari, Prasoon (1994). "Scheduling malleable and nonmalleable parallel tasks". Proceedings of the Fifth Annual ACM-SIAM Symposium on Discrete Algorithms (SODA). pp. 167–176.
- Feldmann, Anja; Sgall, Jiří; Teng, Shang-Hua (1994). "Dynamic scheduling on parallel machines". Theoretical Computer Science. 130 (1): 49–72. doi:10.1016/0304-3975(94)90152-X.
- Amoura, Abdel Krim; Bampis, Evripidis; Kenyon, Claire; Manoussakis, Yannis (2002). "Scheduling Independent Multiprocessor Tasks". Algorithmica. 32 (2): 247–261. doi:10.1007/s00453-001-0076-9.
- Jansen, Klaus; Porkolab, Lorant (2002). "Linear-Time Approximation Schemes for Scheduling Malleable Parallel Tasks". Algorithmica. 32 (3): 507–520. doi:10.1007/s00453-001-0085-8.
- Jansen, Klaus (2012). "A (3/2+ε) approximation algorithm for scheduling moldable and non-moldable parallel tasks". Proceedings of the Twenty-Fourth Annual ACM Symposium on Parallelism in Algorithms and Architectures. doi:10.1145/2312005.2312048.
- Turek, John; Wolf, Joel L.; Yu, Philip S. (1992). "Approximate algorithms scheduling parallelizable tasks". Proceedings of the Fourth Annual ACM Symposium on Parallel Algorithms and Architectures. doi:10.1145/140901.141909.
- Błądek, Iwo; Drozdowski, Maciej; Guinand, Frédéric; Schepler, Xavier (2015). "On contiguous and non-contiguous parallel task scheduling". Journal of Scheduling. 18 (5): 487–495. doi:10.1007/s10951-015-0427-z.