In computer science, rate-monotonic scheduling (RMS)[1] is a priority assignment algorithm used in real-time operating systems (RTOS) with a static-priority scheduling class.[2] The static priorities are assigned according to the cycle duration of the job, so a shorter cycle duration results in a higher job priority.
These operating systems are generally preemptive and have deterministic guarantees with regard to response times. Rate monotonic analysis is used in conjunction with those systems to provide scheduling guarantees for a particular application.
A simple version of rate-monotonic analysis assumes that threads have the following properties:

- No resource sharing (processes do not share resources, e.g. a hardware resource, a queue, or any kind of semaphore blocking or non-blocking busy-waits)
- Deterministic deadlines exactly equal to periods
- Static priorities (the task with the highest static priority that is runnable immediately preempts all other tasks)
- Static priorities assigned according to the rate monotonic conventions (tasks with shorter periods/deadlines are given higher priorities)
- Context switch times and other thread operations are free and have no impact on the model
It is a mathematical model that contains a calculated simulation of periods in a closed system, where round-robin and time-sharing schedulers fail to meet the scheduling needs otherwise. Rate monotonic scheduling looks at a run modeling of all threads in the system and determines how much time is needed to meet the guarantees for the set of threads in question.
The rate-monotonic priority assignment is optimal under the given assumptions, meaning that if any static-priority scheduling algorithm can meet all the deadlines, then the rate-monotonic algorithm can too. The deadline-monotonic scheduling algorithm is also optimal with equal periods and deadlines, in fact in this case the algorithms are identical; in addition, deadline monotonic scheduling is optimal when deadlines are less than periods.[3] For the task model in which deadlines can be greater than periods, Audsley's algorithm endowed with an exact schedulability test for this model finds an optimal priority assignment.
Liu and Layland (1973) proved that for a set of periodic tasks with unique periods, a feasible schedule that will always meet deadlines exists if the CPU utilization is below a specific bound (depending on the number of tasks). The schedulability test for RMS is:
U = \sum_{i=1}^{n} U_i = \sum_{i=1}^{n} \frac{C_i}{T_i} \leq n\left(2^{1/n} - 1\right)
where U is the utilization factor, C_i is the computation time for process i, T_i is the release period (with deadline one period later) for process i, and n is the number of processes to be scheduled. For example, U ≤ 2(2^{1/2} − 1) ≈ 0.8284 for two processes. When the number of processes tends towards infinity, this expression will tend towards:
\lim_{n \to \infty} n\left(\sqrt[n]{2} - 1\right) = \ln 2 \approx 0.693147\ldots
Therefore, a rough estimate when n ≥ 10 is that RMS can meet all deadlines if total CPU utilization is at most 69.3%; the remaining capacity can be dedicated to lower-priority, non-real-time tasks.
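The Liu and Layland test can be sketched in a few lines of Python. The function names and the example task set below are illustrative, not from the text; each task is an assumed (C_i, T_i) pair.

```python
def liu_layland_bound(n):
    """Least upper bound n(2^(1/n) - 1) on total utilization for n tasks."""
    return n * (2 ** (1.0 / n) - 1)

def rms_sufficient(tasks):
    """tasks: list of (C_i, T_i) pairs. True if the sufficient test passes."""
    u = sum(c / t for c, t in tasks)
    return u <= liu_layland_bound(len(tasks))

# For two processes the bound is 2(sqrt(2) - 1):
print(round(liu_layland_bound(2), 4))  # 0.8284
```

Note that this is only a sufficient condition: a task set failing it may still be schedulable, which is why the tighter tests below exist.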
In practice, for the i-th process, C_i should represent the worst-case (longest) computation time and T_i the worst-case (shortest) interval between releases.
In queueing theory, T_i is called the interarrival time and C_i is called the service time. These two parameters are often specified as rates:

\lambda_i = \frac{1}{T_i}

\mu_i = \frac{1}{C_i}

The utilization for each task, denoted \rho_i, is then:

\rho_i = \frac{\lambda_i}{\mu_i} = \frac{C_i}{T_i} = U_i
Liu and Layland noted that this bound may be relaxed to the maximum possible value of 1.0 if, for every pair of tasks with T_m > T_i, i = 1 … m−1, the longer period T_m is an exact integer multiple of the shorter period T_i. Such a task set is said to be harmonic, since every period is an integer multiple of the shortest period T_1. An example is [T_1, T_2, T_3, T_4] = [1, 3, 6, 12].
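The harmonic property can be checked by sorting the periods and verifying that each one divides the next (divisibility along the sorted chain implies it for every pair). A minimal sketch, with the example periods from above:

```python
def is_harmonic(periods):
    """True if, after sorting, each period is an integer multiple
    of its predecessor (hence of every shorter period)."""
    p = sorted(periods)
    return all(b % a == 0 for a, b in zip(p, p[1:]))

print(is_harmonic([1, 3, 6, 12]))  # True
print(is_harmonic([1, 3, 5, 12]))  # False (5 is not a multiple of 3)
```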
Kuo and Mok[4] showed that for a task set made up of harmonic task subsets (known as harmonic chains), the least upper bound test becomes:
U = \sum_{i=1}^{n} \frac{C_i}{T_i} \leq K\left(2^{1/K} - 1\right)

where K is the number of harmonic chains in the task set. When K = n (no two periods are harmonically related), this reduces to the Liu and Layland bound; when K = 1 (all tasks form a single harmonic chain), the bound becomes 1.0.
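A sketch of the Kuo and Mok bound as a function of K (the caller is assumed to have already grouped the tasks into harmonic chains):

```python
def kuo_mok_bound(k):
    """Least upper bound K(2^(1/K) - 1) for a task set of K harmonic chains."""
    return k * (2 ** (1.0 / k) - 1)

# K = 1: all periods harmonically related, bound is the full CPU.
print(kuo_mok_bound(1))            # 1.0
# K = 2: two harmonic chains.
print(round(kuo_mok_bound(2), 3))  # 0.828
```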
It has been shown that a randomly generated periodic task system will usually meet all deadlines when the utilization is 88% or less.[5] However, this depends on knowing the exact task statistics (periods, deadlines), which cannot be guaranteed for all task sets; in some cases the authors found that the utilization reached the least upper bound presented by Liu and Layland.
The hyperbolic bound is a tighter sufficient condition for schedulability than the one presented by Liu and Layland:
\prod_{i=1}^{n} (U_i + 1) \leq 2
In many practical applications, resources are shared and the unmodified RMS will be subject to priority inversion and deadlock hazards. In practice, this is solved by disabling preemption or by priority inheritance. Alternative methods are to use lock-free algorithms or avoid the sharing of a mutex/semaphore across threads with different priorities. This is so that resource conflicts cannot result in the first place.
Mechanisms used in practice include:

- the OS_ENTER_CRITICAL and OS_EXIT_CRITICAL primitives that lock CPU interrupts in a real-time kernel (e.g. MicroC/OS-II)
- the splx family of primitives, which nest the locking of device interrupts (FreeBSD 5.x/6.x)

Priority inheritance algorithms can be characterized by two parameters. First, is the inheritance lazy (only when essential) or immediate (boost priority before there is a conflict)? Second, is the inheritance optimistic (boost a minimum amount) or pessimistic (boost by more than the minimum amount)?
| pessimistic | optimistic |
---|---|---|
immediate | OS_ENTER_CRITICAL / OS_EXIT_CRITICAL | splx, highest locker |
lazy | priority ceiling protocol, basic priority inheritance protocol | |
In practice there is no mathematical difference (in terms of the Liu–Layland system utilization bound) between the lazy and immediate algorithms; the immediate algorithms are more efficient to implement, so they are the ones used by most practical systems.
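The lazy, optimistic variant (basic priority inheritance) can be illustrated with a toy model: the lock holder's priority is raised only when a higher-priority task actually blocks on the lock, and only to that task's priority. This is a sketch of the idea, not a real kernel; the class and method names are invented for illustration, and higher numbers mean higher priority here.

```python
class Task:
    def __init__(self, name, priority):
        self.name = name
        self.base = priority       # assigned (RMS) priority
        self.priority = priority   # effective priority, may be boosted

class PIMutex:
    """Toy mutex with basic (lazy, optimistic) priority inheritance."""
    def __init__(self):
        self.holder = None

    def acquire(self, task):
        if self.holder is None:
            self.holder = task
            return True
        # Contention: boost the holder only now (lazy), and only up to
        # the blocked task's priority (optimistic, minimum boost).
        if task.priority > self.holder.priority:
            self.holder.priority = task.priority
        return False  # in a real kernel the caller would block here

    def release(self, task):
        task.priority = task.base  # drop any inherited priority
        self.holder = None

low, high = Task("low", 1), Task("high", 3)
m = PIMutex()
m.acquire(low)
m.acquire(high)      # high contends; low inherits priority 3
print(low.priority)  # 3
m.release(low)
print(low.priority)  # 1
```

An immediate, pessimistic scheme (e.g. OS_ENTER_CRITICAL) would instead boost unconditionally, before any conflict, to the highest possible level.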
An example of the use of basic priority inheritance is the "Mars Pathfinder reset bug",[9][10] which was fixed on Mars by changing the creation flags for the semaphore so as to enable priority inheritance.
All interrupt service routines (ISRs), whether or not they have a hard real-time deadline, should be included in RMS analysis to determine schedulability in cases where ISRs have priorities above all scheduler-controlled tasks. An ISR may already be appropriately prioritized under RMS rules if its processing period is shorter than that of the shortest non-ISR process. However, an ISR with a period/deadline longer than any non-ISR process period with a critical deadline results in a violation of RMS and prevents the use of the calculated bounds for determining schedulability of a task set.
One method for mitigating a mis-prioritized ISR is to adjust the analysis by reducing the ISR's period to equal the shortest period, if possible. Imposing this shorter period results in prioritization that conforms to RMS, but also in a higher utilization factor for the ISR and therefore for the total utilization factor, which may still be below the allowable bound, in which case schedulability can be proven. As an example, consider a hardware ISR with computation time C_isr = 0.5 ms and period T_isr = 4 ms, in a system whose shortest non-ISR period is T_1 = 1 ms. Treating the ISR as if its period were 1 ms raises its utilization from U_isr = 0.5 ms/4 ms = 0.125 to 0.5 ms/1 ms = 0.5, and this larger value must be used in the analysis.
Another method for mitigating a mis-prioritized ISR is to use the ISR to only set a new semaphore/mutex while moving the time-intensive processing to a new process that has been appropriately prioritized using RMS and will block on the new semaphore/mutex. When determining schedulability, a margin of CPU utilization due to ISR activity should be subtracted from the least upper bound. ISRs with negligible utilization may be ignored.
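The second mitigation can be sketched with ordinary threading primitives: the "ISR" does nothing but signal a semaphore, and a worker thread (which in a real system would be given an RMS-conforming priority) blocks on that semaphore and performs the time-intensive processing. This is a simulation of the pattern, not real interrupt handling.

```python
import threading

sem = threading.Semaphore(0)   # signaled by the ISR, waited on by the worker
results = []

def isr():
    # In a real system this is the interrupt handler: minimal work only.
    sem.release()

def worker():
    sem.acquire()                 # block until the ISR signals
    results.append("processed")   # time-intensive processing happens here,
                                  # at a scheduler-controlled priority

t = threading.Thread(target=worker)
t.start()
isr()
t.join()
print(results)  # ['processed']
```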
Process | Computation time C | Release period T | Priority |
---|---|---|---|
P1 | 1 | 8 | 2 |
P2 | 2 | 5 | 1 |
P3 | 2 | 10 | 3 |
Under RMS, P2 has the highest release rate (i.e. the shortest release period) and so would have the highest priority, followed by P1 and finally P3.
The utilization will be:
U = \frac{1}{8} + \frac{2}{5} + \frac{2}{10} = 0.725
The sufficient condition for 3 processes is:

U_{lub} = 3\left(2^{1/3} - 1\right) \approx 0.7797

Because U = 0.725 ≤ U_{lub}, the task set is certainly schedulable by RMS.
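The arithmetic in Example 1 can be checked with a short script (the bound helper is repeated so the snippet is self-contained):

```python
def liu_layland_bound(n):
    return n * (2 ** (1.0 / n) - 1)

tasks = [(1, 8), (2, 5), (2, 10)]  # (C, T) for P1, P2, P3
u = sum(c / t for c, t in tasks)
print(round(u, 3))                         # 0.725
print(u <= liu_layland_bound(len(tasks)))  # True
```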
Process | Computation time C | Release period T | Priority |
---|---|---|---|
P1 | 3 | 16 | 3 |
P2 | 2 | 5 | 1 |
P3 | 2 | 10 | 2 |
Under RMS, P2 has the highest release rate (i.e. the shortest release period) and so would have the highest priority, followed by P3 and finally P1.
Using the Liu and Layland bound, as in Example 1, the sufficient condition for 3 processes is U_{lub} = 3(2^{1/3} − 1) ≈ 0.7797.
The total utilization will be:
U = \frac{3}{16} + \frac{2}{5} + \frac{2}{10} = 0.7875
Since U = 0.7875 > U_{lub}, schedulability cannot be determined using this bound.
Using the tighter hyperbolic bound as follows:
\prod_{i=1}^{n} (U_i + 1) = \left(\frac{3}{16} + 1\right)\left(\frac{2}{5} + 1\right)\left(\frac{2}{10} + 1\right) = 1.995 \leq 2
it is found that the task set is schedulable.
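The hyperbolic product for Example 2 can be verified numerically:

```python
from math import prod

tasks = [(3, 16), (2, 5), (2, 10)]  # (C, T) for P1, P2, P3
product = prod(c / t + 1 for c, t in tasks)
print(round(product, 3))  # 1.995
print(product <= 2)       # True
```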
Process | Computation time C | Release period T | Priority |
---|---|---|---|
P1 | 7 | 32 | 3 |
P2 | 2 | 5 | 1 |
P3 | 2 | 10 | 2 |
Under RMS, P2 has the highest rate (i.e. the shortest period) and so would have the highest priority, followed by P3 and finally P1.
Using the Liu and Layland bound, as in Example 1, the sufficient condition for 3 processes is U_{lub} = 3(2^{1/3} − 1) ≈ 0.7797.
The total utilization will be:
U = \frac{7}{32} + \frac{2}{5} + \frac{2}{10} = 0.81875
Since U = 0.81875 > U_{lub}, schedulability again cannot be determined using this bound.
Using the tighter hyperbolic bound as follows:
\prod_{i=1}^{n} (U_i + 1) = \left(\frac{7}{32} + 1\right)\left(\frac{2}{5} + 1\right)\left(\frac{2}{10} + 1\right) = 2.0475
Since 2.0 < 2.0475, the hyperbolic bound is also inconclusive for this task set.

Because T_3 = 2 T_2, the periods of P2 and P3 are harmonically related, so the task set consists of K = 2 harmonic chains ({P2, P3} and {P1}). Kuo and Mok's bound then gives K(2^{1/K} − 1) = 2(\sqrt{2} − 1) ≈ 0.828.

Using the total utilization factor calculated above (0.81875), since 0.81875 < 0.828, the task set is schedulable.
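Example 3 can likewise be checked end to end (repeating the Kuo and Mok helper so the snippet is self-contained; K = 2 is taken from the chain analysis above):

```python
def kuo_mok_bound(k):
    return k * (2 ** (1.0 / k) - 1)

tasks = [(7, 32), (2, 5), (2, 10)]  # (C, T) for P1, P2, P3
u = sum(c / t for c, t in tasks)
print(round(u, 5))                 # 0.81875
# T3 = 2 * T2, so P2 and P3 form one harmonic chain; P1 is its own: K = 2.
print(round(kuo_mok_bound(2), 3))  # 0.828
print(u <= kuo_mok_bound(2))       # True
```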