In computer operating system design, kernel preemption is a property of some kernels in which the CPU can be interrupted in the middle of executing kernel code and assigned to other tasks, after which it returns to finish its kernel work.
Specifically, the scheduler is permitted to forcibly perform a context switch, on behalf of a runnable and higher-priority process, while a driver or another part of the kernel is executing, rather than cooperatively waiting for the driver or kernel function (such as a system call) to complete and return control of the processor to the scheduler.[1][2][3][4] It is used mainly in monolithic and hybrid kernels, where all or most device drivers run in kernel space. Linux is an example of a monolithic-kernel operating system with kernel preemption.
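In the Linux kernel, for instance, most kernel code becomes preemptible when the kernel is built with the CONFIG_PREEMPT option, and code that must not be interrupted brackets itself with the preempt_disable() and preempt_enable() primitives. The following is a minimal sketch of that pattern, assuming a preemptible kernel build; the function name and the per-CPU counter are hypothetical, not taken from any real driver:

```c
/*
 * Illustrative sketch only: update_counter() and per_cpu_hits are
 * hypothetical names. preempt_disable() and preempt_enable() are the
 * Linux kernel's primitives for marking a region in which the scheduler
 * must not preempt the current task on the local CPU.
 */
#include <linux/preempt.h>
#include <linux/percpu.h>

static DEFINE_PER_CPU(unsigned long, per_cpu_hits);   /* per-CPU counter */

static void update_counter(void)
{
        preempt_disable();              /* no involuntary context switch from here... */
        __this_cpu_inc(per_cpu_hits);   /* short, bounded critical section */
        preempt_enable();               /* ...to here; the scheduler may preempt again */
}
```

Disabling preemption in this way only prevents an involuntary context switch on the local CPU; it provides no exclusion against code running on other CPUs, which is why the sketch updates a per-CPU counter rather than shared data.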
The main benefit of kernel preemption is that it addresses two issues that would otherwise be problematic for monolithic kernels, in which the kernel consists of one large binary.[5] Without kernel preemption, monolithic and hybrid kernels face two major problems: