In computing, a context switch is the process of storing the state of a process or thread, so that it can be restored and resume execution at a later point, and then restoring a different, previously saved, state.[1] This allows multiple processes to share a single central processing unit (CPU), and is an essential feature of a multiprogramming or multitasking operating system. In a traditional CPU, each process (a program in execution) uses the CPU registers to hold its data and the current state of its execution. In a multitasking operating system, however, the operating system switches between processes or threads so that multiple processes can execute seemingly simultaneously.[2] At every switch, the operating system must save the state of the currently running process and then load the saved state of the next process to run on the CPU. This sequence of operations, saving the state of the running process and loading the state of the next, is called a context switch.
The precise meaning of the phrase "context switch" varies. In a multitasking context, it refers to the process of storing the system state for one task, so that task can be paused and another task resumed. A context switch can also occur as the result of an interrupt, such as when a task needs to access disk storage, freeing up CPU time for other tasks. Some operating systems also require a context switch to move between user mode and kernel mode tasks. The process of context switching can have a negative impact on system performance.[3]
Context switches are usually computationally intensive, and much of operating system design aims to optimize their use. Switching from one process to another takes time for administration: saving and loading registers and memory maps, updating various tables and lists, and so on. What is actually involved in a context switch depends on the architecture, the operating system, and the number of resources shared (threads that belong to the same process share many resources, whereas unrelated, non-cooperating processes share few).
For example, in the Linux kernel, context switching involves loading the corresponding process control block (PCB), stored in the PCB table in kernel memory, to retrieve information about the state of the new process. CPU state information, including the registers, stack pointer, and program counter, as well as memory management information such as segmentation tables and page tables (unless the old and new processes share the same memory), are loaded from the PCB for the new process. To avoid incorrect address translation when the previous and current processes use different memory, the translation lookaside buffer (TLB) must be flushed. This hurts performance because, immediately after the flush, every memory reference that consults the TLB is a miss, since the buffer is empty after most context switches.[4][5]
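As a rough illustration, the following C sketch shows what such a switch involves at a high level. The structure and function names (pcb, cpu_state, switch_to, flush_tlb, and the architecture-specific helpers) are hypothetical, not the Linux kernel's actual interfaces, and most of what a real kernel must handle is omitted.

```c
#include <stdint.h>

/* Hypothetical, heavily simplified process control block (PCB). */
struct cpu_state {
    uint64_t registers[16];   /* general-purpose registers */
    uint64_t stack_pointer;
    uint64_t program_counter;
};

struct pcb {
    struct cpu_state cpu;     /* saved CPU state of the process       */
    void *page_table;         /* memory-management info (page tables) */
    int   pid;
};

/* Assumed to be provided by architecture-specific code (assembly). */
void save_cpu_state(struct cpu_state *s);
void load_cpu_state(const struct cpu_state *s);
void load_page_table(void *page_table);
void flush_tlb(void);

/* Switch the CPU from the old process to the new one. */
void switch_to(struct pcb *old, struct pcb *new)
{
    save_cpu_state(&old->cpu);            /* save registers, SP, PC        */

    if (new->page_table != old->page_table) {
        load_page_table(new->page_table); /* switch address spaces         */
        flush_tlb();                      /* old translations are now stale */
    }

    load_cpu_state(&new->cpu);            /* resume the new process        */
}
```

The TLB flush is skipped when both processes share the same address space, which is one reason switching between threads of one process is cheaper than switching between unrelated processes.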
Furthermore, analogous context switching happens between user threads, notably green threads, and is often very lightweight, saving and restoring minimal context. In extreme cases, such as switching between goroutines in Go, a context switch is equivalent to a coroutine yield, which is only marginally more expensive than a subroutine call.
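This kind of lightweight, user-space switching can be demonstrated with the POSIX ucontext API, which saves and restores a user-level execution context without involving the kernel's scheduler. The sketch below is only an illustration of the idea, not how any particular green-thread or goroutine runtime is implemented.

```c
#include <stdio.h>
#include <ucontext.h>

static ucontext_t main_ctx, worker_ctx;

/* Runs on its own stack; switching back to main_ctx is a pure
 * user-space context switch with no kernel scheduling involved. */
static void worker(void)
{
    puts("worker: running");
    swapcontext(&worker_ctx, &main_ctx);   /* yield back to main */
    puts("worker: resumed");
}

int main(void)
{
    char stack[64 * 1024];

    getcontext(&worker_ctx);                /* initialize the context    */
    worker_ctx.uc_stack.ss_sp = stack;      /* give it its own stack     */
    worker_ctx.uc_stack.ss_size = sizeof stack;
    worker_ctx.uc_link = &main_ctx;         /* resume main when it ends  */
    makecontext(&worker_ctx, worker, 0);

    puts("main: switching to worker");
    swapcontext(&main_ctx, &worker_ctx);    /* save main, run worker     */
    puts("main: switching to worker again");
    swapcontext(&main_ctx, &worker_ctx);    /* resume worker             */
    puts("main: done");
    return 0;
}
```

Each swapcontext call saves the caller's registers and stack pointer and restores the target's, which is essentially a coroutine yield: far less work than a kernel-mediated process switch.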
There are three potential triggers for a context switch:
Most commonly, within some scheduling scheme, one process must be switched out of the CPU so another process can run. This context switch can be triggered by the process making itself unrunnable, such as by waiting for an I/O or synchronization operation to complete. On a pre-emptive multitasking system, the scheduler may also switch out processes that are still runnable. To prevent other processes from being starved of CPU time, pre-emptive schedulers often configure a timer interrupt to fire when a process exceeds its time slice. This interrupt ensures that the scheduler will gain control to perform a context switch.
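As a rough sketch of that mechanism, with entirely hypothetical names (real schedulers are considerably more involved), a pre-emptive scheduler's timer interrupt path might look like this:

```c
/* Hypothetical sketch of time-slice based pre-emption. The helpers
 * (pick_next, enqueue_ready, switch_to) are assumed to exist elsewhere
 * in this imaginary kernel. */

struct task {
    int remaining_ticks;      /* time slice left for this task */
    int runnable;
};

extern struct task *current;
extern struct task *pick_next(void);          /* choose from the ready queue */
extern void enqueue_ready(struct task *t);
extern void switch_to(struct task *prev, struct task *next);

/* Called on every timer interrupt. */
void timer_tick(void)
{
    if (--current->remaining_ticks > 0)
        return;                   /* slice not yet used up: keep running */

    /* Time slice expired: the task is still runnable, but it is switched
     * out so that other tasks are not starved of CPU time. */
    struct task *prev = current;
    enqueue_ready(prev);          /* put it back on the ready queue */
    struct task *next = pick_next();
    current = next;
    switch_to(prev, next);        /* perform the context switch     */
}
```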
Modern architectures are interrupt driven. This means that if the CPU requests data from a disk, for example, it does not need to busy-wait until the read is over; it can issue the request (to the I/O device) and continue with some other task. When the read is over, the CPU is interrupted (by hardware, in this case the disk controller, which sends an interrupt request to the programmable interrupt controller) and presented with the data that was read. For interrupts, a program called an interrupt handler is installed, and it is the interrupt handler that handles the interrupt from the disk.
When an interrupt occurs, the hardware automatically switches a part of the context (at least enough to allow the handler to return to the interrupted code). The handler may save additional context, depending on details of the particular hardware and software designs. Often only a minimal part of the context is changed in order to minimize the amount of time spent handling the interrupt. The kernel does not spawn or schedule a special process to handle interrupts, but instead the handler executes in the (often partial) context established at the beginning of interrupt handling. Once interrupt servicing is complete, the context in effect before the interrupt occurred is restored so that the interrupted process can resume execution in its proper state.
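A sketch of that flow, again with purely hypothetical names (real interrupt entry and exit code is architecture-specific assembly), might look like this:

```c
/* Hypothetical sketch of interrupt handling for a completed disk read.
 * The hardware has already saved a minimal context (enough to return to
 * the interrupted code); the handler runs in that borrowed context. */

extern void save_additional_context(void);    /* only what the handler clobbers */
extern void restore_additional_context(void);
extern void copy_data_from_controller(void);
extern void acknowledge_interrupt(void);      /* tell the interrupt controller  */
extern void wake_up_waiting_process(void);

void disk_interrupt_handler(void)
{
    save_additional_context();      /* kept as small as possible            */

    copy_data_from_controller();    /* the data that was requested          */
    wake_up_waiting_process();      /* make the sleeping process runnable   */
    acknowledge_interrupt();

    restore_additional_context();   /* resume the interrupted code          */
}
```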
When the system transitions between user mode and kernel mode, a context switch is not necessary; a mode transition is not by itself a context switch. However, depending on the operating system, a context switch may also take place at this time.
The state of the currently executing process must be saved so it can be restored when rescheduled for execution.
The process state includes all the registers that the process may be using, especially the program counter, plus any other operating system specific data that may be necessary. This is usually stored in a data structure called a process control block (PCB) or switchframe.
The PCB might be stored on a per-process stack in kernel memory (as opposed to the user-mode call stack), or there may be some specific operating system-defined data structure for this information. A handle to the PCB is added to a queue of processes that are ready to run, often called the ready queue.
Since the operating system has effectively suspended the execution of one process, it can then switch context by choosing a process from the ready queue and restoring its PCB. In doing so, the program counter from the PCB is loaded, and thus execution can continue in the chosen process. Process and thread priority can influence which process is chosen from the ready queue (i.e., it may be a priority queue).
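As a small, hypothetical illustration of that last point, a priority-ordered ready queue simply means the scheduler picks the runnable PCB with the highest priority rather than the one that has waited longest; the field and function names below are invented for the example.

```c
/* Hypothetical priority-based selection from the ready queue. */

struct pcb {
    int priority;            /* larger value = more urgent            */
    struct pcb *next;        /* linked into the ready queue           */
    /* saved registers, program counter, etc. omitted                 */
};

extern struct pcb *ready_queue;   /* head of the ready queue */

struct pcb *pick_highest_priority(void)
{
    struct pcb *best = ready_queue;
    for (struct pcb *p = ready_queue; p != NULL; p = p->next)
        if (p->priority > best->priority)
            best = p;
    return best;   /* its PCB, including the program counter, is restored next */
}
```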
Consider a general arithmetic addition operation A = B + 1. The instruction is fetched into the instruction register and the program counter is incremented. A and B are read from memory and stored in registers R1 and R2 respectively. B + 1 is computed and written to R1 as the final result. Because this operation consists only of sequential reads, computation, and writes, with no waiting on function or system calls, no context switch or wait takes place.
However, some operations require system calls that block, and a blocked process must be put to sleep, which leads to a context switch. A system call handler is used to switch to kernel mode. For example, a display(data x) function may need the value of x from disk, which requires a device driver running in kernel mode; the calling process is put to sleep while it waits for the READ operation that fetches x from the disk, and the CPU is given to another runnable process. When the read completes, the system call wakes the sleeping process, which is rescheduled and continues executing with the new value.
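An ordinary blocking read on a Unix-like system shows the same pattern; this minimal illustration is not the display(data x) example above, but the behaviour is analogous.

```c
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    char x;

    /* read() traps into the kernel (a mode switch). If no data is
     * available yet (whether from a terminal, a pipe, or a disk block
     * that is not cached), the process is put to sleep and the kernel
     * context-switches to another runnable process. When the device
     * signals completion, this process is woken up and rescheduled. */
    ssize_t n = read(STDIN_FILENO, &x, 1);

    if (n == 1)
        printf("got: %c\n", x);
    return 0;
}
```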
Context switching itself has a cost in performance, due to running the task scheduler, TLB flushes, and indirectly due to sharing the CPU cache between multiple tasks.[6] Switching between threads of a single process can be faster than between two separate processes, because threads share the same virtual memory maps, so a TLB flush is not necessary.[7]
The time to switch between two separate processes is called the process switching latency. The time to switch between two threads of the same process is called the thread switching latency. The time from when a hardware interrupt is generated to when the interrupt is serviced is called the interrupt latency.
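Process switching latency can be estimated empirically. One common, if rough, approach on Unix-like systems is to bounce a byte back and forth between two processes over a pair of pipes and divide the elapsed time by the number of switches, as in the sketch below; the result includes pipe overhead and so is only an upper bound, and on multi-core machines both processes should be pinned to one CPU for the number to be meaningful.

```c
#include <stdio.h>
#include <time.h>
#include <unistd.h>

#define ROUNDS 100000

int main(void)
{
    int p2c[2], c2p[2];                    /* parent-to-child, child-to-parent */
    char b = 0;

    if (pipe(p2c) < 0 || pipe(c2p) < 0) {
        perror("pipe");
        return 1;
    }

    if (fork() == 0) {                     /* child: echo every byte back */
        for (int i = 0; i < ROUNDS; i++) {
            read(p2c[0], &b, 1);
            write(c2p[1], &b, 1);
        }
        _exit(0);
    }

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < ROUNDS; i++) {     /* parent: ping-pong */
        write(p2c[1], &b, 1);
        read(c2p[0], &b, 1);
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
    /* Two context switches per round trip (parent to child and back). */
    printf("~%.0f ns per context switch (upper bound)\n", ns / (2.0 * ROUNDS));
    return 0;
}
```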
Switching between two processes in a single address space operating system can be faster than switching between two processes in an operating system with private per-process address spaces.[8]
Context switching can be performed primarily by software or hardware. Some processors, like the Intel 80386 and its successors,[9] have hardware support for context switches, making use of a special data segment designated the task state segment (TSS). A task switch can be explicitly triggered with a CALL or JMP instruction targeted at a TSS descriptor in the global descriptor table. It can occur implicitly when an interrupt or exception is triggered if there is a task gate in the interrupt descriptor table (IDT). When a task switch occurs, the CPU can automatically load the new state from the TSS.
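The 32-bit TSS is a fixed in-memory structure whose layout is documented in Intel's architecture manuals; rendered as a C struct for illustration (the field names here are merely descriptive), it looks roughly like this:

```c
#include <stdint.h>

/* 32-bit task state segment (104 bytes). On a hardware task switch the
 * CPU stores the outgoing task's registers here and loads the incoming
 * task's registers from its TSS. Field names are descriptive only.      */
struct tss32 {
    uint16_t prev_task_link, pad0;       /* selector of the previous task */
    uint32_t esp0; uint16_t ss0, pad1;   /* ring-0 stack                  */
    uint32_t esp1; uint16_t ss1, pad2;   /* ring-1 stack                  */
    uint32_t esp2; uint16_t ss2, pad3;   /* ring-2 stack                  */
    uint32_t cr3;                        /* page directory base           */
    uint32_t eip, eflags;
    uint32_t eax, ecx, edx, ebx, esp, ebp, esi, edi;
    uint16_t es, pad4, cs, pad5, ss, pad6, ds, pad7, fs, pad8, gs, pad9;
    uint16_t ldt_selector, pad10;
    uint16_t trap_flag;                  /* bit 0: debug trap on switch   */
    uint16_t iomap_base;                 /* offset of the I/O bitmap      */
};
```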
As with other tasks performed in hardware, one would expect this to be rather fast; however, mainstream operating systems, including Windows and Linux,[10] do not use this feature. This is mainly due to two reasons:
First, hardware task switching does not save all of the state the operating system may need; in particular, it does not save the floating-point registers. Instead, the TS bit is automatically turned on in the CR0 control register, resulting in a fault when the next floating-point instruction is executed and giving the OS the opportunity to save and restore the floating-point state as needed. Second, a software context switch can be selective, saving only the registers that actually need to be saved, whereas a hardware task switch saves nearly the entire CPU state whether it is needed or not, so a software implementation can be just as fast or faster.