Page cache explained

In computing, a page cache, sometimes also called disk cache,[1] is a transparent cache for the pages originating from a secondary storage device such as a hard disk drive (HDD) or a solid-state drive (SSD). The operating system keeps a page cache in otherwise unused portions of the main memory (RAM), resulting in quicker access to the contents of cached pages and overall performance improvements. The page cache is implemented in kernels that use paging for memory management, and is mostly transparent to applications.

Usually, all physical memory not directly allocated to applications is used by the operating system for the page cache. Since the memory would otherwise be idle and is easily reclaimed when applications request it, there is generally no associated performance penalty and the operating system might even report such memory as "free" or "available".
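On Linux, for example, the size of the page cache and the kernel's estimate of reclaimable memory can be observed in /proc/meminfo. The following minimal sketch (Linux-specific, and only one of several ways to read these figures) prints the relevant fields:

    /* Minimal sketch (Linux-specific): print the page cache size and the
     * memory the kernel considers available, as reported by /proc/meminfo. */
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        FILE *f = fopen("/proc/meminfo", "r");
        if (!f) {
            perror("fopen");
            return 1;
        }

        char line[256];
        while (fgets(line, sizeof line, f)) {
            /* "Cached" counts page-cache pages (excluding the swap cache);
             * "MemAvailable" is the kernel's estimate of memory that can be
             * given to applications without swapping, which includes easily
             * reclaimable page-cache pages. */
            if (strncmp(line, "Cached:", 7) == 0 ||
                strncmp(line, "Buffers:", 8) == 0 ||
                strncmp(line, "MemAvailable:", 13) == 0)
                fputs(line, stdout);
        }

        fclose(f);
        return 0;
    }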

When compared to main memory, hard disk drive reads and writes are slow, and random accesses require expensive disk seeks; as a result, larger amounts of main memory bring performance improvements because more data can be cached in memory.[2] Separate disk caching is also provided on the hardware side, by dedicated RAM or NVRAM chips located either in the disk controller (in which case the cache is integrated into the hard disk drive and is usually called a disk buffer[3]) or in a disk array controller; such memory should not be confused with the page cache. The operating system may also use part of main memory as a filesystem write buffer, sometimes called the page buffer.[4]

Memory conservation

Pages in the page cache that are modified after being brought in are called dirty pages.[5] Since non-dirty pages in the page cache have identical copies in secondary storage (e.g. hard disk drive or solid-state drive), discarding and reusing their space is much quicker than paging out application memory, and is often preferred over flushing the dirty pages into secondary storage and reusing their space. Executable binaries, such as applications and libraries, are also typically accessed through the page cache and mapped into individual process address spaces using virtual memory (this is done through the mmap system call on Unix-like operating systems). This not only means that the binary files are shared between separate processes, but also that unused parts of binaries will eventually be evicted from main memory, leading to memory conservation.
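This kind of mapping can be illustrated with a short sketch. The following C program (the library path is only an illustrative assumption) maps a file read-only with mmap; the mapped pages are backed by the page cache, so any other process mapping the same file shares the same physical pages:

    /* Sketch: map a file (e.g. a shared library) read-only into the process
     * address space. The pages come from the page cache, so every process
     * mapping the same file shares the same physical pages.
     * The path below is an illustrative assumption. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void)
    {
        const char *path = "/usr/lib/x86_64-linux-gnu/libc.so.6"; /* example path */
        int fd = open(path, O_RDONLY);
        if (fd < 0) { perror("open"); return EXIT_FAILURE; }

        struct stat st;
        if (fstat(fd, &st) < 0) { perror("fstat"); return EXIT_FAILURE; }

        /* PROT_READ + MAP_PRIVATE: a read-only view backed by page-cache pages. */
        void *addr = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
        if (addr == MAP_FAILED) { perror("mmap"); return EXIT_FAILURE; }

        printf("mapped %lld bytes of %s at %p\n",
               (long long)st.st_size, path, addr);

        munmap(addr, st.st_size);
        close(fd);
        return EXIT_SUCCESS;
    }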

Since cached pages can be easily evicted and re-used, some operating systems, notably Windows NT, even report the page cache usage as "available" memory, while the memory is actually allocated to disk pages. This has led to some confusion about the utilization of page cache in Windows.

Disk writes

The page cache also aids in writing to a disk. Pages in the main memory that have been modified during writing data to disk are marked as "dirty" and have to be flushed to disk before they can be freed. When a file write occurs, the cached page for the particular block is looked up. If it is already found in the page cache, the write is done to that page in main memory. If it is not found in the page cache and the write falls exactly on page-size boundaries, the page is not even read from disk, but is allocated and immediately marked dirty. Otherwise, the page(s) are fetched from disk and the requested modifications are applied. A file whose data exists only in the page cache and has not yet been flushed to disk (for example, because the system crashed before writeback) might appear as a zero-byte file at a later read.
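The following minimal C sketch illustrates this behaviour (the file name is an illustrative assumption): write() normally only copies the data into page-cache pages and marks them dirty, while fsync() asks the kernel to flush those dirty pages to the storage device:

    /* Sketch: write() copies data into page-cache pages and marks them dirty;
     * fsync() forces the dirty pages out to the storage device.
     * The file name is an illustrative assumption. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("example.dat", O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd < 0) { perror("open"); return 1; }

        const char msg[] = "hello, page cache\n";
        if (write(fd, msg, sizeof msg - 1) < 0) {   /* data now sits in dirty pages */
            perror("write");
            return 1;
        }

        if (fsync(fd) < 0) {                        /* flush dirty pages to disk */
            perror("fsync");
            return 1;
        }

        close(fd);
        return 0;
    }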

However, not all cached pages can be written to, as program code is often mapped as read-only or copy-on-write; in the latter case, modifications to code are visible only to the process itself and are not written back to disk.
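A copy-on-write mapping can be sketched as follows (the file name is again an illustrative assumption): the mapping is created with MAP_PRIVATE, so the first store to a page gives the process a private copy, while the file itself and the shared page-cache page remain unchanged:

    /* Sketch: a MAP_PRIVATE, writable mapping is copy-on-write. The first
     * store to a page gives the process a private copy; the underlying file
     * (and the shared page-cache page) is left untouched.
     * The file name is an illustrative assumption. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("example.dat", O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }

        struct stat st;
        if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

        char *p = mmap(NULL, st.st_size, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE, fd, 0);
        if (p == MAP_FAILED) { perror("mmap"); return 1; }

        p[0] = '#';   /* triggers copy-on-write; the file on disk is unchanged */

        munmap(p, st.st_size);
        close(fd);
        return 0;
    }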

Side-channel attacks

In 2019, security researchers demonstrated side-channel attacks against the page cache: it is possible to bypass privilege separation and exfiltrate information about other processes by systematically monitoring whether the pages of certain files (for example, executables or shared libraries) are present in the cache.[6]
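On Linux, the residency of a file's pages can be queried with the mincore system call, which is the kind of primitive such attacks build on. The following sketch (the target path is only an illustrative assumption) reports how many pages of a file are currently resident in the page cache:

    /* Sketch: probe which pages of a file are resident in the page cache
     * using mincore(2). The path is an illustrative assumption. */
    #define _DEFAULT_SOURCE   /* for mincore() with glibc */
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void)
    {
        const char *path = "/usr/bin/ls";   /* example target file */
        int fd = open(path, O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }

        struct stat st;
        if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

        void *addr = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
        if (addr == MAP_FAILED) { perror("mmap"); return 1; }

        long page = sysconf(_SC_PAGESIZE);
        size_t npages = (st.st_size + page - 1) / page;
        unsigned char *vec = malloc(npages);
        if (!vec) { perror("malloc"); return 1; }

        /* Bit 0 of each vector entry tells whether the page is in memory. */
        if (mincore(addr, st.st_size, vec) < 0) { perror("mincore"); return 1; }

        size_t resident = 0;
        for (size_t i = 0; i < npages; i++)
            resident += vec[i] & 1;

        printf("%zu of %zu pages of %s are in the page cache\n",
               resident, npages, path);

        free(vec);
        munmap(addr, st.st_size);
        close(fd);
        return 0;
    }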

Notes and References

  1. Robert Love (2005-01-12). "Chapter 15. The Page Cache and Page Writeback", Linux Kernel Development (Second Edition). Sams Publishing. makelinux.net. Retrieved 2015-07-24.
  2. "Disk Cache". Webopedia. September 1996.
  3. Mark Kyrnin. "What to Look for in a Hard Drive". about.com. "A drive's buffer is an amount of RAM on the drive to store frequently accessed data from the drive." Archived from the original on 2015-04-04 (https://web.archive.org/web/20150404165753/http://compreviews.about.com/od/storage/a/HDBuyersPt1.htm). Retrieved 2014-12-20.
  4. "free(1) — procps — Debian bookworm — Debian Manpages".
  5. "Glossary - TechNet Library". Microsoft. 28 January 2010.
  6. Gruss, Daniel; Kraft, Erik; Tiwari, Trishita; Schwarz, Michael; Trachtenberg, Ari; Hennessey, Jason; Ionescu, Alex; Fogh, Anders (2019-01-04). "Page Cache Attacks". arXiv:1901.01161 [cs.CR].