Mem (computing)

In computing, mem is a measurement unit for the number of memory accesses used or needed by a process, function, instruction set, algorithm or data structure. Mem has applications in computational complexity theory, computing efficiency, combinatorial optimization, supercomputing, computational cost (algorithmic efficiency) and other computational metrics.
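Mems can be counted empirically by instrumenting an algorithm's memory reads. The following is a minimal, hypothetical sketch (not from the source): a wrapper charges one mem per array element read, so the memory-access cost of a binary search can be measured directly.

```python
# Hypothetical sketch: counting mems by instrumenting array reads.

class CountingArray:
    """Wraps a list and counts every element read as one 'mem'."""
    def __init__(self, data):
        self._data = data
        self.mems = 0

    def __getitem__(self, i):
        self.mems += 1          # each read of the underlying array costs one mem
        return self._data[i]

    def __len__(self):
        return len(self._data)


def binary_search(arr, target):
    """Standard binary search; each probe of arr costs one mem."""
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        v = arr[mid]            # one mem per probe
        if v == target:
            return mid
        if v < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1


arr = CountingArray(list(range(1024)))
binary_search(arr, 700)
print(arr.mems)  # 10 probes for this input (at most log2(1024) + 1 = 11)
```

For a sorted array of 1024 elements, binary search uses at most 11 mems per lookup, whereas a linear scan could use up to 1024; counting mems makes that difference concrete.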

Example usage, when discussing the processing time of a search-tree node in the search for 10 × 10 Latin squares: "A typical node of the search tree probably requires about 75 mems (memory accesses) for processing, to check validity. Therefore the total running time on a modern computer would be roughly the time needed to perform mems." (Donald Knuth, The Art of Computer Programming, Volume 4A, 2011, p. 6).

Reducing mems as a speed and efficiency enhancement is not a pure, linear benefit: fewer memory accesses are typically bought at the cost of additional ordinary operations ("oops" in Knuth's terminology), so the two costs must be traded off against each other.
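The tradeoff can be illustrated with a toy example (hypothetical, not from the source): computing the population count (number of set bits) of an integer either with a lookup table, which spends mems, or with pure bit arithmetic, which spends ordinary operations.

```python
# Hypothetical illustration: the same result computed two ways,
# trading memory accesses (mems) against ordinary operations (oops).

# Way 1: one table lookup per byte -- few operations, but each lookup is a mem.
POPCOUNT_TABLE = [bin(i).count("1") for i in range(256)]

def popcount_table(x):
    count = 0
    while x:
        count += POPCOUNT_TABLE[x & 0xFF]  # one mem per byte of x
        x >>= 8
    return count

# Way 2: pure bit arithmetic -- no table reads, more ordinary operations.
def popcount_ops(x):
    count = 0
    while x:
        x &= x - 1   # clears the lowest set bit; one iteration per set bit
        count += 1
    return count

assert popcount_table(0b10110110) == popcount_ops(0b10110110) == 5
```

Which version is faster depends on the relative cost of a memory access versus an arithmetic operation on the target machine, which is exactly why mems and ordinary operations are counted separately.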

PFOR compression

This optimization technique is also called PForDelta.[1]

Although lossless compression methods such as Rice, Golomb, and PFOR coding are most often associated with signal-processing codecs, their ability to pack binary integers compactly also makes them relevant to the mems-versus-operations tradeoff: compressed data requires fewer memory accesses to read, at the price of extra decoding operations (see Golomb coding for details).[2]
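As a hypothetical sketch of the idea (not an implementation of PFOR itself), the following Rice-codes the gaps of a sorted integer list: the compressed stream occupies fewer bits, and hence fewer memory reads, while decoding costs extra bit-manipulation operations.

```python
# Hypothetical sketch: Rice coding of gaps in a sorted list, illustrating
# how integer compression trades mems (fewer bits read) for operations
# (extra decode work).

def rice_encode(n, k):
    """Encode a non-negative integer: unary quotient, then k-bit remainder."""
    q, r = n >> k, n & ((1 << k) - 1)
    return "1" * q + "0" + format(r, f"0{k}b")

def rice_decode(bits, pos, k):
    """Decode one value starting at bit position pos; return (value, new_pos)."""
    q = 0
    while bits[pos] == "1":     # unary quotient
        q += 1
        pos += 1
    pos += 1                    # skip the terminating 0
    r = int(bits[pos:pos + k], 2)
    return (q << k) | r, pos + k

# Compress a sorted list as Rice-coded gaps (delta coding).
postings = [3, 7, 18, 19, 42]
gaps = [postings[0]] + [b - a for a, b in zip(postings, postings[1:])]
k = 2
stream = "".join(rice_encode(g, k) for g in gaps)

# Decode the stream and reconstruct the original list.
decoded, pos, total = [], 0, 0
for _ in gaps:
    g, pos = rice_decode(stream, pos, k)
    total += g
    decoded.append(total)
assert decoded == postings
```

The gaps fit in far fewer bits than the raw values would, so scanning the compressed stream touches less memory; the `while` loop and bit arithmetic in `rice_decode` are the ordinary operations paid in exchange.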

Notes and References

  1. Web site on index compression techniques (benchmarking and optimization using compression). Archived 2012-12-21 at https://web.archive.org/web/20121221081452/http://cis.poly.edu/cs912/indexcomp.pdf; retrieved 2014-02-13 (original link dead).
  2. "Algorithmic complexity analysis practically using Knuth's ordinary operations", Stack Overflow discussion of mems vs. oops, including compression codecs. https://stackoverflow.com/questions/15464410/algorithmic-complexity-analysis-practically-using-knuths-ordinary-operations