Flynn's taxonomy explained

Flynn's taxonomy is a classification of computer architectures, proposed by Michael J. Flynn in 1966[1] and extended in 1972.[2] The classification system has endured, and it has been used as a tool in the design of modern processors and their functionalities. Since the rise of multiprocessing central processing units (CPUs), a multiprogramming context has evolved as an extension of the classification system. Vector processing, covered by Duncan's taxonomy,[3] is absent from Flynn's work because his second paper was published in 1972, five years before the Cray-1 was released in 1977.

Classifications

The four initial classifications defined by Flynn are based upon the number of concurrent instruction (or control) streams and data streams available in the architecture.[4] Flynn defined three additional sub-categories of SIMD in 1972.

Single instruction stream, single data stream (SISD)

See main article: Single instruction, single data. A sequential computer which exploits no parallelism in either the instruction or data streams. A single control unit (CU) fetches a single instruction stream (IS) from memory. The CU then generates the appropriate control signals to direct a single processing element (PE) to operate on a single data stream (DS), i.e., one operation at a time.

Examples of SISD architectures are traditional uniprocessor machines, such as older personal computers (PCs; by 2010, many PCs had multiple cores) and mainframe computers.

Single instruction stream, multiple data streams (SIMD)

See main article: Single instruction, multiple data. A single instruction is simultaneously applied to multiple different data streams. Instructions can be executed sequentially, such as by pipelining, or in parallel by multiple functional units. Flynn's 1972 paper subdivided SIMD into three further categories:

Array processor

The modern term for an array processor is "single instruction, multiple threads" (SIMT). This is a distinct classification in Flynn's 1972 taxonomy, as a subcategory of SIMD. It is identifiable by the parallel subelements having their own independent register file and memory (cache and data memory). Flynn's original papers cite two historic examples of SIMT processors: SOLOMON and ILLIAC IV.

Nvidia commonly uses the term in its marketing materials and technical documents, where it argues for the novelty of its architecture.[5] SOLOMON predates Nvidia by more than 60 years.

The Aspex Microelectronics Associative String Processor (ASP)[6] categorised itself in its marketing material as "massive wide SIMD" but had bit-level ALUs and bit-level predication (Flynn's taxonomy: associative processing), and each of the 4096 processors had their own registers and memory (Flynn's taxonomy: array processing). The Linedancer, released in 2010, contained 4096 2-bit predicated SIMD ALUs, each with its own content-addressable memory, and was capable of 800 billion instructions per second.[7] Aspex's ASP associative array SIMT processor predates NVIDIA by 20 years.[8] [9]
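
The SIMT idea, a single shared instruction stream driving many threads that each own their registers and data element, can be sketched with a toy interpreter. This is an assumed model for illustration only, not any real GPU ISA; the instruction names are invented:

```python
# SIMT concept sketch: one shared instruction stream, many "threads",
# each with a private register file and its own data element.
# The two-instruction ISA ("load", "addi") is invented for illustration.
def run_simt(instructions, n_threads, data):
    regs = [{"acc": 0} for _ in range(n_threads)]   # per-thread register files
    for op, arg in instructions:                    # one stream for all threads
        for tid in range(n_threads):                # every thread, same op
            if op == "load":
                regs[tid]["acc"] = data[tid]        # each thread's own data
            elif op == "addi":
                regs[tid]["acc"] += arg
    return [r["acc"] for r in regs]

# Four "threads" add 10 to their own element of the data stream:
print(run_simt([("load", None), ("addi", 10)], 4, [1, 2, 3, 4]))  # -> [11, 12, 13, 14]
```

The point of the sketch is the control structure: the outer loop (the instruction stream) is shared, while all state indexed by `tid` is private, which is what distinguishes SIMT from plain packed SIMD.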

Pipelined processor

At the time Flynn wrote his 1972 paper, many systems used main memory as the resource from which pipelines read and wrote. When the resource that all "pipelines" read from and write to is the register file rather than main memory, modern variants of SIMD result. Examples include AltiVec, NEON, and AVX.

An alternative name for this type of register-based SIMD is "packed SIMD";[10] another is SIMD within a register (SWAR). When predication is applied, it becomes associative processing (below).
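
The SWAR flavour of packed SIMD can be illustrated in plain Python using the classic carry-isolation trick: four 8-bit lanes packed into one 32-bit word are added with ordinary integer arithmetic, without carries crossing lane boundaries. This is a sketch for illustration; real SWAR code does the same thing in C on machine words:

```python
# SWAR sketch: add four packed 8-bit lanes inside one 32-bit integer.
LANE_MASK = 0x7F7F7F7F   # low 7 bits of each 8-bit lane
HIGH_BITS = 0x80808080   # top bit of each lane

def swar_add_bytes(a: int, b: int) -> int:
    # Add the low 7 bits of each lane, then fold each lane's top bit
    # back in with XOR, so a carry never escapes into the next lane.
    low = (a & LANE_MASK) + (b & LANE_MASK)
    return low ^ ((a ^ b) & HIGH_BITS)

print(hex(swar_add_bytes(0x01020304, 0x10203040)))  # -> 0x11223344
```

Note that a lane overflowing (e.g. 0xFF + 0x01) simply wraps to 0x00 within its lane, exactly as a hardware packed-SIMD add with wraparound semantics would behave.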

Associative processor

The modern term for an associative processor is "predicated" (or masked) SIMD. An example is AVX-512.

Some modern designs, GPUs in particular, combine features of more than one of these subcategories: today's GPUs are SIMT but also associative, i.e., each processing element in the SIMT array is also predicated.
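
Predication can be sketched in plain Python: each lane carries a predicate bit, and lanes whose bit is clear keep their previous value (merge masking, in AVX-512 terms). The helper name here is invented for illustration:

```python
# Predicated (masked) SIMD sketch: the add happens only in lanes whose
# predicate bit is set; other lanes keep their old destination value.
def masked_add(dst, src, mask):
    return [d + s if m else d for d, s, m in zip(dst, src, mask)]

print(masked_add([1, 2, 3, 4], [10, 10, 10, 10],
                 [True, False, True, False]))  # -> [11, 2, 13, 4]
```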

Multiple instruction streams, single data stream (MISD)

See main article: Multiple instruction, single data. Multiple instructions operate on one data stream. This is an uncommon architecture which is generally used for fault tolerance. Heterogeneous systems operate on the same data stream and must agree on the result. Examples include the Space Shuttle flight control computer.[11]
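
The fault-tolerant use of MISD can be sketched as a majority voter over redundant results computed from the same data stream. This is a toy illustration; `vote` is a hypothetical helper, not the Shuttle's actual voting logic:

```python
from collections import Counter

# MISD-style voting sketch: several redundant units process the same
# input; a voter accepts the majority result or flags a disagreement.
def vote(results):
    value, count = Counter(results).most_common(1)[0]
    if count <= len(results) // 2:
        raise RuntimeError("no majority: redundant units disagree")
    return value

print(vote([4, 4, 5]))  # one faulty unit is outvoted -> 4
```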

Multiple instruction streams, multiple data streams (MIMD)

See main article: Multiple instruction, multiple data. Multiple autonomous processors simultaneously execute different instructions on different data. MIMD architectures include multi-core superscalar processors and distributed systems, using either a single shared memory space or a distributed memory space.

Diagram comparing classifications

These four architectures can be compared visually in a diagram in which each processing unit (PU) represents a uni-core or multi-core computer.

Further divisions

All of the top 10 and most of the TOP500 supercomputers are based on a MIMD architecture.

Although these are not part of Flynn's work, some further divide the MIMD category into the two categories below,[12] [13] [14] [15] [16] and even further subdivisions are sometimes considered.[17]

Single program, multiple data streams (SPMD)

See main article: SPMD. Multiple autonomous processors simultaneously execute the same program (but at independent points, rather than in the lockstep that SIMD imposes) on different data. SPMD has also been termed single process, multiple data,[16] but this usage is technically incorrect, as SPMD is a parallel execution model that assumes multiple cooperating processors executing a program. SPMD is the most common style of explicit parallel programming.[18] The SPMD model and the term were proposed by Frederica Darema of the RP3 team.[19]
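
In Python, the SPMD style can be sketched with the standard multiprocessing module: every worker runs the same function (the "single program") on its own slice of the data, at its own pace. The function and data names are illustrative:

```python
from multiprocessing import Pool

# SPMD sketch: the same function runs in every worker process, each on
# its own chunk of the data -- no SIMD-style lockstep between workers.
def work(chunk):
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(8))
    chunks = [data[0:4], data[4:8]]        # one slice per worker
    with Pool(2) as pool:
        partials = pool.map(work, chunks)  # both workers run work()
    print(sum(partials))                   # -> 140
```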

Multiple programs, multiple data streams (MPMD)

Multiple autonomous processors simultaneously execute at least two independent programs. In HPC contexts, such systems often pick one node to be the "host" (the "explicit host/node programming model") or "manager" (the "manager/worker" strategy), which runs one program that farms out data to all the other nodes, which all run a second program. Those other nodes then return their results directly to the manager. An example of this is the Sony PlayStation 3 game console, with its SPU/PPU processor.

MPMD is common in non-HPC contexts. For example, the make build system can build multiple dependencies in parallel, using target-dependent programs in addition to the make executable itself. MPMD also often takes the form of pipelines. A simple Unix shell command like ls | grep "A" | more launches three processes running separate programs in parallel, with the output of one used as the input to the next.
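
Such a pipeline can also be built from Python with the standard subprocess module, assuming a Unix system with seq and grep on the PATH: two different programs run concurrently, with the first one's stdout wired to the second one's stdin.

```python
import subprocess

# MPMD pipeline sketch: seq produces the lines 1..5, grep filters them,
# and both programs run concurrently, connected by a pipe.
producer = subprocess.Popen(["seq", "1", "5"], stdout=subprocess.PIPE)
consumer = subprocess.Popen(["grep", "3"], stdin=producer.stdout,
                            stdout=subprocess.PIPE)
producer.stdout.close()          # let the consumer own the read end
out, _ = consumer.communicate()
producer.wait()
print(out.decode().strip())      # -> 3
```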

These are both distinct from the explicit parallel programming used in HPC in that the individual programs are generic building blocks rather than implementing part of a specific parallel algorithm. In the pipelining approach, the amount of available parallelism does not increase with the size of the data set.

Notes and References

  1. Flynn, Michael J. "Very high-speed computing systems." Proceedings of the IEEE 54, no. 12 (December 1966): 1901–1909. doi:10.1109/PROC.1966.5273.
  2. Flynn, Michael J. "Some Computer Organizations and Their Effectiveness." IEEE Transactions on Computers C-21, no. 9 (September 1972): 948–960. doi:10.1109/TC.1972.5009071.
  3. Duncan, Ralph. "A Survey of Parallel Computer Architectures." Computer 23, no. 2 (February 1990): 5–16. doi:10.1109/2.44900. Archived at https://web.archive.org/web/20180718181803/http://www.eng.ucy.ac.cy/theocharides/Courses/ECE656/Duncan90.pdf, 18 July 2018.
  4. "Data-Level Parallelism in Vector, SIMD, and GPU Architectures." Retrieved 12 November 2013.
  5. "NVIDIA's Next Generation CUDA Compute Architecture: Fermi." Nvidia.
  6. Lea, R. M. "ASP: A Cost-Effective Parallel Microcomputer." Vol. 8, no. 5 (1988): 10–29. doi:10.1109/40.87518.
  7. "Linedancer HD – Overview." Aspex Semiconductor. Archived at https://web.archive.org/web/20061013175702/http://www.aspex-semi.com/pages/products/products_linedancer_hd_overview.shtml, 13 October 2006.
  8. Krikelis, A. "Artificial Neural Network on a Massively Parallel Associative Architecture." International Neural Network Conference. Dordrecht, 1988. doi:10.1007/978-94-009-0643-3_39. ISBN 978-94-009-0643-3.
  9. Ódor, Géza; Krikelis, Argy; Vesztergombi, György; Rohrbach, Francois. "Effective Monte Carlo simulation on System-V massively parallel associative string processing architecture."
  10. Miyaoka, Y.; Choi, J.; Togawa, N.; Yanagisawa, M.; Ohtsuki, T. "An algorithm of hardware unit generation for processor core synthesis with packed SIMD type instructions." Asia-Pacific Conference on Circuits and Systems (2002): 171–176. doi:10.1109/APCCAS.2002.1114930. ISBN 0-7803-7690-0.
  11. Spector, A.; Gifford, D. "The space shuttle primary computer system." Communications of the ACM 27, no. 9 (September 1984): 872–900. doi:10.1145/358234.358246.
  12. "Single Program Multiple Data stream (SPMD)." Llnl.gov. Archived at https://web.archive.org/web/20040604110624/http://www.llnl.gov/casc/Overture/henshaw/documentation/App/manual/node36.html, 4 June 2004. Retrieved 9 December 2013.
  13. "Programming requirements for compiling, building, and running jobs." Lightning User Guide. Archived at https://web.archive.org/web/20060901114042/http://www.cisl.ucar.edu/docs/lightning/program.jsp, 1 September 2006.
  14. "CTC Virtual Workshop." Web0.tc.cornell.edu. Retrieved 9 December 2013.
  15. "NIST SP2 Primer: Distributed-memory programming." Math.nist.gov. Archived at https://web.archive.org/web/20131213105449/http://math.nist.gov/~KRemington/Primer/distrib.html, 13 December 2013. Retrieved 9 December 2013.
  16. "Understanding parallel job management and message passing on IBM SP systems." Archived at https://web.archive.org/web/20070203153908/http://www.cisl.ucar.edu/docs/ibm/ref/parallel.html, 3 February 2007.
  17. "9.2 Strategies." Distributed Memory Programming. Archived at https://web.archive.org/web/20060910222800/http://www.tc.cornell.edu/Services/Education/Topics/Parallel/Distributed/+9.2+Strategies.htm, 10 September 2006.
  18. "Single program multiple data." Nist.gov. 17 December 2004. Retrieved 9 December 2013.
  19. Darema, Frederica; George, David A.; Norton, V. Alan; Pfister, Gregory F. "A single-program-multiple-data computational model for EPEX/FORTRAN." Parallel Computing 7, no. 1 (1988): 11–24. doi:10.1016/0167-8191(88)90094-4.