Distributed data processing[1] (DDP)[2] was the term that IBM used for the IBM 3790 (1975) and its successor, the IBM 8100 (1979). Datamation described the 3790 in March 1979 as "less than successful."[3][4]
Distributed data processing was used by IBM to refer to two environments, each pairing a telecommunications monitor with a database system.[5] In this layered design, a message containing the information needed to form a transaction was passed to an application program for processing.[6] IBM released development tools, such as program validation services, to facilitate expansion.[7]
Use of "a number of small computers linked to a central computer" permitted both local[8] and central processing, each optimized for what it did best. Terminals,[9] including those described as intelligent, were typically attached locally[10] to a "satellite processor."[11] Central systems, sometimes multiprocessors, grew to handle the load.[12] Some of this extra capacity, of necessity, was used to enhance data security.[13] Years before open systems made their presence felt, the goal of some hardware suppliers was "to replace the big, central mainframe computer with an array of smaller computers that are tied together."[14]
Hadoop[15] adds another term to the mix: the distributed file system. Tools developed for this form of distributed data processing include new programming languages.
In 1976[16] Turnkey Systems Inc. (TSI)/DPF Inc. introduced a hardware/software telecommunications front end that off-loaded some of the processing involved in distributed data processing. Named Flexicom,[17] it used an IBM-manufactured CPU running (mainframe) DOS Release 26.