The Protocol Wars were a long-running debate in computer science, lasting from the 1970s to the 1990s, in which engineers, organizations and nations became polarized over which communication protocol would result in the best and most robust networks. The debate culminated in the Internet–OSI Standards War of the 1980s and early 1990s, which was ultimately "won" by the Internet protocol suite (TCP/IP) by the mid-1990s, when it became the dominant protocol suite through the rapid adoption of the Internet.
In the late 1960s and early 1970s, the pioneers of packet switching technology built computer networks providing data communication, that is, the ability to transfer data between points or nodes. As more of these networks emerged in the mid to late 1970s, the debate about communication protocols became a "battle for access standards". An international collaboration between several national postal, telegraph and telephone (PTT) providers and commercial operators led to the X.25 standard in 1976, which was adopted on public data networks providing global coverage. Separately, proprietary data communication protocols emerged, most notably IBM's Systems Network Architecture in 1974 and Digital Equipment Corporation's DECnet in 1975.
The United States Department of Defense (DoD) developed TCP/IP during the 1970s in collaboration with universities and researchers in the US, UK and France. IPv4 was released in 1981 and made the standard for all DoD computer networking. By 1984, the international OSI reference model, which was not compatible with TCP/IP, had been agreed upon. Many European governments (particularly France, West Germany and the UK) and the United States Department of Commerce mandated compliance with the OSI model, while the US Department of Defense planned to transition from TCP/IP to OSI.
Meanwhile, the development of a complete Internet protocol suite by 1989, and partnerships with the telecommunication and computer industry to incorporate TCP/IP software into various operating systems laid the foundation for the widespread adoption of TCP/IP as a comprehensive protocol suite. While OSI developed its networking standards in the late 1980s, TCP/IP came into widespread use on multi-vendor networks for internetworking and as the core component of the emerging Internet.
Computer science was an emerging discipline in the late 1950s that began to consider time-sharing between computer users and, later, the possibility of achieving this over wide area networks. In the early 1960s, J. C. R. Licklider proposed the idea of a universal computer network while working at Bolt Beranek & Newman (BBN) and, later, leading the Information Processing Techniques Office (IPTO) at the Advanced Research Projects Agency (ARPA, later, DARPA) of the US Department of Defense (DoD). Independently, Paul Baran at RAND in the US and Donald Davies at the National Physical Laboratory (NPL) in the UK invented new approaches to the design of computer networks.[1]
Baran published a series of papers between 1960 and 1964 about dividing information into "message blocks" and dynamically routing them over distributed networks.[2] [3] [4] Davies conceived of and named the concept of packet switching using high-speed interface computers for data communication in 1965–1966.[5] [6] He proposed a national commercial data network in the UK, and designed the local-area NPL network to demonstrate and research his ideas. The first use of the term protocol in a modern data-communication context occurs in an April 1967 memorandum A Protocol for Use in the NPL Data Communications Network written by two members of Davies' team, Roger Scantlebury and Keith Bartlett.[7] [8] [9]
Licklider, Baran and Davies all found it hard to convince incumbent telephone companies of the merits of their ideas. AT&T held a monopoly on communications infrastructure in the United States, as did the General Post Office (GPO) in the United Kingdom, which was the national postal, telegraph and telephone service (PTT). They both believed speech traffic would continue to dominate and continued to invest in traditional telegraphic techniques.[10] [11] [12] [13] Telephone companies were operating on the basis of circuit switching, alternatives to which are message switching or packet switching.[14]
Bob Taylor became the director of the IPTO in 1966 and set out to achieve Licklider's vision to enable resource sharing between remote computers.[15] Taylor hired Larry Roberts to manage the programme.[16] Roberts brought Leonard Kleinrock into the project; Kleinrock had applied mathematical methods to study communication networks in his doctoral thesis.[17] At the October 1967 Symposium on Operating Systems Principles, Roberts presented the early "ARPA Net" proposal, based on Wesley Clark's idea for a message switching network using Interface Message Processors (IMPs).[18] Roger Scantlebury presented Davies' work on a digital communication network and referenced the work of Paul Baran.[19] At this seminal meeting, the NPL paper articulated how the data communications for such a resource-sharing network could be implemented.[20] [21]
Larry Roberts incorporated Davies' and Baran's ideas on packet switching into the proposal for the ARPANET.[22] The network was built by BBN. Designed principally by Bob Kahn,[23] it departed from the NPL's connectionless network model in an attempt to avoid the problem of network congestion.[24] The service offered to hosts by the network was connection oriented. It enforced flow control and error control (although this was not end-to-end).[25] [26] With the constraint that, for each connection, only one message may be in transit in the network, the sequential order of messages is preserved end-to-end.[25] This made the ARPANET what would come to be called a virtual circuit network.[27]
Packet switching can be based on either a connectionless or a connection-oriented mode, which are different approaches to data communications. A connectionless datagram service transports each data packet between two hosts independently of any other packet. Its service is best effort (meaning out-of-order packet delivery and data losses are possible). With a virtual circuit service, data can be exchanged between two host applications only after a virtual circuit has been established between them in the network. After that, flow control is imposed on sources, as much as needed by destinations and intermediate network nodes. Data are delivered to destinations in their original sequential order.[28]
Both concepts have advantages and disadvantages depending on the application domain. Where a best effort service is acceptable, an important advantage of datagrams is that a subnetwork may be kept very simple. The drawback is that, under heavy traffic, no subnetwork is inherently protected against congestion collapse. In addition, for users of the best effort service, use of network resources does not enforce any definition of "fairness", that is, of relative delay among user classes.[29] [30] [31]
Datagram services include in every packet the information needed to look up the next link in the network. Routers examine each arriving packet, look at its routing information, and decide where to forward it. This approach has the advantage that there is no inherent overhead in setting up a circuit, so a single packet can be transmitted as efficiently as a long stream. It also generally makes routing around problems simpler, as only a single routing table needs to be updated, not the information for every virtual circuit, and it requires less memory, since only one route needs to be stored per destination rather than one per virtual circuit. On the downside, every datagram must be examined, which makes forwarding (theoretically) slower.[32]
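The forwarding mechanics can be sketched in a few lines of code. The following Python fragment is purely illustrative (the table contents and names are invented, not taken from any historical implementation): every packet carries its destination address, and the router resolves the next hop independently for each packet.

    # Illustrative per-packet forwarding in a connectionless datagram network.
    # Each router holds one routing table entry per destination, shared by
    # all traffic; no call setup is needed before sending.
    ROUTING_TABLE = {
        "host-a": "link-1",
        "host-b": "link-2",
    }

    def forward_datagram(packet: dict) -> str:
        """Look up the next link using the destination carried in the packet."""
        return ROUTING_TABLE[packet["destination"]]

    # A single packet can be sent with no setup phase:
    print(forward_datagram({"destination": "host-b", "payload": b"hello"}))  # link-2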
On the ARPANET, the starting point in 1969 for connecting a host computer (i.e., a user) to an IMP (i.e., a packet switch) was the 1822 protocol, which was written by Bob Kahn.[33] Steve Crocker, a graduate student at the University of California Los Angeles (UCLA) formed a Network Working Group (NWG) that year. He said "While much of the development proceeded according to a grand plan, the design of the protocols and the creation of the RFCs was largely accidental." Under the supervision of Leonard Kleinrock at UCLA, Crocker led other graduate students, including Jon Postel and Vint Cerf, in designing a host-host protocol known as the Network Control Program (NCP). They planned to use separate protocols, Telnet and the File Transfer Protocol (FTP), to run functions across the ARPANET.[34] After approval by Barry Wessler at ARPA, who had ordered certain more exotic elements to be dropped,[35] the NCP was finalized and deployed in December 1970 by the NWG. NCP codified the ARPANET network interface, making it easier to establish, and enabling more sites to join the network.[36] [37]
Roger Scantlebury was seconded from the NPL to the British Post Office Telecommunications division (BPO-T) in 1969. There, engineers developed a packet-switching protocol from basic principles for an Experimental Packet Switched Service (EPSS) based on a virtual call capability. However, the protocols were complex and limited; Davies described them as "esoteric".[38] [39]
Rémi Després started work in 1971, at the CNET (the research center of the French PTT), on the development of an experimental packet switching network, later known as RCP. Its purpose was to put into operation a prototype packet switching service to be offered on a future public data network.[40] [41] Després simplified and improved on the virtual call approach, introducing the concept of "graceful saturated operation" in 1972.[42] He coined the term "virtual circuit" and validated the concepts on the RCP network.[43] Once a virtual circuit is set up, data packets do not have to contain any routing information, which can simplify the packet structure and improve channel efficiency. Routers are also faster because route setup is done only once; from then on, packets are simply forwarded down the existing link. One downside is that the equipment has to be more complex, as the routing information has to be stored for the length of the connection. Another is that the virtual circuit may take some time to set up end-to-end, and for small messages this time may be significant.
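The contrast with datagram forwarding can be sketched similarly; this hypothetical Python fragment (all names invented for exposition) shows the route being resolved once at call setup, after which data packets carry only a short circuit identifier.

    # Illustrative virtual-circuit forwarding: the route is computed once,
    # at setup; data packets are then forwarded by circuit id alone.
    circuits = {}       # circuit id -> outgoing link, held per switch
    next_id = 0

    def setup_circuit(destination, routing_table):
        """Resolve the route once and remember it for the life of the call."""
        global next_id
        next_id += 1
        circuits[next_id] = routing_table[destination]
        return next_id

    def forward(circuit_id):
        """Data packets carry no routing information, only their circuit id."""
        return circuits[circuit_id]

    cid = setup_circuit("host-b", {"host-a": "link-1", "host-b": "link-2"})
    print(forward(cid))  # link-2, with no per-packet route computation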
Davies had conceived and described datagram networks, done simulation work on them, and built a single packet switch with local lines. Louis Pouzin thought it looked technically feasible to employ a simpler approach to wide-area networking than that of the ARPANET.[44] In 1972, Pouzin launched the CYCLADES project, with cooperation provided by the French PTT, including free lines and modems. He began to research what would later be called internetworking;[45] at the time, he coined the term "catenet" for concatenated network.[46] The name "datagram" was coined by Halvor Bothner-By. Hubert Zimmermann was one of Pouzin's principal researchers and the team included Michel Elie, Gérard Le Lann, and others. While building the network, they were advised by BBN as consultants.[47] Pouzin's team was the first to tackle the highly complex problem of providing user applications with a reliable virtual circuit while using a best-effort service.[48] The network used unreliable, standard-sized datagrams in the packet-switched network and virtual circuits for the transport layer.[49] First demonstrated in 1973, it pioneered the use of the pure datagram model, functional layering, and the end-to-end principle.[50] Le Lann proposed the sliding window scheme for achieving reliable error and flow control on end-to-end connections.[51] [52] [53] However, the sliding window scheme was never implemented on the CYCLADES network, and the network was never interconnected with others (except for limited demonstrations using traditional telegraphic techniques).[54]
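The sliding window idea can be illustrated with a minimal sender-side sketch. The Python fragment below is a simplification for exposition only (the send and recv_ack callbacks are placeholders, and timeout-driven retransmission is omitted); it is not Le Lann's actual specification.

    # Minimal sender-side sliding window: up to WINDOW unacknowledged
    # packets may be in flight; each cumulative acknowledgement slides
    # the window forward. Retransmission on timeout is omitted.
    WINDOW = 4

    def run_sender(packets, send, recv_ack):
        base = 0          # lowest unacknowledged sequence number
        next_seq = 0      # next sequence number to transmit
        while base < len(packets):
            # Transmit while the window is not full.
            while next_seq < len(packets) and next_seq - base < WINDOW:
                send(next_seq, packets[next_seq])
                next_seq += 1
            ack = recv_ack()            # cumulative: all packets < ack arrived
            base = max(base, ack)       # slide the window forward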
Louis Pouzin's ideas to facilitate large-scale internetworking caught the attention of ARPA researchers through the International Network Working Group (INWG), an informal group established by Steve Crocker, Pouzin, Davies, and Peter Kirstein in June 1972 in Paris, a few months before the International Conference on Computer Communication (ICCC) in Washington demonstrated the ARPANET. At the ICCC, Pouzin first presented his ideas on internetworking, and Vint Cerf was approved as INWG's chair on Crocker's recommendation. INWG grew to include other American researchers, members of the French CYCLADES and RCP projects, and the British teams working on the NPL network, EPSS and the proposed European Informatics Network (EIN), a datagram network. As with Baran in the mid-1960s, when Roberts approached AT&T about taking over the ARPANET to offer a public packet-switched service, the company declined.
Bob Kahn joined the IPTO in late 1972. Although initially expecting to work in another field, he began work on satellite packet networks and ground-based radio packet networks, and recognized the value of being able to communicate across both. In spring 1973, Vint Cerf moved to Stanford University. With funding from DARPA, he began collaborating with Kahn on a new protocol to replace NCP and enable internetworking. Cerf built a research team at Stanford studying the use of fragmentable datagrams. Gérard Le Lann joined the team during 1973–74 and Cerf incorporated his sliding window scheme into the research work.
Also in the United States, Bob Metcalfe and others at Xerox PARC outlined the idea of Ethernet and the PARC Universal Packet (PUP) for internetworking.[55] INWG met at Stanford in June 1973. Zimmermann and Metcalfe dominated the discussions. Notes from the meetings were recorded by Cerf and Alex McKenzie of BBN, and published as numbered INWG Notes (some of which were also RFCs). Building on this, Kahn and Cerf presented a paper at a networking conference at the University of Sussex in England in September 1973. Their ideas were refined further in long discussions with Davies, Scantlebury, Pouzin and Zimmermann. Most of the work was done by Kahn and Cerf working as a duo.[56]
Peter Kirstein put internetworking into practice at University College London (UCL) in June 1973, connecting the ARPANET to British academic networks, the first international heterogeneous computer network. By 1975, there were 40 British academic and research groups using the link.[57]
The seminal paper, A Protocol for Packet Network Intercommunication, published by Cerf and Kahn in 1974, addressed the fundamental challenges involved in interworking across datagram networks with different characteristics, including routing in interconnected networks and packet fragmentation and reassembly.[58] The paper drew upon and extended their prior research, developed in collaboration and competition with other American, British and French researchers.[59] [60] [61] DARPA sponsored work to formulate the first version of the Transmission Control Program (TCP) later that year. Its specification, RFC 675, was written at Stanford in December by Cerf with Yogen Dalal and Carl Sunshine as a monolithic (single-layer) design. The following year, testing began through concurrent implementations at Stanford, BBN and University College London,[62] but it was not installed on the ARPANET at this time.
A protocol for internetworking was also being pursued by INWG.[63] There were two competing proposals, one based on the early Transmission Control Program proposed by Cerf and Kahn (using fragmentable datagrams), and the other based on the CYCLADES transport protocol proposed by Pouzin, Zimmermann and Elie (using standard-sized datagrams).[64] A compromise was agreed and Cerf, McKenzie, Scantlebury and Zimmermann authored an "international" end-to-end protocol.[65] [66] It was presented to the CCITT by Derek Barber in 1975 but was not adopted by the CCITT nor by the ARPANET.[67]
The fourth biennial Data Communications Symposium later that year included presentations from Davies, Pouzin, Derek Barber, and Ira Cotten about the current state of packet-switched networking. The conference was covered by Computerworld magazine, which ran a story on the "battle for access standards" between datagrams and virtual circuits, as well as a piece reporting that the "lack of standard access interfaces for emerging public packet-switched communication networks is creating 'some kind of monster' for users". At the conference, Pouzin said pressure from European PTTs had forced the Canadian DATAPAC network to change from a datagram to a virtual circuit approach,[68] although historians attribute this to IBM's rejection of a request to modify its proprietary protocol.[69] Pouzin was outspoken in his advocacy for datagrams and his attacks on virtual circuits and monopolies. He spoke about the "political significance of the [datagram versus virtual circuit] controversy," which he saw as "initial ambushes in a power struggle between carriers and the computer industry. Everyone knows in the end, it means IBM vs. Telecommunications, through mercenaries."
After Larry Roberts and Barry Wessler left ARPA in 1973 to found Telenet, a commercial packet-switched network in the US, they joined the international effort to standardize a protocol for packet switching based on virtual circuits shortly before it was finalized.[70] With contributions from the French, British, and Japanese PTTs, particularly the work of Rémi Després on RCP and TRANSPAC, along with concepts from DATAPAC in Canada, and Telenet in the US, the X.25 standard was agreed by the CCITT in 1976.[71] [72] X.25 virtual circuits were easily marketed because they permit simple host protocol support.[73] They also satisfy the INWG expectation of 1972 that each subnetwork can exercise its own protection against congestion (a feature missing with datagrams).[74] [75]
Larry Roberts adopted X.25 on Telenet and found that "datagram packets are now more expensive than VC packets" in 1978. Vint Cerf said Roberts turned down his suggestion to use TCP when he built Telenet, saying that people would only buy virtual circuits and he could not sell datagrams. Roberts predicted that "As part of the continuing evolution of packet switching, controversial issues are sure to arise." Pouzin remarked that "the PTT's are just trying to drum up more business for themselves by forcing you to take more service than you need."[76]
Internetworking protocols were still in their infancy. Various groups, including ARPA researchers, the CYCLADES team, and others participating in INWG, were researching the issues involved, including the use of gateways to connect two networks.[77] At the National Physical Laboratory in the UK, Davies' team studied the "basic dilemma" involved in interconnecting networks: a common host protocol requires restructuring existing networks that use different protocols. To explore this dilemma, the NPL network was connected with the EIN by translating between two different host protocols, that is, using a gateway. Concurrently, the NPL connection to the EPSS used a common host protocol in both networks. NPL research confirmed that establishing a common host protocol would be more reliable and efficient.
The CYCLADES project, however, was shut down in the late 1970s for budgetary, political and industrial reasons and Pouzin was "banished from the field he had inspired and helped to create".
The design of the Transmission Control Program incorporated both connection-oriented links and datagram services between hosts. A DARPA internetworking experiment in July 1977 linking the ARPANET, SATNET and PRNET demonstrated its viability.[78] Subsequently, DARPA and collaborating researchers at Stanford, UCL and BBN, among others, began work on the Internet, publishing a series of Internet Experiment Notes.[79] [80] Bob Kahn's efforts led to the absorption of MIT's proposal by Dave Clark and Dave Reed for a Data Stream Protocol (DSP) into version 3 of TCP, written in January 1978 by Cerf, now at DARPA, and Jon Postel at the Information Sciences Institute of the University of Southern California (USC).[81] Following discussions with Yogen Dalal and Bob Metcalfe at Xerox PARC,[82] [83] in version 4 of TCP, first drafted in September 1978, Postel split the Transmission Control Program into two distinct protocols: the Transmission Control Protocol (TCP) as a reliable connection-oriented service and the Internet Protocol (IP) as a connectionless service.[84] For applications that did not need the services of TCP, an alternative called the User Datagram Protocol (UDP) was added to provide direct access to the basic service of IP. Referred to as TCP/IP from December 1978,[85] version 4 was made the standard for all military computer networking in March 1982.[86] It was installed on SATNET and adopted by NORSAR/NDRE in March and by Peter Kirstein's group at UCL in November.[87] On January 1, 1983, known as "flag day", TCP/IP was installed on the ARPANET.[88] [89] This resulted in a networking model that became known as the DoD internet architecture model (DoD model for short) or DARPA model.[90] [91] [92] Leonard Kleinrock's theoretical work on the performance of the ARPANET, published in the mid-1970s, underpinned the development of the protocol.[93] [94] [95]
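The division of labor between the two protocols is still visible in modern socket APIs. The Python sketch below contrasts the two services layered over IP; the address and port are placeholders and no peer is assumed to be listening, so it illustrates the calls rather than a working session.

    import socket

    # TCP: connection-oriented; a connection (virtual-circuit-style setup)
    # precedes a reliable, ordered byte stream.
    tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    tcp.connect(("192.0.2.1", 7))        # placeholder documentation address
    tcp.sendall(b"reliable byte stream")
    tcp.close()

    # UDP: connectionless, best-effort; each datagram stands alone and
    # gives near-direct access to IP's basic service.
    udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    udp.sendto(b"independent datagram", ("192.0.2.1", 7))  # no setup needed
    udp.close()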
The Coloured Book protocols, developed by British Post Office Telecommunications and the academic community at UK universities, gained some acceptance internationally as the first complete X.25 standard. First defined in 1975, they gave the UK "several years lead over other countries" but were intended as "interim standards" until international agreement was reached.[96] [97] The X.25 standard gained political support in European countries and from the European Economic Community (EEC). The EIN, which was based on datagrams, was replaced with Euronet, which used X.25.[98] [99] Peter Kirstein wrote that European networks tended to be short-term projects with smaller numbers of computers and users. As a result, the European networking activities did not lead to any strong standards except X.25, which became the main European data protocol for fifteen to twenty years. Kirstein said his group at University College London was widely involved, partly because they were one of the groups with the most expertise, and partly to try to ensure that the British activities, such as the JANET NRS, did not diverge too far from the US. The construction of public data networks based on the X.25 protocol suite continued through the 1980s; international examples included the International Packet Switched Service (IPSS) and the SITA network.[100] Complemented by the X.75 standard, which enabled internetworking across national PTT networks in Europe and commercial networks in North America, this led to a global infrastructure for commercial data transport.[101] [102]
Computer manufacturers developed proprietary protocol suites such as IBM's Systems Network Architecture (SNA), Digital Equipment Corporation's (DEC's) DECnet, Xerox's Xerox Network Systems (XNS, based on PUP) and Burroughs' BNA. By the end of the 1970s, IBM's networking activities were, by some measures, two orders of magnitude larger in scale than the ARPANET.[103] During the late 1970s and most of the 1980s, there remained a lack of open networking options. Therefore, proprietary standards, particularly SNA and DECnet, as well as some variants of XNS (e.g., Novell NetWare and Banyan VINES), were commonly used on private networks, becoming somewhat "de facto" industry standards.[104] Ethernet, promoted by DEC, Intel, and Xerox, outcompeted MAP/TOP, promoted by General Motors and Boeing.[105] DEC was an exception among the computer manufacturers in supporting the peer-to-peer approach.[106]
In the US, the National Science Foundation (NSF), NASA, and the United States Department of Energy (DoE) all built networks variously based on the DoD model, DECnet, and IP over X.25.
The early research and development of standards for data networks and protocols culminated in the Internet–OSI Standards War in the 1980s and early 1990s. Engineers, organizations and nations became polarized over the issue of which standard would result in the best and most robust computer networks.[107] Both standards are open and non-proprietary in addition to being incompatible,[108] although "openness" may have worked against OSI while being successfully employed by Internet advocates.[109] [110]
Researchers in the UK and elsewhere identified the need for defining higher-level protocols. The UK National Computing Centre publication 'Why Distributed Computing', which was based on extensive research into future potential configurations for computer systems,[111] resulted in the UK presenting the case for an international standards committee to cover this area at the ISO meeting in Sydney in March 1977.[112]
Hubert Zimmermann, with Charles Bachman as chairman, played a key role in the development of the Open Systems Interconnection reference model. They considered it too early to define a set of binding standards while the technology was still developing, since irreversible commitment to a particular standard might prove sub-optimal or constraining in the long run.[113] Although the committee was dominated by computer manufacturers, it had to contend with many competing priorities and interests. The rate of technological change made it necessary to define a model that new systems could converge to, rather than standardizing procedures after the fact; the reverse of the traditional approach to developing standards.[114] Although not a standard itself, the model was an architectural framework that could accommodate existing and future standards.[115]
Beginning in 1978, international work led to a draft proposal in 1980.[116] In developing the proposal, there were clashes of opinions between computer manufacturers and PTTs, and of both against IBM.[117] The final OSI model was published in 1984 by the International Organization for Standardization (ISO) in alliance with the International Telecommunication Union Telecommunication Standardization Sector (ITU-T), which was dominated by the PTTs.[118]
The most fundamental idea of the OSI model was that of a "layered" architecture. The layering concept was simple in principle but very complex in practice. The OSI model redefined how engineers thought about network architectures.
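The essence of layering, common to OSI and its rivals, is that each layer treats the data handed down from the layer above as opaque payload and prepends its own header. The Python sketch below is a deliberately simplified illustration; the layer names and header placeholders are not OSI encodings.

    # Each layer wraps the payload from the layer above with its own header,
    # so the outermost header on the wire belongs to the lowest layer.
    def encapsulate(payload, layers):
        for layer in layers:             # walk down the stack, top to bottom
            payload = f"[{layer}-hdr]".encode() + payload
        return payload

    wire = encapsulate(b"data", ["application", "transport", "network", "link"])
    print(wire)  # b'[link-hdr][network-hdr][transport-hdr][application-hdr]data'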
The DoD model and other existing protocol suites, such as X.25 and SNA, all quickly adopted a layered approach in the late 1970s.[119] Although the OSI model shifted power away from the PTTs and IBM towards smaller manufacturers and users, the "strategic battle" remained the competition between the ITU's X.25 and proprietary standards, particularly SNA.[120] Neither was fully OSI-compliant. Proprietary protocols were based on closed standards and struggled to adopt layering, while X.25 was limited in terms of speed and the higher-level functionality that would become important for applications.[121] As early as 1982, "zealous" advocates of the OSI reference model were being criticised, as were the functionality of the X.25 protocol and its use as an "'end-to-end' protocol in the sense of a Transport or Host-to-Host protocol".
Vint Cerf formed the Internet Configuration Control Board (ICCB) in 1979 to oversee the network's architectural evolution and field technical questions. However, DARPA was still in control and, outside the nascent Internet community, TCP/IP was not even a candidate for universal adoption.[122] [123] The implementation in 1985 of the Domain Name System proposed by Paul Mockapetris at USC, which enabled network growth by facilitating cross-network access, and the development of TCP congestion control by Van Jacobson in 1986–88, led to a complete protocol suite, as outlined in RFC 1122 and RFC 1123 in 1989. This laid the foundation for the growth of TCP/IP as a comprehensive protocol suite, which became known as the Internet protocol suite.[124]
DARPA studied and implemented gateways, which helped to neutralize X.25 as a rival networking paradigm. The computer science historian Janet Abbate explained: "by running TCP/IP over X.25, [D]ARPA reduced the role of X.25 to providing a data conduit, while TCP took over responsibility for end-to-end control. X.25, which had been intended to provide a complete networking service, would now be merely a subsidiary component of [D]ARPA's own networking scheme. The OSI model reinforced this reinterpretation of X.25's role. Once the concept of a hierarchy of protocols had been accepted, and once TCP, IP, and X.25 had been assigned to different layers in this hierarchy, it became easier to think of them as complementary parts of a single system, and more difficult to view X.25 and the Internet protocols as distinct and competing systems."
The DoD reduced research funding for networks, responsibilities for governance shifted to the National Science Foundation and the ARPANET was shut down in 1990.[125]
Historian Andrew L. Russell wrote that Internet engineers such as Danny Cohen and Jon Postel were accustomed to continual experimentation in a fluid organizational setting through which they developed TCP/IP. They viewed OSI committees as overly bureaucratic and out of touch with existing networks and computers. This alienated the Internet community from the OSI model. A dispute broke out within the Internet community after the Internet Architecture Board (IAB) proposed replacing the Internet Protocol in the Internet with the OSI Connectionless Network Protocol (CLNP). In response, Vint Cerf performed a striptease in a three-piece suit while presenting to the 1992 Internet Engineering Task Force (IETF) meeting, revealing a T-shirt emblazoned with "IP on Everything". According to Cerf, his intention was to reiterate that a goal of the IAB was to run IP on every underlying transmission medium. At the same meeting, David Clark summarized the IETF approach with the famous saying "We reject: kings, presidents, and voting. We believe in: rough consensus and running code." The Internet Society (ISOC) was chartered that year.
Cerf later said the social culture (group dynamics) that first evolved during the work on the ARPANET was as important as the technical developments in enabling the governance of the Internet to adapt to the scale and challenges involved as it grew.
François Flückiger wrote that "firms that win the Internet market, like Cisco, are small. Simply, they possess the Internet culture, are interested in it and, notably, participate in IETF."[126]
Furthermore, the Internet community was opposed to a homogeneous approach to networking, such as one based on a proprietary standard such as SNA. They advocated for a pluralistic model of internetworking where many different network architectures could be joined into a network of networks.[127]
Russell notes that Cohen, Postel and others were frustrated with technical aspects of OSI. The model defined seven layers of computer communications, from physical media in layer 1 to applications in layer 7, more layers than the network engineering community had anticipated. In 1987, Steve Crocker said that although they had envisaged a hierarchy of protocols in the early 1970s, "If we had only consulted the ancient mystics, we would have seen immediately that seven layers were required." Some sources, however, read this as an acknowledgement that the four layers of the Internet protocol suite were inadequate.[128]
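For reference, the layer names at issue can be set side by side; the sketch below uses the standard OSI layer names and the four-layer description commonly applied to the Internet suite (the grouping is the conventional one, not a claim made by the sources above).

    # The seven OSI layers (bottom to top) versus the four layers usually
    # used to describe the Internet protocol suite; OSI's session and
    # presentation layers have no separate Internet counterparts.
    OSI_LAYERS = {
        1: "Physical", 2: "Data Link", 3: "Network", 4: "Transport",
        5: "Session", 6: "Presentation", 7: "Application",
    }
    INTERNET_LAYERS = {1: "Link", 2: "Internet", 3: "Transport", 4: "Application"}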
Internet advocates viewed strict layering in OSI as inefficient, since it did not allow trade-offs ("layer violations") to improve performance. The OSI model also allowed what some saw as too many transport protocols (five, compared with two for TCP/IP). Furthermore, OSI allowed both the datagram and the virtual circuit approach at the network layer, which are non-interoperable options.
By the early 1980s, the conference circuit became more acrimonious. Carl Sunshine summarized in 1989: "In hindsight, much of the networking debate has resulted from differences in how to prioritize the basic network design goals such as accountability, reliability, robustness, autonomy, efficiency, and cost effectiveness. Higher priority on robustness and autonomy led to the DoD Internet design, while the PDNs have emphasized accountability and controllability."
Richard des Jardins, an early contributor to the OSI reference model, captured the intensity of the rivalry in a 1992 article by saying "Let's continue to get the people of good will from both communities to work together to find the best solutions, whether they are two-letter words or three-letter words, and let's just line up the bigots against a wall and shoot them."
In 1996, RFC 1958 described the "Architectural Principles of the Internet" by saying "in very general terms, the community believes that the goal is connectivity, the tool is the Internet Protocol, and the intelligence is end to end rather than hidden in the network."
Beginning in the early 1980s, DARPA pursued commercial partnerships with the telecommunication and computer industry which enabled the adoption of TCP/IP.[129] In Europe, CERN purchased UNIX machines with TCP/IP for their intranet between 1984 and 1988.[130] Nonetheless, Paul Bryant, the UK representative on the European Academic and Research Network (EARN) Board of Directors,[131] said "By the time JNT [the UK academic network JANET] came along [in 1984] we could demonstrate X25… and we firmly believed that BT [British Telecom] would provide us with the network infrastructure and we could do away with leased lines and experimental work. If we had gone with DARPA then we would not have expected to be able to use a public service. In retrospect the flaws in that argument are clear but not at the time. Although we were fairly proud of what we were doing, I don't think it was national pride or anti USA that drove us, it was a belief that we were doing the right thing. It was the latter that translated to religious dogma." JANET was a free X.25-based network for academic use, not research; experiments and other protocols were forbidden.[132]
The DARPA Internet was still a research project that did not allow commercial traffic or for-profit services. The NSFNET initiated operations in 1986 using TCP/IP but, two years later, the US Department of Commerce mandated compliance with the OSI model and the Department of Defense planned to transition away from TCP/IP to OSI.[133] Carl Sunshine wrote in 1989 that "by the mid-1980s ... serious performance problems were emerging [with TCP/IP], and it was beginning to look like the critics of "stateless" datagram networking might have been right on some points".
The major European countries and the EEC endorsed OSI. They founded RARE and associated national network operators (such as DFN, SURFnet, SWITCH) to promote OSI protocols, and restricted funding for non-OSI-compliant protocols. However, by 1988, the Internet community had defined the Simple Network Management Protocol (SNMP) to enable management of network devices (such as routers) on multi-vendor networks, and the Interop '88 trade show showcased new products for implementing networks based on TCP/IP.[134] [135] The same year, EUnet, the European UNIX Network, announced its conversion to Internet technology. By 1989, the OSI advocate Brian Carpenter made a speech at a technical conference entitled "Is OSI Too Late?" which received a standing ovation.[136] [137] OSI was formally defined, but vendor products from computer manufacturers and network services from PTTs were still to be developed.[138] [139] TCP/IP, by comparison, was not an official standard (it was defined in unofficial RFCs), but UNIX workstations with both Ethernet and TCP/IP included had been available since 1983 and now served as a de facto interoperability standard. Carl Sunshine noted that "research is underway on how to optimize TCP/IP performance over variable delay and/or very-high-speed networks". However, Bob Metcalfe said "it has not been worth the ten years wait to get from TCP to TP4, but OSI is now inevitable" and Sunshine expected "OSI architecture and protocols ... will dominate in the future." The following year, in 1990, Cerf said: "You can't pick up a trade press article anymore without discovering that somebody is doing something with TCP/IP, almost in spite of the fact that there has been this major effort to develop international standards through the international standards organization, the OSI protocol, which eventually will get there. It's just that they are taking a lot of time."[140]
By the beginning of the 1990s, some smaller European countries had adopted TCP/IP. In February 1990, RARE stated "without putting into question its OSI policy, [RARE] recognizes the TCP/IP family of protocols as an open multivendor suite, well adapted to scientific and technical applications." In the same month, CERN established a transatlantic TCP/IP link with Cornell University in the United States.[141] Conversely, starting in August 1990, the NSFNET backbone supported the OSI CLNP in addition to TCP/IP. CLNP was demonstrated in production on NSFNET in April 1991, and OSI demonstrations, including interconnections between US and European sites, were planned at the Interop '91 conference in October that year.[142]
At the Rutherford Appleton Laboratory (RAL) in the United Kingdom in January 1991, DECnet represented 75% of traffic, attributed to Ethernet between VAXes. IP was the second most popular set of protocols with 20% of traffic, attributed to UNIX machines for which "IP is the natural choice". Paul Bryant, Head of Communications and Small Systems at RAL, wrote "Experience has shown that IP systems are very easy to mount and use, in contrast to such systems as SNA and to a lesser extent X.25 and Coloured Books where the systems are rather more complex." Bryant continued "The principal network within the USA for academic traffic is now based on IP. IP has recently become popular within Europe for inter-site traffic and there are moves to try and coordinate this activity. With the emergence of such a large combined USA/Europe network there are great attractions for UK users to have good access to it. This can be achieved by gatewaying Coloured Book protocols to IP or by allowing IP to penetrate the UK. Gateways are well known to be a cause of loss of quality and frustration. Allowing IP to penetrate may well upset the networking strategy of the UK." Similar views were shared by others at the time, including Louis Pouzin. At CERN, Flückiger reflected "The technology is simple, efficient, is integrated into UNIX-type operating systems and costs nothing for the users' computers. The first companies that commercialize routers, such as Cisco, seem healthy and supply good products. Above all, the technology used for local campus networks and research centres can also be used to interconnect remote centers in a simple way."
Beginning in March 1991, the JANET IP Service (JIPS) was set up as a pilot project to host IP traffic on the existing network.[143] Within eight months, IP traffic had exceeded the levels of X.25 traffic, and IP support became official in November. Also in 1991, Dai Davies introduced Internet technology over X.25 into the pan-European NREN, EuropaNet, although he experienced personal opposition to this approach.[144] [145] EARN and RARE adopted IP around the same time, and the European Internet backbone EBONE became operational in 1992. OSI usage on the NSFNET remained low compared with TCP/IP. In the UK, the JANET community discussed a transition to OSI protocols, beginning with a move to X.400 mail, but this never happened. The X.25 service was closed in August 1997.[146]
Mail was commonly delivered via Unix to Unix Copy Program (UUCP) in the 1980s, which was well suited for handling message transfers between machines that were intermittently connected. The Government Open Systems Interconnection Profile (GOSIP), developed in the late 1980s and early 1990s, would have led to X.400 adoption. Proprietary commercial systems offered an alternative. In practice, use of the Internet suite of email protocols (SMTP, POP and IMAP) grew rapidly.
The invention of the World Wide Web in 1989 by Tim Berners-Lee at CERN, as an application on the Internet,[147] brought many social and commercial uses to what was previously a network of networks for academic and research institutions.[148] [149] The Web began to enter everyday use in 1993–4.[150] The US National Institute for Standards and Technology proposed in 1994 that GOSIP should incorporate TCP/IP and drop the requirement for compliance with OSI, which was adopted into Federal Information Processing Standards the following year.[151] NSFNET had altered its policies to allow commercial traffic in 1991,[152] and was shut down in 1995, removing the last restrictions on the use of the Internet to carry commercial traffic.[153] Subsequently, the Internet backbone was provided by commercial Internet service providers and Internet connectivity became ubiquitous.[154] [155]
As the Internet evolved and expanded exponentially, an enhanced protocol was developed, IPv6, to address IPv4 address exhaustion.[156] In the 21st century, the Internet of things is leading to the connection of new types of devices to the Internet, bringing reality to Cerf's vision of "IP on Everything".[157] Nonetheless, shortcomings exist with today's Internet; for example, insufficient support for multihoming.[158] [159] Alternatives have been proposed, such as Recursive Network Architecture,[160] and Recursive InterNetwork Architecture.[161]
The seven-layer OSI model is still used as a reference for teaching and documentation;[162] however, the OSI protocols conceived for the model did not gain popularity. Some engineers argue the OSI reference model is still relevant to cloud computing.[163] Others say the original OSI model does not fit today's networking protocols and have suggested instead a simplified approach.[164]
Other standards such as X.25 and SNA remain niche players.[165]
Katie Hafner and Matthew Lyon published one of the earliest in-depth and comprehensive histories of the ARPANET and how it led to the Internet. Where Wizards Stay Up Late: The Origins of the Internet (1996) explores the "human dimension" of the development of the ARPANET covering the "theorists, computer programmers, electronic engineers, and computer gurus who had the foresight and determination to pursue their ideas and affect the future of technology and society".[166] [167]
Roy Rosenzweig suggested in 1998 that no one single account of the history of the Internet is sufficient and there will need to be a more adequate history written that includes aspects of many books.[168]
Janet Abbate's 1999 book Inventing the Internet was widely reviewed as an important work on the history of computing and networking, particularly in highlighting the role of social dynamics and of non-American participation in early networking development.[169] [170] The book was also praised for its use of archival resources to tell the history.[171] She has since written about the need for historians to be aware of the perspectives they take in writing about the history of the Internet and explored the implications of defining the Internet in terms of "technology, use and local experience" rather than through the lens of the spread of technologies from the United States.[172] [173]
In his many publications on the "histories of networking", Andrew L. Russell argues scholars could and should look differently at the history of the Internet. His work shifts scholarly and popular understanding about the origins of the Internet and contemporary work in Europe that both competed and cooperated with the push for TCP/IP.[174] [175] [176] James Pelkey conducted interviews with Internet pioneers in the late 1980s and completed his book with Andrew Russell in 2022.
Martin Campbell-Kelly and Valérie Schafer have focused on British and French contributions as well as global and international considerations in the development of packet switching, internetworking and the Internet.
In chronological order: