ALOHAnet, also known as the ALOHA System,[1][2][3] or simply ALOHA, was a pioneering computer networking system developed at the University of Hawaii. ALOHAnet became operational in June 1971, providing the first public demonstration of a wireless packet data network.[4][5]
The ALOHAnet used a new method of medium access, called ALOHA random access, and operated in the experimental ultra high frequency (UHF) band. In its simplest form, later known as Pure ALOHA, remote units communicated with a base station (Menehune) over two separate radio frequencies (for inbound and outbound traffic respectively). Nodes did not wait for the channel to be clear before sending; instead, they waited for an acknowledgment of successful receipt of a message and re-sent it if none was received. Nodes would also stop and re-transmit data if they detected any other messages while transmitting. While simple to implement, this scheme yields a maximum channel efficiency of only 18.4%. A later advancement, Slotted ALOHA, improved the efficiency of the protocol by reducing the chance of collision, raising the maximum throughput to 36.8%.
ALOHA was subsequently employed in the cable-based Ethernet network in the 1970s, and following regulatory developments in the early 1980s it became possible to use ALOHA random-access techniques in both Wi-Fi and mobile telephone networks. ALOHA channels were used in a limited way in the 1980s in 1G mobile phones for signaling and control purposes. In the late 1980s, the European group developing the pan-European digital mobile communication system GSM greatly expanded the use of ALOHA channels for access to radio channels in mobile telephony. In the early 2000s additional ALOHA channels were added to 2.5G and 3G mobile phones with the widespread introduction of General Packet Radio Service (GPRS), using a slotted-ALOHA random-access channel combined with a version of the Reservation ALOHA scheme first analyzed by a group at BBN Technologies.
One of the early computer networking designs, the ALOHA network was begun in September 1968 at the University of Hawaii under the leadership of Norman Abramson and Franklin Kuo, along with Thomas Gaarder, Shu Lin, Wesley Peterson and Edward ("Ned") Weldon. The goal was to use low-cost commercial radio equipment to connect users on Oahu and the other Hawaiian islands with a central time-sharing computer on the main Oahu campus. The first packet broadcasting unit went into operation in June 1971. Terminals were connected to a special-purpose terminal connection unit using RS-232 at 9600 bit/s.[6]
ALOHA was originally a contrived acronym standing for Additive Links On-line Hawaii Area.[7]
The original version of ALOHA used two distinct frequencies in a hub configuration, with the hub machine broadcasting packets to everyone on the outbound channel, and the various client machines sending data packets to the hub on the inbound channel. If data was received correctly at the hub, a short acknowledgment packet was sent to the client; if an acknowledgment was not received by a client machine after a short wait time, it would automatically retransmit the data packet after waiting a randomly selected time interval. This acknowledgment mechanism was used to detect and correct for collisions created when two client machines both attempted to send a packet at the same time.
ALOHAnet's primary importance was its use of a shared medium for client transmissions. Unlike the ARPANET where each node could only talk to a single node at the other end of a wire or satellite circuit, in ALOHAnet all client nodes communicated with the hub on the same frequency. This meant that some sort of mechanism was needed to control who could talk at what time. The ALOHAnet solution was to allow each client to send its data without controlling when it was sent, and implementing an acknowledgment/retransmission scheme to deal with collisions. This approach radically reduced the complexity of the protocol and the networking hardware, since nodes did not need to negotiate who was allowed to speak.
This solution became known as a pure ALOHA, or random-access, channel and was the basis for subsequent Ethernet development and later Wi-Fi networks. Various versions of the ALOHA protocol (such as Slotted ALOHA) also appeared later in satellite communications, and were used in wireless data networks such as ARDIS, Mobitex, CDPD, and GSM.
The Aloha network introduced the mechanism of randomized multiple access, which resolved device transmission collisions by having each device transmit a packet immediately and, if no acknowledgment was received, retransmit it after a random waiting time.[8] The probability distribution of this random waiting time for retransmission of an unacknowledged packet is critically important for the stability of Aloha-type communication systems. The average waiting time for retransmission is typically shorter than the average time for generation of a new packet from the same client node, but it should not be allowed to become so short as to compromise the stability of the network, causing a collapse in its overall throughput.[9]
Also important was ALOHAnet's use of the outgoing hub channel to broadcast packets directly to all clients on a second shared frequency, using an address in each packet to allow selective receipt at each client node. Separate frequencies were used for incoming and outgoing communications to the hub so that acknowledgments could be received without contending with client transmissions on the inbound channel.
Figure: Pure ALOHA protocol. Boxes indicate frames. Shaded boxes indicate frames that have collided.
The original version of the protocol (now called Pure ALOHA, and the one implemented in ALOHAnet) was quite simple: if you have data to send, send it; if, while transmitting, you detect a transmission from another station, a collision has occurred and the frame must be sent again later.
Pure ALOHA does not check whether the channel is busy before transmitting. Since collisions can occur and data may have to be sent again, ALOHA cannot use 100% of the capacity of the communications channel. How long a station waits before it retransmits, and the likelihood of a collision, are interrelated, and both affect how efficiently the channel can be used. This means that the concept of "retransmit later" is a critical aspect: the quality of the backoff scheme chosen significantly influences the efficiency of the protocol, the ultimate channel capacity, and the predictability of its behavior.
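To make the scheme concrete, here is a minimal sketch of the sender-side logic in Python; the channel interface (`send_frame`, `wait_for_ack`), the backoff bound, and the retry limit are illustrative assumptions, not details of the original ALOHAnet hardware.

```python
import random
import time

def pure_aloha_send(send_frame, wait_for_ack, frame,
                    max_backoff=1.0, max_retries=8):
    """Sender side of Pure ALOHA (illustrative sketch).

    send_frame(frame) -- transmit on the inbound channel (no carrier sensing)
    wait_for_ack()    -- block until the ack timeout; True if the hub acked
    """
    for _ in range(max_retries):
        send_frame(frame)            # transmit immediately; do not listen first
        if wait_for_ack():           # hub acknowledged: success
            return True
        # No ack, so assume a collision. Wait a random interval before
        # retrying, so two colliding stations are unlikely to collide again.
        time.sleep(random.uniform(0.0, max_backoff))
    return False                     # persistent failure (ALOHAnet flashed a lamp)

# Toy usage: a channel whose acknowledgments are lost 30% of the time.
if __name__ == "__main__":
    ok = pure_aloha_send(lambda f: None, lambda: random.random() > 0.3,
                         b"hello", max_backoff=0.01)
    print("delivered" if ok else "gave up")
```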
To assess Pure ALOHA, there is a need to predict its throughput, the rate of (successful) transmission of frames.[10] First make a few simplifying assumptions: all frames have the same length; a station cannot generate a new frame while transmitting or trying to transmit; and the population of stations attempts to transmit (both new frames and retransmissions of collided frames) according to a Poisson distribution.
Let $T$ refer to the time needed to transmit one frame on the channel, and define frame-time as a unit of time equal to $T$. Let $G$ refer to the mean used in the Poisson distribution over transmission-attempt amounts. That is, on average, there are $G$ transmission attempts per frame-time.
Figure: Overlapping frames in the pure ALOHA protocol. Frame-time is equal to 1 for all frames.
Consider what needs to happen for a frame to be transmitted successfully. Let $t_0$ refer to the time at which it is intended to send a frame. It is preferable to use the channel for one frame-time beginning at $t_0$, and for all other stations to refrain from transmitting during this time. Any other frame whose transmission begins between $t_0 - T$ and $t_0 + T$ will overlap it, so the vulnerable period spans two frame-times.
For any frame-time, the probability of there being $k$ transmission attempts during that frame-time is:

$$\frac{G^k e^{-G}}{k!}$$
Figure: Comparison of Pure ALOHA and Slotted ALOHA throughput versus traffic load.
The average number of transmission attempts for two consecutive frame-times is $2G$. Hence, for any pair of consecutive frame-times, the probability of there being $k$ transmission attempts during those two frame-times is:

$$\frac{(2G)^k e^{-2G}}{k!}$$
Therefore, the probability ($\mathrm{Prob}_{\text{pure}}$) of there being zero transmission attempts between $t_0 - T$ and $t_0 + T$ (and thus of a successful transmission) is:

$$\mathrm{Prob}_{\text{pure}} = e^{-2G}$$
The throughput can be calculated as the rate of transmission attempts multiplied by the probability of success, and it can be concluded that the throughput ($S_{\text{pure}}$) is:

$$S_{\text{pure}} = G e^{-2G}$$
The maximum throughput is $1/(2e)$ frames per frame-time (reached when $G = 0.5$), which is approximately 0.184 frames per frame-time, or 18.4%.
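The figure of 18.4% can be checked numerically from the formula above; the following sketch evaluates $S_{\text{pure}} = G e^{-2G}$ over a range of offered loads and locates the maximum.

```python
import math

def pure_aloha_throughput(G: float) -> float:
    """Pure ALOHA throughput S = G * e^(-2G) at offered load G."""
    return G * math.exp(-2.0 * G)

# Sweep the offered load in steps of 0.01 and locate the maximum.
loads = [i / 100 for i in range(1, 201)]
best = max(loads, key=pure_aloha_throughput)
print(best, pure_aloha_throughput(best))   # prints: 0.5 0.18393972058572117
```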
Figure: Slotted ALOHA protocol. Boxes indicate frames. Shaded boxes indicate frames that are in the same slot and therefore collide.
An improvement to the original ALOHA protocol was Slotted ALOHA, which introduced discrete time slots and increased the maximum throughput.[11] A station can start a transmission only at the beginning of a time slot, and thus collisions are reduced. In this case, only transmission attempts within one frame-time, rather than two consecutive frame-times, need to be considered, since a collision can occur only between frames sent in the same slot. Thus, the probability of there being zero transmission attempts by other stations in a single time slot is:
$$\mathrm{Prob}_{\text{slotted}} = e^{-G}$$
The probability of a transmission requiring exactly $k$ attempts ($k-1$ collisions followed by one success) is:

$$\mathrm{Prob}_{\text{slotted},k} = e^{-G}\left(1 - e^{-G}\right)^{k-1}$$
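Since this is a geometric distribution with success probability $e^{-G}$, it follows (a standard result, not stated in the source) that the expected number of transmission attempts per delivered frame is

$$E[\text{attempts}] = \sum_{k=1}^{\infty} k\, e^{-G}\left(1-e^{-G}\right)^{k-1} = e^{G},$$

which grows exponentially with the offered load and is one way to see why an overloaded ALOHA channel degrades sharply.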
The throughput is:

$$S_{\text{slotted}} = G e^{-G}$$
The maximum throughput is 1/e frames per frame-time (reached when G = 1), which is approximately 0.368 frames per frame-time, or 36.8%.
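This limit can also be reproduced by a small Monte Carlo sketch: under the Poisson-arrivals assumption above, a slot carries a successful frame exactly when a single transmission attempt falls in it. The sampler below is a standard Knuth-style Poisson draw; the slot count and seed are illustrative.

```python
import math
import random

def poisson_sample(lam: float, rng: random.Random) -> int:
    """Draw a Poisson(lam) variate (Knuth's multiplication method)."""
    L = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def slotted_aloha_throughput(G: float, slots: int = 200_000) -> float:
    """Estimate throughput: a slot succeeds iff exactly one attempt lands in it."""
    rng = random.Random(1)
    successes = sum(poisson_sample(G, rng) == 1 for _ in range(slots))
    return successes / slots

print(slotted_aloha_throughput(1.0))   # approximately 0.368, i.e. G * e^(-G) at G = 1
```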
Slotted ALOHA is used in low-data-rate tactical satellite communications networks by military forces, in subscriber-based satellite communications networks, mobile telephony call setup, set-top box communications and in contactless RFID technologies.
Reservation ALOHA, or R-ALOHA, is an effort to improve the efficiency of Slotted ALOHA. The improvements with Reservation ALOHA are markedly shorter delays and the ability to support higher levels of utilization efficiently. As a point of comparison, simulations have shown that Reservation ALOHA exhibits less delay at 80% utilization than Slotted ALOHA at 20–36% utilization.[12]
The chief difference between Slotted and Reservation ALOHA is that with Slotted ALOHA, any slot is available for utilization without regard to prior usage. Under Reservation ALOHA's contention-based reservation schema, a slot is temporarily considered "owned" by the station that successfully used it. A station simply stops sending data once it has completed its transmission, implicitly releasing its slot. As a rule, idle slots are considered available to all stations, which may then implicitly reserve (utilize) a slot on a contention basis.
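As an illustration of this implicit-reservation rule, the per-slot bookkeeping might be sketched as follows; the class and method names are hypothetical, and real R-ALOHA variants differ in detail.

```python
class RAlohaSlot:
    """Ownership state of one recurring slot position (illustrative sketch)."""

    def __init__(self):
        self.owner = None  # None means idle: open to contention by any station

    def may_transmit(self, station) -> bool:
        """A station may use the slot if the slot is idle or the station owns it."""
        return self.owner is None or self.owner == station

    def end_of_frame(self, transmitters: set) -> None:
        """Update ownership after one frame, given who transmitted in this slot."""
        if len(transmitters) == 1:
            # A lone successful transmission: the station implicitly owns
            # this slot position in subsequent frames.
            self.owner = next(iter(transmitters))
        elif not transmitters:
            # The slot went unused: the previous owner has finished sending,
            # so the slot reverts to idle and is available to all stations.
            self.owner = None
        # Two or more transmitters means a collision on an idle slot; the
        # slot stays unowned and will be contended for again.
```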
The use of a random-access channel in ALOHAnet led to the development of carrier-sense multiple access (CSMA), a "listen before send" random-access protocol that can be used when all nodes send and receive on the same channel. CSMA in radio channels was extensively modeled.[13] The AX.25 packet radio protocol is based on the CSMA approach with collision recovery,[14] drawing on the experience gained from ALOHAnet. A variation of CSMA, CSMA/CD, was used in early versions of Ethernet.
ALOHA and other random-access protocols have an inherent variability in their throughput and delay performance characteristics. For this reason, applications that need highly deterministic load behavior may use master/slave or token-passing schemes (such as Token Ring or ARCNET) instead of contention systems.
The central node communications processor was an HP 2100 minicomputer called the Menehune (the Hawaiian word for the legendary dwarf people), so named because its role was similar to that of the original ARPANET Interface Message Processor (IMP), which was being deployed at about the same time. In the original system, the Menehune forwarded correctly received user data to the UH central computer, an IBM System/360 Model 65 time-sharing system. Outgoing messages from the 360 were converted into packets by the Menehune, which were queued and broadcast to the remote users at a data rate of 9600 bit/s. Unlike the half-duplex radios at the user TCUs, the Menehune was interfaced to the radio channels with full-duplex radio equipment.[15]
The original user interface developed for the system was an all-hardware unit called an ALOHAnet Terminal Control Unit (TCU) and was the sole piece of equipment necessary to connect a terminal into the ALOHA channel. The TCU was composed of a UHF antenna, transceiver, modem, buffer and control unit. The buffer was designed for a full line length of 80 characters, which allowed handling of both the 40- and 80-character fixed-length packets defined for the system. The typical user terminal in the original system consisted of a Teletype Model 33 or a dumb CRT user terminal connected to the TCU using a standard RS-232 interface. Shortly after the original ALOHA network went into operation, the TCU was redesigned with one of the first Intel microprocessors, and the resulting upgrade was called a Programmable Control Unit (PCU).
Additional basic functions performed by the TCUs and PCUs were generation of a cyclic-parity-check code vector and decoding of received packets for packet error detection purposes, and generation of packet retransmissions using a simple random interval generator. If an acknowledgment was not received from the Menehune after the prescribed number of automatic retransmissions, a flashing light was used as an indicator to the human user. Also, since the TCUs and PCUs did not send acknowledgments to the Menehune, a steady warning light was displayed to the human user when an error was detected in a received packet. Considerable simplification was incorporated into the initial design of the TCU as well as the PCU for interfacing a human user into the network.
In later versions of the system, simple radio relays were placed in operation to connect the main network on the island of Oahu to other islands in Hawaii, and Menehune routing capabilities were expanded to allow user nodes to exchange packets with other user nodes, the ARPANET, and an experimental satellite network.
Two fundamental choices which dictated much of the ALOHAnet design were the two-channel star configuration of the network and the use of random access for user transmissions.
The two-channel configuration was primarily chosen to allow for efficient transmission of the relatively dense total traffic stream being returned to users by the central time-sharing computer. An additional reason for the star configuration was the desire to centralize as many communication functions as possible at the central network node (the Menehune) to minimize the cost of the original all-hardware terminal control unit (TCU) at each user node.
The random-access channel for communication between users and the Menehune was designed specifically for the traffic characteristics of interactive computing. In a conventional communication system, a user might be assigned a portion of the channel on either a frequency-division multiple access or time-division multiple access basis. Since it was well known that in time-sharing systems (circa 1970), computer and user data are bursty, such fixed assignments are generally wasteful of bandwidth because of the high peak-to-average data rates that characterize the traffic.
To achieve a more efficient use of bandwidth for bursty traffic, ALOHAnet developed the random-access packet switching method that has come to be known as a pure ALOHA channel. This approach dynamically allocates bandwidth to a user the moment it has data to send, using the acknowledgment and retransmission mechanism described earlier to deal with occasional access collisions. While the average channel loading must be kept below about 10% to maintain a low collision rate, this still results in better bandwidth efficiency than when fixed allocations are used in a bursty traffic context.
Two 100 kHz channels in the experimental UHF band were used in the implemented system, one for the user-to-computer random-access channel and one for the computer-to-user broadcast channel. The system was configured as a star network, allowing only the central node to receive transmissions in the random-access channel. All user TCUs received each transmission made by the central node in the broadcast channel. All transmissions were made in bursts at 9600 bit/s, with data and control information encapsulated in packets.
Each packet consisted of a 32-bit header and a 16-bit header parity check word, followed by up to 80 bytes of data and a 16-bit parity check word for the data. The header contained address information identifying a particular user so that when the Menehune broadcast a packet, only the intended user's node would accept it.
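Based on those sizes, the packet framing can be sketched as follows; the field encoding and the 16-bit check function are assumptions for illustration (the original used a cyclic parity-check code whose exact form is not given here).

```python
import struct

def build_packet(header: int, data: bytes, check16) -> bytes:
    """Assemble an ALOHAnet-style packet: a 32-bit header (carrying the user
    address), a 16-bit header check word, up to 80 bytes of data, and a
    16-bit data check word. `check16` is a caller-supplied 16-bit check
    function standing in for the real cyclic parity-check code."""
    assert len(data) <= 80, "packets carried at most 80 bytes of data"
    head = struct.pack(">I", header)                  # 32-bit header
    head += struct.pack(">H", check16(head))          # 16-bit header check word
    return head + data + struct.pack(">H", check16(data))  # data + check word

# Toy usage with a trivial additive checksum as the stand-in check function.
pkt = build_packet(0x0000002A, b"ALOHA", lambda b: sum(b) & 0xFFFF)
print(len(pkt), pkt.hex())
```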
In the 1970s ALOHA random access was employed in the nascent cable-based Ethernet network[16] and then in the Marisat (now Inmarsat) satellite network.[17]
In the early 1980s frequencies for mobile networks became available, and in 1985 frequencies suitable for what became known as Wi-Fi were allocated in the US.[18] These regulatory developments made it possible to use the ALOHA random-access techniques in both Wi-Fi and in mobile telephone networks.
ALOHA channels were used in a limited way in the 1980s in 1G mobile phones for signaling and control purposes.[19] In the late 1980s, the European group developing the pan-European digital mobile communication system GSM greatly expanded the use of ALOHA channels for access to radio channels in mobile telephony. In addition, SMS message texting was implemented in 2G mobile phones. In the early 2000s additional ALOHA channels were added to 2.5G and 3G mobile phones with the widespread introduction of General Packet Radio Service (GPRS), using a slotted-ALOHA random-access channel combined with a version of the Reservation ALOHA scheme first analyzed by a group at BBN Technologies.[20]