The AT&T Hobbit is a microprocessor design developed by AT&T Corporation in the early 1990s. It was based on the company's CRISP (C-language Reduced Instruction Set Processor) design, which resembled the classic RISC pipeline and in turn grew out of the C Machine design of Bell Labs in the late 1980s. All were optimized for running code compiled from the C programming language. The design concentrated on fast instruction decoding, indexed array access, and procedure calls.
The project was ended in March 1994[1] because the Hobbit failed to achieve commercially viable sales.
The C Machine Project at Bell Labs had been underway since 1975, developing computer architectures that could run C programs efficiently and aiming for a design that would offer an order-of-magnitude performance improvement over commercially available computers while remaining competitive in cost. The design methodology was iterative, driven by measurements of C program characteristics: each cycle involved formulating and implementing a new revision of the architecture, developing a compiler to target it, compiling "a large body of UNIX software", and analyzing the resulting code. The measurements then informed the next revision of the architecture.[2]
Following the stabilization of the C Machine architecture in 1981 for an uncompleted ECL implementation, a design team was formed for CRISP in April 1983, and CRISP was first realized in silicon in 1986. The fabricated processor largely met its performance objectives, running at 16 MHz and delivering a Dhrystone benchmark score over 13 times that of the VAX-11/750, or approximately 7.7 VAX MIPS (the VAX-11/750 itself being rated at roughly 0.6 VAX MIPS). This was competitive with the MIPS R2000 as delivered in the MIPS M/500 Development System (an 8 MHz device delivering around 7.4 VAX MIPS[3]), although some benchmarks showed somewhat stronger performance from CRISP. Unlike the R2000, which required numerous support chips when incorporated into a computer system, CRISP was a "complete" processor incorporating on-chip caches and had "substantially" reduced board-area requirements.[2]
The CRISP design was subsequently reoriented toward low-power applications and commercialized, resulting in the Hobbit.[4] The Hobbit was introduced in 1992 in the form of the 92010 and was aimed at the personal communicator market. Operating at 3.3 V, its reported performance was up to 13.5 VAX MIPS. Initial pricing in quantities of 10,000 units was $35 per unit, with the full chipset priced below $100.[5] Several support chips were also produced.[6]
AT&T followed in 1993 with the 92020 family of processors, introducing new support chipsets targeting different applications. These devices could run at 3.3 V or, with an elevated clock frequency, at 5 V. The 92020S is pin-compatible with the 92010, has a larger 6 KB instruction cache (as opposed to the 3 KB cache of the 92010), and delivers the equivalent of 16 VAX MIPS with a typical power consumption of 210 mW.[7] The 92020S was intended to be used with most of the original 92010 chipset, excluding the 92013 peripheral controller. The 92020M and 92020MX processors, meanwhile, were intended for use with the new support chips; they employed a multiplexed address and data bus for a reduced pin count and offered lower levels of performance, the 92020M also using a 6 KB cache and achieving performance similar to the original 92010. An updated range of support chips accompanied these processors.[8]
The most highly integrated processor, the 92020MX, retained the 3 KB cache of the 92010 but added a single-channel PCMCIA interface and a display controller. Priced at $32 per unit in 10,000-unit quantities, it offered opportunities for cost reduction in certain devices compared with the original Hobbit chipset.[7]
Apple Computer approached AT&T and paid it to develop a newer version of CRISP suitable for low-power use in the Newton handheld computer. The Hobbit-based Newton was never produced. According to Larry Tesler, "The Hobbit was rife with bugs, ill-suited for our purposes, and overpriced. We balked after AT&T demanded not one but several million more dollars in development fees."[9] Apple rejected the Hobbit and adopted the ARM610 for the Newton,[10] having partnered with Acorn Computers and VLSI Technology in late 1990 to form Advanced RISC Machines (ARM) with a $2.5 million investment. Apple sold its stake in ARM years later for a net $800 million.
The Active Book Company (founded by Hermann Hauser, who also founded Acorn Computers) had been using an ARM processor in its Active Book personal digital assistant (PDA).[11] It was later purchased by AT&T and subsumed into AT&T's Eo subsidiary,[12] which produced an early PDA, the EO Personal Communicator, running PenPoint OS from GO Corporation.[13]
AT&T made early announcements in 1992 of broad vendor adoption.[14] The Hobbit was used in the earliest prototypes of the BeBox until AT&T announced in 1993 that the chip would be discontinued.[15] AT&T closed its Eo operations, which had been responsible for the only commercially released product using the Hobbit,[16] and finally discontinued the Hobbit in 1994.[17]
In a traditional RISC design implementing a load–store architecture, memory is accessed through instructions that explicitly load data into registers and store data back to memory; the instructions that manipulate data work solely on the registers. Because the data-processing operations are limited to a single clock cycle, a simpler control mechanism can be employed to dispatch instructions, making it easier to tune the instruction pipeline[18] and to add superscalar support. However, programming languages do not actually operate in this fashion: they generally use a stack holding the local variables and other bookkeeping information for each subroutine, a structure known as a stack frame or activation record. The compiler writes code that creates and accesses activation records using the underlying processor's load-store operations.
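For illustration, the following C fragment (an invented example; the mnemonics in the comments are generic RISC-style placeholders, not the instruction set of any particular processor) shows the kind of load-store traffic a compiler generates between a subroutine's activation record and the register file:

```c
#include <stdio.h>

/* On a load-store machine, every use of these locals compiles into
 * explicit traffic between the stack frame and the register file.
 * The mnemonics in the comments are generic illustrations only. */
static int scaled_sum(int a, int b)
{
    int local = a + b;  /* load a; load b; add in a register; store local */
    return local * 2;   /* load local; shift left by one in a register    */
}

int main(void)
{
    /* The call creates a new activation record on the stack holding the
     * return address, the arguments a and b, and the variable 'local'. */
    printf("%d\n", scaled_sum(3, 4));   /* prints 14 */
    return 0;
}
```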
The C Machine in its CRISP implementation, and the Hobbit that followed directly from it, both aim to support the kinds of memory access that programming languages use, with the C programming language being a particular consideration.[8] Instructions can access memory directly, referencing values in structures and arrays held in memory and writing computation results back to memory. Although this memory-to-memory model is typical of earlier CISC designs, the C Machine as implemented by CRISP differs from both CISC and RISC designs, including the earlier Bellmac 32, in providing no directly accessible registers. Instead, a "stack cache" of 32-bit entries is provided, 32 entries in CRISP and extended to 64 entries in the Hobbit,[19] mapped onto the region of the address space at the top of the program stack and accessible only through a stack-relative addressing mode. The CRISP architecture was described as a "2½ address memory-to-memory machine": instructions can employ zero, one, or two memory addresses and can use a distinguished stack entry, the accumulator, for computation results. Reminiscent of the Bellmac 32 architecture, CRISP provides several instructions to support procedure calling: call saves the return address and branches to a routine; enter allocates a stack frame for a routine, flushing stack cache entries to memory if necessary; return deallocates the stack frame and branches to the caller's return address; and catch restores stack entries from memory.[2]
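The contrast can be sketched for a single C assignment. The instruction sequences in the comments below are hypothetical, written only to compare the two models; they are not actual CRISP or Hobbit mnemonics:

```c
#include <stdio.h>

/* a = b + c, with all three variables held in the current stack frame.
 * Hypothetical instruction sequences (not real CRISP/Hobbit opcodes):
 *
 *   Load-store RISC:             CRISP-style 2½-address machine:
 *     load  r1, 4(sp)   ; b        add  4(sp), 8(sp)  ; b + c, result
 *     load  r2, 8(sp)   ; c                           ; in the accumulator
 *     add   r3, r1, r2             move 0(sp), acc    ; a = accumulator
 *     store r3, 0(sp)   ; a
 *
 * On CRISP or the Hobbit, the stack-relative operands would normally be
 * served by the on-chip stack cache rather than by main memory. */
static int add_locals(int b, int c)
{
    int a = b + c;
    return a;
}

int main(void)
{
    printf("%d\n", add_locals(3, 4));   /* prints 7 */
    return 0;
}
```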
One side effect of the Hobbit design is that it inspired the designers of the Dis virtual machine (an offshoot of Plan 9 from Bell Labs) to use a memory-to-memory model that more closely matches the internal register-based workings of real-world processors. They found, as RISC designers would have expected, that without a load-store design it was difficult to improve the instruction pipeline and thereby operate at higher speeds. They concluded that all future processors would thus move to a load-store design, and built Inferno to reflect this. In contrast, the Java and .NET virtual machines are stack-based, a side effect of having been designed by language programmers rather than chip designers. Translating from a stack-based virtual machine to a register-based assembly language is a "heavyweight" operation; Java's virtual machine (VM) and compiler are many times larger and slower than the Dis VM and the compiler for Limbo (the most common language compiled for Dis).[20] The VMs for Android (Dalvik), Parrot, and Lua are also register-based.
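As a rough, invented illustration of the difference (the opcodes below are made up for this sketch and do not correspond to real JVM, Dis, or Dalvik encodings), the following toy interpreters evaluate a = b + c first as stack code and then as register code:

```c
#include <stdio.h>

/* Toy stack-machine and register-machine opcodes, invented for this sketch. */
enum { PUSH, ADD_S, POP, HALT_S };   /* stack-machine opcodes    */
enum { ADD_R, HALT_R };              /* register-machine opcodes */

/* Stack machine: operands travel through an evaluation stack. */
static int run_stack(const int *code, int *vars)
{
    int stack[8], sp = 0, pc = 0, executed = 0;
    for (;;) {
        switch (code[pc++]) {
        case PUSH:  stack[sp++] = vars[code[pc++]]; executed++; break;
        case ADD_S: sp--; stack[sp - 1] += stack[sp]; executed++; break;
        case POP:   vars[code[pc++]] = stack[--sp]; executed++; break;
        case HALT_S: return executed;   /* instructions executed */
        }
    }
}

/* Register machine: one three-operand instruction does the same work. */
static int run_reg(const int *code, int *vars)
{
    int pc = 0, executed = 0;
    for (;;) {
        switch (code[pc++]) {
        case ADD_R: {
            int d = code[pc++], s1 = code[pc++], s2 = code[pc++];
            vars[d] = vars[s1] + vars[s2];
            executed++;
            break;
        }
        case HALT_R: return executed;
        }
    }
}

int main(void)
{
    int vars[3] = { 0, 3, 4 };  /* a = vars[0], b = vars[1], c = vars[2] */
    const int scode[] = { PUSH, 1, PUSH, 2, ADD_S, POP, 0, HALT_S };
    const int rcode[] = { ADD_R, 0, 1, 2, HALT_R };

    int n = run_stack(scode, vars);
    printf("stack VM:    %d instructions, a = %d\n", n, vars[0]);

    vars[0] = 0;
    n = run_reg(rcode, vars);
    printf("register VM: %d instruction,  a = %d\n", n, vars[0]);
    return 0;
}
```

Here the register form does in one instruction what the stack form does in four, before any translation to native code is even considered.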