AT&T Hobbit

From Wikipedia, the free encyclopedia

The AT&T Hobbit is a microprocessor design that AT&T Corporation developed in the early 1990s. It was based on the company's CRISP (C-language Reduced Instruction Set Processor) design, which in turn grew out of the C Machine design by Bell Labs of the late 1980s. All were optimized for running code compiled from the C programming language.

The design concentrates on fast instruction decoding, indexed array access, and procedure calls. The processor is only partially RISC-like.

The project ended in 1994 because the Hobbit failed to achieve commercially viable sales.

History

CRISP was produced in 1987, largely for experimental purposes. Apple Computer approached AT&T and paid it to develop a newer version of the CRISP suitable for low-power use in the Newton handheld computer.[1] The result was the Hobbit, initially produced as the 92010 in 1992 with a 3 kB instruction buffer, followed by the 92020 in 1994 with 6 kB. Several support chips were produced:[2]

  • AT&T 92011 System Management Unit
  • AT&T 92012 PCMCIA Controller
  • AT&T 92013 Peripheral Controller
  • AT&T 92014 Display Controller

However, the Hobbit-based Newton was never produced. According to Larry Tesler, "The Hobbit was rife with bugs, ill-suited for our purposes, and overpriced. We balked after AT&T demanded not one but several million more dollars in development fees."[3] Apple dropped interest in the Hobbit and moved on to help form Advanced RISC Machines (ARM) with a $2.5 million investment. Apple sold its stake in ARM years later for a net $800 million.[3]

The Active Book Company (founded by Hermann Hauser, who also founded Acorn Computers), which had been using an ARM in its Active Book personal digital assistant (PDA), was later purchased by AT&T and subsumed into AT&T's EO Personal Communicator company,[4] which produced an early PDA running PenPoint OS from the GO Corporation.

The Hobbit was used in the earliest prototypes of the BeBox, until AT&T announced the Hobbit's discontinuation in 1993.[5]

Apart from these exceptions, the design saw almost no commercial use, and production ended in 1994.[citation needed]

Design

In a traditional RISC design, more precisely termed a load-store architecture, memory is accessed explicitly through commands that load data into registers and store it back out to memory. Instructions that manipulate that data generally work only on the registers. This allows the processor to clearly separate the movement of data from the processing done on it, making it easier to tune the instruction pipelines and add superscalar support. However, programming languages do not actually operate in this fashion. Generally they use a stack containing local variables and other information for subroutines, known as a stack frame or activation record. The compiler writes code to create and manipulate activation records using the underlying processor's load-store design.

The C Machine, and the CRISP and Hobbit that followed, directly support the types of memory access that programming languages use, and are optimized for running the C programming language.[6] Instructions can access memory directly, including structures within memory such as stack frames and arrays. Although this "memory-data" model was typical of earlier CISC designs, in the C Machine data access is handled entirely via a stack of 64 32-bit registers; the registers are not otherwise addressable, in contrast with the INMOS Transputer and other stack-based designs. Using a stack for data access can dramatically reduce code size, as there is no need to specify the location of the data needed by the instructions. On such a stack machine, most instructions implicitly use the data on the top of the stack. Higher code density means less data movement on the memory bus, improving performance.

One side effect of the Hobbit design is that it inspired the designers of the Dis virtual machine (an offshoot of Plan 9 from Bell Labs) to use a memory-to-memory-based system that more closely matches the internal register-based workings of real-world processors. They found, as RISC designers would have expected, that without a load-store design it was difficult to improve the instruction pipeline and thereby operate at higher speeds. They decided that all future processors would thus move to a load-store design, and built Inferno to reflect this. In contrast, the Java and .NET virtual machines are stack-based, a side effect of being designed by language programmers rather than chip designers. Translating from a stack-based language to a register-based assembly language is a "heavyweight" operation; Java's virtual machine (VM) and compiler are many times larger and slower than the Dis VM and the compiler for Limbo (the most common language compiled for Dis).[7] The VMs for Android (Dalvik), Parrot, and Lua are also register-based.[citation needed]

References

  1. ^ Bayko, John. "AT&T CRISP/Hobbit, CISC amongst the RISC (1987)". Great Microprocessors of the Past and Present (V 13.4.0); Section Seven: Weird and Innovative Chips. Retrieved 2020-08-21.
  2. ^ Cerda, Michael. "EO Block Diagram". Archived from the original on March 30, 2003. Retrieved May 15, 2009.
  3. ^ a b Tesler, Larry (11 April 1999). "'The Fallen Apple' Corrections". Archived from the original on 2016-03-04. Retrieved 2020-08-21.
  4. ^ Kirkpatrick, David (1993-05-17). "COULD AT&T RULE THE WORLD?". CNN. Retrieved 2008-06-10.
  5. ^ Gassée, Jean-Louis (2019-01-31). "50 Years In Tech Part 15. Be: From Concept To Near Death". Medium. Retrieved 2020-08-31.
  6. ^ "The AT&T Hobbit Enters Its Second Generation". BYTE Magazine. January 1994. Archived from the original on 2008-10-07.
  7. ^ "The design of the Inferno virtual machine". April 22, 2013. Archived from the original on 2013-04-22.
