
RISC Design Philosophy Vs. CISC Design Philosophy


RISC design philosophy

In the mid-1970s, researchers (particularly John Cocke) at IBM, and similar projects elsewhere, demonstrated that the majority of the combinations of orthogonal addressing modes and instructions offered by contemporary architectures were not used by most programs generated by the compilers available at the time. It proved difficult in many cases to write a compiler with more than limited ability to take advantage of the features provided by conventional CPUs.

It was also discovered that, on microcoded implementations of certain architectures, complex operations tended to be slower than a sequence of simpler operations doing the same thing. This was in part an effect of the fact that many designs were rushed, with little time to optimize or tune every instruction, but only those used most often. One infamous example was the VAX’s INDEX instruction.

As mentioned elsewhere, core memory had long been slower than many CPU designs. The advent of semiconductor memory reduced this difference, but it was still apparent that more registers (and later caches) would allow higher CPU operating frequencies. Additional registers would require sizeable chip or board areas which, at the time (1975), could be made available if the complexity of the CPU logic was reduced.

Yet another impetus for both RISC and other designs came from practical measurements on real-world programs. Andrew Tanenbaum summed up many of these, demonstrating that processors often had oversized immediate fields. For instance, he showed that 98% of all the constants in a program would fit in 13 bits, yet many CPU designs dedicated 16 or 32 bits to store them. This suggested that, to reduce the number of memory accesses, a fixed-length machine could store constants in unused bits of the instruction word itself, so that they would be immediately ready when the CPU needs them (much like immediate addressing in a conventional design). This required small opcodes in order to leave room for a reasonably sized constant within a 32-bit instruction word.
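
To make the encoding observation concrete, the following C sketch packs a small opcode, two register numbers and a 13-bit signed constant into one 32-bit instruction word. The field widths, the opcode value and the helper names (fits_in_13_bits, encode_addi) are invented for illustration and do not describe any real instruction set.

#include <stdint.h>
#include <stdio.h>

/* Hypothetical 32-bit instruction layout (for illustration only):
 *   bits 31..26  opcode       (6 bits)
 *   bits 25..21  destination  (5 bits)
 *   bits 20..16  source       (5 bits)
 *   bits 15..13  unused       (3 bits)
 *   bits 12..0   immediate    (13 bits, two's complement)
 */
int fits_in_13_bits(int32_t c)
{
    return c >= -4096 && c <= 4095;            /* -2^12 .. 2^12 - 1 */
}

uint32_t encode_addi(uint32_t op, uint32_t rd, uint32_t rs, int32_t imm)
{
    return (op & 0x3Fu) << 26
         | (rd & 0x1Fu) << 21
         | (rs & 0x1Fu) << 16
         | ((uint32_t)imm & 0x1FFFu);          /* the constant travels inside the word */
}

int main(void)
{
    int32_t c = 42;
    if (fits_in_13_bits(c))                    /* holds for the vast majority of constants, per Tanenbaum */
        printf("0x%08X\n", (unsigned)encode_addi(0x09, 3, 1, c));
    return 0;
}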

Since many real-world programs spend most of their time executing simple operations, some researchers decided to focus on making those operations as fast as possible. The clock rate of a CPU is limited by the time it takes to execute the slowest sub-operation of any instruction; decreasing that cycle-time often accelerates the execution of other instructions. The focus on “reduced instructions” led to the resulting machine being called a “reduced instruction set computer” (RISC). The goal was to make instructions so simple that they could easily be pipelined, in order to achieve a single clock throughput at high frequencies.

Later it was noted that one of the most significant characteristics of RISC processors was that external memory was only accessible by a load or store instruction. All other instructions were limited to internal registers. This simplified many aspects of processor design: allowing instructions to be fixed-length, simplifying pipelines, and isolating the logic for dealing with the delay in completing a memory access (cache miss, etc) to only two instructions. This led to RISC designs being referred to as load/store architectures.
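
As a rough illustration of the load/store restriction, consider the statement a[i] = a[i] + b. A classic CISC machine may perform it with a single instruction that reads and writes memory directly, whereas a load/store machine must break it into loads, a register-only add, and a store. The mnemonics in the comments and the explicit "register" variables below are invented for the example, not taken from a real instruction set.

#include <stdint.h>

/* CISC style (one instruction, several memory cycles):
 *     ADD   mem[a + i*4], b
 * Load/store (RISC) style (memory touched only by loads and stores):
 *     LOAD  r1, mem[a + i*4]
 *     ADD   r1, r1, r2
 *     STORE mem[a + i*4], r1
 * The C below mimics the load/store sequence with explicit register variables. */
void add_to_element(int32_t *a, int i, int32_t b)
{
    int32_t r1 = a[i];   /* LOAD: only loads may read memory   */
    int32_t r2 = b;
    r1 = r1 + r2;        /* ADD: operates purely on registers  */
    a[i] = r1;           /* STORE: only stores may write memory */
}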

Instruction set size and alternative terminology

A common misunderstanding of the phrase “reduced instruction set computer” is the mistaken idea that instructions are simply eliminated, resulting in a smaller set of instructions. In fact, over the years, RISC instruction sets have grown in size, and today many of them have a larger set of instructions than many CISC CPUs. Some RISC processors such as the INMOS Transputer have instruction sets as large as, say, the CISC IBM System/370; and conversely, the DEC PDP-8 – clearly a CISC CPU because many of its instructions involve multiple memory accesses – has only 8 basic instructions, plus a few extended instructions.

The term “reduced” in that phrase was intended to describe the fact that the amount of work any single instruction accomplishes is reduced – at most a single data memory cycle – compared to the “complex instructions” of CISC CPUs that may require dozens of data memory cycles in order to execute a single instruction. In particular, RISC processors typically have separate instructions for I/O and data processing; as a consequence, industry observers have started using the terms “register-register” or “load-store” to describe RISC processors.

Some CPUs have been retroactively dubbed RISC — a Byte magazine article once referred to the 6502 as “the original RISC processor” due to its simple and nearly orthogonal instruction set (most instructions work with most addressing modes) as well as its 256 zero-page “registers”. The 6502 is not a load/store design, however: arithmetic operations may read memory, and instructions like INC and ROL even modify memory. Furthermore, orthogonality is just as often associated with “CISC”. However, the 6502 may be regarded as similar to RISC (and early machines) in that it uses no microcode sequencing. The well-known fact that it employed longer but fewer clock cycles than many contemporary microprocessors was due to a more asynchronous design with less subdivision of internal machine cycles; this is similar to early machines, but not to RISC.

Some CPUs have been specifically designed to have a very small set of instructions – but these designs are very different from classic RISC designs, so they have been given other names such as minimal instruction set computer (MISC), Zero Instruction Set Computer (ZISC), one instruction set computer (OISC), transport triggered architecture (TTA), etc.

Alternatives

RISC was developed as an alternative to what is now known as CISC. Over the years, other strategies have been implemented as alternatives to RISC and CISC. Some examples are VLIW, MISC, OISC, massive parallel processing, systolic array, reconfigurable computing, and dataflow architecture.

Typical characteristics of RISC

For any given level of general performance, a RISC chip will typically have far fewer transistors dedicated to the core logic, which originally allowed designers to increase the size of the register set and the degree of internal parallelism.

Other features typically found in RISC architectures are:

* Uniform instruction format, using a single word with the opcode in the same bit positions in every instruction, demanding less decoding (see the decoding sketch after this list);

* Identical general purpose registers, allowing any register to be used in any context, simplifying compiler design (although normally there are separate floating point registers);

* Simple addressing modes. Complex addressing performed via sequences of arithmetic and/or load-store operations;

* Few data types in hardware: some CISCs have byte-string instructions or support complex numbers, which are so far unlikely to be found on a RISC.

Exceptions abound, of course, within both CISC and RISC.
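
To illustrate the first point, here is a decoder for the hypothetical fixed 32-bit layout sketched earlier: because the opcode and register numbers occupy the same bit positions in every instruction, the fields fall out with constant shifts and masks, before the decoder even knows which instruction it is looking at. The layout and the sign-extension idiom (which assumes an arithmetic right shift) are illustrative assumptions only.

#include <stdint.h>

typedef struct { uint32_t op, rd, rs; int32_t imm; } decoded_t;

decoded_t decode(uint32_t word)
{
    decoded_t d;
    d.op  = word >> 26;                  /* bits 31..26 */
    d.rd  = (word >> 21) & 0x1Fu;        /* bits 25..21 */
    d.rs  = (word >> 16) & 0x1Fu;        /* bits 20..16 */
    d.imm = (int32_t)(word << 19) >> 19; /* sign-extend the 13-bit immediate */
    return d;
}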

RISC designs are also more likely to feature a Harvard memory model, where the instruction stream and the data stream are conceptually separated; this means that modifying the memory where code is held might not have any effect on the instructions executed by the processor (because the CPU has a separate instruction and data cache), at least until a special synchronization instruction is issued. On the upside, this allows both caches to be accessed simultaneously, which can often improve performance.
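
The synchronization instruction mentioned above is visible even from C when code is generated at run time. The sketch below assumes a Linux/POSIX system and the GCC/Clang builtin __builtin___clear_cache, which performs the required instruction-cache maintenance on split-cache designs and is effectively a no-op where the hardware already keeps the two sides coherent; error handling is minimal and some hardened systems will refuse a writable-and-executable mapping.

#include <stdint.h>
#include <string.h>
#include <sys/mman.h>

typedef int (*func_t)(void);

/* Copy machine code into an executable buffer, synchronize, then run it. */
int run_generated(const uint8_t *code, size_t len)
{
    void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE | PROT_EXEC,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (buf == MAP_FAILED)
        return -1;

    memcpy(buf, code, len);                                   /* writes go through the data side    */
    __builtin___clear_cache((char *)buf, (char *)buf + len);  /* make the instruction side see them */

    int result = ((func_t)buf)();
    munmap(buf, len);
    return result;
}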

Many early RISC designs also shared the characteristic of having a branch delay slot. A branch delay slot is an instruction space immediately following a jump or branch. The instruction in this space is executed, whether or not the branch is taken (in other words the effect of the branch is delayed). This instruction keeps the ALU of the CPU busy for the extra time normally needed to perform a branch. Nowadays the branch delay slot is considered an unfortunate side effect of a particular strategy for implementing some RISC designs, and modern RISC designs generally do away with it (such as PowerPC, more recent versions of SPARC, and MIPS).
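
The following toy interpreter, written only to illustrate the idea, models a single branch delay slot: a taken branch does not redirect fetch until the instruction after it has executed. The instruction format and opcodes are invented for the example.

#include <stdio.h>

enum { NOP, ADDI, BRANCH, HALT };
typedef struct { int op; int arg; } insn_t;

int run(const insn_t *prog)
{
    int acc = 0, pc = 0, pending_target = -1;

    for (;;) {
        insn_t i = prog[pc++];
        int redirect = pending_target;   /* branch resolved one instruction ago */
        pending_target = -1;

        switch (i.op) {
        case ADDI:   acc += i.arg; break;
        case BRANCH: pending_target = i.arg; break;  /* takes effect after the next instruction */
        case HALT:   return acc;
        default:     break;                          /* NOP */
        }
        if (redirect >= 0)
            pc = redirect;               /* redirect only after the delay slot ran */
    }
}

int main(void)
{
    const insn_t prog[] = {
        { BRANCH, 3 },   /* 0: jump to 3 ...                      */
        { ADDI,   1 },   /* 1: ... but this delay slot still runs */
        { ADDI, 100 },   /* 2: skipped                            */
        { HALT,   0 },   /* 3: acc == 1 on arrival                */
    };
    printf("%d\n", run(prog));           /* prints 1, not 0 or 101 */
    return 0;
}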

Early RISC

The first system that would today be known as RISC was the CDC 6600 supercomputer, designed in 1964, a decade before the term was invented. The CDC 6600 had a load-store architecture with only two addressing modes (register+register, and register+immediate constant) and 74 opcodes (whereas an Intel 8086 has 400). The 6600 had eleven pipelined functional units for arithmetic and logic, plus five load units and two store units; the memory had multiple banks so all load-store units could operate at the same time. The basic clock cycle/instruction issue rate was 10 times faster than the memory access time. Jim Thornton and Seymour Cray designed it as a number-crunching CPU supported by 10 simple computers called “peripheral processors” to handle I/O and other operating system functions. Thus the joking comment later that the acronym RISC actually stood for “Really Invented by Seymour Cray”.

Another early load-store machine was the Data General Nova minicomputer, designed in 1968 by Edson de Castro. It had an almost pure RISC instruction set, remarkably similar to that of today’s ARM processors; however it has not been cited as having influenced the ARM designers, although Novas were in use at the University of Cambridge Computer Laboratory in the early 1980s.

The earliest attempt to make a chip-based RISC CPU was a project at IBM which started in 1975. Named after the building where the project ran, the work led to the IBM 801 CPU family which was used widely inside IBM hardware. The 801 was eventually produced in a single-chip form as the ROMP in 1981, which stood for ‘Research OPD [Office Products Division] Micro Processor’. As the name implies, this CPU was designed for “mini” tasks, and when IBM released the IBM RT-PC based on the design in 1986, the performance was not acceptable. Nevertheless the 801 inspired several research projects, including new ones at IBM that would eventually lead to their POWER system.

The most public RISC designs, however, were the results of university research programs run with funding from the DARPA VLSI Program. The VLSI Program, practically unknown today, led to a huge number of advances in chip design, fabrication, and even computer graphics.

UC Berkeley’s RISC project started in 1980 under the direction of David Patterson and Carlo H. Sequin, based on gaining performance through the use of pipelining and an aggressive use of a technique known as register windowing. In a normal CPU one has a small number of registers, and a program can use any register at any time. In a CPU with register windows, there are a huge number of registers, e.g. 128, but programs can only use a small number of them, e.g. 8, at any one time. A program that limits itself to 8 registers per procedure can make very fast procedure calls: The call simply moves the window “down” by 8, to the set of 8 registers used by that procedure, and the return moves the window back. (On a normal CPU, most calls must save at least a few registers’ values to the stack in order to use those registers as working space, and restore their values on return.)
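
A toy model of that mechanism is sketched below, with invented sizes and none of the window overlap (for argument passing) or spill-to-memory machinery that real register-window hardware uses.

#include <stdint.h>

#define PHYS_REGS   128   /* large physical register file       */
#define WINDOW_SIZE   8   /* registers visible to one procedure */

static int32_t regfile[PHYS_REGS];
static int     window_base = 0;           /* index of the current window's r0 */

int32_t *reg(int r)                       /* r in 0..7 as seen by the program */
{
    return &regfile[window_base + r];
}

void call(void)                           /* procedure call: slide the window */
{
    window_base += WINDOW_SIZE;           /* real hardware traps and spills on overflow */
}

void ret(void)                            /* procedure return: slide it back */
{
    window_base -= WINDOW_SIZE;
}

int main(void)
{
    *reg(0) = 7;      /* caller's r0                                  */
    call();
    *reg(0) = 99;     /* callee's r0 is a different physical register */
    ret();
    return *reg(0);   /* back in the caller: still 7                  */
}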

The RISC project delivered the RISC-I processor in 1982. Consisting of only 44,420 transistors (compared with averages of about 100,000 in newer CISC designs of the era), RISC-I had only 32 instructions, and yet completely outperformed any other single-chip design. They followed this up with the 40,760-transistor, 39-instruction RISC-II in 1983, which ran over three times as fast as RISC-I.

At about the same time, John L. Hennessy started a similar project called MIPS at Stanford University in 1981. MIPS focused almost entirely on the pipeline, making sure it could be run as “full” as possible. Although pipelining was already in use in other designs, several features of the MIPS chip made its pipeline far faster. The most important, and perhaps annoying, of these features was the demand that all instructions be able to complete in one cycle. This demand allowed the pipeline to be run at much higher data rates (there was no need for induced delays) and is responsible for much of the processor’s performance. However, it also had the negative side effect of eliminating many potentially useful instructions, like a multiply or a divide.

In the early years, the RISC efforts were well known, but largely confined to the university labs that had created them. The Berkeley effort became so well known that it eventually became the name for the entire concept. Many in the computer industry argued that the performance benefits were unlikely to translate into real-world settings due to the decreased memory efficiency of using multiple instructions, and that this was why no one was using them. But starting in 1986, all of the RISC research projects started delivering products.

Later RISC

Berkeley’s research was not directly commercialized, but the RISC-II design was used by Sun Microsystems to develop the SPARC, by Pyramid Technology to develop their line of mid-range multi-processor machines, and by almost every other company a few years later. It was Sun’s use of a RISC chip in their new machines that demonstrated that RISC’s benefits were real, and their machines quickly outpaced the competition and essentially took over the entire workstation market.

John Hennessy left Stanford (temporarily) to commercialize the MIPS design, starting the company known as MIPS Computer Systems. Their first design was a second-generation MIPS chip known as the R2000. MIPS designs went on to become one of the most used RISC chips when they were included in the PlayStation and Nintendo 64 game consoles. Today they are one of the most common embedded processors in use for high-end applications.

IBM learned from the RT-PC failure and went on to design the RS/6000 based on their new POWER architecture. They then moved their existing AS/400 systems to POWER chips, and found much to their surprise that even the very complex instruction set ran considerably faster. POWER would also find itself moving “down” in scale to produce the PowerPC design, which eliminated many of the “IBM only” instructions and created a single-chip implementation. Today the PowerPC is one of the most commonly used CPUs for automotive applications (some cars have more than 10 of them inside). It was also the CPU used in most Apple Macintosh machines from 1994 to 2006. (Starting in February 2006, Apple switched their main production line to Intel x86 processors.)

Almost all other vendors quickly joined. From the UK similar research efforts resulted in the INMOS transputer, the Acorn Archimedes and the Advanced RISC Machine line, which is a huge success today. Companies with existing CISC designs also quickly joined the revolution. Intel released the i860 and i960 by the late 1980s, although they were not very successful. Motorola built a new design called the 88000 in homage to their famed CISC 68000, but it saw almost no use and they eventually abandoned it and joined IBM to produce the PowerPC. AMD released their 29000 which would go on to become the most popular RISC design of the early 1990s.

Today the vast majority of all 32-bit CPUs in use are RISC CPUs and microcontrollers. RISC design techniques offer computing power even in small sizes, and have thus become dominant for low-power 32-bit CPUs. Embedded systems are by far the largest market for processors: while a family may own one or two PCs, their car(s), cell phones, and other devices may contain a total of dozens of embedded processors. RISC had also completely taken over the market for larger workstations for much of the 1990s (until taken back by inexpensive PC-based solutions). After the release of the Sun SPARCstation, the other vendors rushed to compete with RISC-based solutions of their own. The high-end server market today is almost completely RISC-based. The #1 spot among supercomputers as of 2008 was held by IBM’s Roadrunner system, which uses Power Architecture-based Cell processors to provide most of its computing power; however, the #1 spot as of November 2010 was held by the Tianhe-1A, which uses a combination of Intel Xeon processors, Nvidia Tesla GPGPUs, and custom processors, and most of the other machines in the top 10 use x86 CISC processors.

RISC and x86

However, despite many successes, RISC has made few inroads into the desktop PC and commodity server markets, where Intel’s x86 platform remains the dominant processor architecture. There are three main reasons for this:

1. The very large base of proprietary PC applications is written for x86, whereas no RISC platform has a similar installed base; this locked PC users into the x86.

2. Although RISC was indeed able to scale up in performance quite quickly and cheaply, Intel took advantage of its large market by spending vast amounts of money on processor development. Intel could spend many times as much as any RISC manufacturer on improving low-level design and manufacturing. The same could not be said of smaller firms like Cyrix and NexGen, but they realized that they could apply (tightly) pipelined design practices to the x86 architecture as well, just as in the 486 and Pentium. The 6×86 and MII series did exactly this, but were more advanced: they implemented superscalar speculative execution via register renaming, directly at the x86-semantic level. Others, like the Nx586 and AMD K5, did the same, but indirectly, via dynamic microcode buffering and semi-independent superscalar scheduling and instruction dispatch at the micro-operation level (older or simpler “CISC” designs typically execute rigid micro-operation sequences directly). The first available chip deploying such dynamic buffering and scheduling techniques was the NexGen Nx586, released in 1994; the AMD K5 was severely delayed and released in 1995.

3. Later, more powerful processors such as the Intel P6, AMD K6, AMD K7, and Pentium 4 employed similar dynamic buffering and scheduling principles and implemented loosely coupled superscalar (and speculative) execution of micro-operation sequences generated from several parallel x86 decoding stages. Today, these ideas have been further refined (some x86 pairs are instead merged into a more complex micro-operation, for example) and are still used by modern x86 processors such as the Intel Core 2 and AMD K8 (see the sketch below).
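
The micro-operation format, the temporary register number and the decode_add_mem_reg helper below are invented for illustration and bear no relation to any vendor’s actual internal encoding; the sketch merely shows how one memory-operand instruction can be split into RISC-like micro-operations placed in a buffer from which a simple superscalar core can schedule them.

#include <stdint.h>
#include <stdio.h>

typedef enum { UOP_LOAD, UOP_ADD, UOP_STORE } uop_kind_t;

typedef struct {
    uop_kind_t kind;
    int        dst, src;    /* register numbers, internal temporaries included */
    uint32_t   addr;        /* memory address for loads and stores             */
} uop_t;

/* Split an x86-style "add dword [addr], reg_src" into three micro-ops. */
int decode_add_mem_reg(uint32_t addr, int reg_src, uop_t *buf)
{
    int tmp = 64;                                       /* hypothetical internal temporary register */
    buf[0] = (uop_t){ UOP_LOAD,  tmp, 0,       addr };  /* tmp       <- mem[addr] */
    buf[1] = (uop_t){ UOP_ADD,   tmp, reg_src, 0    };  /* tmp       <- tmp + reg */
    buf[2] = (uop_t){ UOP_STORE, 0,   tmp,     addr };  /* mem[addr] <- tmp       */
    return 3;
}

int main(void)
{
    uop_t buf[3];
    int n = decode_add_mem_reg(0x1000, 2, buf);         /* "add dword [0x1000], r2" */
    for (int i = 0; i < n; i++)
        printf("uop %d: kind=%d dst=%d src=%d addr=0x%X\n",
               i, buf[i].kind, buf[i].dst, buf[i].src, (unsigned)buf[i].addr);
    return 0;
}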

While early RISC designs were significantly different from contemporary CISC designs, by 2000 the highest-performing CPUs in the RISC line were almost indistinguishable from the highest-performing CPUs in the CISC line.

A number of vendors, including Qualcomm, are attempting to enter the PC market with ARM-based devices dubbed smartbooks, riding on the netbook trend and the rising acceptance of Linux distributions, a number of which already have ARM builds.[15] Other companies are choosing to use Windows CE.

Diminishing benefits for desktops and servers

Over time, improvements in chip fabrication techniques have improved performance exponentially, according to Moore’s law, whereas architectural improvements have been comparatively small. Modern CISC implementations have implemented many of the performance improvements introduced by RISC, such as single-clock throughput of simple instructions. Compilers have also become more sophisticated, and are better able to exploit complex as well as simple instructions on CISC architectures, often carefully optimizing both instruction selection and instruction and data ordering in pipelines and caches. The RISC-CISC distinction has blurred significantly in practice.

Expanding benefits for mobile and embedded devices

The hardware translation from x86 instructions into RISC-like operations, which cost relatively little in microprocessors for desktops and servers as Moore’s Law provided more transistors, becomes significant in area and energy for mobile and embedded devices. Hence, ARM processors dominate cell phones and tablets today, just as x86 processors dominate PCs.

RISC success stories

RISC designs have led to a number of successful platforms and architectures, some of the larger ones being:

* ARM — The ARM architecture dominates the market for low power and low cost embedded systems (typically 100–500 MHz in 2008). ARM Ltd., which licenses intellectual property rather than manufacturing chips, reported that 10 billion licensed chips had been shipped as of early 2008. The various generations, variants and implementations of the ARM core are deployed in over 90% of mobile electronics devices, including almost all modern mobile phones, mp3 players and portable video players. Some high profile examples are

o Apple iPods (custom ARM7TDMI SoC)

o Apple iPhone and iPod Touch (Samsung ARM1176JZF, ARM Cortex-A8, Apple A4)

o Apple iPad (Apple A4 ARM-based SoC)

o Palm and PocketPC PDAs and smartphones (Marvell XScale family, Samsung SC32442 – ARM9)

o RIM BlackBerry smartphone/email devices.

o Microsoft Windows Mobile

o Nintendo Game Boy Advance (ARM7TDMI)

o Nintendo DS (ARM7TDMI, ARM946E-S)

o Sony Network Walkman (Sony in-house ARM based chip)

o T-Mobile G1 (HTC Dream Android, Qualcomm MSM7201A ARM11 @ 528 MHz)

* MIPS’s MIPS line, found in most SGI computers and the PlayStation, PlayStation 2, Nintendo 64 (discontinued), PlayStation Portable game consoles, and residential gateways like Linksys WRT54G series.

* IBM’s and Freescale’s (formerly Motorola SPS) Power Architecture, used in all of IBM’s supercomputers, midrange servers and workstations, in Apple’s PowerPC-based Macintosh computers (discontinued), in Nintendo’s Gamecube and Wii, Microsoft’s Xbox 360 and Sony’s PlayStation 3 game consoles, EMC’s DMX range of the Symmetrix SAN, and in many embedded applications like printers and cars.

* SPARC, by Oracle (previously Sun Microsystems), and Fujitsu

* Hewlett-Packard’s PA-RISC, also known as HP-PA, discontinued December 31, 2008.

* Alpha, used in single-board computers, workstations, servers and supercomputers from Digital Equipment Corporation, Compaq and HP, discontinued as of 2007.

* XAP processor used in many low-power wireless (Bluetooth, wifi) chips from CSR.

* Hitachi’s SuperH, originally in wide use in the Sega Super 32X, Saturn and Dreamcast, now at the heart of many consumer electronics devices. The SuperH is the base platform for the Mitsubishi – Hitachi joint semiconductor group. The two groups merged in 2002, dropping Mitsubishi’s own RISC architecture, the M32R.

* Atmel AVR, used in a variety of products ranging from Xbox handheld controllers to BMW cars.

CISC (Complex instruction set computing)

A complex instruction set computer (CISC) is a computer where single instructions can execute several low-level operations (such as a load from memory, an arithmetic operation, and a memory store) and/or are capable of multi-step operations or addressing modes within single instructions. The term was retroactively coined in contrast to reduced instruction set computer (RISC).

Examples of CISC instruction set architectures are System/360 through z/Architecture, PDP-11, VAX, Motorola 68k, and x86.

Historical design context

Incitements and benefits

Before the RISC philosophy became prominent, many computer architects tried to bridge the so-called semantic gap, i.e. to design instruction sets that directly supported high-level programming constructs such as procedure calls, loop control, and complex addressing modes, allowing data structure and array accesses to be combined into single instructions. Instructions were also typically highly encoded in order to further enhance code density. The compact nature of such instruction sets results in smaller program sizes and fewer (slow) main memory accesses, which at the time (early 1960s and onwards) resulted in tremendous savings on the cost of computer memory and disc storage, as well as faster execution. It also meant good programming productivity even in assembly language, as high-level languages such as Fortran or Algol were not always available or appropriate (microprocessors in this category are sometimes still programmed in assembly language for certain types of critical applications).

New instructions

In the 1970s, analysis of high-level languages indicated that compilers produced some correspondingly complex machine language, and it was determined that new instructions could improve performance. Some instructions were added that were never intended to be used in assembly language but fit well with compiled high-level languages. Compilers were updated to take advantage of these instructions. The benefits of semantically rich instructions with compact encodings can be seen in modern processors as well, particularly in the high-performance segment where caches are a central component (as opposed to most embedded systems). This is because these fast, but complex and expensive, memories are inherently limited in size, making compact code beneficial. Of course, the fundamental reason they are needed is that main memories (i.e. dynamic RAM today) remain slow compared to a (high-performance) CPU core.

Design issues

While many designs achieved the aim of higher throughput at lower cost and also allowed high-level language constructs to be expressed by fewer instructions, it was observed that this was not always the case. For instance, low-end versions of complex architectures (i.e. using less hardware) could lead to situations where it was possible to improve performance by not using a complex instruction (such as a procedure call or enter instruction), but instead using a sequence of simpler instructions.

One reason for this was that architects (microcode writers) sometimes “over-designed” assembly-language instructions, including features which could not be implemented efficiently on the basic hardware available. These could, for instance, be “side effects” (beyond conventional flags), such as the setting of a register or memory location that was perhaps seldom used; if this was done via ordinary (non-duplicated) internal buses, or even the external bus, it would demand extra cycles every time, and thus be quite inefficient.

Even in balanced high performance designs, highly encoded and (relatively) high-level instructions could be complicated to decode and execute efficiently within a limited transistor budget. Such architectures therefore required a great deal of work on the part of the processor designer in cases where a simpler, but (typically) slower, solution based on decode tables and/or microcode sequencing is not appropriate. At the time where transistors and other components were a limited resource, this also left fewer components and less area for other types of performance optimizations.

The RISC idea

The circuitry that performs the actions defined by the microcode in many (but not all) CISC processors is, in itself, a processor which in many ways is reminiscent in structure of very early CPU designs. In the early 1970s, this gave rise to ideas to return to simpler processor designs in order to make it more feasible to cope without (then relatively large and expensive) ROM tables and/or PLA structures for sequencing and/or decoding. However, the first (retroactively) RISC-labeled processor (the IBM 801, from IBM’s Watson Research Center, mid-1970s) was a tightly pipelined simple machine originally intended to be used as an internal microcode kernel, or engine, in CISC designs. Simplicity and regularity in the visible instruction set as well would make it easier to implement overlapping processor stages (pipelining) at the machine-code level (i.e. the level seen by compilers). However, pipelining at that level was already used in some high-performance CISC computers (“supercomputers”) in order to reduce the instruction cycle time, despite the complications of implementing it within the limited component count and wiring complexity feasible at the time. Internal microcode execution, on the other hand, could be more or less pipelined depending on the particular design.

Superscalar

In a more modern context, the complex variable-length encoding used by some of the typical CISC architectures makes it complicated, but still feasible, to build a superscalar implementation of a CISC programming model directly; the in-order superscalar original Pentium and the out-of-order superscalar Cyrix 6×86 are well-known examples of this. The frequent memory accesses for operands of a typical CISC machine may limit the instruction-level parallelism that can be extracted from the code, although this is strongly mediated by the fast cache structures used in modern designs, as well as by other measures. Due to inherently compact and semantically rich instructions, the average amount of work performed per machine-code unit (i.e. per byte or bit) is higher for a CISC than for a RISC processor, which may give it a significant advantage in a modern cache-based implementation. (Whether the downsides versus the upsides justify a complex design or not is food for a never-ending debate in certain circles.)

Transistors for logic, PLAs, and microcode are no longer scarce resources; only large high-speed cache memories are limited by the maximum number of transistors today. Although complex, the transistor count of CISC decoders does not grow exponentially like the total number of transistors per processor (the majority is typically used for caches). Together with better tools and enhanced technologies, this has led to new implementations of highly encoded and variable-length designs without load-store limitations (i.e. non-RISC). This covers re-implementations of older architectures such as the ubiquitous x86 as well as new designs for microcontrollers for embedded systems and similar uses. The superscalar complexity in the case of modern x86 was solved with dynamically issued and buffered micro-operations, i.e. indirect and dynamic superscalar execution; the Pentium Pro and AMD K5 are early examples of this. This allows a fairly simple superscalar design to be located after the (fairly complex) decoders (and buffers), giving, so to speak, the best of both worlds in many respects.

CISC and RISC terms

The terms CISC and RISC have become less meaningful with the continued evolution of both CISC and RISC designs and implementations. The first highly (or tightly) pipelined x86 implementations, the 486 designs from Intel, AMD, Cyrix, and IBM, supported every instruction that their predecessors did, but achieved maximum efficiency only on a fairly simple x86 subset that was only a little more than a typical RISC instruction set (i.e. without typical RISC load-store limitations). The Intel P5 Pentium generation was a superscalar version of these principles. However, modern x86 processors also (typically) decode and split instructions into dynamic sequences of internally-buffered micro-operations, which not only helps execute a larger subset of instructions in a pipelined (overlapping) fashion, but also facilitates more advanced extraction of parallelism out of the code stream, for even higher performance.

Go to Exercise # 3

Go to Chapter Test 1

Go to Chapter Test 2
