RISC rides again: New RISC-V architecture hopes to battle ARM and x86 by being totally open source
August 21, 2014
One of the pioneers of the original RISC instruction set has returned to the design table with a goal that’s nothing short of massive. David Patterson wants to reinvent computing with a completely open ISA, and he’s hoping the time is right to finally blow the doors off the CPU industry, this time by advocating for the adoption of RISC-V.
There are already a variety of open ISAs, but Patterson is hoping RISC-V will spark interest and uptake where other projects have sputtered. It’s hard to argue with the man’s credentials — he’s one of the original inventors of the RISC concept — but some of his critiques of the problems he wants RISC-V to solve ring truer than others.
Why RISC it? (sorry)
According to the whitepaper published by UC Berkeley, there are multiple reasons to opt for a RISC-V design, including restrictive IP agreements from companies like ARM and IBM, limited options for free licensing, and the length of time it takes to negotiate a license. The paper also argues that RISC-V is superior to other ISAs because it has learned from their various mistakes and incorporates a better mix of capabilities.
RISC-V is designed for ultra-compact code, supports quadruple precision (128-bit floating point values), and can scale to 128-bit memory addressing, though it’s utterly impractical to think that much address space will be needed in the short term. The whitepaper points out, however, that an address-size limit is one ISA mistake that’s hard to recover from; RISC-V’s 128-bit ceiling should serve us for the next 40-50 years.
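To make the quadruple-precision point concrete, here’s a minimal C sketch. It assumes a GCC-style toolchain, where 128-bit floats are exposed through the non-standard __float128 type and libquadmath; on most hardware today these operations are emulated in software, whereas RISC-V’s “Q” extension defines them as native floating-point instructions.

    #include <quadmath.h> /* GCC's libquadmath: 128-bit float support */
    #include <stdio.h>

    int main(void) {
        /* __float128 carries roughly 33-36 significant decimal digits,
           versus the 15-17 digits of a 64-bit double. */
        __float128 q = 1.0Q / 3.0Q;
        double     d = 1.0  / 3.0;

        char buf[64];
        /* The 'Q' length modifier tells quadmath_snprintf to format a
           __float128 value. */
        quadmath_snprintf(buf, sizeof buf, "%.33Qg", q);
        printf("quad:   %s\n", buf);
        printf("double: %.17g\n", d);
        return 0;
    }

Build with gcc example.c -lquadmath. The extra precision matters for workloads like long-running scientific simulations, where double-precision rounding error accumulates.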
It’s not clear, however, how much momentum actually exists around the standard. The whitepaper points to eight chip designs Berkeley has already implemented (a prototype RISC-V chip is pictured at the top of the story), and claims that a RISC-V core is substantially more efficient than even a competitive ARM core, as shown below:
[Chart: the whitepaper’s efficiency comparison between a RISC-V core and a competitive ARM core]
The problem with RISC-V is that the target markets (small companies looking for extreme customization) simply may not be big enough to ever spark much in the way of toolset development, familiarity, or cost savings. How many companies both want to build their own extremely customized architecture and can afford to hire engineers who would do the job more ably than a default Cortex-A5 CPU from ARM? Our guess is not many. This leaves RISC-V in an uneasy no-man’s land: the engineers with the expertise to build the products are most likely familiar with other ecosystems, while the companies that would most benefit from the cost savings and customized features can’t afford the engineers.
Reigniting the great debate: CISC vs. RISC
A Sun UltraSPARC chip. Back in the olden days, there were a lot of RISC chips; with the success of Intel and x86, though, most of the RISC designs were squeezed out.
Furthermore, while the whitepaper leans on the idea of ARM as a RISC design that’s (supposedly) vastly more successful than Intel based on total number of CPUs shipped, that comparison is flawed for a number of reasons. I don’t want to rehash the CPU wars of the past decades in this story, but it’s worth revisiting the old paradigms of CISC and RISC that applied when Patterson did his first groundbreaking work in RISC. The CISC designs of the 1960s and 1970s often emphasized doing as much work as possible per instruction. Memory was both incredibly slow and very small — the more work you could pack into every single cycle, the less assembly code you had to write, the more compact your code could be, and the higher the throughput of the system (at least in theory). Some CPUs could support high-level programming features directly in machine code.
The original RISC philosophy argued that by doing less work per cycle, designers could drastically simplify their chips, cut the number of complicated corner cases and operations they supported, increase CPU clock speed, and reap enormous rewards with smaller designs and far smaller transistor budgets. In the beginning, there were enormous differences of scale that fed RISC’s advance, and while RISC-based architectures never made huge inroads into the PC business, they were extremely successful in servers and embedded product lines.
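To see the philosophical split in miniature, consider how the two styles handle a single C statement. This is an illustrative sketch, not actual compiler output: the CISC sequence is VAX-flavored, the RISC sequence uses RISC-V mnemonics, and the register assignments are hypothetical.

    /* c = a + b, where a, b, and c all live in memory. */
    void add_in_memory(int *c, const int *a, const int *b) {
        /* Classic CISC (VAX-style): one instruction reads both
           operands from memory, adds them, and writes the sum back:
               ADDL3  a, b, c
           Classic RISC (RISC-V-style load/store): each instruction
           does exactly one simple thing:
               lw   t0, 0(a1)    # load a
               lw   t1, 0(a2)    # load b
               add  t0, t0, t1   # register-to-register add
               sw   t0, 0(a0)    # store c
        */
        *c = *a + *b;
    }

The CISC version is far denser, which mattered enormously when memory was scarce; the RISC version is four trivial operations, each easy to decode, pipeline, and run at a high clock.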
Read our featured story: 4004 to Sandy Bridge: A walk down CPU memory lane
As time passed, however, the line between CISC and RISC began to blur. CISC chips became more RISC-like, while RISC chips ramped up complexity, capabilities, and die size. A good example of this trend is the entire ARM vs. x86 debate. While the two ISAs are absolutely different, research has repeatedly shown that power consumption, clock speed, performance per watt, and instructions executed per clock cycle (a measure of efficiency) are all dependent on the CPU’s underlying microarchitecture, not its ISA.
If there’s a reason to be optimistic, however, it’s this: For decades, both CISC and RISC designs were mostly driven by brute force. When the number of transistors doubles every 18-24 months with commensurate decreases in power consumption, it’s easy to make progress by simply throwing transistors at the problem.
We’ve long since hit the point of diminishing returns from that approach, which means RISC-V’s brand-new ISA, with its emphasis on efficiency and performance per watt, could yield dividends once other, more traditional (and, to be honest, simpler) methods of extracting further gains have been exhausted. It’s also noteworthy that the architecture appears to target the lowest end of the ARM Cortex division: areas where, as we’ve recently discussed, the CISC vs. RISC debate actually retains a shred of relevance. In areas where every square millimeter counts and power consumption is absolutely critical, RISC-V might offer advantages.
A die shot of the RISC-I chip, developed by UC Berkeley: 44,420 transistors on a 5-micron (5,000nm) process, running at 1MHz.
Whether those gains could be sustained against dominant market players like Qualcomm and Intel, companies that could adopt the same techniques themselves, is an open question. If history is any judge, it takes far more than basic ISA support and theoretical appeal to seize market share from dominant, established players, and we’re dubious of openness as an intrinsically important factor, despite the paper’s reliance on it as a prime justification.