
I am not so much interested in the "small print": the differences when developing code on each platform, in terms of what a programmer is used to or finds easier to do, etc. Nor am I interested in the detailed physical differences in the core (I don't mind them being mentioned if it suits your narrative; I just don't want to focus on them).

I am just asking why a CISC architecture such as x86 is superior to a RISC architecture, or whether it is not.

I mean, why be "Complex" (CISC) if you can do everything just as well while being Reduced in complexity (RISC)?

Is there something that x86 can do that ARM cannot? If there isn't anything, then why did we bother (historically) with developing CISC instead of focusing on RISC?

Today ARM seems to do everything an Intel computer does; they even have server-oriented designs...

It bobs my uncle..

Jonas
papajo
  • It's just two parallel evolutionary tracks. Like chimpanzees and bonobos, RISC and CISC are similar in many ways but different in others. – Some programmer dude May 30 '17 at 19:05
  • RISC dates from an era when RAM was quite a bit faster than CPUs. Back then a processor could easily take 4 clock cycles to execute an instruction, so it made sense to redesign the instruction set and simplify the processor logic so the speeds could be matched. Those days are long gone; RAM is a very significant bottleneck today. ARM survived most of all because of an innovative licensing scheme, allowing everybody to include the processor in their own logic design. Which is the key difference: Intel shrugs and ignores you if you ask for their design. Well, they licensed ARM too. – Hans Passant May 31 '17 at 00:28

2 Answers


You're trying to re-start a debate that ended 20 years ago. ARM is not RISC anymore and x86 is not CISC anymore.

That said, the reason for CISC was simple: if you could execute 100,000 instructions per second, the CPU that needed the fewest instructions for a given task would win. One complex instruction would be better than two simple instructions.

RISC is based on the observation that as CPUs became faster, the time needed varied a lot between instructions. Two simple instructions might in fact be faster than one complex one, especially when you optimized the CPU for simple instructions.
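To make that trade-off concrete, here is an illustrative sketch (x86-64 NASM-style syntax and AArch64 assembly; the register choices are arbitrary): a CISC instruction can fold a memory load into an ALU operation, while a classic load/store RISC needs separate instructions for the same work.

```asm
; x86-64: one instruction both loads from memory and adds
add  eax, [rbx]        ; EAX += *rbx  (load folded into the ALU op)

; AArch64 (load/store architecture): separate load and add
ldr  w1, [x2]          ; w1 = *x2
add  w0, w0, w1        ; w0 += w1
```

At a fixed instruction rate, the single x86 instruction wins; once simple instructions can be made much faster, the two-instruction RISC sequence can come out ahead.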

MSalters
  • They are still not the same; if they were, then they could run software written for one on the other without the need for translation or emulation. And if ARM is not RISC anymore, then what is it? Also, which is better? Should I run Premiere Pro on an ARM or on an Intel/AMD x86 CPU of similar clock speed? – papajo May 30 '17 at 22:24
  • @papajo ARM is not RISC, just ARM. Which is better? It depends on which points you weight more. It is meaningless to compare the two architectures, since they target totally different markets. Should you run Premiere Pro on an ARM or an x86? Premiere Pro does not support ARM. – Bumsik Kim May 30 '17 at 22:51
  • Well, the R in ARM stands for RISC as far as I know... and I am aware of the vague arguments. Yes, they are not the same, but they can still be compared, as you can compare a supersport with a naked bike (or even an off-road one). And don't take my question literally; I know Premiere Pro doesn't support ARM. My question is: isn't there a reason for that? Could an ARM run such a heavy, complex program performing similarly or close to x86? Generally, from a power user's perspective, forgetting about market orientation etc., who should win theoretically? – papajo May 30 '17 at 23:10
  • @papajo It was about 25 years ago that the company was named ARM. Identifying a CPU as RISC or CISC has become vague and meaningless after 25 years. Again, RISC and CISC are just old, historical labels for the types of CPU used about 20 years ago: x86 is x86 and ARM is ARM now. Of course ARM chips can run heavy programs if you wish, but ARM is not currently targeting giant CPUs for mainframes; they focus on low-power, highly efficient chips for embedded systems, whereas x86 has big chips for mainframes, meaning much higher performance. We really can't ditch marketing perspectives. – Bumsik Kim May 31 '17 at 07:11
  • @papajo Your question is like "Which OS is better: Windows or Linux?" The answer is: it depends on where you use it. There is no single theoretical standard to tell which architecture is better. – Bumsik Kim May 31 '17 at 07:33
  • @BumsikKim: Actually, originally, ARM stood for Acorn RISC Machine, a CPU Acorn had developed for its Archimedes line of computers. Only later was the design sold to a company. https://en.wikipedia.org/wiki/ARM_architecture – Rudy Velthuis May 31 '17 at 12:03
  • @RudyVelthuis Exactly. It was more than 25 years ago that it was named Acorn RISC Machine. We shouldn't say ARM really sticks to RISC because of a name made decades ago. ARM has become its own architecture. We don't need to care whether it is RISC or CISC anymore, because x86 and ARM have both become something beyond RISC and CISC. In other words, the key difference between modern x86 and ARM is not CISC vs RISC anymore. CISC and RISC refer to their origins, not the current main architectural difference. – Bumsik Kim May 31 '17 at 14:27
  • @BumsikKim Elaborate on that, please; what do you mean, "it has become something beyond RISC and CISC"? Does ARM talk directly to memory like x86 does? Does it have complex instructions? Any example of one? – papajo May 31 '17 at 16:50
  • @BumsikKim Also, I think it's more like asking which OS is better, Windows or Chrome OS... and there is an answer to that question. Or, in other words, if that is not right, then I am asking to know why (if x86 is Windows) ARM is not Chrome OS but Linux or Mac. – papajo May 31 '17 at 16:53
  • @papajo: Every ARM CPU talked to memory; the era of CPUs that did not had ended by the time RISC came into the picture. And ARM has instructions specifically for Java. – MSalters May 31 '17 at 17:55

This is part of an answer I wrote for Could a processor be made that supports multiple ISAs? (ex: ARM + x86) (originally posted here when that was closed, now I've edited this down to keep just the parts that answer this question)

This is not an exhaustive list of differences, just some key differences that make building a bi-arch CPU not as easy as slapping a different front-end in front of a common back-end design. (I know that wasn't the aspect this question intended to focus on).


The more different the ISAs, the harder it would be, and the more overhead it would cost in the pipeline, especially the back-end.

A CPU that could run both ARM and x86 code would be significantly worse at either one than a pure design that only handles one.

  • Efficiently running 32-bit ARM requires support for fully predicated execution, including fault suppression for loads / stores. (Unlike AArch64 or x86, which only have ALU-select type instructions like csinc vs. cmov / setcc that just have a normal data dependency on FLAGS as well as their other inputs.)

  • ARM and AArch64 (especially SIMD shuffles) have several instructions that produce 2 outputs, while almost all x86 instructions only write one output register. So x86 microarchitectures are built to track uops that read up to 3 inputs (2 before Haswell/Broadwell), and write only 1 output (or 1 reg + EFLAGS).

  • x86 requires tracking the separate components of a CISC instruction, e.g. the load and the ALU uops for a memory source operand, or the load, ALU, and store for a memory destination.

  • x86 requires coherent instruction caches, and snooping for stores that modify instructions already fetched and in flight in the pipeline, or some way to handle at least x86's strong self-modifying-code ISA guarantees (Observing stale instruction fetching on x86 with self-modifying code).

  • x86 requires a strongly-ordered memory model. (program order + store buffer with store-forwarding). You have to bake this in to your load and store buffers, so I expect that even when running ARM code, such a CPU would basically still use x86's far stronger memory model. (Modern Intel CPUs speculatively load early and do a memory order machine clear on mis-speculation, so maybe you could let that happen and simply not do those pipeline nukes. Except in cases where it was due to mis-predicting whether a load was reloading a recent store by this thread or not; that of course still has to be handled correctly.)

    A pure ARM could have simpler load / store buffers that didn't interact with each other as much. (Except for the purpose of making stlr / ldar release / acquire cheaper, not just fully stalling.)

  • Different page-table formats. (You'd probably pick one or the other for the OS to use, and only support the other ISA for user-space under a native kernel.)

  • If you did try to fully handle privileged / kernel stuff from both ISAs, e.g. so you could have HW virtualization with VMs of either ISA, you also have stuff like control-register and debug facilities.
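Two of the points above can be sketched in assembly (illustrative only; the register choices are arbitrary, and the exact uop split varies by microarchitecture):

```asm
; 32-bit ARM predication (ARM mode): almost any instruction can be
; conditional, and a predicated-false load must not fault.
cmp    r0, #0
ldreq  r1, [r2]          ; load executes only if r0 == 0

; x86 memory-destination instruction: one instruction, but the
; back-end must track roughly load + ALU + store components:
add    dword [rdi], eax
;   uop: load  tmp <- [rdi]
;   uop: add   tmp <- tmp + eax   (also writes EFLAGS)
;   uop: store [rdi] <- tmp
```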

So does this mean that the x86 instructions get translated to some weird internal RISC ISA during execution?

Yes, but that "RISC ISA" is not similar to ARM. e.g. it has all the quirks of x86, like shifts leaving FLAGS unmodified if the shift count is 0. (Modern Intel handles that by decoding shl eax, cl to 3 uops; Nehalem and earlier stalled the front-end if a later instruction wanted to read FLAGS from a shift.)
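A minimal sketch of that quirk (NASM-style syntax, arbitrary registers): because a count of zero must leave FLAGS untouched, a variable-count shift effectively has FLAGS as an input as well as an output.

```asm
mov  cl, 0             ; shift count of zero
add  eax, eax          ; sets FLAGS (ZF, etc.)
shl  eax, cl           ; count == 0: EAX *and* FLAGS left unmodified
jz   was_zero          ; reads the FLAGS produced by the ADD,
                       ; so SHL must preserve them, not clobber them
```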

Probably a better example of a back-end quirk that needs to be supported is x86 partial registers, like writing AL and AH, then reading EAX. The RAT (register allocation table) in the back-end has to track all that, and issue merging uops or however it handles it. (See Why doesn't GCC use partial registers?).
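An illustrative partial-register sequence (arbitrary registers; whether AL and AH are renamed separately depends on the microarchitecture):

```asm
mov  al, 1             ; writes bits 0..7 of RAX
mov  ah, 2             ; writes bits 8..15, independent of AL
add  ecx, eax          ; reads full EAX: the back-end must merge the
                       ; partial writes (and the old upper bits)
                       ; before this add can execute
```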

See also Why does Intel hide internal RISC core in their processors? - that RISC-like ISA is specialized for executing x86, not a generic neutral RISC pipeline like you'd build as a back-end for an AArch64 or RISC-V.

Peter Cordes