
I'm now going to learn ARM assembly to develop for my Windows Mobile 5 iPAQ, but I have some questions:

  • What are the main differences between ARM assembly and x86 assembly?
    • Are there any differences in the interrupts (new types)?
      • Which ones are they, and what do they mean?
    • What is the best assembler to use, and where can I get it?
  • Where can I find some good resources?
Nathan Campos
    ARM assembly is understandable by humans. x86 assembly is a maze of historical anomalies, twisty little non-cartesian corridors and eldritch extensions that only a compiler could cope with! – bobince Nov 13 '09 at 23:30
  • 7
    If humans can understand ARM, then surely my dog could program MIPS! – new123456 Jul 16 '11 at 02:11
  • http://stackoverflow.com/a/14795541/1163019 – auselen Mar 16 '14 at 07:32

2 Answers


Main differences:

  • ARM is a RISC-style architecture: instructions have a regular size (32 bits for standard ARM and 16 bits for Thumb mode, though Thumb has some instructions that take up two instruction 'slots')

  • up through at least the ARMv5 architecture (I'm not sure what v6 does), the interrupt model on ARM is vastly different from Intel's: instead of pushing registers onto the stack, the ARM swaps to a different set of registers that 'shadow' the normal set. The processor mode determines which register file is visible (and not all registers are necessarily shadowed); it's a fairly complex arrangement (see the sketch below). Newer ARM architectures (v7, anyway) have an interrupt model that's closer to Intel's, where registers are pushed onto the stack when an interrupt occurs.
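
As a rough illustration of the banked-register model, here is a minimal sketch of a classic (ARMv4/v5-style) IRQ entry in GNU assembler syntax; irq_handler and do_device_irq are made-up names, and a real handler also has to interact with the vendor's interrupt controller:

```
@ sketch only: classic ARM IRQ entry, GNU as syntax
@ on IRQ the core switches to IRQ mode, which has its own banked sp and lr,
@ so only the registers this handler itself clobbers need to be saved
irq_handler:
        sub     lr, lr, #4              @ fix up the return address (IRQ-specific offset)
        stmfd   sp!, {r0-r3, r12, lr}   @ save caller-saved registers on the IRQ-mode stack
        bl      do_device_irq           @ hypothetical routine that services the device
        ldmfd   sp!, {r0-r3, r12, pc}^  @ restore and return; '^' also restores CPSR from SPSR
```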

ARM instructions have some interesting features that aren't in Intel's:

  • instructions have condition flags built in, so each instruction can execute as a NOP if the specified condition doesn't match the current status register flag state (this can be used to avoid all those jumps around one or two instructions that you often see in Intel assembly).
  • the ARM has shifting logic that can be embedded as part of an instruction, so when using a register as a source operand you can shift it as an intrinsic part of the instruction. This helps with indexing arrays, and sometimes with arithmetic. Both features are illustrated in the sketch after this list.
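
A minimal sketch of both features, in GNU assembler syntax for classic (non-Thumb) ARM; the register choices are arbitrary:

```
@ sketch only: predication and the barrel shifter in classic ARM
        cmp     r0, #0                  @ set the condition flags from r0
        movlt   r0, #0                  @ executed only if r0 < 0: clamp to zero without a branch
        add     r2, r1, r0, lsl #2      @ r2 = r1 + (r0 << 2), i.e. index a word array in one instruction
```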

On the other hand, the ARM can't do much with memory directly except load from and store to it; Intel assembly can perform more operations directly on memory.
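
To make the load/store contrast concrete, here is a small sketch (GNU assembler syntax) of incrementing a word in memory; the x86 equivalent is shown only as a comment:

```
@ sketch only: ARM is load/store, so a read-modify-write takes three instructions
        ldr     r1, [r0]        @ load the word that r0 points to
        add     r1, r1, #1      @ modify it in a register
        str     r1, [r0]        @ store it back
@ on x86 the same effect is a single instruction, e.g.  add dword ptr [eax], 1
```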

Note that the ARM architecture version doesn't correspond directly to the actual ARM processor versions - for example, if I remember right the ARM7 is an architecture v5 processor. Personally, I find this far more confusing than it should be.

The ARM Architecture references are freely downloadable from http://www.arm.com. I also suggest getting copies of Hitex's guides to various ARM microcontrollers for a good starting point.

There have been several Stack Overflow questions asking for pointers on getting started with ARM. Reviewing them will give you a lot of good places to start:

Michael Burr
    ARM7 is ARMv4. ARM9 is ARMv4T or ARMv5E. (E implies TDMI) "ARM" is the CPU family. "ARMv" is the instruction set version. –  Dec 19 '09 at 12:55

You should also realise that ARM license their IP rather than produce chips, and a licensee may configure their ARM core microprocessor in a number of ways. Most importantly with respect to your question, the ARM core itself defines only two interrupts, IRQ and FIQ; most often there is a vendor-specific interrupt controller, so you need to know exactly whose microprocessor is used in your device if you need to know how to handle interrupts. iPAQ models have variously used Intel StrongARM and XScale processors. If you want to develop at that level, you should download the user reference manual for the specific part.
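
Purely as an illustration of why the vendor manual matters, here is a sketch of dispatching from the single ARM IRQ line through an interrupt controller; the register address, bit assignment and handler name below are invented, not taken from any real part:

```
@ sketch only: poll a hypothetical interrupt-controller status register
        ldr     r0, =0x40000000         @ invented address of the controller's pending register
        ldr     r1, [r0]                @ read the pending-interrupt bits
        tst     r1, #(1 << 3)           @ suppose bit 3 were the timer on this imaginary part
        blne    timer_isr               @ branch-and-link to its handler only if pending
```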

All that said, interrupt services and device drivers are provided by the OS, so you probably don't need to worry about such low-level details. In fact, I would question the choice of assembler as your development language: there are few reasons to choose assembler over C or C++ on ARM (the compiler will almost certainly outperform you in terms of code performance). Moreover, on Windows Mobile the most productive application-level language is likely to be C#.

Clifford
    "the compiler will almost certainly out perform you in terms of code performance". IME, you would have to really suck at programming for that to be the case. – J D Apr 02 '12 at 09:19
  • 2
    @Jon: The point perhaps requires some qualification: A casual assembler programmer, not fully familiar with the instruction set and idioms specific to it, is unlikely to out-perform a compiler optimiser written by a practitioner *with* that knowledge. The cost and availability of someone capable of out-performing an optimising compiler while being as productive is probably also prohibitive. Possibly you are that practitioner, or perhaps the compilers "in your experience" suck? ;) – Clifford Apr 02 '12 at 11:00
  • 1
    As an example, I recently implemented an FIR filter on ARM Cortex-M3 and ARM RealView compiler, and experimented with various idioms in the high-level C code to maximise performance. The execution time was halved by using compiler optimisations, and when compared with the vendor's hand-written assembler DSP library implementation, it performed similarly. The C implementation was in fact marginally faster, but probably only because it was optimised to our application requirements rather than being generic. – Clifford Apr 02 '12 at 11:08
  • Interesting. Maybe ARM compilers have come along since I last looked. I am learning x86 right now and, although I've only written a few trivial benchmarks, I've had no problem thrashing the GCC/MSVC++/F#/OCaml compilers. – J D Apr 03 '12 at 19:30
  • 1
    For example, recursive floating-point Fibonacci function: 14.23s with OCaml (~65 instructions), 11.23s with F#, 9.94s with MS VC++, 9.58s with GCC -O2 (~200 instructions), 6.82s with my x86 assembler (15 instructions). These compilers are all generating *awful* x86 code. I knew OCaml was bad for floats on x86 but I had no idea C++ compilers were so poor as well... – J D Apr 04 '12 at 08:46
  • 1
    @Jon: Benchmarks are a dangerous thing. Most computing tasks are not of that nature, and compilers are better targeting performance of common programming idioms rather than specific benchmarks. We'd have to know the compiler version and options used, as well as the actual code, to verify any results and the validity/fairness of the test, but given that the question is about ARM, it is probably off topic. The StrongARM processor in the iPAQ in question does not even have an FPU, so the figure would be bound to the performance of the floating-point software, not the algorithm. – Clifford Apr 04 '12 at 13:35
  • "given that the question is about ARM". Although my example was floating point I find this is usually true in general (and most of my asm experience is on ARMs). – J D Apr 04 '12 at 18:08
  • @Jon Rerun your GCC benchmark w/o libc and compare instruction counts. I'd bet it's not the language/compiler that you were 'thrashing,' but the runtime. – Ben Burns Apr 29 '13 at 17:38