Unlike the M-profile architectures, whose very different exception model does permit tail-chaining of exceptions, the classic/A-profile architectures do things in a completely straightforward manner.
Interrupts are checked for at instruction boundaries, whenever the respective CPSR.F/CPSR.I bit is clear. Thus, assuming the FIQ handler is straightforward, once the instruction at 0x108 completes, the FIQ is taken (as it has priority over the IRQ) from whatever mode the CPU was in, the FIQ handler runs with FIQs and IRQs masked, then performs an exception return to 0x110. The fact that there happened to be an IRQ pending throughout makes no difference whatsoever.
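To make the sequencing concrete, here is a minimal conceptual sketch in C of the decision a classic ARM core effectively makes at each instruction boundary. It is not real hardware or handler code; the names (`cpsr`, `fiq_pending`, `take_fiq`, etc.) are made up purely for illustration:

```c
#include <stdbool.h>
#include <stdint.h>

#define CPSR_I (1u << 7)   /* bit 7: IRQs masked when set */
#define CPSR_F (1u << 6)   /* bit 6: FIQs masked when set */

/* Illustrative machine state and entry stubs, not a real API. */
extern uint32_t cpsr;
extern bool fiq_pending, irq_pending;
extern void take_fiq(void);   /* enters FIQ mode; sets both I and F */
extern void take_irq(void);   /* enters IRQ mode; sets I, leaves F alone */

/* Conceptually runs between every pair of instructions. */
void check_interrupts(void)
{
    if (fiq_pending && !(cpsr & CPSR_F)) {
        take_fiq();           /* FIQ wins: higher priority than IRQ */
    } else if (irq_pending && !(cpsr & CPSR_I)) {
        take_irq();
    }
    /* Otherwise: just execute the next instruction as normal. */
}
```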
The point to note is the boundary between the return instruction at the end of the FIQ handler and the instruction being returned to. The FIQ return will restore the previous SPSR, which (presumably) has IRQs unmasked. Thus, after executing that return instruction but before executing the one at 0x110, the CPU is back in the initial mode, with IRQs unmasked and an IRQ pending. So it takes it: the IRQ handler runs with IRQs masked, then performs an exception return to 0x110, whereupon execution eventually continues, having served both interrupts.
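Continuing the same illustrative model from the sketch above, the FIQ handler's return instruction (e.g. `SUBS pc, lr, #4` in ARM state) restores pc and CPSR together, so the very next boundary check sees IRQs unmasked and takes the pending one:

```c
#include <stdint.h>

/* State and boundary check from the previous sketch (illustrative). */
extern uint32_t cpsr, pc, spsr_fiq, lr_fiq;
extern void check_interrupts(void);

/* Models the effect of the FIQ handler's exception return. */
void fiq_exception_return(void)
{
    cpsr = spsr_fiq;      /* presumably has the I bit clear again */
    pc   = lr_fiq - 4;    /* 0x110 in the example */

    /* The next instruction-boundary check now sees the IRQ still
     * pending and unmasked, so the IRQ is taken before the
     * instruction at 0x110 ever executes; the IRQ handler itself
     * later returns to 0x110. */
    check_interrupts();
}
```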
For ARM7TDMI, that's really all there is to it. Newer architecture versions (ARMv7 onwards) tighten up the rules about precisely when asynchronous exceptions are expected to be taken, since once CPU designs become superscalar and/or out-of-order the notion of an "instruction boundary" gets a bit blurry. This particular situation, though, would be no different on modern CPUs, as the exception return from FIQ constitutes a context-synchronising event after which any pending asynchronous exception (i.e. the IRQ) must be taken immediately.