Yes, many times, but it works best if you design the system from the ground up for testability. That means building a mock board with connections to digital and analog I/O cards in your development or test box, if budget and design criteria allow. I've seen mock-ups that were arrays of FPGAs and other logic covering an 8' by 4' bus panel mounted to a lab wall, backed by racks of integrated HPCs (ASIC design testing). Of course, you sometimes have to slow everything down by orders of magnitude to stay within the limits of your mock hardware.
In your case, it may be enough to simply measure actual boot time from system power-on/reset until the boot code raises a signal on a test pin or emits a communications packet, combined with some POST code to verify chip and peripheral configurations. For unit/integration testing, that POST is often more extensive than what you would ship with the product. The latter implies that you have automation running on a PC/server-class machine with the programming interfaces needed to program the device(s) and monitor any unit/integration-test or POST signalling. If you have separate development and shipping POST code, run both in the lab environment for every build.
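As a rough illustration of that host-side automation, here is a minimal Python sketch. The marker strings and POST line format are assumptions for the example (your boot code defines the real protocol), and the injectable clock lets you unit-test the helper itself with a canned transcript instead of a live serial port:

```python
import io
import time

BOOT_MARKER = "BOOT_OK"    # hypothetical boot-complete marker the firmware emits
POST_PREFIX = "POST:"      # hypothetical POST result lines, e.g. "POST:uart=pass"

def measure_boot(stream, clock=time.monotonic, timeout=10.0):
    """Wait for the boot-complete marker on `stream` (in the lab, an open
    serial-port handle) and return (elapsed_seconds, post_results).
    `clock` is injectable so this helper can be tested with no hardware."""
    start = clock()
    post = {}
    for line in stream:
        line = line.strip()
        if line.startswith(POST_PREFIX):
            name, _, status = line[len(POST_PREFIX):].partition("=")
            post[name] = status
        if line == BOOT_MARKER:
            return clock() - start, post
        if clock() - start > timeout:
            break
    raise TimeoutError("device never signalled boot completion")

# On the bench you would pass a serial-port object; here, a canned transcript:
log = io.StringIO("POST:uart=pass\nPOST:adc=pass\nBOOT_OK\n")
elapsed, results = measure_boot(log)
```

In practice you would start `clock()` when the automation toggles the reset line, so the measurement covers the full power-on/reset-to-signal window rather than just the logging.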
In the early design phase of the system, and through all hardware and software development cycles, keep a watchful eye out for functionality that can't be tested in simulation, and isolate it from what can be fully simulated off-product. Your DevOps test cycles should run all tests of those portions of the code base before allowing any submission, which includes maintaining the required mocks as development proceeds. It is almost always cheaper, and in most cases faster, to run unit tests on a PC/server-class test machine against mocks than to modify the hardware and integrate it with DevOps.
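To make the mock idea concrete, here is a small Python sketch of the pattern: a mock bus object stands in for the real hardware interface so driver logic runs on a PC. The device address, register layout, and scaling are invented for illustration, not taken from any real part:

```python
class MockI2CBus:
    """Stand-in for a real I2C bus driver: records writes and serves
    canned reads so driver code can be unit-tested with no hardware."""
    def __init__(self, registers=None):
        self.registers = dict(registers or {})
        self.writes = []

    def write_reg(self, addr, reg, value):
        self.writes.append((addr, reg, value))
        self.registers[(addr, reg)] = value

    def read_reg(self, addr, reg):
        return self.registers.get((addr, reg), 0)

# Hypothetical temperature-sensor driver under test; all constants assumed.
TEMP_ADDR, CTRL_REG, DATA_REG = 0x48, 0x01, 0x00

def enable_sensor(bus):
    bus.write_reg(TEMP_ADDR, CTRL_REG, 0x80)   # assumed power-up bit

def read_temp_c(bus):
    raw = bus.read_reg(TEMP_ADDR, DATA_REG)
    return raw * 0.5                           # assumed 0.5 degC/LSB scaling

# Exercise the driver against the mock instead of the product board:
bus = MockI2CBus({(TEMP_ADDR, DATA_REG): 50})
enable_sensor(bus)
temp = read_temp_c(bus)
```

The same driver functions later run unchanged against the real bus object on target hardware; only the bus implementation swaps, which is what keeps the mock cheap to maintain as development proceeds.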
EDIT: You can also embed one or more Cortex-M3 cores in an FPGA and implement your entire mock hardware around them as FPGA logic.