Expanding on the excellent accepted answer (https://stackoverflow.com/a/661900/371793), for my application I took the approach of examining RAM after a watchdog reset a bit further. As this text is way too long for a comment, I'm adding this as an additional answer:
Our application is prefixed by a custom bootloader that provides a few functions such as OTA firmware updates. By setting the bootloader's initial stack pointer to 1 kB below the end of RAM, enough of the main firmware's stack survives a watchdog reset to reconstruct a backtrace. The bootloader then only needs to identify the watchdog as the reset cause and copy the last kB of RAM to a designated flash area, from which it can later be recovered.
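For illustration, here is a minimal sketch of that bootloader step, assuming an STM32 with ST's CMSIS register definitions; the RAM/flash addresses and the `flash_erase_and_write()` helper are placeholders, not part of our actual implementation:

    #include <stdint.h>
    #include "stm32f4xx.h"                      /* assumed device header (provides RCC) */

    #define RAM_END              0x20020000u    /* end of RAM, adjust per device */
    #define SNAPSHOT_SIZE        1024u          /* the last 1 kB, below the bootloader's own stack */
    #define SNAPSHOT_FLASH_ADDR  0x080E0000u    /* hypothetical spare flash sector */

    extern void flash_erase_and_write(uint32_t dst, const void *src, uint32_t len);  /* placeholder */

    void save_stack_snapshot_if_watchdog(void)
    {
        /* Did the independent or window watchdog cause the last reset? */
        if (RCC->CSR & (RCC_CSR_IWDGRSTF | RCC_CSR_WWDGRSTF)) {
            const void *snapshot = (const void *)(RAM_END - SNAPSHOT_SIZE);
            flash_erase_and_write(SNAPSHOT_FLASH_ADDR, snapshot, SNAPSHOT_SIZE);
        }
        RCC->CSR |= RCC_CSR_RMVF;               /* clear the reset flags for the next boot */
    }

The bootloader itself must of course not touch the last kB of RAM before the snapshot is taken, which is exactly what lowering its initial stack pointer guarantees.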
Reconstructing the backtrace is a bit cumbersome, as there is no PC to start from, so in practice the stack has to be unwound manually from the bottom to the top, and some educated guessing may be needed to locate the point at which the application stalled. (This is not necessarily the point at which stale data is seen and unwinding fails!)
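As a starting point for that manual unwinding, a small host-side sketch like the one below can list candidate return addresses in the dump; on Cortex-M, return addresses have the Thumb bit set and point into the firmware's text region (the flash range used here is an assumption):

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define TEXT_START 0x08004000u   /* hypothetical application start (after the bootloader) */
    #define TEXT_END   0x08080000u   /* hypothetical end of the application's flash */

    /* Print every 32-bit word in the RAM snapshot that could be a return address. */
    void list_return_address_candidates(const uint8_t *dump, size_t len, uint32_t ram_base)
    {
        for (size_t off = 0; off + 4 <= len; off += 4) {
            uint32_t word;
            memcpy(&word, dump + off, sizeof word);          /* snapshot is little-endian */
            if ((word & 1u) && word >= TEXT_START && word < TEXT_END)
                printf("0x%08lx: 0x%08lx\n",
                       (unsigned long)(ram_base + off), (unsigned long)word);
        }
    }

Feeding the candidates into `arm-none-eabi-addr2line -f -e firmware.elf` then turns them into function names and line numbers, which makes the educated guessing considerably easier.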
This approach helped me to systematically pinpoint an issue that occurred only very sporadically. To log additional 'breadcrumbs' on the stack, simply insert things like:
    // Breadcrumbs: kept on the stack so they end up in the RAM snapshot.
    // volatile ensures the writes are not optimized away.
    __attribute__((unused)) volatile uint32_t _state[4];
    _state[0] = 0x57a11ed;   // magic value to aid manual unwinding
    _state[1] = RCC->CSR;    // reset/clock status at this point
    _state[2] = count++;     // e.g. a static counter: maybe we're in a runaway loop?
    // etc.
In applications without a separate bootloader, the initial stack pointer could be set to 1 kB below the end of RAM and only moved to the end of RAM after regular boot (which of course is non-trivial!). Then, in case of a watchdog reset, the application can simply store or transmit the last kB of RAM for offline analysis.
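A minimal sketch of that variant, assuming a CMSIS environment (`__set_MSP()`) and reusing the hypothetical helpers from above; the vector table's initial stack pointer entry is assumed to already point 1 kB below the end of RAM:

    #include "stm32f4xx.h"                     /* assumed device/CMSIS header */

    #define RAM_END 0x20020000u                /* end of RAM, adjust per device */

    extern void save_stack_snapshot_if_watchdog(void);  /* see the bootloader sketch above */
    extern void start_application(void);       /* placeholder: C runtime init + main() */

    __attribute__((noreturn)) void Reset_Handler(void)
    {
        /* Still running on the lowered stack, so the last kB of RAM is untouched. */
        save_stack_snapshot_if_watchdog();

        /* Raise MSP to the real end of RAM; nothing on the old stack is needed
           afterwards, because this function never returns. */
        __set_MSP(RAM_END);
        start_application();
        for (;;) { }                           /* not reached */
    }

Doing this from C is fragile (the compiler must not keep anything live on the old stack across the `__set_MSP()` call), so in practice the stack pointer would typically be adjusted in the assembly startup code instead.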