With so many sorting algorithms to choose from, is it ever appropriate to use a higher-complexity algorithm?
It may be, if the big-O complexity of concern is a worst case you're certain you won't hit, or if n is known to be small as you say (e.g. n is the number of white pawns left on a chess board) and constant factors are more important.
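As a hypothetical illustration (the pawn example above, not code from any particular project): an O(n^2) insertion sort has low constant factors and is perfectly fine when n is tiny and bounded.

```python
def insertion_sort(items):
    """O(n^2) worst case, but low constant factors and no allocation churn:
    fine when n is known to be tiny and bounded."""
    result = list(items)
    for i in range(1, len(result)):
        key = result[i]
        j = i - 1
        while j >= 0 and result[j] > key:
            result[j + 1] = result[j]  # shift larger elements right
            j -= 1
        result[j + 1] = key
    return result

# e.g. board squares of the (at most 8) white pawns still on the board
pawn_squares = [52, 12, 44, 28, 36, 20]
print(insertion_sort(pawn_squares))  # → [12, 20, 28, 36, 44, 52]
```

With n capped at 8, the quadratic worst case simply can't grow large enough to matter.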
O(2^n) is extreme, though. You also have to consider how stable your reasons for using it are: could someone (including you, in the future) accidentally modify the code in a way that invalidates the suitability of the O(2^n) algorithm, leaving the app to lock up sometimes when it's invoked with a larger n than originally anticipated, or with less "friendly" data?
For most things, it's worth putting the time in up front to create a reasonably efficient and hopefully reusable algorithm, and then just not have to worry. But sometimes that's massively more complex and error-prone, and the CPU or memory benefits just don't justify it. Ultimately, what matters is the impact, short- and long-term, on your actual project.
Quite often when writing algorithmic code, there's a dead-easy, obvious way to solve the problem, and a complex but efficient way. It's a good idea to quickly write the former so you can use it to test the latter. The former may be called an "oracle" implementation, because it's trusted to know the truth: the right answer. If it also happens to be fast enough, and you have the limits on n or the data scenarios discussed above, you may never need to progress to the complex implementation.
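The oracle pattern above might be sketched like this (names and the choice of algorithms are illustrative, not prescriptive): a slow, obviously-correct selection sort serves as the oracle, and a quicksort stands in for the complex implementation under test.

```python
import random

def oracle_sort(items):
    """Dead-easy selection sort: slow but obviously correct,
    so we trust its answer as the truth."""
    result = list(items)
    for i in range(len(result)):
        m = min(range(i, len(result)), key=result.__getitem__)
        result[i], result[m] = result[m], result[i]
    return result

def fast_sort(items):
    """Stand-in for the complex, efficient implementation under test
    (here, a simple recursive quicksort)."""
    if len(items) <= 1:
        return list(items)
    pivot = items[len(items) // 2]
    return (fast_sort([x for x in items if x < pivot])
            + [x for x in items if x == pivot]
            + fast_sort([x for x in items if x > pivot]))

# Cross-check the complex implementation against the oracle on random inputs.
random.seed(0)
for _ in range(500):
    data = [random.randint(-50, 50) for _ in range(random.randint(0, 40))]
    assert fast_sort(data) == oracle_sort(data), f"mismatch on {data}"
print("fast_sort agrees with the oracle on all random cases")
```

Because the oracle is trivial to verify by inspection, any disagreement points the finger squarely at the complex implementation.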