There is no good reason for it anymore except for some very rare cases.
To debunk the most common argument: It helps the memory allocator to avoid fragmentation.
Most often it will not. If you allocate, let's say, 256 bytes, the memory allocator will add some additional space for its internal management and housekeeping, so your allocation is internally larger than 256 bytes. Two 256-byte buffers therefore do not take the same space as one 512-byte buffer.
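You can see that overhead yourself. Here is a minimal sketch (glibc-specific: malloc_usable_size is not portable, and the exact numbers depend on your allocator) that prints how much memory a 256-byte request really occupies:

/* glibc-specific sketch: show that a 256-byte request usually
 * consumes more than 256 bytes inside the allocator. */
#include <stdio.h>
#include <stdlib.h>
#include <malloc.h>

int main(void)
{
    char *a = malloc(256);
    char *b = malloc(256);
    char *c = malloc(512);

    /* Usable sizes are typically rounded up, and the allocator keeps
     * per-chunk bookkeeping on top of that, so 2 x 256 != 1 x 512. */
    printf("requested 256, usable %zu\n", malloc_usable_size(a));
    printf("requested 256, usable %zu\n", malloc_usable_size(b));
    printf("requested 512, usable %zu\n", malloc_usable_size(c));

    free(a); free(b); free(c);
    return 0;
}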
For performance it may even do harm, because of how the CPU caches work.
Let's say you need N buffers of some size. You might declare them this way:
char buffer[N][256];
Now buffer[0] through buffer[N-1] all have identical least significant bits in their addresses, and exactly those bits are used to select cache lines. The first bytes of your buffers therefore all compete for the same few places in your CPU cache.
If you do calculations on the first few bytes of each buffer over and over again, those accesses keep evicting each other and you won't see much acceleration from your first-level cache.
If, on the other hand, you declare them like this:
char buffer[N][300];
The individual buffers no longer have identical least significant bits in their addresses, and the first-level cache can be used effectively.
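To make the effect visible, here is a small sketch that counts how many distinct L1 cache sets the first byte of each buffer lands in. The cache geometry (64-byte lines, 64 sets, i.e. a typical 32 KiB 8-way L1d) and the array names are assumptions for illustration; adjust them for your CPU:

/* Count the distinct L1 sets hit by the first byte of each buffer,
 * assuming 64-byte cache lines and 64 sets (32 KiB, 8-way L1d). */
#include <stdio.h>
#include <stdint.h>

#define N 64
#define LINE_SIZE 64
#define NUM_SETS  64

static char pow2_buf[N][256];
static char padded_buf[N][300];

static int count_sets(const char *base, size_t stride)
{
    int used[NUM_SETS] = {0};
    int count = 0;
    for (int i = 0; i < N; i++) {
        uintptr_t addr = (uintptr_t)(base + i * stride);
        int set = (int)((addr / LINE_SIZE) % NUM_SETS);
        if (!used[set]) { used[set] = 1; count++; }
    }
    return count;
}

int main(void)
{
    /* With a 256-byte stride only every 4th set is reachable;
     * with a 300-byte stride the buffers spread over nearly all sets. */
    printf("stride 256: first bytes land in %d of %d sets\n",
           count_sets(&pow2_buf[0][0], 256), NUM_SETS);
    printf("stride 300: first bytes land in %d of %d sets\n",
           count_sets(&padded_buf[0][0], 300), NUM_SETS);
    return 0;
}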
Lots of people have already run into this issue; see for example this question: Matrix multiplication: Small difference in matrix size, large difference in timings
There are a few legitimate use cases for power-of-two buffer sizes. If you write your own memory allocator, for example, you want to manage your raw memory in chunks equal to the operating system's page size. Or you may have hardware constraints that force you to use power-of-two sizes (GPU textures etc.).
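As an illustration of the allocator case, here is a minimal POSIX-style sketch (assuming mmap with MAP_ANONYMOUS is available, as on Linux and the BSDs) that requests raw memory from the OS in page-sized, power-of-two chunks:

/* Request raw memory from the OS in multiples of the page size,
 * which is a power of two. */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/mman.h>

int main(void)
{
    long page = sysconf(_SC_PAGESIZE);   /* typically 4096 */
    size_t pool_size = (size_t)page * 16;

    /* An allocator would carve this page-aligned, power-of-two-sized
     * region into its own chunks. */
    void *pool = mmap(NULL, pool_size, PROT_READ | PROT_WRITE,
                      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (pool == MAP_FAILED) {
        perror("mmap");
        return EXIT_FAILURE;
    }

    printf("page size %ld, pool of %zu bytes at %p\n", page, pool_size, pool);
    munmap(pool, pool_size);
    return 0;
}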