The primary use is when you have some part of a larger item defined in terms of specific bits.
For an obvious example, consider a 32-bit number holding a color -- 8 bits each for red, green, and blue, and (possibly) the other 8 bits for alpha (signifying how transparent this color/pixel should be). In hexadecimal, the digits would look like:
AARRGGBB
(i.e., two hex digits, or 8 bits, for each component).
We can take such a value and break it into its components with something like:
blue = color & 0xff;           /* BB: bits 0-7   */
green = (color >> 8) & 0xff;   /* GG: bits 8-15  */
red = (color >> 16) & 0xff;    /* RR: bits 16-23 */
alpha = (color >> 24) & 0xff;  /* AA: bits 24-31 */
Conversely, we can put components together:
color = (alpha << 24) | (red << 16) | (green << 8) | blue;
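Wrapped up as a complete (purely illustrative) C program -- the variable names and the ARGB layout are just the ones used above:

#include <stdio.h>

int main(void) {
    unsigned color = 0x80FF7F3Fu;   /* AA=80, RR=FF, GG=7F, BB=3F */

    /* break the packed value into its components */
    unsigned blue  = color & 0xff;
    unsigned green = (color >> 8) & 0xff;
    unsigned red   = (color >> 16) & 0xff;
    unsigned alpha = (color >> 24) & 0xff;
    printf("A=%02X R=%02X G=%02X B=%02X\n", alpha, red, green, blue);

    /* ...and put them back together into the same packed value */
    unsigned repacked = (alpha << 24) | (red << 16) | (green << 8) | blue;
    printf("repacked: %08X\n", repacked);
    return 0;
}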
You also typically end up doing bit-twiddling like this when dealing with hardware. For example, you might have a 16-bit register that dedicates 5 bits to one thing, 2 more bits to something else, 6 bits to a third, and so on. When/if you want to change one of those, you do roughly what the color example above shows: isolate the bits that represent one field, modify them as needed, then put them back together with the other bits.
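For instance, updating a 2-bit "mode" field in a made-up 16-bit register layout might look something like this (the layout and the names here are purely hypothetical):

#include <stdint.h>

/* hypothetical register layout: bits 0-4 = speed, bits 5-6 = mode, bits 7-12 = divisor */
#define MODE_SHIFT 5
#define MODE_MASK  (0x3u << MODE_SHIFT)

uint16_t set_mode(uint16_t reg, unsigned new_mode) {
    reg &= (uint16_t)~MODE_MASK;                              /* clear the old mode bits */
    reg |= (uint16_t)((new_mode << MODE_SHIFT) & MODE_MASK);  /* OR in the new mode, masked to its field */
    return reg;                                               /* caller writes this back to the hardware */
}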
Another (quite unrelated) application is in things like hashing. Here we don't typically have fields as such, but we want some number of input bytes to produce a single output, with all the bits of the output affected to at least some degree by the bytes of the input. To accomplish that, most hash functions shift the accumulated bits around so each byte of input has at least some chance of affecting different parts of the result.
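A minimal sketch of that idea, along the lines of the well-known djb2 string hash (the shift-and-add is just a cheap way to multiply by 33):

#include <stddef.h>

unsigned long hash_bytes(const unsigned char *data, size_t len) {
    unsigned long hash = 5381;                  /* arbitrary starting value */
    for (size_t i = 0; i < len; i++) {
        /* the shift pushes the accumulated bits upward, so each new byte
           ends up influencing more than just the low bits of the result */
        hash = ((hash << 5) + hash) + data[i];  /* hash * 33 + data[i] */
    }
    return hash;
}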
I'd add that although quite a bit of older code uses bit shifts to optimize multiplication or division by powers of 2, this is usually a waste of time with modern hardware and compilers. You will see it in existing code, and should understand what it's trying to accomplish -- but don't try to emulate its example.
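Just so it's recognizable when you run into it, the pattern is things like:

y = x << 3;   /* old-style "optimization" for x * 8 */
z = x >> 2;   /* likewise for x / 4 (on an unsigned x) */

A modern compiler will typically turn the plain multiplication or (unsigned) division into the same shift instruction on its own, so write whichever expresses the intent.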