There are likely to be several "problems", or ways to make it faster.
I wouldn't call them "bottlenecks" because often they are not localized.
Typically they are perfectly good code - nobody ever expected them to end up on the "critical path".
Suppose the problems, when fixed, would save these percentages of the original run time:

    A: 30%
    B: 21%
    C: 14.7%
    D: 10.3%
    E: 7.2%
    F: 5%

Just finding one of them would give you a certain amount of speedup.
Like if you just find A, that would give you a speedup of 1/(1 - 0.3) = 1.43, or 43% faster.
If you did that, you could, like most people, be happy and stop.
However, if you continued and also found B, your total speedup would be 1/(1 - 0.51) = 2.04, or 104% faster.
That's a lot more than 43%, even though B was smaller than A.
Fixing C brings you up to 2.92 times faster, and D brings you to 4.2 times faster.
What? Fixing smaller problems has higher payoff?
They can, because speedup factors compound.
Fixing A and B in that order gives you 1.43 * 1.43 = 2.04.
If you happen to fix them in the opposite order, you get 1.27 * 1.61 = 2.04.
Either way the remaining time is 1 - 0.30 - 0.21 = 0.49 of the original, so the factors have to multiply out to 1/0.49 = 2.04.
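You can check that arithmetic with a couple of lines of Python (the `factor` helper is just for illustration, not part of any tool):

```python
def factor(save, remaining):
    # Speedup factor from removing `save` (a fraction of the original
    # run time) when `remaining` of the original time is still left.
    return remaining / (remaining - save)

# A saves 30% of the original time, B saves 21%.
a_first = factor(0.30, 1.00) * factor(0.21, 0.70)  # 1.43 * 1.43
b_first = factor(0.21, 1.00) * factor(0.30, 0.79)  # 1.27 * 1.61
print(round(a_first, 2), round(b_first, 2))        # 2.04 2.04
```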
Each time you fix something, the other issues become larger percentage-wise, and therefore easier to find, and the speedups accumulate like a high-yield investment.
By the time you fix A, B, C, D, and E, the only one left is F, and it isn't 5% any more - it's 30% of the remaining run time.
Fix all of them, and now you are 8.5 times faster!
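If you want to verify the whole cascade, here's a quick sketch, assuming the percentages listed above; the rounded outputs match the numbers quoted (4.17 ≈ 4.2, 8.47 ≈ 8.5):

```python
# Fraction of the ORIGINAL run time each fix saves.
saves = {"A": 0.30, "B": 0.21, "C": 0.147, "D": 0.103, "E": 0.072, "F": 0.05}

remaining = 1.0
for name, save in saves.items():
    remaining -= save
    print(f"after fixing {name}: {1 / remaining:.2f}x faster")
# after fixing A: 1.43x ... after fixing F: 8.47x
```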
However, if you miss one, like D, because your profiling tool fails to expose it, you are only 4.5 times faster.
That's the price you pay for not finding a problem.
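To put a number on that price, drop D from the same sketch and total up what's left:

```python
# Same percentages as above, but D never shows up in the profile.
saves = [0.30, 0.21, 0.147, 0.072, 0.05]  # A, B, C, E, F
print(f"only {1 / (1 - sum(saves)):.2f}x faster")  # 4.52x, not 8.47x
```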
That's why I rely on a manual technique: it finds all the problems profilers find, and it finds ones they don't.
Profilers are often concerned with peripheral issues, like accuracy of measurement, which doesn't help you find problems.
If you're wondering why, here's the math.