This has nothing to do with Python specifically; it's just the way computers handle floating-point arithmetic.
Imagine trying to write down 1/3 exactly as a decimal in base 10: you can't, because you don't have an infinite amount of time or paper. There are infinitely many 3s, so any decimal representation you write down can only ever be an approximation.
Similarly, computers don't have an infinite amount of memory, so they can't represent certain fractions exactly (although, since computers work in base 2, the fractions affected are different ones). So in this case, the nearest the computer can get to 2.2 * 3.0 is 6.6000000000000005. This isn't a problem with the multiplication; it's because the computer can't store 2.2 completely accurately in the first place. Most of the time, though, the degree of accuracy you get is near enough.
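You can see this for yourself in the interpreter; the format calls below just print more digits of the values Python has actually stored:

```python
>>> 2.2 * 3.0
6.6000000000000005
>>> format(2.2, ".20f")   # the double nearest to 2.2
'2.20000000000000017764'
>>> format(6.6, ".20f")   # the double nearest to 6.6 (also inexact)
'6.59999999999999964473'
```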
If you need exact decimal arithmetic in Python, you can use the decimal module.
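For example (a minimal sketch; note that Decimals should be built from strings, because constructing one from a float would faithfully copy the already-inexact stored value):

```python
from decimal import Decimal

# Build Decimals from strings so they start out exact.
print(Decimal("2.2") * Decimal("3.0"))  # 6.60, exactly

# Constructing from a float copies the float's stored value, warts and all:
print(Decimal(2.2))  # prints the long, inexact value actually stored for 2.2
```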
As for the problems this causes in "precise business logic": the usual answer is that when dealing with money you shouldn't encode £1.23 as the float 1.23, but as the integer 123 (pence). You may need something more sophisticated when dividing amounts of money, but that something still shouldn't be plain floats.
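A minimal sketch of the pence approach (the names here are illustrative, not any standard API):

```python
# All arithmetic stays in integer pence; pounds appear only at display time.
price_pence = 123                      # £1.23
total_pence = 3 * price_pence          # 369, exactly
print(f"£{total_pence // 100}.{total_pence % 100:02d}")  # £3.69

# Division needs an explicit policy for leftover pennies:
share, leftover = divmod(100, 3)       # splitting £1.00 three ways
print(share, leftover)                 # 33 1 (who gets the spare penny is business logic)
```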
In answer to your edited question: C++ simply doesn't display as many digits of the number as Python does by default. It doesn't store the value any more accurately.
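You can get the same shortened display in Python by asking for it explicitly, for instance:

```python
>>> x = 2.2 * 3.0
>>> x
6.6000000000000005
>>> print(f"{x:g}")   # %g-style formatting, roughly what C++'s default iostream precision gives you
6.6
```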
"But still why do C++,C# and VB cut-out the irrelevant part of the result when printing the value while the others do not?"
Because they do. The people who implemented those languages made a different choice from those who implemented the other languages. They weighed the benefit of printing the value out in full (you never forget that floating-point arithmetic is inexact) against the downsides (it's sometimes harder to see what the effective result is, and a shortened form is an accurate representation most of the time anyway), and came to different conclusions.