First of all, the specific problem you cite is one that vexes me greatly. You almost never want to do a computation in integer arithmetic and then convert the result to a floating point type, because by the time the conversion happens, the arithmetic has already been done entirely in integers, truncation and all. I wish the C# compiler warned about this one; I see it all the time.
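The original discusses C#, but Java's integer division behaves the same way, so here is a minimal sketch of the trap (the variable names are mine):

```java
public class IntDivision {
    public static void main(String[] args) {
        int correct = 1, total = 3;

        // Bug: correct / total is *integer* division, which truncates
        // to 0 before the conversion to double ever happens.
        double wrong = correct / total;

        // Fix: convert at least one operand first, so the division
        // itself is carried out in floating point.
        double right = (double) correct / total;

        System.out.println(wrong);  // 0.0
        System.out.println(right);  // 0.3333333333333333
    }
}
```

The conversion on the left-hand side does nothing to rescue the right-hand side; the type of the expression is decided before the assignment is considered.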
Second, the reason to prefer integer or decimal arithmetic to double arithmetic is that a double can only represent with perfect accuracy a fraction whose denominator is a power of two. When you say 0.1 in a double, you don't get 1/10, because 1/10 is not a fraction whose denominator is any power of two. You get the fraction that is closest to 1/10 that does have a power of two in the denominator.
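You can watch this happen directly; the same holds in Java, where `BigDecimal`'s double constructor exposes the exact fraction a `double` literal actually stores:

```java
import java.math.BigDecimal;

public class PointOne {
    public static void main(String[] args) {
        double sum = 0.1 + 0.2;
        System.out.println(sum == 0.3);  // false
        System.out.println(sum);         // 0.30000000000000004

        // The exact power-of-two fraction the literal 0.1 stores,
        // printed in decimal; it begins 0.1000000000000000055...
        System.out.println(new BigDecimal(0.1));
    }
}
```

Neither 0.1 nor 0.2 is exactly representable, so their sum carries the accumulated error, and it need not equal the (also inexact) representation of 0.3.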
This usually is "close enough", right up until it isn't. It is particularly nasty when you have tiny errors close to hard cutoffs. You want to say, for instance, that a student must have a 2.4 GPA in order to meet some condition, and the computations you do involving fractions with powers of two in the denominator just happen to work out to 2.39999999999999999956...
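The GPA case would need specific grade data, but the same cliff shows up in any accumulation against a hard cutoff. A sketch in Java, assuming ten 0.1-point increments that "should" total exactly 1.0:

```java
public class HardCutoff {
    public static void main(String[] args) {
        // Ten increments of 0.1 "should" reach exactly 1.0.
        double total = 0.0;
        for (int i = 0; i < 10; i++) {
            total += 0.1;
        }
        System.out.println(total);         // 0.9999999999999999
        System.out.println(total >= 1.0);  // false: the cutoff check fails
    }
}
```

Every addend is slightly off, and every intermediate sum is rounded again, so the running total lands a hair below the threshold and the comparison silently does the wrong thing.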
Now, you do not necessarily get away from these problems with decimal arithmetic; decimal arithmetic has the same restriction: it can only represent numbers that are fractions with powers of ten in the denominator. You try to represent 1/3, and you're going to get a small but non-zero error on every computation.
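Java's `BigDecimal` (the closest Java analogue of C#'s `decimal`) makes this restriction explicit: dividing one by three with no stated precision throws, because there is no exact answer to give.

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

public class OneThird {
    public static void main(String[] args) {
        // divide() with no scale would throw ArithmeticException here,
        // because 1/3 has no terminating decimal expansion; we must
        // choose a scale and rounding mode, i.e., accept a small error.
        BigDecimal oneThird =
            BigDecimal.ONE.divide(new BigDecimal(3), 20, RoundingMode.HALF_UP);
        System.out.println(oneThird);  // 0.33333333333333333333

        // Multiplying back by 3 exposes the error:
        System.out.println(oneThird.multiply(new BigDecimal(3)));
        // 0.99999999999999999999
    }
}
```

The error is tiny, but it is there on every computation involving a fraction whose denominator is not a power of ten.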
Thus the standard advice is: if you are doing computations where you expect exact results involving fractions with powers of ten in the denominator, such as financial computations, use decimal, or do the computation entirely in integers, scaled appropriately. If you're doing computations that involve physical quantities, where there is no inherent "base" to the computations, then use "double".
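To make the financial case concrete, here is a sketch in Java of ten ten-cent items summed three ways; the scenario and names are mine:

```java
import java.math.BigDecimal;

public class Money {
    public static void main(String[] args) {
        // double: ten items at $0.10 each, off by a hair.
        double d = 0.0;
        for (int i = 0; i < 10; i++) d += 0.10;
        System.out.println(d == 1.00);  // false

        // Scaled integers: work in cents, format only for display.
        long cents = 0;
        for (int i = 0; i < 10; i++) cents += 10;
        System.out.println(cents == 100);  // true

        // Decimal arithmetic (BigDecimal here, decimal in C#):
        // the string constructor gives exactly ten hundredths.
        BigDecimal b = BigDecimal.ZERO;
        for (int i = 0; i < 10; i++) b = b.add(new BigDecimal("0.10"));
        System.out.println(b.compareTo(new BigDecimal("1.00")) == 0);  // true
    }
}
```

Both exact approaches work because a tenth of a dollar has a power of ten in the denominator; the double version fails because it does not have a power of two there.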
So why use integer over decimal or vice versa? Integer arithmetic can be smaller and faster; decimals take more time and space. But ultimately you should not worry about these small performance differences: pick the data type that most accurately reflects the mathematical domain you are working in, and use it.