I'm wondering why the example in the book we're using mixes decimal and double, casting to decimal where necessary. Wouldn't it be simpler to make everything a decimal and avoid the casts altogether?
Here's the main chunk of code that I'm curious about.
decimal amount;
decimal principal = 1000;
double rate = 0.05;
for (int year = 1; year <= 10; year++)
{
    amount = principal * ((decimal) Math.Pow(1.0 + rate, year));
}
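To be concrete, here is roughly the all-decimal version I have in mind. This is just my own sketch, not the book's code: since Math.Pow only works with doubles, I compound by multiplying once per year instead of raising to a power, and the Console.WriteLine is only there so the snippet does something observable.
using System;

class CompoundInterestAllDecimal
{
    static void Main()
    {
        decimal amount = 1000m;   // principal
        decimal rate = 0.05m;     // 5% annual rate as a decimal literal

        for (int year = 1; year <= 10; year++)
        {
            // Compound one year at a time; no Math.Pow and no cast needed.
            amount *= 1m + rate;
            Console.WriteLine($"Year {year}: {amount}");
        }
    }
}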
Are there any performance or accuracy issues that I'm overlooking?