I am having trouble returning a precise value from a method that subtracts two doubles. I am writing "bank" software, and in this method I calculate the current interest rate, which starts at 30% and is reduced by 2% whenever 5 new accounts are added.
Here is my code:
```java
import java.util.ArrayList;

public class BankAccount {
    // static properties
    private static double interest = 0.3;
    private static ArrayList<BankAccount> accounts = new ArrayList<>();

    public static double getInterestRate() {
        int y = accounts.size();
        double x = interest;
        if (y != 0 && y % 5 == 0) {
            x -= 0.02;
        }
        return x;
    }
}
```
After I add 5 accounts, my method should return an interest rate of 28% (0.28), but it returns 27.999999999999997% (0.27999999999999997). I understand this is due to floating-point arithmetic, but I'm unsure how to resolve it. I can't pass any of my test cases, which all expect exactly 28%, 26%, etc.
I'd like to use the BigDecimal class, but this is a homework assignment that is graded automatically: we are given a skeleton code template so that we all use the correct variables and types, so I can't change the field's type.
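One workaround I'm considering is rounding the result to the nearest hundredth before returning it, since `28 / 100.0` produces the exact same double as the literal `0.28`, so an exact comparison in the tests would pass. Here is a minimal sketch, assuming the skeleton lets me change the method body and that the 2% reduction is cumulative per full group of 5 accounts (which the expected values 28%, 26%, ... suggest); the `addAccount` helper is hypothetical, added only to make the sketch self-contained:

```java
import java.util.ArrayList;

public class BankAccount {
    private static double interest = 0.3;
    private static ArrayList<BankAccount> accounts = new ArrayList<>();

    // Hypothetical helper so the sketch is runnable on its own;
    // the real skeleton presumably has its own way to add accounts.
    public static void addAccount() {
        accounts.add(new BankAccount());
    }

    public static double getInterestRate() {
        // Subtract 2% for every full group of 5 accounts (integer division).
        double x = interest - 0.02 * (accounts.size() / 5);
        // Scale to whole percent, round to the nearest long, divide once:
        // e.g. 0.27999999999999997 * 100 rounds to 28, and 28 / 100.0
        // is bit-for-bit equal to the double literal 0.28.
        return Math.round(x * 100) / 100.0;
    }
}
```

I'm not sure if this counts as acceptable for the autograder, but it keeps `interest` as a `double`, which the skeleton requires.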