I wrote a basic program for counting out change. If I switch the data type used by the functions to Float, the penny count comes out wrong when my initial value is 16.16; if I switch it back to Double, the count is correct. I am not sure why this is happening. I know that a Double is more precise, but I didn't think that would affect my program.
import UIKit

// Breaks an amount of money down into dollars, quarters, dimes, nickels, and pennies.
func moneyCounter(initialValue: Double) -> Array<Any> {
    var money = initialValue

    // Counts how many units of the given denomination fit into the remaining amount.
    func countsTypesOfMoney(moneyValue: Double) -> String {
        var moneyTypeAmount = 0
        while money >= moneyValue {
            money -= moneyValue
            moneyTypeAmount += 1
        }
        return String(moneyTypeAmount)
    }

    return ["$" + String(initialValue) + " = " +
        countsTypesOfMoney(moneyValue: 1.00) + " dollars + " +
        countsTypesOfMoney(moneyValue: 0.25) + " quarters + " +
        countsTypesOfMoney(moneyValue: 0.10) + " dimes + " +
        countsTypesOfMoney(moneyValue: 0.05) + " nickels + " +
        countsTypesOfMoney(moneyValue: 0.01) + " pennies"]
}
print(moneyCounter(initialValue: 16.16))
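For reference, here is a quick check of how 16.16 is actually stored in each type. This is separate from my program and just uses Foundation's String(format:) to print extra digits; the exact digits may differ slightly on your machine.

import Foundation

// Neither type stores 16.16 exactly, but Float is much further off,
// so the repeated subtraction can drift enough to lose a penny.
let asFloat: Float = 16.16
let asDouble: Double = 16.16
print(String(format: "%.17f", asFloat))   // roughly 16.15999984741210938
print(String(format: "%.17f", asDouble))  // roughly 16.16000000000000014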