According to my textbook, and code I've found online for the same thing, checking for a leap year in Java looks like this:
if (month == 2 && day == 29 && !(year % 400 == 0 ||
        (year % 4 == 0 && year % 100 != 0))) {
    // don't do stuff because it's not a leap year
}
My question is: Why can't you just write it like so?
if (month == 2 && day == 29 && !(year % 4 == 0)) {
    // don't do stuff because it's not a leap year
}
A leap year is divisible by 4, so no matter how you cut it, if (year % 4 == 0) is not true, then the year is not a leap year and the simpler check already catches that. The extra code seems redundant. Why is my textbook telling me to write it that way? Is there some Java convention I'm not aware of?
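For reference, here's a minimal, self-contained sketch that prints what each condition says for a handful of years (the class name LeapCheck and the sample years are just ones I picked for illustration):

public class LeapCheck {
    public static void main(String[] args) {
        // Sample years chosen just to exercise both conditions
        int[] years = {1996, 1900, 2000, 2019};
        for (int year : years) {
            // Textbook version: true means "not a leap year"
            boolean textbookNotLeap = !(year % 400 == 0 ||
                    (year % 4 == 0 && year % 100 != 0));
            // Simplified version: true means "not a leap year"
            boolean simpleNotLeap = !(year % 4 == 0);
            System.out.println(year + ": textbook=" + textbookNotLeap
                    + ", simplified=" + simpleNotLeap);
        }
    }
}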