I'm rewriting some scientific code that someone else wrote a while back, and throughout the code constants are always declared as:
final double value = 2.0000000000D;
with an apparently arbitrary number of supposedly significant digits attached. I'm 95% sure that declaring constants this way accomplishes nothing, and that writing the value as:
final double value = 2.0;
would be exactly the same. But just to be sure, I'm asking SO: does declaring a constant in this fashion make any (meaningful) difference, or is this likely just a relic from another language where it might have mattered?
EDIT: In response to an answer below: Yes, I verified that the result is the same in this particular instance before asking this question. Maybe I should have been more specific. Is there ever an instance where we would expect that adding additional "significant" digits would in fact produce a different value? I suppose it's possible if we get to really large or really small values where floating-point numbers start to have resolution issues?
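For concreteness, here is the sort of check I have in mind (Java assumed from the D suffix; the class name is just for illustration):

public class LiteralDemo {
    public static void main(String[] args) {
        // Trailing zeros in a literal never change the parsed value:
        System.out.println(2.0000000000D == 2.0);           // true

        // Extra non-zero digits *within* double's ~15-17 significant
        // decimal digits do produce a different value:
        System.out.println(2.000000001 == 2.0);             // false

        // ...but digits beyond that precision are rounded away when the
        // literal is parsed, so they make no difference:
        System.out.println(2.0000000000000001 == 2.0);      // true

        // The raw bit patterns confirm the two literals are the same double:
        System.out.println(Double.doubleToLongBits(2.0000000000D)
                           == Double.doubleToLongBits(2.0)); // true
    }
}

So my real question is whether there is ever a case beyond the last one above, i.e. whether the longhand style could ever matter in practice.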