The double precision number which prints as 6.2284800183620495E18 has the representation of
0x43D59C0064EA45A9 = 01000011 11010101 10011100 00000000 01100100 11101010 01000101 10101001
(play with this converter)
This means it is 1.0101100111000000000001100100111010100100010110101001₂ × 2^(10000111101₂ − 1023₁₀) (see wikipedia for how it works).
The exponent field 10000111101₂ is 1085, so the unbiased exponent is 1085 − 1023 = 62; with the implicit leading 1, the 53-bit significand is 6082500017931689. The value therefore works out as 6082500017931689 × 2^(62 − 52) = 6082500017931689 × 1024 = 6228480018362049536, which is the answer you get.
So the answer given by the conversion to long is correct - 6228480018362049536 is the decimal representation of a 64 bit integer value equivalent to the given double precision value.
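This decomposition can be checked in Java with Double.doubleToLongBits; a small sketch (the class name is mine, the constants come from the numbers above):

```java
public class DecodeDouble {
    public static void main(String[] args) {
        double d = 6.2284800183620495E18;

        long bits = Double.doubleToLongBits(d);          // raw IEEE 754 bit pattern
        System.out.println(Long.toHexString(bits));      // 43d59c0064ea45a9

        long fraction = bits & 0x000FFFFFFFFFFFFFL;      // low 52 bits
        int exponent = (int) ((bits >>> 52) & 0x7FF);    // 11 exponent bits, biased by 1023

        long significand = (1L << 52) | fraction;        // add the implicit leading 1
        System.out.println(significand);                 // 6082500017931689
        System.out.println(exponent - 1023);             // 62

        // value = significand * 2^(62 - 52) = significand * 1024
        System.out.println(significand * 1024);          // 6228480018362049536
        System.out.println((long) d);                    // 6228480018362049536
    }
}
```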
Which raises the question of why the decimal representation of the number is given as 6.2284800183620495E18.
This is because each floating point value represents not a point on the number line, but a range - a one-bit change (called a unit in the last place, or ulp) changes the value by 1024 at this magnitude.
Under the default round-to-nearest-even, the numbers from 6228480018362049536 − 511 to 6228480018362049536 + 511 all correspond to this same double precision value; the exact halfway points at ±512 round to the neighbouring doubles, whose significands end in an even bit.
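A quick way to see this window, assuming the round-to-nearest-even rounding that Java applies when converting long to double (the class name is hypothetical):

```java
public class UlpWindow {
    public static void main(String[] args) {
        double d = 6228480018362049536d;   // the double from above, exactly
        long center = 6228480018362049536L;

        System.out.println(Math.ulp(d));                   // 1024.0 - spacing of doubles here

        // longs inside the window all convert to the same double...
        System.out.println((double) (center - 511) == d);  // true
        System.out.println((double) (center + 511) == d);  // true

        // ...while the exact halfway points round to the even-mantissa neighbours
        System.out.println((double) (center - 512) == d);  // false
        System.out.println((double) (center + 512) == d);  // false
    }
}
```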
Java prints a decimal value with just enough digits to fall into this range and no other -
6228480018362049500 - and writes it in scientific notation as 6.2284800183620495E18. (Since JDK 19, Double.toString finds the genuinely shortest such decimal and may print fewer digits.)
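The guarantee behind this is that the printed string parses back to the exact same double. The precise digit string depends on the JDK version (JDK 19 reworked Double.toString), so the sketch below only demonstrates the round trip:

```java
public class RoundTrip {
    public static void main(String[] args) {
        double d = 6228480018362049536d;

        // Double.toString emits enough digits that parsing the string
        // recovers exactly the same double value
        String s = Double.toString(d);
        System.out.println(s);                           // e.g. 6.2284800183620495E18
        System.out.println(Double.parseDouble(s) == d);  // true
    }
}
```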
If you don't want the approximations, follow Basil Bourque's advice to use arbitrary precision, or (if your use case allows it) use a fixed point or integer representation based on long.
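For completeness, a sketch of both alternatives - BigDecimal for the exact value, long for exact integer arithmetic (variable names are mine):

```java
import java.math.BigDecimal;

public class ExactAlternatives {
    public static void main(String[] args) {
        double d = 6.2284800183620495E18;

        // the BigDecimal(double) constructor translates the double's exact
        // binary value, with no decimal rounding
        System.out.println(new BigDecimal(d));   // 6228480018362049536

        // or keep the quantity as a 64-bit integer from the start
        long exact = 6228480018362049536L;
        System.out.println(exact);               // 6228480018362049536
    }
}
```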