What is the difference between scientific notation and a floating-point number?
Not much difference aside from some details. Both use: sign * significand * base^exponent
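As a minimal C sketch (not part of the original point, and assuming an IEEE 754 binary64 `double`), the standard `frexp` and `signbit` calls pull those three parts out of a value:

```c
#include <math.h>
#include <stdio.h>

int main(void) {
    double x = -6.25;        /* -6.25 == -0.78125 * 2^3 */
    int exponent;
    /* frexp splits x as significand * 2^exponent with |significand| in [0.5, 1) */
    double significand = frexp(x, &exponent);

    printf("sign: %c\n", signbit(x) ? '-' : '+');
    printf("significand: %g\n", significand);
    printf("base^exponent: 2^%d\n", exponent);
    return 0;
}
```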
Floating point (FP), as used in computer languages or specified by standards like IEEE 754, implies a limited precision and exponent range. Scientific notation (SN) has no such limitation.
FP's limited precision affects the results of various math operations and conversions. The end result is coerced (rounded) into the target floating-point format, incurring a difference from what could be represented with open-ended scientific notation.
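A small C illustration of that coercion, assuming binary64: the decimal value 0.1 has no exact binary representation, so it is rounded to the nearest representable double.

```c
#include <stdio.h>

int main(void) {
    double d = 0.1;   /* 0.1 is coerced to the nearest binary64 value */

    /* Printing extra digits exposes the rounding that open-ended
       scientific notation would not introduce. */
    printf("%.20f\n", d);              /* 0.10000000000000000555... */
    printf("%d\n", 0.1 + 0.2 == 0.3);  /* 0: the rounded results differ */
    return 0;
}
```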
The limited exponent range of floating point also produces operation results like infinity, sub-normals, or zero, whereas scientific notation imposes no such bounds.
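A short C sketch of those out-of-range results, again assuming binary64: multiplying past `DBL_MAX` overflows to infinity, and dividing far below `DBL_MIN` underflows through the sub-normals to zero.

```c
#include <float.h>
#include <stdio.h>

int main(void) {
    double big  = DBL_MAX * 2.0;    /* exponent too large: overflows to +inf */
    double tiny = DBL_MIN / 4.0;    /* below the normal range: sub-normal    */
    double gone = DBL_MIN / 1e20;   /* far below the range: underflows to 0  */

    printf("%g\n%g\n%g\n", big, tiny, gone);   /* inf, ~5.56268e-309, 0 */
    return 0;
}
```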
Floating point often does not carry the idea of significance. An FP 1.0 has the same significance as 1.00000 (both are encoded the same), whereas in SN they convey 2 vs. 6 significant figures. Some modern FP formats (the decimal ones) do employ different encodings for the same value with different significance.
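That loss of significance can be demonstrated in C for binary FP (a sketch added here, not from the original): the strings "1.0" and "1.00000" convert to bit-for-bit identical doubles, so the 2 vs. 6 significant figures distinction is gone.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void) {
    double a = strtod("1.0", NULL);
    double b = strtod("1.00000", NULL);

    /* Compare the raw encodings, not just the values. */
    printf("same bits: %d\n", memcmp(&a, &b, sizeof a) == 0);  /* prints 1 */
    return 0;
}
```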
FP has a finite set of distinct values, e.g. 2^64 encodings for a 64-bit format, whereas SN is unlimited.
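One consequence of that finite set of encodings, sketched in C under the binary64 assumption: between any two adjacent doubles there is a fixed gap, visible with `nextafter`, whereas scientific notation can always add another digit.

```c
#include <math.h>
#include <stdio.h>

int main(void) {
    double one  = 1.0;
    double next = nextafter(one, 2.0);   /* smallest double greater than 1.0 */

    printf("gap after 1.0: %g\n", next - one);   /* 2.22045e-16 (DBL_EPSILON) */
    return 0;
}
```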