As always, it depends on the context (sorry for all the "normally" and "usually" below). It also depends on the definition of "scientific"; below is an example of "physical", not "purely mathematical", modeling.
Normally, when using a computer for scientific / engineering calculations:
1. you have reality
2. you have an analytical mathematical model of the reality
3. to solve the analytical model, you usually have to use some numerical approximation (e.g. the finite element method, a numerical scheme for time integration, ...)
4. you solve 3. using floating point arithmetic (a minimal sketch of steps 3. and 4. follows right below)
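A minimal sketch of steps 3. and 4., assuming a toy model dy/dt = -y with y(0) = 1 (analytical solution y(t) = e^(-t)) and the explicit Euler scheme as the numerical approximation; the model and the numbers are made up for illustration:

```python
import math

def euler(y0, t_end, n_steps):
    """Explicit Euler scheme for the toy model dy/dt = -y."""
    dt = t_end / n_steps
    y = y0
    for _ in range(n_steps):
        y += dt * (-y)  # y_{k+1} = y_k + dt * f(y_k), evaluated in floating point
    return y

exact = math.exp(-1.0)  # analytical solution at t = 1
for n in (10, 100, 1000):
    approx = euler(1.0, 1.0, n)
    print(f"n = {n:4d}  euler = {approx:.8f}  error = {abs(approx - exact):.1e}")
```

Refining the discretization (more steps) drives the numerical solution toward the analytical one; here the error shrinks roughly tenfold with every tenfold increase in steps, as expected for a first-order scheme.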
Now, in the "model chain":
- you lose accuracy going from 1) reality to 2) the analytical mathematical model
  - most theories make some assumptions (neglecting relativity and using classical Newtonian mechanics, neglecting the effect of gravity, neglecting ...)
  - you don't know all the boundary and initial conditions exactly
  - you don't know all the material properties exactly
  - you don't know ... and have to make some assumptions
- you lose accuracy going from 2) the analytical to 3) the numerical model
  - by definition: the analytical solution is exact, but usually practically unachievable
  - in the limit of infinite computational resources, numerical methods usually converge to the analytical solution (up to the limit of floating point accuracy), but usually the resources are the limiting factor
- you lose some accuracy using floating point arithmetic
  - in some cases this influences the numerical solution (the first sketch after this list demonstrates it)
  - there are approaches using exact numbers, but they are (usually much) more computationally expensive (the second sketch after this list shows one)
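A minimal sketch of the floating point limit on convergence, using an arbitrary toy problem (differentiating sin(x) at x = 1 with a central difference; the function, point, and step sizes are just illustrative choices). Mathematically the error shrinks as O(h^2) for h → 0, but in floating point the subtraction cancels more and more significant digits, so the total error bottoms out and then grows again:

```python
import math

# Central difference approximation of d/dx sin(x) at x = 1.
# Truncation error shrinks as O(h^2), but the rounding error of
# sin(x + h) - sin(x - h) grows as h shrinks: there is a sweet spot.
x = 1.0
exact = math.cos(x)
for k in range(1, 13):
    h = 10.0 ** -k
    approx = (math.sin(x + h) - math.sin(x - h)) / (2.0 * h)
    print(f"h = 1e-{k:02d}  error = {abs(approx - exact):.3e}")
```

In double precision the error typically reaches its minimum around h ≈ 1e-5 ... 1e-6 and then grows again; no further refinement gets past that barrier.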
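And a minimal sketch of the exact-number approach, here using Python's `fractions.Fraction` (exact rational arithmetic). It avoids rounding entirely, but every operation manipulates arbitrary-size integers, which is where the extra computational cost comes from:

```python
from fractions import Fraction

# 0.1 has no finite binary representation, so repeated addition drifts:
total_float = sum(0.1 for _ in range(10))
print(total_float)         # 0.9999999999999999
print(total_float == 1.0)  # False

# Exact rational arithmetic has no such drift, at a computational cost:
total_exact = sum(Fraction(1, 10) for _ in range(10))
print(total_exact)         # 1
print(total_exact == 1)    # True
```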
There are a lot of trade-offs in the "model chain" (between accuracy, computational cost, the amount and quality of input data, ...).
From "practical" point of view, floating point arithmetics is not fully negligible, but usually is one of the least problems in the "model chain".