I have the following MATLAB code:
clear;
clc;
syms x;
f=log(x)*sin(x^2);
a=vpa(subs(f,x,2),100)
fprintf('a=%.100f\n',a);
doublea=double(a)
fprintf('a=%.100f\n',doublea);
and the result is:
a =
-0.5245755158634217064842071630254785076113576311088295152384038229263081153172372089356742060202648499
a=-0.5245755158634216600000000000000000000000000000000000000000000000000000000000000000000000000000000000
doublea =
-0.5246
a=-0.5245755158634216600000000000000000000000000000000000000000000000000000000000000000000000000000000000
Why does fprintf show only about 16 correct decimal digits, even though I evaluated the expression with 100 digits of precision? Also, why does a keep only about 16 digits after I convert it to double? Can this cause errors in my calculation if I want to work with more than 16 digits of precision, and how can I fix it?
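For reference, here is a minimal sketch of what I think might avoid the problem, printing the symbolic value as a string with char instead of converting it to a double first, but I am not sure whether this is the right approach:

clear;
clc;
syms x;
f = log(x)*sin(x^2);
% evaluate with 100 significant digits
a = vpa(subs(f, x, 2), 100);
% char() turns the symbolic value into text, so no conversion to double happens
fprintf('a = %s\n', char(a));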