Discrepancies between single and double precision sum over time
I attempted to isolate and better understand an issue occurring in a more complex model. It came down to an internal "clock" we use to count elapsed time. For a sample time of 0.05, the clock implementation is simply a running sum coupled with a unit delay. However, we noticed a considerable cumulative error between the single- and double-precision versions. The magnitude of the differences seen on the scope appears far larger than what the precision gap between single and double alone would suggest: there is a 1 s drift after merely 2100 seconds.
Something else that confuses me is why the difference changes direction (at t≈1000 s and t≈4100 s). Any insights would be appreciated.
Accepted Answer
Jan
on 16 Dec 2021
This is the expected behaviour. Remember that summation is a numerically unstable operation.
d = zeros(1, 1e7);                % double-precision accumulator
s = zeros(1, 1e7, 'single');      % single-precision accumulator
for k = 2:1e7
    d(k) = d(k-1) + 0.05;         % accumulate the sample time in double
    s(k) = s(k-1) + single(0.05); % ... and in single
end
plot(d - s)                       % drift between the two running sums
The rounding errors accumulate. Single precision carries about 7 significant decimal digits, so the magnitude of the rounding effects is in the expected range.
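As a rough sanity check (my own back-of-the-envelope bound, not from the original answer): after t = 2100 s the single-precision sum has performed 42000 additions, and each one can be off by up to half the spacing of representable singles at the current magnitude.
t  = 2100;              % elapsed simulated time in seconds
dt = 0.05;              % sample time
n  = t/dt               % 42000 additions so far
eps(single(t))          % spacing of singles near 2100: about 2.4e-4
n * eps(single(t))/2    % worst-case one-sided drift: about 5 s
The observed drift of 1 s after 2100 s sits comfortably inside that bound.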
5 Comments
Jan
on 17 Dec 2021
Edited: Jan
on 17 Dec 2021
Remember that the values have limited precision.
single(1e7) + single(0.05) - single(1e7)
This is 0, not 0.05, because in single precision the values 1e7 and 1e7+0.05 are represented by the same number.
single(1e7) - single(1e7) + single(0.05)
This returns 0.05, because the subtraction on the left yields 0, so the 0.05 is no longer rounded away.
Beyond a certain point, adding 0.05 does not change the value at all:
single(1048576) + single(0.05) == single(1048576) % TRUE
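If the accumulation has to stay in single precision, compensated (Kahan) summation is a common mitigation; this is a minimal sketch of the idea, not something from the original thread:
s = single(0);                % running sum
c = single(0);                % compensation for lost low-order bits
for k = 1:1e7
    y = single(0.05) - c;     % addend corrected by what was lost before
    t = s + y;                % low-order bits of y may be lost here
    c = (t - s) - y;          % recover the lost part for the next step
    s = t;
end
double(s) - 1e7*0.05          % residual error stays well below 1
The remaining error is then dominated by the fact that single(0.05) itself is not exactly 0.05, not by the summation.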
Stephen23
on 17 Dec 2021
Edited: Stephen23
on 17 Dec 2021
"The part I don't get is why the rounding error does not accumulate monotonically?"
Why should it?
The error (the difference between the decimal values you probably expect and the actual binary values) is not constant; it depends on both addends, one of which is continuously changing. So the binary amount that is actually added changes at every step, because the values being added change.
If there were a simple monotonic linear relationship, then all binary floating-point error could be compensated for with a trivial offset after any calculation. But that is definitely not the case.
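To make that concrete (my own illustration, not part of the original comment), the signed rounding error of each single-precision addition can be inspected step by step, and its sign varies:
s = single(1000);                             % 1000 is exactly representable
for k = 1:5
    exact = double(s) + double(single(0.05)); % exact result of this step
    s = s + single(0.05);                     % rounded single addition
    fprintf('step %d: error = %+g\n', k, double(s) - exact)
end
Whether a given addition rounds up or down depends on where the exact result falls between two adjacent representable singles, which is why the accumulated difference can change direction.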