Problems with "sum" function

Fabio Monsalve on 8 Oct 2014
Answered: Jan on 21 Oct 2014
When I run the following loop to sum blocks of columns into a new, smaller matrix, the total sum of the elements differs from the original one, which makes no sense to me. The difference is quite small, but it is still a difference. The variable "Dif" is supposed to be 0, but it isn't:
s = 35;
r = 41;
Z = rand(s*r);                              % (s*r)-by-(s*r) random matrix
Z_X = zeros(s*r, r);
for k = 1:r
    iniciok = s*(k-1) + 1;                  % first column of block k
    fink    = iniciok + s - 1;              % last column of block k
    Z_X(:,k) = sum(Z(:, iniciok:fink), 2);  % collapse block k to one column
end
SUM_01 = sum(sum(Z));
SUM_02 = sum(sum(Z_X));
Dif = SUM_01 - SUM_02                       % expected 0, but is a tiny nonzero value
  5 Comments
Fabio Monsalve on 21 Oct 2014
Thanks for your answers. Still, it seems to me that such a "small" error is not totally insignificant; it depends on the next steps in your code. For instance, if you multiply such an error by a really large quantity (as happens in my research), the final result would be distorted.
Adam on 21 Oct 2014
It would be exactly the same as a percentage of the quantities in question, which is usually what matters. You would, for example, accept a higher error in 32-bit maths than in 8-bit maths.
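As a quick sanity check with the variables from the question (a small added illustration), the relative rather than absolute error is the thing to look at:
rel_err = abs(Dif) / abs(SUM_01)    % tiny: within a modest multiple of eps = 2.2204e-16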


Answers (3)

John D'Errico on 21 Oct 2014
Edited: John D'Errico on 21 Oct 2014
Well, actually Iain makes a misstatement of sorts. MATLAB does not do "sums" to 15 significant digits. It uses floating point arithmetic, doubles to be exact, which carry a 52-bit binary mantissa (53 bits of precision, counting the implicit leading bit). That is where the figure of 15 significant digits that Iain referred to comes from.
log10(2^53-1)
ans =
15.955
So we actually get a little more than 15 significant decimal digits that we can "trust".
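One quick way to see that limit (a small added illustration using MATLAB's built-in eps, which reports the spacing between a double and the next larger one):
eps(1)              % 2.2204e-16: the gap between 1 and the next double
1 + eps(1)/2 == 1   % true: the increment falls below the 53-bit precision
1 + eps(1) == 1     % false: eps(1) is exactly one unit in the last place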
If I add two floating point numbers together, MATLAB (or ANY engine that uses floating point arithmetic, not just MATLAB) will lose the bits (information content) that fall below that limit. Since the result cannot carry more precision than a double holds, this MUST happen. And remember that computers store your numbers internally in binary form, NOT decimal.
Unfortunately it is true that the order in which you add a sequence of floating point numbers affects the result, because those low order bits are serially lost to the bit bucket. Sorry, but this is simply a fact of life when using ANY computational tool that works with floating point numbers. For example, suppose we were working with 3 significant decimal digits. What would you expect as the result of the operation:
0.123 + 0.0123
Remember, you can only store 3 significant decimal digits in the result! Would you expect 0.135, or 0.1353? There is no magic here. Computers have limits, unless you are writing a TV show, where computers are all-knowing.
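To see the order dependence directly, here is a small added illustration (the exact difference varies from run to run, and can occasionally be zero):
x = rand(1, 1e6);                 % a million random doubles
s_forward  = sum(x);              % summed left to right
s_backward = sum(x(end:-1:1));    % the same numbers, reversed order
s_forward - s_backward            % typically a tiny nonzero value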
Welcome to the sometimes wacky world of computer arithmetic and floating point numbers. Do some reading, starting here. It will help you to understand such simple things as:
(.1 + .2) == .3
ans =
0
(.1 + .2) - .3
ans =
5.5511e-17
Sadly, there is often some divergence between mathematics and what you can compute using computer arithmetic. If your research depends on exact results, then sorry, but you need either to learn enough about numerical analysis to deal with these issues, or to learn to work in a higher precision. And sadly, working in a sufficiently high precision will be computationally expensive, as it is seriously slower.
Personally, I'd suggest learning the numerical methods one needs to avoid problems, and learning to what extent one can trust those doubles to yield the results you need. To a large extent, this is simply learning the concepts of significant digits and how to use a tolerance in your tests. (I thought people learned about significant digits at an early age in school? Maybe not any more.) An in-depth understanding of numerical methods can help in many ways too. Often one learns better ways to perform a computation, ways that are numerically stable, as opposed to the more direct, brute force solution.
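As a minimal sketch of such a tolerance test (the fixed tolerance here is an arbitrary illustrative choice; pick one that fits the scale of your own data):
a = 0.1 + 0.2;
b = 0.3;
abs(a - b) <= 1e-12                       % true: equal within a fixed tolerance
abs(a - b) <= 4*eps(max(abs(a), abs(b)))  % true: a scale-aware variant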

Jan on 21 Oct 2014
You can reduce the instability of the sum by error compensation: FEX: XSum. The methods applied there can be understood as accumulators with 128 or 192 bits. Then this sum works accurately:
x = [1e17, 1, -1e17]
sum(x)    % 0
XSum(x)   % 1
The results are stored in doubles, so the standard problem of truncation is not avoided:
XSum([1e17, 1]) == 1e17 % TRUE!
Nevertheless, it is the expected behavior that sum(x) replies 0. Follow John's suggestions carefully, because they are important. They concern e.g. the computation of the difference quotient for estimating the derivative:
d = (f(x2) - f(x1)) / (x2 - x1)
If x1 and x2 are too far away from each other, the local discretization error grows. But if they are too near each other, both differences are dominated by the truncation error. Take two values known to an accuracy of 15 digits, which agree in the first 14 digits. Then the difference has 15 digits also, but only the first one is correct, while the rest is "digital noise".
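A small added illustration of that trade-off, using sin, whose derivative cos is known exactly (the sweet spot near h = 1e-8 is typical for doubles):
x = 1;
for h = 10.^(-1:-2:-15)
    d = (sin(x + h) - sin(x)) / h;    % forward difference quotient
    fprintf('h = %8.0e   error = %8.1e\n', h, abs(d - cos(x)));
end
The error shrinks with h at first (discretization error), then grows again once the subtraction sin(x+h) - sin(x) is dominated by truncation noise.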

Iain on 21 Oct 2014
It is a tolerancing problem.
By default, MATLAB does sums to the 15th-ish significant figure.
Small numbers being added to big numbers often lose a few digits off the tail end. These errors are usually ignored, as they're small enough to say "aah, 15th significant figure, that's beyond the 3/5/7 I normally care about, and is therefore insignificant!"
However, you are correct that sometimes that is hard to avoid, so there are methods for capturing the exact level of error so that it can be added back in at the end; one is sketched below.
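One such method is compensated (Kahan) summation. Here is a minimal sketch (kahan_sum is just an illustrative name, not a built-in):
function s = kahan_sum(x)
    s = 0;                  % running sum
    c = 0;                  % compensation: the rounding error lost so far
    for k = 1:numel(x)
        y = x(k) - c;       % correct the next addend by the lost error
        t = s + y;          % low-order bits of y may be lost here
        c = (t - s) - y;    % algebraically zero; numerically the lost part
        s = t;              % carry the compensated sum forward
    end
end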
  1 Comment
Pierre Benoit on 21 Oct 2014
Also, you could use HPF from John D'Errico if you really want more precision.

