How to plot billions of points efficiently?

I have 3 billion 2D points to plot, which requires a lot of memory. I can therefore only do this on a server, which has about 1 TB of memory. But the server does not have a decent graphics card, so figure export is rendered on the CPU and takes more than 5 hours. This procedure needs to be repeated many times, because I need to adjust the axis scales according to the shape of the scatter plots. My desktop has a decent graphics card. Could I make use of the desktop's GPU when the data cannot fit into its memory?
Here is an example of my figures.

6 Comments

Stephen23 on 4 Jul 2018
Edited: Stephen23 on 4 Jul 2018
A few thousand points would produce the same image that you show in your question, so the rest are superfluous. Why not reduce the number of points that you plot, simply by merging those points within a certain tolerance, or subsampling?
Thank you. Could you shed light on how to merge those points? Is there any function that could do this?
@Eli4ph: I think it depends on what features of the distribution you really need to keep. For example, subsampling using indexing is trivial and very efficient, but might easily miss some extrema in the plot. Merging close points is more complex, but will keep the extrema. So the question comes down to what information you need to obtain from the plot.
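For reference, subsampling by indexing could look like the following minimal sketch (the step size is an arbitrary choice, not something from this thread):

```matlab
% Sketch: subsample by keeping every Nth point (N is arbitrary here).
step = 1000;            % keep 1 in 1000 points
Xs = X(1:step:end);     % X and Y are the full coordinate vectors
Ys = Y(1:step:end);
figure()
scatter(Xs,Ys,'filled') % fast, but isolated extrema may be dropped
```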
@Cobeldick: I would like to keep the appearance of the figures unchanged, so I want to merge close points; subsampling is not acceptable. On a linear scale this can easily be done using round(). But on a log scale I have no clean way to do this. Do you have any idea?
Stephen23 on 4 Jul 2018
Edited: Stephen23 on 4 Jul 2018
" In linear scale, it can be easily done by using round(). "
Yes, I also thought of using round, or some kind of tolerance.
"In linear scale, it can be easily done by using round(). But in log scale, I have no clean way to do this. Do you have any idea?"
Take log10 of the values (which makes the log-scale spacing linear), round to whatever precision you need, get the unique X-Y pairs, then use the returned indices to plot a subset of the data. I think with a few billion points this should be possible within the memory you have available, but you would have to try.
@Cobeldick: Thanks. Let me try it.


 Accepted Answer

Stephen23 on 4 Jul 2018
Edited: Stephen23 on 4 Jul 2018
Here is one way to subsample the data to produce almost identical plots:
% Fake data:
X = 10.^randn(2e4,1);
Y = 10.^randn(2e4,1);
figure()
scatter(X,Y,'filled')
set(gca,'xscale','log','yscale','log')
title('AllData')
% Merge data points:
Xb = log10(X);
Yb = log10(Y);
Xf = 0.05; % adjust factor to suit
Yf = 0.05; % adjust factor to suit
Xb = Xf*round(Xb/Xf);
Yb = Yf*round(Yb/Yf);
[~,idx] = unique([Xb,Yb],'rows');
figure()
scatter(X(idx),Y(idx),'filled')
set(gca,'xscale','log','yscale','log')
title('SubData')
The number of points plotted:
>> numel(X) % AllData
ans = 20000
>> numel(idx) % SubData
ans = 6653
You can also see that all extrema are still clearly visible.
PS: you might be able to save some memory by putting the merging onto one line:
[~,idx] = unique([Xf*round(log10(X)/Xf),Yf*round(log10(Y)/Yf)],'rows');
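With billions of points, the temporary log10 and rounded copies may themselves be the memory bottleneck. One possible workaround (a sketch only, with arbitrary block size and factors; it plots the rounded grid positions rather than the original points) is to accumulate the unique rounded pairs block by block:

```matlab
% Sketch: blockwise merging to limit peak temporary memory.
Xf = 0.05;  Yf = 0.05;           % adjust factors to suit
blk = 1e7;                        % points processed per block (arbitrary)
keep = zeros(0,2);                % accumulated unique rounded pairs
for k = 1:blk:numel(X)
    r = k:min(k+blk-1,numel(X));
    B = [Xf*round(log10(X(r))/Xf), Yf*round(log10(Y(r))/Yf)];
    keep = unique([keep; B],'rows');  % stays small: one row per grid cell
end
figure()
scatter(10.^keep(:,1),10.^keep(:,2),'filled')
set(gca,'xscale','log','yscale','log')
```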

More Answers (2)

Consider storing your data as a tall array and using the tall visualization capabilities introduced in release R2017b.
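A sketch of that approach, assuming the points are stored in a file that a datastore can read (the file name and the X/Y column names here are hypothetical):

```matlab
% Sketch: tall-array density plot (requires R2017b or later).
ds = datastore('points.csv');   % hypothetical file with columns X and Y
t  = tall(ds);
figure()
binscatter(t.X, t.Y)            % binscatter supports tall arrays
```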

1 Comment

Thanks. I have used the round solution and gained a considerable speedup.


Asked: on 4 Jul 2018
Commented: on 6 Jul 2018
