GPU arrayfun is so slow, what is going on?

Hi,
I am trying to understand what GPU arrayfun is doing. The following is a test code:
clear;clc;close all
gd=gpuDevice();
reset(gd);
N=2e3;
a=rand(60,N,'single','gpuArray');
tic;
b=sum(a,1);
wait(gd);
toc;
tic;
c=arrayfun(@(i) sum(a(:,i),1),(1:N));
wait(gd);
toc;
The results are:
Elapsed time is 0.000468 seconds.
Elapsed time is 0.584521 seconds.
What is going on here? A 1000-times difference?? I would expect similar runtimes, since GPU arrayfun is supposed to be executed in parallel on GPU cores. Did I make a stupid error in using arrayfun?
Thanks!

1 Comment

Hi, I come back with more updates. I have successfully vectorized my particle simulation and implemented it on the GPU. The speed-up is astonishing, ~10 times faster than the CPU code. Thanks to Matt J and Joss Knight for their wonderful suggestions.
Now the rest of the code (everything except neighbor search, i.e. solving the fluid equations) is so fast that the limiting part is the MATLAB function knnsearch, which uses the kd-tree algorithm running on the CPU. It takes 85% of the runtime (see the attached profiler screenshot, Gputime.png).
The function 'knnCPU_kdtree_func' wraps the MATLAB built-in knnsearch with the kd-tree algorithm running on the CPU. The other functions, which do the real math on the GPU, only consume 10% of the total time.
I wonder: is there any GPU implementation of k-nearest-neighbor search that I can download for free and use as a function call in my MATLAB code? Many thanks.
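For moderate problem sizes, a brute-force nearest-neighbor search can be written directly with gpuArray operations and no toolbox dependency. This is only a sketch, not the kd-tree algorithm knnsearch uses, and it assumes the full distance matrix fits in GPU memory; all sizes below are illustrative:

```matlab
% Brute-force kNN on the GPU (illustrative sizes; assumes the
% N-by-M distance matrix fits in GPU memory)
X = rand(1e4,3,'single','gpuArray');   % reference points (N-by-3)
Q = rand(1e3,3,'single','gpuArray');   % query points (M-by-3)
k = 8;

% Squared Euclidean distances via the expansion |x-q|^2 = |x|^2 + |q|^2 - 2 x.q
D = sum(X.^2,2) + sum(Q.^2,2).' - 2*(X*Q.');   % N-by-M, implicit expansion

[~,idx] = sort(D,1);                   % sort each column by distance
nn = idx(1:k,:);                       % k nearest reference indices per query
```

Sorting the full column is wasteful when k is small, but sort is supported on gpuArray in older releases; on a recent release one could try a partial selection instead.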


 Accepted Answer

Matt J on 11 Dec 2018
Edited: Matt J on 11 Dec 2018
What is the most efficient way to vectorize the above code
I would say, as follows,
idx_Neighbor=randi([1 N],60,N,'uint8');  % uint8 saves memory, but only holds indices up to 255; use uint16/uint32 for larger N
temp=p(idx_Neighbor);
temp=temp+p.';
ax=sum(temp .* delW_x,1);
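A runnable end-to-end version of this vectorization, with the setup mirroring the loop posted later in the thread (sizes are the thread's own; the double-precision index array is used here so that N can exceed 255):

```matlab
% Setup (mirrors the question's loop version later in the thread)
N = 2e3;
idx_Neighbor = randi([1 N],60,N);   % 60 neighbor indices per particle
p = rand(N,1);
delW_x = rand(60,N);

% Vectorized: p(idx_Neighbor) keeps the 60-by-N shape of the index array,
% and p.' broadcasts p(j) across column j via implicit expansion (R2016b+)
temp = p(idx_Neighbor) + p.';
ax = sum(temp .* delW_x, 1).';      % N-by-1 result, matches the loop's ax
```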

4 Comments

This is so brilliant! The code speed has improved ~100 times!
my naive for loop (with N=1e6): Elapsed time is 3.501800 seconds.
The above vectorized code on CPU: Elapsed time is 1.200115 seconds.
vectorized code on GPU: Elapsed time is 0.049626 seconds !!!!!!
However, it is a bit of a pity that the vectorized code consumes more memory, since temp is a big matrix. I guess this is a limitation of MATLAB that cannot be improved further. But with this kind of speed-up, it is worth it!
I like temp=p(idx_Neighbor). This is the first time I've seen that one can do this even though the index array does not have the same dimensions as the indexed array. Is there any MATLAB documentation on this? Learnt something new!
May I ask another question? How does the efficiency of the vectorized code on the GPU compare with highly optimized CUDA C code? (It would be good to know whether this is close to the ultimate efficiency possible.) Thanks!
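On the indexing question above: when a vector is indexed with a matrix, the result takes the shape of the index array, not the shape of the indexed vector. A tiny illustration (values chosen just to make the mapping visible):

```matlab
p = (10:10:50).';    % 5-by-1 vector: [10;20;30;40;50]
idx = [1 3; 2 5];    % 2-by-2 matrix of indices into p
q = p(idx)           % 2-by-2 result: [10 30; 20 50], same shape as idx
```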
Matt J on 11 Dec 2018
Edited: Matt J on 11 Dec 2018
However, it is a bit of a pity that the vectorized code consumes more memory
It shouldn't. I conserved memory by creating idx_Neighbor as uint8 instead of double.
This is the first time I've seen that one can do this even though the index array does not have the same dimensions as the indexed array. Is there any MATLAB documentation on this?
Oddly, I can't find any!
How about the efficiency of the vectorized code on the GPU compared with highly optimized CUDA C code?
You would have to implement it to find out...
Yes, but one can apply uint8 to any of the code above, and if the indices are larger one has to use uint32. But the improvement from vectorizing is so great that it is worth bearing with a bit more memory usage.
Maybe a MATLAB developer can say something about the efficiency compared to CUDA C. I hope it is really close.
Hi, which code did you use to run on the GPU? arrayfun? gpuArray?
Where or how did you insert the code?
And is there any advice you would give to make this code nicer? An "Undefined function or variable" error appears if I remove bbb = 0.
bbb = 0;
bbb = bbb + h*W_4';


More Answers (1)

You haven't called GPU arrayfun here; you've called CPU arrayfun, and inside the function you pass to arrayfun you are doing operations on the GPU. This is because none of the arguments to your arrayfun call is a gpuArray.
You could force it to use GPU arrayfun by converting your input:
c = arrayfun(@(i) sum(a(:,i),1), gpuArray(1:N));
However, you'll immediately find it errors, because sum is not supported for GPU arrayfun. Obviously this is just a toy example, but the solution here is sum(a,1), not arrayfun.

4 Comments

Hi Joss,
Thanks a lot for your reply! So the sum function runs on the GPU but the arrayfun itself runs on the CPU. I got it; no wonder it is soooo slow.
Is it true that I cannot even pass something with the colon operator to GPU arrayfun? It seems that if I try to pass a(:,i), even if the function itself is supported for GPU arrayfun, I still get the following error message: Function passed as first input argument contains unsupported or unknown function 'colon'.
I understand that sum(a,1) is the solution here, but what if I need to do something more complex? For example, suppose I want to multiply (.*) each column of a (60 by 2e3) by another column vector b (60 by 1), and then sum over the first dimension of the resulting matrix, i.e., I need to do
c=sum(a.*repmat(b,1,2e3),1);
This gives fast and correct results, but it is not memory efficient, since I need to repmat the vector b into a big matrix. If the vector b is very long, this will eventually limit the size of the problem I can solve. Is there any way to do this memory-friendly on the GPU?
Thanks!
I need to do c=sum(a.*repmat(b,1,2e3),1).
Your operation does not require repmat or even sum, but simply
c=b.'*a;
But even if you were to use sum, recent MATLAB (R2016b and later, with implicit expansion) no longer requires repmat, e.g.,
a=gpuArray.rand(60,2e3); b=a(:,1);
c=sum(a.*b,1);
will work fine.
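The three variants can be checked side by side in a short sketch (CPU here for simplicity; the same lines work on gpuArray inputs). Sizes follow the thread's example, and the results agree only up to floating-point round-off, since the matrix-vector product sums in a different order:

```matlab
a = rand(60,2e3,'single');
b = rand(60,1,'single');

c1 = sum(a .* repmat(b,1,2e3), 1);   % original: memory-heavy, materializes a 60-by-2e3 copy of b
c2 = sum(a .* b, 1);                 % implicit expansion (R2016b+): no repmat needed
c3 = b.' * a;                        % matrix-vector product: no temporary at all
max(abs(c1 - c3))                    % small; differences are round-off only
```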
Thanks Matt.
GPU arrayfun is very special, you should read the documentation and list of supported functions. It only supports element-wise functionality, so you can't do any vector operations. That means you can't index an array unless you're indexing a single element or an up-level variable, you can't call sum or any other reduction or accumulation, and you can't output anything other than a scalar. This is because your arrayfun function gets compiled into a single CUDA kernel with no inter-thread communication. So it's incredibly useful and efficient when used within its limitations.
Nearly always (in my experience) when you want to do something more complex with vector operations, you can translate your code into a series of vectorized calls to normal MATLAB matrix functions, arrayfun, and pagefun.
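For contrast with the failing call in the question, a correct GPU arrayfun call within those limitations looks like this: at least one input is a gpuArray, the function is purely element-wise (scalar in, scalar out, no indexing or reductions), and the whole thing compiles to one fused kernel. The function here is just an illustrative example:

```matlab
a = rand(60,2e3,'single','gpuArray');
b = rand(60,2e3,'single','gpuArray');

% Element-wise only: x and y are scalars inside f, so no colon
% indexing, no sum, no vector outputs
f = @(x,y) x.*y + sin(x);
c = arrayfun(f, a, b);     % runs as a single fused CUDA kernel on the GPU
```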
This is a brilliant solution! Thanks for making this happen.
However, what if I want to do something even more complex? (And this is what I actually need to do, instead of the toy examples before :)) I need to run the following code:
clear;clc;close all;
N=2e3;
idx_Neighbor=randi([1 N],60,N);
p=rand(N,1);
delW_x=rand(60,N);
ax=zeros(N,1);
tic;
for j=1:N
temp=idx_Neighbor(:,j);
inner=p(temp)+p(j);
ax(j)=sum(inner.*delW_x(:,j));
end
toc;
What is the most efficient way to vectorize the above code (i.e., without the for loop), avoiding repmat as much as possible? Thanks!


Release: R2017b
Asked: 11 Dec 2018
Commented: 20 Apr 2019