Pre-load data on multiple GPUs for parfor loop
Massimiliano Zanoli on 15 Mar 2021
Commented: Edric Ellis on 17 Mar 2021
I have two GPUs with 6 GB of RAM each.
I need to perform a particle swarm optimization whose cost function is very well suited to GPU computation, but the data arrays are huge (~4 GB).
I have code that successfully works using one GPU and no parallelization. The code pre-loads the arrays into the GPU (which is time consuming) and subsequently enters the optimization process, where the cost function is quickly evaluated.
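In rough outline, the working single-GPU version looks like this (costFunction here is a simplified stand-in for my real cost function and its extra arguments):
% pre-load once (slow): ~4 GB copied to the GPU
A = gpuArray(rand(1024, 1024, 512));
cost = zeros(N, 1);
% optimization loop (fast): every evaluation reuses the resident gpuArray
for n = 1 : N
    cost(n) = costFunction(A, n);
end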
Now, I'd like to exploit the second GPU, but for that I need to start a parallel pool with 2 workers, and assign a GPU to each. The problem is the pre-loading of the arrays.
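Concretely, the setup I have in mind is something like this (the explicit gpuDevice(labindex) call is only there to make the worker-to-GPU mapping obvious):
% one process worker per GPU
parpool('local', 2);
spmd
    % worker 1 selects GPU 1, worker 2 selects GPU 2
    gpuDevice(labindex);
end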
I have tried several options, including those suggested in MATLAB's blogs, documentation, and Answers, but none of them work.
For instance this:
% create a 4 GB array
A = rand(1024, 1024, 512);
spmd
    % copies the array to each worker and loads it onto that worker's GPU;
    % the result is a 2x1 Composite
    A = gpuArray(A);
end
% for each potential solution to evaluate
parfor n = 1 : N
    % evaluate the cost function (*)
    <..> = costFunction(A, n, ..);
end
will throw an error at (*) because "Composites are not supported in parfor loops". Since Particle Swarm uses a parfor, I cannot go this way.
The only other way to pre-load data onto the workers is via a parallel.pool.Constant, but:
- it does not work meaningfully with gpuArray (at least not in my version, R2020a).
- it is a wrapper that is not well integrated with the MATLAB language: every reference has to become <variable>.Value, which forces you to maintain two versions of the code, one parallelized and one not.
In particular:
A = rand(1024, 1024, 512);
A = parallel.pool.Constant(A);
spmd
    % will turn A back into a Composite, defeating the purpose
    A = gpuArray(A.Value);
end
and:
A = rand(1024, 1024, 512);
% load onto one of the GPUs from the main thread, occupying 4 GB of its RAM
A = gpuArray(A);
% copy A to each worker and load it on its respective GPU (*)
A = parallel.pool.Constant(A);
will throw an out-of-memory error at (*) because one GPU already has 4 GB occupied by the main thread. It also suffers from a memory leak: the original gpuArray from the main thread stays on the GPU even after all references to it have gone.
Is there a way to pre-load massive arrays into each GPU and run parfor evaluations on them? Maybe something in the new releases?
Accepted Answer
Edric Ellis on 16 Mar 2021
You've got a number of options here depending on whether you can build the value of A directly on the workers. The simplest case is where you can do that, and then you'd do this:
Ac = parallel.pool.Constant(@() rand(1024,1024,512,'gpuArray'));
parfor ...
    doStuff(Ac.Value);
end
Things are a little trickier if the value of A must be calculated on the client. But it should work to do this:
A = rand(1024,1024,512);
Ac = parallel.pool.Constant(@() gpuArray(A));
In that case, the CPU value of A is embedded in the anonymous function handle workspace, and it gets pushed to the GPU only on the workers when the function handle is evaluated.
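For completeness, the loop body then looks just like the first case; costFunction here is a placeholder borrowed from the question, and gather simply brings the result back to the CPU before it is returned to the client:
parfor n = 1 : N
    % Ac.Value is the gpuArray built once per worker when the function
    % handle is first evaluated, and reused on subsequent iterations
    out(n) = gather(costFunction(Ac.Value, n));
end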
2 Comments
Edric Ellis on 17 Mar 2021
One thing to note about Composite values: they do not release worker memory until the first spmd block after they are cleared (this is to avoid excessive client-worker communication). So, if you do
clear A
spmd, end
you should see the memory returned. (In your code, this would be CPU memory).