Can I use my CPU and GPU simultaneously for Numerical Simulation?
Hi All,
I am writing a bespoke Finite Element Method (FEM) scheme and have parallelised it on my CPU, but I was wondering whether there is a way to share that load between a GPU and a CPU and recombine the results at the end? This would likely matter most when I am building matrices, performing numerical integration and solving linear systems.
My other question is: is there a way to say 'if there is no GPU present, only use the CPU, and vice versa'?
I have never done any GPU computations so would welcome any tips, tricks, examples or literature.
Thanks in advance.
Answers (3)
Joss Knight
on 22 Jan 2022
canUseGPU is the easiest way to write code that diverges based on whether a supported GPU is available. A good design should need little more than a decision about whether to convert input data to a gpuArray.
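For instance, a minimal sketch of that decision (the matrices here are only illustrative placeholders for your FEM data):
K = rand(1000);            % example system matrix
f = rand(1000, 1);         % example load vector
if canUseGPU()
    K = gpuArray(K);       % move the inputs to the GPU only if one is usable
    f = gpuArray(f);
end
u = K \ f;                 % the same code path runs on CPU or GPU data
u = gather(u);             % gather simply returns ordinary CPU arrays unchanged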
To spread the load between CPU and GPU, your best bet is to take advantage of parallel language features such as parfor, parfeval, and spmd. Assuming your CPU code is already running MATLAB functions that are internally multithreaded across the CPU cores (most MATLAB functions are), you may find that it's enough to open a parpool with two workers (using parpool('local',2)) and execute your GPU code on one of them. You could, for instance, select a device on only one worker like this:
spmd
    if labindex == 1
        gpuDevice(1);   % select GPU 1 on worker 1 only; the other worker stays on the CPU
    end
end
Then you can run code on your workers using parfeval and decide whether or not to run on the GPU using any(gpuDeviceTable().DeviceSelected), which is a way to test whether a device is selected without selecting anything. So, for instance, on my pool of two workers, if I run the following:
f = parfevalOnAll(@() any(gpuDeviceTable().DeviceSelected), 1);
fetchOutputs(f)
ans =
  2×1 logical array
   1
   0
For code that doesn't use all your CPU cores, you can open a bigger pool so that more workers are running on the CPU.
The useful thing about parfeval is that it does some of the load balancing for you. The jobs running on the GPU and CPU workers will run at different speeds, but new jobs will be assigned to whichever workers become available. You may still, however, need to balance the amount of data being processed in each job (for instance, the batch size for batch processing), since GPUs and CPUs have different performance characteristics: the GPU will need more data for it to be used efficiently. You can learn all about that from the documentation.
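As a rough sketch of that dispatch pattern (assembleBatch is a hypothetical placeholder for your own assembly routine, and the batch contents are purely illustrative):
% Dispatch assembly batches with parfeval on a pool where only worker 1
% has selected a GPU (as set up in the spmd block above).
pool = gcp;
numBatches = 20;
for k = 1:numBatches
    futs(k) = parfeval(pool, @assembleBatch, 1, k);
end
batchNorms = fetchOutputs(futs);   % blocks until every batch has finished

function val = assembleBatch(batchIdx)
    % Each worker decides for itself whether it has a GPU selected.
    data = rand(500) + batchIdx;            % placeholder element data
    if any(gpuDeviceTable().DeviceSelected)
        data = gpuArray(data);              % this worker computes on the GPU
    end
    val = gather(norm(data * data'));       % placeholder "assembly" work
end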
Another way to spread work between CPU and GPU is to open a pool with a single worker and run work both on the worker and on the client MATLAB. backgroundPool is a useful tool here too, although it has more limitations (in terms of which functions are supported) than a normal process pool.
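A small sketch of that idea, assuming R2021b or later for backgroundPool, with svd standing in for your own CPU-side work:
A = rand(2000);
bf = parfeval(backgroundPool, @svd, 1, A);  % CPU work runs on a background worker
% ... the client MATLAB is free to run gpuArray work here in the meantime ...
s = fetchOutputs(bf);                       % wait for and collect the background result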
Experiment Manager is an app that makes it easy to parallelize tasks across workers and manage their execution and output graphically.
KSSV
on 22 Jan 2022
By default, code runs on the CPU unless you specify that it should run on the GPU. To run code on the GPU, you need to convert (stage) your variables into GPU memory. Read about gpuArray.
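A minimal sketch of that staging workflow:
x = rand(1e6, 1);         % ordinary CPU array
xg = gpuArray(x);         % copy (stage) the data into GPU memory
yg = sin(xg) .* exp(xg);  % elementwise maths now runs on the GPU
y = gather(yg);           % copy the result back to the CPU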
Ive J
on 22 Jan 2022
Edited: Ive J
on 22 Jan 2022
Technically, yes, you can combine both (given that you have a supported GPU and CUDA driver installed). However, whether you get anything useful out of it is another matter. First, check whether MATLAB can see a GPU at all:
gpuDevice
If this errors, MATLAB cannot find any GPU on the machine; in that case, follow the instructions to set up the GPU.
Afterwards, you can start working with gpuArray objects:
mygpuArray = gpuArray(1:20); % requires a supported GPU
class(mygpuArray)
ans =
    'gpuArray'
If you want to generalize your function to check whether a GPU is available (as you mentioned), you can implement something like:
if gpuDeviceCount > 0
    runGPUfunc();   % your GPU implementation
else
    runCPUfunc();   % your CPU fallback
end