Multi-GPU training slower than a single GPU?
I just got a machine with four GTX 1080 Ti GPUs and I want to see how fast it can go. So I ran the demo DeepLearningRCNNObjectDetectionExample.m with various ExecutionEnvironment settings in trainingOptions. With 'gpu' it runs at about 0.6 s per mini-batch. With 'multi-gpu' it created four workers (one per GPU), but each mini-batch then took about 4 s. Why is the multi-GPU option roughly 7x slower than the single-GPU option? Is it a bug or what? BTW, I use MATLAB R2017a on Windows Server 2016 with the CUDA 8.0 toolkit.
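For reference, the only change between the two runs was the ExecutionEnvironment option in trainingOptions, roughly like this (a sketch; the solver and remaining settings are whatever the example script sets):

% Run 1: single GPU, ~0.6 s per mini-batch
opts = trainingOptions('sgdm', 'ExecutionEnvironment', 'gpu');

% Run 2: one worker per GPU, ~4 s per mini-batch
opts = trainingOptions('sgdm', 'ExecutionEnvironment', 'multi-gpu');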
Birju Patel on 8 May 2017
This is due to a limitation of NVIDIA's GPU-to-GPU communication on Windows. If you have the option, consider using Linux for multi-GPU training instead. If you stay on Windows, increase the MiniBatchSize from 128 to something like 1024: this gives each GPU more work per gradient step while reducing the relative communication cost, so overall utilization improves.
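For example (a sketch, not the example's exact code; the 'sgdm' solver and the learning-rate scaling are assumptions, since the R-CNN example sets its own solver options):

% Larger mini-batches amortize GPU-to-GPU communication over more work.
opts = trainingOptions('sgdm', ...
    'MiniBatchSize', 1024, ...            % up from 128
    'InitialLearnRate', 8e-3, ...         % rule of thumb: scale the rate ~8x with the batch size
    'ExecutionEnvironment', 'multi-gpu');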
See the trainingOptions reference page for additional details about MiniBatchSize:
You can read more about getting the best performance out of multi-GPU training here:
Marco Francini on 4 Sep 2017
I also have this issue, on a system with two GTX 1080 Ti GPUs. I use transfer learning (AlexNet) for my application, following https://www.mathworks.com/content/dam/mathworks/tag-team/Objects/d/Deep_Learning_in_Cloud_Whitepaper.pdf, with an ImageDatastore on an SSD.
The number of images per second the system can process during training with 2 GPUs is half of what it can do with 1 GPU! Watching the GPU load in GPU-Z, with 2 GPUs the utilization jumps between 40% and 0% continuously, while with one GPU it stays above 50%.
I use Windows 10 Enterprise with NVIDIA driver 385.41 (385.41-desktop-win10-64bit-international-whql.exe) installed.
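Utilization oscillating between roughly 40% and 0% with two GPUs often means the GPUs are being starved by the input pipeline rather than by compute. One thing worth trying is prefetching batches on background workers (a sketch; the 'DispatchInBackground' option of trainingOptions appeared in a release after R2017a, and the folder name here is hypothetical):

% Hypothetical folder layout; labels are taken from subfolder names.
imds = imageDatastore('trainingImages', ...
    'IncludeSubfolders', true, 'LabelSource', 'foldernames');

opts = trainingOptions('sgdm', ...
    'MiniBatchSize', 256, ...
    'ExecutionEnvironment', 'multi-gpu', ...
    'DispatchInBackground', true);   % read and preprocess batches on background workers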