Why do I get the error "CUDNN_STATUS_EXECUTION_FAILED" when training a neural network on a GPU on a server?
MathWorks Support Team
on 8 Nov 2019
Edited: MathWorks Support Team
on 25 Sep 2022
When training a neural network on a GPU on a server, it usually fails after some time with the following error message:
Error using trainNetwork (line 154)
Unexpected error calling cuDNN: CUDNN_STATUS_EXECUTION_FAILED.
Caused by:
Error using nnet.internal.cnngpu.lstmForwardTrain
Unexpected error calling cuDNN: CUDNN_STATUS_EXECUTION_FAILED.
This generally happens when another user or process launches a program on the same GPU while training is in progress.
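To confirm that another process is competing for the GPU, you can list the compute processes currently running on it with NVIDIA's `nvidia-smi` tool (a sketch; the GPU index `-i 0` is an assumption and may differ on your server):

```shell
# List all compute processes on GPU 0 with their PID, name, and memory use.
# A second entry alongside your MATLAB process indicates contention.
nvidia-smi -i 0 --query-compute-apps=pid,process_name,used_memory --format=csv
```

If this shows processes other than your own MATLAB session, GPU sharing is the likely cause of the cuDNN failure.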
Accepted Answer
MathWorks Support Team
on 25 Jul 2022
Edited: MathWorks Support Team
on 25 Sep 2022
In general, it is not a good idea to share a GPU for computations across different programs or users: doing so is very likely to cause kernel execution timeouts, out-of-memory errors, and other failures.
To prevent this, change the GPU's compute mode to "Exclusive Process", so that no other process can grab the GPU while MATLAB is performing computations.