MATLAB® enables you to use NVIDIA® GPUs to accelerate AI, deep learning, and other computationally intensive analytics without having to be a CUDA® programmer. Using MATLAB and Parallel Computing Toolbox™, you can:
- Use NVIDIA GPUs directly from MATLAB with over 500 built-in functions.
- Access multiple GPUs on desktop, compute clusters, and cloud using MATLAB workers and MATLAB Parallel Server™.
- Generate CUDA code directly from MATLAB for deployment to data centers, clouds, and embedded devices using GPU Coder™.
- Generate NVIDIA TensorRT™ code from MATLAB for low latency and high-throughput inference with GPU Coder.
- Deploy MATLAB AI applications to NVIDIA-enabled data centers to integrate with enterprise systems using MATLAB Production Server™.
“Our legacy code took up to 40 minutes to analyze a single wind tunnel test; by using MATLAB and a GPU, computation time is now under a minute. It took 30 minutes to get our MATLAB algorithm working on the GPU—no low-level CUDA programming was needed.”
Christopher Bahr, NASA
Using MATLAB for GPU Computing
Develop, Scale, and Deploy Deep Learning Models with MATLAB
MATLAB allows a single user to implement an end-to-end workflow to develop and train deep learning models using Deep Learning Toolbox™. You can then scale training using cloud and cluster resources using Parallel Computing Toolbox and MATLAB Parallel Server, and deploy to data centers or embedded devices using GPU Coder.
Develop Deep Learning and Other Computationally Intensive Analytics with GPUs
MATLAB is an end-to-end workflow platform for AI and deep learning development. MATLAB provides tools and apps for importing training datasets, visualizing and debugging networks, scaling the training of CNNs, and deployment.
Scale up to additional compute and GPU resources on desktop, clouds, and clusters with a single line of code.
Scale MATLAB on GPUs With Minimal Code Changes
Run MATLAB code on NVIDIA GPUs using over 500 CUDA-enabled MATLAB functions. Use GPU-enabled functions in toolboxes for applications such as deep learning, machine learning, computer vision, and signal processing. Parallel Computing Toolbox provides gpuArray, a special array type with associated functions, which lets you perform computations on CUDA-enabled NVIDIA GPUs directly from MATLAB without having to learn low-level GPU computing libraries.
Engineers can use GPU resources without having to write any additional code, so they can focus on their applications rather than performance tuning.
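The gpuArray workflow can be sketched in a few lines. This is a minimal illustration, assuming Parallel Computing Toolbox and a supported NVIDIA GPU are available:

```matlab
% Move data to the GPU, compute with a built-in function, and gather results.
A = rand(4096, 'single');   % ordinary array in host memory
G = gpuArray(A);            % copy the array to GPU memory
F = fft2(G);                % fft2 executes on the GPU for gpuArray inputs
result = gather(F);         % bring the result back to host memory
```

Because fft2 (like hundreds of other built-in functions) dispatches on the input type, the only changes from CPU code are the gpuArray and gather calls.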
Using parallel language constructs such as spmd, you can perform calculations on multiple GPUs. Training a model on multiple GPUs is a simple matter of changing a training option.
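Both approaches can be sketched briefly. This is an illustrative example, assuming a recent MATLAB release (spmdIndex replaced labindex in newer versions) and one parallel worker per GPU:

```matlab
% Run a computation on every available GPU, one worker per device.
parpool('Processes', gpuDeviceCount);   % pool sized to the number of GPUs
spmd
    gpuDevice(spmdIndex);               % bind each worker to its own GPU
    x = gpuArray.rand(1e6, 1);
    s = sum(x);                         % each worker reduces its own data
end

% For deep learning, multi-GPU training is a single option change:
options = trainingOptions('sgdm', 'ExecutionEnvironment', 'multi-gpu');
```

The trainingOptions call is then passed unchanged to the training function; no other code needs to change.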
MATLAB also lets you integrate your existing CUDA kernels into MATLAB applications without requiring any additional C programming.
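Existing kernels are wrapped with parallel.gpu.CUDAKernel. In this sketch the kernel and file names (addVectors) are hypothetical; only the PTX and CU source files are required:

```matlab
% Wrap a precompiled CUDA kernel and invoke it on gpuArray data.
k = parallel.gpu.CUDAKernel('addVectors.ptx', 'addVectors.cu');
k.ThreadBlockSize = 256;
k.GridSize = ceil(1e6 / 256);

a = gpuArray.rand(1e6, 1, 'single');
b = gpuArray.rand(1e6, 1, 'single');
c = gpuArray.zeros(1e6, 1, 'single');
c = feval(k, c, a, b);    % launch the kernel; c receives the output
```

The inputs and outputs stay on the GPU, so the kernel composes with built-in gpuArray functions without extra host transfers.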
Generate CUDA Code from MATLAB for Inference Deployment with TensorRT
Use GPU Coder to generate optimized CUDA code from MATLAB code for deep learning, embedded vision, and autonomous systems. The generated code automatically calls optimized NVIDIA CUDA libraries, including TensorRT, cuDNN, and cuBLAS, to run on NVIDIA GPUs with low latency and high throughput. Integrate the generated code into your project as source code, static libraries, or dynamic libraries, and deploy it to run on GPUs such as the NVIDIA Volta®, NVIDIA Tesla®, NVIDIA Jetson®, and NVIDIA DRIVE®.
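A typical code generation command looks like the following sketch. The entry-point function name (myPredict) and the input size are hypothetical; it assumes GPU Coder and the TensorRT support package are installed:

```matlab
% Generate a CUDA static library that targets TensorRT for inference.
cfg = coder.gpuConfig('lib');                                   % library build
cfg.DeepLearningConfig = coder.DeepLearningConfig('tensorrt');  % use TensorRT

% myPredict.m is an entry-point function wrapping a trained network.
codegen -config cfg myPredict -args {ones(224, 224, 3, 'single')}
```

The -args option specifies example input types and sizes so that the generated CUDA code has fixed interfaces suitable for integration into a larger application.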