Use code generation options and optimizations to improve the execution speed
of the generated code. You can modify or disable dynamic memory allocation,
which can affect execution speed.
Parallelized code can be generated by using parfor-loops.
When available, take advantage of preexisting optimized C code and specialized
libraries to speed up execution.
For more information about how to optimize your code for specific conditions, see Optimization Strategies.
|Declare variable-size data|
|Fold expressions into constants in generated code|
|Control inlining of a specific function in generated code|
|Disable automatic parallelization of a for-loop|
|Call external C/C++ function|
|Abstract class for specifying the LAPACK library and LAPACKE header file for LAPACK calls in generated code|
|Abstract class for specifying the BLAS library and CBLAS header and data type information for BLAS calls in generated code|
|Abstract class for specifying an FFTW library for FFTW calls in generated code|
Generated Code Optimizations
- Optimization Strategies
Optimize the execution speed or memory usage of generated code.
- MATLAB Coder Optimizations in Generated Code
To improve the performance of generated code, the code generator uses optimizations.
- Optimize Implicit Expansion in Generated Code
Implicit expansion in the generated code is enabled by default.
- memcpy and memset Optimizations
The code generator can replace copies and assignments of consecutive array elements with memcpy or memset calls.
- Dynamic Memory Allocation and Performance
Dynamic memory allocation can slow down the execution of generated code.
- Minimize Dynamic Memory Allocation
Improve execution time by minimizing dynamic memory allocation.
- Provide Maximum Size for Variable-Size Arrays
Use techniques to help the code generator determine the upper bound for a variable-size array.
- Disable Dynamic Memory Allocation During Code Generation
Disable dynamic memory allocation in the app or at the command line.
- Set Dynamic Memory Allocation Threshold
Disable dynamic memory allocation for arrays less than a certain size.
- Optimize Dynamic Array Access
Improve the execution time of generated C code that uses dynamically allocated arrays.
- Generate Code That Uses Row-Major Array Layout
Generate C/C++ code with row elements stored contiguously in memory.
- Algorithm Acceleration Using Parallel for-Loops (parfor)
Generate MEX functions for parfor-loops.
- Classification of Variables in parfor-Loops
Variables in parfor-loops are classified as loop, sliced, broadcast, reduction, or temporary variables.
- Generate Code with Parallel for-Loops (parfor)
Generate a loop that runs in parallel on shared-memory multicore platforms.
- Specify Maximum Number of Threads in parfor-Loops
Generate a MEX function that executes loop iterations in parallel on a specific number of available cores.
- Specify Maximum Number of Threads to Run Parallel for-Loops in the Generated Code
Run parallel for-loops on a specific number of available cores in the generated code.
- Reduction Assignments in parfor-Loops
A reduction variable accumulates a value that depends on all the loop iterations together.
- Control Compilation of parfor-Loops
Treat parfor-loops as for-loops that run on a single thread.
- Install OpenMP Library on macOS Platform
Install the OpenMP library to generate parallel for-loops on the macOS platform.
- Minimize Redundant Operations in Loops
Move operations outside of loop when possible.
- Unroll for-Loops and parfor-Loops
Control loop unrolling.
- Automatically Parallelize for-Loops in Generated Code
Iterations of parallel for-loops can run simultaneously on multiple cores on the target hardware.
- Generate SIMD Code for MATLAB Functions
Improve the execution speed of the generated code using Intel® SSE and Intel AVX technology.
- Avoid Data Copies of Function Inputs in Generated Code
Generate code that passes input arguments by reference.
- Control Inlining to Fine-Tune Performance and Readability of Generated Code
Inlining eliminates the overhead of function calls but can produce larger C/C++ code and reduce code readability.
- Fold Function Calls into Constants
Reduce execution time by replacing expressions with constants in the generated code.
Numerical Edge Cases
- Disable Support for Integer Overflow or Nonfinites
Improve performance by suppressing generation of supporting code to handle integer overflow or nonfinites.
External Code Integration
- LAPACK Calls in Generated Code
LAPACK function calls improve the execution speed of code generated for certain linear algebra functions.
- BLAS Calls in Generated Code
BLAS function calls improve the execution speed of code generated for certain low-level vector and matrix operations.
- Optimize Generated Code for Fast Fourier Transform Functions
Choose the correct fast Fourier transform implementation for your workflow and target hardware.
- Integrate External/Custom Code
Improve performance by integrating your own optimized code.
- Speed Up Linear Algebra in Generated Standalone Code by Using LAPACK Calls
Generate LAPACK calls for certain linear algebra functions. Specify LAPACK library to use.
- Speed Up Matrix Operations in Generated Standalone Code by Using BLAS Calls
Generate BLAS calls for certain low-level matrix operations. Specify BLAS library to use.
- Speed Up Fast Fourier Transforms in Generated Standalone Code by Using FFTW Library Calls
Generate FFTW library calls for fast Fourier transforms. Specify the FFTW library.
- Synchronize Multithreaded Access to FFTW Planning in Generated Standalone Code
Implement FFT library callback class methods and provide supporting C code to prevent concurrent access to FFTW planning.
Diagnose errors for code generation of parfor-loops.
Troubleshoot issues that occur when the source MATLAB® code contains global or persistent variables that are reachable from the body of a parfor-loop.