Deep Learning HDL Toolbox™ provides functions to configure and build a custom deep learning processor and to generate custom bitstreams and custom processor IP cores. Obtain performance and resource utilization estimates for a pretrained series network running on the custom processor, and use the estimation results to optimize the processor configuration.
- Configure the custom deep learning processor
- Build and generate the custom processor IP core
- Retrieve layer-level latencies and overall network performance estimates
- Return the estimated resources used by a custom bitstream configuration
- Retrieve an optimized, network-specific deep learning processor configuration
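As a sketch of how these capabilities fit together, assuming the `dlhdl.ProcessorConfig` workflow from Deep Learning HDL Toolbox (the network variable and the workspace setup are illustrative):

```matlab
% Create a default custom deep learning processor configuration.
hPC = dlhdl.ProcessorConfig;

% snet is assumed to be a pretrained series network loaded earlier.
% Estimate layer-level latencies and overall network performance,
% then estimate the resources used by this configuration.
estimatePerformance(hPC, snet);
estimateResources(hPC);

% Retrieve an optimized, network-specific processor configuration.
hPC = optimizeConfigurationForNetwork(hPC, snet);
```

Running the estimators before and after optimization shows how the configuration changes affect latency and resource usage without synthesizing a bitstream.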
Accelerate the estimation and optimization of a custom deep learning processor by configuring the parameters of its processor modules, such as the conv processor module, in the processor configuration object.
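A minimal sketch of configuring a module parameter, assuming the `setModuleProperty` and `getModuleProperty` functions of `dlhdl.ProcessorConfig` (the property name and value shown are illustrative):

```matlab
% Create a processor configuration and adjust a conv module parameter
% before re-running the performance and resource estimators.
hPC = dlhdl.ProcessorConfig;
setModuleProperty(hPC, 'conv', 'ConvThreadNumber', 16);

% Confirm the updated value.
getModuleProperty(hPC, 'conv', 'ConvThreadNumber')
```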
Analyze deep learning network layer-level latencies and overall performance before deployment.
Expedite the time to identify a target hardware board that meets resource utilization budgets before deployment.
Rapidly prototype custom processor configurations and networks by understanding how deep learning processor parameters affect resource utilization and network performance.
Deploy a custom network that has only layers with the convolution module output format, or only layers with the fully connected module output format, by generating a resource-optimized custom bitstream that satisfies your performance and resource requirements.
Rapidly prototype and iterate on custom deep learning network performance by configuring, building, and generating custom bitstreams, which you can then deploy to target FPGA and SoC boards.
Build and generate IP for the custom deep learning processor.
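A sketch of the build step, assuming the `dlhdl.buildProcessor` function (this step requires HDL Coder™ and a supported synthesis tool; the tool name and path below are illustrative examples):

```matlab
% Point MATLAB at the synthesis tool (example tool and path).
hdlsetuptoolpath('ToolName', 'Xilinx Vivado', ...
    'ToolPath', 'C:\Xilinx\Vivado\2020.2\bin\vivado.bat');

% Build the configured custom deep learning processor and
% generate the processor IP core and bitstream.
hPC = dlhdl.ProcessorConfig;
dlhdl.buildProcessor(hPC);
```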