What Is Half Precision?

This video introduces the concept of half precision, or float16, a relatively new floating-point data type. It can be used to reduce memory usage by half and has become very popular for accelerating deep learning training and inference. We also look at the benefits and tradeoffs relative to the traditional 32-bit single-precision and 64-bit double-precision data types, including in traditional control applications.

Half precision, or float16, is a relatively new floating-point data type that uses 16 bits, unlike the traditional 32-bit single-precision and 64-bit double-precision data types.

So, when you declare a variable as half in MATLAB, say the number pi, you may notice some loss of precision when compared to single or double representation as we see here.
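
As a minimal sketch of that comparison, the following MATLAB snippet casts pi to each type (the half data type is provided by Fixed-Point Designer):

    % Cast pi to each floating-point type and compare the stored values.
    % The half type requires Fixed-Point Designer.
    h = half(pi)     % displays approximately 3.1406
    s = single(pi)   % displays approximately 3.1415927
    d = pi           % 3.141592653589793

    % Error introduced by the narrower representations
    errHalf   = abs(double(h) - d)   % roughly 9.7e-4
    errSingle = abs(double(s) - d)   % roughly 8.7e-8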

The difference comes from the limited number of bits used by half precision. We have only 10 bits of precision and 5 bits for the exponent, as opposed to 23 bits of precision and 8 bits for the exponent in single. Hence the eps is much larger, and the dynamic range is also more limited.
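
These limits follow directly from the bit layout; the rough numbers below are plain MATLAB arithmetic rather than queries of the half type itself:

    % Machine epsilon: spacing from 1 to the next representable value
    epsHalf   = 2^-10    % 9.7656e-04  (10 fraction bits)
    epsSingle = 2^-23    % 1.1921e-07  (23 fraction bits)

    % Dynamic range: largest finite and smallest normal values
    maxHalf   = (2 - 2^-10) * 2^15     % 65504
    minHalf   = 2^-14                  % 6.1035e-05
    maxSingle = (2 - 2^-23) * 2^127    % about 3.4028e+38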

So why is it important? Half's recent popularity comes from its usefulness in accelerating deep learning training and inference, mainly on NVIDIA GPUs, as highlighted in the articles here. In addition, both Intel and ARM platforms also support half to accelerate computations.

The obvious benefit of using half precision is in reducing memory usage and data bandwidth by 50%, as we see here for ResNet-50. In addition, hardware vendors also provide hardware acceleration for computations in half, such as the CUDA intrinsics in the case of NVIDIA GPUs.
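
As a rough back-of-the-envelope illustration (assuming ResNet-50 has on the order of 25.6 million learnable parameters; the exact count depends on the variant):

    % Approximate weight storage for ResNet-50 (parameter count is an assumption)
    numParams = 25.6e6;

    megabytesSingle = numParams * 4 / 1e6   % ~102 MB with 32-bit single
    megabytesHalf   = numParams * 2 / 1e6   % ~51 MB with 16-bit half

    reduction = 1 - megabytesHalf/megabytesSingle   % 0.5, i.e. a 50% saving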

We are seeing traditional applications, such as powertrain control systems, do the same where you may have data in the form of lookup tables, as shown in a simple illustration here. By using half as the storage type, you are able to reduce the memory footprint of this 2D lookup table by 4x relative to double precision.
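
A sketch of that idea in MATLAB, using a made-up 2-D table rather than the one in the video: the table data is stored as half and cast back to single only when a lookup is performed.

    % Hypothetical 2-D lookup table, e.g. torque as a function of speed and load
    speedBp = single(0:100:6000);     % 61 breakpoints
    loadBp  = single(0:0.05:1);       % 21 breakpoints
    torqueTable = single(rand(numel(loadBp), numel(speedBp)));  % placeholder data

    % Storing the table as half cuts its footprint to 2 bytes per entry
    % (4x smaller than an 8-byte double-precision table)
    torqueTableHalf = half(torqueTable);

    % Cast back to single at lookup time before interpolating
    torque = interp2(speedBp, loadBp, single(torqueTableHalf), 3500, 0.4);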

However, it is important to understand the tradeoff of the limited precision and range of half precision. For instance, in the case of the deep learning network, the quantization error was on the order of 10^-4, and one has to analyze how this impacts the overall accuracy of the network.
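
The figure in the video refers to that specific network, but the same kind of quantization error can be measured for any data by round-tripping it through half, as in this sketch:

    % Measure the round-off error introduced by quantizing data to half
    x = randn(1000, 1);              % sample data in double precision
    xHalf = half(x);                 % quantize to 16 bits

    absErr = abs(double(xHalf) - x);
    maxAbsErr = max(absErr)             % typically around 1e-3 for this data
    maxRelErr = max(absErr ./ abs(x))   % usually close to 2^-11, about 4.9e-4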

This was a short introduction to half precision. Please refer to the links below to learn more about how to simulate and generate C/C++ or CUDA code from half in MATLAB and Simulink.

Related Products

  • MATLAB
  • Fixed-Point Designer
  • Simulink

Related Resources

  • Half-Precision Data Type in MATLAB
  • Floating Point Numbers
  • Fixed-Point Arithmetic
  • Construct Fixed-Point Numeric Object
  • Optimizing Lookup Tables
  • Lookup Table Optimization (2:21)
  • What Is Quantization?
