Get Ready for AI with MATLAB

AI is everywhere. It's not just powering applications like smart assistants, machine translation, and automated driving; it's also giving engineers and scientists a set of techniques for tackling common tasks in new ways. And yet, while many organizations recognize the value and potential of AI, few are using it: Gartner's recent survey of 3,000 companies indicated that, of the 50% that are beginning to plan for AI, only 4% have actually implemented it.1

Many organizations are deterred by what they see as the overwhelming challenges of implementing AI:

  • A belief that to do AI, you need to be an expert in data science
  • A concern that developing an AI system is time-consuming and expensive
  • A lack of access to good-quality, labeled data
  • The cost and complexity of integrating AI into existing algorithms and systems

Three real-world examples will show how MATLAB® makes it easy to get started with AI. MATLAB provides AI capabilities similar to those of dedicated AI tools like Caffe and TensorFlow—and more importantly, only MATLAB lets you integrate AI into the complete workflow for developing a fully engineered system.

An AI model is just one part of the complete workflow for developing a fully engineered system.


What is AI and how is it done?

The definition of AI coined in the 1950s and still in use is "the capability of a machine to imitate intelligent human behavior." AI gets more interesting when the machine can not just imitate, but match or even exceed human performance—it gives us the opportunity to offload repetitive tasks, or even to get computers to do jobs more safely and efficiently than we can.

Practically speaking, when people think of AI today, they almost always mean machine learning: training a machine to learn a desired behavior.

In traditional programming, you'd write a program that processes data to produce a desired output.
With machine learning, the steps are reversed: you feed in data and the desired output, and the computer writes the program for you. Machine learning programs (or more accurately, models) are largely black boxes. They can generate the desired output, but they aren't composed of a sequence of operations like a traditional program or algorithm.

There's a lot of excitement today about a specialized type of machine learning called deep learning. Deep learning uses neural networks. (The term "deep" refers to the number of layers in the network—the more layers, the deeper the network.) One key advantage of deep learning is that it removes the need for manual data processing steps and extensive domain knowledge required for other techniques.

To put the key terms into context, think of machine learning and deep learning as ways of achieving AI—they are the most common techniques applied today.


Our first example shows how a scientist learned and applied machine learning with MATLAB to tackle a problem that she could not solve in any other way.

Using Machine Learning to Detect Snack Food Crispiness


Solange Sanahuja, a food scientist, needed to develop a repeatable process for determining the crispiness of snack food. She tried developing physical models of snacks, but that didn't work. Other scientists had used signal processing to analyze the sound of crunching snacks, but nobody could develop a process that could reliably distinguish perfectly fresh snacks from slightly stale ones.

Dr. Sanahuja saw that MATLAB supports machine learning, and decided to give it a try. She ran hundreds of experiments to record the sound and force of crushing snacks at varying freshness levels, and recorded freshness ratings by trained tasters.

She used her domain expertise as a food scientist to identify features from the force measurements, computing values like hardness and fracturability. She then tried several different approaches to extract additional features from the sound recordings, eventually finding that octave analysis worked best.
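Octave analysis on an audio recording can be performed in a few lines of MATLAB. The sketch below is illustrative, not Dr. Sanahuja's actual code: the file name `crunch.wav` is hypothetical, and `poctave` requires Signal Processing Toolbox (R2017b or later).

```matlab
% Hedged sketch: computing octave-band features from a crunch recording.
% "crunch.wav" is a hypothetical mono recording of a snack being crushed.
[x, fs] = audioread('crunch.wav');                  % load the sound recording
[p, cf] = poctave(x, fs, 'BandsPerOctave', 1);      % power in each octave band
soundFeatures = 10*log10(p).';                      % band powers in dB, as a feature row
```

Each octave band's power becomes one feature, which can then be combined with the force-based features (hardness, fracturability) to form the input to a classifier.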

The next step was new to her: developing a model based on the selected features. Finding the right model can be difficult because there are so many options. Instead of manually trying out each one, Dr. Sanahuja used the Classification Learner app in Statistics and Machine Learning Toolbox™ to train and compare many model types automatically.

She first selected the data for training the model. The app then generated a list of candidate models, trained each one, and produced visualizations showing the overall accuracy of each.

Based on these results, Dr. Sanahuja selected a quadratic support vector machine as the best model for the project. The model is about 90–95% accurate, and is even able to detect small differences in how we perceive crispiness.
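The Classification Learner app can export the winning model as code. A minimal sketch of what training a quadratic SVM looks like at the command line is shown below; the table name `snacks` and response variable `Freshness` are hypothetical stand-ins for the project's actual data.

```matlab
% Hedged sketch: training a quadratic SVM on a (hypothetical) feature table
% "snacks" whose columns hold hardness, fracturability, octave-band powers,
% and a Freshness label. Requires Statistics and Machine Learning Toolbox.
mdl = fitcsvm(snacks, 'Freshness', ...
    'KernelFunction', 'polynomial', 'PolynomialOrder', 2, ...  % quadratic kernel
    'Standardize', true);

cvmdl = crossval(mdl, 'KFold', 5);      % 5-fold cross-validation
accuracy = 1 - kfoldLoss(cvmdl);        % estimated classification accuracy
```

Cross-validated accuracy is the same measure the app reports, so a hand-trained model can be compared directly against the app's leaderboard.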

In the next example, engineers use deep learning to tackle a complex image recognition problem. Training a deep learning network from scratch takes a lot of data. But by using transfer learning, these engineers were able to apply deep learning even with a modest amount of data.

Efficient Tunnel Excavation with Deep Learning

Central Artery Project in Boston

The Japanese construction company Obayashi Corporation uses an excavation technique called the New Austrian Tunneling Method. In this approach, geologists monitor the strength of a tunnel face as excavation progresses, assessing metrics such as the spacing between fractures. While this method reduces construction costs, it has several limitations. It can take hours to analyze one site, and so the analysis can only be performed occasionally. In addition, there is a shortage of geologists skilled in this technique.

Obayashi decided to address these limitations with deep learning—they would train a deep learning network to automatically recognize the various metrics based on images of the tunnel face. Their challenge was in getting enough data. The best deep learning networks have been trained on millions of images, but Obayashi had just 70.

Obayashi geologists first labeled three regions of each of the 70 images, recording the values of metrics like weathering alteration and fracture state for each one. Then they divided these labeled regions into smaller images, ultimately yielding about 3,000 labeled images. Since training a deep learning network from scratch requires a lot of time, specialized expertise, and many times more images, they used transfer learning to create a custom network based on AlexNet, a pre-trained deep learning network.

AlexNet has been trained on millions of images to recognize common objects like food, household items, and animals, but, of course, it knows nothing about interpreting geological conditions from pictures of a tunnel face. With transfer learning, the Obayashi engineers retrained just a small portion of AlexNet to estimate the geological metrics from images of the tunnel face.
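In MATLAB, this transfer-learning workflow amounts to replacing AlexNet's final layers and retraining. The sketch below follows the standard Deep Learning Toolbox pattern rather than Obayashi's actual code; the folder name `tunnelFacePatches` and the training options are illustrative assumptions.

```matlab
% Hedged sketch of transfer learning with AlexNet. Requires Deep Learning
% Toolbox and the AlexNet support package. "tunnelFacePatches" is a
% hypothetical folder with one subfolder of labeled images per class.
net = alexnet;                                       % pre-trained network
imds = imageDatastore('tunnelFacePatches', ...
    'IncludeSubfolders', true, 'LabelSource', 'foldernames');
numClasses = numel(categories(imds.Labels));

layers = net.Layers;                                 % keep the early feature layers
layers(23) = fullyConnectedLayer(numClasses);        % replace the final FC layer
layers(25) = classificationLayer;                    % replace the output layer

augimds = augmentedImageDatastore([227 227], imds);  % AlexNet expects 227x227 input
opts = trainingOptions('sgdm', 'InitialLearnRate', 1e-4, 'MaxEpochs', 10);
newNet = trainNetwork(augimds, layers, opts);
```

Because only the replaced layers must be learned from scratch, a few thousand images can be enough, whereas training the whole network would require orders of magnitude more.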

Transfer learning workflow.

So far, Obayashi's retrained network has achieved prediction accuracy approaching 90% for weathering alteration and fracture state.

Integrating AI into a Complete Engineering System

We've seen that with MATLAB, you can create and train a machine learning model or a deep learning network even if you have no experience and little data. But, of course, the work doesn't end there. In most cases, you'll want to integrate your model into a larger system.

Our final example brings together all the elements needed for building an AI system and integrating it into a production system.

Automating Agricultural Harvester Filling Operations


Case New Holland's massive FR9000 series forage harvesters can harvest corn, grass, and other crops at throughputs of more than 300 tons per hour while cutting the crop into pieces as short as 4 mm. In addition to steering and maintaining an optimal speed, harvester operators must direct the crop flow into a trailer and monitor its fill level. The need to focus on driving and filling tasks simultaneously makes a complex job even more difficult.

New Holland engineers set out to automate the filling task with computer vision and AI-based control algorithms. They couldn't replicate complex operating conditions in the lab, and the harvest season was too short to allow for extensive prototyping in the field. Instead, they imported the AI algorithms into their Simulink® system model and performed closed-loop simulations on the desktop, using a 3D scene simulator to mimic the field conditions.

A simplified view of the Case New Holland simulation framework.


Simulation results. Left: harvester boom and trailer. Top right: Camera outputs. Bottom right: Distances and fill levels.


Once the functionality had been tested in desktop simulations, they installed a laptop running the computer vision and control algorithms in a working harvester, fine-tuning the AI algorithms in real time based on operator feedback.

They generated production C code from the controller model and deployed it to an ARM®9 processor, which runs the harvester’s display panel software.
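Code generation from a Simulink model can be driven programmatically. The sketch below shows the generic Embedded Coder pattern, not New Holland's actual build script; the model name `FillController` is a hypothetical placeholder.

```matlab
% Hedged sketch: generating embedded C code from a Simulink controller
% model. Requires Simulink Coder and Embedded Coder; "FillController"
% is a hypothetical model name.
set_param('FillController', 'SystemTargetFile', 'ert.tlc');  % Embedded Coder target
slbuild('FillController');                                   % generate and build C code
```

The generated C code is portable, so the same controller that ran in desktop simulation can be compiled for an embedded target such as the ARM9 processor used here.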

Autonomous Trailer Filling

Operators reported that the system performed just as it had when running on the laptop. The New Holland IntelliFill™ system is now in production on FR9000 series forage harvesters.


With MATLAB, you are ready for AI even if you have no experience with machine learning. You can use apps to quickly try out different approaches, and apply your domain expertise to prepare the data.

If it’s not feasible to identify features in your data, you can use deep learning, which identifies features for you as part of the training process. Deep learning requires lots of data, but you can use transfer learning to extend an existing network to work with the data you have.

Finally, you can deploy the model as part of a complete AI system on an embedded device.

1 "The Real Truth of Artificial Intelligence." Presented at Gartner Data & Analytics Summit, March 2018.