Design Industrial Robot Applications from Perception to Motion
Overview
Developing industrial robotics applications requires knowledge and experience in many engineering domains, including mechanical design, perception, decision making, control design, and embedded systems. This webinar discusses a complete industrial robotics workflow from perception to motion. We will walk through the development of a pick-and-place robot manipulator application. Some of the topics that will be covered include:
- Performing scalable physics simulation
- Designing perception algorithms using computer vision and deep learning
- Setting up co-simulation with sensor and environment models
- Using motion planning with obstacle avoidance
- Implementing supervisory logic and control using state machines
- Achieving advanced control via reinforcement learning
- Connecting to hardware through a ROS network and deploying
Highlights
Through a pick-and-place robot manipulator application, we will cover:
- Creating custom robot designs and importing robot models from CAD tools
- Developing autonomous robotics algorithms
- Simulation-based testing and validation
- Deployment
About the Presenters
YJ Lim is a Senior Technical Product Manager of robotics and autonomous systems at MathWorks. He has over 20 years of experience in the robotics and autonomous systems area. Before joining MathWorks, Lim worked at Vecna Robotics in Waltham, MA, as a Project Manager focused on Vecna's advanced robotics system development. Prior to Vecna, he served as the Chief Innovation Officer at Hstar Technologies, a startup focused on agile mobile robotic platforms and healthcare service robotics systems. He worked with government agencies and served on governmental working groups on matters of advanced robotics system research. Lim also led robotic software development teams at Energid Technologies, a firm that provides engineering services and products for advanced robotics, machine-vision, and simulation applications. Lim received his Ph.D. in mechanical engineering from Rensselaer Polytechnic Institute (RPI) and his master's degree from KAIST in South Korea.
Hannes Daepp is the Product Lead for the Robotics System Toolbox. He has been with MathWorks for five years, where he specializes in developing tools for robotic simulation, inverse kinematics, and manipulation. Prior to joining MathWorks, Daepp completed his PhD at the Georgia Institute of Technology, where he focused on compliant control to increase the safety of human-robot collaboration.
Recorded: 31 Mar 2021
Hello, everyone. Thank you for joining, and welcome to this webinar on Designing Industrial Robot Applications from Perception to Motion. My name is YJ Lim, Technical Product Manager for Robotics and Autonomous Systems at MathWorks. I'm joined by Hannes, our Development Lead for Robotics System Toolbox. Hannes will cover the details of motion planning and hardware implementation in a later section.
Before we begin, let me walk you through a few logistics. If you have any problems hearing the audio or seeing the presentation, please contact the webinar host by typing in the chat panel. If you have any questions for the presenters related to the topic, you can type them in the question and answer panel at any time. Those questions will be answered at the end of the presentation. Thank you.
OK, so let me start with some industrial robot trends. Digital technology development is now enabling a new wave of manufacturing innovation. Connected, flexible manufacturing systems use continuous streams of data from industrial IoT and big data to create an intelligent value chain and to optimize production. Specifically, factories are now adopting advanced systems, like collaborative robots and AI-enabled advanced robotic systems.
With this exciting industrial robot trend, today we are going to talk about the following. I will begin with what advanced robotics means in the smart factory, then talk about how we can develop autonomous robotic systems, and then we will show you pick-and-place robot use cases using a suggested reference workflow. Finally, we will summarize what we discussed here today.
In recent years, with the introduction of digital transformation, the smart factory concept has been promoted in manufacturing to optimize assets, operations, and the workforce on the factory floor using IoT, the cloud, and AI. With IoT-based solutions, we can now extract intelligence from the data coming from sensors, equipment, and machines, enabling optimization of the whole value stream in operations. It is also important to note that autonomy in the smart factory will increase worker safety, as well as factory efficiency, through collaborative robots and AI-based advanced robotic systems.
It becomes clear that smart factories go beyond simple automation. Advanced robots are the foremost technology that smart factories must employ. AI and data analytics will also make these industrial robots more reliable than ever.
A traditional robot mostly performs a single task repetitively. It requires manual programming to set up and operates inside a safety fence. These robots do not use information from the environment.
Mitsubishi Heavy Industries is one such case. To design the control system for a conventional industrial robot, they employed Model-Based Design with MATLAB and Simulink for classical and modern control. This made it possible to respond easily to any change in design constraints and to meet the demanding accuracy requirements for this type of robot. However, industries were looking for more flexible automation to support customized product production.
Robotics is growing rapidly, especially in the interaction between industrial robots and the human workforce. Collaborative robots came to the factory floor for more autonomous and flexible tasks, from painting to packaging to pick-and-place applications, while remaining easy to program, set up, and deploy. A cobot uses sensory input from the environment for control and decision making. This type of robot finally allows safety to be combined with maximum productivity.
Yaskawa Electric Corporation in Japan used their Motoman robot for a pick-and-place system with a perception-enabled solution. This robot used visual and audio perception for voice-based control, object detection, and path planning with sensed information.
We don't really see the difference if a robot only does one thing. What AI brings to robotics is enabling a move away from automation to true autonomy. Over the past few years, AI-enabled robots equipped with autonomous algorithms and perception have started to carry out various tasks.
Now these advanced robots can understand the material presented and the environment to make decisions and execute planning autonomously. The Agile Justin robot, developed by the German Aerospace Center (DLR), is such an AI-based robot; it uses vision cameras to see the world and tactile sensors to feel objects and perform human-like tasks. The DLR team used Model-Based Design with MATLAB and Simulink to develop advanced control, calibration, and path planning algorithms for the Justin robot.
However, there are some underlying market forces that are challenging us in moving toward the smart factory. First is design complexity while implementing novel technologies. Second is software complexity from the current evolution of systems to become more connected, intelligent, and autonomous. New technology trends, like collaborative robots and AI-enabled robots, are adding to the complexity of software features as well.
I know you may have your own challenges in developing autonomous industrial robots. Please share those with us in the chat panel. So let me show you how MATLAB and Simulink can make this complexity a little easier to manage when designing robot applications. It is really important to have a development tool that supports an end-to-end workflow in autonomous system development.
The common workflow for developing autonomous systems rests on three pillars. First, platform design with an environment model. Second, autonomous application design that includes perception, planning, and control. Third, deployment and testing with the hardware. It would be ideal if you could design, simulate, analyze, implement, and test your system within a unified development ecosystem.
Let me talk about the first pillar: designing and developing the robot platform with an environment model. Robot manipulator platform development consists of multiple components, including mechanical system design, actuators, the electrical system, and, of course, the environment model. With MATLAB and Simulink, you can optimize your custom design after creating a physical model. You can import a CAD model and then add autonomous algorithms for the robot manipulator. If you have a URDF file of your robot model, you can create a simulation model with just one line of code and simulate right away.
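As a hedged sketch of that one-line import, the URDF file name below is a placeholder for your own robot description:

```matlab
% Import a robot model from a URDF file into a rigidBodyTree object
% ("myRobot.urdf" is a placeholder for your own robot description)
robot = importrobot("myRobot.urdf");
showdetails(robot)   % list bodies, joints, and parent-child relationships
show(robot)          % visualize the imported model in its home configuration
```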
Conducting experiments with a real system is expensive and time consuming, and sometimes it is dangerous. So simulation is a really powerful tool for analyzing and optimizing your system, especially when your system is complex. For many workflows, like testing and debugging your system or training your AI components, the physical system is replaced with a simulation.
This simulation is needed through all stages of the development process at different levels of fidelity, from low-fidelity to high-fidelity simulation. But engineers may need to build the simulation models and scenarios using different tools. So it is valuable to simulate your system progressively, from low-fidelity algorithm-level simulation to high-fidelity system integration, to reduce risk and development time.
Let's discuss how to model a system at different levels of fidelity to better focus on the design at hand. At the early stage of application development, we may focus on the task scheduling of the application, using an interactive inverse kinematics tool, for example. Then the system can be modeled using a simplified motion model, and you can quickly iterate on the prototype design of the application.
The MATLAB Function block here on the left handles that kind of task scheduling, and we don't pay much attention to any grasp modeling at this stage. Later on, the motion model on the right-hand side can be upgraded with a more accurate model of the system dynamics.
After the task scheduling has been designed and verified iteratively with a simple motion model, we can add a controller along with a richer robot model, such as a more complex manipulator dynamics model that takes joint torques and gripper commands. The rest of the model remains the same as before. Now we can add an even higher-fidelity robot model.
The main difference from the previous model is the plant model. This plant model incorporates the dynamics with built-in joint limits and contact modeling as well. This step adds simulation accuracy at the cost of modeling complexity and simulation speed.
If your application interacts with the environment, you can also do co-simulation by connecting Simulink with an external simulator, such as the Gazebo simulator, so your robot interacts with the environment using sensed data. The Unreal game engine can be used for high-fidelity, photorealistic simulation with a physics engine. So this is a great example of the importance of having tools that interoperate seamlessly with other tools.
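As a rough sketch of how the Gazebo co-simulation connection can be established from MATLAB, assuming the Gazebo machine is running the co-simulation plugin (the IP address and port below are placeholders):

```matlab
% Connect MATLAB to a Gazebo world running the co-simulation plugin
% (IP address and port are placeholders for your Gazebo machine)
gzinit("192.168.116.161", 14581);
modelList = gzmodel("list")   % query the models available in the Gazebo world
```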
Moving on to the next pillar: adding autonomous algorithms for perception, planning, and control. Before we jump into a perception example, let's take a look at how deep learning can be applied to robot perception. First, advances in speech-to-text technology are enabling voice-driven robots.
Images are another rich source of insight into the environment around the robot. Many computer vision technologies with deep learning capabilities have been developed to process information from image pixels, to label objects for the robot to work on, and to find image abnormalities for industrial inspection. 3D point clouds provide an opportunity to better understand the robot's surroundings, to localize the robot within the environment, and to estimate the pose of objects.
Let me show you an example of using the labeling tool for object classification for a pick-and-place robot. In this example, we use a robot to create a set of image data. We move the robot around to capture RGB and depth streams from different locations and under different lighting conditions. We then do ground truth labeling using the Image Labeler app to facilitate automated labeling. You can perform image-wise or frame-wise labeling of the objects from a collection of images.
Finally, we can classify the objects in the image. Here, you can see the actual robot use the perception layer to classify the two objects and to localize them for the pick-and-place application. In the lower-left image, the perception algorithm classified the mug, cup, and membrane from the camera input shown in the upper-left camera view.
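As an illustrative sketch of this classification step, here is how a detector trained on the labeled data might be applied to a new RGB frame; the saved detector file, image name, and variable names are assumptions, not part of the webinar example:

```matlab
% Run a previously trained object detector on a new RGB capture
% (the MAT-file and image name are hypothetical placeholders)
data = load("trainedPickDetector.mat");            % detector saved after training
detector = data.detector;
img = imread("scene.png");                          % RGB frame from the camera
[bboxes, scores, labels] = detect(detector, img);   % locate and classify objects
annotated = insertObjectAnnotation(img, "rectangle", bboxes, cellstr(labels));
imshow(annotated)
```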
Now let me look at the workflow at a slightly higher level. The flowchart shown here is one example of an entire workflow for a pick-and-place robot, showing how the robot manipulator interacts with the environment. It starts with basic initialization steps for perceiving the environment, like scanning and building the environment. Compared to a traditional pick-and-place task, where everything is known beforehand, this step is very important for picking high-mix parts as well as for flexible operation. Now the robot can react to a dynamic environment, such as a changing part location or an obstacle.
I already discussed the middle part, detecting and classifying the parts. Finally, the robot executes the pick-and-place task. Please try out this complete workflow example that applies deep learning to robotics. Now I would like to turn it over to Hannes to take you through the details of motion planning and control and the hardware connection.
Thanks, YJ. I'm now going to talk about the pick-and-place part of this workflow and the underlying motion planning algorithms, which accept poses and output collision-free trajectories. There are two parts to this. The first part is path planning. Path planning finds collision-free waypoint configurations between the start and the goal. The second part is trajectory generation, which translates those into smooth motion that solves the practical application.
I'll start by talking about the path planner. With the path planner, we start with an initial pose, a final pose, and the environment as inputs. The very first thing we need to do is relate these to start and goal configurations, meaning the joint angles that describe the position of the robot in these two poses. This is typically done using inverse kinematics.
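A minimal sketch of that inverse kinematics step, assuming the Kinova Gen3 model that ships with the toolbox; the target pose, weights, and end-effector body name are illustrative:

```matlab
% Convert a desired end-effector pose into a joint configuration
robot = loadrobot("kinovaGen3", "DataFormat", "row");
ik = inverseKinematics("RigidBodyTree", robot);
weights = [0.25 0.25 0.25 1 1 1];             % orientation vs. position weighting
targetPose = trvec2tform([0.4 0.2 0.3]);      % example goal pose (assumed)
initialGuess = homeConfiguration(robot);
[goalConfig, solInfo] = ik("EndEffector_Link", targetPose, weights, initialGuess);
```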
Once we have that, we can provide these inputs to a path planner, which will aim to connect the configurations. What makes this problem interesting is the constraints that need to be satisfied. Examples of constraints include the robot's joint limits or the obstacles. The path planner will then find a collision-free joint path from the start configuration to the goal configuration.
Depending on the characteristics of the application, the environment, and the robot you use, you can solve this using either optimization-based or sampling-based path planners. Today we'll use the latter approach. Specifically, we'll use manipulatorRRT, a feature we ship in Robotics System Toolbox that uses a bidirectional rapidly-exploring random tree (RRT) algorithm.
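A hedged sketch of setting up that planner with manipulatorRRT; the obstacle sizes, poses, and goal configuration below are made up for illustration:

```matlab
% Plan a collision-free joint-space path around simple collision primitives
robot = loadrobot("kinovaGen3", "DataFormat", "row");
env = {collisionBox(0.5, 0.5, 0.05), collisionCylinder(0.03, 0.2)};  % table + can
env{1}.Pose = trvec2tform([0.4 0 -0.05]);
env{2}.Pose = trvec2tform([0.4 0.2 0.1]);

planner = manipulatorRRT(robot, env);
planner.MaxConnectionDistance = 0.3;          % how far each tree extension may reach

startConfig = homeConfiguration(robot);
goalConfig  = [0.8 -0.5 0.4 -1.2 0 0.6 0];    % example joint angles in radians
rng(0)                                        % make the sampling repeatable
path = plan(planner, startConfig, goalConfig);
```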
To show you how this works, I'm going to use this graphic to explain the planner a bit. The bidirectional RRT creates two trees, one starting at the start configuration and one at the goal configuration, and then it tries to connect them to find a path. To understand how it connects them, we need to go through some properties. The first is the max connection distance, which is the maximum distance by which a tree can be extended.
There's also an optional EnableConnectHeuristic property. This is a heuristic that can potentially increase planning speed, and I'll talk a little more about it later.
When the planner tries to extend each tree, it starts with a random configuration and then checks whether that configuration is valid-- that is, whether all the points on the segment are feasible, meaning they're within the joint limits and they don't collide with an obstacle in any way. If they are, we add the segment and then try to connect the tree to the opposing tree.
When there's an obstacle in the path of the connection, the trees can't be connected, and instead we have to keep extending them. But when they can be connected, the connection goes through. When the EnableConnectHeuristic is true, the planner can also ignore the max connection distance, which allows it to connect the trees more quickly and can lead to a faster planning solution.
In this slide, I'm showing one of the examples that showcases how to use the bidirectional RRT feature, manipulatorRRT, for robot manipulators. The example is a simple pick-and-place problem where the robot first picks up a can on the right and then moves it across the barrier to the table. The example shows how tuning the different parameters can affect the performance of the planner, but I'll go through that in detail in the following slides instead.
Let's start with the max connection distance. In these videos, I'll show two planner results that are identical except that the max connection distance differs by an order of magnitude. Both are planning from the start to the goal configuration at the pick position, but the one on the right swings significantly more broadly. You can see why that is by looking at the algorithm.
When the max connection distance is smaller, the waypoints can only be so far apart, and so all the waypoints will be pretty close together. But when the max connection distance is larger, the waypoints can be much farther apart, and so you can get configurations that are farther apart, which leads to behavior like we see in the image below.
Now, you can still modify the path that you get to obtain a shorter one; you don't have to replan completely. You can do this using the shorten workflow. The way the shorten workflow works is that it iterates over the planned, feasible path that we now have to get a shorter one. The video on the right shows the same scenario, where we're going from the start to the pick position.
But you can see that it ends up with a much tighter solution. The way this works is that when we call the shorten function, the planner looks for two non-adjacent edges and picks points on those edges, then tries to connect them. If there's an obstacle in the way, it throws that attempt out, but it can try again; if the points can be successfully connected in a collision-free manner, this becomes the new path. The shorten call runs iteratively over a specified number of iterations, so it can continue to shorten the path, producing a result like the visual on the right.
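Continuing the planner sketch from above, the shortening step is a single call to the shorten function, followed here by interpolation for smooth playback; the iteration count is an assumption:

```matlab
% Iteratively shortcut the planned path, then interpolate it for smooth playback
numIterations = 20;                              % assumed number of shortening passes
shortPath  = shorten(planner, path, numIterations);
interpPath = interpolate(planner, shortPath);    % adds intermediate configurations
```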
The next property I'd like to talk about is the validation distance. Up until now, when I've talked about extending the tree, we've talked about checking whether these segments are valid or not. What we actually mean is not just checking whether the end points are valid, or even checking continuously in between; we check along the segment with a resolution given by the validation distance.
For example, for these segments we're checking that all of these individual gray points are collision-free and feasible-- that they are within the joint limits. In this case they are, so we continue. Now for these ones, you can see that the segment that extends upward intersects the obstacle, and so that one is thrown out. We do this along the entire length of the plan.
So one strategy, then, would just be to specify a really fine validation distance. And that would work. You can see that this planner succeeds. But it really takes a while to do so.
So instead, we could increase the validation distance, decreasing the resolution. You can see that when I do that in this application, by an order of magnitude, the planner still finds a solution, but it does so in half the time. And you can see how that's possible in the graphic above.
One thing to point out about these planning times: these are MATLAB planning times. If you want faster planning times, you could generate code and then call that instead. But the relative differences between the two planners with different validation distances would still persist.
Of course, the extreme version of this is setting the validation distance too high, where the resolution isn't fine enough. In that case, you can get a solution like this, where the robot passes right through the obstacle. As we can see in the graphic above, the resolution isn't fine enough, and the planner isn't even checking at a point sufficient to detect the obstacle intersection.
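In code, this trade-off is just the ValidationDistance property on the same planner sketch from earlier; the two values below are illustrative:

```matlab
% Coarser collision checking: faster planning, but risks missing thin obstacles
planner.ValidationDistance = 0.1;
tic, pathCoarse = plan(planner, startConfig, goalConfig); tCoarse = toc;

% Finer collision checking: slower planning, safer result
planner.ValidationDistance = 0.01;
tic, pathFine = plan(planner, startConfig, goalConfig); tFine = toc;
```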
The last property I'd like to talk about is EnableConnectHeuristic. EnableConnectHeuristic lets the planner ignore the maximum connection distance when it tries to connect the two trees. Currently, when we try to connect the trees, we do something like this, but we can't because of the max connection distance, so we have to find a different path.
So for example here, we're planning from the start position to this pick position. And you can see we find a solution, and it takes about three seconds. If instead we set the EnableConnectHeuristic to true, we can now ignore the max connection distance when we're trying to connect the two trees, and we can get something that is able to connect faster.
So essentially, the planner is able to search more greedily and get a faster solution. You can see that in the results here. This is the same planner, just with EnableConnectHeuristic flipped, and it plans in about a third of the time. It also has fewer waypoints overall.
Now, you won't always see that performance gain; in particular, you won't see the same improvement when there are more obstacles in the search field. Here we're going to plan from the pick position on the right to the place position on the table on the left. In this case, you can see that when we turn EnableConnectHeuristic on, we still get a performance improvement, but it isn't nearly as significant.
And again, we can see why in our graphic. Essentially, when we add more obstacles, the planner tries more frequently to connect the two trees, but it also fails more often. As a result, the planner expends a lot of effort that doesn't really pay off. Obviously, as the field becomes more crowded, this effect can become even more pronounced.
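Toggling the heuristic on the earlier planner sketch looks like this; whether it helps depends on how cluttered the environment is, as just discussed:

```matlab
% Let tree-to-tree connection attempts ignore MaxConnectionDistance
planner.EnableConnectHeuristic = true;
pathGreedy = plan(planner, startConfig, goalConfig);
```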
Up until now, we've been assuming that we're planning from configuration to configuration. But if we step back and look at the overall workflow, a lot of pick-and-place applications are broader than that. For example, in this image, the robot uses a depth-sensing camera to scan the field and detect the objects it wants to pick up and the obstacles. And really, it just wants to get to a position where it can pick up the bottle; it doesn't necessarily need to reach one particular pose.
In applications like this, a workspace goal region is handy. The workspace goal region lets you specify a pose and then a region about that pose. Here, for example, we can say we want to pick up the bottle, but really we just want to reach a point where the gripper is pointing down at the bottle, is inside some specified range, and can have a full rotation of 360 degrees. We can do something similar with the can, and we could even constrain it further and say that instead of the full 360 degrees, we just want a subset.
This type of application also extends to what we were looking at before. Suppose the table on the left is a conveyor belt. In that case, we probably just want to plan to somewhere on the conveyor belt, so we've defined a place region inside some constrained Cartesian bounds. The orientation in the middle again specifies that the gripper has to be pointed down, but it can have any orientation about the z-axis.
And here this shows three separate results from the same planner function call. On the right, we've defined a place region, as we call it, and you can see that all of these are valid solutions.
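A hedged sketch of defining such a place region with workspaceGoalRegion; the end-effector name, reference pose, and bounds are illustrative assumptions:

```matlab
% Define a goal region: gripper pointing down, small Cartesian tolerance,
% free rotation about the z-axis
goalRegion = workspaceGoalRegion("EndEffector_Link");
goalRegion.ReferencePose = trvec2tform([0.5 0.3 0.2]) * eul2tform([0 pi 0]);
goalRegion.Bounds(1,:) = [-0.05 0.05];   % x tolerance in meters
goalRegion.Bounds(2,:) = [-0.05 0.05];   % y tolerance in meters
goalRegion.Bounds(4,:) = [-pi pi];       % any rotation about the z-axis

rng(0)
pathToRegion = plan(planner, startConfig, goalRegion);
```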
So to review, we started with a set of input configurations-- the start, pick, and place configurations-- and passed them to a path planner, which translated them into an ordered sequence of waypoint configurations. But when we actually pass this to the motion controller or the manipulator firmware, we'll need to encode a time association, which is where trajectory generation comes into play. One intuitive way of doing this would be to linearly interpolate between the points: we have all the points, we know the total time we want the motion to take, and we just assume the robot hits each point at a specified time.
The problem is that this doesn't result in a smooth trajectory. As you can see here, the velocity is discontinuous and jumps between points. So this trajectory ultimately isn't feasible, and the firmware could even reject it, because the controller can't execute it as specified.
So this is the role that trajectory generation fulfills. It's the task of mapping the ordered path from the planner to a time-based control sequence. And typically, this uses a class of functions to connect the waypoints.
There are different ways to create trajectories that interpolate between joint configurations. This can be done in the joint space where we've been working, among the joint angles, or in the task space as seen here, meaning at the end effector. One option is a trapezoidal velocity trajectory: a piecewise trajectory where each segment has a constant acceleration, zero acceleration, and then a constant deceleration.
This connects the waypoints, stopping at each one while using smooth trajectories. If instead you want to move through the waypoints with non-zero velocity and continuous velocity and acceleration, you could consider polynomial trajectories. In Robotics System Toolbox we provide cubic, quintic, and B-spline polynomial trajectories, which allow you to generate results like this.
Moving back to our application from earlier, instead of using linear interpolation, we can use cubic polynomial trajectories. Here, the velocity boundary conditions are specified by a MATLAB spline. You can see that this results in a smooth trajectory, and most importantly, it satisfies our constraints: the velocity is smooth, and this is a trajectory that can be executed by the manipulator.
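As a hedged sketch, both trajectory types are available as MATLAB functions; here the waypoints come from the interpolated path in the earlier planner sketch, and the timing values are assumptions:

```matlab
% Time-parameterize the planned waypoints (joints-by-waypoints matrix expected)
waypoints  = interpPath';
numSamples = 100;

% Trapezoidal velocity profile: stops at each waypoint with smooth accel/decel
[qTrap, qdTrap] = trapveltraj(waypoints, numSamples);

% Cubic polynomials: continuous velocity through the intermediate waypoints
tWaypoints = linspace(0, 5, size(waypoints, 2));   % assume a 5 s motion
tSamples   = linspace(0, 5, numSamples);
[qCubic, qdCubic, qddCubic] = cubicpolytraj(waypoints, tWaypoints, tSamples);
```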
One thing I'd like to talk about a little more is the motion controller. If you're just passing trajectories to the manipulator firmware, this is handled by the firmware. But if you want to design your own motion controller, you can do that with our toolboxes as well.
Here's an example that YJ showed earlier. In this example, we're controlling a Simscape model of the robot, and we use a computed torque controller. The computed torque controller is here; it uses blocks from the manipulator Simulink library to compensate for the dynamics of the robot and then assign a prescribed set of dynamics. This ensures that the robot follows the trajectory according to the motion profile we expect.
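In the webinar this controller is built with manipulator blocks in Simulink; purely as an illustrative MATLAB sketch of the same idea, reusing the robot model and cubic trajectory from the earlier sketches (gains and state values assumed), one control step might look like:

```matlab
% One step of a computed-torque law: prescribe PD error dynamics, then use the
% robot's inverse dynamics to turn the commanded acceleration into joint torques
Kp = 100; Kd = 20;                                 % assumed PD gains
q   = homeConfiguration(robot);                    % current joint positions (row)
qd  = zeros(size(q));                              % current joint velocities
qRef   = qCubic(:,1)';                             % desired position sample
qdRef  = qdCubic(:,1)';                            % desired velocity sample
qddRef = qddCubic(:,1)';                           % desired acceleration sample
aCmd = qddRef + Kd*(qdRef - qd) + Kp*(qRef - q);   % prescribed closed-loop dynamics
tau  = inverseDynamics(robot, q, qd, aCmd);        % torque command to the joints
```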
These kinds of tools can also be used for more advanced controllers-- for example, this shipping example that shows users how to design an impedance controller. Impedance controllers let you specify how the robot will behave in response to unexpected contact, which is important for situations like those seen with cobots, where you're operating around humans and want to make sure the robot reacts in a safe way.
So now that we've shown how to implement the pick-and-place task from path planning to trajectory generation to motion control, we can deploy it. Here, the MATLAB simulation is shown on the left and the hardware implementation on the right. This does require a final deployment step, which I'll talk about in a few slides, but you can see how everything translates successfully from simulation to hardware.
Once we're sure that we can execute the pick-and-place portion of the workflow, we can step back and integrate it into the overall application. As you can see here, this is a continuous process in which we repeatedly go through the identification and classification steps and then the pick-and-place steps. The parameters can vary a bit, and it can be quite complex.
For these kinds of applications, Stateflow can be helpful. Stateflow charts let you build a state machine that schedules the high-level tasks and integrates smoothly with existing MATLAB and Simulink workflows. This lets you schedule the tasks and move from task to task in the pick-and-place workflow.
This now brings us to our last pillar, which is hardware implementation. Here I'm going to cover the different ways that you can connect and deploy your code to the robot. One option is to use existing MATLAB APIs, either ones that we provide in a support package or ones provided directly by the manufacturer.
A second option is to rely on code generation. Here we design algorithms in MATLAB and Simulink, generate code, package that code using a workflow like packNGo, and then integrate it with the manufacturer's existing C and C++ APIs. This is the workflow we used in the demo I showed a couple of slides ago.
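A hedged sketch of that code generation and packaging step; planPickMotion is a hypothetical entry-point function you would write, and the argument sizes are assumptions:

```matlab
% Generate a C++ static library from a MATLAB entry-point function
cfg = coder.config("lib");
cfg.TargetLang = "C++";
codegen planPickMotion -config cfg -args {zeros(1,7), zeros(1,7)} -report

% Bundle the generated code and its dependencies for hand-off to the
% manufacturer's C/C++ build environment
load(fullfile("codegen", "lib", "planPickMotion", "buildInfo.mat"))
packNGo(buildInfo, "packType", "hierarchical")
```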
A third option is to use ROS. ROS is a middleware: when you have algorithms you've designed and a robot, in hardware or in simulation, that you're interacting with, ROS is the communication layer you can use to talk between the two. In addition to connecting to the robot directly, your ROS nodes can also be deployed to a network of machines or to a GPU.
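As a hedged sketch of that ROS connection from MATLAB, the master IP address and topic name below are assumptions for a Kinova Gen3 ROS driver setup:

```matlab
% Connect to a ROS master and read the robot's joint states
rosinit("192.168.1.10")                                     % ROS master IP (placeholder)
jointSub = rossubscriber("/my_gen3/joint_states", "sensor_msgs/JointState");
jointMsg = receive(jointSub, 5);                            % wait up to 5 seconds
disp(jointMsg.Position')                                    % current joint angles
rosshutdown
```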
In Robotics System Toolbox, we also ship a support package for manipulators that shows how to use the options I discussed with the Kinova Gen3. The support package comes with an API as well as a host of examples that cover everything from connectivity via the MATLAB API, to ROS connectivity, to deployment.
I'd now like to turn this presentation back over to YJ, who will wrap up.
Thank you, Hannes. I hope you are now able to see how MATLAB and Simulink provide a unified environment to develop autonomous industrial robot applications from perception to motion and on to the interface with real robot hardware. It is really important to have such a development tool that supports an end-to-end workflow for designing, simulating, and testing your autonomous system. Here, I'll show two examples with a more industrial setting: one for a warehouse pick-and-place application and another for parallel-robot pick-and-place.
Here, the robot sorts detected objects onto the shelf. Robotics System Toolbox is used to model, simulate, and visualize the manipulator and for collision detection. Stateflow is used to schedule the high-level tasks and to step from task to task in this example. You can apply the same workflow to a delta robot to sort different types of bottles from the conveyor belt and place them at predetermined positions. The features used here include the path planner, trajectory generation with a customizable velocity profile, and inverse kinematics.
OK, so here is a quick recap of what we discussed today. We discussed what advanced robotic systems are, such as collaborative robots and AI-enabled advanced robots that leverage perception to understand the environment, make decisions, and execute planning. We also discussed the three pillars for developing autonomous industrial robot applications. And finally, we discussed MATLAB and Simulink as a unified development ecosystem for developing autonomous robotic applications from perception to motion and interfacing with the hardware.
OK, so to learn more, please visit our webpage, mathworks.com/robotics, and download a trial to check out the reference examples. I encourage you to visit our GitHub repository as well, and to browse the robotics examples to see if they can help with your application. Thank you for your attention.