Constraint Enforcement for Control Design

Some control applications require the controller to select control actions such that the plant states do not violate certain critical constraints. In many cases, the constraints are on plant states that the controller does not control directly. Instead, you specify a constraint function that expresses the constraint in terms of the control action signal. This constraint function can be a known relationship or one that you must learn from experimental data.

Constraint Enforcement Block

The Constraint Enforcement block, which requires Optimization Toolbox™ software, computes modified control actions that are closest to specified control actions subject to constraints and action bounds. The block uses a quadratic programming (QP) solver to find the control action u that minimizes the function |u − u0|^2 in real time. Here, u0 is the unmodified control action from the controller.

The solver applies the following constraints to the optimization problem.

fx + gx·u ≤ c

umin ≤ u ≤ umax

In these constraints:
  • fx and gx are coefficients of the constraint function.

  • c is a bound for the constraint function.

  • umin is a lower bound for the control action.

  • umax is an upper bound for the control action.
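For a scalar control action, this optimization has a simple closed form: minimizing |u − u0|^2 over the interval allowed by the constraint fx + gx·u ≤ c and the bounds umin ≤ u ≤ umax amounts to clipping u0 to that interval. The following Python sketch (hypothetical code, not the block's implementation) illustrates the idea.

```python
def enforce_constraint(u0, fx, gx, c, umin, umax):
    """Project the unmodified action u0 onto the feasible set
    {u : fx + gx*u <= c, umin <= u <= umax} (scalar case).

    Minimizing |u - u0|^2 over an interval is just clipping,
    so no general QP solver is needed for a single action.
    """
    lo, hi = umin, umax
    if gx > 0:
        # Constraint tightens the upper bound: u <= (c - fx) / gx
        hi = min(hi, (c - fx) / gx)
    elif gx < 0:
        # Constraint tightens the lower bound: u >= (c - fx) / gx
        lo = max(lo, (c - fx) / gx)
    elif fx > c:
        # gx == 0 and fx > c: no choice of u can satisfy the constraint
        raise ValueError("constraint infeasible")
    if lo > hi:
        raise ValueError("feasible interval is empty")
    # Closest feasible action to u0
    return min(max(u0, lo), hi)

# The controller requests u0 = 2, but the constraint 0 + 1*u <= 1 caps it at 1
print(enforce_constraint(2.0, fx=0.0, gx=1.0, c=1.0, umin=-5.0, umax=5.0))  # -> 1.0
```

When u is a vector, the feasible set is a polytope rather than an interval, which is why the block uses a general QP solver.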

Constraint Function Coefficients

Depending on your application, the coefficients fx and gx of the constraint function can be linear or nonlinear functions of the plant states and can be either known or unknown.

For an example that uses known nonlinear constraint function coefficients, see Enforce Constraints for PID Controllers. This example derives the constraint function from the plant dynamics.

When you cannot derive the constraint function from the plant directly, you must learn the coefficients using input/output data from experiments or simulations. To learn such constraints, you can create a function approximator and tune it to reproduce the input-to-output mapping in the data.

To learn linear coefficient functions, you can find a least-squares solution from the data. For examples that use this approach, see Train RL Agent for Adaptive Cruise Control with Constraint Enforcement and Train RL Agent for Lane Keeping Assist with Constraint Enforcement.
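As a sketch of the least-squares approach (the model, data, and function names here are hypothetical, not taken from those examples): suppose the constrained quantity is logged as y ≈ θ1·x + θ2·u, where θ1·x plays the role of fx and the constant θ2 plays the role of gx. Stacking the logged samples gives a linear regression that the normal equations solve directly.

```python
def fit_linear_constraint(xs, us, ys):
    """Least-squares fit of y ~ theta1*x + theta2*u from logged data.

    theta1 approximates the state coefficient fx(x) = theta1*x and
    theta2 the action coefficient gx (assumed constant here).
    Solves the 2x2 normal equations A'A theta = A'y by hand.
    """
    # Accumulate the entries of A'A and A'y for regressor rows [x, u]
    sxx = sum(x * x for x in xs)
    sxu = sum(x * u for x, u in zip(xs, us))
    suu = sum(u * u for u in us)
    sxy = sum(x * y for x, y in zip(xs, ys))
    suy = sum(u * y for u, y in zip(us, ys))
    det = sxx * suu - sxu * sxu
    theta1 = (sxy * suu - suy * sxu) / det
    theta2 = (suy * sxx - sxy * sxu) / det
    return theta1, theta2

# Synthetic "experiment" data generated with theta1 = 0.5, theta2 = 2.0
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
us = [1.0, -1.0, 0.5, 2.0, -0.5]
ys = [0.5 * x + 2.0 * u for x, u in zip(xs, us)]
t1, t2 = fit_linear_constraint(xs, us, ys)
print(t1, t2)  # recovers 0.5 and 2.0
```

With noisy data, the same normal equations give the best fit in the least-squares sense rather than an exact recovery.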

For nonlinear coefficient functions, you must tune a nonlinear function approximator. Examples of such an approximator include:

  • Deep neural networks (requires Deep Learning Toolbox™ software)

  • Nonlinear identified system models (requires System Identification Toolbox™ software)

  • Fuzzy inference systems (requires Fuzzy Logic Toolbox™ software)
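As a minimal sketch of the tuning step, using a one-parameter model y ≈ tanh(w·x) as a stand-in for any of the approximators above (the model, data, and function names are hypothetical), gradient descent on the squared error fits w to the data.

```python
import math

def tune_approximator(xs, ys, lr=0.1, steps=2000):
    """Tune the parameter w of a toy nonlinear model y_hat = tanh(w*x)
    by gradient descent on the squared error -- a one-parameter stand-in
    for training a deep network or identified model on experiment data."""
    w = 0.0
    for _ in range(steps):
        grad = 0.0
        for x, y in zip(xs, ys):
            t = math.tanh(w * x)
            # d/dw of (tanh(w*x) - y)^2 is 2*(t - y)*(1 - t^2)*x
            grad += 2.0 * (t - y) * (1.0 - t * t) * x
        w -= lr * grad / len(xs)
    return w

# Data generated from a "true" nonlinear coefficient with w = 0.8
xs = [-2.0, -1.0, -0.5, 0.5, 1.0, 2.0]
ys = [math.tanh(0.8 * x) for x in xs]
w = tune_approximator(xs, ys)
print(round(w, 3))  # close to 0.8
```

A real approximator has many parameters and uses the training loops of the respective toolbox, but the principle is the same: adjust the parameters until the approximator reproduces the measured input-to-output mapping.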

For examples that learn nonlinear coefficient functions by training a deep neural network, see Learn and Apply Constraints for PID Controllers and Train Reinforcement Learning Agent with Constraint Enforcement.
