Back-to-Back Test Status for Normal and PIL Mode

Metric ID

slcomp.pil.B2BTestStatus

The back-to-back testing metrics perform translation validation between a model and the generated code.

This metric returns the status of back-to-back testing for each test by comparing, at each time step, the outputs of the model simulation and the outputs of the code executed in processor-in-the-loop (PIL) mode. The metric compares the normal mode and PIL mode test runs from baseline, equivalence, and simulation tests.

Computation Details

Scope of Analysis

The metric analyzes only unit tests. A unit test directly tests either the entire unit or lower-level elements in the unit, such as subsystems.


The way that the metric compares the normal mode and PIL mode results depends on the test type:

Equivalence Tests.  For equivalence tests, the metric uses the method getComparisonResult to get the equivalence data comparison results from the test and determine the back-to-back testing status. If you specify signal tolerances, the metric uses them as the acceptable tolerance for differences between the normal mode and PIL mode results.

Baseline Tests and Simulation Tests.  For baseline tests and simulation tests, the metric uses the Simulation Data Inspector function Simulink.sdi.compareRuns to compare the simulation outputs from the logged signals in the test and determine the back-to-back testing status. For information on how to add logged signals to a test, see Capture Simulation Data in a Test Case (Simulink Test). The comparison checks for mismatches in the number of signals, signal data types, signal time steps, and other metadata. Each output that the test logged in normal mode must have a matching output logged in PIL mode, otherwise the back-to-back comparison fails. For more information on the comparison, see Simulink.sdi.compareRuns and How the Simulation Data Inspector Compares Data.

The metric specifies the absolute tolerance ('AbsTol') and relative tolerance ('RelTol') values for the function Simulink.sdi.compareRuns depending on the data type of the signal. If you need the metric to consider individual signal tolerances in the comparison, use an equivalence test instead.

Data Type | Absolute Tolerance | Relative Tolerance

For data types not listed above, the absolute and relative tolerances are 0.
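As an illustration, the same kind of comparison can be reproduced manually with Simulink.sdi.compareRuns. This is a minimal sketch: it assumes the normal mode and PIL mode simulations are the two most recent runs in the Simulation Data Inspector, and the tolerance values shown are illustrative, not the values the metric itself applies.

```matlab
% Get the two most recent runs logged in the Simulation Data Inspector
% (assumed here to be the normal mode and PIL mode simulations).
runIDs = Simulink.sdi.getAllRunIDs;
normalRunID = runIDs(end-1);
pilRunID = runIDs(end);

% Compare the runs with explicit absolute and relative tolerances,
% as the metric does for each signal data type.
diffResult = Simulink.sdi.compareRuns(normalRunID,pilRunID, ...
    'AbsTol',1e-6,'RelTol',1e-6);

% Inspect the comparison status of each signal pair.
for k = 1:diffResult.Count
    sigResult = getResultByIndex(diffResult,k);
    disp(sigResult.Status)
end
```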


To collect data for this metric, execute the metric engine and use getMetrics with the metric ID slcomp.pil.B2BTestStatus.

metric_engine = metric.Engine;
execute(metric_engine,"slcomp.pil.B2BTestStatus");
results = getMetrics(metric_engine,"slcomp.pil.B2BTestStatus")

Collecting data for this metric loads the model file and test result files and requires a Simulink® Test™ license.


For this metric, the function getMetrics returns a metric.Result instance for each test.

For each test, the returned metric.Result instance reports Value as one of these outputs:

  • 0 — The comparison between normal and PIL mode test runs failed. For information on how the metric compares the results, see Computation Details.

  • 1 — The comparison between normal and PIL mode test runs passed.

  • 2 — The test did not run back-to-back. The metric considers a test untested if the test is missing normal mode results, PIL mode results, or both. Make sure that you run the test in both normal mode and PIL mode. For equivalence tests, make sure that you run both simulations in the same test run.
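The steps above can be combined into a short script that collects the metric and reports each status. This is a minimal sketch; it assumes the tests have already run in both normal mode and PIL mode in the current project.

```matlab
% Collect the back-to-back testing metric for the current project.
metric_engine = metric.Engine;
execute(metric_engine,"slcomp.pil.B2BTestStatus");
results = getMetrics(metric_engine,"slcomp.pil.B2BTestStatus");

% Translate each Value into its back-to-back testing status.
for n = 1:numel(results)
    switch results(n).Value
        case 0
            status = "failed";
        case 1
            status = "passed";
        case 2
            status = "untested";
    end
    fprintf("Test result %d: %s\n",n,status);
end
```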

To view the comparison results that the metric uses to determine the status of back-to-back testing, use the function metric.loadB2BResults.

Compliance Thresholds

The default compliance thresholds for this metric are:

  • Compliant — 100% of tests in the unit passed.

  • Non-Compliant — One or more tests failed or are untested.

  • Warning — None.
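Given the metric.Result array returned by getMetrics, the default thresholds can be checked in a few lines. This sketch assumes results holds one result per test in the unit.

```matlab
% Determine compliance from the collected results: the unit is
% Compliant only if every test passed (Value == 1).
values = [results.Value];
passRate = 100 * nnz(values == 1) / numel(values);
isCompliant = (passRate == 100)
```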

See Also


Related Topics