metric.loadB2BResults

View comparison results from back-to-back testing metric

Since R2024a

    Description

    metric.loadB2BResults(metricResult) loads the back-to-back comparison results from the metric result object metricResult.

    For tests that directly store the back-to-back comparison results, the function metric.loadB2BResults loads the comparison results into the Simulink Test Manager (Simulink Test). Otherwise, the function loads the comparison results into the Simulation Data Inspector.
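
    For example, assuming you have already collected results for a back-to-back test status metric such as slcomp.sil.B2BTestStatus in the current project, a minimal sketch of the workflow is:

    % Minimal sketch: assumes the current project already contains test
    % results for the slcomp.sil.B2BTestStatus metric.
    metric_engine = metric.Engine;
    execute(metric_engine,"slcomp.sil.B2BTestStatus");
    metricResults = getMetrics(metric_engine,"slcomp.sil.B2BTestStatus");
    metric.loadB2BResults(metricResults(1))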

    Examples

    Find the status of back-to-back testing for the model cc_DriverSwRequest and view the back-to-back comparison results.

    Open the project that contains the model and testing artifacts.

    openExample("slcheck/BackToBackTestingExample")
    openProject("cc_CruiseControl");

    Run the test Detect cruise in normal mode and in software-in-the-loop (SIL) mode.

    tf = sltest.testmanager.load("cc_DriverSwRequest_Tests.mldatx");
    ts = getTestSuiteByName(tf,"Unit test for DriverSwRequest");
    tc = getTestCaseByName(ts,"Detect cruise");
    run(tc,SimulationMode="Normal")
    run(tc,SimulationMode="Software-in-the-Loop")
    The test runs in both modes and passes.
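
    To confirm the outcome programmatically instead of in the Test Manager, you can capture the result set that run returns and check its failure count. This sketch assumes the NumFailed property of the returned sltest.testmanager.ResultSet object.

    % Optional check: run returns a result set whose NumFailed property
    % reports how many test results in that execution failed.
    normalResult = run(tc,SimulationMode="Normal");
    silResult = run(tc,SimulationMode="Software-in-the-Loop");
    assert(normalResult.NumFailed == 0 && silResult.NumFailed == 0, ...
        "Detect cruise did not pass in both modes")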

    Create a metric.Engine object that you can use to collect metric results for the current project.

    metric_engine = metric.Engine;

    Find the status of back-to-back testing between normal mode and SIL mode for the model cc_DriverSwRequest by executing and getting the results for the metric slcomp.sil.B2BTestStatus.

    scope = [which("cc_DriverSwRequest"),"cc_DriverSwRequest"];
    execute(metric_engine,"slcomp.sil.B2BTestStatus", ...
        ArtifactScope = scope);
    metricResults = getMetrics(metric_engine,"slcomp.sil.B2BTestStatus", ...
        ArtifactScope = scope);
    The function getMetrics returns a metric.Result instance for each test in the project. The Value property of each instance contains the back-to-back testing status for that test.
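
    For example, to see the status reported for each test before loading any comparison results, you can display the Value property of each result. This sketch assumes that the Name property of the associated metric.Artifact objects identifies each test.

    % Display the back-to-back testing status for each metric result.
    % For these metrics, a Value of 0 indicates a failure.
    for n = 1:length(metricResults)
        fprintf("%s: %g\n",metricResults(n).Artifacts(1).Name, ...
            metricResults(n).Value);
    end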

    The back-to-back metrics compare, at each time step, the outputs of the model simulation and the outputs of the code executed in SIL or PIL mode. For more information on the back-to-back metrics, see Code Testing Metrics.

    Open the test Detect cruise and use the function metric.loadB2BResults to load the comparison results that the back-to-back metrics used to determine the status of back-to-back testing. The metric results might be in a different order on your machine.

    testCaseArtifact = metricResults(1).Artifacts(1);
    openArtifact(metric_engine,testCaseArtifact.UUID);
    metric.loadB2BResults(metricResults(1))
    Because the Detect cruise test is a simulation test that does not store the back-to-back comparison results directly, the function loads the comparison results into the Simulation Data Inspector.
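
    If you want to inspect the loaded comparison results programmatically, you can query the Simulation Data Inspector by using functions such as Simulink.sdi.getAllRunIDs and Simulink.sdi.view, for example:

    % List the runs now available in the Simulation Data Inspector and
    % bring the Simulation Data Inspector window to the front.
    runIDs = Simulink.sdi.getAllRunIDs;
    fprintf("The Simulation Data Inspector contains %d runs.\n",numel(runIDs));
    Simulink.sdi.view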

    If you want to load the comparison results only for the tests that failed back-to-back testing, you can iterate over the metric results and find those for which the Value property is 0. For the back-to-back testing status metrics, a Value of 0 indicates a failure.

    for n=1:length(metricResults)
        testCaseArtifact = metricResults(n).Artifacts(1);
        if metricResults(n).Value == 0
            openArtifact(metric_engine,testCaseArtifact.UUID);
            metric.loadB2BResults(metricResults(n))
        end
    end
    For this example, none of the tests fail back-to-back testing. For information on the back-to-back testing metrics, see Code Testing Metrics.
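
    To summarize the outcome across all of the tests without opening each result, you can count the failing results directly from the Value properties. This sketch assumes that each result stores a numeric Value.

    % Count how many metric results report a back-to-back testing failure.
    numFailed = nnz([metricResults.Value] == 0);
    fprintf("%d of %d tests failed back-to-back testing.\n", ...
        numFailed,numel(metricResults));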

    Input Arguments

    metricResult — Metric results
    metric.Result object

    Metric results, specified as a metric.Result object.

    The metric results must include results from one of the back-to-back test status metrics, such as slcomp.sil.B2BTestStatus.

    Version History

    Introduced in R2024a