
padv.builtin.task.MergeTestResults Class

Namespace: padv.builtin.task
Superclasses: padv.Task

Task for generating a consolidated test results report and merged coverage reports

Description

This class requires CI/CD Automation for Simulink Check.

The padv.builtin.task.MergeTestResults class provides a task that can generate a consolidated test results report and merged coverage reports using Simulink® Test™ and Simulink Coverage™. The task can generate the following artifacts for a model:

  • a consolidated test results report

  • a merged model coverage report for normal mode simulation results

  • a merged code coverage report for software-in-the-loop (SIL) mode results

  • a merged code coverage report for processor-in-the-loop (PIL) mode results

You can run your tests using the built-in task padv.builtin.task.RunTestsPerTestCase and then generate the reports using the MergeTestResults task. You can add these tasks to your process model by using the method addTask. After you add the tasks to your process model, you can run the tasks from the Process Advisor app or by using the function runprocess.

Alternatively, you can run your tests using the built-in task padv.builtin.task.RunTestsPerModel, but to generate the consolidated test results report and merged coverage report you need to reconfigure the MergeTestResults task.
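A minimal sketch of this reconfiguration, assuming your process model object is pm and using the default input queries described in the InputQueries property on this page (the exact name-value arguments accepted by padv.builtin.query.GetOutputsOfDependentTask may differ in your release; check its reference page):

```matlab
% In your process model file: run tests per model instead of per
% test case, then point the MergeTestResults task at those outputs.
testTask = pm.addTask(padv.builtin.task.RunTestsPerModel);
mergeTestTask = pm.addTask(padv.builtin.task.MergeTestResults);

% Replace the default input queries, which look for outputs of
% RunTestsPerTestCase, with queries that find the outputs of
% RunTestsPerModel.
mergeTestTask.InputQueries = [ ...
    padv.builtin.query.GetIterationArtifact, ...
    padv.builtin.query.GetOutputsOfDependentTask( ...
        Task = "padv.builtin.task.RunTestsPerModel")];

% Declare the dependency so the build system runs the tests first.
mergeTestTask.dependsOn(testTask, WhenStatus = ["Pass","Fail"]);
```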

To view the source code for this built-in task, in the MATLAB® Command Window, enter:

open padv.builtin.task.MergeTestResults

The padv.builtin.task.MergeTestResults class is a handle class.

Creation

Description

task = padv.builtin.task.MergeTestResults() creates a task for generating a consolidated test results report and merged coverage reports using Simulink Test and Simulink Coverage.


task = padv.builtin.task.MergeTestResults(Name=Value) sets certain properties using one or more name-value arguments. For example, task = padv.builtin.task.MergeTestResults(Name = "MyTestAndCoverageReportsTask") creates a task with the specified name.

You can use this syntax to set property values for Name, Title, IterationQuery, InputQueries, InputDependencyQuery, or Licenses.
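For example, you can set several of these properties in one constructor call. This sketch uses the example values shown in the property descriptions on this page:

```matlab
% Create the task with a custom name and title. Name is the unique
% identifier in the process; Title is the human-readable name shown
% in the Process Advisor app.
task = padv.builtin.task.MergeTestResults( ...
    Name = "MyTestAndCoverageReportsTask", ...
    Title = "My Test And Coverage Reports Task");
```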

The padv.builtin.task.MergeTestResults class also has other properties, but you cannot set those properties during task creation.

Properties


The MergeTestResults class inherits properties from padv.Task. The properties listed in Specialized Inherited Properties are padv.Task properties that the MergeTestResults task overrides.

The task also has properties for specifying test result options and coverage report options.

Specialized Inherited Properties

Unique identifier for task in process, specified as a string.

Example: "MyTestAndCoverageReportsTask"

Data Types: string

Human-readable name that appears in Process Advisor app, specified as a string.

Example: "My Test And Coverage Reports Task"

Data Types: string

Task description, specified as a string.

When you point to a task in Process Advisor and click the information icon, the tooltip shows the task description.

Example: "This task uses Simulink Test and Simulink Coverage to generate a consolidated test results report and a merged coverage report for a model."

Data Types: string

Path to task documentation, specified as a string.

When you point to a task in Process Advisor, click the ellipsis (...), and click Help, Process Advisor opens the task documentation.

Example: fullfile(pwd,"taskHelpFiles","myTaskDocumentation.pdf")

Data Types: string

Type of artifact, specified as one or more of the values listed in this table. To specify multiple values, use an array.

Category          Artifact Type                  Description

MATLAB            "m_class"                      MATLAB class
                  "m_file"                       MATLAB file
                  "m_func"                       MATLAB function
                  "m_method"                     MATLAB class method
                  "m_property"                   MATLAB class property

Model Advisor     "ma_config_file"               Model Advisor configuration file
                  "ma_justification_file"        Model Advisor justification file

Process Advisor   "padv_dep_artifacts"           Related artifacts that the current artifact depends on
                  "padv_output_file"             Process Advisor output file

Project           "project"                      Current project file

Requirements      "mwreq_item"                   Requirement (since R2024b)
                  "sl_req"                       Requirement (for R2024a and earlier)
                  "sl_req_file"                  Requirement file
                  "sl_req_table"                 Requirements Table

Stateflow®        "sf_chart"                     Stateflow chart
                  "sf_graphical_fcn"             Stateflow graphical function
                  "sf_group"                     Stateflow group
                  "sf_state"                     Stateflow state
                  "sf_state_transition_chart"    Stateflow state transition chart
                  "sf_truth_table"               Stateflow truth table

Simulink          "sl_block_diagram"             Block diagram
                  "sl_data_dictionary_file"      Data dictionary file
                  "sl_embedded_matlab_fcn"       MATLAB function
                  "sl_library_file"              Library file
                  "sl_model_file"                Simulink model file
                  "sl_protected_model_file"      Protected Simulink model file
                  "sl_subsystem"                 Subsystem
                  "sl_subsystem_file"            Subsystem file

System Composer™  "zc_block_diagram"             System Composer architecture
                  "zc_component"                 System Composer architecture component
                  "zc_file"                      System Composer architecture file

Tests             "harness_info_file"            Harness info file
                  "sl_harness_block_diagram"     Harness block diagram
                  "sl_harness_file"              Test harness file
                  "sl_test_case"                 Simulink Test case
                  "sl_test_case_result"          Simulink Test case result
                  "sl_test_file"                 Simulink Test file
                  "sl_test_iteration"            Simulink Test iteration
                  "sl_test_iteration_result"     Simulink Test iteration result
                  "sl_test_report_file"          Simulink Test result report
                  "sl_test_result_file"          Simulink Test result file
                  "sl_test_resultset"            Simulink Test result set
                  "sl_test_seq"                  Test Sequence
                  "sl_test_suite"                Simulink Test suite
                  "sl_test_suite_result"         Simulink Test suite result

Example: "sl_model_file"

Example: ["sl_model_file" "zc_file"]

Query that finds the artifacts that the task iterates over, specified as a padv.Query object or the name of a padv.Query object. When you specify IterationQuery, the task runs one time for each artifact returned by the query. In the Process Advisor app, the artifacts returned by IterationQuery appear under the task title.

For more information about task iterations, see Overview of Process Model.

Query that finds artifact dependencies for task inputs, specified as a padv.Query object or the name of a padv.Query object.

The build system runs the query specified by InputDependencyQuery to find the dependencies for the task inputs, since those dependencies can impact if task results are up-to-date. For more information, see Overview of Process Model.

Example: padv.builtin.query.GetDependentArtifacts

List of licenses that the task requires, specified as a string.

Data Types: string

Type of CI-compatible result files that the task itself generates when run, specified as either:

  • "JUnit" — JUnit-style XML report for task results.

  • "" — None. The build system generates a JUnit-style XML report for the task instead.

Inputs to the task, specified as:

  • a padv.Query object

  • the name of a padv.Query object

  • an array of padv.Query objects

  • an array of names of padv.Query objects

By default, the task MergeTestResults finds the models with test cases and the associated test results by using the built-in queries:

  • padv.builtin.query.GetIterationArtifact

  • padv.builtin.query.GetOutputsOfDependentTask on the task padv.builtin.task.RunTestsPerTestCase

Location for standard task outputs, specified as a string.

The built-in tasks use tokens, like $DEFAULTOUTPUTDIR$, as placeholders for dynamic path resolution during run-time. For more information, see Dynamically Resolve Paths with Tokens.

Data Types: string

Test Result Options

Name of the report author, specified as a string.

Example: "My Team Name"

Data Types: string

Include the signal comparison plots defined under baseline criteria, equivalence criteria, or assessments using the verify operator in the test case, specified as a numeric or logical 1 (true) or 0 (false).

Example: true

Data Types: logical

Include coverage metrics that are collected at test execution, specified as a numeric or logical 1 (true) or 0 (false).

Example: false

Data Types: logical

Include error messages from the test case simulations, specified as a numeric or logical 1 (true) or 0 (false).

Example: false

Data Types: logical

Include the figures opened from a callback script, custom criteria, or by the model in the report, specified as a numeric or logical 1 (true) or 0 (false).

Example: true

Data Types: logical

Include the version of MATLAB used to run the test cases, specified as a numeric or logical 1 (true) or 0 (false).

Example: false

Data Types: logical

Include simulation metadata for each test case or iteration, specified as a numeric or logical 1 (true) or 0 (false).

Example: true

Data Types: logical

Include the simulation output plots of each signal, specified as a numeric or logical 1 (true) or 0 (false).

Example: true

Data Types: logical

Include the test requirement link defined under Requirements in the test case, specified as a numeric or logical 1 (true) or 0 (false).

Example: false

Data Types: logical

Include all or a subset of test results in the report, specified as either:

  • 0 — Passed and failed results

  • 1 — Only passed results

  • 2 — Only failed results

Example: 2

Open the generated report, specified as a numeric or logical 1 (true) or 0 (false).

Example: true

Data Types: logical

Task loads simulation signal data when loading test results, specified as a numeric or logical 1 (true) or 0 (false).

Example: true

Data Types: logical

Number of columns of plots to include on report pages, specified as an integer 1, 2, 3, or 4.

Example: 2

Number of rows of plots to include on report pages, specified as an integer 1, 2, 3, or 4.

Example: 1

Output format for the generated report, specified as either:

  • "pdf" — PDF format

  • "docx" — Microsoft® Word document format

  • "zip" — Zipped file

Example: "zip"

Path to the generated report, specified as a string.

The built-in tasks use tokens, like $DEFAULTOUTPUTDIR$, as placeholders for dynamic path resolution during run-time. For more information, see Dynamically Resolve Paths with Tokens.

Data Types: string

File name for the generated report, specified as a string.

The built-in tasks use tokens, like $ITERATIONARTIFACT$, as placeholders for dynamic path resolution during run-time. For more information, see Dynamically Resolve Paths with Tokens.

Data Types: string

Title of the report, specified as a string.

The built-in tasks use tokens, like $ITERATIONARTIFACT$, as placeholders for dynamic path resolution during run-time. For more information, see Dynamically Resolve Paths with Tokens.

Data Types: string

Coverage Report Options

Include each test in the model summary, specified as a numeric or logical 1 (true) or 0 (false).

Example: true

Data Types: logical

Produce bar graphs in the model summary, specified as a numeric or logical 1 (true) or 0 (false).

Example: false

Data Types: logical

Include cyclomatic complexity numbers in block details, specified as a numeric or logical 1 (true) or 0 (false).

Example: false

Data Types: logical

Include cyclomatic complexity numbers in summary, specified as a numeric or logical 1 (true) or 0 (false).

Example: false

Data Types: logical

Exclude fully covered model objects from report, specified as a numeric or logical 1 (true) or 0 (false).

Example: true

Data Types: logical

Exclude fully covered model object details from report, specified as a numeric or logical 1 (true) or 0 (false).

Example: false

Data Types: logical

Filter Execution metric from report, specified as a numeric or logical 1 (true) or 0 (false).

Example: true

Data Types: logical

Filter Stateflow events from report, specified as a numeric or logical 1 (true) or 0 (false).

Example: true

Data Types: logical

Generate web view report, specified as a numeric or logical 1 (true) or 0 (false).

Example: true

Data Types: logical

Display hit/count ratio in the model summary, specified as a numeric or logical 1 (true) or 0 (false).

Example: true

Data Types: logical

Name of generated model coverage report, specified as a string. The report is an aggregated coverage report for normal simulation mode results.

The built-in tasks use tokens, like $ITERATIONARTIFACT$, as placeholders for dynamic path resolution during run-time. For more information, see Dynamically Resolve Paths with Tokens.

Example: "myModel_NormalModelCoverage_Report.html"

Data Types: string

Name of generated software-in-the-loop (SIL) code coverage report, specified as a string. The report is an aggregated coverage report for SIL mode results.

The built-in tasks use tokens, like $ITERATIONARTIFACT$, as placeholders for dynamic path resolution during run-time. For more information, see Dynamically Resolve Paths with Tokens.

Example: "myModel_SIL_CodeCoverage_Report.html"

Data Types: string

Name of generated processor-in-the-loop (PIL) code coverage report, specified as a string. The report is an aggregated coverage report for PIL mode results.

The built-in tasks use tokens, like $ITERATIONARTIFACT$, as placeholders for dynamic path resolution during run-time. For more information, see Dynamically Resolve Paths with Tokens.

Example: "myModel_PIL_CodeCoverage_Report.html"

Data Types: string

Show coverage report, specified as a numeric or logical 1 (true) or 0 (false).

Example: true

Data Types: logical

Use two-color bar graphs, specified as a numeric or logical 1 (true) or 0 (false).

Example: false

Data Types: logical

Methods


Examples


Add a task that can generate a consolidated test results report and merged coverage reports for models in your project.

Open the process model for your project. If you do not have a process model, open the Process Advisor app to automatically create a process model.

In the process model file, add the MergeTestResults task to your process model by using the addTask method.

mergeTestTask = pm.addTask(padv.builtin.task.MergeTestResults);

You can reconfigure the task behavior by using the task properties. For example, to change where the consolidated test results report and merged model coverage report generate:

defaultTestResultPath = fullfile('$DEFAULTOUTPUTDIR$','test_results');
mergeTestTask.ReportPath = defaultTestResultPath;
mergeTestTask.CovReportPath = defaultTestResultPath;

The MergeTestResults task requires outputs from the RunTestsPerTestCase task. Specify this dependency in your process model by using the dependsOn method.

To make sure that your process model runs your tests using the built-in task RunTestsPerTestCase before running the MergeTestResults task, you can use conditional logic in your process model. For example:

includeTestsPerTestCaseTask = true;
includeMergeTestResultsTask = true;

%% Run tests per test case
% Tools required: Simulink Test
if includeTestsPerTestCaseTask
    milTask = pm.addTask(padv.builtin.task.RunTestsPerTestCase);
    % ... Optionally specify task property values
end

%% Merge test results
% Tools required: Simulink Test (and optionally Simulink Coverage)
if includeTestsPerTestCaseTask && includeMergeTestResultsTask
    mergeTestTask = pm.addTask(padv.builtin.task.MergeTestResults);
    % ... Optionally specify task property values
end

% Set task dependencies
if includeTestsPerTestCaseTask && includeMergeTestResultsTask
    mergeTestTask.dependsOn(milTask,WhenStatus=["Pass","Fail"]);
end

This code specifies that the MergeTestResults task runs when the status of the RunTestsPerTestCase task is either a passing or failing result. If you want to run the MergeTestResults task only when the RunTestsPerTestCase task passes, you can specify WhenStatus as "Pass" instead. For more information, see dependsOn.

Tips

Run your tests using the built-in task padv.builtin.task.RunTestsPerTestCase.