Heat Validation via Command Line
Note
This section assumes you have already acquired and set up the validation scripts as described in the VVP Installation section.
Basic Script Execution
The scripts must be executed from the `ice_validator` directory. Navigate to that directory first:

```shell
cd <vvp-directory>/ice_validator
```
To run the script with default options, issue the following command, where `<Directory>` is the directory containing the Heat templates:

```shell
pytest tests --template-directory=<Directory>
```
Test results will be written to the console, describing any failures that are encountered. If no failures are found, a success message will be displayed. Additional reports can be found in the `<VVP Directory>/ice_validator/output` directory. See Reports for more details.
Command Line Options
Additional options can be specified at the command line to customize output and execution. VVP uses the pytest framework for test execution, and therefore all pytest options are available.
Please refer to the pytest documentation for information on the command-line options that framework provides. We will only document the specific options that are used by VVP:
```
--template-directory=TEMPLATE_DIR
                      Directory that holds the templates for validation
--category=CATEGORY_NAME
                      Optional category of additional validations to execute.
                      The only supported value at this time is
                      environment_file, which enforces that specific
                      parameter values are excluded from the environment
                      file per ONAP Heat requirements.
--self-test           Test the unit tests against their fixtured data
--report-format=REPORT_FORMAT
                      Format of the output report (html, csv, excel, json)
--continue-on-failure
                      Continue validation even when structural errors exist
                      in input files
--output-directory=OUTPUT_DIR
                      Alternate directory for report output
--env-directory=ENV_DIR
                      Optional directory of .env files for preload
                      generation
--preload-format=PRELOAD_FORMATS
                      Preload format to create (multiple allowed). If not
                      provided, all available formats will be created:
                      GR-API, VNF-API
```
VVP Reports
After completion of the validations, several reports will be written to the output directory (specified via `--output-directory`, or `<vvp-directory>/ice_validator/output` by default).
| Report | Description |
|---|---|
|  | Full report of validation results showing the pass or failure of each validation and details on all errors. See Validation Report for information on the HTML, CSV, and Excel versions of the report, or JSON Report for details on the machine-readable results. |
|  | Mapping from Heat requirement to test case. |
|  | This file should be empty; if present, it shows tests that are mapped to requirements that no longer exist. |
|  | Similar information to |
|  | Deprecated JSON version of test failures. Use |
|  | A blank preload will be created for every VF Module in the template directory. The |
Validation Report
If a report format of html (default), excel, or csv is requested via the `--report-format` option, then a report file will be written to the output directory. Regardless of format, the file will contain a header section that summarizes the results and files scanned, and an error section that has a row for each failure with four columns.
Header Information
| Header Element | Description |
|---|---|
| Categories Selected | Any additional categories selected via |
| Tool Version | Version of the tool that produced the report |
| Report Generated At | The timestamp of when the report was generated |
| Directory Validated | Absolute path to the directory validated |
| Checksum | Unique MD5 hash of the contents of the template directory |
| Total Errors | Number of errors/violations found |
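To illustrate how a directory checksum such as the one in the header can be derived, here is a minimal Python sketch. Note: the exact ordering and scope of VVP's checksum algorithm is an assumption; this example simply hashes the bytes of every file under the directory in sorted path order.

```python
import hashlib
from pathlib import Path

def template_directory_checksum(directory: str) -> str:
    """Compute an MD5 hash over the contents of every file in a directory.

    NOTE: sorted-path ordering is an assumption for illustration; VVP's
    actual checksum algorithm may differ in ordering or scope.
    """
    digest = hashlib.md5()
    for path in sorted(Path(directory).rglob("*")):
        if path.is_file():
            digest.update(path.read_bytes())
    return digest.hexdigest()
```

Because the input is hashed deterministically, validating the same template directory twice yields the same checksum, which makes the header value useful for confirming which exact set of files a report describes.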
Error Information
If any violations are found, then there will be one row for each violation with the following columns:
| Column Name | Description |
|---|---|
| Files | The file or files that were scanned as part of the test. |
| Tests | Name of the test case (not shown in the HTML version). |
| Error Message | The test and a brief error message from the test that failed. This will contain details about the element that triggered the violation, such as the parameter name, resource ID, etc. In the HTML version of the report, this column also shows the test case name and provides a link to |
| Requirements | The requirement ID and text that was violated. |
| Resolution Steps | For some violations, there are pre-defined resolution steps that indicate what action the user should take to resolve the violation. Note: not all violations have resolution steps; for the rest, the error message and requirement are sufficient. |
| Raw Test Output | Full output from the pytest test case. This is not a dedicated column in the HTML version of the report. |
JSON Report
This report is intended to provide a machine-readable version of the test execution, and provides the most comprehensive summary of the test execution and results.
File Header/Top Level
The top level will include a summary of available execution metadata.
NOTE: The `tests` and `requirements` entries are elided in the example below.
Example Header:
```json
{
    "version": "dublin",
    "template_directory": "/path/to/template",
    "timestamp": "2019-01-21T02:11:07.305000",
    "checksum": "6296aa211870634f9b4a23477c5eab28",
    "profile": "",
    "outcome": "FAIL",
    "tests": [],
    "requirements": []
}
```
Header Definition:
| Field Name | Required/Optional/Conditional | Data Type | Valid Values | Description |
|---|---|---|---|---|
|  | Required |  |  | MD5 hash of all file contents in the |
|  | Required |  | One of the valid values: |  |
|  | Optional | array of |  | Categories selected via the |
|  | Required |  |  | Absolute path of the directory containing the Heat templates that were validated |
|  | Required |  |  | Version of the validation scripts that produced the report |
|  | Required |  |  | ISO 8601 timestamp in UTC of when the report was generated |
|  | Required | List of Test Result |  | See Test Result |
|  | Required | List of Requirement Result |  |  |
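The top-level fields can be consumed programmatically; here is a minimal sketch in Python using the field names from the example above (the `report.json` filename in the usage comment is an assumption, not a documented name):

```python
import json

def summarize_report(report: dict) -> str:
    """Return a one-line summary of a VVP JSON report's top-level fields."""
    return (
        f"{report['outcome']}: validated {report['template_directory']} "
        f"({len(report['tests'])} tests, checksum {report['checksum']})"
    )

# Typical usage (the report filename is an assumption):
# with open("output/report.json") as f:
#     print(summarize_report(json.load(f)))
```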
Test Result
For each test result, a JSON object will be provided that informs the consumer which test was run, its result, and the requirements it validated.
Example Test Result:
```json
{
    "files": [
        "/Users/username/Desktop/stark_template2/STARKDB-nested-1.yaml",
        "/Users/username/Desktop/stark_template2/base_starkdb.yaml"
    ],
    "test_module": "test_resource_indices",
    "test_case": "test_indices_start_at_0_increment",
    "result": "FAIL",
    "error": " Index values associated with resource ID prefix STARKDB_server_ do not start at 0\n",
    "requirements": [
        {
            "id": "R-11690",
            "text": "When a VNF's Heat Orchestration Template's Resource ID contains an\n``{index}``, the ``{index}`` is a numeric value that **MUST** start at\nzero and **MUST** increment by one.\n\nAs stated in R-16447,\n*a VNF's <resource ID> MUST be unique across all Heat\nOrchestration Templates and all HEAT Orchestration Template\nNested YAML files that are used to create the VNF*. While the ``{index}``\nwill start at zero in the VNF, the ``{index}`` may not start at zero\nin a given Heat Orchestration Template or HEAT Orchestration Template\nNested YAML file.",
            "keyword": "MUST"
        }
    ]
}
```
Test Result Definition:
| Field Name | Required/Optional/Conditional | Data Type | Valid Values | Description |
|---|---|---|---|---|
|  | Required | list of |  | List of files that were passed to the test case. NOTE: If |
|  | Required |  |  | Name of the module/file that contains the test case |
|  | Required |  |  | Name of the test case |
|  | Required |  | One of the valid values: |  |
|  | Required |  |  | If the test failed or encountered an error, then this will be a message summarizing the error. If the test passed or was skipped, then this will be an empty string |
|  | Required | List of Requirement Metadata |  |  |
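As a sketch, the per-test entries can be filtered to list the failing test cases together with the requirement IDs they validate (field names as in the example above):

```python
def failing_tests(report: dict) -> list:
    """Return (test_case, requirement IDs, error) tuples for failed tests."""
    failures = []
    for test in report.get("tests", []):
        if test["result"] == "FAIL":
            req_ids = [req["id"] for req in test.get("requirements", [])]
            failures.append((test["test_case"], req_ids, test["error"].strip()))
    return failures
```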
Requirement Metadata
For each test case, the following requirement metadata will be reported.
Example Requirement Metadata:
```json
{
    "id": "R-11690",
    "text": "When a VNF's Heat Orchestration Template's Resource ID contains an\n``{index}``, the ``{index}`` is a numeric value that **MUST** start at\nzero and **MUST** increment by one.\n\nAs stated in R-16447,\n*a VNF's <resource ID> MUST be unique across all Heat\nOrchestration Templates and all HEAT Orchestration Template\nNested YAML files that are used to create the VNF*. While the ``{index}``\nwill start at zero in the VNF, the ``{index}`` may not start at zero\nin a given Heat Orchestration Template or HEAT Orchestration Template\nNested YAML file.",
    "keyword": "MUST"
}
```
Requirement Metadata Definition:
| Field Name | Required/Optional/Conditional | Data Type | Valid Values | Description |
|---|---|---|---|---|
|  | Required |  |  | Requirement ID from the VNFRQTS project |
|  | Required |  |  | Full text of the requirement |
|  | Required |  | MUST, MUST NOT, MAY, SHOULD, SHOULD NOT | RFC 2119 keyword of the requirement |
Requirement Result
The file also includes an aggregated view of adherence to the VNF Requirements validated by the validation scripts. Since some requirements have multiple test cases, these results roll up to an aggregated result for each requirement. This section does not include detailed test results; if you require detailed error information, refer to the tests section of the results.
Example Requirement Result:
```json
{
    "id": "R-16447",
    "text": "A VNF's <resource ID> **MUST** be unique across all Heat\nOrchestration Templates and all HEAT Orchestration Template\nNested YAML files that are used to create the VNF.",
    "keyword": "MUST",
    "result": "FAIL",
    "errors": [
        "The error message"
    ]
}
```
Requirement Result Definition:
| Field Name | Required/Optional/Conditional | Data Type | Valid Values | Description |
|---|---|---|---|---|
|  | Required |  |  | Requirement ID from the VNFRQTS project. NOTE: a requirement ID of "Unmapped" may be included if one or more tests are not mapped to a requirement. |
|  | Required |  |  | Full text of the requirement. |
|  | Required |  |  | RFC 2119 keyword associated with the requirement |
|  | Required |  | One of the valid values: |  |
|  | Required | List of |  | Error messages associated with this requirement. This will be an empty string if the result is |
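A sketch of rolling the aggregated requirement results into per-outcome counts (field names as in the example above; the set of possible `result` values beyond PASS/FAIL is not assumed):

```python
from collections import Counter

def requirement_outcomes(report: dict) -> Counter:
    """Count requirement-level results (e.g. PASS/FAIL) in a VVP JSON report."""
    return Counter(req["result"] for req in report.get("requirements", []))
```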
Docker Execution
A version of VVP is also provided as a Docker image. If your environment supports Docker, this eliminates the need to set up and install the application from source code.
To execute from Docker, issue the following command, where `<Local Template Directory>` is where the Heat templates are located and `<Local Report Directory>` is where you would like the reports to be written on your local machine:

```shell
docker run --rm -i -v ~/<Local Template Directory>/:/template \
    -v ~/<Local Report Directory>:/reports \
    onap/vvp/validation-scripts --template-directory=/template \
    --output-directory=/reports
```
The same command line options available when running from source can also be used with the Docker image.