CONTROLLER DESIGN STUDIO (CDS)
Introduction
The system is designed to be self service, which means that users, not just programmers, can reconfigure the software system as needed to meet customer requirements. To accomplish this goal, the system is built around models that provide for real-time changes in how the system operates. Users merely need to change a model to change how a service operates.
Self service is a completely new way of delivering services. It removes the dependence on code releases and the delays they cause, and puts control of services into the hands of the service providers. They can change a model and its parameters and create a new service without writing a single line of code, making service providers more responsive to their customers and able to deliver products that more closely match customer needs.
Architecture
The Controller Design Studio is composed of two major components:
The GUI (or frontend)
The Run Time (or backend)
The GUI handles direct user input and displays both design time and run time activities. For design time, it supports the creation of a controller blueprint: selecting the DGs to be included, incorporating the artifact templates, and adding the necessary components. For run time, it allows the user to direct the system to resolve the unresolved elements of the controller blueprint and download the resulting configuration into a VNF.
At a more basic level, it allows for the creation of data dictionaries, capabilities catalogs, and controller blueprints, the basic elements used to generate a configuration. The essential function of the Controller Design Studio is to create and populate a controller blueprint, create a configuration file from this controller blueprint, and download this configuration file (configlet) to a VNF/PNF.
Modeling Concept
In the Dublin release, the CDS community contributed a framework to automate the resolution of resources for instantiation and any config provisioning operation, such as day0, day1 or day2 configuration.
The content of the CBA Package is driven from a catalog of reusable data dictionaries, components and workflows, delivering a reusable and simplified self service experience.
The modeling is TOSCA-based and JSON-formatted, following this standard: http://docs.oasis-open.org/tosca/TOSCA-Simple-Profile-YAML/v1.2/csd01/TOSCA-Simple-Profile-YAML-v1.2-csd01.html
Most of the TOSCA modeled entities presented in the documentation below can be found here: https://github.com/onap/ccsdk-cds/tree/master/components/model-catalog/definition-type/starter-type
Modeling Concepts
CDS is a framework to automate the resolution of resources for instantiation and any config provisioning operation, such as day0, day1 or day2 configuration.
CDS has both design time and run time activities. During design time, a designer can define what actions are required for a given service, along with anything comprising the action. The design produces a CBA Package whose content is driven from a catalog of reusable data dictionaries and components, delivering a reusable and simplified self service experience.
CDS modelling is mainly based on the TOSCA standard, using JSON as representation.
Most of the TOSCA modeled entities presented in the documentation below can be found here.
Controller Blueprint Archive (.cba)
The Controller Blueprint Archive is the overall service design, fully model-driven, intent based package needed for SELF SERVICE provisioning and configuration management automation.
The CBA is a .zip file, comprised of the following folder structure (the files may vary):
├── Definitions
│ ├── blueprint.json Overall TOSCA service template (workflow + node_template)
│ ├── artifact_types.json (generated by enrichment)
│ ├── data_types.json (generated by enrichment)
│ ├── policy_types.json (generated by enrichment)
│ ├── node_types.json (generated by enrichment)
│ ├── relationship_types.json (generated by enrichment)
│ ├── resources_definition_types.json (generated by enrichment, based on Data Dictionaries)
│ └── *-mapping.json One per Template
│
├── Environments Contains *.properties files as required by the service
│
├── Plans Contains Directed Graph
│
├── Tests Contains uat.yaml file for testing cba actions within a cba package
│
├── Scripts Contains scripts
│ ├── python Python scripts
│ └── kotlin Kotlin scripts
│
├── TOSCA-Metadata
│ └── TOSCA.meta Meta-data of overall package
│
└── Templates Contains combination of mapping and template
To process a CBA for any service, we need to enrich it first. This will gather all the node-type, data-type, artifact-type and data-dictionary definitions referenced in the blueprint.json.
Tosca Meta
The TOSCA meta file captures the model entities that compose the CBA package: name, version, type and searchable tags.
Attribute | R/C/O | Data Type | Description
---|---|---|---
TOSCA-Meta-File-Version | Required | String | The attribute that holds the TOSCA-Meta-File-Version. Set to 1.0.0
CSAR-Version | Required | String | The attribute that holds the CSAR-Version. Set to 1.0
Created-By | Required | String | The user(s) that created the CBA
Entry-Definitions | Required | String | The attribute that holds the entry-point file path to the main CBA TOSCA definition file or non-TOSCA script file
Template-Name | Required | String | The attribute that holds the blueprint name
Template-Version | Required | String | The attribute that holds the blueprint version, X.Y.Z (X = major version, Y = minor version, Z = revision version), e.g. 1.0.0
Template-Type | Required | String | The attribute that holds the blueprint package type. Valid options: DEFAULT, KOTLIN_DSL, GENERIC_SCRIPT (see Template Type Reference below). If not specified in the TOSCA.meta file, the default is "DEFAULT"
Template-Tags | Required | String | The attribute that holds a comma-delimited list of searchable attributes for the blueprint package
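For illustration, a TOSCA.meta file for a hypothetical blueprint could look like this (name, creator and tags are placeholders):
TOSCA-Meta-File-Version: 1.0.0
CSAR-Version: 1.0
Created-By: Designer <designer@example.com>
Entry-Definitions: Definitions/blueprint.json
Template-Name: sample-blueprint
Template-Version: 1.0.0
Template-Type: DEFAULT
Template-Tags: sample-blueprint, resource-assignment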
Template Type Reference
Default Template Type
KOTLIN_DSL Template Type
GENERIC_SCRIPT Template Type
Dynamic Payload
One of the most important APIs provided by the run time is the one used to execute a CBA Package.
The nature of this API request and response is model driven and dynamic.
Here is what a generic request and response look like.
Request:
{
"commonHeader": {
"originatorId": "",
"requestId": "",
"subRequestId": ""
},
"actionIdentifiers": {
"blueprintName": "",
"blueprintVersion": "",
"actionName": "",
"mode": ""
},
"payload": {
"$actionName-request": {
"$actionName-properties": {
}
}
}
}
Response:
{
"commonHeader": {
"originatorId": "",
"requestId": "",
"subRequestId": ""
},
"actionIdentifiers": {
"blueprintName": "",
"blueprintVersion": "",
"actionName": "",
"mode": ""
},
"payload": {
"$actionName-response": {
}
}
}
The actionName, under actionIdentifiers, refers to the name of a Workflow (see Workflow).
The content of the payload is what is fully dynamic / model driven.
The first top-level element will always be either $actionName-request for a request or $actionName-response for a response. The content within this element is fully based on the workflow inputs and outputs.
During the Enrichment, CDS will aggregate all the resources defined to be resolved as input (see Node type -> Source -> Input) within mapping definition files (see Artifact Type -> Mapping) into a data-type, which will then be used as the type of an input called $actionName-properties.
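For example, assuming an action named resource-assignment whose dynamic data-type has a single vnf-id property, a concrete request could look like the sketch below (header values are placeholders):
{
  "commonHeader": {
    "originatorId": "ONAP",
    "requestId": "123456-1000",
    "subRequestId": "sub-123456-1000"
  },
  "actionIdentifiers": {
    "blueprintName": "sample-blueprint",
    "blueprintVersion": "1.0.0",
    "actionName": "resource-assignment",
    "mode": "sync"
  },
  "payload": {
    "resource-assignment-request": {
      "resource-assignment-properties": {
        "vnf-id": "abc-123"
      }
    }
  }
}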
Enrichment
The idea is that the CBA is a self-sufficient package, hence it requires the definitions of all the types it uses.
The reason is that those types might evolve. In order for the CBA to be bound to the versions it was designed with, these types are embedded in the CBA, so if they change, the CBA is not affected.
The enrichment process will complete the package by providing the definitions of all types used:
gather all the node-types used and put them into a node_types.json file
gather all the data-types used and put them into a data_types.json file
gather all the artifact-types used and put them into an artifact_types.json file
gather all the data dictionary definitions used from within the mapping files and put them into a resources_definition_types.json file
Warning
Before uploading a CBA, it must be enriched. If your package is already enriched, you do not need to perform enrichment again.
The enrichment can be run using the REST API, and requires the .zip file as input. It will return an enriched-cba.zip file.
curl -X POST \
'http://{{ip}}:{{cds-designtime}}/api/v1/blueprint-model/enrich' \
-H 'content-type: multipart/form-data' \
-F file=@cba.zip
The enrichment process will also, for all resources to be resolved as input and default:
dynamically gather them under a data-type named dt-${actionName}-properties
add it as an input of the workflow, using the name ${actionName}-properties
Example for a workflow named resource-assignment:
{
  "resource-assignment-properties": {
    "required": true,
    "type": "dt-resource-assignment-properties"
  }
}
External Systems support
Interaction with external systems is made dynamic and pluggable, removing the development cycle otherwise needed to support a new endpoint. In order to share the external system information, TOSCA provides a way to create macros using dsl_definitions. Link to TOSCA spec: info 1, info 2.
Use cases:
Resource resolution using REST (see tab Node Type) or SQL (see tab Node Type) external systems
gRPC is supported for remote execution
Any REST endpoint can be dynamically injected as part of the scripting framework
Here are some examples on how to populate the system information within the package:
token-auth:
{
. . .
"dsl_definitions": {
"ipam-1": {
"type": "token-auth",
"url": "http://netbox-nginx.netprog:8080",
"token": "Token 0123456789abcdef0123456789abcdef01234567"
    }
  }
  . . .
}

basic-auth:
{
. . .
"dsl_definitions": {
"ipam-1": {
"type": "basic-auth",
"url": "http://localhost:8080",
"username": "bob",
"password": "marley"
}
}
. . .
}
ssl-basic-auth:
{
. . .
"dsl_definitions": {
"ipam-1": {
"type" : "ssl-basic-auth",
"url" : "http://localhost:32778",
"keyStoreInstance": "JKS or PKCS12",
"sslTrust": "trusture",
"sslTrustPassword": "trustore password",
"sslKey": "keystore",
"sslKeyPassword: "keystore password"
}
}
. . .
}
grpc-executor:
{
. . .
"dsl_definitions": {
"remote-executor": {
"type": "token-auth",
"host": "cds-command-executor.netprog",
"port": "50051",
"token": "Basic Y2NzZGthcHBzOmNjc2RrYXBwcw=="
}
}
. . .
}
maria-db:
{
. . .
"dsl_definitions": {
"netprog-db": {
"type": "maria-db",
"url": "jdbc:mysql://10.195.196.123:32050/netprog",
"username": "netprog",
"password": "netprog"
}
}
. . .
}
Expression
TOSCA provides for a set of functions to reference elements within the template or to retrieve runtime values.
Below is a list of supported expressions
get_input
The get_input function is used to retrieve the values of properties declared within the inputs section of a TOSCA Service Template.
Within CDS, this is mainly Workflow inputs.
Example:
"resolution-key": {
"get_input": "resolution-key"
}
get_property
The get_property function is used to retrieve property values between modelable entities defined in the same service template.
Example:
"get_property": ["SELF", "property-name"]
get_attribute
The get_attribute function is used to retrieve the values of named attributes declared by the referenced node or relationship template name.
Example:
"get_attribute": [
"resource-assignment",
"assignment-params"
]
get_operation_output
The get_operation_output function is used to retrieve the values of variables exposed / exported from an interface operation.
Example:
"get_operation_output": ["SELF", "interface-name", "operation-name", "output-property-name"]
get_artifact
The get_artifact function is used to retrieve artifact location between modelable entities defined in the same service template.
Example:
"get_artifact" : ["SELF", "artifact-template", "location", true]
Data Dictionary
A data dictionary models how a specific resource can be resolved.
A resource is a variable/parameter in the context of the service. It can be anything, but it should not be confused with SDC or OpenStack resources.
A data dictionary can have multiple sources to handle resolution in different ways.
The main goal of a data dictionary is to define reusable entities that can be shared.
Creation of data dictionaries is a standalone activity, separated from the blueprint design.
As part of modelling a data dictionary entry, the following generic information should be provided:
Property | Description | Scope
---|---|---
updated-by | The creator | Mandatory
tags | Related information | Mandatory
sources | List of resource source instances (see resource source) | Mandatory
property | Defines type and description, as nested JSON | Mandatory
name | Data dictionary name | Mandatory
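For instance, a minimal data dictionary whose resource is simply provided as request input could look like the sketch below (the name is illustrative; the input source type is described under Node type -> Source):
{
  "name": "vnf-id",
  "tags": "vnf-id",
  "updated-by": "designer",
  "property": {
    "description": "VNF identifier",
    "type": "string"
  },
  "sources": {
    "input": {
      "type": "source-input",
      "properties": {}
    }
  }
}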
Below are properties that all resource sources can have.
The modeling allows data translation between an external capability and CDS, for both input and output key mapping.

Property | Description | Scope
---|---|---
input-key-mapping | Map of resources required to perform the request/query. The left-hand side is what is used within the query/request; the right-hand side refers to a data dictionary instance. | Optional
output-key-mapping | Name of the resource to be resolved, mapped to the value resolved by the request/query. | Optional
key-dependencies | List of data dictionary instances to be resolved prior to the resolution of this specific resource. During run time execution the key dependencies are recursively sorted and resolved in batches using an acyclic-graph algorithm. | Optional
Example:
vf-module-model-customization-uuid and vf-module-label are two data dictionaries. A SQL table, VF_MODULE_MODEL, exists to correlate them.
Here is how input-key-mapping, output-key-mapping and key-dependencies can be used:
vf-module-label data dictionary:
{
"name" : "vf-module-label",
"tags" : "vf-module-label",
"updated-by" : "adetalhouet",
"property" : {
"description" : "vf-module-label",
"type" : "string"
},
"sources" : {
"primary-db" : {
"type" : "source-primary-db",
"properties" : {
"type" : "SQL",
"query" : "select sdnctl.VF_MODULE_MODEL.vf_module_label as vf_module_label from sdnctl.VF_MODULE_MODEL where sdnctl.VF_MODULE_MODEL.customization_uuid=:customizationid",
"input-key-mapping" : {
"customizationid" : "vf-module-model-customization-uuid"
},
"output-key-mapping" : {
"vf-module-label" : "vf_module_label"
},
"key-dependencies" : [ "vf-module-model-customization-uuid" ]
}
}
}
}
Data type
Represents the schema of a specific type of data.
Both primitive data types (such as string, boolean and integer) and complex data types (such as list, map and json) are supported.
For a complex data type, an entry schema is required, defining the type of the values contained within the complex type (for a list or map).
Users can create as many data types as needed.
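For example, a property of type list declares its entry schema as follows (this is the pattern used throughout this document):
"dependencies": {
  "required": true,
  "type": "list",
  "entry_schema": {
    "type": "string"
  }
}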
Note
Creating Custom Data Types:
To create a custom data type you can use a POST call to the CDS endpoint: "<cds-ip>:<cds-port>/api/v1/model-type"
{
"model-name": "<model-name>",
"derivedFrom": "tosca.datatypes.Root",
"definitionType": "data_type",
"definition": {
"description": "<description>",
"version": "<version-number: eg 1.0.0>",
"properties": "code-block::{<add properties of your custom data type in JSON format>}",
"derived_from": "tosca.datatypes.Root"
},
"description": "<description",
"version": "<version>",
"tags": "<model-name>,datatypes.Root.data_type",
"creationDate": "<creation timestamp>",
"updatedBy": "<name>"
}
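To register the definition, the JSON above can be posted to the endpoint mentioned in the note, for example (host, port and file name are deployment-specific):
curl -X POST http://<cds-ip>:<cds-port>/api/v1/model-type \
  -H 'content-type: application/json' \
  -d @custom-data-type.json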
Data types are useful to manipulate data during resource resolution. They can be used to format the JSON output as needed.
A list of existing data types can be found here: https://github.com/onap/ccsdk-cds/tree/master/components/model-catalog/definition-type/starter-type/data_type
Below is a list of existing data types.
datatype-resource-assignment
Used to define entries within artifact-mapping-resource (see Artifact Type -> artifact-mapping-resource).
This data type represents a resource to be resolved. We also refer to it as an instance of a data dictionary, as it is directly linked to its definition.
Property | Description
---|---
property | Defines what the resource looks like (see datatype-property below)
input-param | Whether the resource can be provided as input
dictionary-name | Reference to the name of the data dictionary (see Data Dictionary)
dictionary-source | Reference to the source used to resolve the resource (see resource source)
dependencies | List of dependencies required to resolve this resource
updated-date | Date when the mapping was uploaded
updated-by | Name of the person that updated the mapping
{
"version": "1.0.0",
"description": "This is Resource Assignment Data Type",
"properties": {
"property": {
"required": true,
"type": "datatype-property"
},
"input-param": {
"required": true,
"type": "boolean"
},
"dictionary-name": {
"required": false,
"type": "string"
},
"dictionary-source": {
"required": false,
"type": "string"
},
"dependencies": {
"required": true,
"type": "list",
"entry_schema": {
"type": "string"
}
},
"updated-date": {
"required": false,
"type": "string"
},
"updated-by": {
"required": false,
"type": "string"
}
},
"derived_from": "tosca.datatypes.Root"
}
datatype-property
Used to define the property entry of a resource assignment.

Property | Description
---|---
type | Whether it is a primitive type or a defined data-type
description | Description of the property
required | Whether it is required or not
default | Default value to provide, if any
entry_schema | If the type is a complex one, such as list, defines the type of the elements within the list
{
  "version": "1.0.0",
  "description": "This is Resource Assignment Property Data Type",
  "properties": {
    "type": {
      "required": true,
      "type": "string"
    },
    "description": {
      "required": false,
      "type": "string"
    },
    "required": {
      "required": false,
      "type": "boolean"
    },
    "default": {
      "required": false,
      "type": "string"
    },
    "entry_schema": {
      "required": false,
      "type": "string"
    }
  },
  "derived_from": "tosca.datatypes.Root"
}
Artifact Type
Represents the type of an artifact, used to identify the implementation of the functionality supporting this type of artifact.
This node was created, derived from tosca.artifacts.Root, to be the root TOSCA node for all artifacts.
{
"description": "TOSCA base type for implementation artifacts",
"version": "1.0.0",
"derived_from": "tosca.artifacts.Root"
}
Below is a list of supported artifact types.
artifact-template-velocity
Represents an Apache Velocity template.
Apache Velocity allows inserting logic (if / else / loops / etc.) when processing the output of a template/text.
The file must have the .vtl extension.
The template can represent anything, such as a device config, a payload to interact with third-party systems, a resource-accumulator template, etc.
Often a template will be parameterized, and each parameter must be defined within a mapping file (see 'Mapping' in this tab).
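For illustration, a minimal parameterized Velocity template producing a JSON device config could look like this (the hostname parameter is hypothetical and would need a matching entry in the mapping file):
{
  "config": {
    "hostname": "${hostname}"
  }
}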
Here is the TOSCA artifact type:
{
"description": "TOSCA base type for implementation artifacts",
"version": "1.0.0",
"derived_from": "tosca.artifacts.Root"
}
artifact-template-jinja
Represents a Jinja template.
Jinja templates allow inserting logic (if / else / loops / etc.) when processing the output of a template/text.
The file must have the .jinja extension.
The template can represent anything, such as a device config, a payload to interact with third-party systems, a resource-accumulator template, etc.
Often a template will be parameterized, and each parameter must be defined within a mapping file.
Here is the TOSCA artifact type:
{
"description": " Jinja Template used for Configuration",
"version": "1.0.0",
"file_ext": [
"jinja"
],
"derived_from": "tosca.artifacts.Implementation"
}
artifact-mapping-resource
This type is meant to represent mapping files defining the contract of each resource to be resolved.
Each parameter in a template must have a corresponding mapping definition, modeled using datatype-resource-assignment (see Data type -> datatype-resource-assignment).
Hence the mapping file is meant to be a list of entries defined using datatype-resource-assignment (see Data type -> datatype-resource-assignment).
The file must have the .json extension.
Here is the TOSCA artifact type:
{
"description": "Resource Mapping File used along with Configuration template",
"version": "1.0.0",
"file_ext": [
"json"
],
"derived_from": "tosca.artifacts.Implementation"
}
The mapping file basically contains a reference to the data dictionary to use to resolve a particular resource.
The data dictionary defines the HOW, and the mapping defines the WHAT.
Relation between data dictionary, mapping and template:
Below are two color-coded examples to help understand the relationships.
In orange is the information regarding the template. As mentioned before, the template is part of the blueprint itself, and for the blueprint to know which template to use, the names have to match.
In green is the relationship between the value resolved within the template and how it is mapped coming from the blueprint.
In blue is the relationship between a resource mapping and a data dictionary.
In red is the relationship between the resource name to be resolved and the HEAT environment variables.
The key takeaway is that whatever the value is for each color, it has to match all across; both the right- and left-hand sides are equivalent. It is all on the designer to express the modeling for the service. That said, best practice is example 1.
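As a concrete sketch of a mapping entry tying a template parameter to a data dictionary (names are hypothetical; the structure follows datatype-resource-assignment, described under Data type):
[
  {
    "name": "hostname",
    "property": {
      "description": "Device hostname",
      "type": "string"
    },
    "input-param": true,
    "dictionary-name": "hostname",
    "dictionary-source": "input",
    "dependencies": []
  }
]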
artifact-directed-graph
Represents a directed graph, used to represent a workflow.
The file must have the .xml extension.
Here is the list of executors currently supported (see Service Logic Interpreter Nodes for explanations and the full potential list):
execute
block
return
break
exit
Here is the TOSCA artifact type:
{
"description": "Directed Graph File",
"version": "1.0.0",
"file_ext": [
"json",
"xml"
],
"derived_from": "tosca.artifacts.Implementation"
}
Node type
In CDS, we have mainly two distinct types: components and source. We have some other type as well, listed in the other section.
Component:
Used to represent a functionality along with its contract, such as inputs, ouputs, and attributes
Here is the root component TOSCA node type from which other node type will derive:
{
"description": "This is default Component Node",
"version": "1.0.0",
"derived_from": "tosca.nodes.Root"
}
Below is a list of supported components.
component-resource-resolution:
Used to perform resolution of resources.
Requires as many artifact-mapping-resource (see Artifact Type -> Mapping) and artifact-template-velocity (see Artifact Type -> Velocity) artifacts as needed.
Output result:
Will put the resolution result as an attribute in the workflow context, called assignment-params.
Using the get_attribute expression (see Expression -> get_attribute), this attribute can be retrieved to be provided as workflow output (see Workflow).
Specify which template to resolve:
Currently, resolution is bound to a template. To specify which template to use, you need to fill in the artifact-prefix-names field.
See Template to understand what the artifact prefix name is.
Storing the result:
To store each resource being resolved, along with its status, and the resolved template, store-result should be set to true.
Also, when storing the data, it must be in the context of either a resource-id and resource-type pair, or a given resolution-key.
The concept of resource-id / resource-type, or resolution-key, is to uniquely identify a specific resolution that has been performed for a given action. Hence the resolution-key has to be unique for a given blueprint name, blueprint version and action name.
Through the combination of the fields mentioned previously, one can retrieve what has been resolved. This is useful to manage the life cycle of the resolved resources and of the template, and to share the outcome of a given resolution with external systems.
The resource-id / resource-type combination is more geared to uniquely identifying a resource in AAI or an external system. For example, for a given AAI resource, say a PNF, you can trigger a given CDS action, and then you will be able to manage all the resolved resources bound to this PNF. One could even keep a history of what has been assigned and unassigned for this AAI resource.
Warning
It is important not to confuse an AAI resource (e.g. a topology element, or service-related element) with the resources resolved by CDS, which can be seen as parameters required to derive a network configuration.
Run the resolution multiple times:
If you need to run the same resolution component multiple times, use the field occurrence. This will add the notion of occurrence to the resolution; if the results, resources and templates are stored, they will be accessible for each occurrence.
Occurrence is a number between 1 and N; when retrieving information for a given occurrence, the first iteration starts at 1.
This feature is useful when you need to apply the same configuration across network elements.
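Putting these fields together, the process operation inputs of a component-resource-resolution node_template could be configured as in the sketch below (values are illustrative):
"inputs": {
  "resolution-key": {
    "get_input": "resolution-key"
  },
  "store-result": true,
  "occurrence": 2,
  "artifact-prefix-names": [ "vf-module-1" ]
}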
Here is the definition:
{
"description": "This is Resource Assignment Component API",
"version": "1.0.0",
"attributes": {
"assignment-params": {
"required": true,
"type": "string"
}
},
"capabilities": {
"component-node": {
"type": "tosca.capabilities.Node"
}
},
"interfaces": {
"ResourceResolutionComponent": {
"operations": {
"process": {
"inputs": {
"resolution-key": {
"description": "Key for service instance related correlation.",
"required": false,
"type": "string"
},
"occurrence": {
"description": "Number of time to perform the resolution.",
"required": false,
"default": 1,
"type": "integer"
},
"store-result": {
"description": "Whether or not to store the output.",
"required": false,
"type": "boolean"
},
"resource-type": {
"description": "Request type.",
"required": false,
"type": "string"
},
"artifact-prefix-names": {
"required": true,
"description": "Template , Resource Assignment Artifact Prefix names",
"type": "list",
"entry_schema": {
"type": "string"
}
},
"request-id": {
"description": "Request Id, Unique Id for the request.",
"required": true,
"type": "string"
},
"resource-id": {
"description": "Resource Id.",
"required": false,
"type": "string"
},
"action-name": {
"description": "Action Name of the process",
"required": false,
"type": "string"
},
"dynamic-properties": {
"description": "Dynamic Json Content or DSL Json reference.",
"required": false,
"type": "json"
}
},
"outputs": {
"resource-assignment-params": {
"required": true,
"type": "string"
},
"status": {
"required": true,
"type": "string"
}
}
}
}
}
},
"derived_from": "tosca.nodes.Component"
}
component-script-executor:
Used to execute a script to perform NETCONF, RESTCONF or SSH commands from within the runtime container of CDS.
Two types of scripts are supported:
Kotlin: offers a more integrated scripting framework and much faster processing. See more about Kotlin script: https://github.com/Kotlin/KEEP/blob/master/proposals/scripting-support.md
Python: uses Jython, which is bound to Python 2.7, end of life January 2020. See more about Jython: https://www.jython.org/
The script-class-reference field needs to reference:
for Kotlin: the package name up to the class, e.g. com.example.Bob
for Python: the path from the Scripts folder, e.g. Scripts/python/Bob.py
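For example, a node_template invoking a Kotlin script could look like the sketch below (the class name is a placeholder):
"execute-script": {
  "type": "component-script-executor",
  "interfaces": {
    "ComponentScriptExecutor": {
      "operations": {
        "process": {
          "inputs": {
            "script-type": "kotlin",
            "script-class-reference": "com.example.Bob"
          }
        }
      }
    }
  }
}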
Here is the definition
{
"description": "This is Netconf Transaction Configuration Component API",
"version": "1.0.0",
"interfaces": {
"ComponentScriptExecutor": {
"operations": {
"process": {
"inputs": {
"script-type": {
"description": "Script type, kotlin type is supported",
"required": true,
"type": "string",
"default": "internal",
"constraints": [
{
"valid_values": [
"kotlin",
"jython",
"internal"
]
}
]
},
"script-class-reference": {
"description": "Kotlin Script class name with full package or jython script name.",
"required": true,
"type": "string"
},
"dynamic-properties": {
"description": "Dynamic Json Content or DSL Json reference.",
"required": false,
"type": "json"
}
},
"outputs": {
"response-data": {
"description": "Execution Response Data in JSON format.",
"required": false,
"type": "string"
},
"status": {
"description": "Status of the Component Execution ( success or failure )",
"required": true,
"type": "string"
}
}
}
}
}
},
"derived_from": "tosca.nodes.Component"
}
component-remote-script-executor:
Used to execute a Python script in a dedicated micro-service, providing a Python 3.6 environment.
Output result:
prepare-environment-logs: contains the logs for all the pip installs and the ansible_galaxy setup
execute-command-logs: contains the execution logs of the script that were printed to stdout
Using the get_attribute expression (see Expression -> get_attribute), these attributes can be retrieved to be provided as workflow output (see Workflow).
Params:
The command field needs to reference the path, from the Scripts folder, of the script to execute, e.g. Scripts/python/Bob.py
The packages field allows providing a list of pip packages to install in the target environment, or a requirements.txt file. It also supports Ansible roles.
If requirements.txt is specified, it should be provided as part of the Environments folder of the CBA.
"packages": [
{
"type": "pip",
"package": [
"requirements.txt"
]
},
{
"type": "ansible_galaxy",
"package": [
"juniper.junos"
]
}
]
The argument-properties field allows specifying input arguments to the script to execute. They should be expressed in a DSL, and they will be ordered as specified.
"ansible-argument-properties": {
"arg0": "-i",
"arg1": "Scripts/ansible/inventory.yaml",
"arg2": "--extra-vars",
"arg3": {
"get_attribute": [
"resolve-ansible-vars",
"",
"assignment-params",
"ansible-vars"
]
}
}
The dynamic-properties can be anything that needs to be passed to the script that could not be passed as an argument, such as a JSON object. If used, they will be passed in as the last argument of the Python script.
Here is the definition
{
"description": "This is Remote Python Execution Component.",
"version": "1.0.0",
"attributes": {
"prepare-environment-logs": {
"required": false,
"type": "string"
},
"execute-command-logs": {
"required": false,
"type": "list",
"entry_schema": {
"type": "string"
}
},
"response-data": {
"required": false,
"type": "json"
}
},
"capabilities": {
"component-node": {
"type": "tosca.capabilities.Node"
}
},
"interfaces": {
"ComponentRemotePythonExecutor": {
"operations": {
"process": {
"inputs": {
"endpoint-selector": {
"description": "Remote Container or Server selector name.",
"required": false,
"type": "string",
"default": "remote-python"
},
"dynamic-properties": {
"description": "Dynamic Json Content or DSL Json reference.",
"required": false,
"type": "json"
},
"argument-properties": {
"description": "Argument Json Content or DSL Json reference.",
"required": false,
"type": "json"
},
"command": {
"description": "Command to execute.",
"required": true,
"type": "string"
},
"packages": {
"description": "Packages to install based on type.",
"required": false,
"type" : "list",
"entry_schema" : {
"type" : "dt-system-packages"
}
}
}
}
}
}
},
"derived_from": "tosca.nodes.Component"
}
component-remote-ansible-executor:
Used to execute an Ansible playbook hosted in AWX/Ansible Tower.
Output result:
ansible-command-status: status of the command
ansible-command-logs: contains the execution logs of the playbook
Using the get_attribute expression, these attributes can be retrieved to be provided as workflow output (see Workflow).
Param:
TBD
Here is the definition
{
"description": "This is Remote Ansible Playbook (AWX) Execution Component.",
"version": "1.0.0",
"attributes": {
"ansible-command-status": {
"required": true,
"type": "string"
},
"ansible-command-logs": {
"required": true,
"type": "string"
}
},
"capabilities": {
"component-node": {
"type": "tosca.capabilities.Node"
}
},
"interfaces": {
"ComponentRemoteAnsibleExecutor": {
"operations": {
"process": {
"inputs": {
"job-template-name": {
"description": "Primary key or name of the job template to launch new job.",
"required": true,
"type": "string"
},
"limit": {
"description": "Specify host limit for job template to run.",
"required": false,
"type": "string"
},
"inventory": {
"description": "Specify inventory for job template to run.",
"required": false,
"type": "string"
},
"extra-vars": {
"required": false,
"type": "json",
"description": "json formatted text that contains extra variables to pass on."
},
"tags": {
"description": "Specify tagged actions in the playbook to run.",
"required": false,
"type": "string"
},
"skip-tags": {
"description": "Specify tagged actions in the playbook to omit.",
"required": false,
"type": "string"
},
"endpoint-selector": {
"description": "Remote AWX Server selector name.",
"required": true,
"type": "string"
}
}
}
}
}
},
"derived_from": "tosca.nodes.Component"
}
Source:
Used to represent a type of source used to resolve a resource, along with its expected properties.
Defines the contract to resolve a resource.
Here is the root resource source TOSCA node type, from which the other source node types derive:
{
"description": "TOSCA base type for Resource Sources",
"version": "1.0.0",
"derived_from": "tosca.nodes.Root"
}
Below is a list of supported sources.
Input:
Expects the value to be provided as input to the request.
Here is the Definition
{
"description": "This is Input Resource Source Node Type",
"version": "1.0.0",
"properties": {},
"derived_from": "tosca.nodes.ResourceSource"
}
Default:
Expects the value to be defaulted in the model itself.
Here is the Definition
{
"description": "This is Default Resource Source Node Type",
"version": "1.0.0",
"properties": {},
"derived_from": "tosca.nodes.ResourceSource"
}
REST
Expects the URI along with the VERB and, if needed, the payload.
CDS is currently deployed alongside SDNC, hence the default REST connection provided by the framework is to SDNC MDSAL.

Property | Description | Scope
---|---|---
type | Expected output value; only JSON is supported for now | Optional
verb | HTTP verb for the request; default value is GET | Optional
payload | Payload to send | Optional
endpoint-selector | Specific REST system to interact with (see Dynamic Endpoint) | Optional
url-path | URI | Mandatory
path | JSON path to the value to fetch from the response | Mandatory
expression-type | Path expression type; default value is JSON_PATH | Optional
Here is the definition:
{
"description": "This is Rest Resource Source Node Type",
"version": "1.0.0",
"properties": {
"type": {
"required": false,
"type": "string",
"default": "JSON",
"constraints": [
{
"valid_values": [
"JSON"
]
}
]
},
"verb": {
"required": false,
"type": "string",
"default": "GET",
"constraints": [
{
"valid_values": [
"GET",
"POST",
"DELETE",
"PUT"
]
}
]
},
"payload": {
"required": false,
"type": "string",
"default": ""
},
"endpoint-selector": {
"required": false,
"type": "string"
},
"url-path": {
"required": true,
"type": "string"
},
"path": {
"required": true,
"type": "string"
},
"expression-type": {
"required": false,
"type": "string",
"default": "JSON_PATH",
"constraints": [
{
"valid_values": [
"JSON_PATH",
"JSON_POINTER"
]
}
]
},
"input-key-mapping": {
"required": false,
"type": "map",
"entry_schema": {
"type": "string"
}
},
"output-key-mapping": {
"required": false,
"type": "map",
"entry_schema": {
"type": "string"
}
},
"key-dependencies": {
"required": true,
"type": "list",
"entry_schema": {
"type": "string"
}
}
},
"derived_from": "tosca.nodes.ResourceSource"
}
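For illustration, a data dictionary source using REST against SDNC MDSAL could be modeled as in the sketch below (the url-path, path and key names are illustrative and depend on the target system):
"sources": {
  "sdnc": {
    "type": "source-rest",
    "properties": {
      "type": "JSON",
      "verb": "GET",
      "url-path": "/restconf/config/VNF-API:vnfs/vnf-list/$vnf-id",
      "path": "/vnf-list/0/vnf-name",
      "input-key-mapping": {
        "vnf-id": "vnf-id"
      },
      "output-key-mapping": {
        "vnf-name": "vnf-name"
      },
      "key-dependencies": [ "vnf-id" ]
    }
  }
}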
SQL
Expects the SQL query to be modeled. The SQL query can be parameterized, the parameters being other resources resolved through other means. In that case, the data dictionary definition will have to define key-dependencies along with input-key-mapping (see the vf-module-label data dictionary under Data Dictionary for a complete example).
CDS is currently deployed alongside SDNC, hence the primary database connection provided by the framework is to the SDNC database.

Property | Description | Scope
---|---|---
type | Database type; only SQL is supported for now | Mandatory
endpoint-selector | Specific database system to interact with (see Dynamic Endpoint) | Optional
query | Statement to execute | Mandatory
Here is the definition:
{
"description": "This is Database Resource Source Node Type",
"version": "1.0.0",
"properties": {
"type": {
"required": true,
"type": "string",
"constraints": [
{
"valid_values": [
"SQL"
]
}
]
},
"endpoint-selector": {
"required": false,
"type": "string"
},
"query": {
"required": true,
"type": "string"
},
"input-key-mapping": {
"required": false,
"type": "map",
"entry_schema": {
"type": "string"
}
},
"output-key-mapping": {
"required": false,
"type": "map",
"entry_schema": {
"type": "string"
}
},
"key-dependencies": {
"required": true,
"type": "list",
"entry_schema": {
"type": "string"
}
}
},
"derived_from": "tosca.nodes.ResourceSource"
}
Capability:
Expects a script to be provided.

Property | Description | Scope
---|---|---
script-type | The type of the script; default value is kotlin | Optional
script-class-reference | The name of the class used to create an instance of the script | Mandatory
Here is the definition:
{
"description": "This is Component Resource Source Node Type",
"version": "1.0.0",
"properties": {
"script-type": {
"required": true,
"type": "string",
"default": "kotlin",
"constraints": [
{
"valid_values": [
"internal",
"kotlin",
"jython"
]
}
]
},
"script-class-reference": {
"description": "Capability reference name for internal and kotlin, for jython script file path",
"required": true,
"type": "string"
},
"key-dependencies": {
"description": "Resource Resolution dependency dictionary names.",
"required": true,
"type": "list",
"entry_schema": {
"type": "string"
}
}
},
"derived_from": "tosca.nodes.ResourceSource"
}
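For illustration, a data dictionary source using a Kotlin script could be modeled as in the sketch below (the class reference is a placeholder):
"sources": {
  "capability": {
    "type": "source-capability",
    "properties": {
      "script-type": "kotlin",
      "script-class-reference": "com.example.ResolveHostname",
      "key-dependencies": []
    }
  }
}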
Other:
dg-generic:
Identifies a Directed Graph used as an imperative workflow.

Property | Description | Scope
---|---|---
dependency-node-templates | The node templates the workflow depends on | Required
Here is the definition:
{
"description": "This is Generic Directed Graph Type",
"version": "1.0.0",
"properties": {
"content": {
"required": true,
"type": "string"
},
"dependency-node-templates": {
"required": true,
"description": "Dependent Step Components NodeTemplate name.",
"type": "list",
"entry_schema": {
"type": "string"
}
}
},
"derived_from": "tosca.nodes.DG"
}
A node_template of this type always provides one artifact, of type artifact-directed-graph, located under the Plans/ folder within the CBA.
{
"config-deploy-process": {
"type": "dg-generic",
"properties": {
"content": {
"get_artifact": [
"SELF",
"dg-config-deploy-process"
]
},
"dependency-node-templates": [
"nf-account-collection",
"execute"
]
},
"artifacts": {
"dg-config-deploy-process": {
"type": "artifact-directed-graph",
"file": "Plans/CONFIG_ConfigDeploy.xml"
}
}
}
}
In the DG below, the execute nodes refer to the node_templates (nf-account-collection and execute).
<service-logic
xmlns='http://www.onap.org/sdnc/svclogic'
xmlns:xsi='http://www.w3.org/2001/XMLSchema-instance'
xsi:schemaLocation='http://www.onap.org/sdnc/svclogic ./svclogic.xsd' module='CONFIG' version='1.0.0'>
<method rpc='ConfigDeploy' mode='sync'>
<block atomic="true">
<execute plugin="nf-account-collection" method="process">
<outcome value='failure'>
<return status="failure">
</return>
</outcome>
<outcome value='success'>
<execute plugin="execute" method="process">
<outcome value='failure'>
<return status="failure">
</return>
</outcome>
<outcome value='success'>
<return status='success'>
</return>
</outcome>
</execute>
</outcome>
</execute>
</block>
</method>
</service-logic>
tosca.nodes.VNF
Identifies a VNF; it can be used to correlate any type of VNF-related information.
{
"description": "This is VNF Node Type",
"version": "1.0.0",
"derived_from": "tosca.nodes.Root"
}
vnf-netconf-device
Represents the VNF information needed to establish a NETCONF communication.
{
"description": "This is VNF Device with Netconf Capability",
"version": "1.0.0",
"capabilities": {
"netconf": {
"type": "tosca.capabilities.Netconf",
"properties": {
"login-key": {
"required": true,
"type": "string",
"default": "sdnc"
},
"login-account": {
"required": true,
"type": "string",
"default": "sdnc-tacacs"
},
"source": {
"required": false,
"type": "string",
"default": "npm"
},
"target-ip-address": {
"required": true,
"type": "string"
},
"port-number": {
"required": true,
"type": "integer",
"default": 830
},
"connection-time-out": {
"required": false,
"type": "integer",
"default": 30
}
}
}
},
"derived_from": "tosca.nodes.Vnf"
}
Workflow
Note
Workflow Scope within CDS Framework
The workflow is within the scope of micro provisioning and configuration management in the controller domain and does NOT account for the MACRO service orchestration workflow, which is covered by the SO project.
A workflow defines an overall action to be taken on the service; hence it is an entry point for the run-time execution of the CBA Package.
A workflow also defines inputs and outputs that define the payload contract of the request and response (see Dynamic Payload).
A workflow can be composed of one or multiple sub-actions to execute.
A CBA package can have as many workflows as needed.
Single action
The workflow is directly backed by a component (see Node type -> Component).
In the example below, the target of the workflow's step resource-assignment is resource-assignment, which is the name of the node_template defined after, of type component-resource-resolution.
. . .
"topology_template": {
"workflows": {
"resource-assignment": {
"steps": {
"resource-assignment": {
"description": "Resource Assign Workflow",
"target": "resource-assignment"
}
}
},
"inputs": {
"resource-assignment-properties": {
"description": "Dynamic PropertyDefinition for workflow(resource-assignment).",
"required": true,
"type": "dt-resource-assignment-properties"
}
},
"outputs": {
"meshed-template": {
"type": "json",
"value": {
"get_attribute": [
"resource-assignment",
"assignment-params"
]
}
}
}
},
"node_templates": {
"resource-assignment": {
"type": "component-resource-resolution",
"interfaces": {
"ResourceResolutionComponent": {
"operations": {
"process": {
"inputs": {
"artifact-prefix-names": [
"vf-module-1"
]
}
}
}
}
},
"artifacts": {
"vf-module-1-template": {
"type": "artifact-template-velocity",
"file": "Templates/vf-module-1-template.vtl"
},
"vf-module-1-mapping": {
"type": "artifact-mapping-resource",
"file": "Templates/vf-module-1-mapping.json"
}
}
}
}
}
. . .
Multiple sub-actions
The workflow is backed by a Directed Graph engine, dg-generic (see Node type -> Other -> dg-generic), and is an imperative workflow.
A DG used as a workflow for CDS is composed of multiple execute nodes; each individual execute node refers to a modelled Component (see Node type -> Component) instance.
In the example below, the target of the workflow's step execute-script is execute-remote-ansible-process, which is a node_template of type dg-generic.
. . .
"topology_template": {
"workflows": {
"execute-remote-ansible": {
"steps": {
"execute-script": {
"description": "Execute Remote Ansible Script",
"target": "execute-remote-ansible-process"
}
}
},
"inputs": {
"ip": {
"required": false,
"type": "string"
},
"username": {
"required": false,
"type": "string"
},
"password": {
"required": false,
"type": "string"
},
"execute-remote-ansible-properties": {
"description": "Dynamic PropertyDefinition for workflow(execute-remote-ansible).",
"required": true,
"type": "dt-execute-remote-ansible-properties"
}
},
"outputs": {
"ansible-variable-resolution": {
"type": "json",
"value": {
"get_attribute": [
"resolve-ansible-vars",
"assignment-params"
]
}
},
"prepare-environment-logs": {
"type": "string",
"value": {
"get_attribute": [
"execute-remote-ansible",
"prepare-environment-logs"
]
}
},
"execute-command-logs": {
"type": "string",
"value": {
"get_attribute": [
"execute-remote-ansible",
"execute-command-logs"
]
}
}
},
"node_templates": {
"execute-remote-ansible-process": {
"type": "dg-generic",
"properties": {
"content": {
"get_artifact": [
"SELF",
"dg-execute-remote-ansible-process"
]
},
"dependency-node-templates": [
"resolve-ansible-vars",
"execute-remote-ansible"
]
},
"artifacts": {
"dg-execute-remote-ansible-process": {
"type": "artifact-directed-graph",
"file": "Plans/CONFIG_ExecAnsiblePlaybook.xml"
}
}
}
}
}
}
Properties of a workflow
Property | Description
---|---
workflow-name | Defines the name of the action that can be triggered by an external system
inputs | There are two types of inputs: static and dynamic. Static inputs are specified at the workflow level. Dynamic inputs represent the resources defined as input (see Node type -> Source -> Input) within mapping definition files (see Artifact Type -> Mapping); the enrichment process (see Enrichment) gathers them under a dedicated data-type. Example for a workflow named resource-assignment: "resource-assignment-properties": { "required": true, "type": "dt-resource-assignment-properties" }
outputs | Defines the outputs of the execution; there can be as many outputs as needed. Depending on the Component used (see Node type -> Component), some attributes might be retrievable
steps | Defines the actual steps to execute as part of the workflow
Example:
{
"workflow": {
"resource-assignment": { <- workflow-name
"inputs": {
"vnf-id": { <- static inputs
"required": true,
"type": "string"
},
"resource-assignment-properties": { <- dynamic inputs
"required": true,
"type": "dt-resource-assignment-properties"
}
},
"steps": {
"call-resource-assignment": { <- step-name
"description": "Resource Assignment Workflow",
"target": "resource-assignment-process" <- node_template targeted by the step
}
},
"outputs": {
"template-properties": { <- output
"type": "json", <- complex type
"value": {
"get_attribute": [ <- uses expression to retrieve attribute from context
"resource-assignment",
"assignment-params"
]
}
}
}
}
}
}
Template
A template is an artifact, and uses artifact-mapping-resource (see Artifact Type -> Mapping) and artifact-template-velocity (see Artifact Type -> Velocity).
A template is parameterized, and each parameter must be defined in a corresponding mapping file.
In order to know which mapping correlates to which template, the file names must start with the same artifact-prefix, serving as the identifier of the overall template + mapping pair.
The requirement is as follows:
${artifact-prefix}-template
${artifact-prefix}-mapping
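For example, with the artifact prefix vf-module-1 (as used in the workflow example earlier in this document), the CBA would contain:
Templates/vf-module-1-template.vtl
Templates/vf-module-1-mapping.json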
Scripts
Library
NetconfClient
In order to facilitate NETCONF interaction within scripts, a Python NetconfClient bound to our Kotlin implementation is made available. This NetconfClient can be used with the component-netconf-executor.
The client can be found here: https://github.com/onap/ccsdk-cds/blob/master/components/scripts/python/ccsdk_netconf/netconfclient.py
ResolutionHelper
When executing a component executor script, a designer might want to perform resource resolution, along with template meshing, directly from the script itself.
The helper can be found here: https://github.com/onap/ccsdk-cds/blob/master/components/scripts/python/ccsdk_netconf/common.py
Southbound Interfaces
CDS comes with native Python 3.6 and Ansible AWX (Ansible Tower) support. The idea is that network ops are familiar with Python and/or Ansible, and our goal is not to dictate the SBI to use for their operations. Ansible and Python already provide many well-adopted SBI libraries, hence they can be utilized as needed.
CDS also provides native support for the following libraries:
NetConf
REST
CLI
SSH
gRPC (hence gNMI / gNOI should be supported)
CDS also has extensible REST support, meaning any RESTful interface used for network interaction can be used, such as an external VNFM or EMS.
Tests
The Tests folder contains the uat.yaml file for executing the CBA actions in both sunny-day and rainy-day scenarios using mock data. The process to generate the uat.yaml file is outlined below. The file can be dragged and dropped into the Tests folder after the tests for all actions have been executed.
NOTE: You need to activate the "uat" Spring Boot profile in order to enable the spy/verify endpoints. They are disabled by default because the mocks created at runtime can potentially cause collateral problems in production. You can either pass an option to the JVM (-Dspring.profiles.active=uat) or set and export an environment variable (export spring_profiles_active=uat).
A quick outline of the UAT generation process follows:
1. Create a minimal uat.yaml containing only the NB requests to be sent to the BlueprintsProcessor (BPP) service.
2. Submit the blueprint CBA and this draft uat.yaml to BPP in a single HTTP POST call:
curl -u ccsdkapps:ccsdkapps -F cba=@<path to your CBA file> -F uat=@<path to the draft uat.yaml> http://localhost:8080/api/v1/uat/spy
If your environment is properly set up, at the end this service will generate the complete uat.yaml.
3. Revise the generated file, eventually removing superfluous message fields.
4. Include this file in your CBA under Tests/uat.yaml.
5. Submit the candidate CBA + UAT to be validated by BPP, which will now create runtime mocks to simulate all SB collaborators, by running:
$ curl -u ccsdkapps:ccsdkapps -F cba=@<path to your CBA file> http://localhost:8080/api/v1/uat/verify
6. Once validated, your CBA enhanced with its corresponding UAT is eligible to be integrated into the CDS project, under the folder components/model-catalog/blueprint-model/uat-blueprints.
Reference link for sample generated uat.yaml file for pnf plug & play use case: uat.yaml file.
As UAT is part of unit testing, it runs in the Jenkins job ccsdk-cds-master-verify-java whenever a new commit/patch is pushed on Gerrit in the ccsdk/cds repo.
User Guides
Developer Guide
Note
Get Started with CDS
Running Blueprints Processor Microservice in an IDE
Objective
Run the blueprint processor locally in an IDE, while having the database running in a container. This way, code changes can be conveniently tested and debugged.
Check out the code
Check out the code from Gerrit: https://gerrit.onap.org/r/#/admin/projects/ccsdk/cds
Build it locally
In the checked out directory, type
mvn clean install -Pq -Dadditionalparam=-Xdoclint:none
Note
If an error invalid flag: --release appears when executing the maven install command, you need to upgrade the Java version of your local Maven installation. Use something like export JAVA_HOME=/usr/lib/jvm/java-11-openjdk-amd64.
Wait for the maven install command to finish before going further.
Spin up a Docker container with the database
The Blueprints Processor project uses a database to store information about the blueprints; therefore the database needs to be online before attempting to run the application.
One way to create the database is by using the docker-compose.yaml file.
This database will require a local directory to mount a volume, therefore before running docker-compose create following directory:
mkdir -p -m 755 /opt/app/cds/mysql/data
Navigate to the docker-compose file in the application module:
cd ms/blueprintsprocessor/application/src/main/dc
(in some releases the file is located under ms/blueprintsprocessor/distribution/src/main/dc instead)
And run docker-compose:
docker-compose up -d db
This should spin up a container of the MariaDB image in the background. To check if it has worked, this command can be used:
docker-compose logs -f
The phrase mysqld: ready for connections
indicates that the database was started correctly.
From now on, the Docker container will be available on the computer; if it ever gets stopped, it can be started again by the command:
docker start <id of mariadb container>
Set permissions on the local file system
The Blueprints Processor uses the local file system for some operations and therefore needs some existing and accessible paths to run properly.
Execute the following commands to create the needed directories, and grant access to the current user to modify them:
mkdir -p -m 755 /opt/app/onap/blueprints/archive
mkdir -p -m 755 /opt/app/onap/blueprints/deploy
mkdir -p -m 755 /opt/app/onap/scripts
sudo chown -R $(id -u):$(id -g) /opt/app/onap/
Import the project into the IDE
Note
IntelliJ IDEA is the recommended IDE for running the CDS blueprint processor.
Go to File | Open and choose the pom.xml file of the cds/ms/blueprintsprocessor directory:
Import as a project. Sometimes it may be necessary to reimport the Maven project, e.g. if some dependencies can't be found:
Override some application properties:
Next steps will create a run configuration profile overriding some application properties with custom values, to reflect the local environment characteristics.
Navigate to the main class of the Blueprints Processor, the BlueprintProcessorApplication class:
ms/blueprintsprocessor/application/src/main/kotlin/org/onap/ccsdk/cds/blueprintsprocessor/BlueprintProcessorApplication.kt
After dependencies are imported and indexes are set up you will see a green arrow next to main function of BlueprintProcessorApplication class, indicating that the run configuration can now be created. Right-click inside the class at any point to load the context menu and select create a run configuration from context:
The following window will open:
Add the following in the field `VM Options`:
-Dspring.profiles.active=dev
Optional: You can override any value from application-dev.properties file here. In this case use the following pattern:
-D<application-dev.properties key>=<application-dev.properties value>
In checkouts where the main class is a Java class, navigate instead to:
ms/blueprintsprocessor/application/src/main/java/org/onap/ccsdk/cds/blueprintsprocessor/BlueprintProcessorApplication.java
The run configuration is created the same way. Alternatively, a more complete set of overrides can be added in the field `VM Options`:
-DappName=ControllerBluePrints
-Dms_name=org.onap.ccsdk.apps.controllerblueprints
-DappVersion=1.0.0
-Dspring.config.location=opt/app/onap/config/
-Dspring.datasource.url=jdbc:mysql://127.0.0.1:3306/sdnctl
-Dspring.datasource.username=sdnctl
-Dspring.datasource.password=sdnctl
-Dcontrollerblueprints.loadInitialData=true
-Dblueprintsprocessor.restclient.sdncodl.url=http://localhost:8282/
-Dblueprintsprocessor.db.primary.url=jdbc:mysql://localhost:3306/sdnctl
-Dblueprintsprocessor.db.primary.username=sdnctl
-Dblueprintsprocessor.db.primary.password=sdnctl
-Dblueprintsprocessor.db.primary.driverClassName=org.mariadb.jdbc.Driver
-Dblueprintsprocessor.db.primary.hibernateHbm2ddlAuto=update
-Dblueprintsprocessor.db.primary.hibernateDDLAuto=none
-Dblueprintsprocessor.db.primary.hibernateNamingStrategy=org.hibernate.cfg.ImprovedNamingStrategy
-Dblueprintsprocessor.db.primary.hibernateDialect=org.hibernate.dialect.MySQL5InnoDBDialect
-Dblueprints.processor.functions.python.executor.executionPath=./components/scripts/python/ccsdk_blueprints
-Dblueprints.processor.functions.python.executor.modulePaths=./components/scripts/python/ccsdk_blueprints,./components/scripts/python/ccsdk_netconf,./components/scripts/python/ccsdk_restconf
-Dblueprintsprocessor.restconfEnabled=true
-Dblueprintsprocessor.restclient.sdncodl.type=basic-auth
-Dblueprintsprocessor.restclient.sdncodl.url=http://localhost:8282/
-Dblueprintsprocessor.restclient.sdncodl.username=admin
-Dblueprintsprocessor.restclient.sdncodl.password=Kp8bJ4SXszM0WXlhak3eHlcse2gAw84vaoGGmJvUy2U
-Dblueprintsprocessor.grpcEnable=false
-Dblueprintsprocessor.grpcPort=9111
-Dblueprintsprocessor.blueprintDeployPath=/opt/app/onap/blueprints/deploy
-Dblueprintsprocessor.blueprintArchivePath=/opt/app/onap/blueprints/archive
-Dblueprintsprocessor.blueprintWorkingPath=/opt/app/onap/blueprints/work
-Dsecurity.user.password={bcrypt}$2a$10$duaUzVUVW0YPQCSIbGEkQOXwafZGwQ/b32/Ys4R1iwSSawFgz7QNu
-Dsecurity.user.name=ccsdkapps
-Dblueprintsprocessor.messageclient.self-service-api.kafkaEnable=false
-Dblueprintsprocessor.messageclient.self-service-api.topic=producer.t
-Dblueprintsprocessor.messageclient.self-service-api.type=kafka-basic-auth
-Dblueprintsprocessor.messageclient.self-service-api.bootstrapServers=127.0.0.1:9092
-Dblueprintsprocessor.messageclient.self-service-api.consumerTopic=receiver.t
-Dblueprintsprocessor.messageclient.self-service-api.groupId=receiver-id
-Dblueprintsprocessor.messageclient.self-service-api.clientId=default-client-id
-Dspring.profiles.active=dev
-Dblueprintsprocessor.httpPort=8080
-Dserver.port=55555
In the field ‘Working Directory’ browse to your application path .../cds/ms/blueprintsprocessor/application, if the path is not already specified correctly.
Run configuration should now look something like this:
Add/replace the following in Blueprint’s application-dev.properties file.
blueprintsprocessor.grpcclient.remote-python.type=token-auth
blueprintsprocessor.grpcclient.remote-python.host=localhost
blueprintsprocessor.grpcclient.remote-python.port=50051
blueprintsprocessor.grpcclient.remote-python.token=Basic Y2NzZGthcHBzOmNjc2RrYXBwcw==
blueprintprocessor.remoteScriptCommand.enabled=true
Take care: if a parameter already exists, you need to change the value of the existing parameter to avoid duplicates.
Run the application:
Before running the Blueprint Processor, check that you use the correct Java version in IntelliJ. Select either Run or Debug for the created run configuration to start the Blueprints Processor:
Step #1 - Make sure your installation of Visual Studio Code is up to date. This guide was written using version 1.48.
Step #2 - Install Kotlin extension from the Visual Studio Code Marketplace
Step #3 - On the top menu click Run | Open Configurations
Warning
This should open the file called launch.json but in some cases you’ll need to wait for the Kotlin Language Server to be installed before you can do anything. Please watch the bottom bar in Visual Studio Code for messages about things getting installed.
Step #4 - Add the configuration shown below to your configurations list.
{
"type": "kotlin",
"request": "launch",
"name": "Blueprint Processor",
"projectRoot": "${workspaceFolder}/ms/blueprintsprocessor/application",
"mainClass": "-Dspring.profiles.active=dev org.onap.ccsdk.cds.blueprintsprocessor.BlueprintProcessorApplicationKt"
}
Warning
The projectRoot path assumes that you created your Workspace in the main CDS repository folder. If not - please change the path accordingly
Note
The mainClass contains a spring profile param before the full class name. This is done because args is not supported by Kotlin launch.json configuration. If you have a cleaner idea how to solve this - please let us know.
Add/replace the following in Blueprint’s application-dev.properties file:
blueprintsprocessor.grpcclient.remote-python.type=token-auth
blueprintsprocessor.grpcclient.remote-python.host=localhost
blueprintsprocessor.grpcclient.remote-python.port=50051
blueprintsprocessor.grpcclient.remote-python.token=Basic Y2NzZGthcHBzOmNjc2RrYXBwcw==
blueprintprocessor.remoteScriptCommand.enabled=true
Currently the following entries need to be added in VSC too:
logging.level.web=DEBUG
logging.level.org.springframework.web: DEBUG
#Encrypted username and password for health check service
endpoints.user.name=eHbVUbJAj4AG2522cSbrOQ==
endpoints.user.password=eHbVUbJAj4AG2522cSbrOQ==
#BaseUrls for health check blueprint processor services
blueprintprocessor.healthcheck.baseUrl=http://localhost:8080/
blueprintprocessor.healthcheck.mapping-service-name-with-service-link=[Execution service,/api/v1/execution-service/health-check],[Resources service,/api/v1/resources/health-check],[Template service,/api/v1/template/health-check]
#BaseUrls for health check Cds Listener services
cdslistener.healthcheck.baseUrl=http://cds-sdc-listener:8080/
cdslistener.healthcheck.mapping-service-name-with-service-link=[SDC Listener service,/api/v1/sdclistener/healthcheck]
#Actuator properties
management.endpoints.web.exposure.include=*
management.endpoint.health.show-details=always
management.info.git.mode=full
In VSC the properties are read from the target folder, that's why the following Maven command needs to be rerun:
mvn clean install -DskipTests=true -Dmaven.test.skip=true -Dmaven.javadoc.skip=true -Dadditionalparam=-Xdoclint:none
Click Run in Menu bar.
Testing the application
There are two main features of the Blueprints Processor that can be of interest to a developer: blueprint publish and blueprint process.
To upload custom blueprints, the endpoint api/v1/execution-service/publish is used.
To process, the endpoint is api/v1/execution-service/process.
Postman is a tool that can be used to send these requests; an example collection is available at https://www.getpostman.com/collections/b99863b0cde7565a32fc.
A detailed description of the usage of different APIs of CDS will follow.
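As a minimal sketch (assuming a locally running Blueprints Processor on localhost:8080 and the ccsdkapps credentials used elsewhere in this guide), the process endpoint can also be called with curl; the payload shape mirrors the execution-service examples in the use-case section later in this document, with the blueprint and action names as placeholders:

# Sketch: call the process endpoint with curl. Host, port and credentials
# are assumptions for a local setup; blueprint/action names are placeholders.
curl -X POST http://localhost:8080/api/v1/execution-service/process \
  -u ccsdkapps:ccsdkapps \
  -H "Content-Type: application/json" \
  -d '{
        "actionIdentifiers": {
          "mode": "sync",
          "blueprintName": "<blueprint-name>",
          "blueprintVersion": "1.0.0",
          "actionName": "<action-name>"
        },
        "payload": {},
        "commonHeader": {
          "requestId": "e5eb1f1e-3386-435d-b290-d49d8af8db4c",
          "subRequestId": "143748f9-3cd5-4910-81c9-a4601ff2ea58",
          "originatorId": "SDNC_DG"
        }
      }'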
Possible Fixes
Imported packages or annotations are not found, Run Config not available?
Rebuild with maven install ... (see above)
Potentially change the Maven home directory in Settings
Maven reimport in IDE
Compilation error?
Change Java version to 11
Running CDS UI Locally
Prerequisites
Node version: >= 8.9
NPM version: >= 6.4.1
Check-out code
git clone "https://gerrit.onap.org/r/ccsdk/cds"
Install Node Modules (UI)
From cds-ui/client directory, execute npm install to fetch project dependent Node modules
Install Node Modules (Server)
From cds-ui/server directory, execute npm install to fetch project dependent Node modules
Run UI in Development Mode
From cds-ui/client directory, execute npm start to run the Angular Live Development Server
nirvanr01-mac:client nirvanr$ npm start
> cds-ui@0.0.0 start /Users/nirvanr/dev/git/onap/ccsdk/cds/cds-ui/client
> ng serve
** Angular Live Development Server is listening on localhost:4200, open your browser on http://localhost:4200/ **
Run UI Server
From cds-ui/client directory, execute mvn clean compile then npm run build to copy all front-end artifacts to server/public directory
nirvanr01-mac:client nirvanr$ npm run build
> cds-ui@0.0.0 build /Users/nirvanr/dev/git/onap/ccsdk/cds/cds-ui/client
> ng build
From cds-ui/server directory, execute npm run start to build and start the front-end server
nirvanr01-mac:server nirvanr$ npm run start
> cds-ui-server@1.0.0 prestart /Users/nirvanr/dev/git/onap/ccsdk/cds/cds-ui/server
> npm run build
> cds-ui-server@1.0.0 build /Users/nirvanr/dev/git/onap/ccsdk/cds/cds-ui/server
> lb-tsc es2017 --outDir dist
> cds-ui-server@1.0.0 start /Users/nirvanr/dev/git/onap/ccsdk/cds/cds-ui/server
> node .
Server is running at http://127.0.0.1:3000
Try http://127.0.0.1:3000/ping
Build UI Docker Image
From cds-ui/server directory, execute docker build -t cds-ui . to build a local CDS-UI Docker image
nirvanr01-mac:server nirvanr$ docker build -t cds-ui .
Sending build context to Docker daemon 96.73MB
Step 1/11 : FROM node:10-slim
---> 914bfdbef6aa
Step 2/11 : USER node
---> Using cache
---> 04d66cc13b46
Step 3/11 : RUN mkdir -p /home/node/app
---> Using cache
---> c9a44902da43
Step 4/11 : WORKDIR /home/node/app
---> Using cache
---> effb2329a39e
Step 5/11 : COPY --chown=node package*.json ./
---> Using cache
---> 4ad01897490e
Step 6/11 : RUN npm install
---> Using cache
---> 3ee8149b17e2
Step 7/11 : COPY --chown=node . .
---> e1c72f6caa15
Step 8/11 : RUN npm run build
---> Running in 5ec69a1961d0
> cds-ui-server@1.0.0 build /home/node/app
> lb-tsc es2017 --outDir dist
Removing intermediate container 5ec69a1961d0
---> ec9fb899e52c
Step 9/11 : ENV HOST=0.0.0.0 PORT=3000
---> Running in 19963303a09c
Removing intermediate container 19963303a09c
---> 6b3b45709e27
Step 10/11 : EXPOSE ${PORT}
---> Running in 78b9833c5050
Removing intermediate container 78b9833c5050
---> 3835c14ad17b
Step 11/11 : CMD [ "node", "." ]
---> Running in 79a98e6242dd
Removing intermediate container 79a98e6242dd
---> c41f6e6ba4de
Successfully built c41f6e6ba4de
Successfully tagged cds-ui:latest
Run UI Docker Image
Create docker-compose.yaml as below.
Note:
Replace <ip> with the host/IP where the blueprint processor microservice is running.
version: '3.3'
services:
cds-ui:
image: cds-ui:latest
container_name: cds-ui
ports:
- "3000:3000"
restart: always
environment:
- HOST=0.0.0.0
- API_BLUEPRINT_PROCESSOR_HTTP_BASE_URL=http://<ip>:8080/api/v1
- API_BLUEPRINT_PROCESSOR_HTTP_AUTH_TOKEN=Basic Y2NzZGthcHBzOmNjc2RrYXBwcw==
- API_BLUEPRINT_PROCESSOR_GRPC_HOST=<IP>
- API_BLUEPRINT_PROCESSOR_GRPC_PORT=9111
- API_BLUEPRINT_PROCESSOR_GRPC_AUTH_TOKEN=Basic Y2NzZGthcHBzOmNjc2RrYXBwcw==
Execute docker-compose up cds-ui
nirvanr01-mac:cds nirvanr$ docker-compose up cds-ui
Creating cds-ui ... done
Attaching to cds-ui
cds-ui | Server is running at http://127.0.0.1:3000
cds-ui | Try http://127.0.0.1:3000/ping
Next
Blueprints Processor Microservice
Microservice to manage Controller Blueprint models, such as Resource Dictionaries, Service Models, Velocity Templates etc., serving the Controller Design Studio and Controller runtimes.
This microservice is used to deploy a Controller Blueprint Archive file into the run-time database. It also helps to test that a CBA is valid.
Architecture
Testing in local environment
Point your browser to http://localhost:8000/api/v1/execution-service/ping (please note that the port is 8000, not 8080)
To authenticate, use ccsdkapps/ccsdkapps login user id and password.
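For example, a quick health check with curl (a sketch; adjust host and port to your deployment):

# Ping the execution service using the credentials stated above.
curl -u ccsdkapps:ccsdkapps http://localhost:8000/api/v1/execution-service/ping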
Installation Guide
Installation
ONAP is meant to be deployed within a Kubernetes environment; hence, the de-facto way to deploy CDS is through Kubernetes.
ONAP also packages Kubernetes manifests as Charts, using Helm.
Prerequisite
https://docs.onap.org/en/latest/guides/onap-developer/settingup/index.html
Setup local Helm
helm repo
helm serve &
helm repo add local http://127.0.0.1:8879
Get the chart
Make sure to check out the release to use, by replacing $release-tag in the commands below:
git clone https://gerrit.onap.org/r/oom
cd oom
git checkout tags/$release-tag
cd kubernetes
make cds
Install CDS
helm install --name cds cds
Result
$ kubectl get all --selector=release=cds
NAME READY STATUS RESTARTS AGE
pod/cds-blueprints-processor-54f758d69f-p98c2 0/1 Running 1 2m
pod/cds-cds-6bd674dc77-4gtdf 1/1 Running 0 2m
pod/cds-cds-db-0 1/1 Running 0 2m
pod/cds-controller-blueprints-545bbf98cf-zwjfc 1/1 Running 0 2m

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/blueprints-processor ClusterIP 10.43.139.9 <none> 8080/TCP,9111/TCP 2m
service/cds NodePort 10.43.254.69 <none> 3000:30397/TCP 2m
service/cds-db ClusterIP None <none> 3306/TCP 2m
service/controller-blueprints ClusterIP 10.43.207.152 <none> 8080/TCP 2m

NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deployment.apps/cds-blueprints-processor 1 1 1 0 2m
deployment.apps/cds-cds 1 1 1 1 2m
deployment.apps/cds-controller-blueprints 1 1 1 1 2m

NAME DESIRED CURRENT READY AGE
replicaset.apps/cds-blueprints-processor-54f758d69f 1 1 0 2m
replicaset.apps/cds-cds-6bd674dc77 1 1 1 2m
replicaset.apps/cds-controller-blueprints-545bbf98cf 1 1 1 2m

NAME DESIRED CURRENT AGE
statefulset.apps/cds-cds-db 1 1 2m
Running CDS UI:
Client:
Install Node.js and Angular CLI. Refer to https://angular.io/guide/quickstart
npm install in the directory cds/cds-ui/client
npm run build - to build the UI module
Loopback Server:
npm install in the directory cds/cds-ui/server
npm start should bring up the CDS UI page on your local machine at https://127.0.0.1:3000/
Design Time Tools Guide
Below are the requirements to enable automation for a service within ONAP.
For instantiation, the goal is to be able to automatically resolve all the HEAT/Helm variables, called cloud parameters.
For post-instantiation, the goal is to configure the VNF with initial configuration.
Prerequisite
Gather the cloud parameters:
Instantiation:
Have the HEAT template along with the HEAT environment file, or have the Helm chart along with the Values.yaml file
(CDS supports Helm, but whether SO → Multicloud supports Helm/K8s is a different story)
Post-instantiation:
Have the configuration template to apply on the VNF.
XML for NETCONF
JSON / XML for RESTCONF
CLI [not supported yet]
JSON for Ansible [not supported yet]
Identify which template parameters are static and dynamic
Create and fill in a table for all the dynamic values
While doing so, identify the resources resolved using the same process; for instance, if two IPs have to be resolved through the same IPAM, the process to resolve the IPs is the same.
Services:
Controller Blueprint Archive (CBA) Designer Tool
Introduction
The Controller Blueprint Archive is the overall service design, fully model-driven, intent based package needed for SELF SERVICE provisioning and configuration management automation.
The CBA is a .zip file comprising the following folder structure; the files may vary:
├── Definitions
│ ├── blueprint.json Overall TOSCA service template (workflow + node_template)
│ ├── artifact_types.json (generated by enrichment)
│ ├── data_types.json (generated by enrichment)
│ ├── policy_types.json (generated by enrichment)
│ ├── node_types.json (generated by enrichment)
│ ├── relationship_types.json (generated by enrichment)
│ ├── resources_definition_types.json (generated by enrichment, based on Data Dictionaries)
│ └── *-mapping.json One per Template
│
├── Environments Contains *.properties files as required by the service
│
├── Plans Contains Directed Graph
│
├── Tests Contains uat.yaml file for testing cba actions within a cba package
│
├── Scripts Contains scripts
│ ├── python Python scripts
│ └── kotlin Kotlin scripts
│
├── TOSCA-Metadata
│ └── TOSCA.meta Meta-data of overall package
│
└── Templates Contains combination of mapping and template
To process a CBA for any service we need to enrich it first. This will gather all the node-type, data-type, artifact-type and data-dictionary definitions provided in the blueprint.json.
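Enrichment can be triggered from the GUI or over the blueprint-model API. As a sketch only (the endpoint path and multipart form are assumptions; verify them against your CDS release):

# Sketch: enrich a CBA over the blueprint-model API and save the enriched
# archive. Endpoint path, host and port are assumptions for a local setup.
curl -X POST http://localhost:8080/api/v1/blueprint-model/enrich \
  -u ccsdkapps:ccsdkapps \
  -F "file=@my-cba.zip" \
  -o my-cba-enriched.zip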
Architecture
Data Flow
Installation
FROM alpine:3.8 as builder
RUN apk add --no-cache npm
WORKDIR /opt/cds-ui/client/
COPY client/package.json /opt/cds-ui/client/
RUN npm install
COPY client /opt/cds-ui/client/
RUN npm run build

FROM alpine:3.8
WORKDIR /opt/cds-ui/
RUN apk add --no-cache npm
COPY server/package.json /opt/cds-ui/
RUN npm install
COPY server /opt/cds-ui/
COPY --from=builder /opt/cds-ui/server/public /opt/cds-ui/public
RUN npm run build
EXPOSE 3000
CMD [ "npm", "start" ]
Development
Visual Studio code editor
Git bash
Node.js & npm
LoopBack 4 CLI (an install sketch follows below)
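As a sketch, the LoopBack 4 CLI (and the Angular CLI used by the client build) can be installed globally via npm, assuming Node.js and npm are already present:

# Global CLI installs; both assume an existing Node.js/npm setup.
npm install -g @loopback/cli
npm install -g @angular/cli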
To compile CDS code:
Make sure your local Maven settings file ($HOME/.m2/settings.xml) contains references to the ONAP repositories and OpenDaylight repositories.
cd cds ; mvn clean install ; cd ..
Open the cds-ui/client code for development
Functional Decomposition
Resource Definition
Introduction:
A Resource definition models how a specific resource can be resolved.
A resource is a variable/parameter in the context of the service. It can be anything, but it should not be confused with SDC or OpenStack resources.
A Resource definition can have multiple sources to handle resolution in different ways. The main goal of a Resource definition is to define a re-usable entity that can be shared.
Creation of Resource definition is a standalone activity, separated from the blueprint design.
As part of modelling a Resource definition entry, the following generic information should be provided:
Below are the properties that all resource sources will have.
The modeling allows for data translation between an external capability and CDS, for both input and output key mapping.
Example:
vf-module-model-customization-uuid and vf-module-label are two data dictionaries. A SQL table, VF_MODULE_MODEL, exists to correlate them.
Here is how input-key-mapping, output-key-mapping and key-dependencies can be used:
{
  "description": "This is Component Resource Source Node Type",
  "version": "1.0.0",
  "properties": {
    "script-type": {
      "required": true,
      "type": "string",
      "default": "kotlin",
      "constraints": [
        {
          "valid_values": [
            "kotlin",
            "jython"
          ]
        }
      ]
    },
    "script-class-reference": {
      "description": "Capability reference name for internal and kotlin, for jython script file path",
      "required": true,
      "type": "string"
    },
    "instance-dependencies": {
      "required": false,
      "description": "Instance dependency Names to Inject to Kotlin / Jython Script.",
      "type": "list",
      "entry_schema": {
        "type": "string"
      }
    },
    "key-dependencies": {
      "description": "Resource Resolution dependency dictionary names.",
      "required": true,
      "type": "list",
      "entry_schema": {
        "type": "string"
      }
    }
  },
  "derived_from": "tosca.nodes.ResourceSource"
}
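As an illustrative sketch only (the source type name, SQL, table and column names are hypothetical and vary by CDS release), a resource definition for vf-module-label backed by the SQL table mentioned in the example above could look as follows:

# Hypothetical resource definition for the vf-module-label example above.
cat > vf-module-label.json <<'EOF'
{
  "name": "vf-module-label",
  "tags": "vf-module-label",
  "updated-by": "example",
  "property": {
    "description": "vf-module-label",
    "type": "string"
  },
  "sources": {
    "primary-db": {
      "type": "source-db",
      "properties": {
        "type": "SQL",
        "query": "SELECT vf_module_label FROM VF_MODULE_MODEL WHERE customization_uuid = :customizationid",
        "input-key-mapping": {
          "customizationid": "vf-module-model-customization-uuid"
        },
        "output-key-mapping": {
          "vf-module-label": "vf_module_label"
        },
        "key-dependencies": [
          "vf-module-model-customization-uuid"
        ]
      }
    }
  }
}
EOF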
Resource source:
Defines the contract to resolve a resource.
A resource source is modeled following the TOSCA node type definition and derives from tosca.nodes.ResourceSource.
Details for the available resource sources are provided below.
Expects the value to be provided as input to the request.
{
  "source-input": {
    "description": "This is Input Resource Source Node Type",
    "version": "1.0.0",
    "properties": {},
    "derived_from": "tosca.nodes.ResourceSource"
  }
}
Expects the value to be defaulted in the model itself.
{
  "source-default": {
    "description": "This is Default Resource Source Node Type",
    "version": "1.0.0",
    "properties": {},
    "derived_from": "tosca.nodes.ResourceSource"
  }
}
Expects the SQL query to be modeled; that SQL query can be parameterized, and the parameters can be other resources resolved through other means. If that is the case, this data dictionary definition will have to define key-dependencies along with input-key-mapping.
CDS is currently deployed alongside SDNC, hence the primary database connection provided by the framework is to the SDNC database.
{
  "description": "This is Database Resource Source Node Type",
  "version": "1.0.0",
  "properties": {
    "type": {
      "required": true,
      "type": "string",
      "constraints": [
        {
          "valid_values": [
            "SQL"
          ]
        }
      ]
    },
    "endpoint-selector": {
      "required": false,
      "type": "string"
    },
    "query": {
      "required": true,
      "type": "string"
    },
    "input-key-mapping": {
      "required": false,
      "type": "map",
      "entry_schema": {
        "type": "string"
      }
    },
    "output-key-mapping": {
      "required": false,
      "type": "map",
      "entry_schema": {
        "type": "string"
      }
    },
    "key-dependencies": {
      "required": true,
      "type": "list",
      "entry_schema": {
        "type": "string"
      }
    }
  },
  "derived_from": "tosca.nodes.ResourceSource"
}
Connection to a specific database can be expressed through the endpoint-selector property, which refers to a macro defining the information about the database to connect to (see TOSCA Macros in the context of CDS).
{
  "dsl_definitions": {
    "dynamic-db-source": {
      "type": "maria-db",
      "url": "jdbc:mysql://localhost:3306/sdnctl",
      "username": "<username>",
      "password": "<password>"
    }
  }
}
Expects the URI along with the VERB and the payload, if needed.
CDS is currently deployed alongside SDNC, hence the default REST connection provided by the framework is to SDNC MDSAL.
{
  "description": "This is Rest Resource Source Node Type",
  "version": "1.0.0",
  "properties": {
    "type": {
      "required": false,
      "type": "string",
      "default": "JSON",
      "constraints": [
        {
          "valid_values": [
            "JSON"
          ]
        }
      ]
    },
    "verb": {
      "required": false,
      "type": "string",
      "default": "GET",
      "constraints": [
        {
          "valid_values": [
            "GET", "POST", "DELETE", "PUT"
          ]
        }
      ]
    },
    "payload": {
      "required": false,
      "type": "string",
      "default": ""
    },
    "endpoint-selector": {
      "required": false,
      "type": "string"
    },
    "url-path": {
      "required": true,
      "type": "string"
    },
    "path": {
      "required": true,
      "type": "string"
    },
    "expression-type": {
      "required": false,
      "type": "string",
      "default": "JSON_PATH",
      "constraints": [
        {
          "valid_values": [
            "JSON_PATH",
            "JSON_POINTER"
          ]
        }
      ]
    },
    "input-key-mapping": {
      "required": false,
      "type": "map",
      "entry_schema": {
        "type": "string"
      }
    },
    "output-key-mapping": {
      "required": false,
      "type": "map",
      "entry_schema": {
        "type": "string"
      }
    },
    "key-dependencies": {
      "required": true,
      "type": "list",
      "entry_schema": {
        "type": "string"
      }
    }
  },
  "derived_from": "tosca.nodes.ResourceSource"
}
Connection to a specific REST system can be expressed through the endpoint-selector property, which refers to a macro defining the information about the REST system to connect to (see TOSCA Macros in the context of CDS).
A few ways are available to authenticate to the REST system:
token-auth
basic-auth
ssl-basic-auth
{
  "dsl_definitions": {
    "dynamic-rest-source": {
      "type": "token-auth",
      "url": "http://localhost:32778",
      "token": "<token>"
    }
  }
}
{
  "dsl_definitions": {
    "dynamic-rest-source": {
      "type": "basic-auth",
      "url": "http://localhost:32778",
      "username": "<username>",
      "password": "<password>"
    }
  }
}
{
  "dsl_definitions": {
    "dynamic-rest-source": {
      "type": "ssl-basic-auth",
      "url": "http://localhost:32778",
      "keyStoreInstance": "JKS or PKCS12",
      "sslTrust": "truststore",
      "sslTrustPassword": "<password>",
      "sslKey": "keystore",
      "sslKeyPassword": "<password>"
    }
  }
}
Expects a script to be provided.
{
  "description": "This is Component Resource Source Node Type",
  "version": "1.0.0",
  "properties": {
    "script-type": {
      "required": true,
      "type": "string",
      "default": "kotlin",
      "constraints": [
        {
          "valid_values": [
            "kotlin",
            "jython"
          ]
        }
      ]
    },
    "script-class-reference": {
      "description": "Capability reference name for internal and kotlin, for jython script file path",
      "required": true,
      "type": "string"
    },
    "instance-dependencies": {
      "required": false,
      "description": "Instance dependency Names to Inject to Kotlin / Jython Script.",
      "type": "list",
      "entry_schema": {
        "type": "string"
      }
    },
    "key-dependencies": {
      "description": "Resource Resolution dependency dictionary names.",
      "required": true,
      "type": "list",
      "entry_schema": {
        "type": "string"
      }
    }
  },
  "derived_from": "tosca.nodes.ResourceSource"
}
The value will be resolved through REST, and the output will be a complex type.
Modeling reference: Modeling Concepts#rest
In this example, we're making a POST request to an IPAM system with no payload.
Some ingredients are required to perform the query, in this case $prefixId. Hence it is provided as an input-key-mapping and defined as a key-dependency. Please refer to the modeling guideline for a more in-depth understanding.
As part of this request, the expected response will be as below.
{
  "id": 4,
  "address": "192.168.10.2/32",
  "vrf": null,
  "tenant": null,
  "status": 1,
  "role": null,
  "interface": null,
  "description": "",
  "nat_inside": null,
  "created": "2018-08-30",
  "last_updated": "2018-08-30T14:59:05.277820Z"
}
What is of interest is the address and id fields. For the process to return these two values, we need to create a custom data-type, as below:
{
  "version": "1.0.0",
  "description": "This is Netbox IP Data Type",
  "properties": {
    "address": {
      "required": true,
      "type": "string"
    },
    "id": {
      "required": true,
      "type": "integer"
    }
  },
  "derived_from": "tosca.datatypes.Root"
}
The type of the data dictionary will be dt-netbox-ip.
To tell the resolution framework what is of interest in the response, the output-key-mapping section is used. The process will map the output-key-mapping to the defined data-type.
{
"tags" : "oam-local-ipv4-address",
"name" : "create_netbox_ip",
"property" : {
"description" : "netbox ip",
"type" : "dt-netbox-ip"
},
"updated-by" : "adetalhouet",
"sources" : {
"config-data" : {
"type" : "source-rest",
"properties" : {
"type" : "JSON",
"verb" : "POST",
"endpoint-selector" : "ipam-1",
"url-path" : "/api/ipam/prefixes/$prefixId/available-ips/",
"path" : "",
"input-key-mapping" : {
"prefixId" : "prefix-id"
},
"output-key-mapping" : {
"address" : "address",
"id" : "id"
},
"key-dependencies" : [ "prefix-id" ]
}
}
}
}
Resource Assignment
Component executor:
A workflow defines an overall action to be taken for the service; it can be composed of a set of sub-actions to execute. Currently, workflows are backed by the Directed Graph engine.
A CBA can have as many workflows as needed.
A template is an artifact.
A template is parameterized and each parameter must be defined in a corresponding mapping file.
In order to know which mapping correlates to which template, the file name must start with an artifact-prefix, serving as identifier for the overall template + mapping.
The requirement is as follows:
${artifact-prefix}-template
${artifact-prefix}-mapping
A template can represent anything, such as device config, payload to interact with 3rd party systems, resource-accumulator template, etc…
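For illustration, the vFW use case later in this document uses, among others, the artifact-prefix base_template, giving the following template/mapping pair in the Templates folder:

Templates/base_template-template.vtl
Templates/base_template-mapping.json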
Defines the contract of each resource to be resolved. Each placeholder in the template must have a corresponding mapping definition.
A mapping is comprised of:
name
required / optional
type (support complex type)
dictionary-name
dictionary-source
This makes sure that the given resources get resolved prior to the resolution of the resources defining the dependency. The dictionary fields refer to a specific data dictionary.
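A minimal sketch of a single mapping-file entry (the parameter name vnf-name and its input source are hypothetical; the structure mirrors the vpg-management-port mapping shown in the vFW use case later in this document):

# Write a hypothetical mapping file; a mapping file is a JSON array of entries.
cat > Templates/example-mapping.json <<'EOF'
[
  {
    "name": "vnf-name",
    "property": {
      "description": "Name of the VNF",
      "type": "string",
      "required": true
    },
    "input-param": true,
    "dictionary-name": "vnf-name",
    "dictionary-source": "input",
    "dependencies": []
  }
]
EOF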
Resource accumulator:
In order to resolve HEAT environment variables, resource accumulator templates are used in Dublin.
These templates are specific to the pre-instantiation scenario and rely on GR-API within SDNC.
It is composed of the following sections:
resource-accumulator-resolved-data: defines all the resources that can be resolved directly from the context. It expresses a direct mapping between the name of the resource and its value.
capability-data: defines what capability to use to create a specific resource, along with the ingredients required to invoke the capability and the output mapping.
Scripts
Library
NetconfClient
In order to facilitate NETCONF interaction within scripts, a Python NetconfClient bound to our Kotlin implementation is made available. This NetconfClient can be used with the netconf-component-executor.
The client can be found here: https://github.com/onap/ccsdk-apps/blob/master/components/scripts/python/ccsdk_netconf/netconfclient.py
Use Cases
Use Cases
Wordpress CNF in CDS (POC)
This demo by CableLabs shows in an easy-to-use POC how to use/deploy VNFs in CDS and do resource assignment.
A detailed description will follow as soon as there is an acknowledgement from CableLabs that the content can be published.
The goal is to use CDS (ONAP) in a very simple and understandable way. Azure, AWS and Kubernetes are used as VIMs through scripting. Wordpress is used as a VNF.
This demo was tested on Frankfurt.
Presentation of Gerald Karam (2020-09-08)
PNF Simulator Day-N config-assign/deploy
Overview
This use case shows in a very simple way how the day-n configuration is assigned and deployed to a PNF through CDS. A Netconf server (docker image sysrepo/sysrepo-netopeer2) is used for simulating the PNF.
This use case (POC) solely requires a running CDS and the PNF Simulator running on a VM (Ubuntu is used by the author). No other module of ONAP is needed.
There are different ways to run CDS and the PNF simulator. This guide will show different possible options to allow the greatest possible flexibility.
Run CDS (Blueprint Processor)
CDS can be run in Kubernetes (Minikube, Microk8s) or in an IDE. You can choose your favorite option. Just the blueprint processor of CDS is needed. If you have desktop access it is recommended to run CDS in an IDE since it is easy and enables debugging.
CDS in Microk8s: https://wiki.onap.org/display/DW/Running+CDS+on+Microk8s (RDT link to be added)
CDS in Minikube: https://wiki.onap.org/display/DW/Running+CDS+in+minikube (RDT link to be added)
CDS in an IDE: https://docs.onap.org/projects/onap-ccsdk-cds/en/latest/userguide/running-bp-processor-in-ide.html
Run PNF Simulator and install module
There are many different ways to run a Netconf server to simulate the PNF; in this guide the sysrepo/sysrepo-netopeer2 docker image is used. The easiest way is to run the out-of-the-box docker container without any other configuration, modules or scripts. In the ONAP community there are other workflows for running the PNF Simulator which also use the sysrepo/sysrepo-netopeer2 docker image. These workflows are linked here, but they have not been tested by the author of this guide.
Download and run docker container with docker run -d --name netopeer2 -p 830:830 -p 6513:6513 sysrepo/sysrepo-netopeer2:latest
Enter the container with docker exec -it netopeer2 bin/bash
Browse to the target location where all YANG modules exist: cd /etc/sysrepo/yang
Create a simple mock YANG model for a packet generator (pg.yang).
module sample-plugin {
yang-version 1;
namespace "urn:opendaylight:params:xml:ns:yang:sample-plugin";
prefix "sample-plugin";
description
"This YANG module defines the generic configuration and
operational data for sample-plugin in VPP";
revision "2016-09-18" {
description "Initial revision of sample-plugin model";
}
container sample-plugin {
uses sample-plugin-params;
description "Configuration data of sample-plugin in Honeycomb";
// READ
// curl -u admin:admin http://localhost:8181/restconf/config/sample-plugin:sample-plugin
// WRITE
// curl http://localhost:8181/restconf/operational/sample-plugin:sample-plugin
}
grouping sample-plugin-params {
container pg-streams {
list pg-stream {
key id;
leaf id {
type string;
}
leaf is-enabled {
type boolean;
}
}
}
}
}
Create the following sample XML data definition for the above model (pg-data.xml). Later on this will initialise one single PG stream.
<sample-plugin xmlns="urn:opendaylight:params:xml:ns:yang:sample-plugin">
<pg-streams>
<pg-stream>
<id>1</id>
<is-enabled>true</is-enabled>
</pg-stream>
</pg-streams>
</sample-plugin>
Execute the following command within the netopeer docker container to install the pg.yang model:
sysrepoctl -v3 -i pg.yang
Note
This command will just schedule the installation; it will be applied once the server is restarted.
Stop the container from outside with docker stop netopeer2 and start it again with docker start netopeer2.
Enter the container as mentioned above with docker exec -it netopeer2 bin/bash.
You can check all installed modules with sysrepoctl -l. The sample-plugin module should appear with the I flag.
Execute the following commands to initialise the Yang model with one pg-stream record. We will be using CDS to perform the day-1 and day-2 configuration changes.
netopeer2-cli
> connect --host localhost --login root
# password is root
> get --filter-xpath /sample-plugin:*
# shows existing pg-stream records (empty)
> edit-config --target running --config=/etc/sysrepo/yang/pg-data.xml
# initialises Yang model with one pg-stream record
> get --filter-xpath /sample-plugin:*
# shows initialised pg-stream
If the output of the last command looks like this, everything went successfully:
DATA
<sample-plugin xmlns="urn:opendaylight:params:xml:ns:yang:sample-plugin">
<pg-streams>
<pg-stream>
<id>1</id>
<is-enabled>true</is-enabled>
</pg-stream>
</pg-streams>
</sample-plugin>
Download and run docker container with docker run -d --name netopeer2 -p 830:830 -p 6513:6513 sysrepo/sysrepo-netopeer2:legacy
Enter the container with docker exec -it netopeer2 bin/bash
Browse to the target location where all YANG modules exist: cd /opt/dev/sysrepo/yang
Create a simple mock YANG model for a packet generator (pg.yang).
module sample-plugin {
yang-version 1;
namespace "urn:opendaylight:params:xml:ns:yang:sample-plugin";
prefix "sample-plugin";
description
"This YANG module defines the generic configuration and
operational data for sample-plugin in VPP";
revision "2016-09-18" {
description "Initial revision of sample-plugin model";
}
container sample-plugin {
uses sample-plugin-params;
description "Configuration data of sample-plugin in Honeycomb";
// READ
// curl -u admin:admin http://localhost:8181/restconf/config/sample-plugin:sample-plugin
// WRITE
// curl http://localhost:8181/restconf/operational/sample-plugin:sample-plugin
}
grouping sample-plugin-params {
container pg-streams {
list pg-stream {
key id;
leaf id {
type string;
}
leaf is-enabled {
type boolean;
}
}
}
}
}
Create the following sample XML data definition for the above model (pg-data.xml). Later on this will initialise one single PG (packet-generator) stream.
<sample-plugin xmlns="urn:opendaylight:params:xml:ns:yang:sample-plugin">
<pg-streams>
<pg-stream>
<id>1</id>
<is-enabled>true</is-enabled>
</pg-stream>
</pg-streams>
</sample-plugin>
Execute the following command within the netopeer docker container to install the pg.yang model:
sysrepoctl -i -g pg.yang
You can check all installed modules with sysrepoctl -l. The sample-plugin module should appear with the I flag.
In the legacy version of sysrepo/sysrepo-netopeer2, subscribers of a module are required, otherwise the module is not running and configuration changes are not accepted, see https://github.com/sysrepo/sysrepo/issues/1395. There is a predefined application mock-up which can be used for that. The usage is described here: https://asciinema.org/a/160247. You need to run the following commands to start the example application for subscribing to our sample-plugin YANG module.
cd /opt/dev/sysrepo/build/examples
./application_example sample-plugin
Following output should appear:
========== READING STARTUP CONFIG sample-plugin: ==========
/sample-plugin:sample-plugin (container)
/sample-plugin:sample-plugin/pg-streams (container)
========== STARTUP CONFIG sample-plugin APPLIED AS RUNNING ==========
The terminal session needs to be kept open after the application has started.
Open a new terminal and enter the container with docker exec -it netopeer2 bin/bash.
Execute the following commands in the container to initialise the Yang model with one pg-stream record.
We will be using CDS to perform the day-1 configuration and day-2 configuration changes.
netopeer2-cli
> connect --host localhost --login netconf
# password is netconf
> get --filter-xpath /sample-plugin:*
# shows existing pg-stream records (empty)
> edit-config --target running --config=/opt/dev/sysrepo/yang/pg-data.xml
# initialises Yang model with one pg-stream record
> get --filter-xpath /sample-plugin:*
# shows initialised pg-stream
If the output of the last command looks like this, everything went successfully:
DATA
<sample-plugin xmlns="urn:opendaylight:params:xml:ns:yang:sample-plugin">
<pg-streams>
<pg-stream>
<id>1</id>
<is-enabled>true</is-enabled>
</pg-stream>
</pg-streams>
</sample-plugin>
You can also see that there are additional logs in the subscriber application after editing the configuration of our YANG module.
Warning
This method of setting up the PNF simulator is not tested by the author of this guide
You can refer to PnP PNF Simulator wiki page to clone the GIT repo and start the required docker containers. We are interested in the sysrepo/sysrepo-netopeer2 docker container to load a simple YANG similar to vFW Packet Generator.
Start the PNF simulator docker containers. You can consider changing the netopeer image version to image: sysrepo/sysrepo-netopeer2:iop in the docker-compose.yml file if you find any issues with the default image.
cd $HOME
git clone https://github.com/onap/integration.git
Start PNF simulator
cd ~/integration/test/mocks/pnfsimulator
./simulator.sh start
Verify that the netopeer docker container is up and running. It will be mapped to host port 830.
docker ps -a | grep netopeer
Config-assign and config-deploy in CDS
In the following steps config-assignment is done and the config is deployed to the Netconf server through CDS. Example requests are in the following Postman collection (JSON). You can also use bash scripting to call the APIs.
Note
The CBA for this PNF demo gets loaded, enriched and saved in CDS through calling bootstrap. If not done before, call the Bootstrap API.
Password and username for API calls will be ccsdkapps.
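A sketch of the Bootstrap call with curl (host and port are assumptions for a local setup; verify the endpoint for your CDS release):

# Bootstrap loads model types, resource dictionaries and the packaged CBAs.
curl -X POST http://localhost:8080/api/v1/blueprint-model/bootstrap \
  -u ccsdkapps:ccsdkapps \
  -H "Content-Type: application/json" \
  -d '{"loadModelType": true, "loadResourceDictionary": true, "loadCBA": true}'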
Config-Assign:
The assumption is that we are using the same host to run the PNF NETCONF simulator as well as CDS. You will need the IP address of the Netconf server container, which can be found with docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' netopeer2. In the following example payloads we will use 172.17.0.2.
Call the process API (http://{{host}}:{{port}}/api/v1/execution-service/process) with the POST method to create the day-1 configuration. Use the following payload:
{
"actionIdentifiers": {
"mode": "sync",
"blueprintName": "pnf_netconf",
"blueprintVersion": "1.0.0",
"actionName": "config-assign"
},
"payload": {
"config-assign-request": {
"resolution-key": "day-1",
"config-assign-properties": {
"stream-count": 5
}
}
},
"commonHeader": {
"subRequestId": "143748f9-3cd5-4910-81c9-a4601ff2ea58",
"requestId": "e5eb1f1e-3386-435d-b290-d49d8af8db4c",
"originatorId": "SDNC_DG"
}
}
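For reference, the same request sent with curl instead of Postman (host and port are assumptions for a local setup):

# Save the payload above to config-assign-day1.json, then:
curl -X POST http://localhost:8080/api/v1/execution-service/process \
  -u ccsdkapps:ccsdkapps \
  -H "Content-Type: application/json" \
  -d @config-assign-day1.json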
You can verify the day-1 NETCONF RPC payload by looking into the CDS DB. You should see the NETCONF RPC with 5 streams (fw_udp_1 to fw_udp_5). Connect to the DB and run the statement below. You should see the day-1 configuration as output.
MariaDB [sdnctl]> select * from TEMPLATE_RESOLUTION where resolution_key='day-1' AND artifact_name='netconfrpc';
<rpc xmlns="urn:ietf:params:xml:ns:netconf:base:1.0" message-id="1">
<edit-config>
<target>
<running/>
</target>
<config>
<sample-plugin xmlns="urn:opendaylight:params:xml:ns:yang:sample-plugin">
<pg-streams>
<pg-stream>
<id>fw_udp_1</id>
<is-enabled>true</is-enabled>
</pg-stream>
<pg-stream>
<id>fw_udp_2</id>
<is-enabled>true</is-enabled>
</pg-stream>
<pg-stream>
<id>fw_udp_3</id>
<is-enabled>true</is-enabled>
</pg-stream>
<pg-stream>
<id>fw_udp_4</id>
<is-enabled>true</is-enabled>
</pg-stream>
<pg-stream>
<id>fw_udp_5</id>
<is-enabled>true</is-enabled>
</pg-stream>
</pg-streams>
</sample-plugin>
</config>
</edit-config>
</rpc>
For creating the day-2 configuration, call the same endpoint and use the following payload:
{
"actionIdentifiers": {
"mode": "sync",
"blueprintName": "pnf_netconf",
"blueprintVersion": "1.0.0",
"actionName": "config-assign"
},
"payload": {
"config-assign-request": {
"resolution-key": "day-2",
"config-assign-properties": {
"stream-count": 10
}
}
},
"commonHeader": {
"subRequestId": "143748f9-3cd5-4910-81c9-a4601ff2ea58",
"requestId": "e5eb1f1e-3386-435d-b290-d49d8af8db4c",
"originatorId": "SDNC_DG"
}
}
Note
Until this step CDS did not interact with the PNF simulator or device. We just created the day-1 and day-2 configurations and stored them in the CDS database.
Config-Deploy:
Now we will make the CDS REST API calls to push the day-1 and day-2 configuration changes to the PNF simulator. Call the same endpoint process with the following payload:
{
"actionIdentifiers": {
"mode": "sync",
"blueprintName": "pnf_netconf",
"blueprintVersion": "1.0.0",
"actionName": "config-deploy"
},
"payload": {
"config-deploy-request": {
"resolution-key": "day-1",
"pnf-ipv4-address": "127.17.0.2",
"netconf-username": "netconf",
"netconf-password": "netconf"
}
},
"commonHeader": {
"subRequestId": "143748f9-3cd5-4910-81c9-a4601ff2ea58",
"requestId": "e5eb1f1e-3386-435d-b290-d49d8af8db4c",
"originatorId": "SDNC_DG"
}
}
Go back to the PNF netopeer CLI console as mentioned above and verify if you can see the 5 streams fw_udp_1 to fw_udp_5 enabled. If the 5 streams appear in the output as follows, the day-1 configuration was successfully deployed and the use case is done.
> get --filter-xpath /sample-plugin:*
DATA
<sample-plugin xmlns="urn:opendaylight:params:xml:ns:yang:sample-plugin">
<pg-streams>
<pg-stream>
<id>1</id>
<is-enabled>true</is-enabled>
</pg-stream>
<pg-stream>
<id>fw_udp_1</id>
<is-enabled>true</is-enabled>
</pg-stream>
<pg-stream>
<id>fw_udp_2</id>
<is-enabled>true</is-enabled>
</pg-stream>
<pg-stream>
<id>fw_udp_3</id>
<is-enabled>true</is-enabled>
</pg-stream>
<pg-stream>
<id>fw_udp_4</id>
<is-enabled>true</is-enabled>
</pg-stream>
<pg-stream>
<id>fw_udp_5</id>
<is-enabled>true</is-enabled>
</pg-stream>
</pg-streams>
</sample-plugin>
>
The same can be done for the day-2 config (follow the same steps, just with day-2 in the payload).
Note
Through the deployment we did not deploy the PNF, we just modified it. The PNF could also be installed by CDS, but this is not targeted in this guide.
Creators of this guide
Deutsche Telekom AG
Jakob Krieg (Rocketchat @jakob.Krieg); Eli Halych (Rocketchat @elihalych)
This guide is derived from https://wiki.onap.org/display/DW/PNF+Simulator+Day-N+config-assign+and+config-deploy+use+case.
vFW CNF with CDS (Use Case)
Heat/Helm/CDS models: vFW_CNF_CDS_Model
The vFW CNF use case is a demonstration of the deployment of a CNF application, defined as a set of Helm packages. CDS plays a crucial role in the process of CNF instantiation and is responsible for the delivery of instantiation parameters, CNF customization, configuration of the CNF after the deployment, and may be used in the process of CNF status verification.
Based on this example, the following features of CDS and the CBA model are demonstrated:
resource assignment string, integer and json types
sourcing of resolved value on vf-module level from vnf level assignment
extracting data from AAI and MD-SAL during the resource assignment
custom resource assignment with Kotlin script
templating of the vtl files
building of imperative workflows
utilization of on_success and on_failure events in imperative workflows
handling of the failure in the workflow
implementation of custom workflow logic with Kotlin script
example of config-assign and config-deploy operation decomposed into many steps
complex parametrization of config deploy operation
combination and aggregation of AAI and MD-SAL data in config-assign and config-deploy operations
The prepared CBA model demonstrates also how to utilize CNF specific features of CBA, suited for the deployment of CNF with k8splugin in ONAP:
building and upload of k8s profile template into k8splugin
building and upload of k8s configuration template into k8splugin
parametrization and creation of configuration instance from configuration template
validation of CNF status with Kotlin script
The CNF in ONAP is modeled as a collection of Helm packages; in the case of the vFW use case, the CNF application is split into four Helm packages to match the vf-modules. For each vf-module there is its own template in the CBA package. The list of resource assignment artifacts associated with the templates is the following:
"artifacts" : {
"helm_base_template-template" : {
"type" : "artifact-template-velocity",
"file" : "Templates/base_template-template.vtl"
},
"helm_base_template-mapping" : {
"type" : "artifact-mapping-resource",
"file" : "Templates/base_template-mapping.json"
},
"helm_vpkg-template" : {
"type" : "artifact-template-velocity",
"file" : "Templates/vpkg-template.vtl"
},
"helm_vpkg-mapping" : {
"type" : "artifact-mapping-resource",
"file" : "Templates/vpkg-mapping.json"
},
"helm_vfw-template" : {
"type" : "artifact-template-velocity",
"file" : "Templates/vfw-template.vtl"
},
"helm_vfw-mapping" : {
"type" : "artifact-mapping-resource",
"file" : "Templates/vfw-mapping.json"
},
"vnf-template" : {
"type" : "artifact-template-velocity",
"file" : "Templates/vnf-template.vtl"
},
"vnf-mapping" : {
"type" : "artifact-mapping-resource",
"file" : "Templates/vnf-mapping.json"
},
"helm_vsn-template" : {
"type" : "artifact-template-velocity",
"file" : "Templates/vsn-template.vtl"
},
"helm_vsn-mapping" : {
"type" : "artifact-mapping-resource",
"file" : "Templates/vsn-mapping.json"
}
}
For instantiation, SO requires the name of the profile in the parameter k8s-rb-profile-name and the name of the release of the application in k8s-rb-instance-release-name. The latter, when not specified, will be replaced with the combination of profile name and vf-module-id for each Helm instance/vf-module instantiated. Both values can be found in the vtl templates dedicated for vf-modules.
The CBA offers the possibility of automatic generation and upload of the RB profile content to the multicloud/k8s plugin. An RB profile is required if you want to deploy your CNF into a k8s namespace other than default. Also, if you want to ensure particular templating of your Helm charts, specific to the particular version of the cluster onto which the Helm packages will be deployed, the profile is used to specify the version of your cluster.
An RB profile can be used to enrich or to modify the content of the original Helm package. A profile can also be used to add additional k8s Helm templates for Helm installation or to modify existing k8s Helm templates for each created CNF instance. It opens another level of CNF customization, much more than customization of the Helm package with override values. K8splugin also offers a default profile without content, for the default namespace and default cluster version.
---
version: v1
type:
values: "override_values.yaml"
configresource:
- filepath: resources/deployment.yaml
chartpath: templates/deployment.yaml
Above we have an exemplary manifest file of the RB profile. Since Frankfurt, the override_values.yaml file does not need to be used, as instantiation values are passed to the plugin over the Instance API of the k8s plugin. In the example, the profile contains an additional k8s Helm template which will be added on demand to the Helm package during its installation. In our case, depending on the SO instantiation request input parameters, the vPGN Helm package can be enriched with an additional ssh service. Such a service will be dynamically added to the profile by CDS, and later on CDS will upload the whole custom RB profile to the multicloud/k8s plugin.
In order to support generation and upload of the profile, our vFW CBA model has an enhanced resource-assignment workflow which contains an additional step: profile-upload. It leverages dedicated functionality introduced in the Guilin release that can be used to upload a predefined profile or to generate and upload the content of the profile with the Velocity templating mechanism.
"resource-assignment": {
"steps": {
"resource-assignment": {
"description": "Resource Assign Workflow",
"target": "resource-assignment",
"activities": [
{
"call_operation": "ResourceResolutionComponent.process"
}
],
"on_success": [
"profile-upload"
]
},
"profile-upload": {
"description": "Generate and upload K8s Profile",
"target": "k8s-profile-upload",
"activities": [
{
"call_operation": "ComponentScriptExecutor.process"
}
]
}
},
In our example, for the vPKG Helm package we may select the vfw-cnf-cds-vpkg-profile profile that is included in the CBA as a folder. The profile generation step uses the Velocity template processing functionality embedded in CDS; on its basis the ssh port number (specified in the SO request as vpg-management-port) is resolved.
{
"name": "vpg-management-port",
"property": {
"description": "The number of node port for ssh service of vpg",
"type": "integer",
"default": "0"
},
"input-param": false,
"dictionary-name": "vpg-management-port",
"dictionary-source": "default",
"dependencies": []
}
vpg-management-port can be included directly in the helm template, and such a template will be included in the vPKG helm package at the time of its instantiation.
apiVersion: v1
kind: Service
metadata:
name: {{ .Values.vpg_name_0 }}-ssh-access
labels:
vnf-name: {{ .Values.vnf_name }}
vf-module-name: {{ .Values.vpg_name_0 }}
release: {{ .Release.Name }}
chart: {{ .Chart.Name }}
spec:
type: NodePort
ports:
- port: 22
nodePort: ${vpg-management-port}
selector:
vf-module-name: {{ .Values.vpg_name_0 }}
release: {{ .Release.Name }}
chart: {{ .Chart.Name }}
The mechanism of profile generation and upload requires a specific node template in the CBA definition. In our case, it comes with the declaration of two profiles: one static vfw-cnf-cds-base-profile in the form of an archive, and a second, complex vfw-cnf-cds-vpkg-profile in the form of a folder for processing and profile generation. Below is an example of the definition of the node type for execution of the profile upload operation.
"k8s-profile-upload": {
"type": "component-k8s-profile-upload",
"interfaces": {
"K8sProfileUploadComponent": {
"operations": {
"process": {
"inputs": {
"artifact-prefix-names": {
"get_input": "template-prefix"
},
"resource-assignment-map": {
"get_attribute": [
"resource-assignment",
"assignment-map"
]
}
}
}
}
}
},
"artifacts": {
"vfw-cnf-cds-base-profile": {
"type": "artifact-k8sprofile-content",
"file": "Templates/k8s-profiles/vfw-cnf-cds-base-profile.tar.gz"
},
"vfw-cnf-cds-vpkg-profile": {
"type": "artifact-k8sprofile-content",
"file": "Templates/k8s-profiles/vfw-cnf-cds-vpkg-profile"
},
"vfw-cnf-cds-vpkg-profile-mapping": {
"type": "artifact-mapping-resource",
"file": "Templates/k8s-profiles/vfw-cnf-cds-vpkg-profile/ssh-service-mapping.json"
}
}
}
The artifact file determines the place of the static profile or the content of the complex profile. In the latter case we need a pair of profile folder and mapping file with a declaration of the parameters that CDS needs to resolve first, before the Velocity templating is applied to the .vtl files present in the profile content. After Velocity templating, the .vtl extensions will be dropped from the file names. The embedded mechanism will include in the profile only files present in the profile MANIFEST file, which needs to contain the list of final names of the files to be included in the profile.
The figure below shows the idea of profile templating.
The component-k8s-profile-upload that stands behind the profile uploading mechanism has input parameters that can be passed directly (checked in the first order) or can be taken from the resource-assignment-map parameter, which can be a result of an associated component-resource-resolution result; in our case their values are resolved on vf-module level resource assignment. The component-k8s-profile-upload inputs are the following:
k8s-rb-definition-name - the name under which RB definition was created - VF Module Model Invariant ID in ONAP
k8s-rb-definition-version - the version of created RB definition name - VF Module Model Version ID in ONAP
k8s-rb-profile-name - (mandatory) the name of the profile under which it will be created in k8s plugin. Other parameters are required only when profile must be uploaded because it does not exist yet
k8s-rb-profile-source - the source of profile content - name of the artifact of the profile. If missing k8s-rb-profile-name is treated as a source
k8s-rb-profile-namespace - the k8s namespace name associated with profile being created
k8s-rb-profile-kubernetes-version - the version of the cluster on which application will be deployed - it may impact the helm templating process like selection of the api versions for resources.
resource-assignment-map - result of the associated resource assignment step - it may deliver values of inputs if they are not specified directly
artifact-prefix-names - (mandatory) the list of artifact prefixes like for resource-assigment step in the resource-assigment workflow or its subset
In the SO request user can pass parameter of name k8s-rb-profile-name which in our case may have value: vfw-cnf-cds-base-profile, vfw-cnf-cds-vpkg-profile or default. The default profile does not contain any content inside and allows instantiation of CNF without the need to define and upload any additional profiles. vfw-cnf-cds-vpkg-profile has been prepared to test instantiation of the second modified vFW CNF instance.
K8splugin allows specifying override parameters (similar to the --set behavior of the helm client) for instantiated resource bundles. This allows providing dynamic parameters to instantiated resources without the need to create new profiles for this purpose. This mechanism should be used with the default profile, but it may also be used with any custom profile.
The overall flow of helm override parameters processing is visible in the following figure. When an rb definition (helm package) is instantiated for a specified rb profile, K8splugin combines override values from the helm package, the rb profile and the instantiation request, in that order. It means that a value from the instantiation request (SO request input or CDS resource assignment result) has precedence over a value from the rb profile, and a value from the rb profile has precedence over the helm package default override value. Similarly, a profile can contain resource files that may extend or amend the existing files of the original helm package content.
Both the profile content (4) and the instantiation request values (5) can be generated during the resource assignment process according to its definition for the CBA associated with the helm package. The CBA may generate e.g. names, IP addresses, ports, and can use this information to produce the rb-profile (3) content. Finally, all three sources of override values, templates and additional resource files are merged together (6) by K8splugin in the order explained before.
Besides the deployment of the Helm application, the CBA of vFW demonstrates also how to use dedicated features for config-assign (7) and config-deploy (8) operations. In the use case, config-assign and config-deploy operations deal mainly with creation and instantiation of a configuration template for the k8s plugin. The configuration template has the form of a Helm package. When the k8s plugin instantiates the configuration, it creates or may replace existing resources deployed on the k8s cluster. In our case the configuration template is used to provide an alternative way of uploading the additional ssh-service, but it could be used to modify the configmap of the vfw or vpkg vf-modules.
In order to provide configuration instantiation capability, the standard config-assign and config-deploy workflows have been changed into imperative workflows with a first step responsible for collection of information for configuration templating and configuration instantiation. The source of data for these operations is AAI and MDSAL with data for the vnf and vf-modules, as config-assign and config-deploy do not receive dedicated input parameters from SO. In consequence, both operations need to source data from the resource-assignment phase and from data placed in AAI and MDSAL.
vFW CNF config-assign workflow is following:
"config-assign": {
"steps": {
"config-setup": {
"description": "Gather necessary input for config template upload",
"target": "config-setup-process",
"activities": [
{
"call_operation": "ResourceResolutionComponent.process"
}
],
"on_success": [
"config-template"
]
},
"config-template": {
"description": "Generate and upload K8s config template",
"target": "k8s-config-template",
"activities": [
{
"call_operation": "K8sConfigTemplateComponent.process"
}
]
}
},
vFW CNF config-deploy workflow is following:
"config-deploy": {
"steps": {
"config-setup": {
"description": "Gather necessary input for config init and status verification",
"target": "config-setup-process",
"activities": [
{
"call_operation": "ResourceResolutionComponent.process"
}
],
"on_success": [
"config-apply"
]
},
"config-apply": {
"description": "Activate K8s config template",
"target": "k8s-config-apply",
"activities": [
{
"call_operation": "K8sConfigTemplateComponent.process"
}
],
"on_success": [
"status-verification-script"
]
},
In our example, the configuration template for the vFW CNF is a helm package that contains the same resource that we can find in the vPKG vfw-cnf-cds-vpkg-profile profile: the extra ssh service. This helm package contains the Helm encapsulation for the ssh-service and the values.yaml file with a declaration of all the inputs that may parametrize the ssh-service. The configuration templating step leverages the component-k8s-config-template component that prepares the configuration template and uploads it to k8splugin. In consequence, it may be used later on for instantiation of the configuration.
In this use case we have two options, with ssh-service-config and ssh-service-config-customizable as sources of the same configuration template. In consequence, either we take a complete template, or we have the template folder with the content of the helm package and CDS may perform dedicated resource resolution for it, with templating of all the files with .vtl extensions. The process is very similar to the one described for the profile upload functionality.
"k8s-config-template": {
"type": "component-k8s-config-template",
"interfaces": {
"K8sConfigTemplateComponent": {
"operations": {
"process": {
"inputs": {
"artifact-prefix-names": [
"helm_vpkg"
],
"resource-assignment-map": {
"get_attribute": [
"config-setup-process",
"",
"assignment-map",
"config-deploy",
"config-deploy-setup"
]
}
}
}
}
}
},
"artifacts": {
"ssh-service-config": {
"type": "artifact-k8sconfig-content",
"file": "Templates/k8s-configs/ssh-service.tar.gz"
},
"ssh-service-config-customizable": {
"type": "artifact-k8sconfig-content",
"file": "Templates/k8s-configs/ssh-service-config"
},
"ssh-service-config-customizable-mapping": {
"type": "artifact-mapping-resource",
"file": "Templates/k8s-configs/ssh-service-config/ssh-service-mapping.json"
}
}
}
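As an illustration of the customizable variant, a values.yaml.vtl file inside the ssh-service-config folder could declare attributes that CDS resolves during the templating step; the attribute name below is a hypothetical example, not taken from the actual CBA:
## Templates/k8s-configs/ssh-service-config/values.yaml.vtl (illustrative sketch)
service:
  type: NodePort
  nodePort: ${ssh-service-port}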
The component-k8s-config-template that stands behind the creation of the configuration template has input parameters that can be passed directly (checked first) or taken from the resource-assignment-map parameter, which can be the result of an associated component-resource-resolution step. In the vFW CNF use case, these values are resolved at vf-module level in the dedicated config-assign and config-deploy resource assignment step. The component-k8s-config-template inputs are the following:
k8s-rb-definition-name - the name under which RB definition was created - VF Module Model Invariant ID in ONAP
k8s-rb-definition-version - the version of created RB definition name - VF Module Model Version ID in ONAP
k8s-rb-config-template-name - (mandatory) the name under which the configuration template will be created in the k8s plugin. The other parameters are required only when the configuration template must be uploaded because it does not exist yet
k8s-rb-config-template-source - the source of the config template content - the name of the artifact of the configuration template. If missing, k8s-rb-config-template-name is treated as the source
resource-assignment-map - result of the associated resource assignment step - it may deliver values of inputs if they are not specified directly
artifact-prefix-names - (mandatory) the list of artifact prefixes like for the resource-assignment step in the resource-assignment workflow, or a subset of it
In our case the component-k8s-config-template component receives all its inputs from the dedicated resource-assignment process config-setup, which is responsible for the resolution of all the inputs for configuration templating. This process generates data for the helm_vpkg prefix, and this prefix is specified in the list of prefixes of the configuration template component. This means that the configuration template will be prepared only for the vPKG function.
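For comparison, the same component could receive its mandatory inputs directly instead of through the resource-assignment-map; a minimal sketch with illustrative values:
"inputs": {
  "artifact-prefix-names": [
    "helm_vpkg"
  ],
  "k8s-rb-config-template-name": "ssh-service-template",
  "k8s-rb-config-template-source": "ssh-service-config"
}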
"k8s-config-apply": {
"type": "component-k8s-config-value",
"interfaces": {
"K8sConfigValueComponent": {
"operations": {
"process": {
"inputs": {
"artifact-prefix-names": [
"helm_vpkg"
],
"k8s-config-operation-type": "create",
"resource-assignment-map": {
"get_attribute": [
"config-setup-process",
"",
"assignment-map",
"config-deploy",
"config-deploy-setup"
]
}
}
}
}
}
},
"artifacts": {
"ssh-service-default": {
"type": "artifact-k8sconfig-content",
"file": "Templates/k8s-configs/ssh-service-config/values.yaml"
},
"ssh-service-config": {
"type": "artifact-k8sconfig-content",
"file": "Templates/k8s-configs/ssh-service-values/values.yaml.vtl"
},
"ssh-service-config-mapping": {
"type": "artifact-mapping-resource",
"file": "Templates/k8s-configs/ssh-service-values/ssh-service-mapping.json"
}
}
}
The component-k8s-config-value that stands behind the creation of the configuration instance has input parameters that can be passed directly (checked first) or taken from the resource-assignment-map parameter, which can be the result of an associated component-resource-resolution step. In the vFW CNF use case, these values are resolved at vf-module level in the dedicated config-assign and config-deploy resource assignment step. The component-k8s-config-value inputs are the following:
k8s-rb-config-name - (mandatory) the name under which the configuration instance will be created in the k8s plugin
k8s-rb-config-template-name - (mandatory) the name of the configuration template from which the configuration instance is created
k8s-rb-config-value-source - the source of the config values content - the name of the artifact with the configuration values. If missing, k8s-rb-config-name is treated as the source
k8s-instance-id - (mandatory) the identifier of the rb instance for which the configuration should be applied
k8s-config-operation-type - the type of the configuration operation to perform: create, update or delete. By default, the create operation is performed
resource-assignment-map - result of the associated resource assignment step - it may deliver values of inputs if they are not specified directly
artifact-prefix-names - (mandatory) the list of artifact prefixes like for the resource-assignment step in the resource-assignment workflow, or a subset of it
Like for the configuration template, the component-k8s-config-value component receives all its inputs from the dedicated resource-assignment process config-setup, which is responsible for the resolution of all the inputs for the configuration. This process generates data for the helm_vpkg prefix, and this prefix is specified in the list of prefixes of the configuration values component. This means that the configuration instance will be created only for the vPKG function (the component also allows update or delete of the configuration, but in the vFW CNF case it is used only to create the configuration instance).
The CBA of the vFW CNF use case is already enriched, and the VSP of the vFW CNF has the CBA included inside. Consequently, when the VSP is onboarded and the service is distributed, the CBA is uploaded into CDS. In any case, CDS contains in its starter dictionary all the data dictionary values used in the use case, so enrichment of the CBA would work as well.
Note
The CBA for this use case is already enriched and there is no need to perform the enrichment process for it. It is also automatically uploaded into CDS at the time of model distribution from SDC.
Further information about the use case, the role of CDS, and all the steps required to reproduce the process can be found on the dedicated web page.
The vFW CNF use case is an official use case used for verification of the CNF Orchestration extensions.
CDS Designer UI
Designer Guide
Note
How to Get Started with CDS Designer UI
If you’re new to CDS Designer UI and need to get set up, the following guides may be helpful:
Getting Started
This is your CDS Designer UI guide. No matter how experienced you are or what you want to achieve, it should cover everything you need to know — from navigating the interface to making the most of different features.
What is CDS Designer UI?
CDS Designer UI is a framework to automate the resolution of resources for instantiation and any config provisioning operation, such as day0, day1, or day2 configuration. CDS has both design-time and run-time activities; during design time, a designer can define what actions are required for a given service, along with anything comprising the action. The design produces a CBA package. Its content is driven from a catalog of reusable data dictionaries and components, delivering a reusable and simplified self-service experience. CDS modeling is mainly based on the TOSCA standard, using JSON as a representation.
What’s new?
Create full CBA packages from built-in forms without programming
Customizable CBA Package actions
Import old packages for edit and collaboration
Easily create and manage lists of data via the interface (Data Dictionary, Controller Catalog, and Config Management)
Create sophisticated package workflows in a no-code graphical designer
Integration between CDS UI and SDC Services
Overview of CDS Interface
Full CDS UI screens are available in InVision
CDS main menu: Access the list of all CDS modules, including Packages, Data Dictionary, Controller Catalog, etc.
Profile: Access user profile information
Module Title: See the current module name and the total number of items in the module list
Module list: View all active items in the module, with tools for search and filtering
CBA Packages
Package List
It gives you quick access to all packages and to the most recently created/edited ones
Module Tabs: Access All, Deployed, Under Construction, or Archived packages
Search: Search for a package by title
Filter: Filter packages by package tags
Package Sort: Sort packages by most recent, name (alphanumeric), or version
List Pagination: navigate between package list pages
Create Package: Create a new CBA package
Import Package: Import packages that were previously created in the CDS Editor or Designer, by the current or another user
Package box: Shows brief details of the package and gives access to some package actions
Deployed package indicator
Package name and version
More menu: Access a list of actions including Clone, Archive, Download, and Delete
Last modified: Shows the user name, date, and time of the last modification made to the package
Package Description
Package Tags
Collaborators: See who is collaborating on editing the package
Configuration button: Go directly to package configuration
Designer Mode: Indicates the package mode (Designer, Scripting, or Generic scripting); clicking on it loads the mode screen
Create a New CBA Package
User Flow
Create a New Package
You can create a new CBA package by creating a new custom package or by importing a package file that was created before.
Note
Create/Import Package: You can't create/import a CBA package that has the same name and version as an existing package. Packages can have the same name but different version numbers (e.g., Package one v1.0.0 & Package one v1.0.1).
Create a New Custom CBA Package
From the Packages page, click on the Create Package button to navigate to Package Configuration.
MetaData
In the MetaData tab, select the Package Mode and enter the package Name, Version, Description, and other configurations
Once you fill in all required inputs, you can save this package by clicking the Save button in the Actions menu
Package Info Box: It is at the top of the configuration tabs and appears after you save a package for the first time
You can continue adding package configuration or go directly to the Designer Mode screen from Package info box
All changes will be saved when you click on the Save button
To close the package configuration and go back to the Package list, navigate to the top left in breadcrumb and click the CBA Packages link or click on the Packages link in the Main menu.
Template & Mapping
You can create as many templates as you need, using
artifact-mapping-resource (Artifact Type -> Mapping) and/or artifact-template-velocity (Artifact Type -> Velocity)
Template name
Template Section: Where you include template attributes
Manage Mapping: Here the automapping process maps template attributes to the data dictionary entries that will be used to resolve a particular resource.
Template Section
Template Type: The template is defined in one of three templating languages (Velocity, Jinja, Kotlin)
Import Template Attributes/Parameters: You can add attributes by importing an attribute list file
Insert Template Attributes/Parameters Manually: You can insert attributes manually in the code editor. The code editor validates attributes according to the pre-selected template type
Import Template Attributes
After importing attributes, you can add, edit, or delete attributes in the code editor.
Manage Mapping Section
Use current Template Instance: You can use attributes from the Template section
Upload Attributes List: In case you don't have existing attributes in the Template section, or you have different attributes, you can upload an attributes list
Once you select the source of attributes, you get a confirmation that fetching succeeded.
Then the Mapped Table appears to show the Resource Dictionary reference.
When you finish the creation process, you must click on the Finish button (1) to submit the template, or you can clear all data by clicking on the Clear button (2).
Scripts
Allowed file types: Kotlin (kt), Python (py), Jython, Ansible
To add script file/s, you have two options:
Create Script
Import File
Enter file URL: A script file can be stored on a server; you can add it by copying and pasting the file URL into the URL input and pressing the ENTER key.
Create a Script File
File Name: Add the script file name
Script Type: Choose script type (Kotlin, Jython, Ansible)
Script Editor: Enter the script file content
After you type the script, click on the Create Script button to save it
After adding script file(s), you can:
1. Edit file: edit each script file from the code editor
2. Delete file
Definitions
To define a data type that represents the schema of a specific type of data, you have to enrich the package to automatically generate all definition files:
Enrich Package: from the package details box, click on the Enrich button
Once you successfully enrich the package, all definition files will be listed.
For definition file(s), you can delete the file.
External System Authentication Properties
In order to populate the system information within the package, you have to provide dsl_definitions, as sketched below.
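For example, a minimal sketch of dsl_definitions holding authentication properties for an external system; the endpoint name, URL, and token below are illustrative placeholders, not values from this guide:
"dsl_definitions": {
  "ipam-1": {
    "type": "token-auth",
    "url": "http://netbox-nginx:8080",
    "token": "<token-value>"
  }
}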
Topology Template
Here you can manually add your package:
A workflow that defines an overall action to be taken on the service
A node/component template that is used to represent a functionality along with its contracts, such as inputs, outputs, and attributes (a minimal sketch follows)
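A minimal sketch of a topology_template combining both elements, modeled after the workflow and node template JSON shown earlier in this document (all names are illustrative):
"topology_template": {
  "workflows": {
    "resource-assignment": {
      "steps": {
        "resource-assignment": {
          "description": "Resource assignment workflow",
          "target": "resource-assignment",
          "activities": [
            {
              "call_operation": "ResourceResolutionComponent.process"
            }
          ]
        }
      }
    }
  },
  "node_templates": {
    "resource-assignment": {
      "type": "component-resource-resolution",
      "interfaces": {
        "ResourceResolutionComponent": {
          "operations": {
            "process": {
              "inputs": {
                "artifact-prefix-names": ["base-config"]
              }
            }
          }
        }
      }
    }
  }
}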
Hello World CBA Reference
Offered APIs
Offered APIs
Blueprint Processor API Reference
Introduction
This section shows all resources and endpoints which the CDS BP processor currently provides, through a swagger file which is automatically created during the CDS build process by the Swagger Maven Plugin. A corresponding Postman collection is also included. Endpoints can also be described using the template api-doc-template.rst, but this is not the preferred way to describe the CDS API.
You can find a sample workflow tutorial below which will show how to use the endpoints in the right order. This will give you a better understanding of the CDS Blueprint Processor API.
Getting Started
If you can't access a running CDS Blueprint Processor yet, you can choose one of the options below to run it. Afterwards you can start trying out the API.
CDS in Microk8s: https://wiki.onap.org/display/DW/Running+CDS+on+Microk8s (RDT link to be added)
CDS in Minikube: https://wiki.onap.org/display/DW/Running+CDS+in+minikube (RDT link to be added)
CDS in an IDE: Running BP Processor Microservice in an IDE
Download
Here is the automatically created swagger file for CDS Blueprint Processor API:
cds-bp-processor-api-swagger.json
You can find a Postman collection including sample requests for all endpoints here: bp-processor.postman_collection.json.
Please keep the Postman collection up-to-date for new endpoints.
General Setup
All endpoints are accessible under http://{{host}}:{{port}}/api/v1/. Host and port depend on your CDS BP processor deployment.
List all endpoints
Lists all available endpoints from blueprints processor API.
Request
http://{{host}}:{{port}}/actuator/mappings
Lists all endpoints from blueprints processor.
curl --location --request GET 'http://localhost:8081/actuator/mappings' \
--header 'Authorization: Basic Y2NzZGthcHBzOmNjc2RrYXBwcw=='
Success Response
HTTP Status 200 OK
{
"contexts": {
"application": {
"mappings": {
"dispatcherHandlers": {
"webHandler": [
...
{
"predicate": "{GET /api/v1/blueprint-model, produces [application/json]}",
"handler": "org.onap.ccsdk.cds.blueprintsprocessor.designer.api.BlueprintModelController#allBlueprintModel()",
"details": {
"handlerMethod": {
"className": "org.onap.ccsdk.cds.blueprintsprocessor.designer.api.BlueprintModelController",
"name": "allBlueprintModel",
"descriptor": "()Ljava/util/List;"
},
"handlerFunction": null,
"requestMappingConditions": {
"consumes": [],
"headers": [],
"methods": [
"GET"
],
"params": [],
"patterns": [
"/api/v1/blueprint-model"
],
"produces": [
{
"mediaType": "application/json",
"negated": false
}
]
}
}
},
{
"predicate": "{GET /api/v1/blueprint-model/meta-data/{keyword}, produces [application/json]}",
"handler": "org.onap.ccsdk.cds.blueprintsprocessor.designer.api.BlueprintModelController#allBlueprintModelMetaData(String, Continuation)",
"details": {
"handlerMethod": {
"className": "org.onap.ccsdk.cds.blueprintsprocessor.designer.api.BlueprintModelController",
"name": "allBlueprintModelMetaData",
"descriptor": "(Ljava/lang/String;Lkotlin/coroutines/Continuation;)Ljava/lang/Object;"
},
"handlerFunction": null,
"requestMappingConditions": {
"consumes": [],
"headers": [],
"methods": [
"GET"
],
"params": [],
"patterns": [
"/api/v1/blueprint-model/meta-data/{keyword}"
],
"produces": [
{
"mediaType": "application/json",
"negated": false
}
]
}
}
}
...
]
}
},
"parentId": null
}
}
}
API Reference
Warning
In the used Sphinx plugin sphinxcontrib-swaggerdoc some information of the swagger file is not rendered completely, e.g. the request body. Use your favorite Swagger Editor and paste the swagger file to get a complete view of the API reference, e.g. on https://editor.swagger.io/.
Blueprint Model Catalog
GET /api/v1/blueprint-model
List all Blueprint Models
Description: Lists all meta-data of blueprint models which are saved in CDS.
Produces: [‘application/json’]
Responses
200 - OK
500 - Internal Server Error
POST /api/v1/blueprint-model
Save a Blueprint Model
Description: Saves a blueprint model by the given CBA zip file input. There is no validation of the attached CBA happening when this API is called.
Consumes: [‘multipart/form-data’]
Produces: [‘application/json’]
Parameters
Name | Position | Description | Type
---|---|---|---
file | body | CBA file to be uploaded (example: cba.zip) |
Responses
200 - OK
500 - Internal Server Error
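A minimal curl sketch for this endpoint, reusing the local host, port, and basic-auth credentials from the mappings example above (the file name cba.zip is illustrative):
curl --location --request POST 'http://localhost:8081/api/v1/blueprint-model' \
--header 'Authorization: Basic Y2NzZGthcHBzOmNjc2RrYXBwcw==' \
--form 'file=@cba.zip'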
POST /api/v1/blueprint-model/bootstrap
Bootstrap CDS
Description: Loads all Model Types, Resource Dictionaries and Blueprint Models which are included in CDS by default. Before starting to work with CDS, bootstrap should be called to load all the basic models that each organization might support. Parameter values can be set as `false` to skip loading e.g. the Resource Dictionaries, but this is not recommended.
Consumes: [‘application/json’]
Produces: [‘application/json’]
Parameters
Name | Position | Description | Type
---|---|---|---
body | body | Specifies which elements to load |
Responses
200 - OK
500 - Internal Server Error
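A sketch of a bootstrap call, assuming a local deployment and the default credentials used in the earlier example; the three body flags mirror the elements described above:
curl --location --request POST 'http://localhost:8081/api/v1/blueprint-model/bootstrap' \
--header 'Authorization: Basic Y2NzZGthcHBzOmNjc2RrYXBwcw==' \
--header 'Content-Type: application/json' \
--data-raw '{"loadModelType": true, "loadResourceDictionary": true, "loadCBA": true}'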
GET /api/v1/blueprint-model/by-name/{name}/version/{version}
Get a Blueprint Model by Name and Version
Description: Get Meta-Data of a Blueprint Model by its name and version.
Produces: [‘application/json’]
Parameters
Name | Position | Description | Type
---|---|---|---
name | path | Name of the blueprint model | string
version | path | Version of the blueprint model | string
Responses
200 - OK
404 - Not Found
GET /api/v1/blueprint-model/download/by-name/{name}/version/{version}
Download a Blueprint Model
Description: Gets the CBA of a blueprint model by its name and version. Response can be saved to a file to download the CBA.
Produces: [‘application/json’]
Parameters
Name | Position | Description | Type
---|---|---|---
name | path | Name of the blueprint model | string
version | path | Version of the blueprint model | string
Responses
200 - OK
404 - Not Found
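A sketch of downloading a CBA and saving the response to a file (blueprint name and version are illustrative):
curl --location --request GET 'http://localhost:8081/api/v1/blueprint-model/download/by-name/pnf_netconf/version/1.0.0' \
--header 'Authorization: Basic Y2NzZGthcHBzOmNjc2RrYXBwcw==' \
--output pnf_netconf.zip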
GET /api/v1/blueprint-model/download/{id}
Download a Blueprint Model by ID
Description: Gets the CBA of a blueprint model by its ID. Response can be saved to a file to download the CBA.
Produces: [‘application/json’]
Parameters
Name | Position | Description | Type
---|---|---|---
id | path | ID of the blueprint model to download | string
Responses
200 - OK
404 - Not Found
POST /api/v1/blueprint-model/enrich
Enrich a Blueprint Model
Description: Enriches the attached CBA and returns the enriched CBA zip file in the response. The enrichment process will complete the package by providing all the definition of types used.
Consumes: [‘multipart/form-data’]
Produces: [‘application/json’]
Parameters
Name | Position | Description | Type
---|---|---|---
file | body | CBA zip file to be uploaded (example: cba_unenriched.zip) |
Responses
200 - successful operation
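A sketch of enriching a CBA and saving the returned enriched archive (file names are illustrative):
curl --location --request POST 'http://localhost:8081/api/v1/blueprint-model/enrich' \
--header 'Authorization: Basic Y2NzZGthcHBzOmNjc2RrYXBwcw==' \
--form 'file=@cba_unenriched.zip' \
--output cba_enriched.zip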
POST /api/v1/blueprint-model/enrichandpublish
Enrich and publish a Blueprint Model
Description: Enriches the attached CBA, validates it and saves it in CDS if validation was successful.
Consumes: [‘multipart/form-data’]
Produces: [‘application/json’]
Parameters
Name | Position | Description | Type
---|---|---|---
file | body | Unenriched CBA zip file to be uploaded (example: cba_unenriched.zip) |
Responses
200 - OK
503 - Service Unavailable
GET /api/v1/blueprint-model/meta-data/{keyword}
Search for Blueprints by a Keyword
Description: Lists all blueprint models by a matching keyword in any of the meta-data of the blueprint models. Blueprint models are only returned if a whole keyword matches, not just part of it. The search is not case-sensitive. Used by the CDS UI.
Produces: [‘application/json’]
Parameters
Name | Position | Description | Type
---|---|---|---
keyword | path | Keyword to search for in blueprint model meta-data | string
Responses
200 - successful operation
DELETE /api/v1/blueprint-model/name/{name}/version/{version}
Delete a Blueprint Model by Name
Description: Deletes a blueprint model identified by its name and version from CDS.
Parameters
Name | Position | Description | Type
---|---|---|---
name | path | Name of the blueprint model | string
version | path | Version of the blueprint model | string
Responses
200 - successful operation
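A sketch of deleting a blueprint model by name and version (values are illustrative):
curl --location --request DELETE 'http://localhost:8081/api/v1/blueprint-model/name/pnf_netconf/version/1.0.0' \
--header 'Authorization: Basic Y2NzZGthcHBzOmNjc2RrYXBwcw=='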
GET /api/v1/blueprint-model/paged
Get Blueprints ordered
Description: Lists all blueprint models which are saved in CDS in an ordered mode.
Produces: [‘application/json’]
Parameters
Name | Position | Description | Type
---|---|---|---
limit | query | Maximum number of returned blueprint models | integer
offset | query | Offset | integer
sort | query | Order of returned blueprint models | string
sortType | query | Ascend or descend ordering | string
Responses
200 - successful operation
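A sketch of a paged query; the query parameter values shown here are illustrative assumptions, so check the swagger file for the accepted sort values:
curl --location --request GET 'http://localhost:8081/api/v1/blueprint-model/paged?limit=10&offset=0&sort=NAME&sortType=ASC' \
--header 'Authorization: Basic Y2NzZGthcHBzOmNjc2RrYXBwcw=='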
GET /api/v1/blueprint-model/paged/meta-data/{keyword}
Search for Blueprints by a Keyword in an ordered mode
Description: Lists all blueprint models by a matching keyword in any of the meta-data of the blueprint models, in an ordered mode. Blueprint models are only returned if a whole keyword matches, not just part of it. The search is not case-sensitive. Used by the CDS UI.
Produces: [‘application/json’]
Parameters
Name | Position | Description | Type
---|---|---|---
keyword | path | Keyword to search for in blueprint model meta-data | string
limit | query | Maximum number of returned blueprint models | integer
offset | query | Offset | integer
sort | query | Order of returned blueprint models | string
sortType | query | Ascend or descend ordering | string
Responses
200 - successful operation
POST /api/v1/blueprint-model/publish
Publish a Blueprint Model
Description: Validates the attached CBA file and saves it in CDS if validation was successful. CBA needs to be already enriched.
Consumes: [‘multipart/form-data’]
Produces: [‘application/json’]
Parameters
Name | Position | Description | Type
---|---|---|---
file | body | Enriched CBA zip file to be uploaded (example: cba_enriched.zip) |
Responses
200 - successful operation
GET /api/v1/blueprint-model/search/{tags}
Search for a Blueprint by Tag
Description: Searches for all blueprint models which contain the specified input parameter in their tags. Blueprint models which contain just parts of the searched word in their tags are also returned.
Produces: [‘application/json’]
Parameters
Name | Position | Description | Type
---|---|---|---
tags | path | Tag to search for | string
Responses
200 - successful operation
POST /api/v1/blueprint-model/workflow-spec
Get Workflow Specification
Description: Get the workflow of a blueprint, identified by blueprint name and workflow name. The inputs, outputs and data types of the workflow are returned.
Consumes: [‘application/json’]
Produces: [‘application/json’]
Parameters
Name | Position | Description | Type
---|---|---|---
body | body | Blueprint and workflow identification |
Responses
200 - successful operation
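A sketch of a workflow-spec request; the body field names below are an assumption based on the description (blueprint and workflow identification), so verify them against the swagger file:
curl --location --request POST 'http://localhost:8081/api/v1/blueprint-model/workflow-spec' \
--header 'Authorization: Basic Y2NzZGthcHBzOmNjc2RrYXBwcw==' \
--header 'Content-Type: application/json' \
--data-raw '{"blueprintName": "pnf_netconf", "version": "1.0.0", "workFlowName": "config-assign"}'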
GET /api/v1/blueprint-model/workflows/blueprint-name/{name}/version/{version}
Get Workflows of a Blueprint
Description: Get all available workflows of a Blueprint identified by its name and version.
Produces: [‘application/json’]
Parameters
Name | Position | Description | Type
---|---|---|---
name | path | Name of the blueprint model | string
version | path | Version of the blueprint model | string
Responses
200 - successful operation
GET /api/v1/blueprint-model/{id}
Get a Blueprint Model by ID
Description: Get meta-data of a blueprint model by its internally created ID.
Produces: [‘application/json’]
Parameters
Name | Position | Description | Type
---|---|---|---
id | path | ID of the blueprint model to search for | string
Responses
200 - OK
404 - Not Found
DELETE /api/v1/blueprint-model/{id}
Delete a Blueprint Model by ID
Description: Delete a blueprint model by its ID. The ID is the internally created ID of the blueprint, not its name.
Parameters
Name | Position | Description | Type
---|---|---|---
id | path | ID of the blueprint model to delete | string
Responses
200 - OK
404 - RESOURCE_NOT_FOUND
Model Type Catalog
POST /api/v1/model-type/
Save a model type
Description: Save a model type by model type definition provided.
Consumes: [‘application/json’]
Produces: [‘application/json’]
Parameters
Name | Position | Description | Type
---|---|---|---
body | body | |
Responses
200 - successful operation
GET /api/v1/model-type/by-definition/{definitionType}
Retrieve a list of model types
Description: Retrieve a list of model types by definition type provided.
Produces: [‘application/json’]
Parameters
Name | Position | Description | Type
---|---|---|---
definitionType | path | | string
Responses
200 - successful operation
GET /api/v1/model-type/search/{tags}
Retrieve a list of model types
Description: Retrieve a list of model types by tags provided.
Produces: [‘application/json’]
Parameters
Name | Position | Description | Type
---|---|---|---
tags | path | | string
Responses
200 - successful operation
GET /api/v1/model-type/{name}
Retrieve a model type
Description: Retrieve a model type by name provided.
Produces: [‘application/json’]
Parameters
Name | Position | Description | Type
---|---|---|---
name | path | | string
Responses
200 - successful operation
DELETE /api/v1/model-type/{name}
Remove a model type
Description: Remove a model type by name provided.
Parameters
Name | Position | Description | Type
---|---|---|---
name | path | | string
Responses
200 - successful operation
Resource configuration
GET /api/v1/configs
Retrieve a resource configuration snapshot
Description: Retrieve a config snapshot, identified by its Resource Id and Type. An extra ‘format’ parameter can be passed to tell what content-type is expected.
Produces: [‘text/plain’, ‘application/json’, ‘application/xml’]
Parameters
Name | Position | Description | Type
---|---|---|---
resourceType | query | Resource Type associated with the resource configuration snapshot | string
resourceId | query | Resource Id associated with the resource configuration snapshot | string
status | query | Status of the snapshot being retrieved | string
format | query | Expected format of the snapshot being retrieved | string
Responses
200 - successful operation
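A sketch of retrieving a config snapshot (query parameter values are illustrative):
curl --location --request GET 'http://localhost:8081/api/v1/configs?resourceType=vf-module&resourceId=1234&status=RUNNING&format=application/json' \
--header 'Authorization: Basic Y2NzZGthcHBzOmNjc2RrYXBwcw=='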
GET /api/v1/configs/allByID
Retrieve all resource configuration snapshots identified by a given resource_id
Description: Retrieve all config snapshots, identified by its Resource Id, ordered by most recently created/modified date.
Produces: [‘application/json’]
Parameters
Name | Position | Description | Type
---|---|---|---
resourceId | query | Resource Id associated with the resource configuration snapshots | string
status | query | Status of the snapshot being retrieved | string
Responses
200 - successful operation
GET /api/v1/configs/allByType
Retrieve all resource configuration snapshots for a given resource type
Description: Retrieve all config snapshots matching a specified Resource Type, ordered by most recently created/modified date.
Produces: [‘application/json’]
Parameters
Name | Position | Description | Type
---|---|---|---
resourceType | query | Resource Type associated with the resource configuration snapshot | string
status | query | Status of the snapshot being retrieved | string
Responses
200 - successful operation
POST /api/v1/configs/{resourceType}/{resourceId}/{status}
Store a resource configuration snapshot identified by resourceId, resourceType, status
Description: Store a resource configuration snapshot, identified by its resourceId and resourceType, and optionally its status, either RUNNING or CANDIDATE.
Produces: [‘application/json’]
Parameters
Name | Position | Description | Type
---|---|---|---
resourceType | path | Resource Type associated with the resolution | string
resourceId | path | Resource Id associated with the resolution | string
status | path | Status of the snapshot being stored | string
body | body | Config snapshot to store |
Responses
200 - successful operation
DELETE /api/v1/configs/{resourceType}/{resourceId}/{status}
Delete a resource configuration snapshot identified by resourceId, resourceType, status.
Description: Delete a resource configuration snapshot, identified by its resourceId and resourceType, and optionally its status, either RUNNING or CANDIDATE.
Parameters
Name | Position | Description | Type
---|---|---|---
resourceType | path | Resource Type associated with the resolution | string
resourceId | path | Resource Id associated with the resolution | string
status | path | Status of the snapshot being deleted | string
Responses
200 - successful operation
Resource dictionary
POST /api/v1/dictionary
Save a resource dictionary
Description: Save a resource dictionary by dictionary provided.
Consumes: [‘application/json’]
Produces: [‘application/json’]
Parameters
Name | Position | Description | Type
---|---|---|---
body | body | Resource dictionary to store |
Responses
200 - successful operation
POST /api/v1/dictionary/by-names
Search for a resource dictionary
Description: Search for a resource dictionary by names provided.
Consumes: [‘application/json’]
Produces: [‘application/json’]
Parameters
Name | Position | Description | Type
---|---|---|---
body | body | List of names |
Responses
200 - successful operation
POST /api/v1/dictionary/definition
Save a resource dictionary
Description: Save a resource dictionary by resource definition provided.
Consumes: [‘application/json’]
Produces: [‘application/json’]
Parameters
Name | Position | Description | Type
---|---|---|---
body | body | Resource definition to generate |
Responses
200 - successful operation
GET /api/v1/dictionary/resource_dictionary_group
Retrieve all resource dictionary groups
Description: Retrieve all resource dictionary groups.
Produces: [‘application/json’]
Responses
200 - successful operation
GET /api/v1/dictionary/search/{tags}
Search for a resource dictionary
Description: Search for a resource dictionary by tags provided.
Produces: [‘application/json’]
Parameters
Name | Position | Description | Type
---|---|---|---
tags | path | Tags list | string
Responses
200 - successful operation
GET /api/v1/dictionary/source-mapping
Search for a source mapping
Description: Search for a source mapping.
Produces: [‘application/json’]
Responses
200 - successful operation
GET /api/v1/dictionary/{name}
Retrieve a resource dictionary
Description: Retrieve a resource dictionary by name provided.
Produces: [‘application/json’]
Parameters
Name | Position | Description | Type
---|---|---|---
name | path | Name of the resource | string
Responses
200 - successful operation
DELETE /api/v1/dictionary/{name}
Remove a resource dictionary
Description: Remove a resource dictionary by name provided.
Parameters
Name | Position | Description | Type
---|---|---|---
name | path | Name of the resource | string
Responses
200 - successful operation
Resource template
POST /api/v1/template/{bpName}/{bpVersion}/{artifactName}/{resolutionKey}
Store a resolved template w/ resolution-key
Description: Store a template for a given CBA’s action, identified by its blueprint name, blueprint version, artifact name and resolution key.
Produces: [‘application/json’]
Parameters
Name | Position | Description | Type
---|---|---|---
bpName | path | Name of the CBA | string
bpVersion | path | Version of the CBA | string
artifactName | path | Artifact name for which to retrieve a resolved resource | string
resolutionKey | path | Resolution Key associated with the resolution | string
body | body | Template to store |
Responses
200 - successful operation
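A sketch of storing a resolved template under a resolution key (path values and template body are illustrative):
curl --location --request POST 'http://localhost:8081/api/v1/template/pnf_netconf/1.0.0/hostname/pnf-demo-1' \
--header 'Authorization: Basic Y2NzZGthcHBzOmNjc2RrYXBwcw==' \
--header 'Content-Type: application/json' \
--data-raw '{"hostname": "pnf-demo-1"}'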
POST /api/v1/template/{bpName}/{bpVersion}/{artifactName}/{resourceType}/{resourceId}
Store a resolved template w/ resourceId and resourceType
Description: Store a template for a given CBA’s action, identified by its blueprint name, blueprint version, artifact name, resourceId and resourceType.
Produces: [‘application/json’]
Parameters
Name | Position | Description | Type
---|---|---|---
bpName | path | Name of the CBA | string
bpVersion | path | Version of the CBA | string
artifactName | path | Artifact name for which to retrieve a resolved resource | string
resourceType | path | Resource Type associated with the resolution | string
resourceId | path | Resource Id associated with the resolution | string
body | body | Template to store |
Responses
200 - successful operation
Resources
GET /api/v1/resources
Get all resolved resources using the resolution key
Description: Retrieve all stored resolved resources using the blueprint name, blueprint version, artifact name and the resolution-key.
Produces: [‘application/json’]
Parameters
Name | Position | Description | Type
---|---|---|---
bpName | query | Name of the CBA | string
bpVersion | query | Version of the CBA | string
artifactName | query | Artifact name for which to retrieve a resolved resource | string
resolutionKey | query | Resolution Key associated with the resolution | string
resourceType | query | Resource Type associated with the resolution | string
resourceId | query | Resource Id associated with the resolution | string
Responses
200 - successful operation
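A sketch of fetching all resolved resources by resolution key (query values are illustrative):
curl --location --request GET 'http://localhost:8081/api/v1/resources?bpName=pnf_netconf&bpVersion=1.0.0&artifactName=hostname&resolutionKey=pnf-demo-1' \
--header 'Authorization: Basic Y2NzZGthcHBzOmNjc2RrYXBwcw=='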
DELETE /api/v1/resources
Delete resources using resolution key
Description: Delete all the resources associated to a resolution-key using blueprint metadata, artifact name and the resolution-key.
Produces: [‘application/json’]
Parameters
Name | Position | Description | Type
---|---|---|---
bpName | query | Name of the CBA | string
bpVersion | query | Version of the CBA | string
artifactName | query | Artifact name for which to retrieve a resolved resource | string
resolutionKey | query | Resolution Key associated with the resolution | string
Responses
200 - successful operation
GET /api/v1/resources/resource
Fetch a resource value using resolution key
Description: Retrieve a stored resource value using the blueprint metadata, artifact name, resolution-key along with the name of the resource value to retrieve.
Produces: [‘application/json’]
Parameters
Name | Position | Description | Type
---|---|---|---
bpName | query | Name of the CBA | string
bpVersion | query | Version of the CBA | string
artifactName | query | Artifact name for which to retrieve a resolved resource | string
resolutionKey | query | Resolution Key associated with the resolution | string
name | query | Name of the resource to retrieve | string
Responses
200 - successful operation
Workflow Tutorial
Introduction
This section shows a basic workflow for processing a CBA. For this we will follow the PNF Simulator use case guide. We will use the same CBA, but since this CBA is loaded during bootstrap by default, we will first delete it and afterwards manually enrich and save it in CDS. The referred use case shows how the day-n configuration is assigned and deployed to a PNF through CDS. You don't necessarily need a netconf server (which will act as a PNF Simulator) running to get an understanding of this workflow tutorial. Just be aware that without a netconf server set up, the day-n configuration deployment will fail in the last step.
Use the Postman collection from the referred use case to get sample requests for the following steps: json.
The CBA which we are using is downloadable here: zip. Hint: this CBA is also included in the CDS source code for bootstrapping.
Set up CDS
If not done before, run the Bootstrap request, which will call the Bootstrap API of CDS (POST /api/v1/blueprint-model/bootstrap) to load all the CDS default model artifacts into CDS. You should get HTTP status 200 for the below command.
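The bootstrap call has this shape (a sketch, assuming a local deployment and the default credentials; the Postman collection contains the ready-made request):
curl --location --request POST 'http://localhost:8081/api/v1/blueprint-model/bootstrap' \
--header 'Authorization: Basic Y2NzZGthcHBzOmNjc2RrYXBwcw==' \
--header 'Content-Type: application/json' \
--data-raw '{"loadModelType": true, "loadResourceDictionary": true, "loadCBA": true}'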
Call the Get Blueprints request to get all blueprint models which are saved in CDS. This will call the GET /api/v1/blueprint-model endpoint. You will see the blueprint model "artifactName": "pnf_netconf", which has been loaded by bootstrap since the Guilin release.
Since we want to load the CBA manually, first delete the desired CBA from CDS by calling the delete endpoint DELETE /api/v1/blueprint-model/name/{name}/version/{version}. If you call Get Blueprints again, you can see that the pnf_netconf CBA is now missing.
Because the CBA contains a custom data dictionary, we need to push the custom entries to CDS first by calling the Data Dictionary request. Actually, the custom entries are already loaded through bootstrap, but let's pretend they are not yet present in CDS.
Note
For every data dictionary entry, the CDS API needs to be called separately. The Postman collection contains a loop to go through all custom entries and call the data dictionary endpoint separately. To execute this loop, open the Runner in Postman and run the Data Dictionary request as shown in the picture below.
Enrichment
Enrich the blueprint by executing the Enrich Blueprint request. Take care to provide the CBA file (which you can download here: zip) in the request body. After the request has executed, download the response body as shown in the picture below; this will be your enriched CBA file.
Deploy/Save the Blueprint
Run the Save Blueprint request to save/deploy the blueprint into the CDS database. Take care to provide in the request body the enriched CBA file which you downloaded earlier. After that, you should see the new model "artifactName": "pnf_netconf" by calling the Get Blueprints request.
An alternative would be to use the POST /api/v1/blueprint-model/publish endpoint, which would also validate the CBA. For doing enrichment and saving the CBA in a single call, POST /api/v1/blueprint-model/enrichandpublish could also be used.
Config-Assign / Config-Deploy
From now on you can continue with the PNF Simulator use case from the section Config-assign and config-deploy to finish the workflow tutorial. The provided Postman collection already contains all the needed requests for this part as well, so you don't need to create the calls and payloads manually. Be aware that the last step will fail if you don't have a netconf server set up.
Controller Design Studio Presentation
For details about the CDS architecture and design, please click the link.
CDS_Architecture_Design