CONTROLLER DESIGN STUDIO (CDS)

Introduction

The system is designed to be self-service, which means that users, not just programmers, can reconfigure the software system as needed to meet customer requirements. To accomplish this goal, the system is built around models that allow real-time changes to how the system operates. Users merely need to change a model to change how a service operates.

Self-service is a completely new way of delivering services. It removes the dependence on code releases and the delays they cause, and puts the control of services into the hands of the service providers. They can change a model and its parameters and create a new service without writing a single line of code. This makes SERVICE PROVIDER(S) more responsive to its customers and able to deliver products that more closely match their needs.

Architecture

The Controller Design Studio is composed of two major components:
  • The GUI (or frontend)

  • The Run Time (or backend)

The GUI handles direct user input and allows for displaying both design time and run time activities. For design time, it allows for the creation of a controller blueprint, from selecting the DGs to be included, to incorporating the artifact templates, to adding the necessary components. For run time, it allows the user to direct the system to resolve the unresolved elements of the controller blueprint and download the resulting configuration into a VNF.

At a more basic level, it allows for the creation of data dictionaries, capabilities catalogs, and controller blueprints, the basic elements that are used to generate a configuration. The essential function of the Controller Design Studio is to create and populate a controller blueprint, create a configuration file from this controller blueprint, and download this configuration file (configlet) to a VNF/PNF.

[Image: CDS architecture]

Modeling Concept

In the Dublin release, the CDS community contributed a framework to automate the resolution of resources for instantiation and any config provisioning operation, such as day0, day1 or day2 configuration.

The content of the CBA Package is driven from a catalog of reusable data dictionaries, components and workflows, delivering a reusable and simplified self-service experience.

The model is TOSCA-based and JSON-formatted, following the standard: http://docs.oasis-open.org/tosca/TOSCA-Simple-Profile-YAML/v1.2/csd01/TOSCA-Simple-Profile-YAML-v1.2-csd01.html

Most of the TOSCA-modeled entities presented in the documentation below can be found here: https://github.com/onap/ccsdk-cds/tree/master/components/model-catalog/definition-type/starter-type

Tosca Model Reference:

[Image: TOSCA model]

Scripts

Library

User Guides

Developer Guide

Get Started with CDS

Running Blueprints Processor Microservice in an IDE

Objective

Run the blueprint processor locally in an IDE, while having the database running in a container. This way, code changes can be conveniently tested and debugged.

Check out the code

Check out the code from Gerrit: https://gerrit.onap.org/r/#/admin/projects/ccsdk/cds

Build it locally

In the checked out directory, type

mvn clean install -Pq -Dadditionalparam=-Xdoclint:none

Note

If the error invalid flag: --release appears when executing the Maven install command, you need to upgrade the Java version used by your local Maven installation. Use something like export JAVA_HOME=/usr/lib/jvm/java-11-openjdk-amd64.

Wait for the Maven install command to finish before going further.

Spin up a Docker container with the database

The Blueprints Processor project uses a database to store information about the blueprints and therefore it needs to be online before attempting to run it.

One way to create the database is by using the docker-compose.yaml file. This database requires a local directory to mount a volume; therefore, before running docker-compose, create the following directory:

mkdir -p -m 755 /opt/app/cds/mysql/data

Navigate to the docker-compose file in the distribution module:

cd ms/blueprintsprocessor/application/src/main/dc

And run docker-compose:

docker-compose up -d db

This should spin up a container of the MariaDB image in the background. To check if it has worked, this command can be used:

docker-compose logs -f

The phrase mysqld: ready for connections indicates that the database was started correctly.

From now on, the Docker container will be available on the computer; if it ever gets stopped, it can be started again by the command:

docker start <id of mariadb container>
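If you don't know the container's id or name, a quick lookup (a hedged sketch; the compose service is simply named db, so filtering on that name usually finds it):

docker ps -a --filter "name=db" --format "{{.ID}}  {{.Names}}  {{.Status}}"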
Set permissions on the local file system

The Blueprints Processor uses the local file system for some operations and therefore needs some existing and accessible paths to run properly.

Execute the following commands to create the needed directories, and grant access to the current user to modify them:

mkdir -p -m 755 /opt/app/onap/blueprints/archive
mkdir -p -m 755 /opt/app/onap/blueprints/deploy
mkdir -p -m 755 /opt/app/onap/scripts
sudo chown -R $(id -u):$(id -g) /opt/app/onap/
Import the project into the IDE

Note

IntelliJ IDEA is the recommended IDE for running the CDS blueprint processor.

Go to File | Open and choose the pom.xml file of the cds/ms/blueprintsprocessor directory:

[Image: Import project]

Import as a project. Sometimes it may be necessary to reimport the Maven project, e.g. if some dependencies can’t be found:

[Image: Reimport Maven project]

Override some application properties:

The next steps create a run configuration profile overriding some application properties with custom values, to reflect the local environment characteristics.

Navigate to the main class of the Blueprints Processor, the BlueprintProcessorApplication class:

ms/blueprintsprocessor/application/src/main/kotlin/org/onap/ccsdk/cds/blueprintsprocessor/BlueprintProcessorApplication.kt.

After dependencies are imported and indexes are set up, you will see a green arrow next to the main function of the BlueprintProcessorApplication class, indicating that a run configuration can now be created. Right-click inside the class at any point to load the context menu and select the option to create a run configuration from context:

[Image: Create run configuration]

The following window will open:

[Image: Run configuration]

Add the following custom property values in the field `VM Options`:

-Dspring.profiles.active=dev

Optional: You can override any value from the application-dev.properties file here. In that case, use the following pattern:

-D<application-dev.properties key>=<application-dev.properties value>
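For example, to keep the dev profile active and also override the HTTP port (the port key is shown for illustration and assumes it exists in your application-dev.properties):

-Dspring.profiles.active=dev
-Dblueprintsprocessor.httpPort=8080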

In the field ‘Working Directory’ browse to your application path .../cds/ms/blueprintsprocessor/application if the path is not already specified correctly.

Run configuration should now look something like this:

imageRunConfigSetUp

Add/replace the following in the Blueprints Processor’s application-dev.properties file.

blueprintsprocessor.grpcclient.remote-python.type=token-auth
blueprintsprocessor.grpcclient.remote-python.host=localhost
blueprintsprocessor.grpcclient.remote-python.port=50051
blueprintsprocessor.grpcclient.remote-python.token=Basic Y2NzZGthcHBzOmNjc2RrYXBwcw==

blueprintprocessor.remoteScriptCommand.enabled=true

Note that if a parameter already exists, change the value of the existing parameter rather than adding a duplicate.

Run the application:

Before running the Blueprints Processor, check that you are using the correct Java version in IntelliJ. Select either run or debug for the created run configuration to start the Blueprints Processor:

[Image: Run/debug]

[Image: Build logs]

Testing the application

There are two main features of the Blueprints Processor that can be of interest to a developer: blueprint publish and blueprint process.

To upload custom blueprints, the endpoint api/v1/execution-service/publish is used.

To process, the endpoint is api/v1/execution-service/process.

Postman can be used to send these requests; an example collection is available at https://www.getpostman.com/collections/b99863b0cde7565a32fc.
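A minimal curl sketch of the same calls, assuming the processor listens on localhost:8080 with the ccsdkapps/ccsdkapps credentials used elsewhere in this guide; my-cba.zip and process-request.json are hypothetical local files (the request body would look like the payloads shown in the use cases below):

# publish (upload) a CBA package
curl -u ccsdkapps:ccsdkapps -F file=@my-cba.zip \
  http://localhost:8080/api/v1/execution-service/publish

# execute a blueprint workflow
curl -u ccsdkapps:ccsdkapps -X POST -H "Content-Type: application/json" \
  -d @process-request.json \
  http://localhost:8080/api/v1/execution-service/process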

A detailed description of the usage of different APIs of CDS will follow.

Possible Fixes
Imported packages or annotations are not found, Run Config not available?
  1. Rebuild with maven install ... (see above)

  2. Potentially change Maven home directory in Settings

  3. Maven reimport in IDE

Compilation error?
  • Change Java Version to 11

Running CDS UI Locally

Prerequisites

Node version: >= 8.9
NPM version: >= 6.4.1

Check-out code
git clone "https://gerrit.onap.org/r/ccsdk/cds"
Install Node Modules (UI)

From the cds-ui/client directory, execute npm install to fetch project-dependent Node modules

Install Node Modules (Server)

From the cds-ui/server directory, execute npm install to fetch project-dependent Node modules

Run UI in Development Mode

From cds-ui/client directory, execute npm start to run the Angular Live Development Server

nirvanr01-mac:client nirvanr$ npm start
> cds-ui@0.0.0 start /Users/nirvanr/dev/git/onap/ccsdk/cds/cds-ui/client
> ng serve

** Angular Live Development Server is listening on localhost:4200, open your browser on http://localhost:4200/ **
Run UI Server

From the cds-ui/client directory, execute mvn clean compile and then npm run build to copy all front-end artifacts to the server/public directory

nirvanr01-mac:client nirvanr$ npm run build
> cds-ui@0.0.0 build /Users/nirvanr/dev/git/onap/ccsdk/cds/cds-ui/client
> ng build

From cds-ui/server directory, execute npm run start to build and start the front-end server

nirvanr01-mac:server nirvanr$ npm run start
> cds-ui-server@1.0.0 prestart /Users/nirvanr/dev/git/onap/ccsdk/cds/cds-ui/server
> npm run build
> cds-ui-server@1.0.0 build /Users/nirvanr/dev/git/onap/ccsdk/cds/cds-ui/server
> lb-tsc es2017 --outDir dist
> cds-ui-server@1.0.0 start /Users/nirvanr/dev/git/onap/ccsdk/cds/cds-ui/server
> node .

Server is running at http://127.0.0.1:3000
Try http://127.0.0.1:3000/ping
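To verify from another shell (assuming the default port 3000):

curl http://127.0.0.1:3000/ping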
Build UI Docker Image

From cds-ui/server directory, execute docker build -t cds-ui . to build a local CDS-UI Docker image

nirvanr01-mac:server nirvanr$ docker build -t cds-ui .
Sending build context to Docker daemon 96.73MB
Step 1/11 : FROM node:10-slim
---> 914bfdbef6aa
Step 2/11 : USER node
---> Using cache
---> 04d66cc13b46
Step 3/11 : RUN mkdir -p /home/node/app
---> Using cache
---> c9a44902da43
Step 4/11 : WORKDIR /home/node/app
---> Using cache
---> effb2329a39e
Step 5/11 : COPY --chown=node package*.json ./
---> Using cache
---> 4ad01897490e
Step 6/11 : RUN npm install
---> Using cache
---> 3ee8149b17e2
Step 7/11 : COPY --chown=node . .
---> e1c72f6caa15
Step 8/11 : RUN npm run build
---> Running in 5ec69a1961d0
> cds-ui-server@1.0.0 build /home/node/app
> lb-tsc es2017 --outDir dist
Removing intermediate container 5ec69a1961d0
---> ec9fb899e52c
Step 9/11 : ENV HOST=0.0.0.0 PORT=3000
---> Running in 19963303a09c
Removing intermediate container 19963303a09c
---> 6b3b45709e27
Step 10/11 : EXPOSE ${PORT}
---> Running in 78b9833c5050
Removing intermediate container 78b9833c5050
---> 3835c14ad17b
Step 11/11 : CMD [ "node", "." ]
---> Running in 79a98e6242dd
Removing intermediate container 79a98e6242dd
---> c41f6e6ba4de
Successfully built c41f6e6ba4de
Successfully tagged cds-ui:latest
Run UI Docker Image

Create docker-compose.yaml as below.

Note:

  • Replace <ip> with the host or IP address where the blueprint processor microservice is running.

version: '3.3'
services:
     cds-ui:
         image: cds-ui:latest
         container_name: cds-ui
         ports:
         - "3000:3000"
         restart: always
         environment:
         - HOST=0.0.0.0
         - API_BLUEPRINT_PROCESSOR_HTTP_BASE_URL=http://<ip>:8080/api/v1
         - API_BLUEPRINT_PROCESSOR_HTTP_AUTH_TOKEN=Basic Y2NzZGthcHBzOmNjc2RrYXBwcw==
         - API_BLUEPRINT_PROCESSOR_GRPC_HOST=<ip>
         - API_BLUEPRINT_PROCESSOR_GRPC_PORT=9111
         - API_BLUEPRINT_PROCESSOR_GRPC_AUTH_TOKEN=Basic Y2NzZGthcHBzOmNjc2RrYXBwcw==

Execute docker-compose up cds-ui

nirvanr01-mac:cds nirvanr$ docker-compose up cds-ui
Creating cds-ui ... done
Attaching to cds-ui
cds-ui         | Server is running at http://127.0.0.1:3000
cds-ui         | Try http://127.0.0.1:3000/ping

Blueprints Processor Microservice

A microservice to manage Controller Blueprint Models, such as Resource Dictionaries, Service Models and Velocity Templates, serving the Controller Design Studio and controller runtimes.

This microservice is used to deploy a Controller Blueprint Archive file into the runtime database. It also helps to verify that a CBA is valid.

Architecture

[Image: Blueprints Processor architecture]

Testing in local environment

Point your browser to http://localhost:8000/api/v1/execution-service/ping (please note that the port is 8000, not 8080)

To authenticate, use ccsdkapps as both the login user id and password.
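For example, with curl:

curl -u ccsdkapps:ccsdkapps http://localhost:8000/api/v1/execution-service/ping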

Installation Guide

Installation

ONAP is meant to be deployed within a Kubernetes environment; hence, the de-facto way to deploy CDS is through Kubernetes. ONAP also packages Kubernetes manifests as charts, using Helm.

Prerequisites

https://docs.onap.org/en/latest/guides/onap-operator/settingup/index.html#installation

Get the chart

Make sure to check out the release to use, by replacing $release-tag in the commands below:

git clone https://gerrit.onap.org/r/oom
git checkout tags/$release-tag

Customize blueprint-processor kafka messaging config (Optional)

Optionally, CDS can use Kafka native messaging to execute a blueprint use case. The blueprint-processor self-service API is the main API for interacting with CDS at runtime. The self-service-api topics carry actual request and response payloads, whereas the blueprint-processor self-service-api.audit topics carry redacted payloads (without sensitive data) for audit purposes.

By default, CDS will target the Strimzi Kafka cluster in ONAP. The Strimzi Kafka config is as follows:

# strimzi kafka config

useStrimziKafka: <true|false>

If useStrimziKafka is true, the following also applies:

  1. Strimzi will create an associated kafka user and the topics defined for Request and Audit elements below.

  2. The type must be kafka-scram-plain-text-auth.

  3. The bootstrapServers will target the strimzi kafka cluster by default.

The following fields are configurable via the chart's values.yaml (oom/kubernetes/cds/components/cds-blueprints-processor/values.yaml):

kafkaRequestConsumer:
  enabled: false
  type: kafka-basic-auth
  groupId: cds-consumer
  topic: cds.blueprint-processor.self-service-api.request
  clientId: request-receiver-client-id
  pollMillSec: 1000
kafkaRequestProducer:
  type: kafka-basic-auth
  clientId: request-producer-client-id
  topic: cds.blueprint-processor.self-service-api.response
  enableIdempotence: false
kafkaAuditRequest:
  enabled: false
  type: kafka-basic-auth
  clientId: audit-request-producer-client-id
  topic: cds.blueprint-processor.self-service-api.audit.request
  enableIdempotence: false
kafkaAuditResponse:
  type: kafka-basic-auth
  clientId: audit-response-producer-client-id
  topic: cds.blueprint-processor.self-service-api.audit.response
  enableIdempotence: false

Note: If more fine-grained customization is required, this can be done manually in the application.properties file before making the helm chart (oom/kubernetes/cds/components/cds-blueprints-processor/resources/config/application.properties).

Make the chart

cd oom/kubernetes
make cds

Install CDS

helm install --name cds cds

Result

$ kubectl get all --selector=release=cds
NAME                                             READY     STATUS    RESTARTS   AGE
pod/cds-blueprints-processor-54f758d69f-p98c2    0/1       Running   1          2m
pod/cds-cds-6bd674dc77-4gtdf                     1/1       Running   0          2m
pod/cds-cds-db-0                                 1/1       Running   0          2m
pod/cds-controller-blueprints-545bbf98cf-zwjfc   1/1       Running   0          2m

NAME                            TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)             AGE
service/blueprints-processor    ClusterIP   10.43.139.9     <none>        8080/TCP,9111/TCP   2m
service/cds                     NodePort    10.43.254.69    <none>        3000:30397/TCP      2m
service/cds-db                  ClusterIP   None            <none>        3306/TCP            2m
service/controller-blueprints   ClusterIP   10.43.207.152   <none>        8080/TCP            2m

NAME                                        DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/cds-blueprints-processor    1         1         1            0           2m
deployment.apps/cds-cds                     1         1         1            1           2m
deployment.apps/cds-controller-blueprints   1         1         1            1           2m

NAME                                                   DESIRED   CURRENT   READY     AGE
replicaset.apps/cds-blueprints-processor-54f758d69f    1         1         0         2m
replicaset.apps/cds-cds-6bd674dc77                     1         1         1         2m
replicaset.apps/cds-controller-blueprints-545bbf98cf   1         1         1         2m

NAME                          DESIRED   CURRENT   AGE
statefulset.apps/cds-cds-db   1         1         2m

Running CDS UI:

Running CDS UI Locally

Client:

Install Node.js and Angular CLI. Refer to https://angular.io/guide/quickstart.

npm install - in the directory cds/cds-ui/client
npm run build - to build the UI module

Loopback Server:

npm install - in the directory cds/cds-ui/server
npm start - should bring up the CDS UI page on your local machine at https://127.0.0.1:3000/

Design Time Tools Guide

Below are the requirements to enable automation for a service within ONAP.

For instantiation, the goal is to be able to automatically resolve all the HEAT/Helm variables, called cloud parameters.

For post-instantiation, the goal is to configure the VNF with initial configuration.

Prerequisite

  • Gather the cloud parameters:

Instantiation:

Have the HEAT template along with the HEAT environment file, or have the Helm chart along with the values.yaml file.

(CDS supports Helm/K8s, but whether SO → Multicloud supports it is a different story.)

Post-instantiation:

Have the configuration template to apply on the VNF.

  • XML for NETCONF

  • JSON / XML for RESTCONF

  • CLI [not supported yet]

  • JSON for Ansible [not supported yet]

  • Identify which template parameters are static and dynamic

  • Create and fill in a table for all the dynamic values

While doing so, identify the resources that are resolved using the same process; for instance, if two IPs have to be resolved through the same IPAM, the process to resolve the IPs is the same.

Services:

Controller Blueprint Archived Designer Tool (CBA)
Introduction

The Controller Blueprint Archive is the overall service design: a fully model-driven, intent-based package needed for SELF SERVICE provisioning and configuration management automation.

The CBA is a .zip file with the following folder structure; the files may vary:

├── Definitions
│   ├── blueprint.json                          Overall TOSCA service template (workflow + node_template)
│   ├── artifact_types.json                     (generated by enrichment)
│   ├── data_types.json                         (generated by enrichment)
│   ├── policy_types.json                       (generated by enrichment)
│   ├── node_types.json                         (generated by enrichment)
│   ├── relationship_types.json                 (generated by enrichment)
│   ├── resources_definition_types.json         (generated by enrichment, based on Data Dictionaries)
│   └── *-mapping.json                          One per Template
│
├── Environments                                Contains *.properties files as required by the service
│
├── Plans                                       Contains Directed Graph
│
├── Tests                                       Contains uat.yaml file for testing cba actions within a cba package
│
├── Scripts                                     Contains scripts
│   ├── python                                  Python scripts
│   └── kotlin                                  Kotlin scripts
│
├── TOSCA-Metadata
│   └── TOSCA.meta                              Meta-data of overall package
│
└── Templates                                   Contains combination of mapping and template

To process a CBA for any service we need to enrich it first. This gathers all the node-type, data-type, artifact-type and data-dictionary definitions provided in the blueprint.json.
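A hedged curl sketch of triggering enrichment over REST, assuming a blueprint processor on localhost:8080 with the ccsdkapps/ccsdkapps credentials used in this guide and a hypothetical local package my-cba.zip; the enriched package is returned as a zip:

curl -u ccsdkapps:ccsdkapps -F file=@my-cba.zip \
  -o my-cba-enriched.zip \
  http://localhost:8080/api/v1/blueprint-model/enrich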

Architecture

[Image: CBA architecture]

Data Flow

[Image: Data flow]

Installation
Building client html and js files

FROM alpine:3.8 as builder
RUN apk add --no-cache npm
WORKDIR /opt/cds-ui/client/
COPY client/package.json /opt/cds-ui/client/
RUN npm install
COPY client /opt/cds-ui/client/
RUN npm run build

Building and creating server

FROM alpine:3.8
WORKDIR /opt/cds-ui/
RUN apk add --no-cache npm
COPY server/package.json /opt/cds-ui/
RUN npm install
COPY server /opt/cds-ui/
COPY --from=builder /opt/cds-ui/server/public /opt/cds-ui/public
RUN npm run build
EXPOSE 3000
CMD [ "npm", "start" ]

Development
Prerequisites
  • Visual Studio code editor

  • Git bash

  • Node.js & npm

  • LoopBack 4 CLI

Steps

To compile CDS code:

  1. Make sure your local Maven settings file ($HOME/.m2/settings.xml) contains references to the ONAP repositories and OpenDaylight repositories.

  2. git clone https://(LFID)@gerrit.onap.org/r/a/ccsdk/cds

  3. cd cds ; mvn clean install ; cd ..

  4. Open the cds-ui/client code for development

Functional Decomposition

[Image: Functional decomposition]

Resource Definition
Introduction:

A Resource definition models how a specific resource can be resolved.

A resource is a variable/parameter in the context of the service. It can be anything, but it should not be confused with SDC or Openstack resources.

A Resource definition can have multiple sources to handle resolution in different ways. The main goal of a Resource definition is to define a re-usable entity that can be shared.

Creation of a Resource definition is a standalone activity, separate from the blueprint design.

As part of modelling a Resource definition entry, the following generic information should be provided:

[Image: Resource definition fields]

Below are the properties that all resource sources will have.

The modeling allows for data translation between an external capability and CDS, for both input and output key mapping.

[Image: Common resource source properties]

Example:

vf-module-model-customization-uuid and vf-module-label are two data dictionaries. A SQL table, VF_MODULE_MODEL, exists to correlate them.

Here is how input-key-mapping, output-key-mapping and key-dependencies can be used:

{
  "name" : "vf-module-label",
  "tags" : "vf-module-label",
  "updated-by" : "adetalhouet",
  "property" : {
    "description" : "vf-module-label",
    "type" : "string"
  },
  "sources" : {
    "primary-db" : {
      "type" : "source-primary-db",
      "properties" : {
        "type" : "SQL",
        "query" : "select sdnctl.VF_MODULE_MODEL.vf_module_label as vf_module_label from sdnctl.VF_MODULE_MODEL where sdnctl.VF_MODULE_MODEL.customization_uuid=:customizationid",
        "input-key-mapping" : {
          "customizationid" : "vf-module-model-customization-uuid"
        },
        "output-key-mapping" : {
          "vf-module-label" : "vf_module_label"
        },
        "key-dependencies" : [ "vf-module-model-customization-uuid" ]
      }
    }
  }
}
Resource source:

Defines the contract to resolve a resource.

A resource source is modeled following the TOSCA node type definition, and derives from the ResourceSource node type.

See below for details on the available resource sources.

Resource Source
Input:

Expects the value to be provided as input to the request.

{
  "source-input" :
  {
    "description": "This is Input Resource Source Node Type",
    "version": "1.0.0",
    "properties": {},
    "derived_from": "tosca.nodes.ResourceSource"
  }
}
Default:

Expects the value to be defaulted in the model itself.

{
  "source-default" :
  {
    "description": "This is Default Resource Source Node Type",
    "version": "1.0.0",
    "properties": {},
    "derived_from": "tosca.nodes.ResourceSource"
  }
}
Sql:

Expects the SQL query to be modeled; that SQL query can be parameterized, and the parameters can be other resources resolved through other means. If that’s the case, the data dictionary definition will have to define key-dependencies along with input-key-mapping.

CDS is currently deployed alongside SDNC, hence the primary database connection provided by the framework is to the SDNC database.

[Image: SQL resource source]

{
  "description": "This is Database Resource Source Node Type",
  "version": "1.0.0",
  "properties": {
    "type": {
      "required": true,
      "type": "string",
      "constraints": [
        {
          "valid_values": [
            "SQL"
          ]
        }
      ]
    },
    "endpoint-selector": {
      "required": false,
      "type": "string"
    },
    "query": {
      "required": true,
      "type": "string"
    },
    "input-key-mapping": {
      "required": false,
      "type": "map",
      "entry_schema": {
        "type": "string"
      }
    },
    "output-key-mapping": {
      "required": false,
      "type": "map",
      "entry_schema": {
        "type": "string"
      }
    },
    "key-dependencies": {
      "required": true,
      "type": "list",
      "entry_schema": {
        "type": "string"
      }
    }
  },
  "derived_from": "tosca.nodes.ResourceSource"
}

Connection to a specific database can be expressed through the endpoint-selector property, which refers to a macro defining the information about the database to connect to. Understand TOSCA Macro in the context of CDS.

{
  "dsl_definitions": {
    "dynamic-db-source": {
      "type": "maria-db",
      "url": "jdbc:mysql://localhost:3306/sdnctl",
      "username": "<username>",
      "password": "<password>"
    }
  }
}
Rest:

Expects the URI along with the VERB and the payload, if needed.

CDS is currently deployed alongside SDNC, hence the default REST connection provided by the framework is to SDNC MD-SAL.

[Image: REST resource source]

{
  "description": "This is Rest Resource Source Node Type",
  "version": "1.0.0",
  "properties": {
    "type": {
      "required": false,
      "type": "string",
      "default": "JSON",
      "constraints": [
        {
          "valid_values": [
            "JSON"
          ]
        }
      ]
    },
    "verb": {
      "required": false,
      "type": "string",
      "default": "GET",
      "constraints": [
        {
          "valid_values": [
            "GET", "POST", "DELETE", "PUT"
          ]
        }
      ]
    },
    "payload": {
      "required": false,
      "type": "string",
      "default": ""
    },
    "endpoint-selector": {
      "required": false,
      "type": "string"
    },
    "url-path": {
      "required": true,
      "type": "string"
    },
    "path": {
      "required": true,
      "type": "string"
    },
    "expression-type": {
      "required": false,
      "type": "string",
      "default": "JSON_PATH",
      "constraints": [
        {
          "valid_values": [
            "JSON_PATH",
            "JSON_POINTER"
          ]
        }
      ]
    },
    "input-key-mapping": {
      "required": false,
      "type": "map",
      "entry_schema": {
        "type": "string"
      }
    },
    "output-key-mapping": {
      "required": false,
      "type": "map",
      "entry_schema": {
        "type": "string"
      }
    },
    "key-dependencies": {
      "required": true,
      "type": "list",
      "entry_schema": {
        "type": "string"
      }
    }
  },
  "derived_from": "tosca.nodes.ResourceSource"
}

Connection to a specific REST system can be expressed through the endpoint-selector property, which refers to a macro defining the information about the REST system to connect to. Understand TOSCA Macro in the context of CDS.

A few ways are available to authenticate to the REST system:
  • token-auth

  • basic-auth

  • ssl-basic-auth

token-auth:
{
  "dsl_definitions": {
    "dynamic-rest-source": {
      "type" : "token-auth",
      "url" : "http://localhost:32778",
      "token" : "<token>"
    }
  }
}
basic-auth:
{
  "dsl_definitions": {
    "dynamic-rest-source": {
      "type" : "basic-auth",
      "url" : "http://localhost:32778",
      "username" : "<username>",
      "password": "<password>"
    }
  }
}
ssl-basic-auth:
{
  "dsl_definitions": {
    "dynamic-rest-source": {
      "type" : "ssl-basic-auth",
      "url" : "http://localhost:32778",
      "keyStoreInstance": "JKS or PKCS12",
      "sslTrust": "truststore",
      "sslTrustPassword": "<password>",
      "sslKey": "keystore",
      "sslKeyPassword": "<password>"
    }
  }
}
Capability:

Expects a script to be provided.

[Image: Capability resource source]

{
  "description": "This is Component Resource Source Node Type",
  "version": "1.0.0",
  "properties": {
    "script-type": {
      "required": true,
      "type": "string",
      "default": "kotlin",
      "constraints": [
        {
          "valid_values": [
            "kotlin",
            "jython"
          ]
        }
      ]
    },
    "script-class-reference": {
      "description": "Capability reference name for internal and kotlin, for jython script file path",
      "required": true,
      "type": "string"
    },
    "instance-dependencies": {
      "required": false,
      "description": "Instance dependency Names to Inject to Kotlin / Jython Script.",
      "type": "list",
      "entry_schema": {
        "type": "string"
      }
    },
    "key-dependencies": {
      "description": "Resource Resolution dependency dictionary names.",
      "required": true,
      "type": "list",
      "entry_schema": {
        "type": "string"
      }
    }
  },
  "derived_from": "tosca.nodes.ResourceSource"
}
Complex Type:

The value will be resolved through REST, and the output will be a complex type.

Modeling reference: Modeling Concepts#rest

In this example, we’re making a POST request to an IPAM system with no payload.

Some ingredients are required to perform the query, in this case $prefixId. Hence it is provided as an input-key-mapping entry and defined as a key-dependencies entry. Please refer to the modeling guideline for a more in-depth understanding.

As part of this request, the expected response will be as below.

{
  "id": 4,
  "address": "192.168.10.2/32",
  "vrf": null,
  "tenant": null,
  "status": 1,
  "role": null,
  "interface": null,
  "description": "",
  "nat_inside": null,
  "created": "2018-08-30",
  "last_updated": "2018-08-30T14:59:05.277820Z"
}

What is of interest is the address and id fields. For the process to return these two values, we need to create a custom data-type, as below:

{
  "version": "1.0.0",
  "description": "This is Netbox IP Data Type",
  "properties": {
    "address": {
      "required": true,
      "type": "string"
    },
    "id": {
      "required": true,
      "type": "integer"
    }
  },
  "derived_from": "tosca.datatypes.Root"
}

The type of the data dictionary will be dt-netbox-ip.

To tell the resolution framework what is of interest in the response, the output-key-mapping section is used. The process will map the output-key-mapping to the defined data-type.

{
  "tags" : "oam-local-ipv4-address",
  "name" : "create_netbox_ip",
  "property" : {
    "description" : "netbox ip",
    "type" : "dt-netbox-ip"
  },
  "updated-by" : "adetalhouet",
  "sources" : {
    "config-data" : {
      "type" : "source-rest",
      "properties" : {
        "type" : "JSON",
        "verb" : "POST",
        "endpoint-selector" : "ipam-1",
        "url-path" : "/api/ipam/prefixes/$prefixId/available-ips/",
        "path" : "",
        "input-key-mapping" : {
          "prefixId" : "prefix-id"
        },
        "output-key-mapping" : {
          "address" : "address",
          "id" : "id"
        },
        "key-dependencies" : [ "prefix-id" ]
      }
    }
  }
}
Resource Assignment
Component executor:
Workflow:

A workflow defines an overall action to be taken for the service; it can be composed of a set of sub-actions to execute. Currently, workflows are backed by the Directed Graph engine.

A CBA can have as many workflows as needed.

Template:

A template is an artifact.

A template is parameterized and each parameter must be defined in a corresponding mapping file.

In order to know which mapping correlates to which template, the file name must start with an artifact-prefix, serving as an identifier for the overall template + mapping.

The requirement is as follows:

${artifact-prefix}-template
${artifact-prefix}-mapping
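For instance, with a hypothetical artifact prefix vdns, the Templates folder would contain:

Templates
├── vdns-template.vtl
└── vdns-mapping.json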

A template can represent anything, such as device config, payload to interact with 3rd party systems, resource-accumulator template, etc…

Mapping:

Defines the contract of each resource to be resolved. Each placeholder in the template must have a corresponding mapping definition.

A mapping is comprised of:

  • name

  • required / optional

  • type (support complex type)

  • dictionary-name

  • dictionary-source

Dependencies:

This ensures that the given resources are resolved prior to the resolution of the resources that define the dependency. The dictionary fields refer to a specific data dictionary.

Resource accumulator:

In order to resolve HEAT environment variables, resource accumulator templates are used in Dublin.

These templates are specific to the pre-instantiation scenario, and rely on the GR-API within SDNC.

It is composed of the following sections:

resource-accumulator-resolved-data: defines all the resources that can be resolved directly from the context. It expresses a direct mapping between the name of the resource and its value.

capability-data: defines what capability to use to create a specific resource, along with the ingredients required to invoke the capability and the output mapping.

Scripts

Library

NetconfClient

In order to facilitate NETCONF interaction within scripts, a Python NetconfClient bound to our Kotlin implementation is made available. This NetconfClient can be used when using the netconf-component-executor.

The client can be found here: https://github.com/onap/ccsdk-apps/blob/master/components/scripts/python/ccsdk_netconf/netconfclient.py

Use Cases

Wordpress CNF in CDS (POC)

This demo by CableLabs is an easy-to-use POC showing how to use/deploy VNFs in CDS and do resource assignment.

Detailed description will follow as soon as there is an acknowledgement from CableLabs that content can be published.

The goal is to use CDS (ONAP) in a very simple and understandable way. Azure, AWS and Kubernetes are used as VIMs through scripting. Wordpress is used as a VNF.

This demo was tested on Frankfurt.

Presentation of Gerald Karam (2020-09-08)

PNF Simulator Day-N config-assign/deploy

Overview

This use case shows in a very simple way how the day-n configuration is assigned and deployed to a PNF through CDS. A Netconf server (docker image sysrepo/sysrepo-netopeer2) is used for simulating the PNF.

This use case (POC) solely requires a running CDS and the PNF Simulator running on a VM (Ubuntu is used by the author). No other module of ONAP is needed.

There are different ways to run CDS and the PNF simulator. This guide will show different possible options to allow the greatest possible flexibility.

Run CDS (Blueprint Processor)

CDS can be run in Kubernetes (Minikube, Microk8s) or in an IDE. You can choose your favorite option. Just the blueprint processor of CDS is needed. If you have desktop access, it is recommended to run CDS in an IDE, since it is easy and enables debugging.

Run PNF Simulator and install module

There are many different ways to run a Netconf server to simulate the PNF; in this guide, the sysrepo/sysrepo-netopeer2 docker image is used. The easiest way is to run the out-of-the-box docker container without any other configuration, modules or scripts. In the ONAP community there are other existing workflows for running the PNF simulator. These workflows also use the sysrepo/sysrepo-netopeer2 docker image. They are linked here but have not been tested by the author of this guide.

Download and run docker container with docker run -d --name netopeer2 -p 830:830 -p 6513:6513 sysrepo/sysrepo-netopeer2:latest

Enter the container with docker exec -it netopeer2 bin/bash

Browse to the target location where all YANG modules exist: cd /etc/sysrepo/yang

Create a simple mock YANG model for a packet generator (pg.yang).

pg.yang
module sample-plugin {

   yang-version 1;
   namespace "urn:opendaylight:params:xml:ns:yang:sample-plugin";
   prefix "sample-plugin";

   description
   "This YANG module defines the generic configuration and
   operational data for sample-plugin in VPP";

   revision "2016-09-18" {
      description "Initial revision of sample-plugin model";
   }

   container sample-plugin {

      uses sample-plugin-params;
      description "Configuration data of sample-plugin in Honeycomb";

      // READ
      // curl -u admin:admin http://localhost:8181/restconf/config/sample-plugin:sample-plugin

      // WRITE
      // curl http://localhost:8181/restconf/operational/sample-plugin:sample-plugin

   }

   grouping sample-plugin-params {
      container pg-streams {
         list pg-stream {

            key id;
            leaf id {
               type string;
            }

            leaf is-enabled {
               type boolean;
            }
         }
      }
   }
}

Create the following sample XML data definition for the above model (pg-data.xml). Later on, this will initialise one single PG stream.

pg-data.xml
<sample-plugin xmlns="urn:opendaylight:params:xml:ns:yang:sample-plugin">
   <pg-streams>
      <pg-stream>
         <id>1</id>
         <is-enabled>true</is-enabled>
      </pg-stream>
   </pg-streams>
</sample-plugin>

Execute the following command within the netopeer docker container to install the pg.yang model:

sysrepoctl -v3 -i pg.yang

Note

This command will just schedule the installation; it will be applied once the server is restarted.

Stop the container from outside with docker stop netopeer2 and start it again with docker start netopeer2

Enter the container like it’s mentioned above with docker exec -it netopeer2 bin/bash.

You can check all installed modules with sysrepoctl -l. sample-plugin module should appear with I flag.

Execute the following commands to initialise the Yang model with one pg-stream record. We will be using CDS to perform the day-1 and day-2 configuration changes.

netopeer2-cli
> connect --host localhost --login root
# password is root
> get --filter-xpath /sample-plugin:*
# shows existing pg-stream records (empty)
> edit-config --target running --config=/etc/sysrepo/yang/pg-data.xml
# initialises Yang model with one pg-stream record
> get --filter-xpath /sample-plugin:*
# shows initialised pg-stream

If the output of the last command looks like this, everything went successfully:

DATA
<sample-plugin xmlns="urn:opendaylight:params:xml:ns:yang:sample-plugin">
   <pg-streams>
      <pg-stream>
         <id>1</id>
         <is-enabled>true</is-enabled>
      </pg-stream>
   </pg-streams>
</sample-plugin>
Config-assign and config-deploy in CDS

In the following steps config-assignment is done and the config is deployed to the Netconf server through CDS. Example requests are in the following Postman collection JSON. You can also use bash scripting to call the APIs.

Note

The CBA for this PNF demo gets loaded, enriched and saved in CDS through calling bootstrap. If not done before, call the Bootstrap API.

Password and username for API calls will be ccsdkapps.
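A hedged curl sketch of the Bootstrap call ({{host}} and {{port}} are the blueprint processor's address, as in the Postman collection):

curl -u ccsdkapps:ccsdkapps -X POST -H "Content-Type: application/json" \
  -d '{"loadModelType": true, "loadResourceDictionary": true, "loadCBA": true}' \
  http://{{host}}:{{port}}/api/v1/blueprint-model/bootstrap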

Config-Assign:

The assumption is that we are using the same host to run the PNF NETCONF simulator as well as CDS. You will need the IP address of the Netconf server container, which can be found with docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' netopeer2. In the following example payloads we will use 172.17.0.2.

Call the process API (http://{{host}}:{{port}}/api/v1/execution-service/process) with POST method to create day-1 configuration. Use the following payload:

{
   "actionIdentifiers": {
      "mode": "sync",
      "blueprintName": "pnf_netconf",
      "blueprintVersion": "1.0.0",
      "actionName": "config-assign"
   },
   "payload": {
      "config-assign-request": {
            "resolution-key": "day-1",
            "config-assign-properties": {
               "stream-count": 5
            }
      }
   },
   "commonHeader": {
      "subRequestId": "143748f9-3cd5-4910-81c9-a4601ff2ea58",
      "requestId": "e5eb1f1e-3386-435d-b290-d49d8af8db4c",
      "originatorId": "SDNC_DG"
   }
}
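The same request as a curl sketch, assuming the payload above is saved in a hypothetical file config-assign-day1.json:

curl -u ccsdkapps:ccsdkapps -X POST -H "Content-Type: application/json" \
  -d @config-assign-day1.json \
  http://{{host}}:{{port}}/api/v1/execution-service/process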

You can verify the day-1 NETCONF RPC payload by looking into the CDS DB. You should see the NETCONF RPC with 5 streams (fw_udp_1 to fw_udp_5). Connect to the DB and run the statement below. You should see the day-1 configuration as output.
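One hedged way to connect, assuming the development docker-compose database with its default sdnctl/sdnctl credentials:

docker exec -it <id of mariadb container> mysql -usdnctl -psdnctl sdnctl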

MariaDB [sdnctl]> select * from TEMPLATE_RESOLUTION where resolution_key='day-1' AND artifact_name='netconfrpc';

<rpc xmlns="urn:ietf:params:xml:ns:netconf:base:1.0" message-id="1">
   <edit-config>
      <target>
         <running/>
      </target>
      <config>
         <sample-plugin xmlns="urn:opendaylight:params:xml:ns:yang:sample-plugin">
            <pg-streams>
               <pg-stream>
                  <id>fw_udp_1</id>
                  <is-enabled>true</is-enabled>
               </pg-stream>
               <pg-stream>
                  <id>fw_udp_2</id>
                  <is-enabled>true</is-enabled>
               </pg-stream>
               <pg-stream>
                  <id>fw_udp_3</id>
                  <is-enabled>true</is-enabled>
               </pg-stream>
               <pg-stream>
                  <id>fw_udp_4</id>
                  <is-enabled>true</is-enabled>
               </pg-stream>
               <pg-stream>
                  <id>fw_udp_5</id>
                  <is-enabled>true</is-enabled>
               </pg-stream>
            </pg-streams>
         </sample-plugin>
      </config>
   </edit-config>
</rpc>

For creating day-2 configuration call the same endpoint and use the following payload:

{
   "actionIdentifiers": {
      "mode": "sync",
      "blueprintName": "pnf_netconf",
      "blueprintVersion": "1.0.0",
      "actionName": "config-assign"
   },
   "payload": {
      "config-assign-request": {
            "resolution-key": "day-2",
            "config-assign-properties": {
               "stream-count": 10
            }
      }
   },
   "commonHeader": {
      "subRequestId": "143748f9-3cd5-4910-81c9-a4601ff2ea58",
      "requestId": "e5eb1f1e-3386-435d-b290-d49d8af8db4c",
      "originatorId": "SDNC_DG"
   }
}

Note

Up to this step, CDS did not interact with the PNF simulator or device. We just created the day-1 and day-2 configurations and stored them in the CDS database.

Config-Deploy:

Now we will make the CDS REST API calls to push the day-1 and day-2 configuration changes to the PNF simulator. Call the same endpoint process with the following payload:

{
   "actionIdentifiers": {
      "mode": "sync",
      "blueprintName": "pnf_netconf",
      "blueprintVersion": "1.0.0",
      "actionName": "config-deploy"
   },
   "payload": {
      "config-deploy-request": {
         "resolution-key": "day-1",
            "pnf-ipv4-address": "127.17.0.2",
            "netconf-username": "netconf",
            "netconf-password": "netconf"
      }
   },
   "commonHeader": {
      "subRequestId": "143748f9-3cd5-4910-81c9-a4601ff2ea58",
      "requestId": "e5eb1f1e-3386-435d-b290-d49d8af8db4c",
      "originatorId": "SDNC_DG"
   }
}

Go back to the PNF netopeer cli console as mentioned above and verify if you can see the 5 streams fw_udp_1 to fw_udp_5 enabled. If the 5 streams appear in the output as follows, the day-1 configuration was successfully deployed and the use case is complete.

> get --filter-xpath /sample-plugin:*
DATA
<sample-plugin xmlns="urn:opendaylight:params:xml:ns:yang:sample-plugin">
   <pg-streams>
      <pg-stream>
         <id>1</id>
         <is-enabled>true</is-enabled>
      </pg-stream>
      <pg-stream>
         <id>fw_udp_1</id>
         <is-enabled>true</is-enabled>
      </pg-stream>
      <pg-stream>
         <id>fw_udp_2</id>
         <is-enabled>true</is-enabled>
      </pg-stream>
      <pg-stream>
         <id>fw_udp_3</id>
         <is-enabled>true</is-enabled>
      </pg-stream>
      <pg-stream>
         <id>fw_udp_4</id>
         <is-enabled>true</is-enabled>
      </pg-stream>
      <pg-stream>
         <id>fw_udp_5</id>
         <is-enabled>true</is-enabled>
      </pg-stream>
   </pg-streams>
</sample-plugin>
>

The same can be done for day-2 config (follow same steps just with day-2 in payload).

Note

Through the config-deploy action we did not deploy the PNF itself; we just modified the PNF's configuration. The PNF could also be installed by CDS, but this is not targeted in this guide.

Creators of this guide

Deutsche Telekom AG

Jakob Krieg (Rocketchat @jakob.Krieg); Eli Halych (Rocketchat @elihalych)

This guide is derived from https://wiki.onap.org/display/DW/PNF+Simulator+Day-N+config-assign+and+config-deploy+use+case.

vFW CNF with CDS (Use Case)

The vFW CNF use case is a demonstration of the deployment of a CNF application, defined as a set of Helm packages. CDS plays a crucial role in the process of CNF instantiation and is responsible for the delivery of instantiation parameters, CNF customization, configuration of the CNF after deployment, and may be used in the process of CNF status verification.

Based on this example, the following features of CDS and the CBA model are demonstrated:

  • resource assignment of string, integer and json types

  • sourcing of resolved value on vf-module level from vnf level assignment

  • extracting data from AAI and MD-SAL during the resource assignment

  • custom resource assignment with Kotlin script

  • templating of the vtl files

  • building of imperative workflows

  • utilization of on_success and on_failure events in an imperative workflow

  • handling of the failure in the workflow

  • implementation of custom workflow logic with Kotlin script

  • example of config-assign and config-deploy operation decomposed into many steps

  • complex parametrization of config deploy operation

  • combination and aggregation of AAI and MD-SAL data in config-assign and config-deploy operations

The prepared CBA model also demonstrates how to utilize CNF-specific features of the CBA, suited for the deployment of a CNF with k8splugin in ONAP:

  • building and upload of k8s profile template into k8splugin

  • building and upload of k8s configuration template into k8splugin

  • parametrization and creation of configuration instance from configuration template

  • validation of CNF status with Kotlin script

[Image: _images/vfw_role_of_cba.png]

Role of the CBA for CNF/Helm Day 0/1/2 processing

The CNF in ONAP is modeled as a collection of Helm packages, and in the case of the vFW use case, the CNF application is split into four Helm packages to match the vf-modules. Each vf-module has its own template in the CBA package. The list of resource assignment artifacts associated with the templates is as follows:

"artifacts" : {
  "helm_base_template-template" : {
    "type" : "artifact-template-velocity",
    "file" : "Templates/base_template-template.vtl"
  },
  "helm_base_template-mapping" : {
    "type" : "artifact-mapping-resource",
    "file" : "Templates/base_template-mapping.json"
  },
  "helm_vpkg-template" : {
    "type" : "artifact-template-velocity",
    "file" : "Templates/vpkg-template.vtl"
  },
  "helm_vpkg-mapping" : {
    "type" : "artifact-mapping-resource",
    "file" : "Templates/vpkg-mapping.json"
  },
  "helm_vfw-template" : {
    "type" : "artifact-template-velocity",
    "file" : "Templates/vfw-template.vtl"
  },
  "helm_vfw-mapping" : {
    "type" : "artifact-mapping-resource",
    "file" : "Templates/vfw-mapping.json"
  },
  "vnf-template" : {
    "type" : "artifact-template-velocity",
    "file" : "Templates/vnf-template.vtl"
  },
  "vnf-mapping" : {
    "type" : "artifact-mapping-resource",
    "file" : "Templates/vnf-mapping.json"
  },
  "helm_vsn-template" : {
    "type" : "artifact-template-velocity",
    "file" : "Templates/vsn-template.vtl"
  },
  "helm_vsn-mapping" : {
    "type" : "artifact-mapping-resource",
    "file" : "Templates/vsn-mapping.json"
  }
}

For instantiation, SO requires the name of the profile in the parameter k8s-rb-profile-name and the name of the release of the application in k8s-rb-instance-release-name. The latter, when not specified, will be replaced with a combination of the profile name and vf-module-id for each Helm instance/vf-module instantiated. Both values can be found in the vtl templates dedicated to vf-modules.

The CBA offers the possibility of automatic generation and upload of the RB profile content to the multicloud/k8s plugin. An RB profile is required if you want to deploy your CNF into a k8s namespace other than default. Also, if you want to ensure particular templating of your Helm charts, specific to the version of the cluster onto which the Helm packages will be deployed, the profile is used to specify the version of your cluster.

The RB profile can be used to enrich or to modify the content of the original helm package. The profile can also be used to add additional k8s helm templates for helm installation or to modify existing k8s helm templates for each created CNF instance. This opens another level of CNF customization, well beyond customization of the Helm package with override values. K8splugin also offers a default profile without content, for the default namespace and default cluster version.

---
version: v1
type:
  values: "override_values.yaml"
  configresource:
    - filepath: resources/deployment.yaml
      chartpath: templates/deployment.yaml

Above is an exemplary manifest file of the RB profile. Since Frankfurt, the override_values.yaml file does not need to be used, as instantiation values are passed to the plugin over the Instance API of the k8s plugin. In the example, the profile contains an additional k8s Helm template which will be added on demand to the helm package during its installation. In our case, depending on the SO instantiation request input parameters, the vPGN helm package can be enriched with an additional ssh service. Such a service will be dynamically added to the profile by CDS, and later on CDS will upload the whole custom RB profile to the multicloud/k8s plugin.

In order to support generation and upload of the profile, our vFW CBA model has an enhanced resource-assignment workflow which contains an additional step: profile-upload. It leverages dedicated functionality introduced in the Guilin release that can be used to upload a predefined profile or to generate and upload the content of the profile with the Velocity templating mechanism.

"resource-assignment": {
    "steps": {
        "resource-assignment": {
            "description": "Resource Assign Workflow",
            "target": "resource-assignment",
            "activities": [
                {
                    "call_operation": "ResourceResolutionComponent.process"
                }
            ],
            "on_success": [
                "profile-upload"
            ]
        },
        "profile-upload": {
            "description": "Generate and upload K8s Profile",
            "target": "k8s-profile-upload",
            "activities": [
                {
                    "call_operation": "ComponentScriptExecutor.process"
                }
            ]
        }
    },

In our example, for the vPKG helm package we may select the vfw-cnf-cds-vpkg-profile profile that is included in the CBA as a folder. The profile generation step uses CDS's embedded Velocity template processing; on this basis the ssh port number (specified in the SO request as vpg-management-port) is resolved.

{
    "name": "vpg-management-port",
    "property": {
        "description": "The number of node port for ssh service of vpg",
        "type": "integer",
        "default": "0"
    },
    "input-param": false,
    "dictionary-name": "vpg-management-port",
    "dictionary-source": "default",
    "dependencies": []
}

vpg-management-port can be included directly in the helm template, and such a template will be included in the vPKG helm package at the time of its instantiation.

apiVersion: v1
kind: Service
metadata:
  name: {{ .Values.vpg_name_0 }}-ssh-access
  labels:
    vnf-name: {{ .Values.vnf_name }}
    vf-module-name: {{ .Values.vpg_name_0 }}
    release: {{ .Release.Name }}
    chart: {{ .Chart.Name }}
spec:
  type: NodePort
  ports:
    - port: 22
      nodePort: ${vpg-management-port}
  selector:
    vf-module-name: {{ .Values.vpg_name_0 }}
    release: {{ .Release.Name }}
    chart: {{ .Chart.Name }}

The mechanism of profile generation and upload requires a specific node template in the CBA definition. In our case, it comes with the declaration of two profiles: a static one, vfw-cnf-cds-base-profile, in the form of an archive, and a second, complex one, vfw-cnf-cds-vpkg-profile, in the form of a folder for processing and profile generation. Below is an example of the definition of the node type for execution of the profile upload operation.

"k8s-profile-upload": {
    "type": "component-k8s-profile-upload",
    "interfaces": {
        "K8sProfileUploadComponent": {
            "operations": {
                "process": {
                    "inputs": {
                        "artifact-prefix-names": {
                            "get_input": "template-prefix"
                        },
                        "resource-assignment-map": {
                            "get_attribute": [
                                "resource-assignment",
                                "assignment-map"
                            ]
                        }
                    }
                }
            }
        }
    },
    "artifacts": {
        "vfw-cnf-cds-base-profile": {
            "type": "artifact-k8sprofile-content",
            "file": "Templates/k8s-profiles/vfw-cnf-cds-base-profile.tar.gz"
        },
        "vfw-cnf-cds-vpkg-profile": {
            "type": "artifact-k8sprofile-content",
            "file": "Templates/k8s-profiles/vfw-cnf-cds-vpkg-profile"
        },
        "vfw-cnf-cds-vpkg-profile-mapping": {
            "type": "artifact-mapping-resource",
            "file": "Templates/k8s-profiles/vfw-cnf-cds-vpkg-profile/ssh-service-mapping.json"
        }
    }
}

The artifact file determines the location of the static profile or the content of the complex profile. In the latter case we need a pair: the profile folder and a mapping file declaring the parameters that CDS needs to resolve first, before Velocity templating is applied to the .vtl files present in the profile content. After Velocity templating, the .vtl extensions are dropped from the file names. The embedded mechanism includes in the profile only the files listed in the profile's MANIFEST file, which must contain the final names of the files to be included in the profile.

The figure below shows the idea of profile templating.

[Image: _images/profile-templating.png]

K8s Profile Templating

The component-k8s-profile-upload that stands behind the profile uploading mechanism has input parameters that can be passed directly (checked first) or taken from the resource-assignment-map parameter, which can be the result of an associated component-resource-resolution step; in our case their values are resolved during vf-module-level resource assignment. The component-k8s-profile-upload inputs are the following:

  • k8s-rb-definition-name [string] - (mandatory) the name under which the RB definition was created - VF Module Model Invariant ID in ONAP

  • k8s-rb-definition-version [string] - (mandatory) the version of the created RB definition - VF Module Model Customization ID in ONAP

  • k8s-rb-profile-name [string] - (mandatory) the name under which the profile will be created in the k8s plugin. The remaining parameters are required only when the profile must be uploaded because it does not exist yet

  • k8s-rb-profile-source [string] - the source of the profile content - the name of the profile artifact. If missing, k8s-rb-profile-name is treated as the source

  • k8s-rb-profile-namespace [string] - (mandatory) the k8s namespace associated with the profile being created

  • k8s-rb-profile-kubernetes-version [string] - the version of the cluster on which the application will be deployed - it may impact the helm templating process (e.g. the selection of API versions for resources), so it should match the version of the k8s cluster in which the resources are being deployed

  • k8s-rb-profile-labels [json] - extra labels (label-name: label-value) to add to each k8s resource created for the CNF in the k8s cluster (since Jakarta release)

  • k8s-rb-profile-extra-types [list<json>] - the list of extra k8s types that should be returned by the StatusAPI. This may be useful when k8s resources associated with the CNF instance are created outside of the helm package (e.g. by a k8s operator) but should be treated like resources of the CNF. For this to work, such resources must carry the instance label k8splugin.io/rb-instance-id, which can be ensured by tools like Kyverno. Each extra-type JSON object needs Group, Version and Kind attributes (since Jakarta release)

  • resource-assignment-map [json] - the result of the associated resource assignment step - it may deliver values of inputs if they are not specified directly

  • artifact-prefix-names [list<string>] - (mandatory) the list of artifact prefixes, like for the resource-assignment step in the resource-assignment workflow, or a subset of it
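As referenced above, the inputs may also be passed directly instead of through resource-assignment-map. A minimal sketch under that assumption (the profile name, namespace and artifact prefix below are illustrative values, not taken from the vFW CBA):

"k8s-profile-upload": {
    "type": "component-k8s-profile-upload",
    "interfaces": {
        "K8sProfileUploadComponent": {
            "operations": {
                "process": {
                    "inputs": {
                        "k8s-rb-profile-name": "vfw-cnf-cds-base-profile",
                        "k8s-rb-profile-namespace": "default",
                        "artifact-prefix-names": [
                            "helm_base_template"
                        ]
                    }
                }
            }
        }
    }
}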

In the SO request, the user can pass a parameter named k8s-rb-profile-name, which in our case may have the value vfw-cnf-cds-base-profile, vfw-cnf-cds-vpkg-profile or default. The default profile does not contain any content and allows instantiation of the CNF without the need to define and upload any additional profiles. vfw-cnf-cds-vpkg-profile has been prepared to test instantiation of the second, modified vFW CNF instance.

K8splugin allows specifying override parameters (similar to the --set behavior of the helm client) for instantiated resource bundles. This allows providing dynamic parameters to instantiated resources without the need to create new profiles for this purpose. This mechanism should be used with the default profile but may also be used with any custom profile.

The overall flow of helm override parameter processing is shown in the following figure. When an rb definition (helm package) is instantiated for a specified rb profile, K8splugin combines override values from the helm package, the rb profile and the instantiation request - in that order. This means that a value from the instantiation request (SO request input or CDS resource assignment result) takes precedence over a value from the rb profile, and a value from the rb profile takes precedence over the helm package default override value. Similarly, a profile can contain resource files that extend or amend the existing files of the original helm package content.

_images/helm-overrides.png

The overall flow of helm data processing

Both the profile content (4) and the instantiation request values (5) can be generated during the resource assignment process, according to its definition in the CBA associated with the helm package. The CBA may generate e.g. names, IP addresses and ports, and can use this information to produce the rb-profile (3) content. Finally, all three sources of override values, templates and additional resource files are merged together (6) by K8splugin in the order explained before.
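To make the precedence concrete, assume a single hypothetical override parameter replicaCount defined in all three sources; following the order explained above, the value from the instantiation request wins:

{
    "helm-package-default": { "replicaCount": 1 },
    "rb-profile-override": { "replicaCount": 2 },
    "instantiation-request-override": { "replicaCount": 3 },
    "effective-value": { "replicaCount": 3 }
}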

Besides the deployment of the Helm application, the CBA of vFW also demonstrates how to use the dedicated features for config-assign (7) and config-deploy (8) operations. In the use case, config-assign and config-deploy deal mainly with the creation and instantiation of a configuration template for the k8s plugin. The configuration template has the form of a Helm package. When the k8s plugin instantiates a configuration, it creates new resources or may replace existing resources deployed on the k8s cluster. In our case, the configuration template is used to provide an alternative way of uploading the additional ssh-service, but it could also be used to modify the configmap of the vfw or vpkg vf-modules.

In order to provide the configuration instantiation capability, the standard config-assign and config-deploy workflows have been changed into imperative workflows whose first step is responsible for collecting the information needed for configuration templating and configuration instantiation. The sources of data for these operations are AAI and MD-SAL, with data for the vnf and vf-modules, as config-assign and config-deploy do not receive dedicated input parameters from SO. Consequently, both operations need to source their data from the resource-assignment phase and from the data placed in AAI and MD-SAL.

The vFW CNF config-assign workflow is the following:

"config-assign": {
    "steps": {
        "config-setup": {
            "description": "Gather necessary input for config template upload",
            "target": "config-setup-process",
            "activities": [
                {
                    "call_operation": "ResourceResolutionComponent.process"
                }
            ],
            "on_success": [
                "config-template"
            ]
        },
        "config-template": {
            "description": "Generate and upload K8s config template",
            "target": "k8s-config-template",
            "activities": [
                {
                    "call_operation": "K8sConfigTemplateComponent.process"
                }
            ]
        }
    },

The vFW CNF config-deploy workflow is the following:

"config-deploy": {
    "steps": {
        "config-setup": {
            "description": "Gather necessary input for config init and status verification",
            "target": "config-setup-process",
            "activities": [
                {
                    "call_operation": "ResourceResolutionComponent.process"
                }
            ],
            "on_success": [
                "config-apply"
            ]
        },
        "config-apply": {
            "description": "Activate K8s config template",
            "target": "k8s-config-apply",
            "activities": [
                {
                    "call_operation": "K8sConfigTemplateComponent.process"
                }
            ],
            "on_success": [
                "status-verification-script"
            ]
        },

In our example, the configuration template for the vFW CNF is a helm package that contains the same resource that we can find in the vPKG vfw-cnf-cds-vpkg-profile profile - the extra ssh service. This helm package contains the Helm encapsulation of the ssh-service and a values.yaml file declaring all the inputs that may parametrize the ssh-service. The configuration templating step leverages the component-k8s-config-template component, which prepares the configuration template and uploads it to k8splugin. Consequently, it may be used later on for instantiation of the configuration.

In this use case we have two options, with ssh-service-config and ssh-service-config-customizable as sources of the same configuration template. Consequently, we either take a complete template archive, or we use the template folder with the content of the helm package, for which CDS may perform dedicated resource resolution with templating of all files with the .vtl extension. The process is very similar to the one described for the profile upload functionality.

"k8s-config-template": {
    "type": "component-k8s-config-template",
    "interfaces": {
        "K8sConfigTemplateComponent": {
            "operations": {
                "process": {
                    "inputs": {
                        "artifact-prefix-names": [
                            "helm_vpkg"
                        ],
                        "resource-assignment-map": {
                            "get_attribute": [
                                "config-setup-process",
                                "",
                                "assignment-map",
                                "config-deploy",
                                "config-deploy-setup"
                            ]
                        }
                    }
                }
            }
        }
    },
    "artifacts": {
        "ssh-service-config": {
            "type": "artifact-k8sconfig-content",
            "file": "Templates/k8s-configs/ssh-service.tar.gz"
        },
        "ssh-service-config-customizable": {
            "type": "artifact-k8sconfig-content",
            "file": "Templates/k8s-configs/ssh-service-config"
        },
        "ssh-service-config-customizable-mapping": {
            "type": "artifact-mapping-resource",
            "file": "Templates/k8s-configs/ssh-service-config/ssh-service-mapping.json"
        }
    }
}

The component-k8s-config-template that stands behind the creation of the configuration template has input parameters that can be passed directly (checked first) or taken from the resource-assignment-map parameter, which can be the result of an associated component-resource-resolution step; in the vFW CNF use case their values are resolved in the vf-module level resource assignment step dedicated to config-assign and config-deploy. The component-k8s-config-template inputs are the following:

  • k8s-rb-definition-name [string] - (mandatory) the name under which the RB definition was created - VF Module Model Invariant ID in ONAP

  • k8s-rb-definition-version [string] - (mandatory) the version of the created RB definition - VF Module Model Customization ID in ONAP

  • k8s-rb-config-template-name [string] - (mandatory) the name under which the configuration template will be created in the k8s plugin. The remaining parameters are required only when the configuration template must be uploaded because it does not exist yet

  • k8s-rb-config-template-source [string] - the source of the config template content - the name of the configuration template artifact. When missing, the main definition helm package will be used as the configuration template source (since Jakarta release)

  • resource-assignment-map [json] - the result of the associated resource assignment step - it may deliver values of inputs if they are not specified directly

  • artifact-prefix-names [list<string>] - (mandatory) the list of artifact prefixes, like for the resource-assignment step in the resource-assignment workflow, or a subset of it

In our case, the component-k8s-config-template component receives all its inputs from the dedicated resource-assignment process config-setup, which is responsible for resolving all the inputs for configuration templating. This process generates data for the helm_vpkg prefix, and this prefix is specified in the list of prefixes of the configuration template component. This means that the configuration template will be prepared only for the vPKG function.

"k8s-config-apply": {
    "type": "component-k8s-config-value",
    "interfaces": {
        "K8sConfigValueComponent": {
            "operations": {
                "process": {
                    "inputs": {
                        "artifact-prefix-names": [
                            "helm_vpkg"
                        ],
                        "k8s-config-operation-type": "create",
                        "resource-assignment-map": {
                            "get_attribute": [
                                "config-setup-process",
                                "",
                                "assignment-map",
                                "config-deploy",
                                "config-deploy-setup"
                            ]
                        }
                    }
                }
            }
        }
    },
    "artifacts": {
        "ssh-service-default": {
            "type": "artifact-k8sconfig-content",
            "file": "Templates/k8s-configs/ssh-service-config/values.yaml"
        },
        "ssh-service-config": {
            "type": "artifact-k8sconfig-content",
            "file": "Templates/k8s-configs/ssh-service-values/values.yaml.vtl"
        },
        "ssh-service-config-mapping": {
            "type": "artifact-mapping-resource",
            "file": "Templates/k8s-configs/ssh-service-values/ssh-service-mapping.json"
        }
    }
}

The component-k8s-config-value that stands behind the creation of the configuration instance has input parameters that can be passed directly (checked first) or taken from the resource-assignment-map parameter, which can be the result of an associated component-resource-resolution step; in the vFW CNF use case their values are resolved in the vf-module level resource-assignment step dedicated to config-assign and config-deploy. The component-k8s-config-value inputs are the following:

  • k8s-rb-config-name [string] - (mandatory) the name under which the configuration will be created in the k8s plugin. The remaining parameters are required only when the configuration must be created because it does not exist yet

  • k8s-rb-config-template-name [string] - (mandatory) the name of the configuration template in the k8s plugin from which the configuration will be created

  • k8s-rb-config-value-source [string] - the source of the config content - the name of its artifact. If missing, k8s-rb-config-name is treated as the source

  • k8s-rb-config-version [string] - the version of the configuration to restore during a rollback operation. The first configuration after create has version 1, and each subsequent update increments the version number. When a rollback operation is performed, all previous versions on the path to the desired one are restored one by one (since Jakarta)

  • k8s-instance-id [string] - (mandatory) the identifier of the rb instance for which the configuration should be applied

  • k8s-config-operation-type [string] - the type of configuration operation to perform: create, update, rollback, delete or delete_config. By default, the create operation is performed. The rollback and delete_config types are present since the Jakarta release. The update operation creates a new version of the configuration. The delete operation also creates a new version of the configuration, one that deletes all the configuration's resources from the k8s cluster. The delete_config operation deletes the configuration entirely but does not delete or update any resources associated with it

  • resource-assignment-map [json] - the result of the associated resource assignment step - it may deliver values of inputs if they are not specified directly

  • artifact-prefix-names [list<string>] - (mandatory) the list of artifact prefixes, like for the resource-assignment step in the resource-assignment workflow, or a subset of it

As for the configuration template, the component-k8s-config-value component receives all its inputs from the dedicated resource-assignment process config-setup, which is responsible for resolving all the inputs for the configuration. This process generates data for the helm_vpkg prefix, and this prefix is specified in the list of prefixes of the configuration values component. This means that the configuration instance will be created only for the vPKG function (the component also allows update or delete of the configuration, but in the vFW CNF case it is used only to create the configuration instance).
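For illustration only, a sketch of the direct inputs that would request an update of an existing configuration instance (all values below are hypothetical):

{
    "k8s-instance-id": "<rb-instance-id>",
    "k8s-rb-config-name": "ssh-service-config",
    "k8s-rb-config-template-name": "ssh-service-config",
    "k8s-config-operation-type": "update"
}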

The CBA of the vFW CNF use case is already enriched, and the VSP of the vFW CNF has the CBA included inside. In consequence, when the VSP is onboarded and the service is distributed, the CBA is uploaded into CDS. In any case, CDS contains in its starter dictionary all the data dictionary values used in the use case, so enrichment of the CBA would work as well.

Note

The CBA for this use case is already enriched and there is no need to perform the enrichment process for it. It is also automatically uploaded into CDS at the time of model distribution from the SDC.

Further information about the use case, the role of CDS, and all the steps required to reproduce the process can be found on the dedicated web page.

vFirewall CNF Use Case

The vFW CNF use case is an official use case used for verification of the CNF Orchestration extensions.

CDS Designer UI

Designer Guide

Note

How to Get Started with CDS Designer UI

If you’re new to CDS Designer UI and need to get set up, the following guides may be helpful:

Getting Started

This is your CDS Designer UI guide. No matter how experienced you are or what you want to achieve, it should cover everything you need to know — from navigating the interface to making the most of different features.

What is CDS Designer UI?

CDS Designer UI is a framework to automate the resolution of resources for instantiation and any config provisioning operation, such as day0, day1, or day2 configuration.

CDS has both design-time and run-time activities. During design time, the designer can define what actions are required for a given service, along with anything comprising the action. The design produces a CBA package.

Its content is driven from a catalog of reusable data dictionaries and components, delivering a reusable and simplified self-service experience.

CDS modeling is mainly based on the TOSCA standard, using JSON as a representation.

image1

What’s new?

image2

Create full CBA packages from built-in forms without programming

image5

Customizable CBA Package actions

image3

Import old packages for edit and collaboration

image6

Easily create and manage lists of data via interface (Data Dictionary, controller catalog, and config management)

image4

Create sophisticated package workflows in a no-code graphical designer

image7

Integration between CDS UI and SDC Services

Overview of CDS Interface

Full CDS UI screens are available in InVision

image8

  1. CDS main menu: Access all CDS module list including Packages, Data Dictionary, Controller Catalog, etc.

  2. Profile: Access user profile information

  3. Module Title: See the current module name and the total number of items in the module list

  4. Module list: View all active items in module and tools for search and filtering

CBA Packages

Package List

It gives you quick access to all packages, including the most recently created/edited ones.

image9

  1. Module Tabs: Access All, Deployed, Under Construction, or Archived packages

  2. Search: Search for a package by title

  3. Filter: Filter packages by package tags

  4. Package Sort: Sort packages by recent or alphanumeric (name) or version

  5. List Pagination: navigate between package list pages

  6. Create Package: Create a new CBA package

  7. Import Package: Import packages that were previously created in the CDS Editor or Designer, by the current or another user

  8. Package box: Shows brief details of the package and gives access to some package actions

  9. Deployed package indicator

  10. Package name and version

  11. More menu: Access a list of actions including Clone, Archive, Download, and Delete

  12. Last modified: Shows user name and date and time of last modifications made in the package

  13. Package Description

  14. Package Tags

  15. Collaborators: See who's collaborating on editing the package

  16. Configuration button: Go directly to package configuration

  17. Designer Mode: Indicates the package mode (Designer, Scripting, or Generic scripting); clicking it loads the mode screen

Create a New CBA Package

User Flow

image10

Create a New Package

You can create a new CBA package by creating a new custom package, or by importing a previously created package file.

Note

Create/Import Package: You can't create/import a CBA package that has the same name and version as an existing package. Packages can have the same name but different version numbers (e.g., Package one v1.0.0 & Package one v1.0.1).

Create a New Custom CBA Package: From the Packages page, click on the Create Package button to navigate to Package Configuration.

image11

MetaData

In the MetaData tab, select the Package Mode and enter the package Name, Version, Description and other configurations.

image12

Once you fill in all required inputs, you can save this package by clicking the Save button in the Actions menu

image13

Package Info Box: It appears at the top of the configuration tabs after you save a package for the first time.

image14

You can continue adding package configuration or go directly to the Designer Mode screen from Package info box

All changes will be saved when you click on the Save button

To close the package configuration and go back to the Package list, navigate to the top left in breadcrumb and click the CBA Packages link or click on the Packages link in the Main menu.

Template & Mapping

You can create as many templates as needed using artifact-mapping-resource (Artifact Type -> Mapping) and/or artifact-template-velocity (Artifact Type -> Velocity).

image15

  1. Template name

  2. Template Section: Where you include template attributes

  3. Manage Mapping: Here the auto-mapping process maps the template attributes to the data dictionary entries that will be used to resolve each particular resource.

Template Section

image16

  1. Template Type: Template is defined by one of three templates (Velocity, Jinja, Kotlin)

  2. Import Template Attributes/Parameters: You can add attributes by importing an attribute list file

  3. Insert Template Attributes/Parameters Manually: You can insert attributes manually in the code editor. The code editor validates attributes according to the pre-selected template type

Import Template Attributes

image17

After importing attributes, you can add/edit/delete attributes in the code editor.

image18

Manage Mapping Section

image19

  1. Use current Template Instance: You can use attributes from the Template section

  2. Upload Attributes List: In case you don't have existing attributes in the Template section, or you have different attributes, you can upload an attributes list

Once you select the source of attributes, you get a confirmation that they were fetched successfully.

image20

Then the Mapped Table appears to show the Resource Dictionary reference.

image21

When you finish the creation process, you must click on the Finish button (1) to submit the template, or you can clear all data by clicking on the Clear button (2).

image22

Scripts

Allowed file types: Kotlin (kt), Python (py), Jython, Ansible

To add script file/s, you have two options:

  1. Create Script

  2. Import File

Enter file URL: A script file can be stored on a server; you can add it by copying and pasting the file URL into the URL input, then pressing the ENTER key.

image23

Create a Script File

  1. File Name: Add the script file name

  2. Script Type: Choose script type (Kotlin, Jython, Ansible)

  3. Script Editor: Enter the script file content

image24

After you type the script, click on the Create Script button to save it

image25

After adding script file/s, you can:

  1. Edit file: You can edit each script file from the code editor

  2. Delete file

image26

Definitions

To define a data type that represents the schema of a specific type of data, you have to enrich the package, which automatically generates all definition files:

  1. Enrich Package: from the package details box, click on the Enrich button

image27

Once you successfully enrich the package, all definition files will be listed.

image28

For each definition file, you can delete the file.

image29

External System Authentication Properties

In order to populate the external system information within the package, you have to provide dsl_definitions:

image30
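A minimal sketch of such a dsl_definitions block, assuming a token-authenticated external REST system (the entry name, URL and token below are purely illustrative):

"dsl_definitions": {
    "ipam-system": {
        "type": "token-auth",
        "url": "http://ipam-host:8080",
        "token": "Token <api-token>"
    }
}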

Topology Template

Here you can manually add your package:

  1. Workflow: defines an overall action to be taken on the service

  2. Node/Component template: used to represent a functionality along with its contracts, such as inputs, outputs, and attributes

image31

Hello World CBA Reference

How to create a “Hello World” Package with CDS Designer UI? The Resource Resolution Type

Note

How to Get Started with CDS Designer UI

If you’re new to CDS Designer UI and need to get set up, the following guides may be helpful:

Note

In order to see the latest version described below in the tutorial, you need to use the latest cds-ui-server docker image: nexus3.onap.org:10001/onap/ccsdk-cds-ui-server:1.1.0-STAGING-latest

Create New CBA Package

In the Package List, click on the Create Package button.

image1

Define Package MetaData

In MetaData Tab:

  1. Package name (Required), type “hello_world”

  2. Package version (Required), type “1.0.0”

  3. Package description (Required), type “Hello World, the New CBA Package created with CDS Designer UI”

  4. Package Tags (Required), type “tag1” then use the Enter key on the keyboard

image2

Once you enter all fields you will be able to save your package. Click on the Save button and continue to define your package.

image3

Define Template And Mapping

In the Template & Mapping Tab:

  1. Enter template name “hello_world_template”, then go to Template section

  2. Choose the template type “Velocity”

  3. Type the Template parameter “Hello, ${image_name}!” in the code editor

image4

Now, go to the Manage Mapping section.

image5

Click on the Use Current Template Instance button to resolve the value within the template and to auto-map it.

image6

Inside the Mapping table, change Dictionary Source from default to input

image7
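Conceptually, the mapping entry now follows the standard CDS resource-assignment mapping format. A sketch of what it may look like with the dictionary source switched to input (the description text is illustrative):

[
    {
        "name": "image_name",
        "property": {
            "description": "image_name",
            "type": "string"
        },
        "input-param": true,
        "dictionary-name": "image_name",
        "dictionary-source": "input",
        "dependencies": []
    }
]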

Click on the Finish button to save the template and close it.

image8

After the new template is added to the Template and Mapping list, click on the Save button to save the package updates.

image9

Create An Action

From the Package information box on top, click on the Designer Mode button.

image10

Click on the Skip to Designer Canvas button to go directly to Designer Mode.

image11

Now the designer has no actions added yet. Let's start adding the first Action.

image12

Go to the left side of the designer screen and in the ACTIONS tab, click on the + New Action button.

image13

Now, the first Action Action1 is added to the Actions list and in the Workflow canvas.

image14

Add Resource Resolution Function To The Action

On the left side of the designer screen, Click on the FUNCTIONS tab to view all the Functions List.

image15

Drag the function type “component-resource-resolution”

image16

Drop the function to the “Action1” Action container.

image17

Define Action Attributes

Click on Action1 in the ACTIONS tab to open the ACTION ATTRIBUTES section on the right side of the designer screen.

image18

Let's customize the first action's attributes by clicking on the + Create Custom button to open the Add Custom Attributes modal window.

image19

In the Add Custom Attributes window, start in the INPUTS tab to add the first input attribute for Action1.

INPUTS Tab: Enter the required properties for the input attribute:

  1. Name: “template-prefix”

  2. Type: “List”

  3. Required: “True”

image20

After you add the template-prefix input attribute, click on the OUTPUTS tab to create the output attribute too.

image21

OUTPUTS Tab: Enter the required properties for the output attribute:

  1. Name: “hello-world-output”

  2. Required: “True”

  3. Type: “other”

  4. Type name: “json”

  5. Value (get_attribute): From the Functions list, select "component-resource-resolution", which will show all attributes included in this function

  6. Select parameter name "assignment-params"

  7. Click on the Submit Attributes button to add the input and output attributes to the Action's attributes list

  8. Click on the Close button to close the modal window and go back to the designer screen.

image22

Now, you can see all the added attributes listed in the ACTION ATTRIBUTES area.

image23
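Conceptually, the attributes defined above become part of the Action1 workflow definition inside the CBA. A sketch of the resulting structure, assuming the standard CDS workflow format (the step name and description are illustrative):

"Action1": {
    "steps": {
        "resource-resolution": {
            "description": "Resource Resolution",
            "target": "component-resource-resolution"
        }
    },
    "inputs": {
        "template-prefix": {
            "required": true,
            "type": "list",
            "entry_schema": {
                "type": "string"
            }
        }
    },
    "outputs": {
        "hello-world-output": {
            "required": true,
            "type": "json",
            "value": {
                "get_attribute": [
                    "component-resource-resolution",
                    "assignment-params"
                ]
            }
        }
    }
}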

Define Function Attributes

From the ACTIONS list, click on the function name "component-resource-resolution".

image24

When you click on the component-resource-resolution function, the FUNCTION ATTRIBUTES section opens on the right side of the designer screen.

image25

Now, you need to add the values of the required input or output attributes in the Interfaces section.

  • artifact-prefix-names:

  1. Click on the Select Templates button

  2. In the modal window that lists all templates you created, click on the “hello_world_template” name

  3. Click on the Add Template button to insert it in the Artifacts section and to close the modal window.

image26

image27

Now, the hello_world_template template is listed inside the Artifacts section.

image28
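Behind the scenes, the selected template is attached to the function as a pair of artifacts. A sketch of what the resulting node template may look like, assuming the usual CDS artifact naming convention (the file paths below are illustrative):

"component-resource-resolution": {
    "type": "component-resource-resolution",
    "interfaces": {
        "ResourceResolutionComponent": {
            "operations": {
                "process": {
                    "inputs": {
                        "artifact-prefix-names": [
                            "hello_world_template"
                        ]
                    }
                }
            }
        }
    },
    "artifacts": {
        "hello_world_template-template": {
            "type": "artifact-template-velocity",
            "file": "Templates/hello_world_template-template.vtl"
        },
        "hello_world_template-mapping": {
            "type": "artifact-mapping-resource",
            "file": "Templates/hello_world_template-mapping.json"
        }
    }
}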

From the page header and inside the Save menu, click on the Save button to save all the changes.

image30

Enrich And Deploy The CBA Package

From the page header and inside the Save menu, click on the Enrich & Deploy button.

image31

Once the process is done, a confirmation message will appear.

image32

Test The CBA package With CDS REST API

To test the CDS hello_world package we created, we can use the REST API shown below to run the resource resolution workflow in the hello_world package, which will resolve the value of the "image_name" resource from the REST call input and send it back to the user in the form "Hello, $image_name!".

CURL Request to RUN CBA Package

curl --location --request POST 'http://cds-blueprint-processor:8080/api/v1/execution-service/process' \
--header 'Content-Type: application/json;charset=UTF-8' \
--header 'Accept: application/json;charset=UTF-8,application/json' \
--header 'Authorization: Basic Y2NzZGthcHBzOmNjc2RrYXBwcw==' \
--data-raw '{
--data-raw '{
    "actionIdentifiers": {
        "mode": "sync",
        "blueprintName": "hello_world",
        "blueprintVersion": "1.0.0",
        "actionName": "Action1"
    },
    "payload": {
        "Action1-request": {
             "Action1-properties": {
                 "image_name": "Sarah Abouzainah"
             }
        }
    },
    "commonHeader": {
         "subRequestId": "143748f9-3cd5-4910-81c9-a4601ff2ea58",
         "requestId": "e5eb1f1e-3386-435d-b290-d49d8af8db4c",
         "originatorId": "SDNC_DG"
    }
}'

CDS Response showing result of running package

{
  "correlationUUID": null,
  "commonHeader": {
    "timestamp": "2020-12-13T11:43:10.993Z",
    "originatorId": "SDNC_DG",
    "requestId": "e5eb1f1e-3386-435d-b290-d49d8af8db4c",
    "subRequestId": "143748f9-3cd5-4910-81c9-a4601ff2ea58",
    "flags": null
  },
  "actionIdentifiers": {
    "blueprintName": "hello_world",
    "blueprintVersion": "1.0.0",
    "actionName": "Action1",
    "mode": "sync"
  },
  "status": {
    "code": 200,
    "eventType": "EVENT_COMPONENT_EXECUTED",
    "timestamp": "2020-12-13T11:43:11.028Z",
    "errorMessage": null,
    "message": "success"
  },
  "payload": {
    "Action1-response": {
      "hello-world-output": {
        "hello_world_template": "Hello, Sarah Abouzainah!"
      }
    }
  }
}

Screenshot from POSTMAN showing how to run the hello_world package, and the CDS Response:

image33

Next:

How to create a “Hello World” Package with CDS Designer UI? The Script Executor Type

Note

How to Get Started with CDS Designer UI

If you’re new to CDS Designer UI and need to get set up, the following guides may be helpful:

Note

In order to see the latest version described below in the tutorial, you need to use the latest cds-ui-server docker image: nexus3.onap.org:10001/onap/ccsdk-cds-ui-server:1.1.0-STAGING-latest

Create New CBA Package

In the Package List, click on the Create Package button.

image1

Define Package MetaData

In METADATA Tab:

  1. Package name (Required), type “Hello-world-package-kotlin”

  2. Package version (Required), type “1.0.0”

  3. Package description (Required), type “just description”

  4. Package Tags (Required), type “kotlin” then use the Enter key on the keyboard

  5. In the Custom Key section, add Key name “template_type” and

  6. For Key Value “DEFAULT”

image2

Once you enter all fields you will be able to save your package. Click on the Save button and continue to define your package.

image3

Define Scripts

In the SCRIPTS Tab:

  1. Click on the Create Script button

image4

In the Create Script File modal:

image5

  1. Enter script file name “Test”

  2. Choose the script type “Kotlin”

  3. Type or copy and paste the below script in the code editor

/*
 * Copyright © 2020, Orange
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package org.onap.ccsdk.cds.blueprintsprocessor.services.execution.scripts

import org.onap.ccsdk.cds.blueprintsprocessor.core.api.data.ExecutionServiceInput
import org.onap.ccsdk.cds.blueprintsprocessor.services.execution.AbstractScriptComponentFunction
import org.onap.ccsdk.cds.blueprintsprocessor.services.execution.ComponentRemoteScriptExecutor
import org.onap.ccsdk.cds.controllerblueprints.core.asJsonPrimitive
import org.slf4j.LoggerFactory

open class HelloWorld : AbstractScriptComponentFunction() {
    private val log = LoggerFactory.getLogger(HelloWorld::class.java)!!

    override fun getName(): String {
         return "Check"
    }

    override suspend fun processNB(executionRequest: ExecutionServiceInput) {
          log.info("executing hello world script ")
          val username = getDynamicProperties("username").asText()
          log.info("username : $username")
          //executionRequest.payload.put("Action1-response","hello from $username")
          setAttribute("response-data", "Hello, $username".asJsonPrimitive())
    }

    override suspend fun recoverNB(runtimeException: RuntimeException, executionRequest: ExecutionServiceInput) {
           log.info("Executing Recovery")
           bluePrintRuntimeService.getBluePrintError().addError("${runtimeException.message}")
     }
}
  4. Click on the Create Script button to save the script file

image6

Now, you can view and edit your script file.

image7

After the new script is added to the scripts list, click on the Save button to save the package updates.

image8

Define DSL Properties

In the DSL PROPERTIES Tab:

  1. Copy and paste the below DSL definition

{
    "Action1-properties": {
        "username": {
            "get_input": "username"
        }
    }
}

image9

Then click on the Save button to update the package.

image10

Create An Action

From the Package information box on top, click on the Designer Mode button.

image11

Click on the Skip to Designer Canvas button to go directly to Designer Mode.

image12

Now the designer has no actions added yet. Let's start adding the first Action.

image13

Go to the left side of the designer screen and in the ACTIONS tab, click on the + New Action button.

image14

Now, the first Action Action1 is added to the Actions list and in the Workflow canvas.

image15

Add Script Executor Function To The Action

On the left side of the designer screen, Click on the FUNCTIONS tab to view all the Functions List.

image16

Drag the function type “component-script-executor”

image17

Drop the function to the “Action1” Action container.

image18

Define Action Attributes

Click on Action1 in the ACTIONS tab to open the ACTION ATTRIBUTES section on the right side of the designer screen.

image19

Let's customize the first action's attributes by clicking on the + Create Custom button to open the Add Custom Attributes modal window.

image20

In the Add Custom Attributes window, start in the INPUTS tab to add the first input attribute for Action1. INPUTS Tab: Enter the required properties for the input attribute:

  1. Name: “username”

  2. Type: “Other”

  3. Attribute type name: “dt-resource-assignment-properties”

  4. Required: “True”

image21

After you add the username input attribute, click on the OUTPUTS tab to create the output attribute too.

image22

OUTPUTS Tab: Enter the required properties for the output attribute:

  1. Name: “hello-world-output”

  2. Required: “True”

  3. Type: “Other”

  4. Type name: “json”

  5. Value (get_attribute): From the Functions list, select "component-script-executor", which will show all attributes included in this function

  6. Select parameter name "response-data"

  7. Click on the Submit Attributes button to add the input and output attributes to the Action's attributes list

  8. Click on the Close button to close the modal window and go back to the designer screen.

image23

Now, you can see all the added attributes listed in the ACTION ATTRIBUTES area.

image24

Define Function Attributes

From the ACTIONS list, click on the function name "component-script-executor".

image25

When you click on the component-script-executor function, the FUNCTION ATTRIBUTES section opens on the right side of the designer screen. Now, you need to add the values of the required input attributes in the Interfaces section.

image26

  1. script-type: “kotlin”

  2. script-class-reference: “org.onap.ccsdk.cds.blueprintsprocessor.services.execution.scripts.HelloWorld”

  3. Add an optional attribute by clicking on the Add Optional Attributes button; add "dynamic-properties", then enter the value "*Action1-properties"

image27
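Conceptually, the values entered above translate into the inputs of the component-script-executor node template in the CBA. A sketch of the resulting structure, assuming the standard CDS node template format:

"component-script-executor": {
    "type": "component-script-executor",
    "interfaces": {
        "ComponentScriptExecutor": {
            "operations": {
                "process": {
                    "inputs": {
                        "script-type": "kotlin",
                        "script-class-reference": "org.onap.ccsdk.cds.blueprintsprocessor.services.execution.scripts.HelloWorld",
                        "dynamic-properties": "*Action1-properties"
                    }
                }
            }
        }
    }
}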

From the page header and inside the Save menu, click on the Save button to save all the changes.

image29

Enrich And Deploy The CBA Package

From the page header and inside the Save menu, click on the Enrich & Deploy button.

image30

Once the process is done, a confirmation message will appear.

image31

Test The CBA package With CDS REST API

To test the Hello-world-package-kotlin package we created, we can use the REST API shown below to run the script executor workflow, which will resolve the value of the "username" resource from the REST call input and send it back to the user in the form "Hello, $username".

CURL Request to RUN CBA Package

curl --location --request POST 'http://10.1.1.9:8080/api/v1/execution-service/process' \
--header 'Content-Type: application/json;charset=UTF-8' \
--header 'Accept: application/json;charset=UTF-8,application/json' \
--header 'Authorization: Basic Y2NzZGthcHBzOmNjc2RrYXBwcw==' \
--header 'Host: cds-blueprints-processor-http:8080' \
--header 'Cookie: JSESSIONID=7E69BC3F752FD5A3D7D1663FE583ED71' \
--data-raw '{
               "actionIdentifiers": {
                   "mode": "sync",
                   "blueprintName": "Hello-world-package-kotlin",
                   "blueprintVersion": "1.0.0",
                   "actionName": "Action1"
               },
               "payload": {
                   "Action1-request": {
                       "username":"Orange Egypt"
                   }
               },
               "commonHeader": {
                   "subRequestId": "143748f9-3cd5-4910-81c9-a4601ff2ea58",
                   "requestId": "e5eb1f1e-3386-435d-b290-d49d8af8db4c",
                   "originatorId": "SDNC_DG"
               }
            }'

CDS Response showing result of running package

200 OK
    {
        "correlationUUID": null,
        "commonHeader": {
            "timestamp": "2021-01-12T13:22:26.518Z",
            "originatorId": "SDNC_DG",
            "requestId": "e5eb1f1e-3386-435d-b290-d49d8af8db4c",
            "subRequestId": "143748f9-3cd5-4910-81c9-a4601ff2ea58",
            "flags": null
        },
        "actionIdentifiers": {
            "blueprintName": "Hello-world-package-kotlin",
            "blueprintVersion": "1.0.0",
            "actionName": "Action1",
            "mode": "sync"
        },
        "status": {
            "code": 200,
            "eventType": "EVENT_COMPONENT_EXECUTED",
            "timestamp": "2021-01-12T13:22:56.144Z",
            "errorMessage": null,
            "message": "success"
        },
        "payload": {
            "Action1-response": {
                "hello-world-output": {
                    "hello_world_template": "Hello, Orange Egypt"
                 }
             }
        }
    }

Screenshot from POSTMAN showing how to run the Hello-world-package-kotlin package, and the CDS Response:

image32

Offered APIs

Blueprint Processor API Reference

Introduction

This section shows all resources and endpoints which the CDS BP processor currently provides, through a swagger file that is automatically created during the CDS build process by the Swagger Maven Plugin. A corresponding Postman collection is also included. Endpoints can also be described using the template api-doc-template.rst, but this is not the preferred way to describe the CDS API.

You can find a sample workflow tutorial below which shows how to use the endpoints in the right order. This will give you a better understanding of the CDS Blueprint Processor API.

Getting Started

If you can't access a running CDS Blueprint Processor yet, you can choose one of the options below to run it. Afterwards, you can start trying out the API.

Authorization

Use Basic authorization with ccsdkapps as both username and password, i.e. the header Authorization: Basic Y2NzZGthcHBzOmNjc2RrYXBwcw==.

Download

Here is the automatically created swagger file for CDS Blueprint Processor API: cds-bp-processor-api-swagger.json

You can find a postman collection including sample requests for all endpoints here: bp-processor.postman_collection.json. Please keep the Postman Collection up-to-date for new endpoints.

General Setup

All endpoints are accessible under http://{{host}}:{{port}}/api/v1/. Host and port depend on your CDS BP processor deployment.

List all endpoints

Lists all available endpoints from blueprints processor API.

Request
GET http://{{host}}:{{port}}/actuator/mappings

request
curl --location --request GET 'http://localhost:8081/actuator/mappings' \
--header 'Authorization: Basic Y2NzZGthcHBzOmNjc2RrYXBwcw=='
Success Response

HTTP Status 200 OK

sample response body
{
   "contexts": {
      "application": {
            "mappings": {
               "dispatcherHandlers": {
                  "webHandler": [
                        {
                           "predicate": "{GET /api/v1/blueprint-model, produces [application/json]}",
                           "handler": "org.onap.ccsdk.cds.blueprintsprocessor.designer.api.BlueprintModelController#allBlueprintModel()",
                           "details": {
                              "handlerMethod": {
                                    "className": "org.onap.ccsdk.cds.blueprintsprocessor.designer.api.BlueprintModelController",
                                    "name": "allBlueprintModel",
                                    "descriptor": "()Ljava/util/List;"
                              },
                              "handlerFunction": null,
                              "requestMappingConditions": {
                                    "consumes": [],
                                    "headers": [],
                                    "methods": [
                                       "GET"
                                    ],
                                    "params": [],
                                    "patterns": [
                                       "/api/v1/blueprint-model"
                                    ],
                                    "produces": [
                                       {
                                          "mediaType": "application/json",
                                          "negated": false
                                       }
                                    ]
                              }
                           }
                        },
                        {
                           "predicate": "{GET /api/v1/blueprint-model/meta-data/{keyword}, produces [application/json]}",
                           "handler": "org.onap.ccsdk.cds.blueprintsprocessor.designer.api.BlueprintModelController#allBlueprintModelMetaData(String, Continuation)",
                           "details": {
                              "handlerMethod": {
                                    "className": "org.onap.ccsdk.cds.blueprintsprocessor.designer.api.BlueprintModelController",
                                    "name": "allBlueprintModelMetaData",
                                    "descriptor": "(Ljava/lang/String;Lkotlin/coroutines/Continuation;)Ljava/lang/Object;"
                              },
                              "handlerFunction": null,
                              "requestMappingConditions": {
                                    "consumes": [],
                                    "headers": [],
                                    "methods": [
                                       "GET"
                                    ],
                                    "params": [],
                                    "patterns": [
                                       "/api/v1/blueprint-model/meta-data/{keyword}"
                                    ],
                                    "produces": [
                                       {
                                          "mediaType": "application/json",
                                          "negated": false
                                       }
                                    ]
                              }
                           }
                        }
                  ]
               }
            },
            "parentId": null
      }
   }
}
API Reference

Warning

In the used Sphinx plugin sphinxcontrib-swaggerdoc, some information of the swagger file is not rendered completely, e.g. the request body. Use your favorite Swagger Editor and paste the swagger file into it to get a complete view of the API reference, e.g. on https://editor.swagger.io/.


Workflow Tutorial
Introduction

This section shows a basic workflow for processing a CBA. For this we will follow the PNF Simulator use case guide. We will use the same CBA, but since this CBA is loaded during bootstrap by default, we will first delete it and afterwards manually enrich and save it in CDS. The referred use case shows how the day-n configuration is assigned and deployed to a PNF through CDS. You don't necessarily need a netconf server (which acts as a PNF simulator) running to get an understanding of this workflow tutorial. Just be aware that without a netconf server set up, the day-n configuration deployment will fail in the last step.

Use the Postman Collection from the referred use case to get sample requests for the following steps: json.

The CBA which we are using is downloadable here zip. Hint: this CBA is also included in the CDS source code for bootstrapping.

Set up CDS

If not done before, run the Bootstrap request, which calls the Bootstrap API of CDS (POST /api/v1/blueprint-model/bootstrap) to load all the CDS default model artifacts into CDS. You should get HTTP status 200 for the below command.

Call the Get Blueprints request to get all blueprint models which are saved in CDS. This calls the GET /api/v1/blueprint-model endpoint. You will see the blueprint model "artifactName": "pnf_netconf", which is loaded by bootstrap since the Guilin release. Since we want to load the CBA manually, first delete it from CDS by calling the delete endpoint DELETE /api/v1/blueprint-model/name/{name}/version/{version}. If you call Get Blueprints again, you can see that the pnf_netconf CBA is now missing.

Because the CBA contains a custom data dictionary, we need to push the custom entries to CDS first by calling the Data Dictionary request. Actually, the custom entries are already loaded through bootstrap, but let's pretend they are not present in CDS yet.
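For illustration, the entries pushed this way follow the CDS resource dictionary format. A minimal sketch of such a definition (the entry below is hypothetical and not copied from the pnf_netconf CBA):

{
    "name": "netconf-password",
    "tags": "netconf-password",
    "updated-by": "tutorial",
    "property": {
        "description": "Password for NETCONF access",
        "type": "string"
    },
    "sources": {
        "input": {
            "type": "source-input"
        }
    }
}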

Note

For every data dictionary entry, the CDS API needs to be called separately. The Postman collection contains a loop to go through all custom entries and call the data dictionary endpoint for each one. To execute this loop, open the Runner in Postman and run the Data Dictionary request as shown in the picture below.

imageDDPostmanRunner

Enrichment

Enrich the blueprint by executing the Enrich Blueprint request. Take care to provide the CBA file, which you can download here (zip), in the request body. After the request has been executed, download the response body as shown in the picture below; this will be your enriched CBA file.

saveResponseImage

Deploy/Save the Blueprint

Run the Save Blueprint request to save/deploy the blueprint into the CDS database. Take care to provide the enriched CBA file, which you downloaded earlier, in the request body.

After that you should see the new model "artifactName": "pnf_netconf" by calling Get Blueprints request.

An alternative would be to use the POST /api/v1/blueprint-model/publish endpoint, which would also validate the CBA. To do enrichment and save the CBA in a single call, POST /api/v1/blueprint-model/enrichandpublish could also be used.

Config-Assign / Config-Deploy

From now on, you can continue with the PNF Simulator use case from the section Config-assign and config-deploy to finish the workflow tutorial. The provided Postman collection already contains all the requests needed for this part, so you don't need to create the calls and payloads manually. Take care that the last step will fail if you don't have a netconf server set up.

Controller Design Studio Presentation

For details about the CDS architecture and design, please follow the link: CDS_Architecture_Design