Data Collection, Analytics, and Events (DCAE)

Architecture

Data Collection Analytics and Events (DCAE) is the primary data collection and analysis system of ONAP. The DCAE architecture comprises DCAE Platform and DCAE Service components, making DCAE flexible and elastic enough to support a potentially unlimited number of ways of constructing intelligent and automated control loops on distributed and heterogeneous infrastructure.

The DCAE Platform supports the functions to deploy, host, and perform lifecycle management (LCM) of Service components. DCAE Platform components enable model-driven deployment of service components and of the middleware infrastructures that service components depend upon, such as special storage and computation platforms. When triggered by an invocation call (for example from CLAMP or via the DCAE Dashboard), the DCAE Platform follows the TOSCA model of the control loop specified by the triggering call and interacts with the underlying networking and computing infrastructure, such as OpenStack installations and Kubernetes clusters, to deploy and configure the virtual apparatus (i.e. the collectors, the analytics, and auxiliary microservices) needed to form the control loop, at the requested locations. The DCAE Platform also provisions DMaaP topics and manages the distribution scopes of the topics following the prescription of the control loop model, by interacting with the controlling function of DMaaP.

DCAE Service components are the functional entities that realize the collection and analytics needs of ONAP control loops. They include collectors for various data collection needs, event processors for data standardization, analytics that assess collected data, and various auxiliary microservices that assist data collection and analytics and support other ONAP functions. Service components and DMaaP buses form the “data plane” of DCAE, over which DCAE-collected data is transported among the different DCAE service components.

DCAE uses Consul’s distributed key-value (K-V) store to manage component configurations, where each key is based on the unique identity of a DCAE component (identified by ServiceComponentName) and the value is the configuration for the corresponding component. The K-V entry for each service component is created during deployment. The DCAE platform creates and updates the K-V pairs based on information provided as part of the control loop blueprint deployment, or through a notification/trigger received from other ONAP components such as the Policy Framework and CLAMP. Either through periodic polling or proactive pushes, DCAE components receive configuration updates in near real time and apply them. The DCAE Platform also offers dynamic template resolution for configuration parameters that are dynamic and known only to the DCAE platform, such as dynamically provisioned DMaaP topics. This approach standardizes component deployment and configuration management for DCAE service components in multi-site deployments.
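For illustration, a service component can retrieve its resolved configuration from the Config Binding Service, which fronts the Consul K-V store. The following is a minimal Python sketch using the requests library; the service address, environment variable names, and component name are assumptions for a typical in-cluster deployment, not fixed ONAP values.

# Minimal sketch: fetch a component's bound configuration from the
# Config Binding Service (CBS), which resolves dynamic parameters
# against the Consul K-V store. Address and names are illustrative.
import os
import requests

CBS_URL = os.environ.get("CONFIG_BINDING_SERVICE",
                         "http://config-binding-service:10000")
# ServiceComponentName identifies this component's K-V entry in Consul.
COMPONENT = os.environ.get("SERVICE_COMPONENT_NAME", "demo-component")

resp = requests.get(f"{CBS_URL}/service_component/{COMPONENT}", timeout=10)
resp.raise_for_status()
config = resp.json()  # bound configuration, dynamic parameters resolved
print(config)

A component that polls this endpoint periodically will pick up configuration changes written to Consul by the platform without redeployment.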

DCAE Components

The following lists the components included in ONAP DCAE. All DCAE components are offered as Docker containers. Following ONAP-level deployment methods, these components can be deployed as Kubernetes Deployments and Services.

  • DCAE Platform
    • Core Platform
      • Cloudify Manager: TOSCA model executor. Materializes TOSCA models of control loops, or Blueprints, into properly configured and managed virtual DCAE functional components.

      • Plugins (K8s, DMaaP, Policy, CLAMP, Postgres)

    • Extended Platform
      • Configuration Binding Service: Agent for service component configuration fetching; providing configuration parameter resolution.

      • Deployment Handler: API for triggering control loop deployment based on control loop’s TOSCA model.

      • Policy Handler: Handler for fetching policy updates from Policy engine; and updating the configuration policies of KV entries in Consul cluster KV store for DCAE components.

      • Service Change Handler: Handler for interfacing with SDC; receiving new TOSCA models; and storing them in DCAE’s own inventory.

      • DCAE Inventory-API: API for DCAE’s TOSCA model store.

      • VES OpenApi Manager: Optional validator of VES_EVENT type artifacts executed during Service distributions.

    • Platform services
      • Consul: Distributed service discovery service and KV store.

      • Postgres Database: DCAE’s TOSCA model store.

      • Redis Database: DCAE’s transactional state store, used by TCA for supporting persistence and seamless scaling.

  • DCAE Services
    • Collectors
      • Virtual Event Streaming (VES) collector

      • SNMP Trap collector

      • High-Volume VES collector (HV-VES)

      • DataFile collector

      • RESTConf collector

    • Analytics
      • Holmes correlation analytics

      • CDAP based Threshold Crossing Analytics application (TCA)

      • Docker based Threshold Crossing Analytics

      • Heartbeat Services

      • SON-Handler Service

      • Slice Analysis

    • Event processors
      • PNF Registration Handler

      • VES Mapper Service

      • PM-Mapper Service

      • BBS-EventProcessor Service

      • PM Subscription Handler

      • DataLake Handlers (DL-Admin, DL-Feeder, DES)

The figure below shows the DCAE architecture and how the components work with each other. The components on the left constitute the Platform/controller components, which are statically deployed. The components on the right represent the services, which can be deployed either statically or dynamically (via CLAMP).

_images/R8_architecture_diagram.png

Deployment Scenarios

Because DCAE service components are deployed on-demand following the control loop needs for managing ONAP-deployed services, DCAE must support dynamic and on-demand deployment of service components based on ONAP control loop demands. Thus, while all other ONAP components are launched by the ONAP-level deployment method, DCAE deploys only a subset of its components during this ONAP deployment process; the rest of the DCAE components are deployed on-demand based on use-case needs, triggered either by a control loop request originating from CLAMP or by an operator manually invoking DCAE’s deployment API.

ONAP currently supports deployment through OOM Helm charts (Heat deployment support was discontinued in R3). Hence all DCAE Platform components are deployed via Helm charts - this includes Cloudify Manager, Config Binding Service, Service Change Handler, Policy Handler, Dashboard and Inventory, each with corresponding Helm charts under OOM (https://git.onap.org/oom/tree/kubernetes/dcaegen2/components). Once the DCAE platform components are up and running, the DCAE service components required for ONAP flows are deployed via the bootstrap POD, which invokes the Cloudify Manager API with Blueprints for the various DCAE components needed to support the built-in collection and control loop flows.

To keep the ONAP footprint minimal, only a minimal set of microservices (those required for ONAP Integration use cases) is deployed via the bootstrap pod. The remaining service blueprints are available for operators to deploy on-demand as required.

More details of the DCAE deployment can be found under Installation section.

Usage Scenarios

Within ONAP, DCAE participates in the following use cases.

  • vDNS: VES collector, TCA analytics

  • vFW: VES collector, TCA analytics

  • vCPE: VES collector, TCA analytics

  • vVoLTE: VES collector, Holmes analytics

  • CCVPN : RestConf Collector, Holmes

  • BBS : VES Collector, PRH, BBS-Event Processor, VES-Mapper, RESTConf Collector

  • 5G Bulk PM : DataFile Collector, PM-Mapper, HV-VES

  • 5G OOF SON: VES collector, SON-Handler

  • 5G E2E Network Slicing: VES collector, Slice Analysis, DES, PM-Mapper, DFC, Datalake feeder

In addition, DCAE supports on-demand deployment and configuration of service components via CLAMP; in such cases, CLAMP invokes the deployment and configuration of additional TCA instances.

Offered APIs

DCAE Dashboard

Description

DCAE Dashboard is a web application that provides a single interface for DCAE users and Ops users in ONAP to deploy and manage DCAE microservices.

API name

Swagger JSON

DCAE Dashboard

link

Contact Information

onap-discuss@lists.onap.org

Config Binding Service

API name

Swagger JSON

Swagger YAML

Config Binding Service

link

link

GET /service_component_all/{service_component_name}

Description
Binds the configuration for service_component_name and returns the bound configuration, policies, and any other keys that are in Consul
Parameters

Name

Located in

Required

Type

Format

Properties

Description

service_component_name

path

Yes

string

Service Component Name. service_component_name must be a key in consul.

Request
Responses
200

OK; returns {config : …, policies : ….., k : …} for all other k in Consul

Response Schema:

Example:

{}
404

there is no configuration in Consul for this component
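As a hypothetical usage example (the CBS address and component name are placeholders), a client could retrieve this full bound view, including policies, as follows:

# Sketch: fetch configuration plus policies and any other Consul keys
# for one component. Address and name are placeholders, not fixed values.
import requests

CBS_URL = "http://config-binding-service:10000"  # assumed address
NAME = "demo-component"  # must exist as a key in Consul

resp = requests.get(f"{CBS_URL}/service_component_all/{NAME}", timeout=10)
if resp.status_code == 404:
    raise SystemExit("no configuration in Consul for this component")
resp.raise_for_status()
payload = resp.json()
config = payload.get("config", {})      # the bound configuration
policies = payload.get("policies", {})  # policy data, if any was pushed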

GET /service_component/{service_component_name}

Description
Binds the configuration for service_component_name and returns the bound configuration as a JSON
Parameters

Name

Located in

Required

Type

Format

Properties

Description

service_component_name

path

Yes

string

Service Component Name. service_component_name must be a key in consul.

Request
Responses
200

OK; the bound config is returned as an object

Response Schema:

Example:

{}
404

there is no configuration in Consul for this component

GET /{key}/{service_component_name}

Description
this is an endpoint that fetches a generic service_component_name:key out of Consul. The idea is that we don't want to tie components to Consul directly in case we swap out the backend some day, so the CBS abstracts Consul from clients. The structuring and weird collision of this new API with the above is unfortunate but due to legacy concerns.
Parameters

Name

Located in

Required

Type

Format

Properties

Description

key

path

Yes

string

this endpoint tries to pull service_component_name:key; key is the key after the colon

service_component_name

path

Yes

string

Service Component Name.

Request
Responses
200

OK; returns service_component_name:key

Response Schema:

Example:

{}
400

bad request. Currently this is only returned on :policies, which is a complex object, and should be gotten through service_component_all

404

key does not exist

GET /healthcheck

Description
This is the health check endpoint. If it returns a 200, the server is alive and Consul can be reached. If it does not return a 200, the server is either dead or has no connection to Consul.
Request
Responses
200

Successful response

503

the config binding service cannot reach Consul

Deployment-Handler

API name

Swagger JSON

Swagger YAML

deployment-handler

link

link

Description

High-level API for deploying/undeploying composed DCAE services using Cloudify Manager.

License

Apache 2.0

DCAE-DEPLOYMENTS

operations on dcae-deployments

DELETE /dcae-deployments/{deploymentId}
Description
Uninstall the DCAE service and remove all associated data from the orchestrator.
Parameters

Name

Located in

Required

Type

Format

Properties

Description

deploymentId

path

Yes

string

Deployment identifier for the service to be uninstalled.

Request
Responses
202

Success: The dispatcher has initiated the uninstall operation.

Type: DCAEDeploymentResponse

Example:

{
    "links": {
        "self": "somestring",
        "status": "somestring"
    },
    "requestId": "somestring"
}
400

Bad request: See the message in the response for details.

Type: DCAEErrorResponse

Example:

{
    "message": "somestring",
    "status": 1
}
500

Problem on the server side. See the message in the response for more details.

Type: DCAEErrorResponse

Example:

{
    "message": "somestring",
    "status": 1
}
502

Error reported to the dispatcher by a downstream system. See the message in the response for more details.

Type: DCAEErrorResponse

Example:

{
    "message": "somestring",
    "status": 1
}
504

Error communicating with a downstream system. See the message in the response for more details.

Type: DCAEErrorResponse

Example:

{
    "message": "somestring",
    "status": 1
}
GET /dcae-deployments
Description
List service deployments known to the orchestrator, optionally restricted to a single service type
Parameters

Name

Located in

Required

Type

Format

Properties

Description

serviceTypeId

query

No

string

Service type identifier for the type whose deployments are to be listed

Request
Responses
200

Success. (Note that if no matching deployments are found, the request is still a success; the deployments array is empty in that case.)

Type: DCAEDeploymentsListResponse

Example:

{
    "deployments": [
        {
            "href": "somestring"
        },
        {
            "href": "somestring"
        }
    ],
    "requestId": "somestring"
}
500

Problem on the server side. See the message in the response for more details.

Type: DCAEErrorResponse

Example:

{
    "message": "somestring",
    "status": 1
}
502

Error reported to the dispatcher by a downstream system. See the message in the response for more details.

Type: DCAEErrorResponse

Example:

{
    "message": "somestring",
    "status": 1
}
504

Error communicating with a downstream system. See the message in the response for more details.

Type: DCAEErrorResponse

Example:

{
    "message": "somestring",
    "status": 1
}
GET /dcae-deployments/{deploymentId}/operation/{operationId}
Description
Get status of a deployment operation
Parameters

Name

Located in

Required

Type

Format

Properties

Description

deploymentId

path

Yes

string

operationId

path

Yes

string

Request
Responses
200

Status information retrieved successfully

Type: DCAEOperationStatusResponse

Example:

{
    "error": "somestring",
    "links": {
        "self": "somestring",
        "uninstall": "somestring"
    },
    "operationType": "somestring",
    "requestId": "somestring",
    "status": "somestring"
}
404

The operation information does not exist (possibly because the service has been uninstalled and deleted).

Type: DCAEErrorResponse

Example:

{
    "message": "somestring",
    "status": 1
}
500

Problem on the server side. See the message in the response for more details.

Type: DCAEErrorResponse

Example:

{
    "message": "somestring",
    "status": 1
}
502

Error reported to the dispatcher by a downstream system. See the message in the response for more details.

Type: DCAEErrorResponse

Example:

{
    "message": "somestring",
    "status": 1
}
504

Error communicating with a downstream system. See the message in the response for more details.

Type: DCAEErrorResponse

Example:

{
    "message": "somestring",
    "status": 1
}
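A client that has started an install or uninstall can poll this endpoint until the workflow completes. The sketch below is illustrative only; the handler address, TLS handling, and both identifiers are assumptions:

# Sketch: poll an operation's status until it leaves 'processing'.
# Handler address, deployment id, and operation id are hypothetical.
import time
import requests

DH_URL = "https://deployment-handler:8443"
DEPLOYMENT_ID = "demo-deployment-1"
OPERATION_ID = "demo-operation-1"

while True:
    resp = requests.get(
        f"{DH_URL}/dcae-deployments/{DEPLOYMENT_ID}/operation/{OPERATION_ID}",
        verify=False, timeout=30)  # demo only: skips TLS verification
    resp.raise_for_status()
    status = resp.json()
    if status["status"] != "processing":
        break
    time.sleep(10)
print(status["operationType"], status["status"])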
PUT /dcae-deployments/{deploymentId}
Description
Request deployment of a DCAE service
Parameters

Name

Located in

Required

Type

Format

Properties

Description

deploymentId

path

Yes

string

Unique deployment identifier assigned by the API client.

Request
Body

Request for deploying a DCAE service.

Name

Required

Type

Format

Properties

Description

inputs

No

Object containing inputs needed by the service blueprint to create an instance of the service. Content of the object depends on the service being deployed.

serviceTypeId

Yes

string

The service type identifier (a unique ID assigned by DCAE inventory) for the service to be deployed.

Inputs schema:

Object containing inputs needed by the service blueprint to create an instance of the service. Content of the object depends on the service being deployed.

{
    "inputs": {},
    "serviceTypeId": "somestring"
}
Responses
202
Success: The content that was posted is valid, the dispatcher has found the needed blueprint, created an instance of the topology in the orchestrator, and started an installation workflow.

Type: DCAEDeploymentResponse

Example:

{
    "links": {
        "self": "somestring",
        "status": "somestring"
    },
    "requestId": "somestring"
}
400

Bad request: See the message in the response for details.

Type: DCAEErrorResponse

Example:

{
    "message": "somestring",
    "status": 1
}
409

A service with the specified deployment Id already exists. Using PUT to update the service is not a supported operation.

Type: DCAEErrorResponse

Example:

{
    "message": "somestring",
    "status": 1
}
415

Bad request: The Content-Type header does not indicate that the content is ‘application/json’

Type: DCAEErrorResponse

Example:

{
    "message": "somestring",
    "status": 1
}
500

Problem on the server side. See the message in the response for more details.

Type: DCAEErrorResponse

Example:

{
    "message": "somestring",
    "status": 1
}
502

Error reported to the dispatcher by a downstream system. See the message in the response for more details.

Type: DCAEErrorResponse

Example:

{
    "message": "somestring",
    "status": 1
}
504

Error communicating with a downstream system. See the message in the response for more details.

Type: DCAEErrorResponse

Example:

{
    "message": "somestring",
    "status": 1
}
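To tie the pieces together, a deployment request could look like the following sketch; the handler address, service type id, and inputs are placeholders that in practice come from DCAE inventory and the blueprint being deployed:

# Sketch: request deployment of a DCAE service via the deployment handler.
# All identifiers, addresses, and inputs below are made-up placeholders.
import requests

DH_URL = "https://deployment-handler:8443"
DEPLOYMENT_ID = "demo-deployment-1"  # unique id chosen by the API client
body = {
    "serviceTypeId": "type-id-from-inventory",  # hypothetical type id
    "inputs": {},  # blueprint-specific inputs, if any
}

resp = requests.put(f"{DH_URL}/dcae-deployments/{DEPLOYMENT_ID}",
                    json=body, verify=False, timeout=60)
resp.raise_for_status()  # expect 202 Accepted
print("status link:", resp.json()["links"]["status"])

The returned status link can then be polled as shown earlier under the operation-status endpoint.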

INFO

version and links

GET /
Description
Returns version information and links to API operations
Request
Responses
200

Success

Response Schema:

Name

Required

Type

Format

Properties

Description

apiVersion

No

string

version of API supported by this server

links

No

links

Links to API resources

serverVersion

No

string

version of software running on this server

Links schema:

Links to API resources

Name

Required

Type

Format

Properties

Description

events

No

string

path for the events endpoint

info

No

string

path for the server information endpoint

Example:

{
    "apiVersion": "somestring",
    "links": {
        "events": "somestring",
        "info": "somestring"
    },
    "serverVersion": "somestring"
}

POLICY

policy update API consumed by policy-handler and debug API to find policies on components

GET /policy/components
Description
debug API to find policies on components
Request
Responses
200

deployment-handler found components with or without policies in cloudify

POST /policy
Description
policy update API consumed by policy-handler
Request
Body

request to update policies on DCAE components.

Name

Required

Type

Format

Properties

Description

catch_up

Yes

boolean

flag to indicate whether the request contains all the policies in PDP or not

errored_policies

No

whether policy-engine returned an error on the policy.

errored_scopes

No

array of string

on catchup - list of policy scope_prefix values on which the policy-engine experienced an error other than not-found data.

latest_policies

Yes

dictionary of (policy_id -> DCAEPolicy object).

removed_policies

Yes

whether policy was removed from policy-engine.

scope_prefixes

No

array of string

on catchup - list of all scope_prefixes used by the policy-handler to retrieve the policies from policy-engine.

Errored_policies schema:

whether policy-engine returned an error on the policy. dictionary of (policy_id -> true). In example: replace additionalProp1,2,3 with policy_id1,2,3 values

Map of {“key”:”boolean”}

Latest_policies schema:

dictionary of (policy_id -> DCAEPolicy object). In example: replace additionalProp1,2,3 with policy_id1,2,3 values

Map of {“key”:”DCAEPolicy”}

Removed_policies schema:

whether policy was removed from policy-engine. dictionary of (policy_id -> true). In example: replace additionalProp1,2,3 with policy_id1,2,3 values

Map of {“key”:”boolean”}

{
    "catch_up": true,
    "errored_policies": {},
    "errored_scopes": [
        "somestring",
        "somestring"
    ],
    "latest_policies": {
        "DCAEPolicy": {
            "policy_body": {
                "config": {},
                "policyName": "somestring",
                "policyVersion": "somestring"
            },
            "policy_id": "somestring"
        }
    },
    "removed_policies": {},
    "scope_prefixes": [
        "somestring",
        "somestring"
    ]
}
Responses
200

deployment-handler always responds with ok to /policy before processing the request

Data Structures

DCAEDeploymentRequest Model Structure

Request for deploying a DCAE service.

Name

Required

Type

Format

Properties

Description

inputs

No

Object containing inputs needed by the service blueprint to create an instance of the service.

serviceTypeId

Yes

string

The service type identifier (a unique ID assigned by DCAE inventory) for the service to be deployed.

Inputs schema:

Object containing inputs needed by the service blueprint to create an instance of the service. Content of the object depends on the service being deployed.

DCAEDeploymentResponse Model Structure

Response body for a PUT or DELETE to /dcae-deployments/{deploymentId}

Name

Required

Type

Format

Properties

Description

links

Yes

links

Links that the API client can access.

requestId

Yes

string

Unique identifier for the request

Links schema:

Links that the API client can access.

Name

Required

Type

Format

Properties

Description

self

No

string

Link used to retrieve information about the service being deployed

status

No

string

Link used to retrieve information about the status of the installation workflow

DCAEDeploymentsListResponse Model Structure

Object providing a list of deployments

Name

Required

Type

Format

Properties

Description

deployments

Yes

array of deployments

requestId

Yes

string

Unique identifier for the request

Deployments schema:

Name

Required

Type

Format

Properties

Description

href

No

string

URL for the service deployment

DCAEErrorResponse Model Structure

Object reporting an error.

Name

Required

Type

Format

Properties

Description

message

No

string

Human-readable description of the reason for the error

status

Yes

integer

HTTP status code for the response

DCAEOperationStatusResponse Model Structure

Response body for a request for status of an installation or uninstallation operation.

Name

Required

Type

Format

Properties

Description

error

No

string

If status is ‘failed’, this field will be present and contain additional information about the reason the operation failed.

links

No

links

If the operation succeeded, links that the client can follow to take further action. Note that a successful ‘uninstall’ operation removes the DCAE service instance completely, so there are no possible further actions, and no links.

operationType

Yes

string

Type of operation being reported on. (‘install’ or ‘uninstall’)

requestId

Yes

string

A unique identifier assigned to the request. Useful for tracing a request through logs.

status

Yes

string

Status of the installation or uninstallation operation. Possible values include ‘processing’.

Links schema:

If the operation succeeded, links that the client can follow to take further action. Note that a successful ‘uninstall’ operation removes the DCAE service instance completely, so there are no possible further actions, and no links.

Name

Required

Type

Format

Properties

Description

self

No

string

Link used to retrieve information about the service.

uninstall

No

string

Link used to trigger an ‘uninstall’ operation for the service. (Use the DELETE method.)

DCAEPolicy Model Structure

policy object

Name

Required

Type

Format

Properties

Description

policy_body

Yes

DCAEPolicyBody

policy_id

Yes

string

unique identifier of policy regardless of its version

DCAEPolicyBody Model Structure

policy_body - the whole object received from policy-engine

Name

Required

Type

Format

Properties

Description

config

Yes

config

the policy-config - the config data provided by policy owner

policyName

Yes

string

unique policy name that contains the version and extension

policyVersion

Yes

string

stringified int that is autoincremented by policy-engine

Config schema:

the policy-config - the config data provided by policy owner

DCAEPolicyRequest Model Structure

request to update policies on DCAE components.

Name

Required

Type

Format

Properties

Description

catch_up

Yes

boolean

flag to indicate whether the request contains all the policies in PDP or not

errored_policies

No

whether policy-engine returned an error on the policy.

errored_scopes

No

array of string

on catchup - list of policy scope_prefix values on which the policy-engine experienced an error other than not-found data.

latest_policies

Yes

dictionary of (policy_id -> DCAEPolicy object).

removed_policies

Yes

whether policy was removed from policy-engine.

scope_prefixes

No

array of string

on catchup - list of all scope_prefixes used by the policy-handler to retrieve the policies from policy-engine.

Errored_policies schema:

whether policy-engine returned an error on the policy. dictionary of (policy_id -> true). In example: replace additionalProp1,2,3 with policy_id1,2,3 values

Map of {“key”:”boolean”}

Latest_policies schema:

dictionary of (policy_id -> DCAEPolicy object). In example: replace additionalProp1,2,3 with policy_id1,2,3 values

Map of {“key”:”DCAEPolicy”}

Removed_policies schema:

whether policy was removed from policy-engine. dictionary of (policy_id -> true). In example: replace additionalProp1,2,3 with policy_id1,2,3 values

Map of {“key”:”boolean”}

Inventory API

Description

DCAE Inventory is a web service that provides the following:

  1. Real-time data on all DCAE services and their components

  2. Comprehensive details on available DCAE service types

API name

Swagger YAML

Inventory

link

Contact Information

dcae@lists.openecomp.org

DEFAULT

GET /dcae-service-types
Description
Get a list of `DCAEServiceType` objects.
Parameters

Name

Located in

Required

Type

Format

Properties

Description

typeName

query

No

string

Filter by service type name

onlyLatest

query

No

boolean

{“default”: true}

If set to true, query returns just the latest versions of DCAE service types. If set to false, then all versions are returned. Default is true

onlyActive

query

No

boolean

{“default”: true}

If set to true, query returns only active DCAE service types. If set to false, then all DCAE service types are returned. Default is true

vnfType

query

No

string

Filter by associated vnf type. No wildcards, matches are explicit. This field is treated as case-insensitive.

serviceId

query

No

string

Filter by associated service id. Instances with service id null or empty are always returned.

serviceLocation

query

No

string

Filter by associated service location. Instances with service location null or empty are always returned.

asdcServiceId

query

No

string

Filter by associated asdc design service id. Setting this to NONE will return instances that have asdc service id set to null

asdcResourceId

query

No

string

Filter by associated asdc design resource id. Setting this to NONE will return instances that have asdc resource id set to null

offset

query

No

integer

int32

Query resultset offset used for pagination (zero-based)

Request
Responses
200

List of DCAEServiceType objects

Type: InlineResponse200

Example:

{
    "items": [
        {
            "asdcResourceId": "somestring",
            "asdcServiceId": "somestring",
            "asdcServiceURL": "somestring",
            "blueprintTemplate": "somestring",
            "created": "2015-01-01T15:00:00.000Z",
            "deactivated": "2015-01-01T15:00:00.000Z",
            "owner": "somestring",
            "selfLink": {
                "params": {},
                "rel": "somestring",
                "rels": [
                    "somestring",
                    "somestring"
                ],
                "title": "somestring",
                "type": "somestring",
                "uri": "somestring",
                "uriBuilder": {}
            },
            "serviceIds": [
                "somestring",
                "somestring"
            ],
            "serviceLocations": [
                "somestring",
                "somestring"
            ],
            "typeId": "somestring",
            "typeName": "somestring",
            "typeVersion": 1,
            "vnfTypes": [
                "somestring",
                "somestring"
            ]
        },
        {
            "asdcResourceId": "somestring",
            "asdcServiceId": "somestring",
            "asdcServiceURL": "somestring",
            "blueprintTemplate": "somestring",
            "created": "2015-01-01T15:00:00.000Z",
            "deactivated": "2015-01-01T15:00:00.000Z",
            "owner": "somestring",
            "selfLink": {
                "params": {},
                "rel": "somestring",
                "rels": [
                    "somestring",
                    "somestring"
                ],
                "title": "somestring",
                "type": "somestring",
                "uri": "somestring",
                "uriBuilder": {}
            },
            "serviceIds": [
                "somestring",
                "somestring"
            ],
            "serviceLocations": [
                "somestring",
                "somestring"
            ],
            "typeId": "somestring",
            "typeName": "somestring",
            "typeVersion": 1,
            "vnfTypes": [
                "somestring",
                "somestring"
            ]
        }
    ],
    "links": {
        "nextLink": {
            "params": {},
            "rel": "somestring",
            "rels": [
                "somestring",
                "somestring"
            ],
            "title": "somestring",
            "type": "somestring",
            "uri": "somestring",
            "uriBuilder": {}
        },
        "previousLink": {
            "params": {},
            "rel": "somestring",
            "rels": [
                "somestring",
                "somestring"
            ],
            "title": "somestring",
            "type": "somestring",
            "uri": "somestring",
            "uriBuilder": {}
        }
    },
    "totalCount": 1
}
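For example, a client might list the latest active service types as in this sketch (the inventory address is an assumption for a typical in-cluster deployment):

# Sketch: list the latest active DCAE service types from inventory.
import requests

INVENTORY_URL = "https://inventory:8080"  # assumed in-cluster address

resp = requests.get(f"{INVENTORY_URL}/dcae-service-types",
                    params={"onlyLatest": "true", "onlyActive": "true"},
                    verify=False, timeout=30)
resp.raise_for_status()
for item in resp.json()["items"]:
    print(item["typeId"], item["typeName"], item["typeVersion"])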
DELETE /dcae-service-types/{typeId}
Description
Deactivates existing `DCAEServiceType` instances
Parameters

Name

Located in

Required

Type

Format

Properties

Description

typeId

path

Yes

string

Request
Responses
200

DCAEServiceType has been deactivated

Type: ApiResponseMessage

Example:

{
    "code": 1,
    "message": "somestring",
    "type": "somestring"
}
404

DCAEServiceType not found

Type: ApiResponseMessage

Example:

{
    "code": 1,
    "message": "somestring",
    "type": "somestring"
}
410

DCAEServiceType already gone

Type: ApiResponseMessage

Example:

{
    "code": 1,
    "message": "somestring",
    "type": "somestring"
}
GET /dcae-service-types/{typeId}
Description
Get a `DCAEServiceType` object.
Parameters

Name

Located in

Required

Type

Format

Properties

Description

typeId

path

Yes

string

Request
Responses
200

Single DCAEServiceType object

Type: DCAEServiceType

Example:

{
    "asdcResourceId": "somestring",
    "asdcServiceId": "somestring",
    "asdcServiceURL": "somestring",
    "blueprintTemplate": "somestring",
    "created": "2015-01-01T15:00:00.000Z",
    "deactivated": "2015-01-01T15:00:00.000Z",
    "owner": "somestring",
    "selfLink": {
        "params": {},
        "rel": "somestring",
        "rels": [
            "somestring",
            "somestring"
        ],
        "title": "somestring",
        "type": "somestring",
        "uri": "somestring",
        "uriBuilder": {}
    },
    "serviceIds": [
        "somestring",
        "somestring"
    ],
    "serviceLocations": [
        "somestring",
        "somestring"
    ],
    "typeId": "somestring",
    "typeName": "somestring",
    "typeVersion": 1,
    "vnfTypes": [
        "somestring",
        "somestring"
    ]
}
404

Resource not found

Type: DCAEServiceType

Example:

{
    "asdcResourceId": "somestring",
    "asdcServiceId": "somestring",
    "asdcServiceURL": "somestring",
    "blueprintTemplate": "somestring",
    "created": "2015-01-01T15:00:00.000Z",
    "deactivated": "2015-01-01T15:00:00.000Z",
    "owner": "somestring",
    "selfLink": {
        "params": {},
        "rel": "somestring",
        "rels": [
            "somestring",
            "somestring"
        ],
        "title": "somestring",
        "type": "somestring",
        "uri": "somestring",
        "uriBuilder": {}
    },
    "serviceIds": [
        "somestring",
        "somestring"
    ],
    "serviceLocations": [
        "somestring",
        "somestring"
    ],
    "typeId": "somestring",
    "typeName": "somestring",
    "typeVersion": 1,
    "vnfTypes": [
        "somestring",
        "somestring"
    ]
}
POST /dcae-service-types
Description
Inserts a new `DCAEServiceType` or updates an existing instance. Updates are only allowed if there are no running DCAE services of the requested type.
Request
Body

Name

Required

Type

Format

Properties

Description

asdcResourceId

No

string

Id of vf/vnf instance this DCAE service type is associated with. Value source is from ASDC’s notification event’s field resourceInvariantUUID.

asdcServiceId

No

string

Id of service this DCAE service type is associated with. Value source is from ASDC’s notification event’s field serviceInvariantUUID.

asdcServiceURL

No

string

URL to the ASDC service model

blueprintTemplate

Yes

string

String representation of a Cloudify blueprint with unbound variables

owner

Yes

string

serviceIds

No

array of string

List of service ids used to associate with this DCAE service type. A DCAE service type with this property null or empty applies to every service id.

serviceLocations

No

array of string

List of service locations used to associate with this DCAE service type. A DCAE service type with this property null or empty applies to every service location.

typeName

Yes

string

Descriptive name for this DCAE service type

typeVersion

Yes

integer

int32

Version number for this DCAE service type

vnfTypes

No

array of string

{
    "asdcResourceId": "somestring",
    "asdcServiceId": "somestring",
    "asdcServiceURL": "somestring",
    "blueprintTemplate": "somestring",
    "owner": "somestring",
    "serviceIds": [
        "somestring",
        "somestring"
    ],
    "serviceLocations": [
        "somestring",
        "somestring"
    ],
    "typeName": "somestring",
    "typeVersion": 1,
    "vnfTypes": [
        "somestring",
        "somestring"
    ]
}
Responses
200

Single DCAEServiceType object.

Type: DCAEServiceType

Example:

{
    "asdcResourceId": "somestring",
    "asdcServiceId": "somestring",
    "asdcServiceURL": "somestring",
    "blueprintTemplate": "somestring",
    "created": "2015-01-01T15:00:00.000Z",
    "deactivated": "2015-01-01T15:00:00.000Z",
    "owner": "somestring",
    "selfLink": {
        "params": {},
        "rel": "somestring",
        "rels": [
            "somestring",
            "somestring"
        ],
        "title": "somestring",
        "type": "somestring",
        "uri": "somestring",
        "uriBuilder": {}
    },
    "serviceIds": [
        "somestring",
        "somestring"
    ],
    "serviceLocations": [
        "somestring",
        "somestring"
    ],
    "typeId": "somestring",
    "typeName": "somestring",
    "typeVersion": 1,
    "vnfTypes": [
        "somestring",
        "somestring"
    ]
}
400

Bad request provided.

Type: ApiResponseMessage

Example:

{
    "code": 1,
    "message": "somestring",
    "type": "somestring"
}
409

Failed to update because there are still DCAE services of the requested type running.

Type: ApiResponseMessage

Example:

{
    "code": 1,
    "message": "somestring",
    "type": "somestring"
}
GET /dcae-services
Description
Get a list of `DCAEService` objects.
Parameters

Name

Located in

Required

Type

Format

Properties

Description

typeId

query

No

string

DCAE service type name

vnfId

query

No

string

vnfType

query

No

string

Filter by associated vnf type. This field is treated as case-insensitive.

vnfLocation

query

No

string

componentType

query

No

string

Use to filter by a specific DCAE service component type

shareable

query

No

boolean

Use to filter by DCAE services that have shareable components or not

created

query

No

string

Use to filter by created time

offset

query

No

integer

int32

Query resultset offset used for pagination (zero-based)

Request
Responses
200

List of DCAEService objects

Type: InlineResponse2001

Example:

{
    "items": [
        {
            "components": [
                {
                    "componentId": "somestring",
                    "componentLink": {
                        "params": {},
                        "rel": "somestring",
                        "rels": [
                            "somestring",
                            "somestring"
                        ],
                        "title": "somestring",
                        "type": "somestring",
                        "uri": "somestring",
                        "uriBuilder": {}
                    },
                    "componentSource": "DCAEController",
                    "componentType": "somestring",
                    "created": "2015-01-01T15:00:00.000Z",
                    "location": "somestring",
                    "modified": "2015-01-01T15:00:00.000Z",
                    "shareable": 1,
                    "status": "somestring"
                },
                {
                    "componentId": "somestring",
                    "componentLink": {
                        "params": {},
                        "rel": "somestring",
                        "rels": [
                            "somestring",
                            "somestring"
                        ],
                        "title": "somestring",
                        "type": "somestring",
                        "uri": "somestring",
                        "uriBuilder": {}
                    },
                    "componentSource": "DCAEController",
                    "componentType": "somestring",
                    "created": "2015-01-01T15:00:00.000Z",
                    "location": "somestring",
                    "modified": "2015-01-01T15:00:00.000Z",
                    "shareable": 1,
                    "status": "somestring"
                }
            ],
            "created": "2015-01-01T15:00:00.000Z",
            "deploymentRef": "somestring",
            "modified": "2015-01-01T15:00:00.000Z",
            "selfLink": {
                "params": {},
                "rel": "somestring",
                "rels": [
                    "somestring",
                    "somestring"
                ],
                "title": "somestring",
                "type": "somestring",
                "uri": "somestring",
                "uriBuilder": {}
            },
            "serviceId": "somestring",
            "typeLink": {
                "params": {},
                "rel": "somestring",
                "rels": [
                    "somestring",
                    "somestring"
                ],
                "title": "somestring",
                "type": "somestring",
                "uri": "somestring",
                "uriBuilder": {}
            },
            "vnfId": "somestring",
            "vnfLink": {
                "params": {},
                "rel": "somestring",
                "rels": [
                    "somestring",
                    "somestring"
                ],
                "title": "somestring",
                "type": "somestring",
                "uri": "somestring",
                "uriBuilder": {}
            },
            "vnfLocation": "somestring",
            "vnfType": "somestring"
        },
        {
            "components": [
                {
                    "componentId": "somestring",
                    "componentLink": {
                        "params": {},
                        "rel": "somestring",
                        "rels": [
                            "somestring",
                            "somestring"
                        ],
                        "title": "somestring",
                        "type": "somestring",
                        "uri": "somestring",
                        "uriBuilder": {}
                    },
                    "componentSource": "DCAEController",
                    "componentType": "somestring",
                    "created": "2015-01-01T15:00:00.000Z",
                    "location": "somestring",
                    "modified": "2015-01-01T15:00:00.000Z",
                    "shareable": 1,
                    "status": "somestring"
                },
                {
                    "componentId": "somestring",
                    "componentLink": {
                        "params": {},
                        "rel": "somestring",
                        "rels": [
                            "somestring",
                            "somestring"
                        ],
                        "title": "somestring",
                        "type": "somestring",
                        "uri": "somestring",
                        "uriBuilder": {}
                    },
                    "componentSource": "DCAEController",
                    "componentType": "somestring",
                    "created": "2015-01-01T15:00:00.000Z",
                    "location": "somestring",
                    "modified": "2015-01-01T15:00:00.000Z",
                    "shareable": 1,
                    "status": "somestring"
                }
            ],
            "created": "2015-01-01T15:00:00.000Z",
            "deploymentRef": "somestring",
            "modified": "2015-01-01T15:00:00.000Z",
            "selfLink": {
                "params": {},
                "rel": "somestring",
                "rels": [
                    "somestring",
                    "somestring"
                ],
                "title": "somestring",
                "type": "somestring",
                "uri": "somestring",
                "uriBuilder": {}
            },
            "serviceId": "somestring",
            "typeLink": {
                "params": {},
                "rel": "somestring",
                "rels": [
                    "somestring",
                    "somestring"
                ],
                "title": "somestring",
                "type": "somestring",
                "uri": "somestring",
                "uriBuilder": {}
            },
            "vnfId": "somestring",
            "vnfLink": {
                "params": {},
                "rel": "somestring",
                "rels": [
                    "somestring",
                    "somestring"
                ],
                "title": "somestring",
                "type": "somestring",
                "uri": "somestring",
                "uriBuilder": {}
            },
            "vnfLocation": "somestring",
            "vnfType": "somestring"
        }
    ],
    "links": {
        "nextLink": {
            "params": {},
            "rel": "somestring",
            "rels": [
                "somestring",
                "somestring"
            ],
            "title": "somestring",
            "type": "somestring",
            "uri": "somestring",
            "uriBuilder": {}
        },
        "previousLink": {
            "params": {},
            "rel": "somestring",
            "rels": [
                "somestring",
                "somestring"
            ],
            "title": "somestring",
            "type": "somestring",
            "uri": "somestring",
            "uriBuilder": {}
        }
    },
    "totalCount": 1
}
502

Bad response from DCAE controller

Type: ApiResponseMessage

Example:

{
    "code": 1,
    "message": "somestring",
    "type": "somestring"
}
504

Failed to connect with DCAE controller

Type: ApiResponseMessage

Example:

{
    "code": 1,
    "message": "somestring",
    "type": "somestring"
}
GET /dcae-services-groupby/{propertyName}
Description
Get a list of unique values for the given `propertyName`
Parameters

Name

Located in

Required

Type

Format

Properties

Description

propertyName

path

Yes

string

Property to find unique values. Restricted to type, vnfType, vnfLocation

Request
Responses
200

List of unique property values

Type: DCAEServiceGroupByResults

Example:

{
    "propertyName": "somestring",
    "propertyValues": [
        {
            "count": 1,
            "dcaeServiceQueryLink": {
                "params": {},
                "rel": "somestring",
                "rels": [
                    "somestring",
                    "somestring"
                ],
                "title": "somestring",
                "type": "somestring",
                "uri": "somestring",
                "uriBuilder": {}
            },
            "propertyValue": "somestring"
        },
        {
            "count": 1,
            "dcaeServiceQueryLink": {
                "params": {},
                "rel": "somestring",
                "rels": [
                    "somestring",
                    "somestring"
                ],
                "title": "somestring",
                "type": "somestring",
                "uri": "somestring",
                "uriBuilder": {}
            },
            "propertyValue": "somestring"
        }
    ]
}
DELETE /dcae-services/{serviceId}
Description
Remove an existing `DCAEService` object.
Parameters

Name

Located in

Required

Type

Format

Properties

Description

serviceId

path

Yes

string

Request
Responses
200

DCAE service has been removed

404

Unknown DCAE service

Type: ApiResponseMessage

Example:

{
    "code": 1,
    "message": "somestring",
    "type": "somestring"
}
GET /dcae-services/{serviceId}
Description
Get a `DCAEService` object.
Parameters

Name

Located in

Required

Type

Format

Properties

Description

serviceId

path

Yes

string

Request
Responses
200

Single DCAEService object

Type: DCAEService

Example:

{
    "components": [
        {
            "componentId": "somestring",
            "componentLink": {
                "params": {},
                "rel": "somestring",
                "rels": [
                    "somestring",
                    "somestring"
                ],
                "title": "somestring",
                "type": "somestring",
                "uri": "somestring",
                "uriBuilder": {}
            },
            "componentSource": "DCAEController",
            "componentType": "somestring",
            "created": "2015-01-01T15:00:00.000Z",
            "location": "somestring",
            "modified": "2015-01-01T15:00:00.000Z",
            "shareable": 1,
            "status": "somestring"
        },
        {
            "componentId": "somestring",
            "componentLink": {
                "params": {},
                "rel": "somestring",
                "rels": [
                    "somestring",
                    "somestring"
                ],
                "title": "somestring",
                "type": "somestring",
                "uri": "somestring",
                "uriBuilder": {}
            },
            "componentSource": "DCAEController",
            "componentType": "somestring",
            "created": "2015-01-01T15:00:00.000Z",
            "location": "somestring",
            "modified": "2015-01-01T15:00:00.000Z",
            "shareable": 1,
            "status": "somestring"
        }
    ],
    "created": "2015-01-01T15:00:00.000Z",
    "deploymentRef": "somestring",
    "modified": "2015-01-01T15:00:00.000Z",
    "selfLink": {
        "params": {},
        "rel": "somestring",
        "rels": [
            "somestring",
            "somestring"
        ],
        "title": "somestring",
        "type": "somestring",
        "uri": "somestring",
        "uriBuilder": {}
    },
    "serviceId": "somestring",
    "typeLink": {
        "params": {},
        "rel": "somestring",
        "rels": [
            "somestring",
            "somestring"
        ],
        "title": "somestring",
        "type": "somestring",
        "uri": "somestring",
        "uriBuilder": {}
    },
    "vnfId": "somestring",
    "vnfLink": {
        "params": {},
        "rel": "somestring",
        "rels": [
            "somestring",
            "somestring"
        ],
        "title": "somestring",
        "type": "somestring",
        "uri": "somestring",
        "uriBuilder": {}
    },
    "vnfLocation": "somestring",
    "vnfType": "somestring"
}
404

DCAE service not found

Type: ApiResponseMessage

Example:

{
    "code": 1,
    "message": "somestring",
    "type": "somestring"
}
502

Bad response from DCAE controller

Type: ApiResponseMessage

Example:

{
    "code": 1,
    "message": "somestring",
    "type": "somestring"
}
504

Failed to connect with DCAE controller

Type: ApiResponseMessage

Example:

{
    "code": 1,
    "message": "somestring",
    "type": "somestring"
}
PUT /dcae-services/{serviceId}
Description
Put a new or update an existing `DCAEService` object.
Parameters

Name

Located in

Required

Type

Format

Properties

Description

serviceId

path

Yes

string

Request
Body

Name

Required

Type

Format

Properties

Description

components

Yes

array of DCAEServiceComponentRequest

List of DCAE service components that this service is composed of

deploymentRef

No

string

Reference to a Cloudify deployment

typeId

Yes

string

Id of the associated DCAE service type

vnfId

Yes

string

Id of the associated VNF that this service is monitoring

vnfLocation

Yes

string

Location identifier of the associated VNF that this service is monitoring

vnfType

Yes

string

The type of the associated VNF that this service is monitoring

{
    "components": [
        {
            "componentId": "somestring",
            "componentSource": "DCAEController",
            "componentType": "somestring",
            "shareable": 1
        },
        {
            "componentId": "somestring",
            "componentSource": "DCAEController",
            "componentType": "somestring",
            "shareable": 1
        }
    ],
    "deploymentRef": "somestring",
    "typeId": "somestring",
    "vnfId": "somestring",
    "vnfLocation": "somestring",
    "vnfType": "somestring"
}
Responses
200

Single DCAEService object

Type: DCAEService

Example:

{
    "components": [
        {
            "componentId": "somestring",
            "componentLink": {
                "params": {},
                "rel": "somestring",
                "rels": [
                    "somestring",
                    "somestring"
                ],
                "title": "somestring",
                "type": "somestring",
                "uri": "somestring",
                "uriBuilder": {}
            },
            "componentSource": "DCAEController",
            "componentType": "somestring",
            "created": "2015-01-01T15:00:00.000Z",
            "location": "somestring",
            "modified": "2015-01-01T15:00:00.000Z",
            "shareable": 1,
            "status": "somestring"
        },
        {
            "componentId": "somestring",
            "componentLink": {
                "params": {},
                "rel": "somestring",
                "rels": [
                    "somestring",
                    "somestring"
                ],
                "title": "somestring",
                "type": "somestring",
                "uri": "somestring",
                "uriBuilder": {}
            },
            "componentSource": "DCAEController",
            "componentType": "somestring",
            "created": "2015-01-01T15:00:00.000Z",
            "location": "somestring",
            "modified": "2015-01-01T15:00:00.000Z",
            "shareable": 1,
            "status": "somestring"
        }
    ],
    "created": "2015-01-01T15:00:00.000Z",
    "deploymentRef": "somestring",
    "modified": "2015-01-01T15:00:00.000Z",
    "selfLink": {
        "params": {},
        "rel": "somestring",
        "rels": [
            "somestring",
            "somestring"
        ],
        "title": "somestring",
        "type": "somestring",
        "uri": "somestring",
        "uriBuilder": {}
    },
    "serviceId": "somestring",
    "typeLink": {
        "params": {},
        "rel": "somestring",
        "rels": [
            "somestring",
            "somestring"
        ],
        "title": "somestring",
        "type": "somestring",
        "uri": "somestring",
        "uriBuilder": {}
    },
    "vnfId": "somestring",
    "vnfLink": {
        "params": {},
        "rel": "somestring",
        "rels": [
            "somestring",
            "somestring"
        ],
        "title": "somestring",
        "type": "somestring",
        "uri": "somestring",
        "uriBuilder": {}
    },
    "vnfLocation": "somestring",
    "vnfType": "somestring"
}
422

Bad request provided

Type: ApiResponseMessage

Example:

{
    "code": 1,
    "message": "somestring",
    "type": "somestring"
}

Data Structures

ApiResponseMessage Model Structure

Name

Required

Type

Format

Properties

Description

code

No

integer

int32

message

No

string

type

No

string

DCAEService Model Structure

Name

Required

Type

Format

Properties

Description

components

No

array of DCAEServiceComponent

created

No

string

date-time

deploymentRef

No

string

Reference to a Cloudify deployment

modified

No

string

date-time

selfLink

No

Link

Link.title is serviceId

serviceId

No

string

typeLink

No

Link

Link.title is typeId

vnfId

No

string

vnfLink

No

Link

Link.title is vnfId

vnfLocation

No

string

Location information of the associated VNF

vnfType

No

string

DCAEServiceComponent Model Structure

Name

Required

Type

Format

Properties

Description

componentId

Yes

string

The id format is unique to the source

componentLink

Yes

Link

Link to the underlying resource of this component

componentSource

Yes

string

{‘enum’: [‘DCAEController’, ‘DMaaPController’]}

Specifies the name of the underlying source service that is responsible for this component

componentType

Yes

string

created

Yes

string

date-time

location

No

string

Location information of the component

modified

Yes

string

date-time

shareable

Yes

integer

int32

Used to determine if this component can be shared amongst different DCAE services

status

No

string

DCAEServiceComponentRequest Model Structure

Name

Required

Type

Format

Properties

Description

componentId

Yes

string

The id format is unique to the source

componentSource

Yes

string

{‘enum’: [‘DCAEController’, ‘DMaaPController’]}

Specifies the name of the underlying source service that is responsible for this component

componentType

Yes

string

shareable

Yes

integer

int32

Used to determine if this component can be shared amongst different DCAE services

DCAEServiceGroupByResults Model Structure

Name

Required

Type

Format

Properties

Description

propertyName

No

string

Property name of DCAE service that the group by operation was performed on

propertyValues

No

array of DCAEServiceGroupByResultsPropertyValues

DCAEServiceGroupByResultsPropertyValues Model Structure

Name

Required

Type

Format

Properties

Description

count

No

integer

int32

dcaeServiceQueryLink

No

Link

Link.title is the DCAE service property value. Following this link will provide a list of DCAE services that all have this property value.

propertyValue

No

string

DCAEServiceRequest Model Structure

Name

Required

Type

Format

Properties

Description

components

Yes

array of DCAEServiceComponentRequest

List of DCAE service components that this service is composed of

deploymentRef

No

string

Reference to a Cloudify deployment

typeId

Yes

string

Id of the associated DCAE service type

vnfId

Yes

string

Id of the associated VNF that this service is monitoring

vnfLocation

Yes

string

Location identifier of the associated VNF that this service is monitoring

vnfType

Yes

string

The type of the associated VNF that this service is monitoring

DCAEServiceType Model Structure

Name | Required | Type | Format | Description
asdcResourceId | No | string | | Id of the vf/vnf instance this DCAE service type is associated with. The value comes from the resourceInvariantUUID field of ASDC's notification event.
asdcServiceId | No | string | | Id of the service this DCAE service type is associated with. The value comes from the serviceInvariantUUID field of ASDC's notification event.
asdcServiceURL | No | string | | URL to the ASDC service model
blueprintTemplate | Yes | string | | String representation of a Cloudify blueprint with unbound variables
created | Yes | string | date-time | Created timestamp for this DCAE service type in epoch time
deactivated | No | string | date-time | Deactivated timestamp for this DCAE service type in epoch time
owner | Yes | string | |
selfLink | Yes | Link | | Link to self where the Link.title is typeName
serviceIds | No | array of string | | List of service ids used to associate with this DCAE service type. If this property is null or empty, the type applies to every service id.
serviceLocations | No | array of string | | List of service locations used to associate with this DCAE service type. If this property is null or empty, the type applies to every service location.
typeId | Yes | string | | Unique identifier for this DCAE service type
typeName | Yes | string | | Descriptive name for this DCAE service type
typeVersion | Yes | integer | int32 | Version number for this DCAE service type
vnfTypes | No | array of string | |

DCAEServiceTypeRequest Model Structure

Name | Required | Type | Format | Description
asdcResourceId | No | string | | Id of the vf/vnf instance this DCAE service type is associated with. The value comes from the resourceInvariantUUID field of ASDC's notification event.
asdcServiceId | No | string | | Id of the service this DCAE service type is associated with. The value comes from the serviceInvariantUUID field of ASDC's notification event.
asdcServiceURL | No | string | | URL to the ASDC service model
blueprintTemplate | Yes | string | | String representation of a Cloudify blueprint with unbound variables
owner | Yes | string | |
serviceIds | No | array of string | | List of service ids used to associate with this DCAE service type. If this property is null or empty, the type applies to every service id.
serviceLocations | No | array of string | | List of service locations used to associate with this DCAE service type. If this property is null or empty, the type applies to every service location.
typeName | Yes | string | | Descriptive name for this DCAE service type
typeVersion | Yes | integer | int32 | Version number for this DCAE service type
vnfTypes | No | array of string | |

InlineResponse200 Model Structure

Name | Required | Type | Format
items | No | array of DCAEServiceType |
links | No | InlineResponse200Links |
totalCount | No | integer | int32

InlineResponse2001 Model Structure

Name | Required | Type | Format
items | No | array of DCAEService |
links | No | InlineResponse200Links |
totalCount | No | integer | int32

UriBuilder Model Structure

VES-Collector

Description

Virtual Event Streaming (VES) Collector is a RESTful collector for processing JSON messages. The collector verifies the source and validates the events against the VES schema before distributing them to DMaaP MR topics.

API name | Swagger JSON | Swagger YAML
VES Collector | link | link

Contact Information

onap-discuss@lists.onap.org

Security

VES Authentication Types

VES Specification

Response Code

Code | Reason Phrase | Description
202 | Accepted | The request has been accepted for processing
400 | Bad Request | Many possible reasons not specified by the other codes (e.g., missing required parameters or incorrect format). The response body may include a further exception code and text. HTTP 400 errors may be mapped to SVC0001 (general service error), SVC0002 (bad parameter), SVC2000 (general service error with details) or PO9003 (message content size exceeds the allowable limit).
401 | Unauthorized | Authentication failed or was not provided. HTTP 401 errors may be mapped to POL0001 (general policy error) or POL2000 (general policy error with details).
404 | Not Found | The server has not found anything matching the Request-URI. No indication is given of whether the condition is temporary or permanent.
405 | Method Not Allowed | A request was made of a resource using a request method not supported by that resource (e.g., using PUT on a REST resource that only supports POST).
500 | Internal Server Error | The server encountered an internal error or timed out; please retry (general catch-all server-side error). HTTP 500 errors may be mapped to SVC1000 (no server resources).

Sample Request and Response

Request Example

POST  /eventListener/v7 HTTP/1.1
Authorization: Basic QWxhZGRpbjpvcGVuIHNlc2FtZQ==
content-type: application/json
content-length: 12345
X-MinorVersion: 1

{
    "event": {
        "commonEventHeader": {
            "version": "4.1",
            "vesEventListenerVersion": "7.1.1",
            "domain": "fault",
            "eventName": "Fault_Vscf:Acs-Ericcson_PilotNumberPoolExhaustion",
            "eventId": "fault0000245",
            "sequence": 1,
            "priority": "High",
            "reportingEntityId": "cc305d54-75b4-431b-adb2-eb6b9e541234",
            "reportingEntityName": "ibcx0001vm002oam001",
            "sourceId": "de305d54-75b4-431b-adb2-eb6b9e546014",
            "sourceName": "scfx0001vm002cap001",
            "nfVendorName": "Ericsson",
            "nfNamingCode": "scfx",
            "nfcNamingCode": "ssc",
            "startEpochMicrosec": 1413378172000000,
            "lastEpochMicrosec": 1413378172000000,
            "timeZoneOffset": "UTC-05:30"
        },
        "faultFields": {
            "faultFieldsVersion": 4.0,
            "alarmCondition": "PilotNumberPoolExhaustion",
            "eventSourceType": "other",
            "specificProblem": "Calls cannot complete - pilot numbers are unavailable",
            "eventSeverity": "CRITICAL",
            "vfStatus": "Active",
            "alarmAdditionalInformation": {
                "PilotNumberPoolSize": "1000"
            }
        }
    }
}

Response Example

HTTP/1.1 202 Accepted
X-MinorVersion: 1
X-PatchVersion: 1
X-LatestVersion: 7.1.1
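
For completeness, here is a minimal client-side Python sketch that posts the sample fault event above to the event listener. The endpoint URL, credentials, and TLS handling are assumptions for a lab environment, not prescribed values.

import requests

VES_URL = "https://dcae-ves-collector:8443/eventListener/v7"   # assumed endpoint

event = {
    "event": {
        "commonEventHeader": {
            "version": "4.1",
            "vesEventListenerVersion": "7.1.1",
            "domain": "fault",
            "eventName": "Fault_Vscf:Acs-Ericcson_PilotNumberPoolExhaustion",
            "eventId": "fault0000245",
            "sequence": 1,
            "priority": "High",
            "reportingEntityName": "ibcx0001vm002oam001",
            "sourceName": "scfx0001vm002cap001",
            "startEpochMicrosec": 1413378172000000,
            "lastEpochMicrosec": 1413378172000000,
        },
        "faultFields": {
            "faultFieldsVersion": "4.0",
            "alarmCondition": "PilotNumberPoolExhaustion",
            "eventSourceType": "other",
            "specificProblem": "Calls cannot complete - pilot numbers are unavailable",
            "eventSeverity": "CRITICAL",
            "vfStatus": "Active",
        },
    }
}

resp = requests.post(
    VES_URL,
    json=event,
    auth=("sample_user", "sample_password"),  # only when authentication is enabled
    headers={"X-MinorVersion": "1"},
    verify=False,                             # demo only; use a proper CA bundle in practice
)
print(resp.status_code)                       # expect 202 Accepted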

HV-VES (High Volume VES)

Overview

Component description can be found under HV-VES Collector.

TCP Endpoint

HV-VES is exposed as a NodePort service on the Kubernetes cluster on port 30222/tcp. By default, as of the Frankfurt release, all TCP communications are secured using SSL/TLS. Plain, insecure TCP connections without socket data encryption can be enabled if needed (see SSL/TLS Authorization).

Without TLS, client authentication/authorization is not possible. Connections are stream-based (as opposed to request-based) and long-running.

Communication is wrapped with a thin Wire Transfer Protocol, which mainly provides delimitation.

-- Wire Transfer Protocol (binary, defined using ASN.1 notation)
-- Encoding: use "direct encoding" to the number of octets indicated in the comment [n], using network byte order.

WTP DEFINITIONS ::= BEGIN

-- Used to send data from the data provider
WtpData ::= SEQUENCE {
    magic           INTEGER (0..255),           -- [1] always 0xAA
    versionMajor    INTEGER (0..255),           -- [1] major interface version, forward incompatible with previous major version, current value: 1
    versionMinor    INTEGER (0..255),           -- [1] minor interface version, forward compatible with previous minor version, current value: 0
    reserved        OCTET STRING (SIZE (3)),    -- [3] reserved for future use (ignored, but use 0)
    payloadId       INTEGER (0..65535),         -- [2] payload type: 0x0000=undefined, 0x0001=ONAP VesEvent (protobuf)
    payloadLength   INTEGER (0..4294967295),    -- [4] payload length in octets
    payload         OCTET STRING                -- [length as per payloadLength]
}

END
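
As a concrete illustration of the framing described above, here is a short Python sketch that packs a WTP frame: a 12-octet header (1+1+1+3+2+4) in network byte order, followed by the payload. The payload bytes are a placeholder.

import struct

def wtp_frame(payload: bytes) -> bytes:
    # Header fields exactly as in the WTP definition above, network byte order.
    header = struct.pack(
        "!BBB3sHI",
        0xAA,              # magic
        1,                 # versionMajor (current value)
        0,                 # versionMinor (current value)
        b"\x00\x00\x00",   # reserved, use 0
        0x0001,            # payloadId: ONAP VesEvent (protobuf)
        len(payload),      # payloadLength in octets
    )
    return header + payload

frame = wtp_frame(b"<GPB-encoded VesEvent bytes>")   # placeholder payload
assert frame[0] == 0xAA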

Payload is binary-encoded, using Google Protocol Buffers (GPB) representation of the VES Event.

/*
 * ============LICENSE_START=======================================================
 * dcaegen2-collectors-veshv
 * ================================================================================
 * Copyright (C) 2018 NOKIA
 * ================================================================================
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *      http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 * ============LICENSE_END=========================================================
 */
syntax = "proto3";
package org.onap.ves;

message VesEvent                            // top-level message, currently the maximum event size supported by the HV-VES Collector is 1 MiB
{
    CommonEventHeader commonEventHeader=1;  // required

    bytes eventFields=2;                    // required, payload
        // this field contains a domain-specific GPB message
        // the field being opaque (bytes), the decoding of the payload occurs in a separate step
        // the name of the GPB message for domain XYZ is XyzFields
        // e.g. for domain==perf3gpp, the GPB message is Perf3gppFields
}

// VES CommonEventHeader adapted to GPB (Google Protocol Buffers)

message CommonEventHeader
{
    string version = 1;                     // required, "version of the gpb common event header", current value "1.0"
    string domain = 2;                      // required, "the eventing domain associated with the event", allowed values:
                                            // fault, heartbeat, measurement, mobileFlow, other, pnfRegistration, sipSignaling,
                                            // stateChange, syslog, thresholdCrossingAlert, voiceQuality, perf3gpp

    uint32 sequence = 3;                    // required, "ordering of events communicated by an event source instance or 0 if not needed"

    enum Priority
    {
        PRIORITY_NOT_PROVIDED = 0;
        HIGH = 1;
        MEDIUM = 2;
        NORMAL = 3;
        LOW = 4;
    }
    Priority priority = 4;                  // required, "processing priority"

    string eventId = 5;                     // required, "event key that is unique to the event source"
    string eventName = 6;                   // required, "unique event name"
    string eventType = 7;                   // "for example - guest05,  platform"

    uint64 lastEpochMicrosec = 8;           // required, "the latest unix time aka epoch time associated with the event from any component--as microseconds elapsed since 1 Jan 1970 not including leap seconds"
    uint64 startEpochMicrosec = 9;          // required, "the earliest unix time aka epoch time associated with the event from any component--as microseconds elapsed since 1 Jan 1970 not including leap seconds"

    string nfNamingCode = 10;               // "4 character network function type, aligned with vnf naming standards"
    string nfcNamingCode = 11;              // "3 character network function component type, aligned with vfc naming standards"
    string nfVendorName = 12;               // " Vendor Name providing the nf "

    bytes reportingEntityId = 13;           // "UUID identifying the entity reporting the event, for example an OAM VM; must be populated by the ATT enrichment process"
    string reportingEntityName = 14;        // required, "name of the entity reporting the event, for example, an EMS name; may be the same as sourceName; should match A&AI entry"
    bytes sourceId = 15;                    // "UUID identifying the entity experiencing the event issue; must be populated by the ATT enrichment process"
    string sourceName = 16;                 // required, "name of the entity experiencing the event issue; use A&AI entry"
    string timeZoneOffset = 17;             // "Offset to GMT to indicate local time zone for the device"
    string vesEventListenerVersion = 18;    // required, "Version of the VesEvent Listener", current value "7.2"

    reserved "InternalHeaderFields";        // "enrichment fields for internal VES Event Listener service use only, not supplied by event sources"
    reserved 100;
}

HV-VES makes routing decisions based on the content of the domain field or stndDefinedNamespace field in case of stndDefined events.

The PROTO file, which contains the VES CommonEventHeader, comes with a binary-type Payload (eventFields) parameter, where domain-specific data should be placed. Domain-specific data are encoded as well with GPB. A domain-specific PROTO file is required to decode the data.

API towards DMaaP

HV-VES Collector forwards incoming messages to a particular DMaaP Kafka topic based on the domain (or stndDefinedNamespace) and configuration. Every Kafka record is comprised of a key and a value. In case of HV-VES:

  • Kafka record key is a GPB-encoded CommonEventHeader.

  • Kafka record value is a GPB-encoded VesEvent (CommonEventHeader and domain-specific eventFields).

In both cases raw bytes might be extracted using org.apache.kafka.common.serialization.ByteArrayDeserializer. The resulting bytes might be further passed to parseFrom methods included in classes generated from GPB definitions. WTP is not used here - it is only used in communication between PNF/VNF and the collector.
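
A hedged consumer-side sketch in Python (the document's own example uses the Java ByteArrayDeserializer; this is an equivalent with the kafka-python library, which returns raw bytes by default). The topic name and the generated protobuf module ves_pb2 are assumptions; the module would come from running protoc on the GPB definitions above.

from kafka import KafkaConsumer
import ves_pb2  # hypothetical module generated from the GPB definitions

consumer = KafkaConsumer(
    "HV_VES_PERF3GPP",                                # assumed topic name
    bootstrap_servers="message-router-kafka:9092",
    # keys and values arrive as raw bytes by default
)

for record in consumer:
    header = ves_pb2.CommonEventHeader()
    header.ParseFromString(record.key)                # key: GPB-encoded CommonEventHeader
    event = ves_pb2.VesEvent()
    event.ParseFromString(record.value)               # value: full GPB-encoded VesEvent
    print(header.domain, header.eventId)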

By default, when deployed using Cloudify, HV-VES uses the routing defined in the k8s-hv-ves.yaml-template in the dcaegen2/platform/blueprints project. In case of Helm deployment, routing is defined in the values.yaml file of the HV-VES Helm chart.

Supported domains

Domains that are currently supported by HV-VES:

  • perf3gpp - basic domain to Kafka topic mapping

  • stndDefined - specific routing: when an event has this domain, the stndDefinedNamespace field value is mapped to a Kafka topic

For domain descriptions, see Domains supported by HV-VES

HV-VES behaviors

Connections with HV-VES are stream-based (as opposed to request-based) and long-running. In case of an interrupted or closed connection, the collector logs the event but does not try to reconnect to the client. Communication is wrapped with a thin Wire Transfer Protocol, which mainly provides delimitation. A Wire Transfer Protocol frame:

  • is dropped after decoding and validation; only the GPB content is used in further processing.

  • has to start with MARKER_BYTE, as defined in the protocol specification (see TCP Endpoint). If MARKER_BYTE is invalid, HV-VES disconnects from the client.

HV-VES decodes only the CommonEventHeader from the received GPB message. The collector does not decode or validate the rest of the GPB message and publishes it to the Kafka topic intact. The Kafka topic for publishing events with a specific domain can be configured through the Consul service as described in Run-Time configuration. In case of Kafka service unavailability, the collector drops the currently handled messages and disconnects the client.

Messages handling:

  • HV-VES Collector skips messages with unknown/invalid GPB CommonEventHeader format.

  • HV-VES Collector skips messages with unsupported domain. Domain is unsupported if there is no route for it in configuration (see Run-Time configuration).

  • HV-VES Collector skips messages with invalid Wire Frame format, unsupported WTP version or inconsistencies of data in the frame (other than invalid MARKER_BYTE).

  • HV-VES Collector interrupts the connection when it encounters a message whose GPB payload is too big. The default maximum size and ways to change it are described in Deployment.

Note

xNF (VNF/PNF) can split messages bigger than 1 MiB and set the sequence field in CommonEventHeader accordingly (see the sketch below). Messages smaller than 1 MiB are advised for GPB encoding/decoding efficiency.

  • Skipped messages (for any of the above reasons) might not leave any trace in HV-VES logs.
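
To illustrate the note above, a minimal sketch of splitting an oversized payload into chunks no larger than 1 MiB and numbering them for the sequence field. The chunking logic is generic Python; how each chunk is wrapped into a VesEvent is left to the producer.

MAX_CHUNK = 1024 * 1024  # 1 MiB

def split_payload(payload: bytes, max_chunk: int = MAX_CHUNK):
    # Yield (sequence, chunk) pairs; the producer would set
    # CommonEventHeader.sequence = sequence for each emitted event.
    for seq, offset in enumerate(range(0, len(payload), max_chunk)):
        yield seq, payload[offset:offset + max_chunk]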

PRH (PNF Registration Handler)

Date

2018-09-13

Overview

Physical Network Function Registration Handler is responsible for the registration of a PNF (Physical Network Function) to ONAP (Open Network Automation Platform) in a plug-and-play manner.

API name | Swagger JSON | Swagger YAML
PNF Registration Handler | link | link

heartbeat-controller

GET /heartbeat

Returns liveness of PRH service

  • Produces: [‘*/*’]

Responses

200 - PRH service is alive

401 - You are not authorized to view the resource

403 - Accessing the resource you were trying to reach is forbidden

404 - The resource you were trying to reach is not found

schedule-controller

GET /start

Start scheduling worker request

  • Produces: [‘*/*’]

Responses

200 - OK

401 - Unauthorized

403 - Forbidden

404 - Not Found

GET /stopPrh

Stop scheduling worker request

  • Produces: [‘*/*’]

Responses

200 - OK

401 - Unauthorized

403 - Forbidden

404 - Not Found

Introduction

PRH is delivered as one Docker container which hosts the application server, and can be started by docker-compose.

Functionality

[Figure: PRH processing algorithm (prhAlgo.png)]

Paths

GET /events/unauthenticated.VES_PNFREG_OUTPUT
Description

Reads PNF registration from DMaaP (Data Movement as a Platform)

Responses

HTTP Code | Description
200 | successful response

PATCH /aai/v12/network/pnfs/{pnf-name}
Description
Update AAI (Active and Available Inventory) PNF’s specific entries:
  • ipv4 to ipaddress-v4-oam

  • ipv6 to ipaddress-v6-oam

Parameters

Type | Name | Description | Schema
Path | pnf-name (required) | Name of the PNF. | string (text)
Body | patchbody | Required patch body. |

Responses

HTTP Code | Description
200 | successful response

POST /events/unauthenticated.PNF_READY
Description
Publish PNF_READY to DMaaP and set:
  • pnf-id to correlationID

  • ipv4 to ipaddress-v4-oam

  • ipv6 to ipaddress-v6-oam

Parameters

Type | Name | Description | Schema
Body | postbody (required) | Required post body. | hydratorappput

Responses

HTTP Code | Description
200 | successful response

Compiling PRH

The whole project (top level of the PRH directory) and each module (sub-module directories) can be compiled using the mvn clean install command.

Main API Endpoints

Running with dev-mode of PRH
  • Heartbeat: http://<container_address>:8100/heartbeat or https://<container_address>:8443/heartbeat

  • Start PRH: http://<container_address>:8100/start or https://<container_address>:8443/start

  • Stop PRH: http://<container_address>:8100/stopPrh or https://<container_address>:8443/stopPrh
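
A quick dev-mode sketch exercising the endpoints listed above; the container address is an assumption for your environment.

import requests

BASE = "http://localhost:8100"   # substitute the container address

print(requests.get(f"{BASE}/heartbeat").status_code)  # 200 when PRH is alive
requests.get(f"{BASE}/start")                          # start the scheduling worker
requests.get(f"{BASE}/stopPrh")                        # stop the scheduling worker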

Maven GroupId:

org.onap.dcaegen2.services

Maven Parent ArtifactId:

dcae-services

Maven Children Artifacts:

  1. prh-app-server: Pnf Registration Handler (PRH) server

  2. prh-aai-client: Contains implementation of AAI client

  3. prh-dmaap-client: Contains implementation of DmaaP client

  4. prh-commons: Common code shared across all PRH modules

DFC (DataFile Collector)

Date

2019-04-24

Overview

Component description can be found under DFC.

Offered APIs

API name | Swagger JSON
Datafile Collector API | link

3GPP PM Mapper

Overview

Component description can be found under 3GPP PM Mapper.

Paths

PUT /delivery
Description

Publish the PM Measurement file to PM Mapper.

Responses

HTTP Code | Description
200 | successful response

GET /healthcheck
Description

This is the health check endpoint. If it returns a 200, the server is alive. Any other response means the server is either down or there is no connection to PM Mapper.

Responses

HTTP Code | Description
200 | successful response

GET /reconfigure
Description

This is the reconfigure endpoint to fetch updated config information using config binding service.

Responses

HTTP Code | Description
200 | successful response

PM Subscription Handler

Overview

Component description can be found under PM Subscription Handler.

Offered APIs

API name | Swagger JSON | Swagger YAML
PM Subscription Handler Service | link | link

DCAE SDK

Overview

DCAE SDK contains utilities and clients that may be used for fetching configuration from CBS, consuming messages from DMaaP, etc. The SDK is written in Java.

Artifacts

Current version
<properties>
    <sdk.version>1.4.2</sdk.version>
</properties>
SDK Maven dependencies (modules)
<dependencies>
    <dependency>
      <groupId>org.onap.dcaegen2.services.sdk.rest.services</groupId>
      <artifactId>cbs-client</artifactId>
      <version>${sdk.version}</version>
    </dependency>

    <dependency>
      <groupId>org.onap.dcaegen2.services.sdk.rest.services</groupId>
      <artifactId>dmaap-client</artifactId>
      <version>${sdk.version}</version>
    </dependency>

    <dependency>
      <groupId>org.onap.dcaegen2.services.sdk.rest.services</groupId>
      <artifactId>http-client</artifactId>
      <version>${sdk.version}</version>
    </dependency>

    <dependency>
      <groupId>org.onap.dcaegen2.services.sdk.security.crypt</groupId>
      <artifactId>crypt-password</artifactId>
      <version>${sdk.version}</version>
    </dependency>

    <dependency>
      <groupId>org.onap.dcaegen2.services.sdk.security</groupId>
      <artifactId>ssl</artifactId>
      <version>${sdk.version}</version>
    </dependency>

    <dependency>
      <groupId>org.onap.dcaegen2.services.sdk</groupId>
      <artifactId>hvvesclient-producer-api</artifactId>
      <version>${sdk.version}</version>
    </dependency>

    <dependency>
      <groupId>org.onap.dcaegen2.services.sdk</groupId>
      <artifactId>hvvesclient-producer-impl</artifactId>
      <version>${sdk.version}</version>
      <scope>runtime</scope>
    </dependency>

    <dependency>
      <groupId>org.onap.dcaegen2.services.sdk</groupId>
      <artifactId>dcaegen2-services-sdk-services-external-schema-manager</artifactId>
      <version>${sdk.version}</version>
    </dependency>

    <!-- more to go -->
</dependencies>

Onboarding HTTP API (MOD)

Description

The Onboarding API is a sub-component under MOD that provides the following functions:

  1. API to add/update data-formats

  2. API to add/update components (component_Spec)

These APIs can be invoked by MS owners or by the Acumos adapter to upload artifacts into the MOD catalog.

API name | Swagger
Inventory | link

Base URL

http:///onboarding

ONBOARDING

Default namespace

GET /components/{component_id}
Description
Get a Component
Parameters

Name | Located in | Required | Type
component_id | path | Yes | string

Request
Headers
X-Fields: An optional fields mask to support partial object fetching - https://flask-restplus.readthedocs.io/en/stable/mask.html
Responses
200

Success

Type: component fields extended inline

Example:

{
    "componentType": "somestring",
    "componentUrl": "somestring",
    "description": "somestring",
    "id": "somestring",
    "modified": "2015-01-01T15:00:00.000Z",
    "name": "somestring",
    "owner": "somestring",
    "spec": {},
    "status": "somestring",
    "version": "somestring",
    "whenAdded": "2015-01-01T15:00:00.000Z"
}
404

Component not found in Catalog

Type: Error message

Example:

{
    "message": "somestring"
}
500

Internal Server Error

GET /components
Description
Get list of Components in the catalog
Parameters

Name | Located in | Required | Type | Description
name | query | No | string | Name of component to filter for
version | query | No | string | Version of component to filter for

Request
Headers
X-Fields: An optional fields mask to support partial object fetching - https://flask-restplus.readthedocs.io/en/stable/mask.html
Responses
200

Success

Type: Component List

Example:

{
    "components": [
        {
            "componentType": "somestring",
            "componentUrl": "somestring",
            "description": "somestring",
            "id": "somestring",
            "modified": "2015-01-01T15:00:00.000Z",
            "name": "somestring",
            "owner": "somestring",
            "status": "somestring",
            "version": "somestring",
            "whenAdded": "2015-01-01T15:00:00.000Z"
        },
        {
            "componentType": "somestring",
            "componentUrl": "somestring",
            "description": "somestring",
            "id": "somestring",
            "modified": "2015-01-01T15:00:00.000Z",
            "name": "somestring",
            "owner": "somestring",
            "status": "somestring",
            "version": "somestring",
            "whenAdded": "2015-01-01T15:00:00.000Z"
        }
    ]
}
500

Internal Server Error
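
A hedged sketch of querying the catalog for components, using the filter parameters and the optional X-Fields mask described above. The base URL and the exact mask syntax are assumptions for your deployment.

import requests

BASE = "http://mod-onboarding/onboarding"   # assumed host

resp = requests.get(
    f"{BASE}/components",
    params={"name": "ves-collector", "version": "1.0.0"},      # illustrative filters
    headers={"X-Fields": "components{name,version,status}"},   # partial object fetch
)
resp.raise_for_status()
for comp in resp.json().get("components", []):
    print(comp["name"], comp["version"], comp["status"])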

GET /dataformats/{dataformat_id}
Description
Get a Data Format
Parameters

Name | Located in | Required | Type
dataformat_id | path | Yes | string

Request
Headers
X-Fields: An optional fields mask to support partial object fetching - https://flask-restplus.readthedocs.io/en/stable/mask.html
Responses
200

Success

Type: dataformat fields extended inline

Example:

{
    "dataFormatUrl": "somestring",
    "description": "somestring",
    "id": "somestring",
    "modified": "2015-01-01T15:00:00.000Z",
    "name": "somestring",
    "owner": "somestring",
    "spec": {},
    "status": "somestring",
    "version": "somestring",
    "whenAdded": "2015-01-01T15:00:00.000Z"
}
404

Data Format not found in Catalog

Type: Error message

Example:

{
    "message": "somestring"
}
500

Internal Server Error

GET /dataformats
Description
Get list of Data Formats in the catalog
Request
Headers
X-Fields: An optional fields mask to support partial object fetching - https://flask-restplus.readthedocs.io/en/stable/mask.html
Responses
200

Success

Type: Data Format List

Example:

{
    "dataFormats": [
        {
            "dataFormatUrl": "somestring",
            "description": "somestring",
            "id": "somestring",
            "modified": "2015-01-01T15:00:00.000Z",
            "name": "somestring",
            "owner": "somestring",
            "status": "somestring",
            "version": "somestring",
            "whenAdded": "2015-01-01T15:00:00.000Z"
        },
        {
            "dataFormatUrl": "somestring",
            "description": "somestring",
            "id": "somestring",
            "modified": "2015-01-01T15:00:00.000Z",
            "name": "somestring",
            "owner": "somestring",
            "status": "somestring",
            "version": "somestring",
            "whenAdded": "2015-01-01T15:00:00.000Z"
        }
    ]
}
500

Internal Server Error

PATCH /components/{component_id}
Description
Update a Component's status in the Catalog
Parameters

Name | Located in | Required | Type
component_id | path | Yes | string

Request
Body

Name | Required | Type | Description
owner | Yes | string | User ID
status | Yes | string | enum: published, revoked. [published] is the only status change supported right now

{
    "owner": "somestring",
    "status": "published"
}
Responses
200

Success, Component status updated

400

Bad Request

Type: Error message

Example:

{
    "message": "somestring"
}
403

Forbidden Request

Type: Error message

Example:

{
    "message": "somestring"
}
404

Component not found in Catalog

Type: Error message

Example:

{
    "message": "somestring"
}
500

Internal Server Error

PATCH /dataformats/{dataformat_id}
Description
Update a Data Format's status in the Catalog
Parameters

Name | Located in | Required | Type
dataformat_id | path | Yes | string

Request
Body

Name | Required | Type | Description
owner | Yes | string | User ID
status | Yes | string | enum: published, revoked. [published] is the only status change supported right now

{
    "owner": "somestring",
    "status": "published"
}
Responses
200

Success, Data Format status updated

400

Bad Request

Type: Error message

Example:

{
    "message": "somestring"
}
403

Forbidden Request

Type: Error message

Example:

{
    "message": "somestring"
}
404

Data Format not found in Catalog

Type: Error message

Example:

{
    "message": "somestring"
}
500

Internal Server Error

POST /components
Description
Add a Component to the Catalog
Request
Headers
X-Fields: An optional fields mask to support partial object fetching - https://flask-restplus.readthedocs.io/en/stable/mask.html
Body

Name | Required | Type | Description
owner | No | string |
spec | No | spec | The Component Spec schema is here -> https://git.onap.org/dcaegen2/platform/cli/plain/component-json-schemas/component-specification/dcae-cli-v2/component-spec-schema.json

{
    "owner": "somestring",
    "spec": {}
}
Responses
200

Success

Type: Component post

Example:

{
    "componentUrl": "somestring"
}
400

Bad Request

Type: Error message

Example:

{
    "message": "somestring"
}
409

Component already exists

Type: Error message

Example:

{
    "message": "somestring"
}
500

Internal Server Error
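
A hedged sketch of onboarding a component spec via this endpoint. The spec file must conform to the component-spec schema linked above; the owner and base URL are assumptions.

import json
import requests

BASE = "http://mod-onboarding/onboarding"   # assumed host

with open("component-spec.json") as f:      # a spec validated against the schema
    spec = json.load(f)

resp = requests.post(f"{BASE}/components", json={"owner": "demo-user", "spec": spec})
if resp.status_code == 200:
    print("componentUrl:", resp.json()["componentUrl"])
elif resp.status_code == 409:
    print("component already exists")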

POST /dataformats
Description
Add a Data Format to the Catalog
Request
Headers
X-Fields: An optional fields mask to support partial object fetching - https://flask-restplus.readthedocs.io/en/stable/mask.html
Body

Name | Required | Type | Description
owner | No | string |
spec | No | spec | The Data Format Spec schema is here -> https://git.onap.org/dcaegen2/platform/cli/plain/component-json-schemas/data-format/dcae-cli-v1/data-format-schema.json

{
    "owner": "somestring",
    "spec": {}
}
Responses
200

Success

Type: Data Format post

Example:

{
    "dataFormatUrl": "somestring"
}
400

Bad Request

Type: Error message

Example:

{
    "message": "somestring"
}
409

Data Format already exists

Type: Error message

Example:

{
    "message": "somestring"
}
500

Internal Server Error

PUT /components/{component_id}
Description
Replace a Component Spec in the Catalog
Parameters

Name | Located in | Required | Type
component_id | path | Yes | string

Request
Body

Name | Required | Type | Description
owner | No | string |
spec | No | spec | The Component Spec schema is here -> https://git.onap.org/dcaegen2/platform/cli/plain/component-json-schemas/component-specification/dcae-cli-v2/component-spec-schema.json

{
    "owner": "somestring",
    "spec": {}
}
Responses
200

Success, Component replaced

400

Bad Request

Type: Error message

Example:

{
    "message": "somestring"
}
404

Component not found in Catalog

Type: Error message

Example:

{
    "message": "somestring"
}
500

Internal Server Error

PUT /dataformats/{dataformat_id}
Description
Replace a Data Format Spec in the Catalog
Parameters

Name | Located in | Required | Type
dataformat_id | path | Yes | string

Request
Body

Name | Required | Type | Description
owner | No | string |
spec | No | spec | The Data Format Spec schema is here -> https://git.onap.org/dcaegen2/platform/cli/plain/component-json-schemas/data-format/dcae-cli-v1/data-format-schema.json

{
    "owner": "somestring",
    "spec": {}
}
Responses
200

Success, Data Format added

400

Bad Request

Type: Error message

Example:

{
    "message": "somestring"
}
404

Data Format not found in Catalog

Type: Error message

Example:

{
    "message": "somestring"
}
500

Internal Server Error

Data Structures

Component List Model Structure

Name | Required | Type
components | No | array of component fields

Component Spec Model Structure

Name | Required | Type | Description
owner | No | string |
spec | No | spec | The Component Spec schema is here -> https://git.onap.org/dcaegen2/platform/cli/plain/component-json-schemas/component-specification/dcae-cli-v2/component-spec-schema.json

Component post Model Structure

Name | Required | Type | Description
componentUrl | Yes | string | URL to the Component Specification

Data Format List Model Structure

Name | Required | Type
dataFormats | No | array of dataformat fields

Data Format Spec Model Structure

Name | Required | Type | Description
owner | No | string |
spec | No | spec | The Data Format Spec schema is here -> https://git.onap.org/dcaegen2/platform/cli/plain/component-json-schemas/data-format/dcae-cli-v1/data-format-schema.json

Data Format post Model Structure

Name | Required | Type | Description
dataFormatUrl | Yes | string | URL to the Data Format Specification

Error message Model Structure

Name | Required | Type | Description
message | No | string | Details about the unsuccessful API request

Patch Spec Model Structure

Name | Required | Type | Description
owner | Yes | string | User ID
status | Yes | string | enum: published, revoked. [published] is the only status change supported right now

component fields Model Structure

Name | Required | Type | Format | Description
componentType | Yes | string | | only 'docker'
componentUrl | Yes | string | | URL to the Component Specification
description | Yes | string | | Description of the component
id | Yes | string | | ID of the component
modified | Yes | string | date-time | When the component was last modified
name | Yes | string | | Name of the component
owner | Yes | string | | ID of who added the component
status | Yes | string | | Status of the component
version | Yes | string | | Version of the component
whenAdded | Yes | string | date-time | When the component was added to the Catalog

component fields by id Model Structure

component fields extended inline. Inline schema:

Name | Required | Type | Format | Description
componentType | Yes | string | | only 'docker'
componentUrl | Yes | string | | URL to the Component Specification
description | Yes | string | | Description of the component
id | Yes | string | | ID of the component
modified | Yes | string | date-time | When the component was last modified
name | Yes | string | | Name of the component
owner | Yes | string | | ID of who added the component
spec | Yes | spec | | The Component Specification (json)
status | Yes | string | | Status of the component
version | Yes | string | | Version of the component
whenAdded | Yes | string | date-time | When the component was added to the Catalog

dataformat fields Model Structure

Name | Required | Type | Format | Description
dataFormatUrl | Yes | string | | URL to the Data Format Specification
description | Yes | string | | Description of the data format
id | Yes | string | | ID of the data format
modified | Yes | string | date-time | When the data format was last modified
name | Yes | string | | Name of the data format
owner | Yes | string | | ID of who added the data format
status | Yes | string | | Status of the data format
version | Yes | string | | Version of the data format
whenAdded | Yes | string | date-time | When the data format was added to the Catalog

dataformat fields by id Model Structure

dataformat fields extended inline. Inline schema:

Name | Required | Type | Format | Description
dataFormatUrl | Yes | string | | URL to the Data Format Specification
description | Yes | string | | Description of the data format
id | Yes | string | | ID of the data format
modified | Yes | string | date-time | When the data format was last modified
name | Yes | string | | Name of the data format
owner | Yes | string | | ID of who added the data format
spec | Yes | spec | | The Data Format Specification (json)
status | Yes | string | | Status of the data format
version | Yes | string | | Version of the data format
whenAdded | Yes | string | date-time | When the data format was added to the Catalog

DES (DataLake Extraction Service)

Date

2020-11-11

Overview

Component description is included in DES.

Offered APIs

API name | Swagger JSON
Datafile Collector API | link

Consumed APIs


DCAEGEN2 components make the following API calls into other ONAP components.

Building DCAE

Description

DCAE has multiple code repositories, written in several different languages. All DCAE projects are built in a similar fashion, as Maven projects following the Maven framework. Although many DCAE projects are not written in Java, adopting the Maven framework helps include DCAE projects in the overall ONAP build methodology and CICD process.

All DCAE projects use the ONAP oparent project POM as an ancestor. That is, DCAE projects inherit all parameters defined in the oparent project, which include many ONAP-wide configuration parameters such as the locations of various artifact repos.

A number of DCAE projects are not written in Java. For these projects we use the CodeHaus Maven Execution plugin to trigger a Bash script at various stages of the Maven lifecycle. The script is mvn-phase-script.sh, located at the root of each non-Java DCAE project. It is in this script that the actual build operation is performed at the different Maven phases. For example, for a Python project, mvn test will actually trigger a call to tox to conduct the project's unit tests.

Below is a list of the repositories and their sub-modules, and the language they are written in.

  • dcaegen2
    • docs (rst)
    • platformdoc (mkdoc)

  • dcaegen2.analytics

  • dcaegen2.analytics.tca-gen2
    • dcae-analytics (Java)
    • eelf-logger (Java)

  • dcaegen2.collectors
    • dcaegen2.collectors.snmptrap (Python)
    • dcaegen2.collectors.ves (Java)
    • dcaegen2.collectors.hv-ves (Kotlin)
    • dcaegen2.collectors.datafile (Java)
    • dcaegen2.collectors.restconf (Java)

  • dcaegen2.services
    • dcaegen2.services.heartbeat (Python)
    • dcaegen2.services.prh (Java)
    • dcaegen2.services.bbs-eventprocessor (Java)
    • dcaegen2.services.pm-mapper (Java)
    • dcaegen2.services.ves-mapper (Java)
    • dcaegen2.services.son-handler (Java)
    • dcaegen2.services.kpi-ms (Java)
    • dcaegen2.services.pmsh (Python)
    • dcaegen2.services.datalake-handler (Java)

  • dcaegen2.deployments
    • scripts (bash, python)
    • tls-init-container (bash)
    • k8s-bootstrap-container (bash)
    • healthcheck-container (Node.js)
    • tca-cdap-container (bash)
    • multisite-init-container (python)
    • dcae-remote-site (helm chart)

  • dcaegen2.platform

  • dcaegen2.platform.blueprints
    • blueprints (yaml)
    • input-templates (yaml)

  • dcaegen2.platform.cli (Python)
    • component-json-schemas (yaml)
    • dcae-cli (Python)

  • dcaegen2.platform.configbinding (Python)

  • dcaegen2.platform.deployment-handler (NodeJS)

  • dcaegen2.platform.inventory-api (Java)

  • dcaegen2.platform.plugins
    • dcae-policy (Python)
    • relationships (Python)
    • k8splugin (Python)

  • dcaegen2.platform.policy-handler (Python)

  • dcaegen2.platform.servicechange-handler (Clojure)

  • dcaegen2.platform.ves-openapi-manager (Java)

  • dcaegen2.utils
    • onap-dcae-cbs-docker-client (Python)
    • onap-dcae-dcaepolicy-lib (Python)
    • python-discovery-client (Python)
    • python-dockering (Python)
    • scripts (bash)

Environment

Building is conducted in a Linux environment that has the basic build tools such as JDK 8, Maven 3, Python 2.7 and 3.6, the Docker engine, etc.

Steps

Because of the uniform adoption of the Maven framework, each project can be built by running the standard Maven build commands: mvn clean, install, deploy, etc. For projects with submodules, the pom file in the project root will descend into the submodules and complete the submodule builds.

Artifacts

Building DCAE projects produces four different kinds of artifacts: Java jar files, raw file artifacts (including yaml files, scripts, wagon packages, etc.), PyPI packages, and Docker container images.

DCAE Deployment (Installation)

DCAE Deployment (using Helm and Cloudify)

This document describes the details of the Helm chart based deployment process for ONAP and how DCAE is deployed through this process.

Deployment Overview

ONAP deployments are done on Kubernetes through OOM/Helm charts. Kubernetes is a container orchestration technology that organizes containers into composites of various patterns for easy deployment, management, and scaling. ONAP uses Kubernetes as the foundation for fulfilling its platform maturity promises.

ONAP manages Kubernetes specifications using Helm charts (in OOM project), under which all Kubernetes yaml-formatted resource specifications and additional files are organized into a hierarchy of charts, sub-charts, and resources. These yaml files are further augmented with Helm’s templating, which makes dependencies and cross-references of parameters and parameter derivatives among resources manageable for a large and complex Kubernetes system such as ONAP.

At deployment time, with a single helm deploy command, Helm resolves all the templates and compiles the chart hierarchy into Kubernetes resource definitions, and invokes Kubernetes deployment operations for all the resources.

All ONAP Helm charts are organized under the kubernetes directory of the OOM project, where roughly each ONAP component occupies a subdirectory. DCAE platform components are deployed using Helm charts under the dcaegen2 directory.

With the DCAE transformation to Helm in Istanbul, all DCAE components support both Helm and Cloudify/blueprint deployments. Charts for individual MS are available under the dcaegen2-services directory in the OOM project (https://git.onap.org/oom/tree/kubernetes/dcaegen2-services/components). With ONAP deployment, four DCAE services (HV-VES collector, VES collector, PNF Registration Handler, and the TCA (Gen2) analytics service) are bootstrapped via Helm charts.

Other DCAE services are deployed on demand, after ONAP/DCAE installation, through Cloudify blueprints or Helm charts. For on-demand Helm charts, refer to the steps described in the Helm install/upgrade section below. Operators can also deploy other MS required for their use cases via Cloudify, as described in On-demand MS Installation.

DCAE Chart Organization

Following Helm conventions, the DCAE Helm chart directory (oom/kubernetes/dcaegen2) consists of the following files and subdirectories:

  • Chart.yaml: metadata.

  • requirements.yaml: dependency charts.

  • values.yaml: values for Helm templating engine to expand templates.

  • resources: subdirectory for additional resource definitions such as configuration, scripts, etc.

  • Makefile: make file to build DCAE charts

  • components: subdirectory for DCAE sub-charts.

The dcaegen2 chart has the following sub-charts:

  • dcae-bootstrap: deploys the DCAE bootstrap service that performs some DCAE initialization and deploys additional DCAE components.

  • dcae-cloudify-manager: deploys the DCAE Cloudify Manager instance.

  • dcae-config-binding-service: deploys the DCAE config binding service.

  • dcae-deployment-handler: deploys the DCAE deployment handler service.

  • dcae-healthcheck: deploys the DCAE healthcheck service that provides an API to check the health of all DCAE components.

  • dcae-policy-handler: deploys the DCAE policy handler service.

  • dcae-redis: deploys the DCAE Redis cluster.

  • dcae-dashboard: deploys the DCAE Dashboard for managing DCAE microservices deployments

  • dcae-servicechange-handler: deploys the DCAE service change handler service.

  • dcae-inventory-api: deploys the DCAE inventory API service.

  • dcae-ves-openapi-manager: deploys the VES OpenAPI Manager, which validates VES_EVENT type artifacts from distributed services.

The dcaegen2-services chart has the following sub-charts:

  • dcae-datafile-collector: deploys the DCAE DataFile Collector service.

  • dcae-hv-ves-collector: deploys the DCAE High-Volume VES collector service.

  • dcae-ms-healthcheck: deploys a health check component that tests the health of the 4 DCAE services deployed via Helm.

  • dcae-pm-mapper: deploys the DCAE PM-Mapper service.

  • dcae-prh: deploys the DCAE PNF Registration Handler service.

  • dcae-tcagen2: deploys the DCAE TCA analytics service.

  • dcae-ves-collector: deploys the DCAE VES collector service.

  • dcae-bbs-eventprocessor-ms: deploys the DCAE BBS Eventprocessor service.

  • dcae-datalake-admin-ui: deploys the Datalake Admin UI service.

  • dcae-datalake-des: deploys the Datalake Data Extraction service.

  • dcae-datalake-feeder: deploys the Datalake Feeder service.

  • dcae-heartbeat: deploys the DCAE Heartbeat microservice.

  • dcae-kpi-ms: deploys the DCAE KPI computation microservice.

  • dcae-pmsh: deploys the DCAE PM Subscription Handler service.

  • dcae-restconf-collector: deploys the DCAE RESTConf collector service.

  • dcae-slice-analysis-ms: deploys the DCAE Slice Analysis service.

  • dcae-snmptrap-collector: deploys the DCAE SNMPTRAP collector service.

  • dcae-son-handler: deploys the DCAE SON-Handler microservice.

  • dcae-ves-mapper: deploys the DCAE VES Mapper microservice.

The dcaegen2-services sub-charts depend on a set of common templates, found under the common subdirectory under dcaegen2-services.

Information about using the common templates to deploy a microservice can be found in Using Helm to deploy DCAE Microservices.

DCAE Deployment

At ONAP deployment time, when the helm deploy command is executed, the DCAE resources defined within the "dcaegen2" subcharts above are deployed, along with a subset of DCAE microservices (based on the override file configuration defined in values.yaml).

These include:

  • DCAE bootstrap service

  • DCAE healthcheck service

  • DCAE platform components:

    • Cloudify Manager

    • Config binding service

    • Deployment handler

    • Policy handler

    • Service change handler

    • Inventory API service

    • Inventory postgres database service (launched as a dependency of the inventory API service)

    • DCAE postgres database service (launched as a dependency of the bootstrap service)

    • DCAE Mongo database service (launched as a dependency of the bootstrap service)

    • VES OpenAPI Manager

  • DCAE Service components:

    • VES Collector

    • HV-VES Collector

    • PNF-Registration Handler Service

    • Threshold Crossing Analysis (TCA-gen2)

Some of the DCAE subcharts include an initContainer that checks whether the other services they need in order to run have become ready. The installation of these subcharts will pause until the needed services are available.

In addition, DCAE operations depend on a Consul server cluster. For ONAP OOM deployment, the Consul cluster is provided as a shared resource. Its charts are defined under the oom/kubernetes/consul directory, not as part of the DCAE chart hierarchy.

With the Istanbul release, DCAE bootstrapped microservice deployments are managed completely under Helm. The Cloudify bootstrap container preloads the microservice blueprints into DCAE Inventory, thereby making them available for on-demand deployment (triggered from CLAMP or external projects).

The dcae-bootstrap service has a number of prerequisites because the subsequently deployed DCAE components depend on a number of resources having entered their normal operation state. The DCAE bootstrap job will not start before these resources are ready. They are:

  • dcae-cloudify-manager

  • consul-server

  • msb-discovery

  • kube2msb

  • dcae-config-binding-service

  • dcae-db

  • dcae-mongodb

  • dcae-inventory-api

Additionally, the tls-init-container invoked during component deployment relies on AAF to generate the required certificates; hence AAF must be enabled in the OOM deployment configuration.

DCAE Configuration

Deployment-time configuration of DCAE components is defined in several places.

  • Helm Chart templates:
    • Helm/Kubernetes template files can contain static values for configuration parameters;

  • Helm Chart resources:
    • Helm/Kubernetes resources files can contain static values for configuration parameters;

  • Helm values.yaml files:
    • The values.yaml files supply the values that the Helm templating engine uses to expand any templates defined in Helm templates;

    • In a Helm chart hierarchy, values defined in values.yaml files at a higher level supersede values defined in values.yaml files at a lower level;

    • Values supplied on the Helm command line supersede values defined in any values.yaml files.

In addition, for DCAE components deployed through Cloudify Manager blueprints, their configuration parameters are defined in the following places:

  • The blueprint files can contain static values for configuration parameters;
    • The blueprint files are defined under the blueprints directory of the dcaegen2/platform/blueprints repo, named with “k8s” prefix.

  • The blueprint files can specify input parameters and the values of these parameters will be used for configuring parameters in Blueprints. The values for these input parameters can be supplied in several ways as listed below in the order of precedence (low to high):
    • The blueprint files can define default values for the input parameters;

    • The blueprint input files can contain static values for input parameters of blueprints. These input files are provided as config resources under the dcae-bootstrap chart;

    • The blueprint input files may contain Helm templates, which are resolved into actual deployment time values following the rules for Helm values.

Now we walk through an example of how to configure the Docker image for the DCAE VESCollector, which is deployed by Cloudify Manager.

(Note: Beginning with the Istanbul release, VESCollector is no longer deployed using Cloudify Manager during bootstrap. However, the example is still useful for understanding how to deploy other components using a Cloudify blueprint.)

In the k8s-ves.yaml blueprint, the Docker image to use is defined as an input parameter with a default value:

tag_version:
  type: string
  default: "nexus3.onap.org:10001/onap/org.onap.dcaegen2.collectors.ves.vescollector:1.5.4"

In the corresponding input file, https://git.onap.org/oom/tree/kubernetes/dcaegen2/components/dcae-bootstrap/resources/inputs/k8s-ves-inputs-tls.yaml, tag_version is defined again, this time as a Helm template that references common.repository and componentImages.ves.

Thus, when common.repository and componentImages.ves are defined in the values.yaml files, their values will be plugged in here and the resulting tag_version value will be passed to the blueprint as the Docker image tag to use instead of the default value in the blueprint.

The componentImages.ves value is provided in the oom/kubernetes/dcaegen2/charts/dcae-bootstrap/values.yaml file:

componentImages:
  ves: onap/org.onap.dcaegen2.collectors.ves.vescollector:1.5.4

The final result is that when DCAE bootstrap calls Cloudify Manager to deploy the DCAE VES collector, the 1.5.4 image will be deployed.

On-demand deployment/upgrade through Helm

Under the DCAE transformation to Helm, all DCAE components have been delivered as Helm charts under the OOM repository (https://git.onap.org/oom/tree/kubernetes/dcaegen2-services).

Blueprint deployment is also available to support regression use cases; Istanbul is the final release in which Cloudify blueprints for components/microservices are supported.

All DCAE component charts follow the standard Helm structure. Each microservice chart has a predefined configuration, defined under applicationConfig, which can be modified or overridden at deployment time.

Using Helm, any DCAE microservice can be deployed, upgraded, or uninstalled on demand.

Pre-Install

Note

This step is only required when the helm install is done under a different release name/prefix from the rest of the ONAP deployment.

With the Istanbul release, the OOM team included support for ServiceAccounts in ONAP deployment to limit pod access to the API server.

The following packages have been added under oom/common to support pre-provisioning of cluster roles and ServiceAccount management.

When deployed, these charts create the ServiceAccount and Role (based on overrides) and the required RoleBinding (to associate the ServiceAccount with a Role).

ONAP deployment by default includes the required provisioning of roles under the release name (such as "dev") under which ONAP is deployed. For subsequent Helm installations under the same release name prefix (i.e., dev-), no further action is required.

When a Helm install is required under a different release name prefix, execute the following command prior to running helm install.

helm install <DEPLOYMENT_PREFIX>-role-wrapper local/roles-wrapper -n <namespace>

This is followed by installation of the required service/chart:

helm -n <namespace> install <DEPLOYMENT_PREFIX>-dcaegen2-services oom/kubernetes/dcaegen2-services

Installation

Review and update a local copy of the dcaegen2-services values.yaml (oom/kubernetes/dcaegen2-services/values.yaml) to ensure the component is enabled for deployment (or provide a command-line override).

helm -n <namespace> install <DEPLOYMENT_PREFIX>-dcaegen2-services oom/kubernetes/dcaegen2-services

A service component can also be installed individually from oom/kubernetes/dcaegen2-services/components/<dcae-ms-chart>:

helm -n onap install dev-dcaegen2-services-ves-mapper oom/kubernetes/dcaegen2-services/components/dcae-ves-mapper -f values.yaml

Using the -f flag, an override file can be specified, which takes precedence over the default configuration. When no command-line override is provided, the default values.yaml provided in the chart directory is used.

Upgrade

Helm supports upgrading charts that are already deployed; using upgrade, a component deployment can be modified:

helm -n <namespace> upgrade <DEPLOYMENT_PREFIX>-dcaegen2-services --reuse-values --values <updated values.yaml path> <dcaegen2-services helm charts path>

For minor configuration updates, Helm also supports new values provided inline to the upgrade command. Example below:

helm -n onap upgrade dev-dcaegen2-services oom/kubernetes/dcaegen2-services --reuse-values --set dcae-ves-collector.applicationConfig.auth.method="noAuth"

Uninstall

Components can be uninstalled using the delete command.

helm -n <namespace> delete <DEPLOYMENT_PREFIX>-dcaegen2-services

DCAE Service Endpoints

Below is a table of default hostnames and ports for DCAE component service endpoints in a Kubernetes deployment:

Component | Cluster Internal (host:port) | Cluster External (svc_name:port)
VES | dcae-ves-collector:8443 | dcae-ves-collector.onap:30417
HV-VES | dcae-hv-ves-collector:6061 | dcae-hv-ves-collector.onap:30222
TCA-Gen2 | dcae-tcagen2:9091 | NA
PRH | dcae-prh:8100 | NA
Policy Handler | policy-handler:25577 | NA
Deployment Handler | deployment-handler:8443 | NA
Inventory | inventory:8080 | NA
Config binding | config-binding-service:10000/10001 | NA
DCAE Healthcheck | dcae-healthcheck:80 | NA
DCAE MS Healthcheck | dcae-ms-healthcheck:8080 | NA
Cloudify Manager | dcae-cloudify-manager:80 | NA
DCAE Dashboard | dashboard:8443 | dashboard:30418
DCAE mongo | dcae-mongo-read:27017 | NA
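
For example, a component can fetch its configuration from the config binding service endpoint listed above. A hedged Python sketch, in which the service-component name is illustrative; the /service_component/<name> path follows the CBS API.

import requests

CBS = "http://config-binding-service:10000"
name = "dcae-ves-collector"   # ServiceComponentName of the deployed component

config = requests.get(f"{CBS}/service_component/{name}").json()
print(config)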

In addition, a number of ONAP service endpoints that are used by DCAE components are listed as follows for reference by DCAE developers and testers:

Component | Cluster Internal (host:port) | Cluster External (svc_name:port)
Consul Server | consul-server-ui:8500 | NA
Robot | robot:88 | robot:30209 TCP
Message router | message-router:3904 | NA
Message router | message-router:3905 | message-router-external:30226
Message router Kafka | message-router-kafka:9092 | NA
MSB Discovery | msb-discovery:10081 | msb-discovery:30281
Logging | log-kibana:5601 | log-kibana:30253
AAI | aai:8080 | aai:30232
AAI | aai:8443 | aai:30233

Uninstalling DCAE

All of the DCAE components deployed using the OOM Helm charts will be deleted by the helm undeploy command. This command can be used to uninstall all of ONAP by undeploying the top-level Helm release that was created by the helm deploy command. The command can also be used to uninstall just DCAE, by having the command undeploy the top_level_release_name-dcaegen2 Helm sub-release.

Helm will undeploy only the components that were originally deployed using Helm charts. Components deployed by Cloudify Manager are not deleted by the Helm operations.

When uninstalling all of ONAP, it is sufficient to delete the namespace used for the deployment (typically onap) after running the undeploy operation. Deleting the namespace will get rid of any remaining resources in the namespace, including the components deployed by Cloudify Manager.

When uninstalling DCAE alone, deleting the namespace would delete the rest of ONAP as well. To delete DCAE alone, and to make sure all of the DCAE components deployed by Cloudify Manager are uninstalled:

  • Find the Cloudify Manager pod identifier, using a command like:

    kubectl -n onap get pods | grep dcae-cloudify-manager

  • Execute the DCAE cleanup script on the Cloudify Manager pod, using a command like:

    kubectl -n onap exec cloudify-manager-pod-id -- /scripts/dcae-cleanup.sh

  • Finally, run helm undeploy against the DCAE Helm subrelease (an illustrative example follows below).

The DCAE cleanup script uses Cloudify Manager and the DCAE Kubernetes plugin to instruct Kubernetes to delete the components deployed by Cloudify Manager. This includes the components deployed when the DCAE bootstrap service ran and any components deployed after bootstrap.

To undeploy the DCAE services deployed via Helm (the hv-ves-collector, ves-collector, tcagen2, and prh), use the helm undeploy command against the top_level_release_name-dcaegen2-services Helm sub-release.
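For illustration, assuming the top-level release was deployed under the name dev in the onap namespace (release names are deployment-specific), the two undeploy operations might look like:

helm undeploy dev-dcaegen2
helm undeploy dev-dcaegen2-services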

DCAE MS Deployment

The DCAE MS catalog includes a number of collector, analytics, and event processor services. Not all microservices are available in a default ONAP/DCAE deployment.

The following services are deployed via DCAE Bootstrap.

VNF Event Streaming (VES) Collector

Virtual Event Streaming (VES) Collector (formerly known as Standard Event Collector/Common Event Collector) is a RESTful collector for bringing JSON events into DCAE. The collector supports individual events or event batches posted to its endpoint(s) and publishes them to an interface/bus for other applications to subscribe to. The collector verifies the source (when authentication is enabled) and validates the events against the VES schema before distributing them to DMaaP MR topics for downstream systems to subscribe to. The VESCollector also supports a configurable event transformation function and event distribution to DMaaP MR topics.

VES Collector (HTTP) overview and functions
VES Architecture

[Figure: VES deployment architecture (_images/ves-deployarch.png)]
VES Processing Flow
  1. The collector supports different URIs depending on whether a single event or an event batch is received.

  2. Post authentication, events are validated against the schema. At this point, an appropriate return code is sent to the client when validation fails.

  3. The Event Processor checks against transformation rules (if enabled) and handles VES output standardization (e.g. VES 7.x input to VES 5.4 output).

  4. Optionally (activated by the flag collector.externalSchema.checkflag), post-authentication validation of stndDefined fields is performed; the specific fields are validated against their schema. At this point, an appropriate return code is sent to the client when validation fails.

  5. If no problems were detected during the previous steps, a success HTTP code is returned.

  6. Based on the domain (or stndDefinedNamespace), events are asynchronously distributed to configurable topics.
    1. If a topic mapping does not exist, event distribution is skipped.

    2. Post to outbound topic(s).

    3. If DMaaP publish is unsuccessful, messages are queued per topic within VESCollector.

Note: As the collector is deployed as a microservice, all configuration parameters (including DMaaP topics) are passed to the collector dynamically. VESCollector refreshes the configuration from CBS every 5 minutes.

[Figure: VES processing flow (_images/ves-processing-flow.png)]
VES Schema Validation

The VES Collector is configured to support the VES versions below; the corresponding API uses the listed VES schema definition for event validation.

VES Version   API version        Schema Definition
-----------   ----------------   ----------------------------------
VES 1.2       eventListener/v1   CommonEventFormat_Vendors_v25.json
VES 4.1       eventListener/v4   CommonEventFormat_27.2.json
VES 5.4       eventListener/v5   CommonEventFormat_28.4.1.json
VES 7.2.1     eventListener/v7   CommonEventFormat_30.2.1_ONAP.json

Features Supported
  • VES collector deployed as docker container

  • Acknowledgement to the sender with an appropriate response code (both success and failure)

  • Authentication of the events posted to the collector (supports two types of authentication settings)

  • Support for single or batch JSON event input

  • General schema validation (against the standard VES definition)

  • StndDefined fields schema validation

  • Mapping of external schemas to local schema files during stndDefined validation

  • Multiple schema support and backward compatibility

  • Configurable event transformation

  • Configurable suppression

  • Publishing of events to DMaaP topics (with/without AAF)

The collector can receive events via the standard HTTP port (8080) or the secure port (8443). Depending on the installation/configuration, either one or both can be supported (the ports are also modifiable).

Dynamic configuration fed into the collector via the DCAE Platform:
  • Outbound Dmaap/UEB topic

  • Schema version to be validated against

  • Authentication account for VNF

POST requests result in standard HTTP status codes:

  • 200-299 Success

  • 400-499 Client request has a problem (data error)

  • 500-599 Collector service has a problem

Configuration

VES expects to be able to fetch its configuration directly from the Consul service in JSON format.
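The exact content is deployment-specific; the sketch below is only an illustration using property names discussed later in this document (collector.service.port is an assumption, and all values are examples):

{
  "auth.method": "noAuth",
  "collector.service.port": 8080,
  "collector.schema.checkflag": 1,
  "collector.externalSchema.checkflag": 1,
  "collector.dmaap.streamid": "fault=ves-fault|heartbeat=ves-heartbeat|3GPP-Heartbeat=ves-3gpp-heartbeat"
}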

During ONAP OOM/Kubernetes deployment, this configuration is created from the VES Cloudify blueprint.

Delivery

VES is delivered as a docker container and published in the ONAP Nexus repository following the image naming convention.

Full image name is onap/org.onap.dcaegen2.collectors.ves.vescollector.

VES Collector Cloudify Installation

VESCollector is installed via a Cloudify blueprint by the DCAE bootstrap process on a typical ONAP installation. As the service is containerized, it can also be started in standalone mode.

To run the VES Collector container in standalone mode, the following parameters are required:

docker run -d -p 8080:8080/tcp -p 8443:8443/tcp -P -e DMAAPHOST='10.0.11.1' nexus.onap.org:10001/onap/org.onap.dcaegen2.collectors.ves.vescollector:1.7.9

DMAAPHOST is required for standalone mode; for a normal platform-installed instance, the publish URLs are obtained from Consul. The parameters below are exposed for a DCAE platform (Cloudify) deployed instance:

  • COLLECTOR_IP

  • DMAAPHOST - should contain an address to DMaaP, so that event publishing can work

  • CONFIG_BINDING_SERVICE - should be a name of CBS

  • CONFIG_BINDING_SERVICE_SERVICE_PORT - should be a http port of CBS

  • HOSTNAME - should be a name of VESCollector application as it is registered in CBS catalog

These parameters can be configured either by passing command line options during the docker run call or by specifying environment variables named after the command line option names.
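For example, a standalone run configured entirely via environment variables might look like this (host names and the CBS port are placeholders to adapt to your environment):

docker run -d -p 8080:8080/tcp -p 8443:8443/tcp \
    -e DMAAPHOST='10.0.11.1' \
    -e CONFIG_BINDING_SERVICE='config-binding-service' \
    -e CONFIG_BINDING_SERVICE_SERVICE_PORT='10000' \
    -e HOSTNAME='dcae-ves-collector' \
    nexus.onap.org:10001/onap/org.onap.dcaegen2.collectors.ves.vescollector:1.7.9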

Authentication Support

The VES Collector supports the following authentication types:

  • auth.method=noAuth default option - no security (http)

  • auth.method=certBasicAuth is used to enable mutual TLS authentication or/and basic HTTPs authentication

The blueprint is the same for both deployments; based on the input configuration, VESCollector can be set to the required authentication type. A default ONAP-deployed VESCollector is configured for "certBasicAuth".

If a VESCollector instance needs to be deployed with authentication disabled, follow the setup below.

  • Execute into Bootstrap POD using kubectl command

    Note

    For doing this, follow the steps below:

    • First get the bootstrap pod name by running: kubectl get pods -n onap | grep bootstrap

    • Then log in to the bootstrap pod by running: kubectl exec -it <bootstrap pod> -n onap -- bash

  • The VES blueprint is available in the /blueprints directory as k8s-ves.yaml. A corresponding input file is pre-loaded into the bootstrap pod under /inputs/k8s-ves-inputs.yaml.

  • Deploy blueprint
    cfy install -b ves-http -d ves-http -i /inputs/k8s-ves-inputs.yaml /blueprints/k8s-ves.yaml
    

To undeploy ves-http, the steps are noted below.

  • Uninstall running ves-http and delete deployment
    cfy uninstall ves-http
    

The deployment uninstall will also delete the blueprint. In some cases you might notice a 400 error reported, indicating that an active deployment exists, such as: "An error occurred on the server: 400: Can't delete blueprint ves-http - There exist deployments for this blueprint; Deployments ids: ves-http"

In this case the blueprint can be deleted explicitly using this command:

cfy blueprint delete ves-http
External repo schema files from OOM connection to VES collector

To use schema files defined in the OOM repository and installed with the dcaegen2 module, instead of the schema files bundled in the VES Collector image, follow the setup below.

  • Execute into Bootstrap POD using kubectl command

    Note

    For doing this, follow the steps below:

    • First get the bootstrap pod name by running: kubectl get pods -n onap | grep bootstrap

    • Then log in to the bootstrap pod by running: kubectl exec -it <bootstrap pod> -n onap -- bash

  • The VES blueprint is available in the /blueprints directory as k8s-ves.yaml. A corresponding input file is pre-loaded into the bootstrap pod under /inputs/k8s-ves-inputs.yaml.

  • Edit the k8s-ves.yaml blueprint by adding the section below under the docker_config: tag:
    volumes:
    - container:
        bind: /opt/app/VESCollector/etc/externalRepo/3gpp/rep/sa5/MnS/blob/SA88-Rel16/OpenAPI
      config_volume:
        name: dcae-external-repo-configmap-sa88-rel16
    - container:
        bind: /opt/app/VESCollector/etc/externalRepo/
      config_volume:
        name: dcae-external-repo-configmap-schema-map
    
  • Afterwards, the docker_config: section in the blueprint should look like:
    docker_config:
      healthcheck:
        endpoint: /healthcheck
        interval: 15s
        timeout: 1s
        type: http
      volumes:
      - container:
          bind: /opt/app/VESCollector/etc/externalRepo/3gpp/rep/sa5/MnS/blob/SA88-Rel16/OpenAPI
        config_volume:
          name: dcae-external-repo-configmap-sa88-rel16
      - container:
          bind: /opt/app/VESCollector/etc/externalRepo/
        config_volume:
          name: dcae-external-repo-configmap-schema-map
    

Note

To undeploy ves-http if it is deployed, the steps are noted below.

Uninstall the running ves-http and delete the deployment:
cfy uninstall ves-http

The deployment uninstall will also delete the blueprint. In some cases you might notice a 400 error reported, indicating that an active deployment exists, such as: "An error occurred on the server: 400: Can't delete blueprint ves-http - There exist deployments for this blueprint; Deployments ids: ves-http"

In this case the blueprint can be deleted explicitly using this command:

cfy blueprint delete ves-http

To deploy the modified ves-http, the steps are noted below.

  • Load blueprint:
    cfy blueprints upload -b ves-http /blueprints/k8s-ves.yaml
    
  • Deploy blueprint
    cfy install -b ves-http -d ves-http -i /inputs/k8s-ves-inputs.yaml /blueprints/k8s-ves.yaml
    
Using external TLS certificates obtained using CMP v2 protocol

In order to use the X.509 certificates obtained from the CMP v2 server (so-called "operator's certificates"), refer to the following description:

TLS Support

To comply with ONAP security requirements, all services exposing external APIs require TLS support using AAF-generated certificates. The DCAE Platform was updated in R3 to enable a certificate distribution mechanism for services needing TLS support. For R6, we have moved from generating certificates manually to retrieving certificates from AAF at deployment time.

Solution overview
  1. Certificate setup:

    AAF requires setting up certificate details in AAF manually before a certificate is generated. This step is currently done using a test AAF instance in POD25. The required namespace, DCAE identity (dcae@dcae.onap.org), roles, and Subject Alternative Names for all components are set in the test instance. We use a single certificate for all DCAE components, with a long list of Subject Alternative Names (SANs).

    Current SAN listing:

    bbs-event-processor, bbs-event-processor.onap, bbs-event-processor.onap.svc.cluster.local, config-binding-service, config-binding-service.onap, config-binding-service.onap.svc.cluster.local, dcae-cloudify-manager, dcae-cloudify-manager.onap, dcae-cloudify-manager.onap.svc.cluster.local, dcae-datafile-collector, dcae-datafile-collector.onap, dcae-datafile-collector.onap.svc.cluster.local, dcae-hv-ves-collector, dcae-hv-ves-collector.onap, dcae-hv-ves-collector.onap.svc.cluster.local, dcae-pm-mapper, dcae-pm-mapper.onap, dcae-pm-mapper.onap.svc.cluster.local, dcae-prh, dcae-prh.onap, dcae-prh.onap.svc.cluster.local, dcae-tca-analytics, dcae-tca-analytics.onap, dcae-tca-analytics.onap.svc.cluster.local, dcae-ves-collector, dcae-ves-collector.onap, dcae-ves-collector.onap.svc.cluster.local, deployment-handler, deployment-handler.onap, deployment-handler.onap.svc.cluster.local, holmes-engine-mgmt, holmes-engine-mgmt.onap, holmes-engine-mgmt.onap.svc.cluster.local, holmes-rule-mgmt, holmes-rules-mgmt.onap, holmes-rules-mgmt.onap.svc.cluster.local, inventory, inventory.onap, inventory.onap.svc.cluster.local, policy-handler, policy-handler.onap, policy-handler.onap.svc.cluster.local
    
  2. Certificate generation and retrieval:

    When a DCAE component that needs a TLS certificate is launched, a Kubernetes init container runs before the main component container is launched. The init container contacts the AAF certificate manager server. The AAF certificate management server generates a certificate based on the information previously set up in step 1 above and sends the certificate (in several formats) along with keys and passwords to the init container. The init container renames the files to conform to DCAE naming conventions and creates some additional formats. It stores the results into a volume that’s shared with the main component container.

    DCAE platform components are deployed via ONAP OOM. The Helm chart for each deployment includes the init container and sets up the shared volume.

    DCAE service components (sometimes called “microservices”) are deployed via Cloudify using blueprints. This is described in more detail in the next section.

  3. Plugin and Blueprint:

    The blueprint for a component that needs a TLS certificate must include a node property called "tls_info" in the node properties for the component. The property is a dictionary with two elements:

    • A boolean (use_tls) that indicates whether the component uses TLS.

    • A string (cert_directory) that indicates where the component expects to find certificate artifacts.

    Example

tls_info:
   cert_directory: '/opt/app/dh/etc/cert'
   use_tls: true

(Note that the cert_directory value does not include a trailing /.)

For this example the certificates are mounted into /opt/app/dh/etc/cert directory within the container.

During deployment, the Kubernetes plugin (referenced in the blueprint) checks whether the tls_info property is set and use_tls is set to true; if so, the plugin adds the following elements to the Kubernetes Deployment for the component (a sketch of the resulting fragment follows the list):
  • A Kubernetes volume (tls-info) that will hold the certificate artifacts

  • A Kubernetes initContainer (tls-init)

  • A Kubernetes volumeMount for the initContainer that mounts the tls-info volume at /opt/app/osaaf.

  • A Kubernetes volumeMount for the main container that mounts the tls-info volume at the mount point specified in the cert_directory property.
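A minimal sketch of the resulting Deployment fragment, under the rules above (the volume type, container names and the non-cert mount paths are illustrative assumptions):

spec:
  template:
    spec:
      initContainers:
      - name: tls-init                        # fetches certificates from AAF
        volumeMounts:
        - name: tls-info
          mountPath: /opt/app/osaaf
      containers:
      - name: component                       # the main component container
        volumeMounts:
        - name: tls-info
          mountPath: /opt/app/dh/etc/cert     # the cert_directory value from tls_info
      volumes:
      - name: tls-info                        # shared volume holding the certificate artifacts
        emptyDir: {}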

Service components that act as HTTPS clients only need access to the root CA certificate used by AAF. For R6, such components should set up a tls_info property as described above. See below for a note about an alternative approach that is available in R6 but is not currently being used.

  4. Certificate artifacts

    The certificate directory mounted on the container will include the following files:
    • cert.jks: A Java keystore containing the DCAE certificate.

    • jks.pass: A text file with a single line that contains the password for the cert.jks keystore.

    • trust.jks: A Java truststore containing the AAF CA certificate. (Needed by clients that access TLS-protected servers.)

    • trust.pass: A text file with a single line that contains the password for the trust.jks keystore.

    • cert.p12: The DCAE certificate and private key packaged in PKCS12 form.

    • p12.pass: A text file with a single line that contains the password for cert.p12 file.

    • cert.pem: The DCAE certificate concatenated with the intermediate CA certificate from AAF, in PEM form.

    • key.pem: The private key for the DCAE certificate. The key is not encrypted.

    • cacert.pem: The AAF CA certificate, in PEM form. (Needed by clients that access TLS-protected servers.)

  5. Alternative for getting CA certificate only

    The certificates generated by AAF are signed by AAF, not by a recognized certificate authority (CA). If a component acts as a client and makes an HTTPS request to another component, it will not be able to validate the other component's server certificate because it will not recognize the CA. Most HTTPS client library software will raise an error and drop the connection. To prevent this, the client component needs a copy of the AAF CA certificate. One way to do this is to set up the tls_info property as described in section 3 above.

    There are alternatives. In R6, two versions of the DCAE k8splugin are available: version 1.7.2 and version 2.0.0. They behave differently with respect to setting up the CA certs.

    • k8splugin version 1.7.2 will automatically mount the CA certificate, in PEM format, at /opt/dcae/cacert/cacert.pem. It is not necessary to add anything to the blueprint. To get the CA certificate in PEM format in a different directory, add a tls_info property to the blueprint, set use_tls to false, and set cert_directory to the directory where the CA cert is needed. For example:

      tls_info:
         cert_directory: '/opt/app/certs'
         use_tls: false
      

      For this example, the CA certificate would be mounted at /opt/app/certs/cacert.pem.

      k8splugin version 1.7.2 uses a configmap, rather than an init container, to supply the CA certificate.

    • k8splugin version 2.0.0 will automatically mount the CA certificate, in PEM and JKS formats, in the directory /opt/dcae/cacert. It is not necessary to add anything to the blueprint. To get the CA certificates in a different directory, add a tls_info property to the blueprint, set use_tls to false, and set cert_directory to the directory where the CA certs are needed. Whatever directory is used, the following files will be available:

      • trust.jks: A Java truststore containing the AAF CA certificate. (Needed by clients that access TLS-protected servers.)

      • trust.pass: A text file with a single line that contains the password for the trust.jks keystore.

      • cacert.pem: The AAF CA certificate, in PEM form. (Needed by clients that access TLS-protected servers.)

      k8splugin version 2.0.0 uses an init container to supply the CA certificates.

External TLS Support - using Cloudify

External TLS support was introduced in order to integrate DCAE with CertService, to acquire operator certificates meant to protect external traffic between DCAE components (VES collector, HV-VES, RestConf collector and DFC) and xNFs. For that reason, the K8s plugin, which creates K8s resources from Cloudify blueprints, was enhanced to support new TLS properties. The new TLS properties control the CertService client call in the init containers section and the environment variables passed to it.

This external TLS support doesn’t influence ONAP internal traffic which is protected by certificates issued by AAF’s CertMan. External TLS Support was introduced in k8splugin 3.1.0.

From k8splugin 3.4.1, when external TLS is enabled (use_external_tls=true), the keystore contains only the certificate from the CMPv2 server. The keystore issued by CertMan has the .bak extension appended and is not used.

  1. Certificate setup:

    To create certificate artifacts, OOM CertService must obtain the certificate details. The common name and the list of Subject Alternative Names (SANs) are set in the blueprint as described in step 3. The following parameters with default values are stored in OOM in the k8splugin configuration file (k8splugin.json) in the group external_cert:

    • A string image_tag that indicates CertService client image name and version

    • A string request_url that indicates URL to Cert Service API

    • A string timeout that indicates request timeout.

    • A string country that indicates country name in ISO 3166-1 alpha-2 format, for which certificate will be created

    • A string organization that indicates organization name, for which certificate will be created.

    • A string state that indicates state name, for which certificate will be created.

    • A string organizational_unit that indicates organizational unit name, for which certificate will be created.

    • A string location that indicates location name, for which certificate will be created.

    • A string keystore_password that indicates keystore password.

    • A string truststore_password that indicates truststore password.

    Group external_cert from k8splugin.json with default values:

    {
      "image_tag": "nexus3.onap.org:10001/onap/org.onap.oom.platform.certservice.oom-certservice-client:$VERSION",
      "request_url": "https://oom-cert-service:8443/v1/certificate/",
      "timeout":  "30000",
      "country": "US",
      "organization": "Linux-Foundation",
      "state": "California",
      "organizational_unit": "ONAP",
      "location": "San-Francisco",
      "keystore_password": "secret",
      "truststore_password": "secret"
    }
    

    Parameters configured in k8splugin are propagated via Helm charts to a Kubernetes ConfigMap and finally transferred to Consul. The blueprint, at the start of execution, reads the k8splugin.json configuration from Consul and applies it.

  2. Certificate generation and retrieval:

    When a DCAE component that needs an external TLS certificate is launched, a Kubernetes init container runs before the main component container is launched. The init container contacts the OOM CertService.

    DCAE service components (sometimes called “microservices”) are deployed via Cloudify using blueprints. This is described in more detail in the next section.

  3. Plugin and Blueprint: The blueprint for a component that needs an external TLS certificate must include the node property called "external_cert" in the node properties for the component. The property is a dictionary with the following elements:

    • A boolean (use_external_tls) that indicates whether the component uses TLS in external traffic.

    • A string (external_cert_directory) that indicates where the component expects to find operator certificate and trusted certs.

    • A string (ca_name) that indicates name of Certificate Authority configured on CertService side (in cmpServers.json).

    • A string (output_type) that indicates certificate output type.

    • A dictionary (external_certificate_parameters) with two elements:
      • A string (common_name) that indicates common name which should be present in certificate. Specific for every blueprint (e.g. dcae-ves-collector for VES).

      • A string (sans) that indicates the list of Subject Alternative Names (SANs) which should be present in the certificate, delimited by ','. It should contain the common_name value and other FQDNs under which the given component is accessible. The following SAN types are supported: DNS names, IPs, URIs, emails.

    As a final step of the plugin, the generated CMPv2 truststore entries are appended to the AAF CA truststore (see certificate artifacts below).

    Example

    external_cert:
        external_cert_directory: /opt/app/dcae-certificate/
        use_external_tls: true
        ca_name: "RA"
        cert_type: "P12"
        external_certificate_parameters:
            common_name: "simpledemo.onap.org"
            sans: "simpledemo.onap.org,ves.simpledemo.onap.org,ves.onap.org"
    

    For this example the certificates are mounted into /opt/app/dcae-certificate/external directory within the container.

    During deployment, the Kubernetes plugin (referenced in the blueprint) checks whether the external_cert property is set and use_external_tls is set to true; if so, the plugin adds the following elements to the Kubernetes Deployment for the component:
    • A Kubernetes volume (tls-volume) that will hold the certificate artifacts

    • A Kubernetes initContainer (cert-service-client)

    • A Kubernetes volumeMount for the initContainer that mounts the tls-volume volume at /etc/onap/oom/certservice/certs/.

    • A Kubernetes volumeMount for the main container that mounts the tls-info volume at the mount point specified in the external_cert_directory property.

    The Kubernetes volumeMount tls-info is shared with the TLS init container for internal traffic.

  4. Certificate artifacts

    The certificate directory mounted on the container will include the following:
    • Directory external with files:
      • keystore.p12: A keystore containing the operator certificate.

      • keystore.pass: A text file with a single line that contains the password for the keystore.p12 keystore.

      • truststore.p12: A truststore containing the operator certificate. (Needed by clients that access TLS-protected servers in external traffic.)

      • truststore.pass: A text file with a single line that contains the password for the truststore.p12 keystore.

    • trust.jks: A file with the AAF CA certificate and CMPv2 certificate with private key packaged in Java form.

    • trust.jks.bak: The (original) file with the AAF CA certificate only.

    • trust.pass: A text file with a single line that contains the password for trust.jks and trust.jks.bak file.

    • cacert.pem: The AAF CA certificate, in PEM form.

External TLS Support - Helm based deployment
CMPv2 certificates can be enabled and configured via helm values. The feature is switched on only when:
  • the global.cmpv2Enabled flag is set to true

  • certDirectory, the directory where TLS certs should be stored, is set (in a specific component)

  • the useCmpv2Certificates flag is set to true (in a specific component)

Default values for certificates are defined in global.certificate.default and can be overridden during the ONAP installation process.

global:
  certificate:
    default:
      renewBefore: 720h #30 days
      duration:    8760h #365 days
      subject:
        organization: "Linux-Foundation"
        country: "US"
        locality: "San-Francisco"
        province: "California"
        organizationalUnit: "ONAP"
      issuer:
        group: certmanager.onap.org
        kind: CMPv2Issuer
        name: cmpv2-issuer-onap
CMPv2 settings can be changed in Helm values.
  • mountPath - the directory within the container where certificates should be mounted

  • commonName - indicates common name which should be present in certificate

  • dnsNames - list of DNS names which should be present in certificate

  • ipAddresses - list of IP addresses which should be present in certificate

  • uris - list of uris which should be present in certificate

  • emailAddresses - list of email addresses which should be present in certificate

  • outputType - indicates certificate output type (jks or p12)

certificates:
- mountPath: <PATH>
  commonName: <COMMON-NAME>
  dnsNames:
    - <DNS-NAME-1>
    - <DNS-NAME-2>
    ...
  ipAddresses:
    ...
  uris:
    ...
  emailAddresses:
    ...
  keystore:
    outputType:
      - <OUTPUT-TYPE>
    passwordSecretRef:
      name: <SECRET-NAME>
      key: <PASSWORD-KEY>
      create: <SHOULD-CREATE>

The values can be changed by upgrading a component with modified values, e.g.:

helm -n onap upgrade <deployment name> --values <path to updated values> <path to chart>
VES Collector Helm Installation
Authentication Support - Helm based deployment

The VES Collector supports the following authentication types:

  • auth.method=noAuth - no security (http)

  • auth.method=certBasicAuth - is used to enable mutual TLS authentication or/and basic HTTPs authentication

Default ONAP deployed VESCollector is configured for “certBasicAuth”.

The default behavior can be changed by upgrading the dcaegen2-services deployment with custom values:

helm -n <namespace> upgrade <DEPLOYMENT_PREFIX>-dcaegen2-services --reuse-values --values <path to values> <path to dcaegen2-services helm charts>

For example:

helm -n onap upgrade dev-dcaegen2-services --reuse-values --values new-config.yaml oom/kubernetes/dcaegen2-services

Where the contents of the new-config.yaml file is:

dcae-ves-collector:
  applicationConfig:
    auth.method: "noAuth"

For small changes like this, it is also possible to inline the new value:

helm -n onap upgrade dev-dcaegen2-services --reuse-values --set dcae-ves-collector.applicationConfig.auth.method="noAuth" oom/kubernetes/dcaegen2-services

After the upgrade, the new auth method value should be visible inside the dev-dcae-ves-collector-application-config-configmap ConfigMap. It can be verified by running:

kubectl -n onap get cm <config map name> -o yaml

For the VES Collector:

kubectl -n onap get cm dev-dcae-ves-collector-application-config-configmap -o yaml
External repository schema files integration with VES Collector

To utilize the externalRepo openAPI schema files defined in the OOM repository and installed with the dcaegen2 module, follow the steps below.

Note

For more information on generating schema files, see External-schema-repo-generator (OOM Utils repository)

The default ONAP deployment for the Istanbul release makes the SA88-Rel16 OpenAPI schema files available; optionally, the SA99-Rel16 files can be loaded using the Generator script based on the steps documented in the README.

  1. Go to the directory with the dcaegen2-services Helm charts (oom/kubernetes/dcaegen2-services). These charts should be located on the RKE deployer node or the server used to deploy and manage the ONAP installation via Helm charts.

  2. Create a file with VES-specific values-overrides:

dcae-ves-collector:
  externalVolumes:
    - name: '<config map name with schema mapping file>'
      type: configmap
      mountPath: <path on VES collector container where externalRepo schema-map is expected>
      optional: true
    - name: '<config map name contains schemas>'
      type: configmap
      mountPath: <path on VES collector container where externalRepo openAPI files are stored>
      optional: true

E.g:

dcae-ves-collector:
  externalVolumes:
    - name: 'dev-dcae-external-repo-configmap-schema-map'
      type: configmap
      mountPath: /opt/app/VESCollector/etc/externalRepo
      optional: true
    - name: 'dev-dcae-external-repo-configmap-sa88-rel16'
      type: configmap
      mountPath: /opt/app/VESCollector/etc/externalRepo/3gpp/rep/sa5/MnS/blob/SA88-Rel16/OpenAPI
      optional: true

If more than a single external schema is required, add a new config map to the 'externalVolumes' object as in the example above. Make sure that all external schemas (all openAPI files) are reflected in the schema-map file.

  3. Upgrade the release using the following command:

helm -n <namespace> upgrade <dcaegen2-services release name> --reuse-values -f <path to values.yaml file created in previous step> <path to dcaegen2-services helm chart>

E.g:

helm -n onap upgrade dev-dcaegen2-services --reuse-values -f values.yaml .
Using external TLS certificates obtained using CMP v2 protocol

In order to use the X.509 certificates obtained from the CMP v2 server (so-called "operator's certificates"), refer to the following description:

Enabling TLS with external x.509 certificates

Example values for VES Collector:
global:
  cmpv2Enabled: true
dcae-ves-collector:
  useCmpv2Certificates: true
  certificates:
  - mountPath: /opt/app/dcae-certificate/external
    commonName: dcae-ves-collector
    dnsNames:
      - dcae-ves-collector
      - ves-collector
      - ves
    keystore:
      outputType:
        - jks
      passwordSecretRef:
        name: ves-cmpv2-keystore-password
        key: password
        create: true
Authentication Types

VES supports mutual TLS authentication via X.509 certificates. If VES is deployed via a docker image, the VES configuration can be modified by editing /opt/app/VESCollector/etc/collector.properties, which is present on the docker container. VES detects changes made to this file automatically and restarts the application.

Authentication can be enabled via the collector.service.secure.clientauth property. When collector.service.secure.clientauth=1, VES uses additional properties:

  • collector.truststore.file.location - a path to jks trust store containing certificates of clients or certificate authorities

  • collector.truststore.passwordfile - a path to file containing password for the trust store

Mutual TLS authentication also requires server certificates, so the following properties have to be set to valid values:

  • collector.keystore.file.location - a path to jks key store containing certificates which can be used for TLS handshake

  • collector.keystore.passwordfile - a path to file containing a password for the key store

The property auth.method is used to manage the security mode; possible configurations: noAuth, certBasicAuth.

  • auth.method=noAuth default option - no security (http)

  • auth.method=certBasicAuth is used to enable mutual TLS authentication or/and basic HTTPs authentication

  • client without cert and without basic auth = Authentication failure

  • client without cert and wrong basic auth = Authentication failure

  • client without cert and correct basic auth = Authentication successful

  • client with cert and without/wrong basic auth = Authentication successful

  • client with cert and correct basic auth = Authentication successful

When the application is in certBasicAuth mode, certificates are also validated by a regexp in /etc/certSubjectMatcher.properties; only the SubjectDn field of the certificate description is checked. The default regexp value is .*, which means that all SubjectDN values are approved.

StndDefined Events Collection Mechanism
Description

The target of this development was to allow collection of events defined by standards organizations using the VES Collector, and to provide them for consumption by analytics applications running on top of the DCAE platform. The following features have been implemented (an illustrative event fragment follows the list):

  • Event routing, based on a new CommonHeader field “stndDefinedNamespace”

  • Standards-organization defined events can be included using a dedicated stndDefinedFields.data property

  • Standards-defined events can be validated using openAPI descriptions provided by standards organizations, and indicated in stndDefinedFields.schemaReference
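For illustration, a minimal stndDefined event fragment using these fields might look like the sketch below (other mandatory commonEventHeader fields are omitted; the internal "#" reference inside schemaReference is an example only, and data carries the standards-defined payload):

{
  "event": {
    "commonEventHeader": {
      "domain": "stndDefined",
      "stndDefinedNamespace": "3GPP-FaultSupervision"
    },
    "stndDefinedFields": {
      "schemaReference": "https://forge.3gpp.org/rep/sa5/MnS/blob/SA88-Rel16/OpenAPI/faultMnS.yaml#/components/schemas/NotifyNewAlarm",
      "data": {}
    }
  }
}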

StndDefined properties

There are 5 additional properties related to stndDefined validation in the collector.properties file.

collector.externalSchema.checkflag
    Description: Turns stndDefined data validation on or off. By default this flag is set to 1, which means that validation is enabled; when the flag is set to -1, validation is disabled.
    Example: -1 or 1

collector.externalSchema.mappingFileLocation
    Description: A local filesystem path to the file with mappings of public URLs to local URLs.
    Example: /opt/app/VESCollector/etc/externalRepo/schema-map.json

collector.externalSchema.schemasLocation
    Description: A directory context for the localURL paths set in the mapping file. The resulting schema path is collector.externalSchema.schemasLocation + localURL. This path is not related to the mapping file path and may point to any location.
    Example: /opt/app/VESCollector/etc/externalRepo/ ; when the first mapping from the example mapping file below this table is taken, the validator will look for the schema under the path /opt/app/VESCollector/etc/externalRepo/3gpp/rep/sa5/MnS/blob/SA88-Rel16/OpenAPI/faultMnS.yaml

event.externalSchema.schemaRefPath
    Description: An internal path within the validated JSON. It defines which field is taken as the public schema reference, which is later mapped.
    Example: $.event.stndDefinedFields.schemaReference

event.externalSchema.stndDefinedDataPath
    Description: An internal path within the validated JSON. It defines which field is validated.
    Example: $.event.stndDefinedFields.data
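For reference, a collector.properties fragment setting these five properties to their defaults (taken from the default-value tables later in this section) would look like:

collector.externalSchema.checkflag=1
collector.externalSchema.mappingFileLocation=./etc/externalRepo/schema-map.json
collector.externalSchema.schemasLocation=./etc/externalRepo/
event.externalSchema.schemaRefPath=$.event.stndDefinedFields.schemaReference
event.externalSchema.stndDefinedDataPath=$.event.stndDefinedFields.data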

The schema mapping file is a JSON file with a list of mappings, as shown in the example below.

[
  {
    "publicURL": "https://forge.3gpp.org/rep/sa5/MnS/blob/SA88-Rel16/OpenAPI/faultMnS.yaml",
    "localURL": "3gpp/rep/sa5/MnS/blob/SA88-Rel16/OpenAPI/faultMnS.yaml"
  },
  {
    "publicURL": "https://forge.3gpp.org/rep/sa5/MnS/blob/SA88-Rel16/OpenAPI/heartbeatNtf.yaml",
    "localURL": "3gpp/rep/sa5/MnS/blob/SA88-Rel16/OpenAPI/heartbeatNtf.yaml"
  },
  {
    "publicURL": "https://forge.3gpp.org/rep/sa5/MnS/blob/SA88-Rel16/OpenAPI/PerDataFileReportMnS.yaml",
    "localURL": "3gpp/rep/sa5/MnS/blob/SA88-Rel16/OpenAPI/PerDataFileReportMnS.yaml"
  },
  {
    "publicURL": "https://forge.3gpp.org/rep/sa5/MnS/blob/SA88-Rel16/OpenAPI/provMnS.yaml",
    "localURL": "3gpp/rep/sa5/MnS/blob/SA88-Rel16/OpenAPI/provMnS.yaml"
  }
]
External schema config maps

The mapping and schema file content can be changed by editing the corresponding config map.

Config map name                           Description
---------------------------------------   ---------------------------------------------------------------------------------------
dcae-external-repo-configmap-schema-map   Defines the content of the /opt/app/VESCollector/etc/externalRepo/schema-map.json file.
dcae-external-repo-configmap-sa88-rel16   Defines the content of the schemas stored in the /opt/app/VESCollector/etc/externalRepo folder.

Config maps are defined in the OOM repository and are installed with the dcaegen2-services module.

Properties configuration via Cloudify

The Collector.properties content may be overridden when deploying the VES Collector via Cloudify. To keep VES settings consistent, the properties listed above have been updated in the VES Collector Cloudify blueprint (in the blueprints/k8s-ves.yaml file under the dcaegen2/platform/blueprints project) and in the componentspec file (dpo/spec/vescollector-componentspec.json in the VES project), which may be used for generation of VES Collector Cloudify blueprints in some scenarios.

The following table shows the new stndDefined-related properties added to the VES Collector Cloudify blueprint. These properties represent fields from the collector.properties file, but also contain the configuration of DMaaP topic URLs used for stndDefined event routing. The table specifies which of these properties may be configured via inputs during blueprint deployment.

NOTE: Keep in mind that some properties may use a relative path. It is relative to the default VES Collector context, which is /opt/app/VESCollector/. E.g., the final path for collector.externalSchema.schemasLocation will be /opt/app/VESCollector/etc/externalRepo/. Setting an absolute path for these properties is also acceptable and won't generate an error.

Property name                                  Input?   Type      Default value
--------------------------------------------   ------   -------   -------------------------------------------------
collector.externalSchema.checkflag             Yes      Integer   1
collector.externalSchema.mappingFileLocation   Yes      String    ./etc/externalRepo/schema-map.json
collector.externalSchema.schemasLocation       Yes      String    ./etc/externalRepo/
event.externalSchema.schemaRefPath             No       String    $.event.stndDefinedFields.schemaReference
event.externalSchema.stndDefinedDataPath       No       String    $.event.stndDefinedFields.data
ves_3gpp_fault_supervision_publish_url         Yes      String    http://message-router.onap.svc.cluster.local:3904/events/unauthenticated.SEC_3GPP_FAULTSUPERVISION_OUTPUT
ves_3gpp_provisioning_publish_url              Yes      String    http://message-router.onap.svc.cluster.local:3904/events/unauthenticated.SEC_3GPP_PROVISIONING_OUTPUT
ves_3gpp_hearbeat_publish_url                  Yes      String    http://message-router.onap.svc.cluster.local:3904/events/unauthenticated.SEC_3GPP_HEARTBEAT_OUTPUT
ves_3gpp_performance_assurance_publish_url     Yes      String    http://message-router.onap.svc.cluster.local:3904/events/unauthenticated.SEC_3GPP_PERFORMANCEASSURANCE_OUTPUT

Config maps with app properties via Helm

When deploying the VES collector via the dcaegen2-services Helm chart, application properties can be changed by editing the corresponding config map.

Config map name                               Description
-------------------------------------------   -------------------------------------------------------------
dcae-ves-collector-application-config-configmap   Defines the content of the /app-config/application_config.yaml file.
dcae-ves-collector-filebeat-configmap             Defines the content of the /usr/share/filebeat/filebeat.yml file.

Properties configuration via Helm chart overrides

The Collector.properties content may be overridden when deploying the VES Collector via Helm chart. When deploying VES using the Helm chart, a config map "dcae-ves-collector-application-config-configmap" with the application_config.yaml file is created. The application_config.yaml contains properties that override values from Collector.properties. To change any value, it is sufficient to edit application_config.yaml in the config map. The VES application frequently re-reads the configMap content and applies configuration changes.

The content of "dcae-ves-collector-application-config-configmap" is defined in the values.yaml of the dcae-ves-collector chart and is installed with the dcaegen2-services module.
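For example, the config map can be edited in place (the dev- release prefix is an example; adjust it to your deployment):

kubectl -n onap edit cm dev-dcae-ves-collector-application-config-configmap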

The following table shows the stndDefined-related properties added to the VES Collector Helm chart. These properties represent fields from the collector.properties file, but also contain the configuration of DMaaP topic URLs used for stndDefined event routing.

NOTE: Keep in mind that some properties may use a relative path. It is relative to the default VES Collector context, which is /opt/app/VESCollector/. E.g., the final path for collector.externalSchema.schemasLocation will be /opt/app/VESCollector/etc/externalRepo/. Setting an absolute path for these properties is also acceptable and won't generate an error.

Property name                                                            Type      Default value
----------------------------------------------------------------------   -------   -------------------------------------------------
collector.externalSchema.checkflag                                       Integer   1
collector.externalSchema.mappingFileLocation                             String    ./etc/externalRepo/schema-map.json
collector.externalSchema.schemasLocation                                 String    ./etc/externalRepo/
event.externalSchema.schemaRefPath                                       String    $.event.stndDefinedFields.schemaReference
event.externalSchema.stndDefinedDataPath                                 String    $.event.stndDefinedFields.data
streams_publishes.ves-3gpp-fault-supervision.dmaap_info.topic_url        String    http://message-router.onap.svc.cluster.local:3904/events/unauthenticated.SEC_3GPP_FAULTSUPERVISION_OUTPUT
streams_publishes.ves-3gpp-provisioning.dmaap_info.topic_url             String    http://message-router.onap.svc.cluster.local:3904/events/unauthenticated.SEC_3GPP_PROVISIONING_OUTPUT
streams_publishes.ves-3gpp-heartbeat.dmaap_info.topic_url                String    http://message-router.onap.svc.cluster.local:3904/events/unauthenticated.SEC_3GPP_HEARTBEAT_OUTPUT
streams_publishes.ves-3gpp-performance-assurance.dmaap_info.topic_url    String    http://message-router.onap.svc.cluster.local:3904/events/unauthenticated.SEC_3GPP_PERFORMANCEASSURANCE_OUTPUT

Validation overview

This mechanism can be used to validate any JSON content incoming as a JsonNode, using OpenAPI-standardized schemas. During validation, externally located schemas are mapped to local schema files.

The validated JSON must have one field that refers to an external schema; this reference is mapped to a local file, and validation of any chosen part of the JSON is then executed using the local schema.

StndDefined validation is integrated with the event collecting functionality available under the endpoint /eventListener/v7. Process of event collecting includes steps in the following order:

  1. General event validation (1st stage validation)

  2. Event transformation

  3. StndDefined event validation (2nd stage validation)

  4. Event routing to DMaaP

The mapping file is cached on stndDefined validator creation, so it is not read every time validation is performed. The schemas' content cannot be cached due to restrictions of an external library (OpenAPI4j).

The value of the ‘stndDefinedNamespace’ field in any incoming stndDefined JSON event is used to match the topic from property collector.dmaap.streamid.

Requirements for stndDefined validation

To run stndDefined validation, both collector.schema.checkflag and collector.externalSchema.checkflag must be set to 1.

Even when the flags are set, the validation will not start when:

  • Domain of the incoming event is not ‘stndDefined’.

  • General event validation (1st stage) failed.

  • Field of event referenced under the property event.externalSchema.schemaRefPath (by default /event/stndDefinedFields/schemaReference):
    • Has an empty value.

    • Does not exist in the incoming event.

Validation scenarios

Positive scenario, which returns 202 Accepted HTTP code after successful stndDefined validation:

  • collector.schema.checkflag and collector.externalSchema.checkflag are set to 1

  • The mapping file has a valid format

  • The schema file mapped from the reference in the event is valid

  • The incoming event is valid against the schema

Below are the scenarios where the stndDefined validation will end with failure and return a 400 Bad Request HTTP code:

  • One of the stndDefined data fields has a wrong type or value

  • The stndDefined data has an empty body or is missing a required field

  • The field of the event referenced under the property event.externalSchema.schemaRefPath has a publicURL which is not mapped in the schema mappings

  • The field defining the public schema in the event (by default /event/stndDefinedFields/schemaReference) contains, after "#", a reference that does not exist in the schema file

Schema repository description

The schemas and mapping file location can be configured to any local directory through properties in collector.properties, as described in the 'StndDefined properties' section.

By default, the schema repository is located under the /opt/app/VESCollector/etc/externalRepo directory, along with the schema mapping file called schema-map.json. Every organisation which adds or mounts external schemas should store them in a folder named after the organisation. The folder structure below that is arbitrary, as long as the schemas are correctly referenced in the mapping file.

Sample directory tree of /opt/app/VESCollector/etc directory:

/opt/app/VESCollector/etc
├── ...
└── externalRepo
    ├── schema-map.json
    └── 3gpp
        └── rep
            └── sa5
                └── MnS
                    └── blob
                        └── SA88-Rel16
                            └── OpenAPI
                                ├── faultMnS.yaml
                                ├── heartbeatNtf.yaml
                                ├── PerDataFileReportMnS.yaml
                                └── provMnS.yaml
Routing of stndDefined domain events

All events, except those with the 'stndDefined' domain, are routed to DMaaP topics based on the domain value. Events with the 'stndDefined' domain are sent to the proper topic based on the field 'stndDefinedNamespace'.

This is the only difference from standard event routing, specific to the 'stndDefined' domain. As in every other event routing, the value is mapped to a specific DMaaP stream. Stream ID to DMaaP channel mappings are located in the /opt/app/VESCollector/etc/collector.properties file under the property collector.dmaap.streamid. Channel descriptions are in /opt/app/VESCollector/etc/DmaapConfig.json, where the destination DMaaP topics are selected.

With stndDefined domain management, 4 new mappings were added. Their routing is described in the table below:

Stream ID                   Channel                          DMaaP Stream
-------------------------   ------------------------------   ----------------------------------------------------
3GPP-FaultSupervision       ves-3gpp-fault-supervision       unauthenticated.SEC_3GPP_FAULTSUPERVISION_OUTPUT
3GPP-Heartbeat              ves-3gpp-heartbeat               unauthenticated.SEC_3GPP_HEARTBEAT_OUTPUT
3GPP-Provisioning           ves-3gpp-provisioning            unauthenticated.SEC_3GPP_PROVISIONING_OUTPUT
3GPP-PerformanceAssurance   ves-3gpp-performance-assurance   unauthenticated.SEC_3GPP_PERFORMANCEASSURANCE_OUTPUT
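For illustration, the corresponding fragment of the collector.dmaap.streamid property could look like the sketch below (pipe-delimited domain=channel pairs; the non-3GPP mappings shown are examples only, and the full shipped value contains more domains):

collector.dmaap.streamid=fault=ves-fault|heartbeat=ves-heartbeat|3GPP-FaultSupervision=ves-3gpp-fault-supervision|3GPP-Heartbeat=ves-3gpp-heartbeat|3GPP-Provisioning=ves-3gpp-provisioning|3GPP-PerformanceAssurance=ves-3gpp-performance-assurance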

Error scenarios behaviour

There are a few error scenarios described in the 'Validation scenarios' section. This section describes the VES Collector behaviour from the user's point of view when they happen. Messages returned as HTTP responses contain the data described below for each scenario.

  1. StndDefined fields validation related errors

1.1. The schema file referred to under the path from the property event.externalSchema.schemaRefPath (by default /event/stndDefinedFields/schemaReference) is not present in the schema repository.

Property Name         Property Description
-------------------   --------------------------------------------------------------
MessageId             SVC2004
Text                  "Invalid input value for %1 %2: %3"
Variables             %1 – "attribute"
                      %2 – "event.stndDefinedFields.schemaReference"
                      %3 – "Referred external schema not present in schema repository"
HTTP status code(s)   400 Bad Request

1.2. The file referred to under the path from the property event.externalSchema.schemaRefPath (by default /event/stndDefinedFields/schemaReference) exists, but the internal reference (the part of the URL after #) is incorrect.

Property Name         Property Description
-------------------   --------------------------------------------------------------
MessageId             SVC2000
Text                  "The following service error occurred: %1. Error code is %2"
Variables             %1 - "event.stndDefinedFields.schemaReference value does not correspond to any external event schema file in externalSchema repo"
                      %2 - "400"
HTTP status code(s)   400 Bad Request

1.3. StndDefined validation was executed, but the event contents do not validate against the referenced schema.

Property Name         Property Description
-------------------   --------------------------------------------------------------
MessageId             SVC2000
Text                  "The following service error occurred: %1. Error code is %2"
Variables             %1 - "event.stndDefinedFields.data invalid against event.stndDefinedFields.schemaReference"
                      %2 - "400"
HTTP status code(s)   400 Bad Request

  2. Problems with routing of the stndDefined domain.

2.1. The stndDefinedNamespace field is not present in the incoming event.

Property Name         Property Description
-------------------   --------------------------------------------------------------
MessageId             SVC2006
Text                  "Mandatory input %1 %2 is missing from request"
Variables             %1 – "attribute"
                      %2 – "event.commonEventHeader.stndDefinedNamespace"
HTTP status code(s)   400 Bad Request

2.2. The stndDefinedNamespace field is present, but its value is empty.

Property Name         Property Description
-------------------   --------------------------------------------------------------
MessageId             SVC2006
Text                  "Mandatory input %1 %2 is empty in request"
Variables             %1 – "attribute"
                      %2 – "event.commonEventHeader.stndDefinedNamespace"
HTTP status code(s)   400 Bad Request

2.3. The stndDefinedNamespace field is present, but its value doesn't match any stream ID mapping.

Property Name         Property Description
-------------------   --------------------------------------------------------------
MessageId             SVC2004
Text                  "Invalid input value for %1 %2: %3"
Variables             %1 – "attribute"
                      %2 – "event.commonEventHeader.stndDefinedNamespace"
                      %3 – "stndDefinedNamespace received not present in VES Collector routing configuration. Unable to route event to appropriate DMaaP topic"
HTTP status code(s)   400 Bad Request

API reference

Refer to VES APIs for detailed API information.

High Volume VNF Event Streaming (HV-VES) Collector

The HV-VES collector has been proposed based on the need to process high volumes of data generated frequently by a large number of NFs. The driving use-case is described and published within a presentation from the Casablanca Release Developer Forum: Google Protocol Buffers versus JSON - 5G RAN use-case - comparison.

The goal of the collector is to support high volume data. It uses plain TCP connections. Connections are stream-based (as opposed to request-based) and long-running. The payload is binary-encoded (currently using Google Protocol Buffers). HV-VES uses a direct connection to DMaaP's Kafka. All these decisions were made in order to support high-volume data with minimal latency.

High Volume VES Collector overview and functions
High-level architecture of HV-VES

The HV-VES Collector is a part of DCAEGEN2. Its goal is to collect data from xNFs (PNF/VNF) and publish it to DMaaP's Kafka. The High Volume Collector is deployed with DCAEGEN2 via OOM Helm charts and Cloudify blueprints.

Input messages arrive over a TCP interface using a wire transfer protocol. Each frame includes a Google Protocol Buffers (GPB) encoded payload. Based on the information provided in the CommonEventHeader, domain messages are validated and published to a specific Kafka topic in DMaaP.

[Figure: High-level architecture of HV-VES (_images/ONAP_VES_HV_Architecture.png)]

Messages published to a DMaaP Kafka topic are consumed by DCAE analytics applications or other ONAP components that consume messages from DMaaP/Kafka. DMaaP provides direct access to Kafka, allowing other analytics applications to utilize its data.

Design
Compatibility aspects (VES-JSON)

HV-VES Collector is a high-volume variant of the existing VES (JSON) collector, and not a completely new collector. HV-VES follows the VES-JSON schema as much as possible.

  • HV-VES uses a Google Protocol Buffers (GPB, proto files) representation of the VES Common Header.

  • The proto files use the most encoding-effective types defined by GPB to cover the Common Header fields.

  • HV-VES makes routing decisions based on the content of the domain field or stndDefinedNamespace field in case of stndDefined events.

  • HV-VES allows embedding payloads of different types (by default, the perf3gpp and stndDefined domains are included).

Analytics applications impacts

  • An analytics application operating on high-volume data needs to be prepared to read directly from Kafka.

  • An analytics application needs to operate on GPB encoded data in order to benefit from GPB encoding efficiencies.

  • It is assumed that, due to the nature of high volume data, dedicated applications able to operate on such volumes of data would have to be provided.

Extendability

HV-VES is designed to be extended by adding new domain-specific proto files.

The proto file (with the VES CommonHeader) comes with a binary-type Payload parameter, where domain-specific data should be placed. Domain-specific data are encoded as well with GPB. A domain-specific proto file is required to decode the data. This domain-specific proto has to be shared with analytics applications - HV-VES does not analyze domain-specific data.

In order to support the RT-PM use-case, HV-VES uses a perf3gpp domain proto file. Within this domain, high volume data are expected to be reported to HV-VES collector. Additional domains can be defined based on existing VES domains (like Fault, Heartbeat) or completely new domains. New domains can be added when needed.

There is also the stndDefined domain, supported by default in HV-VES. Events with this domain are expected to contain a data payload described by OpenAPI schemas. HV-VES doesn't decode the payload of stndDefined events, and thus it does not contain specific stndDefined proto files. The only difference for the stndDefined domain is its specific routing, described in the stndDefined routing sections above.

GPB proto files are backwards compatible, and a new domain can be added without affecting existing systems.

Analytics applications have to be equipped with the new domain-specific proto file as well. Currently, these additional domain-specific proto files can be added to the hv-ves-client protobuf library repository (artifactId: hvvesclient-protobuf).

Implementation details
  • Project Reactor is used as a backbone of the internal architecture.

  • Netty is used by means of reactor-netty library.

  • Kotlin is used to write concise code with great interoperability with existing Java libraries.

  • Types defined in Λrrow library are also used when it improves readability or general cleanness of the code.

Repositories

HV-VES is delivered as a docker container and published in ONAP Nexus repository following image naming convention.

Full image name is onap/org.onap.dcaegen2.collectors.hv-ves.hv-collector-main.

There are also simulators published as docker images. These simulators are used internally during Continuous System and Integration Testing.

Full simulators’ names are onap/org.onap.dcaegen2.collectors.hv-ves.hv-collector-dcae-app-simulator and onap/org.onap.dcaegen2.collectors.hv-ves.hv-collector-xnf-simulator.

For source code, see https://gerrit.onap.org/r/#/admin/projects/dcaegen2/collectors/hv-ves.

Deployment

To run the HV-VES Collector container you need to specify the required command line options and environment variables.

Command line parameters can be specified using either the long form (--long-form) or the short form (-s), followed by an argument if needed (see the Arg column in the table below). These parameters can be omitted if the corresponding environment variables are set. These variables are named after the command line option name rewritten in UPPER_SNAKE_CASE and prepended with the VESHV_ prefix, for example VESHV_CONFIGURATION_FILE.

Command line options take precedence over environment variables when both are present.

Currently HV-VES requires a single command line parameter, which points to the base configuration file.

Long form            Short form   Arg   Env form                   Description
------------------   ----------   ---   ------------------------   -------------------------------------------------
configuration-file   c            yes   VESHV_CONFIGURATION_FILE   Path to JSON file containing HV-VES configuration
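For illustration, a container start using the long form might look like this (the in-container configuration file path and the image tag are assumptions; the image name is given in the Repositories section):

docker run -d -p 6061:6061 \
    nexus.onap.org:10001/onap/org.onap.dcaegen2.collectors.hv-ves.hv-collector-main:latest \
    --configuration-file /etc/ves-hv/configuration/base.json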

The environment variables listed below are required by HV-VES; the collector uses them to provision its run-time configuration, and they are provided by the DCAE platform.

Environment variable name   Description
--------------------------  --------------------------------------------------------------------------
CONSUL_HOST                 Hostname under which Consul service is available
CONFIG_BINDING_SERVICE      Hostname under which Config Binding Service is available
HOSTNAME                    Configuration key of HV-VES as seen by CBS, usually dcae-hv-ves-collector

There is also an optional command line parameter which configures the container-internal port of the Healthcheck Server API (see Healthcheck and Monitoring).

Long form               Short form   Arg   Env form                      Description
----------------------  -----------  ----  ----------------------------  ---------------------------------
health-check-api-port   H            yes   VESHV_HEALTH_CHECK_API_PORT   Health check REST API listen port

Configuration file

The file must provide the base configuration for the HV-VES Collector in JSON format.

Some entries in the configuration can also be obtained from the Config Binding Service (see Run-Time configuration). Every entry defined in the configuration file will be OVERRIDDEN if it is also present in the run-time configuration.

The following JSON shows every possible configuration option. The default file shipped with the HV-VES container can be found in the collector's repository (see Repositories).

{
  "logLevel": "INFO",
  "server.listenPort": 6061,
  "server.idleTimeoutSec": 60,
  "cbs.firstRequestDelaySec": 10,
  "cbs.requestIntervalSec": 5,
  "security.sslDisable": false,
  "security.keys.keyStoreFile": "/etc/ves-hv/ssl/server.p12",
  "security.keys.keyStorePasswordFile": "/etc/ves-hv/ssl/server.pass",
  "security.keys.trustStoreFile": "/etc/ves-hv/ssl/trust.p12",
  "security.keys.trustStorePasswordFile": "/etc/ves-hv/ssl/trust.pass"
}

The configuration is split into smaller sections. The tables below show the restrictions on the fields in the file configuration together with a short description of each.

Server

Key                     Value type   Description
----------------------  -----------  ------------------------------------------------------------------------------------------
server.listenPort       number       Port on which HV-VES listens internally
server.idleTimeoutSec   number       Idle timeout for remote hosts. After the given time without any data exchange, the connection is closed

Config Binding Service

Key                        Value type   Description
-------------------------  -----------  ----------------------------------------------------------------
cbs.firstRequestDelaySec   number       Delay of the first request to Config Binding Service in seconds
cbs.requestIntervalSec     number       Interval of configuration requests in seconds

Security

Key                                    Value type   Description
-------------------------------------  -----------  ------------------------------------------------------------------------------------------
security.sslDisable                    boolean      Disables SSL encryption
security.keys.keyStoreFile             string       Key store path used for HV-VES incoming connections
security.keys.keyStorePasswordFile     string       Key store password file used for HV-VES incoming connections
security.keys.trustStoreFile           string       Path to a file with the trusted certificates bundle used for HV-VES incoming connections
security.keys.trustStorePasswordFile   string       Trust store password file used for HV-VES incoming connections

All security entries are mandatory when security.sslDisable is set to false. Otherwise, only security.sslDisable needs to be specified. If the security.sslDisable flag is missing, it is interpreted as if it were set to false.
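
As a sketch, a minimal test-only base configuration with TLS disabled might then look as follows (written here as a shell heredoc; the non-security entries are kept from the default file above):

cat > /etc/ves-hv/configuration/base.json <<'EOF'
{
  "logLevel": "INFO",
  "server.listenPort": 6061,
  "server.idleTimeoutSec": 60,
  "cbs.firstRequestDelaySec": 10,
  "cbs.requestIntervalSec": 5,
  "security.sslDisable": true
}
EOF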

Uncategorized

Key        Value type   Description
---------  -----------  -------------------------------------------------------------------------------------------------------------------------
logLevel   string       Log level on which HV-VES publishes all log messages. Valid values are (case insensitive): ERROR, WARN, INFO, DEBUG, TRACE.

Horizontal Scaling

The Kubernetes command line tool (kubectl) is recommended for manual horizontal scaling of the HV-VES Collector.

To scale the HV-VES deployment, you need to determine its name and the namespace in which it is deployed. For a default OOM deployment, the full HV-VES deployment name is deployment/dep-dcae-hv-ves-collector and it is installed in the onap namespace.

  1. If the namespace is unknown, execute the following command to determine possible namespaces.

kubectl get namespaces

  2. Find the desired deployment (for large outputs, you can pipe the command below through grep hv-ves). You can also see the current replica count in the corresponding column.

ONAP_NAMESPACE=onap
kubectl get --namespace ${ONAP_NAMESPACE} deployment
  3. To scale the deployment, execute the following commands:

DEPLOYMENT_NAME=deployment/dep-dcae-hv-ves-collector
DESIRED_REPLICAS_AMOUNT=5
kubectl scale --namespace ${ONAP_NAMESPACE} ${DEPLOYMENT_NAME} --replicas=${DESIRED_REPLICAS_AMOUNT}

To verify the result, list the collector pods:

kubectl get pods --namespace ${ONAP_NAMESPACE} --selector app=dcae-hv-ves-collector
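
Optionally, you can wait for the scaled deployment to finish rolling out before listing the pods:

kubectl rollout status --namespace ${ONAP_NAMESPACE} ${DEPLOYMENT_NAME}
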
HV-VES Cloudify Installation

Starting from the ONAP Honolulu release, HV-VES is installed with the DCAEGEN2-Services Helm charts. This installation mechanism is convenient, but it doesn't support all HV-VES features (e.g. CMP v2 certificates and IPv4/IPv6 dual-stack networking). This description demonstrates how to deploy the HV-VES collector using the Cloudify orchestrator.

Setting insecure mode for testing

The HV-VES application is configured by default to use TLS/SSL encryption on the TCP connection. However, it is possible to turn off TLS/SSL authorization by overriding Cloudify blueprint inputs.

Accessing bootstrap container with Kubernetes command line tool

To find bootstrap pod, execute the following command:

kubectl -n <onap namespace> get pods | grep bootstrap

To run command line in bootstrap pod, execute:

kubectl -n <onap namespace> exec -it <bootstrap-pod-name> bash
Install HV-VES collector using Cloudify blueprint inputs
  1. If you have a running HV-VES instance, uninstall HV-VES and delete the current deployment:

cfy executions start -d hv-ves uninstall
cfy deployments delete hv-ves
  2. Create a new deployment with inputs from the yaml file (available by default in the bootstrap container):

cfy deployments create -b hv-ves -i inputs/k8s-hv_ves-inputs.yaml hv-ves

In order to disable TLS security, override the security_ssl_disable value in the deployment:

cfy deployments create -b hv-ves -i inputs/k8s-hv_ves-inputs.yaml -i security_ssl_disable=True hv-ves

To verify the inputs, you can execute:

cfy deployments inputs hv-ves
  3. Install the HV-VES deployment:

cfy executions start -d hv-ves install
Using external TLS certificates obtained using CMP v2 protocol

In order to use the X.509 certificates obtained from the CMP v2 server (so called "operator's certificates"), refer to the Enabling TLS with external x.509 certificates description below.

HV-VES Helm Installation

Starting from the ONAP Honolulu release, HV-VES is installed with the DCAEGEN2-Services Helm charts. The HV-VES application is configured by default to use TLS/SSL encryption on the TCP connection.

Disable TLS security - Helm based deployment

The default behavior can be changed by upgrading the dcaegen2-services deployment with custom values:

helm -n <namespace> upgrade <DEPLOYMENT_PREFIX>-dcaegen2-services --reuse-values --values <path to values> <path to dcaegen2-services helm charts>

For example:

helm -n onap upgrade dev-dcaegen2-services --reuse-values --values new-config.yaml oom/kubernetes/dcaegen2-services

Where the contents of the new-config.yaml file are:

dcae-hv-ves-collector:
  applicationConfig:
    security.sslDisable: true

For small changes like this, it is also possible to inline the new value:

helm -n onap upgrade dev-dcaegen2-services --reuse-values --set dcae-hv-ves-collector.applicationConfig.security.sslDisable="true" oom/kubernetes/dcaegen2-services

After the upgrade, the security.sslDisable property should be changed and visible in the dev-dcae-hv-ves-collector-application-config-configmap ConfigMap. It can be verified by running:

kubectl -n onap get cm <config map name> -o yaml

For the HV-VES Collector:

kubectl -n onap get cm dev-dcae-hv-ves-collector-application-config-configmap -o yaml

For the new configuration to be applied by the HV-VES Collector, an application restart might be necessary. This can be done by disabling and re-enabling HV-VES via Helm:

helm -n onap upgrade dev-dcaegen2-services --reuse-values --set dcae-hv-ves-collector.enabled="false" oom/kubernetes/dcaegen2-services
helm -n onap upgrade dev-dcaegen2-services --reuse-values --set dcae-hv-ves-collector.enabled="true" oom/kubernetes/dcaegen2-services
Using external TLS certificates obtained using CMP v2 protocol

In order to use the X.509 certificates obtained from the CMP v2 server (so called "operator's certificates"), refer to the following description:

Enabling TLS with external x.509 certificates

Example values for HV-VES Collector:
global:
  cmpv2Enabled: true
dcae-hv-ves-collector:
  useCmpv2Certificates: true
  certificates:
  - mountPath: /etc/ves-hv/ssl/external
    commonName: dcae-hv-ves-collector
    dnsNames:
      - dcae-hv-ves-collector
      - hv-ves-collector
      - hv-ves
    keystore:
      outputType:
        - jks
      passwordSecretRef:
        name: hv-ves-cmpv2-keystore-password
        key: password
        create: true
Run-Time configuration

HV-VES dynamic configuration is primarily meant to provide DMaaP Connection Objects (see DMaaP connection objects). These objects contain the information necessary to route received VES Events to the correct Kafka topic. This metadata will later be referred to as the Routing definition.

The collector internally uses the DCAE-SDK to fetch configuration from the Config Binding Service.

HV-VES waits 10 seconds (by default; configurable during deployment with the cbs.firstRequestDelaySec option, see Configuration file) before the first attempt to retrieve configuration from CBS. This is to prevent possible synchronization issues. During that time, HV-VES declines any connection attempts from xNFs (VNF/PNF).

After the first request, HV-VES asks for configuration at fixed intervals, configurable in the file configuration (cbs.requestIntervalSec). By default, the interval is set to 5 seconds.

If it fails to retrieve the configuration, the collector retries the action. After five unsuccessful attempts, the container becomes unhealthy and cannot recover. HV-VES in this state is unusable and the container should be restarted.

Configuration format

The following JSON presents the dynamic configuration options recognized by the HV-VES Collector.

{
  "logLevel": "INFO",
  "server.listenPort": 6061,
  "server.idleTimeoutSec": 60,
  "cbs.requestIntervalSec": 5,
  "security.sslDisable": false,
  "security.keys.keyStoreFile": "/etc/ves-hv/ssl/cert.jks",
  "security.keys.keyStorePasswordFile": "/etc/ves-hv/ssl/jks.pass",
  "security.keys.trustStoreFile": "/etc/ves-hv/ssl/trust.jks",
  "security.keys.trustStorePasswordFile": "/etc/ves-hv/ssl/trust.pass",
  "streams_publishes": {
    "perf3gpp": {
      "type": "kafka",
      "kafka_info": {
        "bootstrap_servers": "message-router-kafka:9092",
        "topic_name": "HV_VES_PERF3GPP"
      }
    },
    "heartbeat": {
      "type": "kafka",
      "kafka_info": {
        "bootstrap_servers": "message-router-kafka:9092",
        "topic_name": "HV_VES_HEARTBEAT"
      }
    },
    "ves-3gpp-fault-supervision": {
      "type": "kafka",
      "kafka_info": {
        "bootstrap_servers": "message-router-kafka:9092",
        "topic_name": "SEC_3GPP_FAULTSUPERVISION_OUTPUT"
      }
    },
    "ves-3gpp-heartbeat": {
      "type": "kafka",
      "kafka_info": {
        "bootstrap_servers": "message-router-kafka:9092",
        "topic_name": "SEC_3GPP_HEARTBEAT_OUTPUT"
      }
    }
  }
}

Fields have the same meaning as in the configuration file, with the only difference being the Routing definition.

Note

There is no verification of data correctness (e.g. whether the specified security files are present on the machine); invalid data can thus result in service malfunction or even container shutdown.

Routing

For every JSON key-object pair defined in "streams_publishes", the key is used as the domain and the related object is used to set up Kafka's bootstrap servers and the Kafka topic for this domain.

When receiving a VES Event from a client, the collector checks whether the domain (or stndDefinedNamespace when the domain is 'stndDefined') from the event corresponds to any domain from the Routing and publishes the event to the related topic. If there is no match, the event is dropped. If there are two routes from the same domain to different topics, it is undefined which route is used.

For more information, see Supported domains.
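
To verify routing end to end, you can consume from the target topic directly; a sketch assuming a pod with standard Kafka client tooling is available in the cluster (the pod name is hypothetical; bootstrap server and topic are taken from the configuration above):

kubectl -n onap exec -it <kafka-client-pod> -- kafka-console-consumer.sh \
  --bootstrap-server message-router-kafka:9092 --topic HV_VES_PERF3GPP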

Providing configuration during OOM deployment

The configuration is created from the HV-VES Cloudify blueprint by specifying the application_config node during ONAP OOM/Kubernetes deployment. Example of the node specification:

node_templates:
  hv-ves:
    properties:
      application_config:
        logLevel: "INFO"
        server.listenPort: 6061
        server.idleTimeoutSec: 60
        cbs.requestIntervalSec: 5
        security.sslDisable: false
        security.keys.keyStoreFile: "/etc/ves-hv/ssl/cert.jks"
        security.keys.keyStorePasswordFile: "/etc/ves-hv/ssl/jks.pass"
        security.keys.trustStoreFile: "/etc/ves-hv/ssl/trust.jks"
        security.keys.trustStorePasswordFile: "/etc/ves-hv/ssl/trust.pass"
        streams_publishes:
          perf3gpp:
            type: "kafka"
            kafka_info:
              bootstrap_servers: "message-router-kafka:9092"
              topic_name: "HV_VES_PERF3GPP"
          heartbeat:
            type: "kafka"
            kafka_info:
              bootstrap_servers: "message-router-kafka:9092"
              topic_name: "HV_VES_HEARTBEAT"
      tls_info:
        cert_directory: "/etc/ves-hv/ssl"
        use_tls: true
SSL/TLS authorization

HV-VES requires the use of SSL/TLS on every TCP connection. This can only be configured during deployment of the application container. For reference about the exact commands, see Deployment.

General steps for configuring TLS for HV-VES collector:

  1. Create the collector's key store in PKCS #12 format and add the HV-VES server certificate to it.

  2. Create the collector's trust store in PKCS #12 format with all trusted certificates and certification authorities. Every client with a certificate signed by a Certificate Authority (CA) in the chain of trust is allowed. The trust store should not contain ONAP's root CAs.

  3. Start the collector with all required options specified.

    docker run -v /path/to/key/and/trust/stores:/etc/hv-ves nexus3.onap.org:10001/onap/org.onap.dcaegen2.collectors.hv-ves.hv-collector-main --listen-port 6061 --config-url http://consul:8500/v1/kv/dcae-hv-ves-collector --key-store /etc/hv-ves/keystore.p12  --key-store-password keystorePass --trust-store /etc/hv-ves/truststore.p12 --trust-store-password truststorePass
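
As an illustration of steps 1 and 2, the stores can be prepared with standard openssl and keytool commands; this is only a sketch, with hypothetical file names matching the docker command above:

    # step 1: bundle the server key and certificate into a PKCS #12 key store
    openssl pkcs12 -export -in server.crt -inkey server.key \
      -name hv-ves-server -out keystore.p12 -passout pass:keystorePass

    # step 2: import the CA that signs client certificates into a PKCS #12 trust store
    keytool -importcert -file client-ca.crt -alias client-ca \
      -keystore truststore.p12 -storetype PKCS12 -storepass truststorePass -noprompt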
    

HV-VES uses the OpenJDK (version 11.0.6) implementation of TLS ciphers. For reference, see https://docs.oracle.com/en/java/javase/11/security/java-security-overview1.html.

If SSL/TLS is enabled for the HV-VES container, the service also turns on client authentication. HV-VES requires clients to provide their certificates on connection. In addition, HV-VES provides its certificate to every client during the SSL/TLS handshake to enable two-way authorization.

The service rejects any connection attempt that is not secured by SSL/TLS, as well as every connection made by an unauthorized client, that is, a client whose certificate is not signed by a CA contained in the HV-VES Collector trust store. With TLS tunneling, the communication protocol does not change (see the description in HV-VES behaviors). In particular, there is no change to the Wire Frame Protocol.
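
The two-way handshake can be exercised manually with openssl; a sketch with hypothetical file names (the client certificate must be signed by a CA present in the collector's trust store):

openssl s_client -connect <hv-ves-host>:6061 \
  -cert client.crt -key client.key -CAfile hv-ves-ca.crt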

HV-VES example event

HV-VES Collector accepts messages in the format provided in TCP Endpoint.

This YAML file represents a message with sample values. It should be treated as an abstract example showing the structure of the message. The message consists of several parts, each encoded in a different way; the encoding is noted in inline comments together with the relevant definition file names.

The values of the fields can be changed according to the types specified in the noted definition files.

WTP:
  -- direct encoding using ASN.1 notation - WTP.asn
  magic: 0xAA
  versionMajor: 0x01
  versionMinor: 0x00
  reserved: 0x00 0x00 0x00
  payloadId: 0x00 0x01
  -- payloadLength set to the highest value 1MiB = 1024 * 1024 = 1048576 B
  payloadLength: 0x00 0x10 0x00 0x00
  payload:
    -- GPB encoded payload - VesEvent.proto
      commonEventHeader:
        version: "1.0"
        domain: "perf3gpp"
        sequence: 0
        priority: 1
        eventId: "sampleEventId01"
        eventName: "sampleEventName01"
        lastEpochMicrosec: 120034455
        startEpochMicrosec: 120034455
        reportingEntityName: "sampleEntityName"
        sourceName: "sampleSourceName"
        vesEventListenerVersion: "anotherVersion"
      eventFields:
        -- GPB encoded fields for perf3gpp domain - Perf3gppFields.proto
        perf3gppFieldsVersion: "1.0"
        measDataCollection:
          -- GPB encoded RTPM - MeasDataCollection.proto
          formatVersion: "28.550 2.0"
          granularityPeriod: 5
          measuredEntityUserName: "sampleEntityUserName"
          measuredEntityDn: "sampleEntityDn"
          measuredEntitySoftwareVersion: "1.0"
          measInfo:
            - measInfo1:
              iMeasInfoId: 1
              iMeasTypes: 1
              jobId: "sampleJobId"
              measValues:
                - measValue1:
                  measObjInstIdListIdx: 1
                  measResults:
                    p: 0
                    sint64 iValue: 63888
                    suspectFlag: false
Healthcheck and Monitoring
Healthcheck

A small HTTP service for healthchecks runs inside the HV-VES docker container. The port for healthchecks can be configured at deployment time using the command line (for details see Deployment).

This service exposes the GET /health/ready endpoint, which returns an HTTP 200 OK response when HV-VES is healthy and ready for connections. Otherwise, it returns an HTTP 503 Service Unavailable message with a short reason for the unhealthiness.

Monitoring

The HV-VES collector allows metrics data to be collected at runtime. For this purpose, the HV-VES application exposes the GET /monitoring/prometheus endpoint, which returns an HTTP 200 OK message with the metrics data in its body. The returned data is in a format readable by the Prometheus service. The Prometheus endpoint shares a port with the healthchecks.
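
Both endpoints can be probed with plain HTTP; the host and port below are placeholders (the port is set via the health-check-api-port option, see Deployment):

curl -i http://<hv-ves-host>:<health-check-port>/health/ready
curl -s http://<hv-ves-host>:<health-check-port>/monitoring/prometheus | grep hvves_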

Metrics provided by HV-VES:

Name of metric                                  Unit          Description
----------------------------------------------  ------------  ------------------------------------------------------------
hvves_clients_rejected_cause_total              cause/piece   number of rejected clients grouped by cause
hvves_clients_rejected_total                    piece         total number of rejected clients
hvves_connections_active                        piece         number of currently active connections
hvves_connections_total                         piece         total number of connections
hvves_data_received_bytes_total                 bytes         total number of received bytes
hvves_disconnections_total                      piece         total number of disconnections
hvves_messages_dropped_cause_total              cause/piece   number of dropped messages grouped by cause
hvves_messages_dropped_total                    piece         total number of dropped messages
hvves_messages_latency_seconds_bucket           seconds       cumulative counters of latency occurrences; latency is the time between message.header.lastEpochMicrosec and the time when the data has been sent from HV-VES to Kafka
hvves_messages_latency_seconds_count            piece         counter for the number of latency occurrences
hvves_messages_latency_seconds_max              seconds       maximal observed latency
hvves_messages_latency_seconds_sum              seconds       sum of the latency parameter from each message
hvves_messages_processing_time_seconds_bucket   seconds       cumulative counters of processing time occurrences; processing time is measured between decoding of the WTP message and the time when the data has been sent from HV-VES to Kafka
hvves_messages_processing_time_seconds_count    piece         counter for the number of processing time occurrences
hvves_messages_processing_time_seconds_max      seconds       maximal processing time
hvves_messages_processing_time_seconds_sum      seconds       sum of the processing time from each message
hvves_messages_received_payload_bytes_total     bytes         total number of received payload bytes
hvves_messages_received_total                   piece         total number of received messages
hvves_messages_sent_topic_total                 topic/piece   number of sent messages grouped by topic
hvves_messages_sent_total                       piece         number of sent messages

JVM metrics:

  • jvm_buffer_memory_used_bytes

  • jvm_classes_unloaded_total

  • jvm_gc_memory_promoted_bytes_total

  • jvm_buffer_total_capacity_bytes

  • jvm_threads_live

  • jvm_classes_loaded

  • jvm_gc_memory_allocated_bytes_total

  • jvm_threads_daemon

  • jvm_buffer_count

  • jvm_gc_pause_seconds_count

  • jvm_gc_pause_seconds_sum

  • jvm_gc_pause_seconds_max

  • jvm_gc_max_data_size_bytes

  • jvm_memory_committed_bytes

  • jvm_gc_live_data_size_bytes

  • jvm_memory_max_bytes

  • jvm_memory_used_bytes

  • jvm_threads_peak

Sample response for GET /monitoring/prometheus:

jvm_threads_live 26.0
system_cpu_count 4.0
jvm_gc_memory_promoted_bytes_total 6740576.0
process_cpu_usage 5.485463521667581E-4
jvm_buffer_count{id="mapped",} 0.0
jvm_buffer_count{id="direct",} 14.0
jvm_threads_peak 26.0
jvm_classes_unloaded_total 0.0
hvves_clients_rejected_cause_total{cause="too_big_payload",} 7.0
hvves_messages_sent_topic_total{topic="HV_VES_PERF3GPP",} 20000.0
jvm_threads_daemon 25.0
jvm_buffer_memory_used_bytes{id="mapped",} 0.0
jvm_buffer_memory_used_bytes{id="direct",} 1.34242446E8
jvm_gc_max_data_size_bytes 4.175429632E9
hvves_messages_received_total 32000.0
hvves_messages_dropped_total 12000.0
hvves_clients_rejected_total 7.0
system_cpu_usage 0.36204059243006037
jvm_memory_max_bytes{area="nonheap",id="CodeHeap 'non-nmethods'",} 5832704.0
jvm_memory_max_bytes{area="nonheap",id="Metaspace",} -1.0
jvm_memory_max_bytes{area="nonheap",id="CodeHeap 'profiled nmethods'",} 1.22912768E8
jvm_memory_max_bytes{area="nonheap",id="Compressed Class Space",} 1.073741824E9
jvm_memory_max_bytes{area="heap",id="G1 Eden Space",} -1.0
jvm_memory_max_bytes{area="heap",id="G1 Old Gen",} 4.175429632E9
jvm_memory_max_bytes{area="heap",id="G1 Survivor Space",} -1.0
jvm_memory_max_bytes{area="nonheap",id="CodeHeap 'non-profiled nmethods'",} 1.22912768E8
hvves_messages_dropped_cause_total{cause="invalid",} 12000.0
hvves_messages_received_payload_bytes_total 1.0023702E7
jvm_buffer_total_capacity_bytes{id="mapped",} 0.0
jvm_buffer_total_capacity_bytes{id="direct",} 1.34242445E8
system_load_average_1m 2.77
hvves_data_received_bytes_total 1.052239E7
jvm_gc_pause_seconds_count{action="end of minor GC",cause="Metadata GC Threshold",} 2.0
jvm_gc_pause_seconds_sum{action="end of minor GC",cause="Metadata GC Threshold",} 0.087
jvm_gc_pause_seconds_count{action="end of minor GC",cause="G1 Evacuation Pause",} 8.0
jvm_gc_pause_seconds_sum{action="end of minor GC",cause="G1 Evacuation Pause",} 0.218
jvm_gc_pause_seconds_max{action="end of minor GC",cause="Metadata GC Threshold",} 0.03
jvm_gc_pause_seconds_max{action="end of minor GC",cause="G1 Evacuation Pause",} 0.031
hvves_messages_processing_time_seconds_max 0.114395
hvves_messages_processing_time_seconds_count 20000.0
hvves_messages_processing_time_seconds_sum 280.282544
hvves_disconnections_total 11.0
jvm_memory_used_bytes{area="nonheap",id="CodeHeap 'non-nmethods'",} 1312640.0
jvm_memory_used_bytes{area="nonheap",id="Metaspace",} 3.624124E7
jvm_memory_used_bytes{area="nonheap",id="CodeHeap 'profiled nmethods'",} 1.1602304E7
jvm_memory_used_bytes{area="nonheap",id="Compressed Class Space",} 4273752.0
jvm_memory_used_bytes{area="heap",id="G1 Eden Space",} 1.38412032E8
jvm_memory_used_bytes{area="heap",id="G1 Old Gen",} 7638112.0
jvm_memory_used_bytes{area="heap",id="G1 Survivor Space",} 7340032.0
jvm_memory_used_bytes{area="nonheap",id="CodeHeap 'non-profiled nmethods'",} 4083712.0
jvm_memory_committed_bytes{area="nonheap",id="CodeHeap 'non-nmethods'",} 2555904.0
jvm_memory_committed_bytes{area="nonheap",id="Metaspace",} 3.7486592E7
jvm_memory_committed_bytes{area="nonheap",id="CodeHeap 'profiled nmethods'",} 1.1730944E7
jvm_memory_committed_bytes{area="nonheap",id="Compressed Class Space",} 4587520.0
jvm_memory_committed_bytes{area="heap",id="G1 Eden Space",} 1.58334976E8
jvm_memory_committed_bytes{area="heap",id="G1 Old Gen",} 9.8566144E7
jvm_memory_committed_bytes{area="heap",id="G1 Survivor Space",} 7340032.0
jvm_memory_committed_bytes{area="nonheap",id="CodeHeap 'non-profiled nmethods'",} 4128768.0
jvm_gc_memory_allocated_bytes_total 1.235222528E9
hvves_connections_total 12.0
jvm_classes_loaded 7120.0
hvves_messages_sent_total 20000.0
hvves_connections_active 1.0
jvm_gc_live_data_size_bytes 7634496.0
hvves_messages_latency_seconds_max 1.5459828692292638E9
hvves_messages_latency_seconds_count 20000.0
hvves_messages_latency_seconds_sum 2.91400110035487E9
Troubleshooting

NOTE

According to the ONAP logging policy, HV-VES logs contain all required markers as well as service- and client-specific Mapped Diagnostic Context (later referred to as MDC).

Default console log pattern:

| %date{"yyyy-MM-dd'T'HH:mm:ss.SSSXXX", UTC}\t| %thread\t| %highlight(%-5level)\t| %msg\t| %marker\t| %rootException\t| %mdc\t| %thread

A sample, fully qualified message implementing this pattern:

| 2018-12-18T13:12:44.369Z       | p.dcae.collectors.veshv.impl.socket.NettyTcpServer    | DEBUG         | Client connection request received    | ENTRY         |       | RequestID=d7762b18-854c-4b8c-84aa-95762c6f8e62, InstanceID=9b9799ca-33a5-4f61-ba33-5c7bf7e72d07, InvocationID=b13d34ba-e1cd-4816-acda-706415308107, PartnerName=C=PL, ST=DL, L=Wroclaw, O=Nokia, OU=MANO, CN=dcaegen2-hvves-client, StatusCode=INPROGRESS, ClientIPAddress=192.168.0.9, ServerFQDN=a4ca8f96c7e5       | reactor-tcp-nio-2

For simplicity, all log messages in this section are shortened to contain only:
  • logger name

  • log level

  • message

Error and warning logs also contain:
  • exception message

  • stack trace

Also, exact exception stack traces have been dropped for readability.

Do not rely on exact log messages or their presence, as they are often subject to change.

Deployment/Installation errors

Missing required parameters

| org.onap.dcae.collectors.veshv.main | ERROR | Failed to create configuration: Base configuration filepath missing on command line
| org.onap.dcae.collectors.veshv.main | ERROR | Failed to start a server | org.onap.dcae.collectors.veshv.config.api.model.MissingArgumentException: Base configuration filepath missing on command line

These log messages are printed when the single required parameter, the configuration file path, was not specified (via the command line or as an environment variable). Command line arguments take priority over environment variables; if you configure a parameter both ways, HV-VES applies the one from the command line. For more information about HV-VES configuration parameters, see Deployment.

Configuration errors

Consul service not available

| org.onap.dcae.collectors.veshv.config.impl.CbsConfigurationProvider | ERROR | Failed to retrieve CBS client: consul-server: Temporary failure in name resolution
| org.onap.dcae.collectors.veshv.config.impl.CbsConfigurationProvider | WARN  | Exception from configuration provider client, retrying subscription | java.net.UnknownHostException: consul-server: Temporary failure in name resolution

HV-VES looks for Consul under the hostname defined in the CONSUL_HOST environment variable. If the service is down, the above logs will appear, and after a few retries the collector will shut down.
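
A quick sanity check is to confirm the variable inside the collector pod and that a Consul service actually exists in the cluster (the pod name is a placeholder):

kubectl -n onap exec <hv-ves-pod> -- env | grep CONSUL_HOST
kubectl -n onap get svc | grep consul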

Config Binding Service not available

| org.onap.dcae.services.sdk.rest.services.cbs.client.impl.CbsLookup  | INFO  | Config Binding Service address: config-binding-service:10000
| org.onap.dcae.collectors.veshv.config.impl.CbsConfigurationProvider | INFO  | CBS client successfully created
| org.onap.dcae.collectors.veshv.config.impl.CbsConfigurationProvider | ERROR | Error while creating configuration: config-binding-service: Temporary failure in name resolution
| org.onap.dcae.collectors.veshv.config.impl.CbsConfigurationProvider | WARN  | Exception from configuration provider client, retrying subscription

The logs indicate that HV-VES successfully retrieved the Config Binding Service (later referred to as CBS) connection string from Consul, though the address was either incorrect or the CBS is down. Make sure CBS is up and running and the connection string stored in Consul is correct.
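
You can reproduce the lookup manually using the URL format from the logs; a sketch assuming curl is available in the chosen pod:

kubectl -n onap exec <some-pod> -- curl -s http://config-binding-service:10000/service_component/dcae-hv-ves-collector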


Missing configuration on Consul

| org.onap.dcae.services.sdk.rest.services.cbs.client.impl.CbsLookup         | INFO  | Config Binding Service address: config-binding-service:10000
| org.onap.dcae.collectors.veshv.config.impl.CbsConfigurationProvider        | INFO  | CBS client successfully created
| org.onap.dcae.collectors.veshv.config.impl.CbsConfigurationProvider        | ERROR | Error while creating configuration: Request failed for URL 'http://config-binding-service:10000/service_component/invalid-resource'. Response code: 404 Not Found
| org.onap.dcae.collectors.veshv.config.impl.CbsConfigurationProvider        | WARN  | Exception from configuration provider client, retrying subscription   |       | org.onap.dcaegen2.services.sdk.rest.services.adapters.http.exceptions.HttpException: Request failed for URL 'http://config-binding-service:10000/service_component/dcae-hv-ves-collector'. Response code: 404 Not Found

HV-VES logs this information when it is connected to Consul but cannot find a JSON configuration under the given key, which in this case is invalid-resource. For more information, see Run-Time configuration.


Invalid configuration format

| org.onap.dcae.services.sdk.rest.services.cbs.client.impl.CbsLookup    | INFO  | Config Binding Service address: config-binding-service:10000
| org.onap.dcae.collectors.veshv.config.impl.CbsConfigurationProvider   | INFO  | CBS client successfully created
| org.onap.dcae.collectors.veshv.config.impl.CbsConfigurationProvider   | INFO  | Received new configuration:
| {"streams_publishes":{"perf3gpp":{"typo":"kafka","kafka_info":{"bootstrap_servers":"message-router-kafka:9092","topic_name":"HV_VES_PERF3GPP"}}}}
| org.onap.dcae.collectors.veshv.config.impl.CbsConfigurationProvider   | ERROR | Error while creating configuration: Could not find sub-node 'type'. Actual sub-nodes: typo, kafka_info
| org.onap.dcae.collectors.veshv.config.impl.CbsConfigurationProvider   | WARN  | Exception from configuration provider client, retrying subscription | org.onap.dcaegen2.services.sdk.rest.services.cbs.client.api.exceptions.StreamParsingException: Could not find sub-node 'type'. Actual sub-nodes: typo, kafka_info

This log is printed when you upload a configuration in an invalid format. The received JSON contains an invalid Streams configuration, therefore HV-VES does not apply it and becomes unhealthy. For more information on dynamic configuration, see Run-Time configuration.

Message handling errors

Handling messages when invalid Kafka url is specified

| org.onap.dcae.collectors.veshv.impl.socket.NettyTcpServer | INFO | Handling new client connection
| org.apache.kafka.clients.ClientUtils                          | WARN | Removing server invalid-message-router-kafka:9092 from bootstrap.servers as DNS resolution failed for invalid-message-router-kafka
| org.apache.kafka.clients.producer.KafkaProducer           | INFO | [Producer clientId=producer-1] Closing the Kafka producer with timeoutMillis = 0 ms.
| org.onap.dcae.collectors.veshv.impl.HvVesCollector            | WARN | Error while handling message stream: org.apache.kafka.common.KafkaException (Failed to construct kafka producer)
| org.onap.dcae.collectors.veshv.impl.socket.NettyTcpServer | INFO | Connection has been closed

HV-VES responds with the above when it handles a message whose domain has invalid bootstrap_servers specified in the streams_publishes configuration. To fix this problem, correct the streams_publishes configuration stored in Consul. For more information, see Run-Time configuration.


Kafka service became unavailable after producer has been created

HV-VES lazily creates a Kafka producer for each domain. If the Kafka service becomes unreachable after producer initialization, appropriate logs are shown and HV-VES fails to deliver subsequent messages to that Kafka service.

| org.apache.kafka.clients.NetworkClient                            | WARN | [Producer clientId=producer-1] Connection to node 1001 could not be established. Broker may not be available.
| org.apache.kafka.clients.NetworkClient                            | WARN | [Producer clientId=producer-1] Connection to node 1001 could not be established. Broker may not be available.
| org.apache.kafka.clients.NetworkClient                            | WARN | [Producer clientId=producer-1] Connection to node 1001 could not be established. Broker may not be available.
| org.apache.kafka.clients.NetworkClient                            | WARN | [Producer clientId=producer-1] Connection to node 1001 could not be established. Broker may not be available.
| org.onap.dcae.collector.veshv.impl.socket.NettyTcpServer          | INFO | Handling new client connection
| org.onap.dcae.collector.veshv.impl.socket.NettyTcpServer          | INFO | Connection has been closed
| org.apache.kafka.clients.NetworkClient                            | WARN | [Producer clientId=producer-1] Connection to node 1001 could not be established. Broker may not be available
| org.onap.dcae.collector.veshv.impl.adapters.kafka.KafkaPublisher  | WARN | Failed to send message to Kafka. Reason: Expiring 1 record(s) for HV_VES_PERF3GPP-0: 30007 ms has passed since batch creation plus linger time
| org.onap.dcae.collectors.veshv.impl.HvVesCollector                | WARN | Error while handling message stream: org.apache.kafka.common.errors.TimeoutException (Expiring 1 record(s) for HV_VES_PERF3GPP-0: 30007 ms has passed since batch creation plus linger time)
| org.apache.kafka.clients.NetworkClient                            | WARN | [Producer clientId=producer-1] Error connecting to node message-router-kafka:9092 (id: 1001 rack: null)

To resolve this issue, you can either wait for the Kafka service to become available again or, just like in the previous paragraph, provide an alternative Kafka bootstrap server via the dynamic configuration (see Run-Time configuration).


Message with too big payload size

| org.onap.dcae.collectors.veshv.impl.socket.NettyTcpServer | INFO  | Handling new client connection
| org.onap.dcae.collectors.veshv.impl.wire.WireChunkDecoder | WARN  | Error while handling message stream: org.onap.dcae.collectors.veshv.impl.wire.WireFrameException (PayloadSizeExceeded: payload size exceeds the limit (1048576 bytes))
| org.onap.dcae.collectors.veshv.impl.socket.NettyTcpServer | INFO  | Connection has been closed

The above log is printed when the message payload size is too big. HV-VES does not handle messages that exceed the maximum payload size specified in the streams_publishes configuration (see DMaaP connection objects).


Invalid GPB data

Messages containing invalid Google Protocol Buffers data are dropped. HV-VES responds as follows:

| org.onap.dcae.collectors.veshv.impl.socket.NettyTcpServer  | INFO  | Handling new client connection
| org.onap.dcae.collectors.veshv.impl.HvVesCollector             | WARN      | Failed to decode ves event header, reason: Protocol message tag had invalid wire type.
| org.onap.dcae.collectors.veshv.impl.socket.NettyTcpServer  | INFO  | Connection has been closed

Invalid Wire Frame

Messages with an invalid Wire Frame, just like those containing invalid GPB data, are dropped. The exact reason can be found in the logs.

| org.onap.dcae.collectors.veshv.impl.socket.NettyTcpServer  | INFO  | Handling new client connection
| org.onap.dcae.collectors.veshv.impl.HvVesCollector             | WARN      | Invalid wire frame header, reason: Invalid major version in wire frame header. Expected 1 but was 2
| org.onap.dcae.collectors.veshv.impl.socket.NettyTcpServer  | INFO  | Connection has been closed

For more information, see the HV-VES behaviors section.

PNF Registration Handler (PRH)

PNF Registration Handler is a micro-service in DCAE. It is used during the Physical Network Function (PNF) Plug and Play procedure to process the PNF Registration event.

PRH overview and functions
PRH Architecture

PRH is a DCAE micro-service that participates in the Physical Network Function Plug and Play (PNF PnP) procedure. PNF PnP is used to register a PNF when it comes online.

PRH Processing Flow (figure: prhAlgo.png)
Configuration

PRH fetches its configuration directly from the CBS service in the following JSON format:

{
  "config":{
    "dmaap.dmaapConsumerConfiguration.dmaapUserName":"admin",
    "dmaap.dmaapConsumerConfiguration.dmaapUserPassword":"admin",
    "dmaap.dmaapConsumerConfiguration.consumerId":"c12",
    "dmaap.dmaapConsumerConfiguration.consumerGroup":"OpenDCAE-c12",
    "dmaap.dmaapConsumerConfiguration.timeoutMs":-1,

    "dmaap.dmaapProducerConfiguration.dmaapUserName":"admin",
    "dmaap.dmaapProducerConfiguration.dmaapUserPassword":"admin",
    "dmaap.dmaapUpdateProducerConfiguration.dmaapUserName":"admin",
    "dmaap.dmaapUpdateProducerConfiguration.dmaapUserPassword":"admin",
    "aai.aaiClientConfiguration.pnfUrl": "https://aai.onap.svc.cluster.local:8443/aai/v12/network/pnfs/pnf",
    "aai.aaiClientConfiguration.baseUrl": "https://aai.onap.svc.cluster.local:8443/aai/v12",
    "aai.aaiClientConfiguration.aaiUserName":"AAI",
    "aai.aaiClientConfiguration.aaiUserPassword":"AAI",
    "aai.aaiClientConfiguration.aaiIgnoreSslCertificateErrors":true,
    "aai.aaiClientConfiguration.aaiServiceInstancePath":"/business/customers/customer/${customer}/service-subscriptions/service-subscription/${serviceType}/service-instances/service-instance/${serviceInstanceId}",
    "aai.aaiClientConfiguration.aaiHeaders":{
      "X-FromAppId":"prh",
      "X-TransactionId":"9999",
      "Accept":"application/json",
      "Real-Time":"true",
      "Authorization":"Basic QUFJOkFBSQ=="
    },
    "security.trustStorePath":"/opt/app/prh/local/org.onap.prh.trust.jks",
    "security.trustStorePasswordPath":"change_it",
    "security.keyStorePath":"/opt/app/prh/local/org.onap.prh.p12",
    "security.keyStorePasswordPath":"change_it",
    "security.enableAaiCertAuth":false,
    "security.enableDmaapCertAuth":false,
    "streams_publishes":{
      "pnf-update":{
        "type": "message_router",
        "dmaap_info":{
          "topic_url":"http://dmaap-mr:2222/events/unauthenticated.PNF_UPDATE"
        }
      },
      "pnf-ready":{
        "type": "message_router",
        "dmaap_info":{
          "topic_url":"http://dmaap-mr:2222/events/unauthenticated.PNF_READY"
        }
      }
    },
    "streams_subscribes":{
      "ves-reg-output":{
        "type": "message_router",
        "dmaap_info":{
          "topic_url":"http://dmaap-mr:2222/events/unauthenticated.VES_PNFREG_OUTPUT"
        }
      }
    }
  }
}

The configuration is created from the PRH Cloudify blueprint by specifying the application_config node during ONAP OOM/Kubernetes deployment.

Delivery

PRH is delivered as a docker container. It is published in ONAP Nexus repository.

Full image name is onap/org.onap.dcaegen2.services.prh.prh-app-server.

Installation

The following docker-compose YAML file shows a default configuration. The file can be run using the docker-compose up command:

version: '3'
services:
  prh:
    image: nexus3.onap.org:10003/onap/org.onap.dcaegen2.services.prh.prh-app-server
    command: >
      --dmaap.dmaapConsumerConfiguration.dmaapHostName=10.42.111.36
      --dmaap.dmaapConsumerConfiguration.dmaapPortNumber=8904
      --dmaap.dmaapConsumerConfiguration.dmaapTopicName=/events/unauthenticated.SEC_OTHER_OUTPUT
      --dmaap.dmaapConsumerConfiguration.dmaapProtocol=http
      --dmaap.dmaapConsumerConfiguration.dmaapUserName=admin
      --dmaap.dmaapConsumerConfiguration.dmaapUserPassword=admin
      --dmaap.dmaapConsumerConfiguration.dmaapContentType=application/json
      --dmaap.dmaapConsumerConfiguration.consumerId=c12
      --dmaap.dmaapConsumerConfiguration.consumerGroup=OpenDCAE-c12
      --dmaap.dmaapConsumerConfiguration.timeoutMS=-1
      --dmaap.dmaapConsumerConfiguration.message-limit=-1
      --dmaap.dmaapProducerConfiguration.dmaapHostName=10.42.111.36
      --dmaap.dmaapProducerConfiguration.dmaapPortNumber=8904
      --dmaap.dmaapProducerConfiguration.dmaapTopicName=/events/unauthenticated.PNF_READY
      --dmaap.dmaapProducerConfiguration.dmaapProtocol=http
      --dmaap.dmaapProducerConfiguration.dmaapUserName=admin
      --dmaap.dmaapProducerConfiguration.dmaapUserPassword=admin
      --dmaap.dmaapProducerConfiguration.dmaapContentType=application/json
      --aai.aaiClientConfiguration.aaiHostPortNumber=30233
      --aai.aaiClientConfiguration.aaiHost=10.42.111.45
      --aai.aaiClientConfiguration.aaiProtocol=https
      --aai.aaiClientConfiguration.aaiUserName=admin
      --aai.aaiClientConfiguration.aaiUserPassword=admin
      --aai.aaiClientConfiguration.aaiIgnoreSSLCertificateErrors=true
      --aai.aaiClientConfiguration.aaiBasePath=/aai/v11
      --aai.aaiClientConfiguration.aaiPnfPath=/network/pnfs/pnf
      --security.enableAaiCertAuth=false
      --security.enableDmaapCertAuth=false
      --security.keyStorePath=/opt/app/prh/etc/cert/cert.jks
      --security.keyStorePasswordPath=/opt/app/prh/etc/cert/jks.pass
      --security.trustStorePath=/opt/app/prh/etc/cert/trust.jks
      --security.trustStorePasswordPath=/opt/app/prh/etc/cert/trust.pass
    entrypoint:
      - java
      - -Dspring.profiles.active=dev
      - -jar
      - /opt/prh-app-server.jar
    ports:
      - "8100:8100"
      - "8433:8433"
    restart: always
Running with dev-mode of PRH

Heartbeat: http://<container_address>:8100/heartbeat or https://<container_address>:8433/heartbeat

Start prh: http://<container_address>:8100/start or https://<container_address>:8433/start

Stop prh: http://<container_address>:8100/stopPrh or https://<container_address>:8433/stopPrh
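
For example, the endpoints can be probed with curl (-k skips certificate verification, for test setups only):

curl http://<container_address>:8100/heartbeat
curl -k https://<container_address>:8433/heartbeat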

SSL/TLS Authentication & Authorization

PRH does not perform any authorization in AAF, as the only endpoint provided by the service is the healthcheck, which is unsecured. For authentication settings, it is possible to change the default behavior to a certificate-based solution, independently for DMaaP and AAI communication.

AAI authentication

Default

By default, basic authentication is used with the following credentials:

user=AAI
password=AAI

Certificate-based

There is an option to enable certificate-based authentication for PRH towards AAI service calls. To achieve this, the secure flag needs to be turned on in the PRH configuration:

security.enableAaiCertAuth=true

DMaaP BC authentication

Default

By default, basic authentication is used with the following credentials (for both DMaaP consumer and DMaaP publisher endpoints):

user=admin
password=admin

Certificate-based

There is an option to enable certificate-based authentication for PRH towards DMaaP Bus Controller service calls. To achieve this, the secure flag needs to be turned on in the PRH configuration:

--security.enableDmaapCertAuth=true

PRH identity and certificate data

PRH uses the dcae identity when certificate-based authentication is turned on. It is DCAEGEN2's responsibility to generate a certificate for the dcae identity and provide it to the collector.

PRH by default expects the volume tls-info to be mounted under the path /opt/app/prh/etc/cert. It is the component/collector responsibility to provide the necessary inputs in the Cloudify blueprint to get the volume mounted. See TLS Support for detailed information.

PRH uses four files from the tls-info DCAE volume (cert.jks, jks.pass, trust.jks, trust.pass). Refer to the configuration for the proper security attribute settings.

IMPORTANT: Even when the certificate-based authentication security features are disabled, all security settings still need to be provided in the configuration for the PRH service to start smoothly. The security attribute values are not validated in this case and can point to non-existent data.
API reference

Refer to PRH offered APIs for detailed PRH API information.

Threshold Crossing Analytics (TCA-gen2)

Overview

TCA-gen2 is a Docker-based microservice intended to replace the TCA/CDAP version, which was first delivered as part of ONAP R0. The functionality of TCA-gen2 is identical to that of TCA: measurement events in the VES structure are subscribed from DMaaP; once events are received, TCA-gen2 compares the incoming performance metric(s) against both a defined high and low threshold and generates CL events when thresholds are exceeded. When the originally defined thresholds are cleared, TCA-gen2 generates an ABATEMENT event to notify the downstream system that the original problem has cleared.

Installation

TCA-gen2 is a microservice that will be configured and instantiated through Cloudify Manager. TCA-gen2 is deployed by the DCAE deployment among the bootstrapped services, mainly to facilitate automated deployment of the services required by ONAP regression test cases. During instantiation, TCA-gen2 fetches its configuration through the Config Binding Service. Steps to deploy using the CLI tool are shown below.

Deployment Prerequisite/dependencies
  • DCAE and DMaaP pods should be up and running.

  • MongoDB should be up and running

  • Make sure that cfy is installed and configured to work with the Cloudify deployment.

Deployment steps

The following steps apply if manual deployment/undeployment is required.

Enter the Cloudify Manager kubernetes pod

Note

For doing this, follow the below steps

  • First get the bootstrap pod name by running: kubectl get pods -n onap | grep bootstrap

  • Then log in to the bootstrap pod by running: kubectl exec -it <bootstrap pod> bash -n onap

  • The TCA-gen2 blueprint is available in the bootstrap pod under /blueprints/k8s-tcagen2.yaml. The blueprint is also maintained in gerrit and can be downloaded from https://git.onap.org/dcaegen2/platform/blueprints/tree/blueprints/k8s-tcagen2.yaml

  • Create input file required for deployment

    Configuration of the service consists of generating an inputs file (YAML) which will be used as part of the Cloudify install. The tca-gen2 blueprint was designed with known defaults for the majority of the fields.

    Below you will find examples of fields which can be configured, and some of the fields which must be configured. The input file is loaded into the bootstrap container (/inputs/k8s-tcagen2-inputs.yaml).

    Property:      tca_handle_out_publish_url
    Sample Value:  http://message-router:3904/events/unauthenticated.TCAGEN2_OUTPUT/
    Description:   DMaaP topic to publish CL event output
    Required:      No

    Property:      tca_handle_in_subscribe_url
    Sample Value:  http://message-router:3904/events/unauthenticated.VES_MEASUREMENT_OUTPUT/
    Description:   DMaaP topic to subscribe to VES measurement feeds
    Required:      No

    Property:      tag_version
    Sample Value:  nexus3.onap.org:10001/onap/org.onap.dcaegen2.analytics.tca-gen2.dcae-analytics-tca-web:1.0.1
    Description:   The tag of the Docker image that will be used when deploying tca-gen2
    Required:      No

    Example inputs.yaml

    tag_version: nexus3.onap.org:10001/onap/org.onap.dcaegen2.analytics.tca-gen2.dcae-analytics-tca-web:1.0.1
    tca_handle_in_subscribe_url: "http://message-router:3904/events/unauthenticated.VES_MEASUREMENT_OUTPUT/"
    tca_handle_out_publish_url: "http://message-router:3904/events/unauthenticated.TCAGEN2_OUTPUT/"
    
  • Create deployment

    cfy install --blueprint-id tcagen2 --deployment-id tcagen2 -i /inputs/k8s-tcagen2-inputs.yaml /blueprints/k8s-tcagen2.yaml
    

To undeploy TCA-gen2, steps are shown below

  • Uninstall running TCA-gen2 and delete deployment
    cfy uninstall tcagen2
    
  • Delete blueprint
    cfy blueprints delete tcagen2
    
Helm Installation

The TCA-gen2 microservice can be deployed using helm charts in the oom repository.

Deployment Pre-requisites
  • DCAE and DMaaP pods should be up and running.

  • MongoDB should be up and running.

Deployment steps
  • Default app config values can be updated in oom/kubernetes/dcaegen2-services/components/dcae-tcagen2/values.yaml.

  • Make the chart and deploy using the following command:

    cd oom/kubernetes/
    make dcaegen2-services
    helm install dev-dcaegen2-services dcaegen2-services --namespace <namespace> --set global.masterPassword=<password>
    
  • To deploy only tcagen2:

    helm install dev-dcae-tcagen2 dcaegen2-services/components/dcae-tcagen2 --namespace <namespace> --set global.masterPassword=<password>
    
  • To Uninstall

    helm uninstall dev-dcae-tcagen2
    
Application Configurations

Configuration                                                       Description
------------------------------------------------------------------  ----------------------------------------------------------------
streams_subscribes                                                   DMaaP topics from which the MS will consume messages
streams_publishes                                                    DMaaP topics to which the MS will publish messages
streams_subscribes.tca_handle_in.polling.auto_adjusting.max          Max polling interval for consuming data from DMaaP
streams_subscribes.tca_handle_in.polling.auto_adjusting.min          Min polling interval for consuming data from DMaaP
streams_subscribes.tca_handle_in.polling.auto_adjusting.step_down    Step down of the polling interval for consuming data from DMaaP
streams_subscribes.tca_handle_in.polling.auto_adjusting.step_up      Step up of the polling interval for consuming data from DMaaP
spring.data.mongodb.uri                                              MongoDB URI
tca.aai.generic_vnf_path                                             AAI generic VNF path
tca.aai.node_query_path                                              AAI node query path
tca.aai.password                                                     AAI password
tca.aai.url                                                          AAI base URL
tca.aai.username                                                     AAI username
streams_subscribes.tca_handle_in.consumer_group                      DMaaP consumer group for subscription
streams_subscribes.tca_handle_in.consumer_ids[0]                     DMaaP consumer id for subscription
tca.policy                                                           Policy details
tca.processing_batch_size                                            Processing batch size
tca.enable_abatement                                                 Enable abatement
tca.enable_ecomp_logging                                             Enable ECOMP logging

Configuration

The following is the default configuration set for TCA-gen2 during deployment.

spring.data.mongodb.uri:
  get_input: spring.data.mongodb.uri
streams_subscribes.tca_handle_in.consumer_group:
  get_input: tca_consumer_group
streams_subscribes.tca_handle_in.consumer_ids[0]: c0
streams_subscribes.tca_handle_in.consumer_ids[1]: c1
streams_subscribes.tca_handle_in.message_limit: 50000
streams_subscribes.tca_handle_in.polling.auto_adjusting.max: 60000
streams_subscribes.tca_handle_in.polling.auto_adjusting.min: 30000
streams_subscribes.tca_handle_in.polling.auto_adjusting.step_down: 30000
streams_subscribes.tca_handle_in.polling.auto_adjusting.step_up: 10000
streams_subscribes.tca_handle_in.polling.fixed_rate: 0
streams_subscribes.tca_handle_in.timeout: -1
tca.aai.enable_enrichment: true
tca.aai.generic_vnf_path: aai/v11/network/generic-vnfs/generic-vnf
tca.aai.node_query_path: aai/v11/search/nodes-query
tca.aai.password:
  get_input: tca.aai.password
tca.aai.url:
  get_input: tca.aai.url
tca.aai.username:
  get_input: tca.aai.username
tca.policy: '{"domain":"measurementsForVfScaling","metricsPerEventName":[{"eventName":"vFirewallBroadcastPackets","controlLoopSchemaType":"VM","policyScope":"DCAE","policyName":"DCAE.Config_tca-hi-lo","policyVersion":"v0.0.1","thresholds":[{"closedLoopControlName":"ControlLoop-vFirewall-d0a1dfc6-94f5-4fd4-a5b5-4630b438850a","version":"1.0.2","fieldPath":"$.event.measurementsForVfScalingFields.vNicPerformanceArray[*].receivedTotalPacketsDelta","thresholdValue":300,"direction":"LESS_OR_EQUAL","severity":"MAJOR","closedLoopEventStatus":"ONSET"},{"closedLoopControlName":"ControlLoop-vFirewall-d0a1dfc6-94f5-4fd4-a5b5-4630b438850a","version":"1.0.2","fieldPath":"$.event.measurementsForVfScalingFields.vNicPerformanceArray[*].receivedTotalPacketsDelta","thresholdValue":700,"direction":"GREATER_OR_EQUAL","severity":"CRITICAL","closedLoopEventStatus":"ONSET"}]},{"eventName":"vLoadBalancer","controlLoopSchemaType":"VM","policyScope":"DCAE","policyName":"DCAE.Config_tca-hi-lo","policyVersion":"v0.0.1","thresholds":[{"closedLoopControlName":"ControlLoop-vDNS-6f37f56d-a87d-4b85-b6a9-cc953cf779b3","version":"1.0.2","fieldPath":"$.event.measurementsForVfScalingFields.vNicPerformanceArray[*].receivedTotalPacketsDelta","thresholdValue":300,"direction":"GREATER_OR_EQUAL","severity":"CRITICAL","closedLoopEventStatus":"ONSET"}]},{"eventName":"Measurement_vGMUX","controlLoopSchemaType":"VNF","policyScope":"DCAE","policyName":"DCAE.Config_tca-hi-lo","policyVersion":"v0.0.1","thresholds":[{"closedLoopControlName":"ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e","version":"1.0.2","fieldPath":"$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value","thresholdValue":0,"direction":"EQUAL","severity":"MAJOR","closedLoopEventStatus":"ABATED"},{"closedLoopControlName":"ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e","version":"1.0.2","fieldPath":"$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value","thresholdValue":0,"direction":"GREATER","severity":"CRITICAL","closedLoopEventStatus":"ONSET"}]}]}'
tca.processing_batch_size: 10000
tca.enable_abatement: true
tca.enable_ecomp_logging: true

Complete configuration and input defaults can be found on blueprint here - https://git.onap.org/dcaegen2/platform/blueprints/plain/blueprints/k8s-tcagen2.yaml

Functionality

TCA-gen2 is driven by the VES collector events published into Message Router. This Message Router topic is the source from which the application reads each incoming message. If a message meets the VES format (CEF, v28.4) as specified by the VES 5.4 standard, it is parsed, and if it contains a metric matching the policy configuration (denoted primarily by the "eventName" and the "fieldPath"), the value of the metric is compared to the "thresholdValue". If that comparison indicates that a Control Loop Event Message should be generated, the application outputs the alarm to the Message Router topic in a format that matches the interface spec defined for Control Loop by ONAP Policy.

Assumptions:

TCA-gen2 output will be similar to the R0 TCA/CDAP implementation, where a CL event is triggered each time the threshold rules are met.

In the context of the vCPE use case, the CLEAR event (aka ABATED event) is driven by a measured metric (i.e. packet loss equal to 0) rather than by the lapse of a threshold crossing event over some minimum number of measured intervals. Thus, this requirement can be accommodated by use of the low threshold with a policy of “direction = 0”. TCA-gen2 implementation will keep only the minimal state needed to correlate an ABATED event with the corresponding ONSET event. This correlation will be indicated by the requestID in the Control Loop Event Message.

TCA-gen2 can support multiple ONAP use cases. A single TCA instance can be deployed to support all three use cases: vFirewall, vDNS, and vCPE.

The following is the default tca.policy configuration set for TCA-gen2 during deployment.

{
    "domain": "measurementsForVfScaling",
    "metricsPerEventName": [{
        "eventName": "measurement_vFirewall-Att-Linkdownerr",
        "controlLoopSchemaType": "VM",
        "policyScope": "DCAE",
        "policyName": "DCAE.Config_tca-hi-lo",
        "policyVersion": "v0.0.1",
        "thresholds": [{
            "closedLoopControlName": "ControlLoop-vFirewall-d0a1dfc6-94f5-4fd4-a5b5-4630b438850a",
            "version": "1.0.2",
            "fieldPath": "$.event.measurementsForVfScalingFields.vNicPerformanceArray[*].receivedTotalPacketsDelta",
            "thresholdValue": 300,
            "direction": "LESS_OR_EQUAL",
            "severity": "MAJOR",
            "closedLoopEventStatus": "ONSET"
        }, {
            "closedLoopControlName": "ControlLoop-vFirewall-d0a1dfc6-94f5-4fd4-a5b5-4630b438850a",
            "version": "1.0.2",
            "fieldPath": "$.event.measurementsForVfScalingFields.vNicPerformanceArray[*].receivedTotalPacketsDelta",
            "thresholdValue": 700,
            "direction": "GREATER_OR_EQUAL",
            "severity": "CRITICAL",
            "closedLoopEventStatus": "ONSET"
        }]
    }, {
        "eventName": "vLoadBalancer",
        "controlLoopSchemaType": "VM",
        "policyScope": "DCAE",
        "policyName": "DCAE.Config_tca-hi-lo",
        "policyVersion": "v0.0.1",
        "thresholds": [{
            "closedLoopControlName": "ControlLoop-vDNS-6f37f56d-a87d-4b85-b6a9-cc953cf779b3",
            "version": "1.0.2",
            "fieldPath": "$.event.measurementsForVfScalingFields.vNicPerformanceArray[*].receivedTotalPacketsDelta",
            "thresholdValue": 300,
            "direction": "GREATER_OR_EQUAL",
            "severity": "CRITICAL",
            "closedLoopEventStatus": "ONSET"
        }]
    }, {
        "eventName": "Measurement_vGMUX",
        "controlLoopSchemaType": "VNF",
        "policyScope": "DCAE",
        "policyName": "DCAE.Config_tca-hi-lo",
        "policyVersion": "v0.0.1",
        "thresholds": [{
            "closedLoopControlName": "ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e",
            "version": "1.0.2",
            "fieldPath": "$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value",
            "thresholdValue": 0,
            "direction": "EQUAL",
            "severity": "MAJOR",
            "closedLoopEventStatus": "ABATED"
        }, {
            "closedLoopControlName": "ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e",
            "version": "1.0.2",
            "fieldPath": "$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value",
            "thresholdValue": 0,
            "direction": "GREATER",
            "severity": "CRITICAL",
            "closedLoopEventStatus": "ONSET"
        }]
    }]
}

For more details about the exact flows, please refer to the use cases wiki.

Delivery
Docker Container

TCA-GEN2 is delivered as a docker image that can be downloaded from the ONAP docker registry:

``docker run -d --name tca-gen2 -e CONFIG_BINDING_SERVICE_SERVICE_HOST=<IP Required> -e CONFIG_BINDING_SERVICE_SERVICE_PORT=<Port Required> -e HOSTNAME=<HOSTNAME>  nexus3.onap.org:10001/onap/org.onap.dcaegen2.analytics.tca-gen2.dcae-analytics-tca-web:<tag>``

The following additional microservices are available for on-demand deployment as needed by specific use cases; deployment instructions are provided under each microservice.

Installation

Deployment Prerequisite/dependencies

VES-Mapper can be deployed individually, though it will log errors if it cannot reach the DMaaP instance's APIs. For functional testing, DMaaP is the only required prerequisite outside DCAE. As VES-Mapper is integrated with Consul/CBS, it fetches its initial configuration from Consul.

Blueprint/model/image

VES-Mapper blueprint is available @ https://git.onap.org/dcaegen2/platform/blueprints/tree/blueprints/k8s-ves-mapper.yaml?h=guilin

VES-Mapper docker image is available in Nexus repo @ nexus3.onap.org:10001/onap/org.onap.dcaegen2.services.mapper.vesadapter.universalvesadaptor:latest

1. To run via blueprint

a. Verify DMaaP configurations in the blueprint as per setup

The DMaaP configuration consists of a subscribe URL, used to fetch notifications from the respective collector, and a publish URL, used to publish the VES event.

streams_publishes and streams_subscribes point to the publishing topic and the subscription topic respectively. Update these URLs in the blueprint as per your DMaaP configuration, for example as sketched below.
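The exact stream entries vary per blueprint; the following is a minimal illustrative sketch only (the stream names snmp_subscribe and ves_publish and the message-router address are placeholders, not values from the actual blueprint):

streams_subscribes:
  snmp_subscribe:
    type: message_router
    dmaap_info:
      topic_url: "http://message-router:3904/events/unauthenticated.ONAP-COLLECTOR-SNMPTRAP"
streams_publishes:
  ves_publish:
    type: message_router
    dmaap_info:
      topic_url: "http://message-router:3904/events/unauthenticated.VES_MEASUREMENT_OUTPUT"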

b. Verify the Smooks mapping configuration in the blueprint as per the use case. The blueprint contains a default mapping for each supported collector (currently the SNMP collector and the RESTConf collector), which may serve the purpose of the use case. The mapping-files entry for each collector contains the contents of the mapping file.

c. Upload the blueprint in the DCAE’s Cloudify instance

For this step, DCAE's Cloudify instance should be in a running state. Transfer the blueprint file into the DCAE bootstrap POD under the /blueprints directory, then log in to the DCAE bootstrap POD's main container.

Note

To do this, run the commands below:

  • To get the bootstrap pod name, run: kubectl get pods -n onap | grep bootstrap

  • To transfer the blueprint file into the bootstrap pod, run: kubectl cp <source file path> <bootstrap pod>:/blueprints -n onap

  • To log in to the bootstrap pod, run: kubectl exec -it <bootstrap pod> bash -n onap

Note

Verify the versions below before validating the blueprint:

  • If the version of the plugin used differs from the output of "cfy plugins list", update the blueprint import to match.

  • If the tag_version under inputs is old, update it to the latest.

Validate blueprint

cfy blueprints validate /blueprints/k8s-ves-mapper.yaml

Use following command for validated blueprint to upload:

cfy blueprints upload -b ves-mapper /blueprints/k8s-ves-mapper.yaml

d. Create the deployment. After VES-Mapper's validated blueprint is uploaded, create the Cloudify deployment with the following command:

cfy deployments create -b ves-mapper ves-mapper

e. Deploy the component using the following command:

cfy executions start -d ves-mapper install

To undeploy a running ves-mapper, follow the steps below.

a. cfy uninstall ves-mapper -f

Note

The deployment uninstall will also delete the blueprint. In some cases you might see a 400 error indicating that an active deployment exists, such as below.

Ex: An error occurred on the server: 400: Can’t delete deployment ves-mapper - There are running or queued executions for this deployment. Running executions ids: d89fdd0c-8e12-4dfa-ba39-a6187fcf2f18

b. In that case, cancel the running execution and then run uninstall, as below:

cfy executions cancel <Running executions ID>
cfy uninstall ves-mapper

2. To run in standalone mode

Though this is not the preferred way, the VES-Mapper container can be run in standalone mode, using the local configuration file carried in the docker image, with the following docker run command:

docker run -d   nexus3.onap.org:10003/onap/org.onap.dcaegen2.services.mapper.vesadapter.universalvesadaptor:1.1.0

Installation

An environment suitable for running docker containers is recommended.

Using Cloudify deployment

The first possibility is to use blueprints and cfy commands.

Deployment Prerequisite/dependencies

Make sure that cfy is installed and configured to work with the Cloudify deployment.

Make sure the Message Router and Data Router are running.

Deployment steps
  1. Execute bash on the Cloudify Manager kubernetes pod.

    kubectl -n onap exec -it <dev-dcaegen2-dcae-cloudify-manager> bash

  2. Download the dfc blueprint.

  3. Run the Cloudify install command to install dfc.

    cfy install <dfc-blueprint-path>

Sample output:

cfy install k8s-datafile.yaml

Run ‘cfy events list -e 37da3f5f-a06b-4ce8-84d3-8b64ccd81c33’ to retrieve the execution’s events/logs.

Validation

curl <dcaegen2-dcae-healthcheck> and check if datafile-collector is in ‘ready’ state.

Standalone deployment of a container

DFC is delivered as a docker container based on openjdk:8-jre-alpine. The host or VM that will run this container must have the docker application loaded and available to the userID that will be running the DFC container.

Also required is a working DMaaP/MR and DMaaP/DR environment. DFC subscribes to DMaaP/MR fileReady events as JSON messages and publishes the downloaded files to DMaaP/DR.
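For reference, a fileReady notification consumed by DFC looks roughly like the abridged sketch below; the field values are illustrative placeholders, and the VES event specification remains the authoritative format:

{
  "event": {
    "commonEventHeader": { "domain": "notification", "...": "..." },
    "notificationFields": {
      "changeIdentifier": "PM_MEA_FILES",
      "changeType": "FileReady",
      "arrayOfNamedHashMap": [{
        "name": "A20200701.1030+0200-1045+0200_node1.xml.gz",
        "hashMap": {
          "location": "ftpes://192.168.0.101:22/ftp/rop/A20200701.1030+0200-1045+0200_node1.xml.gz",
          "compression": "gzip",
          "fileFormatType": "org.3GPP.32.435#measCollec",
          "fileFormatVersion": "V10"
        }
      }]
    }
  }
}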

Installation

The following command will download the Frankfurt version of the datafile image from nexus and launch it in a container named "datafile":

docker run -d -p 8100:8100 -p 8433:8433 nexus3.onap.org:10001/onap/org.onap.dcaegen2.collectors.datafile.datafile-app-server:1.3.0

For another version, replace the tag '1.3.0' with any version that seems suitable (including latest). Available images are visible following this link.

Another option is to pull the image first, and then run the image’s container with the image ID:

docker pull nexus3.onap.org:10001/onap/org.onap.dcaegen2.collectors.datafile.datafile-app-server:latest

docker images | grep 'datafile'

docker run -d -p 8100:8100 -p 8433:8433 <image ID>

The grep command will display the images corresponding to DFC. There can be several, due to remotely or locally built images and to different tags, i.e. different versions.

Certificates

Installation

The following are the steps if manual deployment/undeployment is required.

Steps to deploy are shown below

To undeploy heartbeat, the steps are shown below:

  • Uninstall running heartbeat and delete deployment
    cfy uninstall heartbeat
    
  • Delete blueprint
    cfy blueprints delete heartbeat
    

Installation

PM Mapper is a microservice that is configured and instantiated through Cloudify Manager, either through the user interface or the command line tool. During instantiation, the PM Mapper fetches its configuration through the Config Binding Service. Steps to deploy using the CLI tool are shown below.

Deployment Prerequisite/dependencies
  • DCAE and DMaaP pods should be up and running.

  • DMaaP Bus Controller post install jobs should have completed successfully (executed as part of an OOM install).

  • Make sure that cfy is installed and configured to work with the Cloudify deployment.

Deployment steps

Enter the Cloudify Manager kubernetes pod

  • Download the PM Mapper blueprint onto the pod; this can be found in:

  • Create inputs.yaml

    Configuration of the service consists of generating an inputs file (YAML) which will be used as part of the Cloudify install. The PM-Mapper blueprints were designed with sane defaults for the majority of the fields. Below you will find examples of fields which can be configured, as well as fields which must be configured. The full list of configurable parameters can be seen within the blueprint file under "inputs".

    Property: client_id
    Sample value: dcae@dcae.onap.org
    Required: Yes
    Description: Information about the AAF user must be provided to enable publishing to authenticated topics.

    Property: client_password
    Sample value: <dcae_password>
    Required: Yes
    Description: The password for the given user, e.g. <dcae_password> is dcae@dcae.onap.org's password.

    Property: enable_http
    Sample value: true
    Required: No
    Description: By default, the PM-Mapper will only allow inbound queries over HTTPS; however, it is possible to configure it to enable HTTP as well.

    Property: tag_version
    Sample value: nexus3.onap.org:10001/onap/org.onap.dcaegen2.services.pm-mapper:1.0.1
    Required: No
    Description: The tag of the Docker image to be used when deploying the PM-Mapper.

    Property: pm-mapper-filter
    Sample value: {"filters": [{"pmDefVsn":"targetVersion", "nfType":"targetNodeType", "vendor":"targetVendor", "measTypes":["targetMeasType"]}]}
    Required: No
    Description: The default behavior of the PM-Mapper is to map all measType in the received PM XML files; however, it is possible to provide filtering configuration which reduces the VES event to the counters that the designer has expressed interest in.

    Example inputs.yaml

    client_id: dcae@dcae.onap.org
    client_password: <dcae_password>
    enable_http: false
    tag_version: nexus3.onap.org:10001/onap/org.onap.dcaegen2.services.pm-mapper:latest
    pm-mapper-filter: {"filters": []}
    
  • Create deployment

    cfy install --blueprint-id pm-mapper --deployment-id pm-mapper -i inputs.yaml k8s-pm-mapper.yaml
    

Installation

BBS-ep is delivered as a Spring-Boot application ready to be deployed in Docker (via docker-compose).

The following docker-compose.yaml file shows a default configuration. The file can be run using the docker-compose up command:

version: '3'
services:
  bbs-event-processor:
    image: onap/org.onap.dcaegen2.services.components.bbs-event-processor:latest
    container_name: bbs-event-processor
    hostname: bbs-event-processor
    ports:
    - 32100:8100
    environment:
      CONFIGS_DMAAP_CONSUMER_RE-REGISTRATION_DMAAPHOSTNAME: 10.133.115.190
      CONFIGS_DMAAP_CONSUMER_RE-REGISTRATION_DMAAPPORTNUMBER: 30227
      CONFIGS_DMAAP_CONSUMER_RE-REGISTRATION_DMAAPTOPICNAME: /events/unauthenticated.PNF_UPDATE
      CONFIGS_DMAAP_CONSUMER_RE-REGISTRATION_CONSUMERGROUP: foo
      CONFIGS_DMAAP_CONSUMER_RE-REGISTRATION_CONSUMERID: bar
      CONFIGS_DMAAP_CONSUMER_CPE-AUTHENTICATION_DMAAPHOSTNAME: 10.133.115.190
      CONFIGS_DMAAP_CONSUMER_CPE-AUTHENTICATION_DMAAPPORTNUMBER: 30227
      CONFIGS_DMAAP_CONSUMER_CPE-AUTHENTICATION_DMAAPTOPICNAME: /events/unauthenticated.CPE_AUTHENTICATION
      CONFIGS_DMAAP_CONSUMER_CPE-AUTHENTICATION_CONSUMERGROUP: foo
      CONFIGS_DMAAP_CONSUMER_CPE-AUTHENTICATION_CONSUMERID: bar
      CONFIGS_DMAAP_PRODUCER_DMAAPHOSTNAME: 10.133.115.190
      CONFIGS_DMAAP_PRODUCER_DMAAPPORTNUMBER: 30227
      CONFIGS_DMAAP_PRODUCER_DMAAPTOPICNAME: /events/unauthenticated.DCAE_CL_OUTPUT
      CONFIGS_AAI_CLIENT_AAIHOST: 10.133.115.190
      CONFIGS_AAI_CLIENT_AAIPORT: 30233
      CONFIGS_APPLICATION_PIPELINESPOLLINGINTERVALSEC: 30
      CONFIGS_APPLICATION_PIPELINESTIMEOUTSEC: 15
      CONFIGS_APPLICATION_RE-REGISTRATION_POLICYSCOPE: policyScope
      CONFIGS_APPLICATION_RE-REGISTRATION_CLCONTROLNAME: controlName
      CONFIGS_APPLICATION_CPE-AUTHENTICATION_POLICYSCOPE: policyScope
      CONFIGS_APPLICATION_CPE-AUTHENTICATION_CLCONTROLNAME: controlName
      CONFIGS_SECURITY_TRUSTSTOREPATH: KeyStore.jks
      CONFIGS_SECURITY_TRUSTSTOREPASSWORDPATH: KeyStorePass.txt
      CONFIGS_SECURITY_KEYSTOREPATH: KeyStore.jks
      CONFIGS_SECURITY_KEYSTOREPASSWORDPATH: KeyStorePass.txt
      LOGGING_LEVEL_ORG_ONAP_BBS: TRACE

BBS-ep can be dynamically deployed in DCAE's Cloudify environment via its blueprint deployment artifact.

The blueprint can be found in

Steps to deploy are shown below

  • Enter the Bootstrap POD

  • Validate blueprint
    cfy blueprints validate /blueprints/k8s-bbs-event-processor.yaml
    
  • Upload validated blueprint
    cfy blueprints upload -b bbs-ep /blueprints/k8s-bbs-event-processor.yaml
    
  • Create deployment
    cfy deployments create -b bbs-ep bbs-ep
    
  • Deploy blueprint
    cfy executions start -d bbs-ep install
    

To undeploy BBS-ep, steps are shown below

  • Uninstall running BBS-ep and delete deployment
    cfy uninstall bbs-ep
    
  • Delete blueprint
    cfy blueprints delete bbs-ep
    

Installation

The SON handler microservice can be deployed via a Cloudify blueprint using the bootstrap container of an existing DCAE deployment.

Deployment Prerequisites
  • The SON-Handler service requires the DMaaP and Policy components to be functional.

  • The SON-Handler service requires the following DMaaP topics to be present in the running DMaaP instance:

    1. PCI-NOTIF-TOPIC-NGHBR-LIST-CHANGE-INFO

    2. unauthenticated.SEC_FAULT_OUTPUT

    3. unauthenticated.SEC_MEASUREMENT_OUTPUT

    4. DCAE_CL_RSP

  • The policy model required for the SON-Handler service should be created and pushed to the Policy component. Steps for creating and pushing the policy model:

    1. Log in to the PDP container and execute

    kubectl exec -ti --namespace onap policy-pdp-0 bash
    

    2. Create policy model

    curl -k -v --silent -X PUT --header 'Content-Type: application/json' --header 'Accept: text/plain' --header 'ClientAuth: cHl0aG9uOnRlc3Q=' --header 'Authorization: Basic dGVzdHBkcDphbHBoYTEyMw==' --header 'Environment: TEST' -d '{
     "policyName": "com.PCIMS_CONFIG_POLICY",
     "configBody": "{ \"PCI_NEIGHBOR_CHANGE_CLUSTER_TIMEOUT_IN_SECS\":60, \"PCI_MODCONFIG_POLICY_NAME\":\"ControlLoop-vPCI-fb41f388-a5f2-11e8-98d0-529269fb1459\", \"PCI_OPTMIZATION_ALGO_CATEGORY_IN_OOF\":\"OOF-PCI-OPTIMIZATION\", \"PCI_SDNR_TARGET_NAME\":\"SDNR\" }",
     "policyType": "Config", "attributes" : { "matching" : { "key1" : "value1" } },
     "policyConfigType": "Base",
     "onapName": "DCAE",
     "configName": "PCIMS_CONFIG_POLICY",
     "configBodyType": "JSON" }' 'https://pdp:8081/pdp/api/createPolicy'
    

    3. Push policy model

     curl -k -v --silent -X PUT --header 'Content-Type: application/json' --header 'Accept: text/plain' --header 'ClientAuth: cHl0aG9uOnRlc3Q=' --header 'Authorization: Basic dGVzdHBkcDphbHBoYTEyMw==' --header 'Environment: TEST' -d '{
    "policyName": "com.PCIMS_CONFIG_POLICY",
    "policyType": "Base"}' 'https://pdp:8081/pdp/api/pushPolicy'
    

    4. Verify config policy is present

    curl -k -v --silent -X POST --header 'Content-Type: application/json' --header 'Accept: application/json' --header 'ClientAuth: cHl0aG9uOnRlc3Q=' --header 'Authorization: Basic dGVzdHBkcDphbHBoYTEyMw==' --header 'Environment: TEST' -d '{ "configName": "PCIMS_CONFIG_POLICY",    "policyName": "com.Config_PCIMS_CONFIG_POLICY1*",    "requestID":"e65cc45a-9efb-11e8-98d0-529269ffa459"  }' 'https://pdp:8081/pdp/api/getConfig'
    
Deployment steps
1. Using DCAE Dashboard
  • Log in to the DCAE Dashboard (https://{k8s-nodeip}:30418/ccsdk-app/login_external.htm)

  • Go to Inventory –> Blueprints

  • Click on Deploy Action for son-handler blueprint

  • Override the value of ‘tag_version’ to ‘nexus3.onap.org:10001/onap/org.onap.dcaegen2.services.son-handler:2.1.5’ and click deploy.

  • Deployment logs can be viewed under Deployments section

2. Using cloudify commands

  • Log in to the bootstrap container

    kubectl exec -ti --namespace onap <bootstrap pod name> bash

  • The blueprint can be found under dcaegen2 blueprint repo and is part of bootstrap container. (https://gerrit.onap.org/r/dcaegen2/platform/blueprints)

  • Deploy the microservice into the cloudify using the following command

    cfy install -d sonhms -b sonhms <blueprint file path>

  • Deployment status of the microservice can be found from kubernetes pods status (MS will be deployed as a k8s pod in the kubernetes environment under the same namespace as the DCAE environment).

    kubectl get pods --namespace onap

  • To uninstall the microservice

    cfy uninstall sonhms

  • To delete the blueprint from the cloudify instance

    cfy blueprints delete sonhms

Application Configurations

  • streams_subscribes: DMaaP topics from which the MS consumes messages

  • streams_publishes: DMaaP topics to which the MS publishes messages

  • postgres.host: Host where the postgres database is running

  • postgres.port: Port on which the postgres database is listening

  • postgres.username: Postgres username

  • postgres.password: Postgres password

  • sonhandler.pollingInterval: Polling interval for consuming DMaaP messages

  • sonhandler.pollingTimeout: Polling timeout for consuming DMaaP messages

  • sonhandler.numSolutions: Number of solutions for OOF optimization

  • sonhandler.minCollision: Minimum collision criteria to trigger OOF

  • sonhandler.minConfusion: Minimum confusion criteria to trigger OOF

  • sonhandler.maximumClusters: Maximum number of clusters the MS can process

  • sonhandler.badThreshold: Bad threshold for handover success rate

  • sonhandler.poorThreshold: Poor threshold for handover success rate

  • sonhandler.namespace: Namespace where the MS is going to be deployed

  • sonhandler.sourceId: Source ID of the microservice (to OOF)

  • sonhandler.dmaap.server: Location of message routers

  • sonhandler.bufferTime: Buffer time for the MS to wait for notifications

  • sonhandler.cg: DMaaP consumer group for subscription

  • sonhandler.cid: DMaaP consumer ID for subscription

  • sonhandler.configDbService: Location of config DB (protocol, host & port)

  • sonhandler.oof.service: Location of OOF (protocol, host & port)

  • sonhandler.optimizers: Optimizer to trigger in OOF

  • sonhandler.poorCountThreshold: Threshold for the number of times poorThreshold can be recorded for the cell

  • sonhandler.badCountThreshold: Threshold for the number of times badThreshold can be recorded for the cell

  • sonhandler.oofTriggerCountTimer: Timer for OOF triggered count, in minutes

  • sonhandler.policyRespTimer: Timer to wait for notification from policy

  • sonhandler.policyNegativeAckThreshold: Maximum number of negative acknowledgements from policy for a given cell

  • sonhandler.policyFixedPciTimeInterval: Time interval to trigger OOF with fixed PCI cells

  • sonhandler.nfNamingCode: Parameter to filter FM and PM notifications coming from VES
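For illustration only, a fragment of the resolved application configuration for a few of these keys might look like the sketch below; all values and service addresses are placeholders, not defaults taken from the blueprint:

{
  "sonhandler.pollingInterval": 20,
  "sonhandler.pollingTimeout": 60,
  "sonhandler.numSolutions": 5,
  "sonhandler.cg": "sonhms-cg",
  "sonhandler.cid": "sonhms-cid",
  "sonhandler.configDbService": "http://configdb:8080",
  "sonhandler.oof.service": "https://oof-osdf:8698"
}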

Installation

Standalone docker run command
docker run onap/org.onap.dcaegen2.collectors.restconfcollector

For the current release, the RESTConf collector is a DCAE component that can be dynamically deployed via Cloudify blueprint installation.

Steps to deploy are shown below

  • Enter the Bootstrap POD using kubectl

    Note

    For doing this, follow the below steps

    • First, get the bootstrap pod name by running: kubectl get pods -n onap | grep bootstrap

    • Then log in to the bootstrap pod by running: kubectl exec -it <bootstrap pod> bash -n onap

  • Validate blueprint

    Note

    Verify that the version of the plugin used matches the output of "cfy plugins list", and use an explicit URL to the plugin YAML file in the blueprint if needed.

    cfy blueprints validate /blueprints/k8s-restconf.yaml
    
  • Upload validated blueprint
    cfy blueprints upload -b restconfcollector /blueprints/k8s-restconf.yaml
    
  • Create deployment
    cfy deployments create -b restconfcollector restconfcollector
    
  • Deploy blueprint
    cfy executions start -d restconfcollector install
    

To undeploy restconfcollector, steps are shown below

  • Uninstall running restconfcollector and delete deployment
    cfy uninstall restconfcollector
    
  • Delete blueprint
    cfy blueprints delete restconfcollector
    

Installation

An environment suitable for running docker containers is recommended. If that is not available, the SNMPTRAP source can be downloaded and run in a VM or on bare metal.

Both scenarios are documented below.

As a docker container

trapd is delivered as a docker container based on python 3.6. The host or VM that will run this container must have the docker application loaded and available to the userID that will be running the SNMPTRAP container.

If running from a docker container, it is assumed that Config Binding Service has been installed and is successfully providing valid configuration assets to instantiated containers as needed.

Also required is a working DMAAP/MR environment. trapd publishes traps to DMAAP/MR as JSON messages and expects the host resources and publishing credentials to be included in the Config Binding Service config.

Installation

The following command will download the latest trapd container from nexus and launch it in the container named “trapd”:

docker run --detach -t --rm -p 162:6162/udp -P --name=trapd nexus3.onap.org:10001/onap/org.onap.dcaegen2.collectors.snmptrap:2.0.3 ./bin/snmptrapd.sh start

Running an instance of trapd will result in arriving traps being published to the topic specified by the Config Binding Service.

Standalone

trapd can also be run outside of a container environment, without CBS interactions. If CBS is not present, SNMPTRAP will look for a JSON configuration file specified via the environment variable CBS_SIM_JSON at startup. The location of this file should be specified as a relative path from the <SNMPTRAP base directory>/bin directory, e.g. as shown below.
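For example, to point trapd at the sample configuration shipped with the source (the same file referenced in the configuration steps below):

export CBS_SIM_JSON=../etc/snmptrapd.json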

Installation
Prerequisites

trapd requires the following to run in a non-docker environment:

  • Python 3.6+

  • Python module “pysnmp” 4.4.5

  • Python module “requests” 2.18.3

To install prerequisites:

export PATH=<path to Python 3.6 binary>:$PATH

pip3 install --no-cache-dir requests==2.18.3

pip3 install --no-cache-dir pysnmp==4.4.5

Download latest trapd version from Gerrit

Download a copy of the latest trapd source from gerrit into its standard runtime location:

cd /opt/app

git clone --depth 1 ssh://<your linux foundation id>@gerrit.onap.org:29418/dcaegen2/collectors/snmptrap snmptrap

“Un-dockerize”

mv /opt/app/snmptrap/snmptrap /opt/app/snmptrap/bin

Configure for your environment

In a non-docker environment, ONAP trapd is controlled by a locally hosted JSON configuration file. It is referenced in the trapd startup script as:

CBS_SIM_JSON=../etc/snmptrapd.json

This file should be in the exact same format as the response from CBS in a fully implemented container/controller environment. A sample file is included with source/container images, at:

/opt/app/snmptrap/etc/snmptrapd.json

Make applicable changes to this file. Typically, the things that will need to change include:

"topic_url": "http://localhost:3904/events/ONAP-COLLECTOR-SNMPTRAP"

Action: Change ‘localhost’ and topic name (ONAP-COLLECTOR-SNMPTRAP) to desired values in your environment.

"snmpv3_config" (needed only when SNMPv3 agents are present)

Action: Add/delete/modify entries as needed to align with SNMP agent configurations in a SNMPv3 environment.

Start the application

nohup /opt/app/snmptrap/bin/snmptrapd.sh start > /opt/app/snmptrap/logs/snmptrapd.out 2>&1 &

Installation

In Guilin, the PMSH can be deployed using the DCAE Dashboard or via CLI. Steps to deploy using CLI will be shown below.

Deployment Prerequisites

In order to successfully deploy the PMSH, one will need administrator access to the kubernetes cluster, as the following procedure is run from the dcae-bootstrap pod. In addition, the following components are required to be running. They can be verified by running the health checks.

  • DCAE Platform

  • DMaaP

  • A&AI

  • AAF

The robot healthcheck can be run from one of the Kubernetes controllers.

./oom/kubernetes/robot/ete-k8s.sh onap health
Deployment Procedure

To deploy the PMSH in the Frankfurt release, the monitoring policy needs to be pushed directly to Consul. To begin, kubectl exec onto the dcae-bootstrap pod and move to the /tmp directory.

kubectl exec -itn <onap-namespace> onap-dcae-bootstrap bash

For information on creating a monitoring policy see Subscription configuration.

The following JSON is an example monitoring policy.

{
   "subscription":{
      "subscriptionName":"subscriptiona",
      "administrativeState":"UNLOCKED",
      "fileBasedGP":15,
      "fileLocation":"/pm/pm.xml",
      "nfFilter":{
         "nfNames":[
            "^pnf1.*"
         ],
         "modelInvariantIDs":[
            "5845y423-g654-6fju-po78-8n53154532k6",
            "7129e420-d396-4efb-af02-6b83499b12f8"
         ],
         "modelVersionIDs":[
            "e80a6ae3-cafd-4d24-850d-e14c084a5ca9"
         ]
      },
      "measurementGroups":[
         {
            "measurementGroup":{
               "measurementTypes":[
                  {
                     "measurementType":"countera"
                  },
                  {
                     "measurementType":"counterb"
                  }
               ],
               "managedObjectDNsBasic":[
                  {
                     "DN":"dna"
                  },
                  {
                     "DN":"dnb"
                  }
               ]
            }
         },
         {
            "measurementGroup":{
               "measurementTypes":[
                  {
                     "measurementType":"counterc"
                  },
                  {
                     "measurementType":"counterd"
                  }
               ],
               "managedObjectDNsBasic":[
                  {
                     "DN":"dnc"
                  },
                  {
                     "DN":"dnd"
                  }
               ]
            }
         }
      ]
   }
}

The monitoring-policy.json can then be PUT with the following curl request.

curl -X PUT http://consul:8500/v1/kv/dcae-pmsh:policy \
    -H 'Content-Type: application/json' \
    -d @monitoring-policy.json

To deploy the PMSH microservice using the deployment handler API, the serviceTypeId is needed. This can be retrieved using the inventory API.

curl -k https://inventory:8080/dcae-service-types \
    | grep k8s-pmsh | jq '.items[] | select(.typeName == "k8s-pmsh") | .typeId'

Finally, deploy the PMSH via the DCAE deployment handler.

curl -k https://deployment-handler:8443/dcae-deployments/dcae-pmsh \
    -H 'Content-Type: application/json' \
    -d '{
        "inputs": (),
        "serviceTypeId": "<k8s-pmsh-typeId>"
    }'

Deployment Steps

Pre-requisite

Make sure DCAE postgres is properly deployed and functional, and that an external database, such as Elasticsearch or MongoDB, is deployed. MongoDB can be installed with the following command.

#docker run -itd --restart=always --name dl-mongo -p 27017:27017 mongo

For DES service deployment, a Presto service is also required. Here is a sample of how to deploy Presto in the environment.
Build a presto image:

The Presto package version used here is v0.0.2: presto-v0.0.2.tar.gz

#docker build -t presto:v0.0.2 .
#docker tag presto:v0.0.2 registry.baidubce.com/onap/presto:v0.0.2
#docker push registry.baidubce.com/onap/presto:v0.0.2

Note: Replace the repository path with your own repository.

Install presto service:

#kubectl -n onap run dl-presto --image=registry.baidubce.com/onap/presto:v0.0.2 --env="MongoDB_IP=192.168.235.11" --env="MongoDB_PORT=27017"
#kubectl -n onap expose deployment dl-presto --port=9000 --target-port=9000 --type=NodePort

Note: You can replace the MongoDB_IP and MongoDB_PORT values with your own configuration.

After DataLake is deployed, the admin UI can be used to configure the sink database address and credentials.

Log-in to the DCAE Bootstrap POD

First, find the bootstrap pod name with the following command and make sure that the DCAE Cloudify manager is properly deployed.
#kubectl get pods -n onap | grep bootstrap
Log in to the DCAE bootstrap pod with the following command.
#kubectl exec -it <DCAE bootstrap pod> /bin/bash -n onap

Validate Blueprint

Before uploading the blueprints to Cloudify manager, the blueprints should first be validated with the following commands.
#cfy blueprint validate /blueprints/k8s-datalake-feeder.yaml
#cfy blueprint validate /blueprints/k8s-datalake-admin-ui.yaml
#cfy blueprint validate /blueprints/k8s-datalake-des.yaml

Upload the Blueprint to Cloudify Manager.

After validation, we can proceed to upload the blueprints.
#cfy blueprint upload -b dl-feeder /blueprints/k8s-datalake-feeder.yaml
#cfy blueprint upload -b dl-admin-ui /blueprints/k8s-datalake-admin-ui.yaml
#cfy blueprint upload -b des /blueprints/k8s-datalake-des.yaml

Verify Uploaded Blueprints

Use "cfy blueprint list" to verify your work.
#cfy blueprint list
The returned listing shows that the blueprints have been correctly uploaded.

Verify Plugin Versions

If the version of the plugin used is different, update the blueprint import to match.
#cfy plugins list

Create Deployment

Here we are going to create deployments for the feeder, the admin UI, and DES.
#cfy deployments create -b dl-feeder feeder-deploy
#cfy deployments create -b dl-admin-ui admin-ui-deploy
#cfy deployments create -b des des

Launch Service

Next, we are going to launch the DataLake services.
#cfy executions start -d feeder-deploy install
#cfy executions start -d admin-ui-deploy install
#cfy executions start -d des install

Verify the Deployment Result

The following command can be used to view the DataLake logs.

#kubectl logs <datalake-pod> -n onap

If you find any Java exceptions in the log, make sure that the external database and the DataLake configuration are properly configured. The admin UI can be used to configure the external database settings.

Uninstall

Uninstall running component and delete deployment
#cfy uninstall feeder-deploy
#cfy uninstall admin-ui-deploy
#cfy uninstall des

Delete Blueprint

#cfy blueprints delete dl-feeder
#cfy blueprints delete dl-admin-ui
#cfy blueprints delete des

DCAE Deployment Validation

Check Deployment Status

The healthcheck service is exposed as a Kubernetes ClusterIP Service named dcae-healthcheck. The service can be queried for status as shown below.

$ curl dcae-healthcheck
{
  "type": "summary",
  "count": 14,
  "ready": 14,
  "items": [
     {
       "name": "dev-dcaegen2-dcae-cloudify-manager",
       "ready": 1,
       "unavailable": 0
     },
     {
       "name": "dev-dcaegen2-dcae-config-binding-service",
       "ready": 1,
       "unavailable": 0
     },
     {
       "name": "dev-dcaegen2-dcae-inventory-api",
       "ready": 1,
       "unavailable": 0
     },
     {
       "name": "dev-dcaegen2-dcae-servicechange-handler",
       "ready": 1,
       "unavailable": 0
     },
     {
       "name": "dev-dcaegen2-dcae-deployment-handler",
       "ready": 1,
       "unavailable": 0
     },
     {
       "name": "dev-dcaegen2-dcae-policy-handler",
       "ready": 1,
       "unavailable": 0
     },
     {
       "name": "dep-dcae-ves-collector",
       "ready": 1,
       "unavailable": 0
     },
     {
       "name": "dep-dcae-tca-analytics",
       "ready": 1,
       "unavailable": 0
     },
     {
       "name": "dep-dcae-prh",
       "ready": 1,
       "unavailable": 0
     },
     {
       "name": "dep-dcae-hv-ves-collector",
       "ready": 1,
       "unavailable": 0
     },
     {
       "name": "dep-dcae-dashboard",
       "ready": 1,
       "unavailable": 0
     },
     {
       "name": "dep-dcae-snmptrap-collector",
       "ready": 1,
       "unavailable": 0
     },
     {
       "name": "dep-holmes-engine-mgmt",
       "ready": 1,
       "unavailable": 0
     },
     {
       "name": "dep-holmes-rule-mgmt",
       "ready": 1,
       "unavailable": 0
     }
   ]
 }

Data Flow Verification

After the platform is assessed as healthy, the next step is to check the functionality of the system. This can be monitored at a number of “observation” points.

  1. Incoming VNF data into the VES Collector can be verified through logs using kubectl

    kubectl logs -f -n onap <vescollectorpod> dcae-ves-collector

Note

To get the “vescollectorpod” run this command: kubectl -n onap get pods | grep dcae-ves-collector

  2. Check VES Output

    VES publishes received VNF data, after authentication and syntax check, onto the DMaaP Message Router. To check this is happening we can subscribe to the publishing topic.

    1. Run the subscription command to subscribe to the topic: curl -H "Content-Type:text/plain" -k -X GET https://{{K8S_NODEIP}}:30226/events/unauthenticated.VES_MEASUREMENT_OUTPUT/group1/C1?timeout=50000. The actual format and use of the Message Router API can be found in the DMaaP project documentation.
      • When there are messages being published, this command returns with the JSON array of messages;

      • If no message is published before the timeout expires (i.e. 50000 milliseconds as in the example above), the call returns an empty JSON array;

      • It may be useful to run this command in a loop: while :; do curl -H "Content-Type:text/plain" -k -X GET https://{{K8S_NODEIP}}:30226/events/unauthenticated.VES_MEASUREMENT_OUTPUT/group1/C1?timeout=50000; echo; done;

  3. Check TCA Output

    TCA also publishes its events to Message Router, under the topic "unauthenticated.DCAE_CL_OUTPUT". The same Message Router subscription command can be used for checking the messages being published by TCA:

      • Run the subscription command to subscribe to the topic: curl -H "Content-Type:text/plain" -k -X GET https://{{K8S_NODEIP}}:30226/events/unauthenticated.DCAE_CL_OUTPUT/group1/C1?timeout=50000.

      • Or run the command in a loop: while :; do curl -H "Content-Type:text/plain" -k -X GET https://{{K8S_NODEIP}}:30226/events/unauthenticated.DCAE_CL_OUTPUT/group1/C1?timeout=50000; echo; done;

Logging

DCAE logging is available at several levels; most DCAE components are compliant with the EELF logging standard and generate debug, audit, and metric logs.

Platform Components Logging

As all the platform components are containerized and deployed under Kubernetes as pods, the corresponding log information can be accessed using kubectl logs -n onap <pod_name>.

More detailed audit/debug logs can be found within the pod.

Component Logging

Please refer to the individual service component webpage for more information. In general, the logs of a service component can be accessed using kubectl logs -n onap <pod_name>.

DCAE Health Check

OOM Deployment

In OOM deployments, DCAE healthchecks are performed by a separate service, dcae-healthcheck. This service is packaged into a Docker image (onap/org.onap.dcaegen2.deployments.healthcheck-container), which is built in the healthcheck-container module of the dcaegen2/deployments repository.

The service is deployed with a Helm chart (oom/kubernetes/dcaegen2/charts/dcae-healthcheck) when DCAE is deployed using OOM.

The dcae-healthcheck container runs a service that exposes a simple Web API. In response to a request, the service checks Kubernetes to verify that all of the expected DCAE platform and service components are in a ready state. The service has a fixed list of platform and service components that are normally deployed when DCAE is first installed, including components deployed with Helm charts and components deployed using Cloudify blueprints. In addition, the healthcheck service tracks and checks components that are deployed dynamically using Cloudify blueprints after the initial DCAE installation.

The healthcheck service is exposed as a Kubernetes ClusterIP Service named dcae-healthcheck. The service can be queried for status as shown below.

Note

Run the below commands before running “curl dcae-healthcheck”

  • To get the dcae-healthcheck pod name, run this: kubectl get pods -n onap | grep dcae-healthcheck

  • Then enter in to the shell of the container, run this: kubectl exec -it <dcae-healthcheck pod> -n onap bash

$ curl dcae-healthcheck
The response is the same summary JSON shown in the Check Deployment Status section above.

DCAE SDK

Since the Dublin release, DCAE has introduced new SDKs to aid component development. The SDK is a common software development kit written in Java. It contains various utilities and clients which may be used for getting configuration from CBS, consuming messages from DMaaP, etc.

SDK Overview

Architecture

Introduction

As most services and collectors deployed on the DCAE platform rely on similar microservices, a common Software Development Kit has been created. It contains utilities and clients which may be used for getting configuration from CBS, consuming messages from DMaaP, etc. The SDK is written in Java.

Some functions that are common across different services are targeted to be built as separate libraries.

Reactive programming

Most of the SDK APIs use Project Reactor, which is one of the available implementations of Reactive Streams (as is Java 9 Flow). Thanks to this, the SDK supports both high-performance non-blocking asynchronous clients and old-school thread-bound blocking clients. Reactive programming can solve many cloud-specific problems, if used properly.

Before using the DCAE SDK, please take a moment to read the Project Reactor documentation. You should also skim through the methods available in Flux and Mono.

Rx short intro

For a general introduction, please read the 3rd section of the Reactor documentation.

Some general notes:
  • In Project Reactor you have two reactive stream types at your disposal: Mono, which may emit at most one element, and Flux, which may emit zero, a finite, or an infinite number of elements.

  • Both of them may terminate with an error, in which case the stream ends immediately. After a stream is terminated (normally or because of an error) it won't emit any new elements. You may use retry operators to resubscribe to events in case of an error. In a cloud environment retryWhen is especially useful: you may use it together with the reactor-extra retry functionality in order to support a more advanced reaction to an unreachable peer microservice.

  • If you do not have any background in functional operators like map and flatMap, please take the time to understand them (see the short sketch after this list). Their general meaning is similar to the Java 8 Streams API. They are the most common operators used in reactive applications. flatMap in particular is very powerful despite its simplicity.

  • There is a large group of operators which deal with the time dimension of the stream, e.g. buffer, window, delay*, timeout, etc.

  • Be aware that calling aggregation operators (count, collect, etc.) on an infinite Flux makes no sense. In the worst-case scenario you can kill the JVM with an OutOfMemory error.

  • There is a nice intro to operators in Appendix A of the Reactor documentation. You should also learn how to read Marble Diagrams, which concisely describe operators in graphical form. Fortunately, they are quite easy to grasp.

  • Do not block in any of the handlers passed to operators defined by Reactor. The library uses a set of Schedulers (think thread pools) which are suitable for different jobs. More details can be found in the documentation. If possible, try to use non-blocking APIs.

  • Most operators support back-pressure. That means that the demand for new messages is signaled by downstream subscribers. For instance, if you have flux.flatMap(this::doThis).map(this::doThat).subscribe() and doThis is very slow, it will not request many items from the source flux, and it will emit items at its own pace for doThat to process. So usually no buffering or blocking is needed between flux and doThis.

  • (Almost) nothing will happen without subscribing to the Flux/Mono. These reactive streams are lazy, so demand is signaled only when a subscription is made, i.e. by means of the subscribe or block* methods.

  • If you are going fully reactive, then you should probably not call subscribe/block anywhere in your code. For instance, when using Reactor Netty or Spring WebFlux you should return Mono/Flux from your core methods, and it will be subscribed to somewhere by the library you are using.

  • Return Mono<Void> when you want the method to return a handle to some processing being done. You may be tempted to return Disposable (the result of subscribe()) but it won't compose nicely in a reactive flow. Using the then() operator is generally better, as you can handle onComplete and onError events in the client code.
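To make the operator notes above concrete, here is a minimal Reactor sketch (an illustration only; it assumes reactor-core is on the classpath):

import reactor.core.publisher.Flux;

Flux.just("a", "b", "c")
        .map(String::toUpperCase)           // map: one-to-one synchronous transformation -> A, B, C
        .flatMap(s -> Flux.just(s, s + s))  // flatMap: one-to-many, merges inner publishers -> A, AA, B, BB, C, CC
        .subscribe(System.out::println);    // nothing is emitted until subscription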

Handling errors in reactive streams

As noted above, a reactive stream (Flux/Mono) terminates on the first exception in any of the stream operators. For instance, if Flux.map throws an exception, downstream operators won't receive an onNext event; an onError event will be propagated instead. It is a terminal event, so the stream will be finished. This fail-fast behavior is a reasonable default, but sometimes you will want to avoid it. For instance, when polling for updates from a remote service you may want to retry the call when the remote service is unavailable at a given moment. In such cases you can retry the stream using one of the retry* operators.

// Simple retry on error with an error type check.
// It will immediately retry a stream failing with IOException.
public Mono<String> fetchSomething() {
    Mono<Response> restResponse = ...
    return restResponse
            .retry(ex -> ex instanceof IOException)
            .map(resp -> ...);
}

// Fancy retry using reactor-extra library
// It will retry stream on IOException after some random time as specified in randomBackoff JavaDoc
public Mono<String> fetchSomething() {
    Mono<Response> restResponse = ...
    Retry retry = Retry.anyOf(IOException.class)
                 .randomBackoff(Duration.ofMillis(100), Duration.ofSeconds(60));
    return restResponse
            .retryWhen(retry)
            .map(resp -> ...);
}
Environment substitution

The CBS client has the ability to insert environment variables into the loaded application configuration. Environment variables in the configuration file must be in the format ${ENV_NAME}, for example:

streams_publishes:
  perf3gpp:
    testArray:
      - testPrimitiveArray:
          - "${AAF_USER}"
          - "${AAF_PASSWORD}"
Libraries
DmaaP-MR Client
  • Support for DmaaP MR publish and subscribe

  • Support for DmaaP configuration fetching from Consul

  • Support for authenticated topics pub/sub

  • Standardized logging

ConfigBindingService Client

A thin client wrapper to fetch the configuration, based on properties exposed during deployment, from a file on a configMap/volume, or from the CBS API if the configMap/volume does not exist. It also provides the option to periodically query and capture any new configuration changes to be returned to the application.

Crypt Password

Library to generate and match cryptographic passwords using the BCrypt algorithm.

High Volume VES Collector Client Producer

A reference Java implementation of a High Volume VES Collector client. This library is used in the xNF simulator, which helps us test the HV-VES Collector in CSIT tests. You may use it as a reference when implementing your code in a non-JVM language, or directly when using Java/Kotlin/etc.

External Schema Manager

Library to validate JSON with mapping of external schemas to local schema files.

Available APIs

cbs-client - a Config Binding Service client

CbsClientFactory can be used to look up CBS in your application. The returned CbsClient can then be used to get the configuration, poll for the configuration, or poll for configuration changes.

The following CBS endpoints are supported by means of different CbsRequests:
  • get-configuration created by CbsRequests.getConfiguration method - returns the service configuration

  • get-by-key created by CbsRequests.getByKey method - returns componentName:key entry from Consul

  • get-all created by CbsRequests.getAll method - returns everything which relates to the service (configuration, policies, etc.)

Sample usage:

// Generate RequestID and InvocationID which will be used when logging and in HTTP requests
final RequestDiagnosticContext diagnosticContext = RequestDiagnosticContext.create();
final CbsRequest request = CbsRequests.getConfiguration(diagnosticContext);

// Read necessary properties from the environment
final CbsClientConfiguration config = CbsClientConfiguration.fromEnvironment();

// Create the client and use it to get the configuration
CbsClientFactory.createCbsClient(config)
        .flatMap(cbsClient -> cbsClient.get(request))
        .subscribe(
                jsonObject -> {
                    // do stuff with your JSON configuration using the GSON API
                    final int port = Integer.parseInt(jsonObject.get("collector.listen_port").getAsString());
                    // ...
                },
                throwable -> {
                    logger.warn("Ooops", throwable);
                });

Note that the subscribe handler can/will be called in a separate thread, asynchronously, after the CBS address lookup succeeds and the CBS service call returns a result.

If you are interested in calling CBS periodically and reacting only when the configuration has changed, you can use the updates method:

// Generate RequestID and InvocationID which will be used when logging and in HTTP requests
final RequestDiagnosticContext diagnosticContext = RequestDiagnosticContext.create();
final CbsRequest request = CbsRequests.getConfiguration(diagnosticContext);

// Read necessary configuration from the environment
final CbsClientConfiguration config = CbsClientConfiguration.fromEnvironment();

// Polling properties
final Duration initialDelay = Duration.ofSeconds(5);
final Duration period = Duration.ofMinutes(1);

// Create the client and use it to get the configuration
CbsClientFactory.createCbsClient(config)
        .flatMapMany(cbsClient -> cbsClient.updates(request, initialDelay, period))
        .subscribe(
                jsonObject -> {
                    // do stuff with your JSON configuration using the GSON API
                    final int port = Integer.parseInt(jsonObject.get("collector.listen_port").getAsString());
                    // ...
                },
                throwable -> {
                    logger.warn("Ooops", throwable);
                });

The most significant change is the use of flatMapMany: we use it since we want to map one CbsClient to many JsonObject updates. After 5 seconds, CbsClient will call CBS every minute. If the configuration has changed, it will pass the JsonObject downstream; in our case the consumer of the JsonObject will be called.

Parsing streams’ definitions:

  • The CBS configuration response contains various service-specific entries. It also contains standardized DCAE stream definitions in the streams_publishes and streams_subscribes JSON objects. The CBS Client API provides a way of parsing this part of the configuration so you can use Java objects instead of the low-level GSON API.

  • Because stream definitions are simple value objects, we were not able to provide a nice polymorphic API. Instead, you have a 2-level API at your disposal:
    • You can extract raw streams by means of DataStreams.namedSinks (for streams_publishes) and DataStreams.namedSources (for streams_subscribes).

    • Then you can parse a specific entry from the returned collection to the desired stream type by means of parsers built by the StreamFromGsonParsers factory.

  • Sample usage:

    final CbsRequest request = CbsRequests.getConfiguration(RequestDiagnosticContext.create());
    final StreamFromGsonParser<MessageRouterSink> mrSinkParser = StreamFromGsonParsers.messageRouterSinkParser();
    
    CbsClientFactory.createCbsClient(CbsClientConfiguration.fromEnvironment())
        .flatMapMany(cbsClient -> cbsClient.updates(request, Duration.ofSeconds(5), Duration.ofMinutes(1)))
        .map(DataStreams::namedSinks)
        .map(sinks -> sinks.filter(StreamPredicates.streamOfType(MESSAGE_ROUTER)).map(mrSinkParser::unsafeParse).toList())
        .subscribe(
            mrSinks -> mrSinks.forEach(mrSink -> {
                logger.info(mrSink.name()); // name = the configuration key
                logger.info(mrSink.aafCredentials().username()); // = aaf_username
                logger.info(mrSink.topicUrl());
                // ...
            }),
            throwable -> logger.warn("Ooops", throwable)
    );
    

    For details and sample usage please refer to JavaDoc and unit and integration tests. Especially CbsClientImplIT, MessageRouterSinksIT and MixedDmaapStreamsIT might be useful.

  • Note

    The results of these parsers (MessageRouterSink, MessageRouterSource) can be directly used to connect to DMaaP MR by means of the dmaap-client API described below.

crypt-password - a utility for BCrypt passwords

Library to generate and match cryptographic passwords using the BCrypt algorithm:

java -jar crypt-password-${sdk.version}.jar password_to_crypt

$2a$10$iDEKdKknakPqH5XZb6wEmeBP2SMRwwiWHy8RNioUTNycIomjIqCAO

It can also be used as a Maven dependency to match a generated password.
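A matching check from Java code might look like the hedged sketch below; the class and method names are assumptions for illustration, not confirmed SDK API, so consult the crypt-password JavaDoc for the real entry points:

// Hypothetical usage sketch; names below are assumptions, not confirmed SDK API.
String hashed = "$2a$10$iDEKdKknakPqH5XZb6wEmeBP2SMRwwiWHy8RNioUTNycIomjIqCAO"; // output of the CLI call above
CryptPassword cryptPassword = new CryptPassword();                    // assumed entry-point class
boolean matches = cryptPassword.matches("password_to_crypt", hashed); // assumed matcher method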

dmaap-client - a DMaaP MR client

After parsing the CBS sink definitions you will get a Source or Sink value object. It can then be used directly to communicate with the DMaaP Message Router REST API.

Writing message publisher

final MessageRouterPublisher publisher = DmaapClientFactory.createMessageRouterPublisher();
final MessageRouterSink sinkDefinition; //... Sink definition obtained by parsing CBS response
final MessageRouterPublishRequest request = ImmutableMessageRouterPublishRequest.builder()
        .sinkDefinition(sinkDefinition)
        .build();

Flux.just(1, 2, 3)
        .map(JsonPrimitive::new)
        .transform(input -> publisher.put(request, input))
        .subscribe(resp -> {
                    if (resp.successful()) {
                        logger.debug("Sent a batch of messages to the MR");
                    } else {
                        logger.warn("Message sending has failed: {}", resp.failReason());
                    }
                },
                ex -> {
                    logger.warn("An unexpected error while sending messages to DMaaP", ex);
                });

Note that we are using the Reactor transform operator. As an alternative, you could assign the Flux of JSON values to a variable and then invoke publisher.put on it, as sketched below. The important performance-related thing to remember is that you should feed the put method with a stream of messages instead of making multiple calls with single messages. This way the client API will be able to send them in batches, which should significantly improve performance (at least at the transfer level).
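The alternative form mentioned above, with the Flux assigned to a variable first (same publisher and request as in the previous snippet), looks like this:

final Flux<JsonPrimitive> messages = Flux.just(1, 2, 3).map(JsonPrimitive::new);
publisher.put(request, messages)
        .subscribe(resp -> logger.debug("Batch sent, successful={}", resp.successful()));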

Writing message subscriber

final MessageRouterSubscriber subscriber = DmaapClientFactory.createMessageRouterSubscriber();
final MessageRouterSource sourceDefinition; //... Source definition obtained by parsing CBS response
final MessageRouterSubscribeRequest request = ImmutableMessageRouterSubscribeRequest.builder()
        .sourceDefinition(sourceDefinition)
        .build();

subscriber.subscribeForElements(request, Duration.ofMinutes(1))
        .map(JsonElement::getAsJsonObject)
        .subscribe(json -> {
                // application logic
            },
            ex -> {
                logger.warn("An unexpected error while receiving messages from DMaaP", ex);
            });
Configure timeout when talking to DMaaP-MR
  • publisher:

final MessageRouterPublishRequest request = ImmutableMessageRouterPublishRequest.builder()
             .timeoutConfig(ImmutableDmaapTimeoutConfig.builder()
                     .timeout(Duration.ofSeconds(2))
                     .build())
             .
             .
             .
             .build();
  • subscriber:

final MessageRouterSubscribeRequest request = ImmutableMessageRouterSubscribeRequest.builder()
              .timeoutConfig(ImmutableDmaapTimeoutConfig.builder()
                      .timeout(Duration.ofSeconds(2))
                      .build())
              .
              .
              .
              .build();

The default timeout value (4 seconds) can be used:

ImmutableDmaapTimeoutConfig.builder().build()

For a timeout exception, the following message is returned as failReason in the DmaapResponse:

408 Request Timeout
{"requestError":{"serviceException":{"messageId":"SVC0001","text":"Client timeout exception occurred, Error code is %1","variables":["408"]}}}
Configure retry mechanism
  • publisher:

final MessageRouterPublisherConfig config = ImmutableMessageRouterPublisherConfig.builder()
         .retryConfig(ImmutableDmaapRetryConfig.builder()
                 .retryIntervalInSeconds(2)
                 .retryCount(2)
                 .build())
         .
         .
         .
         .build();
final MessageRouterPublisher publisher = DmaapClientFactory.createMessageRouterPublisher(config);
  • subscriber:

final MessageRouterSubscriberConfig config = ImmutableMessageRouterSubscriberConfig.builder()
            .retryConfig(ImmutableDmaapRetryConfig.builder()
                    .retryIntervalInSeconds(2)
                    .retryCount(2)
                    .build())
            .
            .
            .
            .build();
final MessageRouterSubscriber subscriber = DmaapClientFactory.createMessageRouterSubscriber(config);

The default retry config (retryCount=3, retryIntervalInSeconds=1) can be used:

ImmutableDmaapRetryConfig.builder().build()
Retry functionality works for:
  • DMaaP MR HTTP response status codes: 404, 408, 413, 429, 500, 502, 503, 504

  • Java Exception classes: ReadTimeoutException, ConnectException

Configure custom persistent connection
  • publisher:

final MessageRouterPublisherConfig connectionPoolConfiguration = ImmutableMessageRouterPublisherConfig.builder()
          .connectionPoolConfig(ImmutableDmaapConnectionPoolConfig.builder()
                 .connectionPool(16)
                 .maxIdleTime(10) //in seconds
                 .maxLifeTime(20) //in seconds
                 .build())
         .build();
final MessageRouterPublisher publisher = DmaapClientFactory.createMessageRouterPublisher(connectionPoolConfiguration);
  • subscriber:

final MessageRouterSubscriberConfig connectionPoolConfiguration = ImmutableMessageRouterSubscriberConfig.builder()
                .connectionPoolConfig(ImmutableDmaapConnectionPoolConfig.builder()
                    .connectionPool(16)
                    .maxIdleTime(10) //in seconds
                    .maxLifeTime(20) //in seconds
                    .build())
            .build();
final MessageRouterSubscriber subscriber = DmaapClientFactory.createMessageRouterSubscriber(connectionPoolConfiguration);

The default custom persistent connection configuration (connectionPool=16, maxLifeTime=2147483647, maxIdleTime=2147483647) can be used:

ImmutableDmaapConnectionPoolConfig.builder().build()
Configure request for authorized topics
  • publisher:

final MessageRouterSink sink = ImmutableMessageRouterSink.builder()
            .aafCredentials(ImmutableAafCredentials.builder()
                    .username("username")
                    .password("password").build())
            .
            .
            .
            .build();

final MessageRouterPublishRequest request = ImmutableMessageRouterPublishRequest.builder()
            .sinkDefinition(sink)
            .
            .
            .
            .build();
  • subscriber:

final MessageRouterSource sourceDefinition = ImmutableMessageRouterSource.builder()
            .aafCredentials(ImmutableAafCredentials.builder()
                    .username("username")
                    .password("password")
                    .build())
            .
            .
            .
            .build();

final MessageRouterSubscribeRequest request = ImmutableMessageRouterSubscribeRequest.builder()
            .sourceDefinition(sourceDefinition)
            .
            .
            .
            .build();

AAF credentials are optional for subscribe/publish requests. The username and password are used to build the Basic Authentication header sent with HTTP requests to DMaaP MR.
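
To tie the pieces above together, here is a minimal, hedged sketch of publishing and consuming messages. The put/getElements method names and the successful() response accessor are assumptions based on the SDK's reactive API and should be verified against the SDK version in use; publishRequest and subscribeRequest stand for the request objects built above.

final Flux<JsonElement> messages = Flux.just("one", "two")
        .map(msg -> new JsonPrimitive(msg));           // Gson JsonElement payloads

publisher.put(publishRequest, messages)                // publishRequest carries the sink and AAF credentials
        .subscribe(resp -> logger.info("Publish successful: {}", resp.successful()));

subscriber.getElements(subscribeRequest)               // subscribeRequest carries the source definition
        .subscribe(element -> logger.info("Received: {}", element));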

hvvesclient-producer - a reference Java implementation of High Volume VES Collector client

This library is used in the xNF simulator, which helps us test the HV-VES Collector in CSIT tests. You may use it as a reference when implementing a client in a non-JVM language, or use it directly from Java, Kotlin, etc.

Sample usage:

final ProducerOptions producerOptions = ImmutableProducerOptions.builder()
        .collectorAddresses(HashSet.of(
                InetSocketAddress.createUnresolved("dcae-hv-ves-collector", 30222)))
        .build();
final HvVesProducer hvVesProducer = HvVesProducerFactory.create(producerOptions);

Flux<VesEvent> events; // ...
Mono.from(hvVesProducer.send(events))
        .doOnSuccess(ignored -> logger.info("All events have been sent"))
        .doOnError(ex -> logger.warn("Failed to send one or more events", ex))
        .subscribe();

external-schema-manager - JSON Validator with schema mapping

This library can be used to validate any JSON content incoming as a JsonNode. What differentiates it from other validation libraries is the mapping of externally located schemas to local schema files.

The validated JSON must have one field that refers to an external schema; that reference is mapped to a local file, and validation of any chosen part of the JSON is then executed using the local schema.

The mapping file is cached on validator creation, so it is not read every time validation is performed. Schema content cannot be cached due to external library restrictions (OpenAPI4j).

Example JSON:

{
    "schemaReference": "https://forge.3gpp.org/rep/sa5/data-models/blob/REL-16/OpenAPI/faultMnS.yaml",
    "data":
    {
        "exampleData: "SAMPLE_VALUE"
    }
}

Interface:

Two methods form the interface of this sub-project.

Validator builder:

new StndDefinedValidator.ValidatorBuilder()
        .mappingFilePath(mappingFilePath)
        .schemasPath(schemasPath)
        .schemaRefPath(schemaRefPath)
        .stndDefinedDataPath(stndDefinedDataPath)
        .build();

Validator usage:

stndDefinedValidator.validate(event);
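
As a rough end-to-end sketch (the builder parameters are explained in the list below; validate(...) returning a boolean is an assumption to verify against your SDK version), the validator is built once, so the mapping file is cached, and then reused for every incoming event:

final StndDefinedValidator validator = new StndDefinedValidator.ValidatorBuilder()
        .mappingFilePath("etc/externalRepo/schema-map.json")
        .schemasPath("etc/externalRepo")
        .schemaRefPath("/event/stndDefinedFields/schemaReference")
        .stndDefinedDataPath("/event/stndDefinedFields/data")
        .build();

// Jackson is used to turn the incoming JSON string into a JsonNode
final JsonNode event = new ObjectMapper().readTree(incomingEventJson);
final boolean valid = validator.validate(event); // true when the mapped schema accepts the data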

There are 4 string parameters of the builder:

  • mappingFilePath: a local filesystem path to the file with mappings of public URLs to local URLs. The schema mapping file is a JSON file with a list of mappings. Example: etc/externalRepo/schema-map.json

  • schemasPath: a directory under which external-schema-manager will search for local schemas. Example: with ./etc/externalRepo/ and the first mapping from the example mapping file, the validator will look for the schema under the path ./etc/externalRepo/3gpp/rep/sa5/data-models/blob/REL-16/OpenAPI/faultMnS.yaml

  • schemaRefPath: an internal path within the validated JSON. It defines which field is taken as the public schema reference, which is later mapped. Example: /event/stndDefinedFields/schemaReference. Note: in SDK version 1.4.2 this path does not use JSON path notation (with . signs); this might change in further versions.

  • stndDefinedDataPath: a path to the stndDefined data in the JSON. This field is validated during stndDefined validation. Example: /event/stndDefinedFields/data. Note: in SDK version 1.4.2 this path does not use JSON path notation (with . signs); this might change in further versions.

The schema mapping file is a JSON file with a list of mappings, as shown in the example below.

[
    {
        "publicURL": "https://forge.3gpp.org/rep/sa5/data-models/blob/REL-16/OpenAPI/faultMnS.yaml",
        "localURL": "3gpp/rep/sa5/data-models/blob/REL-16/OpenAPI/faultMnS.yaml"
    },
    {
        "publicURL": "https://forge.3gpp.org/rep/sa5/data-models/blob/REL-16/OpenAPI/heartbeatNtf.yaml",
        "localURL": "3gpp/rep/sa5/data-models/blob/REL-16/OpenAPI/heartbeatNtf.yaml"
    },
    {
        "publicURL": "https://forge.3gpp.org/rep/sa5/data-models/blob/REL-16/OpenAPI/PerDataFileReportMnS.yaml",
        "localURL": "3gpp/rep/sa5/data-models/blob/REL-16/OpenAPI/PerDataFileReportMnS.yaml"
    },
    {
        "publicURL": "https://forge.3gpp.org/rep/sa5/data-models/blob/master/OpenAPI/provMnS.yaml",
        "localURL": "3gpp/rep/sa5/data-models/blob/REL-16/OpenAPI/provMnS.yaml"
    }
]

Possible scenarios when using external-schema-manager:

When the schema-map file, the schema and the sent event are correct, then the validation is successful and the log shows “Validation of stndDefinedDomain has been successful”.

Errors in the schema-map - none of the mappings are cached:

  • When no schema-map file exists, “Unable to read mapping file. Mapping file path: {}” is logged.

  • When a schema-map file exists, but has an incorrect format, a warning is logged: “Schema mapping file has incorrect format. Mapping file path: {}”

Errors in one of the mappings in the schema-map - the mapping is not cached and a warning is logged: “Mapping for publicURL ({}) will not be cached to validator.”:

  • When the local url in the schema-map file references a file that does not exist, the warning “Local schema resource missing. Schema file with path {} has not been found.” is logged

  • When the schema file is empty, the information “Schema file is empty. Schema path: {}” is logged

  • When a schema file has an incorrect format (not valid YAML), the following information is logged: “Schema has incorrect YAML structure. Schema path: {}”

Errors in schemaRefPath:

  • If the schemaRef path in the event provides a URL that refers to an existing schema, but the part after # refers to a non-existent part of it, then an IncorrectInternalFileReferenceException is thrown (“Schema reference refer to existing file, but internal reference (after #) is incorrect.”)

  • When the schemaRef path in the event provides a URL that refers to a non-existent mapping from publicURL to localURL, a NoLocalReferenceException is thrown, which logs to the console: “Couldn’t find mapping for public url. PublicURL: {}”

FAQ

General SDK questions

Where can I find Java Doc API description?

JavaDoc JAR package is published together with compiled classes to the ONAP Nexus repository. You can download JavaDoc in your IDE so you will get documentation hints. Alternatively you can use Maven Dependency plugin (classifier=javadoc).

Which Java version is supported?

For now, we compile the SDK using JDK 11. Hence we advise using the SDK on JDK 11.

Are you sure Java 11 is supported? I can see a debug log from Netty.

If you have enabled a debug log level for Netty packages you might have seen the following log:

[main] DEBUG i.n.util.internal.PlatformDependent0 - jdk.internal.misc.Unsafe.allocateUninitializedArray(int): unavailable

Background: this is a result of moving sun.misc.Unsafe to jdk.internal.misc.Unsafe in JDK 9, so if Netty wants to support both pre- and post-Java 9 it has to check the JDK version and use the class from the available package.

It does not have any impact on SDK. SDK still works with this log. You might want to change log level for io.netty package to INFO.

CBS Client

When will cbsClient.updates() yield an event?

updates will emit changes to the configuration, i.e. it will yield an event only when newJsonObject != lastJsonObject (using standard Java equals for comparison). Every check is performed at the specified interval (i.e. it is poll-based).

What does a single JsonObject returned from CbsClient contain?

It will consist of the complete JSON representing what CBS/Consul keeps for the microservice (and not only the changes).

Note:
  • We have started an implementation for listening to changes in a subtree of the JsonObject, based on a Merkle tree data structure. For now, if you want fine-grained control over update handling, we recommend first converting the JsonObject to domain classes and then subscribing to changes in these objects. It is an experimental API, so it can change or be removed in future releases.

An issue arises when the Flux stream terminates with an error. In that case, since an error is a terminal event, the stream of updates from Consul is finished. In order to restart periodic CBS fetches, the stream must be re-subscribed. What is the suggestion here?

Please use one of the retry operators as described in the Handling errors in reactive streams section of the DCAE SDK main page. You should probably use a retry operator with a back-off so you won't retry immediately (which could result in a DDoS attack on CBS).
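
For example, a minimal sketch of such a back-off retry, assuming the updates(request, initialDelay, period) polling API described above and Reactor's Retry.backoff operator; applyConfiguration is a hypothetical handler:

// requires java.time.Duration and reactor.util.retry.Retry
cbsClient.updates(request, Duration.ofSeconds(5), Duration.ofSeconds(30))
        .retryWhen(Retry.backoff(Long.MAX_VALUE, Duration.ofSeconds(5))
                .maxBackoff(Duration.ofMinutes(5)))   // avoid hammering CBS on persistent failures
        .subscribe(jsonObject -> applyConfiguration(jsonObject));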

Configuration

The DCAEGEN2 platform is deployed via Helm charts. The configuration is maintained in values.yaml files and can be updated for deployment if necessary.

For the Frankfurt release, the Helm charts for each platform component can be controlled via a separate override file: https://wiki.onap.org/pages/viewpage.action?pageId=71837415

Component charts:

  • Cloudify: https://git.onap.org/oom/tree/kubernetes/dcaegen2/components/dcae-cloudify-manager

  • ConfigBinding Service: https://git.onap.org/oom/tree/kubernetes/dcaegen2/components/dcae-config-binding-service

  • Deployment Handler: https://git.onap.org/oom/tree/kubernetes/dcaegen2/components/dcae-deployment-handler

  • Policy Handler: https://git.onap.org/oom/tree/kubernetes/dcaegen2/components/dcae-policy-handler

  • ServiceChangeHandler: https://git.onap.org/oom/tree/kubernetes/dcaegen2/components/dcae-servicechange-handler

  • Inventory: https://git.onap.org/oom/tree/kubernetes/dcaegen2/components/dcae-inventory-api

  • Dashboard: https://git.onap.org/oom/tree/kubernetes/dcaegen2/components/dcae-dashboard

Deployment time configuration of DCAE components is defined in several places.

  • Helm Chart templates:
    • Helm/Kubernetes template files can contain static values for configuration parameters;

  • Helm Chart resources:
    • Helm/Kubernetes resources files can contain static values for configuration parameters;

  • Helm values.yaml files:
    • The values.yaml files supply the values that Helm templating engine uses to expand any templates defined in Helm templates;

    • In a Helm chart hierarchy, values defined in values.yaml files at a higher level supersede values defined in values.yaml files at a lower level;

    • Values supplied on the Helm command line supersede values defined in any values.yaml files.

In addition, for DCAE components deployed through Cloudify Manager blueprints, their configuration parameters are defined in the following places:

  • The blueprint files can contain static values for configuration parameters;
    • The blueprint files are defined under the blueprints directory of the dcaegen2/platform/blueprints repo, named with “k8s” prefix.

  • The blueprint files can specify input parameters and the values of these parameters will be used for configuring parameters in Blueprints. The values for these input parameters can be supplied in several ways as listed below in the order of precedence (low to high):
    • The blueprint files can define default values for the input parameters;

    • The blueprint input files can contain static values for input parameters of blueprints. These input files are provided as config resources under the dcae-bootstrap chart;

    • The blueprint input files may contain Helm templates, which are resolved into actual deployment time values following the rules for Helm values.

DCAE Service components are deployed via Cloudify Blueprints. Instruction for deployment and configuration are documented under https://docs.onap.org/projects/onap-dcaegen2/en/latest/sections/services/serviceindex.html

Now we walk through an example of how to configure the Docker image for the DCAE VESCollector, which is deployed by Cloudify Manager.

In the k8s-ves.yaml blueprint, the Docker image to use is defined as an input parameter with a default value:

tag_version:
  type: string
  default: "nexus3.onap.org:10001/onap/org.onap.dcaegen2.collectors.ves.vescollector:1.5.4"

In the corresponding input file, https://git.onap.org/oom/tree/kubernetes/dcaegen2/components/dcae-bootstrap/resources/inputs/k8s-ves-inputs-tls.yaml, it is defined again as:

{{ if .Values.componentImages.ves }}
tag_version: {{ include "common.repository" . }}/{{ .Values.componentImages.ves }}
{{ end }}

Thus, when common.repository and componentImages.ves are defined in the values.yaml files, their values will be plugged in here and the resulting tag_version value will be passed to the blueprint as the Docker image tag to use instead of the default value in the blueprint.

The componentImages.ves value is provided in the oom/kubernetes/dcaegen2/charts/dcae-bootstrap/values.yaml file:

componentImages:
  ves: onap/org.onap.dcaegen2.collectors.ves.vescollector:1.5.4

Config maps

During installation of the DCAEGEN2 module, two config maps are installed by default: dcae-external-repo-configmap-schema-map and dcae-external-repo-configmap-sa88-rel16.

The config maps are used by the DCAEGEN2 VES and VES OpenAPI components.

Instructions on how to generate the content of the config maps are described in the README file.

User Guide

DCAE Dashboard User Guide

Overview

DCAE Dashboard is a web application that provides a unified interface for DCAE users and Ops users in ONAP to manage DCAE microservices.

Starting with the Dashboard

Type in the application login URL in a web browser. The Login page should appear.

dashboard_1.png

If you are a first-time user, click on the Sign up button. Fill in the Sign up form and submit it to register for an account. Upon a successful login with valid credentials, the user’s dashboard (Home) screen is displayed. The header navigation bar contains menu links and a user profile section in the top right corner. The collapsible left sidebar contains all the application menus; each item (except “Home”) can be expanded into sub-items by clicking on the item.

User Dashboard contents appear in the panel to the right of the left navigation menu. By default (initial view), the screen displays tiles containing the count of blueprints and the count of deployments owned by the signed-in user. The count of plugins uploaded to the Cloudify Orchestrator is also shown in the Plugins tile. Each tile is clickable and navigates to an individual screen that displays detailed information about the items. There is a switch at the top to toggle between user-level and user role group access for the dashboard contents. By switching the access type to Group, the aggregate count of inventory items at the user group (role) level is shown on the dashboard.

dashboard_2.png

dashboard_3.png

The Auto Refresh switch lets the user automatically reload the dashboard at regular intervals.

An autocomplete list of blueprints lets a user find a specific blueprint and trigger a deployments list query. Upon selecting a blueprint item, a query is triggered to find associated deployments for the selected blueprint. Similarly, upon selecting a specific deployment entity, any matching execution workflows are queried and displayed in the executions data grid. By default, the user dashboard shows currently active executions in Cloudify.

dashboard_4.png

dashboard_5.png

dashboard_6.png

Working with blueprints and deployments

Searching/Filtering Blueprints

  1. Click on the “Inventory” menu item, which will expand to show sub-items. Select the “Blueprints” sub-item.

A “Blueprints” screen appears, which displays the blueprints retrieved from Inventory. By default, a blueprint owner filter is applied to display items where the owner is the signed in user. By deselecting the “My Blueprints” checkbox, all blueprints belonging to the role group will be displayed.

dashboard_7.png

dashboard_8.png

  2. You can search for blueprints based on their name. Type the name of the blueprint you wish to work with in the search box at the top right of the screen and press ‘enter’ or click the magnifying glass icon to look for it.

dashboard_9.png

  3. You can use search filters by clicking on the down arrow at the right end of the search box. Filtering by blueprint name and/or owner is available. Once finished, click the magnifying glass at the bottom of the advanced filters box or press “enter”.

dashboard_10.png

dashboard_11.png

dashboard_12.png

  4. Lookup deployments mapped to a blueprint

dashboard_13.png

Creating Blueprints

  1. A user can create a new blueprint either from scratch or by uploading an existing file. To do this, select the “Blueprints” sub-menu.

Once the existing set of blueprints appears on the screen, click on the “Upload” button.

dashboard_14.png

  2. In the “Upload Blueprint” pop-up, fill out all the fields. Provide the name and version number for the blueprint; hints on nomenclature are available by clicking on the input field help icon (question mark symbol). A blueprint file can either be “dragged and dropped” from the user’s own directory or created from scratch. When finished, press “Save”. Note: the Import Data function is not supported and will be removed later.

dashboard_15.png

  3. Allow the Blueprints screen to reload and then check that the blueprint you created is in the table.

Viewing/Exporting Blueprints

  1. Navigate to the Blueprints screen via the sidebar menu. On the Blueprints screen, click on the Actions button icon (More actions) for the blueprint you wish to work with. A number of choices are shown in a pop-up: View, Export, Update, Deploy and Delete.

dashboard_16.png

  2. Choose “View” to display the contents of the blueprint

dashboard_17.png

Deploying Blueprints

  1. Navigate to the Blueprints screen via the sidebar menu. On the Blueprints screen, click on the Actions button icon (More actions) for the blueprint you wish to work with and select “Deploy”

dashboard_18.png

  2. On the “Deploy Blueprint” pop-up, fill in all the fields. There are two ways to supply the input parameters for the blueprint: one is to drag and drop a parameters file; the other is to manually fill in the name-value pairs. When finished, press the “Deploy” button at the bottom.

dashboard_19.png

  3. Navigate to the Deployments screen via the sidebar menu and check that the blueprint deployed is listed on the screen

dashboard_20.png

Searching/Filtering Deployments

  1. Navigate to the Deployments screen via the sidebar menu.

  2. By default, the deployment owner filter and application cache filter are applied to display items where the owner is the signed-in user, and data is fetched from the application cache store. By deselecting the “My Deployments” checkbox, all deployments belonging to the role group will be displayed. By deselecting the “Cache” checkbox, the cache is bypassed and data is fetched from Cloudify Manager. The “Tenant” filter can be applied to restrict the query per tenant partition. Upon selecting the “Tenant” checkbox, the tenants list dropdown appears.

  3. You can search for Deployments by an ID. Enter the ID and press ‘Enter’ or click the magnifying glass icon.

dashboard_21.png

  4. If you wish to make an advanced search, select the “Tenant” checkbox, select a tenant from the tenants list, and click the down arrow at the right end of the input field to expand the advanced search filters. Here you can filter by deployment IDs, owners, (installation) Status and Helm chart deployment. Once finished, click the magnifying glass at the bottom of the advanced filters box.

dashboard_22.png

Viewing Blueprint, Inputs, Executions

  1. Navigate to the Deployments screen on the left hand menu.

  2. On the deployments table screen, click on the “Actions” button icon for the deployment you wish to manage.

dashboard_23.png

dashboard_24.png

dashboard_25.png

dashboard_26.png

dashboard_27.png

Undeploying Deployments

  1. Navigate to the Deployments screen on the left hand menu

  2. On the deployments table screen, click on the “Actions” button icon for the deployment you wish to uninstall. Click on Undeploy.

dashboard_28.png

  3. On the confirmation popup, confirm the tenant is correct and select “Undeploy” when ready to undeploy

dashboard_29.png

Helm Status, Upgrade, Rollback

  1. Navigate to the Deployments screen on the left hand menu

  2. Ensure that the deployment is a helm deployment

  3. On the deployments table screen, click on the “Actions” button icon for the deployment you wish to perform helm operations on.

dashboard_30.png

Helm Status

dashboard_31.png

Helm Upgrade

dashboard_32.png

Helm Rollback

dashboard_33.png

Checking system health

Viewing Service Health

Navigate to the Service Health screen on the sidebar menu

dashboard_34.png

Node Health

Viewing Node Health

Navigate to the Node Health screen on the sidebar menu

dashboard_35.png

DCAE MOD User Guide

Types of Users and Usage Instructions:

  1. Developers who are looking to onboard their mS:

     - Access the Nifi Web UI url provided to you

     - Follow steps 2.c to 2.f

     - You should be able to see your microservices in the Nifi Web UI by clicking and dragging ‘Processor’ onto the canvas, and searching for the name of the microservice/component/processor.

  2. Designers who are building the flows through the UI and triggering distribution:

     - Access the Nifi Web UI url provided to you

     - Follow steps 3 to the end of the document

  3. Infrastructure/Admins who want to stand up DCAE MOD and validate it:

     - Follow the document from start to end

1.    Deployment of DCAE MOD components via Helm charts

The DCAE MOD components are deployed using the standard ONAP OOM deployment process. When deploying ONAP using the helm deploy command, DCAE MOD components are deployed when the dcaemod.enabled flag is set to true, either via a --set option on the command line or by an entry in an overrides file. In this respect, DCAE MOD is no different from any other ONAP subsystem.

The default DCAE MOD deployment relies on an nginx ingress controller being available in the Kubernetes cluster where DCAE MOD is being deployed. The Rancher RKE installation process sets up a suitable ingress controller. In order to enable the use of the ingress controller, it is necessary to override the OOM default global settings for ingress configuration. Specifically, the installation needs to set the following configuration in an override file:

ingress:
  enabled: true
  virtualhost:
    baseurl: "simpledemo.onap.org"

When DCAE MOD is deployed with an ingress controller, several endpoints are exposed outside the cluster at the ingress controller’s external IP address and port. (In the case of a Rancher RKE installation, there is an ingress controller on every worker node, listening at the standard HTTP port (80).) These exposed endpoints are needed by users on machines outside the Kubernetes cluster.

The exposed endpoints and the cluster-internal addresses they route to are:

  • /nifi: routes to http://dcaemod-designtool:8080/nifi (Design tool Web UI)

  • /nifi-api: routes to http://dcaemod-designtool:8080/nifi-api (Design tool API)

  • /nifi-jars: routes to http://dcaemod-nifi-registry:18080/nifi-jars (flow registry listing of JAR files built from component specs)

  • /onboarding: routes to http://dcaemod-onboarding-api:8080/onboarding (Onboarding API)

  • /distributor: routes to http://dcaemod-distributor-api:8080/distributor (Distributor API)

To access the design Web UI, for example, a user would use the URL http://ingress_controller_address:ingress_controller_port/nifi, where ingress_controller_address is the IP address or DNS FQDN of the ingress controller and ingress_controller_port is the port on which the ingress controller is listening for HTTP requests. (If the port is 80, the HTTP default, then there is no need to specify a port.)

There are two additional internal endpoints that users need to know, in order to configure a registry client and a distribution target in the design tool’s controller settings:

  • Registry client: http://dcaemod-nifi-registry:18080

  • Distribution target: http://dcaemod-runtime-api:9090

With the Guilin release, the OOM/ingress template has been updated to enable virtual hosts by default. All MOD APIs and UI access via ingress should use dcaemod.simpledemo.onap.org.

In order to access the Design UI from a local machine, add an entry for dcaemod.simpledemo.onap.org in /etc/hosts with the correct IP (any K8s node IP can be specified).

Using DCAE MOD without an Ingress Controller

Not currently supported

2.    Configuring DCAE MOD

a. Configure Nifi Registry url

Next, check the Nifi settings by selecting the Hamburger button in the Nifi UI. It should lead you to the Nifi Settings screen.

image16

image3

Add a registry client. The Registry client url will be http://dcaemod-nifi-registry:18080

image4

b. Add distribution target which will be the runtime api url

Set the distribution target in the controller settings

image17

Distribution target URL will be http://dcaemod-runtime-api:9090

Now let’s access the Nifi (DCAE designer) UI - http://dcaemod.simpledemo.onap.org/nifi

IPAddress is the host address or the DNS FQDN, if there is one, for one of the Kubernetes nodes.

image0

c. Get the artifacts to test and onboard.

Let’s fetch the artifacts/spec files:

Component Spec for DCAE-VES-Collector : https://git.onap.org/dcaegen2/collectors/ves/tree/dpo/spec/vescollector-componentspec.json

Component Spec for DCAE-TCAgen2 : https://git.onap.org/dcaegen2/collectors/ves/tree/dpo/spec/vescollector-componentspec.json

VES 5.28.4 Data Format : https://git.onap.org/dcaegen2/collectors/ves/tree/dpo/data-formats/VES-5.28.4-dataformat.json

VES 7.30.2.1 Data Format : https://git.onap.org/dcaegen2/collectors/ves/tree/etc/CommonEventFormat_30.2.1_ONAP.json

VES Collector Response Data Format : https://git.onap.org/dcaegen2/collectors/ves/tree/dpo/data-formats/ves-response.json

TCA CL Data Format : https://git.onap.org/dcaegen2/analytics/tca-gen2/tree/dcae-analytics/dpo/dcaeCLOutput.json

For the purpose of onboarding, a sample request body should be of the form:

{ "owner": "<some value>", "spec": <some json object> }

where the json object inside the spec field can be a component spec json.

Request bodies of this type will be used in the onboarding requests you make using curl or the onboarding swagger interface.

The prepared sample request body for the dcae-ves-collector component looks like this:

See VES Collector Spec

The prepared sample request body for a sample data format looks like this:

See VES data Format
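
If you prefer to script this step, here is a hedged sketch (Gson-based; the owner value, class name, and file names are placeholders) that wraps a downloaded spec file into such a request body for the curl calls below:

import com.google.gson.JsonObject;
import com.google.gson.JsonParser;
import java.io.FileReader;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class OnboardRequestBuilder {               // hypothetical helper class
    public static void main(String[] args) throws IOException {
        JsonObject spec = JsonParser
                .parseReader(new FileReader("vescollector-componentspec.json"))
                .getAsJsonObject();
        JsonObject body = new JsonObject();
        body.addProperty("owner", "demo-user");    // placeholder owner
        body.add("spec", spec);                    // the component spec becomes the "spec" field
        Files.writeString(Path.of("ves-onboard-request.json"), body.toString());
    }
}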

d. To onboard a data format and a component

Each component has a description that tells what it does.

These requests are of the form:

curl -X POST http://<onboardingapi host>/onboarding/dataformats -H "Content-Type: application/json" -d @<filepath to request>

curl -X POST http://<onboardingapi host>/onboarding/components -H "Content-Type: application/json" -d @<filepath to request>

In our case,

curl -X POST http://dcaemod.simpledemo.onap.org/onboarding/dataformats -H "Content-Type: application/json" -d @<filepath to request>

curl -X POST http://dcaemod.simpledemo.onap.org/onboarding/components -H "Content-Type: application/json" -d @<filepath to request>

e. Verify the resources were created using

curl -X GET http://<IPAddress>/onboarding/dataformats

curl -X GET http://<IPAddress>/onboarding/components

f. Verify that the genprocessor (which periodically polls onboarding to convert component specs to Nifi processors) has converted the component

Open http://dcaemod.simpledemo.onap.org/nifi-jars in a browser.

These JARs should now be available for you to use in the Nifi UI as processors.

image1

3.    Design & Distribution Flow

a. To start creating flows, we need to create a process group first. The name of the process group will be the name of the flow. Drag and drop the ‘Processor Group’ icon from the DCAE Designer bar at the top onto the canvas.

image2

Now enter the process group by double clicking it.

You can now drag and drop the ‘Processor’ icon from the top DCAE Designer tab onto the canvas. You can search for a particular component in the search box that appears when you drag the ‘Processor’ icon onto the canvas.

image5

If the Nifi registry linking worked, you should see the “Import” button when you try to add a Processor or Process group to the Nifi canvas, like so:

image6

By clicking on the Import button, we can import already created, saved, and version controlled flows from the Nifi registry, if they are present.

image7

We can save created flows by version controlling them, starting with a right click anywhere on the canvas:

image8

Ideally you would name the flow and process group the same, because functionally they are similar.

image9

When the flow is checked in, the bar at the bottom shows a green checkmark

image10

Note: even moving a component around on the canvas, so that its position changes, is recognized as a change, and the flow will have to be recommitted.

You can add additional components in your flow and connect them.

DcaeVesCollector connects to DockerTcagen2.

image11

image12

image13

Along the way you need to also provide topic names in the settings section. These can be arbitrary names.

image14

To recap, see how DcaeVesCollector connects to DockerTcagen2. Look at the connection relationships. Currently there is no way to validate these relationships. Notice how it is required to name the topics by going to Settings.

The complete flow after joining our components looks like this:

image15

b. Submit/ Distribute the flow:

Once your flow is complete and saved in the Nifi registry, you can choose to submit it for distribution.

image18

If the flow was submitted successfully to the runtime api, you should get a pop-up success message like so:

image19

At this step, the design is packaged and sent to the Runtime api.

The runtime is supposed to generate the blueprint out of the packaged design/flow and push it to the DCAE inventory and the DCAE Dashboard.

c. Checking the components in the DCAE Dashboard

You should see the generated artifact/blueprint in the DCAE Dashboard at https://<IPAddress>:30418/ccsdk-app/login_external.htm in our deployment. The name of each item will be the flow name, followed by an underscore, followed by the component’s name.

The credentials to access the DCAE Dashboard are:

Login: su1234 Password: fusion

image20

image21

image22

The generated Blueprint can be viewed.

image23

Finally, the generated Blueprint can be deployed.

image24

You can use/import the attached input configuration files to deploy. Drag and drop these sample JSON files to fill in the configuration values. See VES Collector Input Configuration and Tcagen2 Input Configuration.

image25

image26

Design Platform

Overview

DCAE components are services that provide a specific functionality and are generally written to be composable with other DCAE components, although a component can run independently as well. The DCAE platform is responsible for running and managing DCAE service components reliably.

The DCAE Design platform aims to provide a common catalog of available DCAE Service components, enabling designers to select required components to construct and deploy composite flows into DCAE Runtime platform.

A service component/MS to be onboarded and deployed into the DCAE platform typically goes through the following phases:

  • Onboarding

  • Design

  • Runtime

DCAE Design Platform supports onboarding and service design through MOD.

Onboarding is a process that ensures that the component is compliant with the DCAE platform rules. The high level summary of the onboarding process is:

  1. Defining the data formats if they don’t already exist.

  2. Defining the component specification

  3. Validate the component spec schema against the Component Spec JSON schema

  4. Use blueprint-generator tool to generate Cloudify blueprint

  5. Test the blueprint generated in DCAE Runtime Environment (using either Dashboard UI or Cloudify cli from bootstrap)

  6. Using DCAE-MOD, publish the component and data formats into the DCAE-MOD catalog. (This step is required if the microservice needs to be deployed as part of a flow/usecase.)

A Component requires one or more data formats.

A component is a software application that performs a function. It doesn’t run in isolation; it depends upon other components. A component’s function could require connecting to other components to fulfill that function. A component could also provide its function as a service through an interface for other components to use.

A component cannot connect to, or be connected with, just any other component: the upstream and downstream components must speak the same vocabulary, or data format. The output of one component must match another component’s input. This is necessary for components to function correctly and without errors.

The platform requires data formats to ensure that a component will be run with other compatible components.

Data formats can and should be shared by multiple components.

Each Component requires a component specification.

The component specification is a JSON artifact that fully specifies the component, its interfaces, and configuration. It is standardized for CDAP (deprecated) and Docker applications and is validated using a JSON schema.

The component specification fully specifies all the configuration parameters of the component. This is used by the designer and by policy (future) to configure the runtime behavior of the component.

The component specification is used to generate application configuration in a standardized JSON that the platform will make available to the component. This application configuration JSON will include:

  • Parameters that have been assigned values from the component specification, policy, and/or the designer

  • Connection details of downstream components

The component specification is transformed by DCAE tooling (explained later) into TOSCA models (one for the component, and in the future, one for Policy). The TOSCA models then get transformed into Cloudify blueprints.

The component specification is used by:

  • Blueprint Generator - Tool to generate a standalone Cloudify blueprint from the component spec. The blueprints can be uploaded into inventory using the Dashboard and triggered for deployment.

  • MOD Platform - To onboard the microservice and maintain in catalog enabling designer to compose new DCAE service flows and distribute to DCAE Runtime platform.

  • Policy (future) - TOSCA models are generated from the component specification so that operations can create policy models used to dynamically configure the component.

  • Runtime platform - The component’s application configuration (JSON) is generated from the component specification and will be provided to the component at runtime (through ConfigBindingService or Consul).

Onboarding Pre-requisite

Before a component is onboarded into DCAE, the component developer must ensure it is compliant with ONAP & DCAE goals and requirements in order to be correctly deployed and managed. This page discusses the required changes, which are grouped into the following categories:

Configuration Management

All configuration for a component is stored in CONSUL under the component’s uniquely generated name, which is provided by the environment variable HOSTNAME as well as SERVICE_NAME. It is then made available to the component via a remote HTTP service call to the CONFIG BINDING SERVICE.

The main entry in CONSUL for the component contains its generated application configuration. This is based on the submitted component specification and consists of the interfaces (streams and services/calls) and parameters sections. Other entries may exist as well, under specific keys, such as :dmaap. Each key represents a specific type of information and is also available to the component by calling the CONFIG BINDING SERVICE. More on this below.

Components are required to pull their generated application configuration at application startup using the environment setting exposed during deployment.

Envs

The platform provides a set of environment variables into each Docker container:

  • HOSTNAME (string): unique name of the component instance that is generated

  • CONSUL_HOST (string): hostname of the platform’s Consul instance

  • CONFIG_BINDING_SERVICE (string): hostname of the platform’s config binding service instance

  • DOCKER_HOST (string): host of the target platform Docker host to run the container on

  • CBS_CONFIG_URL (string): fully resolved URL to query config from CONSUL via CBS

Config Binding Service

The config binding service is a platform HTTP service that is responsible for providing clients with their fully resolved configuration JSON at startup, and also other configuration objects when requested.

At runtime, components should make an HTTP GET on:

<config binding service hostname>:<port>/service_component/NAME

For Docker components, NAME should be set to HOSTNAME, which is provided as an ENV variable to the container.

The binding service integrates with the streams and services section of the component specification. For example, if you specify that you call a service:

"services": {
    "calls": [{
        "config_key": "vnf-db",
        "request": {
            "format": "dcae.vnf.meta",
            "version": "1.0.0"
            },
        "response": {
            "format": "dcae.vnf.kpi",
            "version": "1.0.0"
            }
    }],
...
}

Then the config binding service will find all available IP addresses of services meeting the container’s needs, and provide them to the container under your config_key:

// your configuration
{
    "vbf-db" :                 // see above
        [IP:Port1, IP:Port2,…] // all of these meet your needs, choose one.
}

Regarding <config binding service hostname>:<port>, there is DNS work going on to make this resolvable in a convenient way inside of your container.

For all Kubernetes deployments since El-Alto, an environment variable CBS_CONFIG_URL is exposed by the platform (k8s plugins), providing the exact URL to be used for configuration retrieval. An application can use this URL directly instead of constructing the URL from the HOSTNAME (which refers to the ServiceComponentName) and CONFIG_BINDING_SERVICE envs. By default, this URL will use the HTTPS CBS interface.

If you are integrating with CBS SDK, then the DNS resolution and configuration fetch are handled via library functions.
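
For components not using the SDK, here is a minimal sketch of fetching the configuration over plain HTTP. It prefers CBS_CONFIG_URL and falls back to constructing the URL from the envs above; the CBS port 10000 is an assumption to confirm for your platform, and error handling is omitted:

// requires com.google.gson.{JsonObject, JsonParser}, java.io.InputStream,
// java.net.URL and java.nio.charset.StandardCharsets
String url = System.getenv("CBS_CONFIG_URL");
if (url == null) {
    url = "http://" + System.getenv("CONFIG_BINDING_SERVICE")
            + ":10000/service_component/" + System.getenv("HOSTNAME");
}
try (InputStream in = new URL(url).openStream()) {
    JsonObject appConfig = JsonParser
            .parseString(new String(in.readAllBytes(), StandardCharsets.UTF_8))
            .getAsJsonObject();
    // appConfig now holds the generated application configuration described below
}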

Generated Application Configuration

The DCAE platform uses the component specification to generate the component’s application configuration provided at deployment time. The component developer should expect to use this configuration JSON in the component.

The following component spec snippet (from String Matching):

"streams":{
    "subscribes": [{
      "format": "VES_specification",
      "version": "4.27.2",
      "type": "message_router",
      "config_key" : "mr_input"
    }],
    "publishes": [{
      "format": "VES_specification",
      "version": "4.27.2",
      "config_key": "mr_output",
      "type": "message_router"
     }]
  },
  "services":{
    "calls": [{
      "config_key" : "aai_broker_handle",
      "verb": "GET",
      "request": {
        "format": "get_with_query_params",
        "version": "1.0.0"
      },
      "response": {
        "format": "aai_broker_response",
        "version": "3.0.0"
      }
    }],
    "provides": []
  },

Will result in the following top level keys in the configuration

"streams_publishes":{
   "mr_output":{                // notice the config key above
      "aaf_password":"XXX",
      "type":"message_router",
      "dmaap_info":{
         "client_role": null,
         "client_id": null,
         "location": null,
         "topic_url":"https://YOUR_HOST:3905/events/com.att.dcae.dmaap.FTL2.DCAE-CL-EVENT" // just an example
      },
      "aaf_username":"XXX"
   }
},
"streams_subscribes":{
   "mr_input":{                 // notice the config key above
      "aaf_password":"XXX",
      "type":"message_router",
      "dmaap_info":{
         "client_role": null,
         "client_id": null,
         "location": null,
         "topic_url":"https://YOUR_HOST:3905/events/com.att.dcae.dmaap.FTL2.TerrysStringMatchingTest" // just an example
      },
      "aaf_username":"XXX"
   }
},
"services_calls":{
   "aai_broker_handle":[        // notice the config key above
      "135.205.226.128:32768"   // based on deployment time, just an example
   ]
}

These keys will always be populated, whether they are empty or not. So the minimum configuration you will get (in the case of a component that provides an HTTP service, doesn’t call any services, and has no streams) is:

"streams_publishes":{},
"streams_subscribes":{},
"services_calls":{}

Thus your component should expect these well-known top level keys.
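
As a short sketch of navigating this structure with Gson (the config_key names come from the String Matching example above; appConfig is the configuration JSON fetched from CBS):

String topicUrl = appConfig.getAsJsonObject("streams_publishes")
        .getAsJsonObject("mr_output")                 // config_key from the spec
        .getAsJsonObject("dmaap_info")
        .get("topic_url").getAsString();

String aaiBrokerAddress = appConfig.getAsJsonObject("services_calls")
        .getAsJsonArray("aai_broker_handle")          // config_key from the spec
        .get(0).getAsString();                        // choose one of the provided addresses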

DCAE SDK

DCAE provides SDKs/libraries which service components can use for easy integration.

Policy Reconfiguration

Components must provide a way to receive policy reconfiguration, that is, configuration parameters that have been updated via the Policy UI. The component developer must either periodically poll the ConfigBindingService API to retrieve/refresh the new configuration, or provide a script (defined in the Docker auxiliary specification) that will be triggered when a policy update is detected by the platform.

Docker Images

Docker images must be pushed to the environment-specific Nexus repository. This requires tagging your build with the full name of your image, which includes the Nexus repository name.

For ONAP microservices, the component images are expected to be pushed into ONAP Nexus as part of ONAP CI jobs.

Operational Requirement

Logging

All ONAP MS logging should follow the logging specification defined by the Logging project.

The application log configuration must enable operations to choose whether logs are written to a file, to stdout, or both, at deployment time.

S3P

ONAP S3P (scaling/resiliency/security/maintainability) goals should be met at the minimum level defined for the DCAE project for the targeted release.

If the component is stateful, it should persist its state in an external store (e.g. PostgreSQL, Redis) to support scaling and resiliency; this should be an important design criterion for the component. If the component publishes to or subscribes from a DMaaP topic, then a secure connection to DMaaP must be supported (the platform will provide aaf_username/aaf_password for each topic as configuration).

Blueprint Generator

What is Blueprint Generator?

The blueprint generator is a Java-based tool that takes a component spec for a given microservice and translates it into a Cloudify blueprint YAML file that can be used during deployment in the DCAE Runtime platform.

Service components to be deployed stand-alone (i.e. not as part of a DCAE service composition flow) can use the blueprint-generator utility to create the deployment YAML. The generated blueprint can be uploaded to inventory and deployed from the Dashboard directly.

Steps to run the blueprint generator

  1. Download the blueprint generator jar file from Nexus

  2. To execute the application, run the following command

    java -jar blueprint-generator-onap-executable-1.7.3.jar app ONAP

  3. This execution will print the help text, as you have not provided the required flags.

  4. When ready, you can run the program again with the required flags.

  5. OPTIONS

    • -i OR --component-spec: The path of the ONAP Blueprint INPUT JSON SPEC FILE (Required)

    • -p OR --blueprint-path: The path of the ONAP Blueprint OUTPUT where it will be saved (Required)

    • -n OR --blueprint-name: The NAME of the ONAP Blueprint OUTPUT that will be created (Optional)

    • -t OR --imports: The path of the ONAP Blueprint IMPORT FILE (Optional)

    • -o OR --service-name-override: The Value used to OVERRIDE the SERVICE NAME of the ONAP Blueprint (Optional)

    • -d OR --dmaap-plugin: The option to create an ONAP Blueprint with the DMaaP Plugin included (Optional)

  6. An example running this program is shown below

    java -jar blueprint-generator-onap-executable-1.7.3.jar app ONAP -p blueprint_output -i ComponentSpecs/TestComponentSpec.json -n TestAppBlueprint

Extra information
  1. The component spec must be compliant with Component Spec json schema

  2. If a flag is marked required, then the corresponding value must be provided for blueprint-generator execution

  3. If a flag is identified as optional, then it is not mandatory for blueprint-generator execution

  4. If you do not add a -n flag, the blueprint name will default to the name in the component spec

  5. If the directory you specified with the -p flag does not already exist, it will be created for you

  6. The -t flag will override the default imports set for the blueprints. Below you can see example content of the import file:

imports:
  - https://www.getcloudify.org/spec/cloudify/4.5.5/types.yaml
  - plugin:k8splugin?version=3.6.0
  - plugin:dcaepolicyplugin?version=2.4.0

How to create policy models:

  1. Policy model creation can be done with the same jar as downloaded for the blueprint generation.

  2. Run the same command as the blueprint generator, except add the flag -type policycreate

  3. Options

    • -i: The path to the JSON spec file (required)

    • -p: The Output path for all of the models (required)

  4. Example command

    java -jar blueprint-generator-onap-executable-1.7.3.jar app ONAP -type policycreate -i componentspec -p OutputPolicyPath

Extra information
  1. Not all component specs will be able to create policy models

  2. Multiple policy model files may be created from a single component spec

How to use Blueprint Generator as a Spring library

To use BlueprintGenerator you need to import the following artifact to your project:

<dependency>
    <groupId>org.onap.dcaegen2.platform.mod</groupId>
    <artifactId>blueprint-generator-onap</artifactId>
    <version>1.7.3</version>
</dependency>

In order to see how to use the library in detail, please familiarize yourself with a real application: the Blueprint Generator Executable main class.

DCAE MOD Architecture

DCAE MOD is composed of a mix of components developed in ONAP and other components taken from the Apache Nifi project and modified for appropriate use. The MOD architecture and design are intended to simplify the onboarding and design experience in ONAP, addressing the goals below.

MOD Objectives

MOD stands for “micro-service onboarding and design” and the project is an effort to reboot the onboarding and design experience in DCAE.

Problems being addressed
  • Due to resource constraints, there are mismatched capabilities between SDC/DCAE-DS and DCAE mS deployment.

  • Due to #1, mS developers upload handcrafted blueprints and stay involved throughout the deployment process. This also ties mS development to a specific Cloudify implementation.

  • There is no Service Assurance flow design in SDC/DCAE-DS, and so there are no reusable flow designs for the Service Designer.

  • There is extensive reliance on developers’ involvement in providing [Inputs.json] as runtime configurations for mS deployment.

  • There is no E2E tracking of the microservice lifecycle.

To address these problems, the new DCAE MOD, replacing the mS onboarding & DCAE-DS in SDC, aims to -

  • Move DCAE mS onboarding & design from SDC project to DCAE Project.

  • Provide simplified mS Onboarding, Service Assurance flow design, & mS microservice design time & runtime configurations to support developers, service designers, and operations.

  • Auto-generate blueprint at the end of the design process, not onboarded before the design process.

  • Support Policy onboarding & artifact distribution to Policy/CLAMP to support Self Service Control Loop.

  • Streamline the process from constructing to deploying flows; provide the ability to track flows (capture and store the progress and evolution of flows); and provide clear coordination and accountability, i.e. a catalog and data for microservice lifecycle tracking. It fits ECOMP’s release process and must provide clear visibility along the entire process and across different environments.

  • Support automated adaptation of ML model from Acumos to DCAE design & runtime environment through the Acumos Adapter.

  • DCAE-MOD is developed by the DCAE team to ensure consistency across all DCAE implementation, with the long term objective to integrate with SDC as part of the Design Platform.

  • Integrate with ONAP User Experience portals (initially ONAP portal, later SDC portal).

MOD aims to streamline the construction, management, and evolution of DCAE flows from role to role, from environment to environment, and from release to release. MOD is composed of three functional areas: onboarding, design, and distribution, and caters to different user groups.

The illustrations below describe the architecture of DCAE-MOD and show the usage flow in DCAE MOD.

image0

image1

Onboarding API

It is a component developed to onboard models/components/microservices (spec files) into DCAE MOD.

Genprocessor

It has been developed in Java. This project is a tool to experiment with generating a Nifi Processor POJO from a DCAE component spec.

Nifi Web UI

It is a component taken from the Apache Nifi Project but modified for use in the MOD project.

Apache NiFi is a dataflow system based on the concepts of flow-based programming. It supports powerful and scalable directed graphs of data routing, transformation, and system mediation logic. NiFi has a web-based user interface for design, control, feedback, and monitoring of dataflows. It is highly configurable along several dimensions of quality of service, such as loss-tolerant versus guaranteed delivery, low latency versus high throughput, and priority-based queuing. NiFi provides fine-grained data provenance for all data received, forked, joined, cloned, modified, sent, and ultimately dropped upon reaching its configured end-state.

The NiFi UI provides mechanisms for creating automated dataflows, as well as visualizing, editing, monitoring, and administering those dataflows. The UI can be broken down into several segments, each responsible for different functionality of the application. This section provides screenshots of the application and highlights the different segments of the UI. Each segment is discussed in further detail later in the document.

Users of Apache Nifi will find that it is used very differently than originally intended, in order to serve our purpose in the DCAE-MOD project.

Registry API

This component, taken from the Apache Nifi project, is a REST API that provides an interface to a registry with operations for saving, versioning, and reading NiFi flows and components.

Distributor API

It is a component developed using the Flask framework in Python. It is an HTTP API to manage distribution targets for DCAE design. Distribution targets are DCAE runtime environments that have been registered and are enabled to accept flow design changes that are to be orchestrated in that environment.

Flow Based Programming (FBP)

NiFi’s fundamental design concepts closely relate to the main ideas of Flow Based Programming [fbp].

For more information on how some of the main NiFi concepts map to FBP, check out https://nifi.apache.org/docs/nifi-docs/html/overview.html

Runtime API

It is developed in Java’s Spring Boot framework. It is an HTTP API to support the runtime environment for DCAE-MOD. It has two major functionalities:

  1. It accepts changes on the flow-graph via fbp protocol

  2. It generates and distributes blueprints based on the change made on the flow-graph

Blueprint Generator

This tool allows the user to create a blueprint from a component spec json file. This tool is used by the runtime api.

Inventory API

DCAE Inventory is a web service that provides the following:

  1. Real-time data on all DCAE services and their components

  2. Comprehensive details on available DCAE service types

DCAE Inventory is a composite API that relies on other APIs to obtain resources on underlying components and uses these resources to compose a DCAE service resource. In addition, DCAE Inventory will store data that is unique to the DCAE service level including:

  1. DCAE service metadata

  2. DCAE service type description and composition details

  3. Relationships between DCAE service and DCAE service types and their respective VNF and VNF types

DCAE Inventory has a REST interface to service client requests. It has a well-defined query interface that would filter result sets based upon resource attributes.

Here, think of it as a back end API for the DCAE dashboard. The runtime posts Cloudify Blueprints to this API so they show up in the DCAE dashboard.

DCAE Dashboard

The DCAE dashboard provides visibility into running DCAE services for operational purposes. It queries the DCAE Inventory for aggregate details on all the running DCAE services and for getting up-to-date status information on DCAE services and their components.

End-to-End Flow

A model/component/microservice can be onboarded by an mS Developer by posting a spec file on the onboarding API. Alternatively, an Acumos model can be onboarded using the Acumos Adapter. Once successfully onboarded, the genprocessor converts them to JARs and onboards them into Nifi, i.e. DCAE MOD. These artifacts are now available to use from the modified Nifi Web UI, i.e. the DCAE Designer.

The registry api offers version control and retrieval of flows. The distributor api can be used to set distribution targets. Once a flow is designed and distributed, it goes to the distributor api, which posts graph changes (in accordance with fbp) to the runtime api. The runtime api generates and distributes blueprints based on the changes made to the flow-graph. These blueprints, received by the DCAE inventory, can then be viewed and deployed from the DCAE dashboard.

Component Specification

What is Component Specification?

This page will discuss categories defined in component specification schema and their usage.

Meta Schema Definition

The “Meta Schema” implementation defines how component specification JSON schemas can be written to define user input. It is itself a JSON schema (thus it is a “meta schema”). It requires the name of the component entry, the component type (either ‘cdap’ or ‘docker’), and a description under the “self” object. The meta schema version must be specified as the value of the “version” key. Then the input schema itself is described.

There are four types of schema description objects: jsonschema for inline standard JSON Schema definitions of JSON inputs, delimitedschema for delimited data input using a JSON description defined by AT&T, unstructured for unstructured text, and reference, which allows a pointer to another artifact for a schema. The reference type allows for XML and Protocol Buffer schemas, but can be used as a pointer to JSON, Delimited Format, and Unstructured schemas as well.

Component Metadata

Metadata refers to the properties found under the self JSON. This group of properties is used to uniquely identify this component specification and identify the component that this specification is used to capture.

Example:

"self": {
    "version": "1.0.0",
    "name": "yourapp.component.kpi_anomaly",
    "description": "Classifies VNF KPI data as anomalous",
    "component_type": "docker"
},

self Schema:

  • version (string): Required. Semantic version for this specification

  • name (string): Required. Full name of this component, which is also used as this component’s catalog id

  • description (string): Required. Human-readable text describing the component and the component’s functional purpose

  • component_type (string): Required. Identifies which containerization technology this component uses: docker or cdap

Interfaces

Interfaces are the JSON objects found under the streams key and the services key. These are used to describe the interfaces that the component uses and the interfaces that the component provides. The description of each interface includes the associated data format.

Streams
  • The streams JSON is for specifying the data produced for consumption by other components, and the streams the component expects to subscribe to that are produced by other components. These are “fire and forget” interfaces, where the publisher of a stream does not expect or parse a response from the subscriber.

  • The term stream here is abstract and refers neither to “CDAP streams” nor to “DMaaP feeds”. While a stream is very likely a DMaaP feed, it could also be a direct stream of data routed via HTTP. It abstractly refers to a sequence of data leaving a publisher.

  • Streams have anonymous publish/subscribe semantics, which decouples the production of information from its consumption. Like the component specification, the data format specification is represented/validated against this Data Format json schema

  • In general, components are not aware of who they are communicating with.

  • Instead, components that are interested in data, subscribe to the relevant stream; components that generate data publish to the relevant stream.

  • There can be multiple publishers and subscribers to a stream. Streams are intended for unidirectional, streaming communication.

Streams interfaces that implement an HTTP endpoint must support POST.

Streams are split into:

• subscribes (JSON list): Required. List of all stream interfaces that this component exposes for subscribing.
• publishes (JSON list): Required. List of all stream interfaces that this component publishes onto.
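Putting the two halves together, the streams section of the complete Docker example later on this page pairs one HTTP subscriber with one HTTP publisher:

"streams": {
    "subscribes": [{
        "format": "dcae.vnf.kpi",
        "version": "1.0.0",
        "route": "/data",
        "type": "http"
    }],
    "publishes": [{
        "format": "yourapp.format.integerClassification",
        "version": "1.0.0",
        "config_key": "prediction",
        "type": "http"
    }]
}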

Subscribes

Example:

"streams": {
    "subscribes": [{
        "format": "dcae.vnf.kpi",
        "version": "1.0.0",
        "route": "/data",        // for CDAP this value is not used
        "type": "http"
    }],
...
}

This describes that yourapp.component.kpi_anomaly exposes an HTTP endpoint called /data which accepts requests that have the data format of dcae.vnf.kpi version 1.0.0.

subscribes Schema:

• format (string): Required. Data format id of the data format used by this interface.
• version (string): Required. Data format version of the data format used by this interface.
• route (string): Required for http and data_router. The HTTP route that this interface listens on.
• config_key (string): Required for message_router, data_router, and kafka. The JSON key in the generated application configuration that holds this interface's connection information.
• type (string): Required. Type of stream: http, message_router, data_router, kafka.

Message router

Message router subscribers are HTTP clients rather than HTTP services and perform an HTTP GET call. Thus, the message router subscriber description is structured like the message router publisher description and requires config_key:

"streams": {
    "subscribes": [{
        "format": "dcae.some-format",
        "version": "1.0.0",
        "config_key": "some_format_handle",
        "type": "message router"
    }],
...
}
Data router

Data router subscribers are HTTP or HTTPS services that handle PUT requests from data router. Developers must provide the route, or URL path/endpoint, that is expected to handle data router requests. This will be used to construct the delivery URL needed to register the subscriber to the provisioned feed. Developers must also provide a config_key because there is dynamic configuration information associated with the feed that the application will need, e.g. username and password. See the page on DMaaP connection objects for more details on the configuration information.

Example (not tied to the larger example):

"streams": {
    "subscribes": [{
        "config_key": "some-sub-dr",
        "format": "sandbox.platform.any",
        "route": "/identity",
        "type": "data_router",
        "version": "0.1.0"
    }],
...
}
Kafka

Kafka subscribers are clients fetching data directly from Kafka and, like message router subscribers, require a config_key:

"streams": {
    "subscribes": [{
        "format": "dcae.some-format",
        "version": "1.0.0",
        "config_key": "some_format_handle",
        "type": "kafka"
    }],
...
}
Publishes

Example:

"streams": {
...
    "publishes": [{
        "format": "yourapp.format.integerClassification",
        "version": "1.0.0",
        "config_key": "prediction",
        "type": "http"
    }]
},

This describes that yourapp.component.kpi_anomaly publishes by making POST requests to streams that support the data format yourapp.format.integerClassification version 1.0.0.

publishes Schema:

• format (string): Required. Data format id of the data format used by this interface.
• version (string): Required. Data format version of the data format used by this interface.
• config_key (string): Required. The JSON key in the generated application configuration that will be used to pass the downstream component's (the subscriber's) connection information.
• type (string): Required. Type of stream: http, message_router, data_router, kafka.

Message router

Message router publishers are HTTP clients of DMaaP message router. Developers must provide a config_key because there is dynamic configuration information associated with the topic that the application needs to receive, e.g. topic URL, username, password. See the page on DMaaP connection objects for more details on the configuration information.

Example (not tied to the larger example):

"streams": {
...
    "publishes": [{
        "config_key": "some-pub-mr",
        "format": "sandbox.platform.any",
        "type": "message_router",
        "version": "0.1.0"
    }]
}
Data router

Data router publishers are HTTP clients that make PUT requests to data router. Developers must also provide a config_key because there is dynamic configuration information associated with the feed that the application will need to receive, e.g. publish URL, username, password. See the page on DMaaP connection objects for more details on the configuration information.

Example (not tied to the larger example):

"streams": {
...
    "publishes": [{
        "config_key": "some-pub-dr",
        "format": "sandbox.platform.any",
        "type": "data_router",
        "version": "0.1.0"
    }]
}
Kafka

Kafka publishers are clients publishing data directly to Kafka and, like message router publishers, require a config_key:

"streams": {
    "publishes": [{
        "format": "dcae.some-format",
        "version": "1.0.0",
        "config_key": "some_format_handle",
        "type": "kafka"
    }],
...
}
Quick Reference

Refer to this Quick Reference for a comparison of the Streams ‘Publishes’ and ‘Subscribes’ sections.

Services
  • The publish / subscribe model is a very flexible communication paradigm, but its many-to-many one-way transport is not appropriate for RPC request / reply interactions, which are often required in a distributed system.

  • Request / reply is done via a Service, which is defined by a pair of messages: one for the request and one for the reply.

Services are split into:

• calls (JSON list): Required. List of all service interfaces that this component calls.
• provides (JSON list): Required. List of all service interfaces that this component exposes and provides.
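For reference, the services section of the complete Docker example later on this page combines one call and one provide:

"services": {
    "calls": [{
        "config_key": "vnf-db",
        "request": { "format": "dcae.vnf.meta", "version": "1.0.0" },
        "response": { "format": "dcae.vnf.kpi", "version": "1.0.0" }
    }],
    "provides": [{
        "route": "/score-vnf",
        "request": { "format": "dcae.vnf.meta", "version": "1.0.0" },
        "response": { "format": "yourapp.format.integerClassification", "version": "1.0.0" }
    }]
}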

Calls

The JSON services/calls is for specifying that the component relies on an HTTP(S) service—the component sends that service an HTTP request, and that service responds with an HTTP reply. An example of this is how string matching (SM) depends on the AAI Broker. SM performs a synchronous REST call to the AAI broker, providing it the VMNAME of the VNF, and the AAI Broker responds with additional details about the VNF. This dependency is expressed via services/calls. In contrast, the output of string matching (the alerts it computes) is sent directly to policy as a fire-and-forget interface, so that is an example of a stream.

Example:

"services": {
    "calls": [{
        "config_key": "vnf-db",
        "request": {
            "format": "dcae.vnf.meta",
            "version": "1.0.0"
            },
        "response": {
            "format": "dcae.vnf.kpi",
            "version": "1.0.0"
            }
    }],
...
}

This describes that yourapp.component.kpi_anomaly will make HTTP calls to a downstream component that accepts requests of data format dcae.vnf.meta version 1.0.0 and is expecting the response to be dcae.vnf.kpi version 1.0.0.

calls Schema:

• request (JSON object): Required. Description of the expected request for this downstream interface.
• response (JSON object): Required. Description of the expected response for this downstream interface.
• config_key (string): Required. The JSON key in the generated application configuration that will be used to pass the downstream component's connection information.

The JSON object schema for both request and response:

• format (string): Required. Data format id of the data format used by this interface.
• version (string): Required. Data format version of the data format used by this interface.

Provides

Example:

"services": {
...
    "provides": [{
        "route": "/score-vnf",
        "request": {
            "format": "dcae.vnf.meta",
            "version": "1.0.0"
            },
        "response": {
            "format": "yourapp.format.integerClassification",
            "version": "1.0.0"
            }
    }]
},

This describes that yourapp.component.kpi_anomaly provides a service interface and it is exposed on the /score-vnf HTTP endpoint. The endpoint accepts requests that have the data format dcae.vnf.meta version 1.0.0 and gives back a response of yourapp.format.integerClassification version 1.0.0.

provides Schema for a Docker component:

• request (JSON object): Required. Description of the expected request for this interface.
• response (JSON object): Required. Description of the expected response for this interface.
• route (string): Required. The HTTP route that this interface listens on.

The JSON object schema for both request and response:

• format (string): Required. Data format id of the data format used by this interface.
• version (string): Required. Data format version of the data format used by this interface.

Note, for CDAP, there is a slight variation due to the way CDAP exposes services:

"provides":[                             // note this is a list of JSON
   {
      "request":{  ...},
      "response":{  ...},
      "service_name":"name CDAP service",
      "service_endpoint":"greet",         // E.g the URL is /services/service_name/methods/service_endpoint
      "verb":"GET"                        // GET, PUT, or POST
   }
]

provides Schema for a CDAP component:

• request (JSON object): Required. Description of the expected request data format for this interface.
• response (JSON object): Required. Description of the expected response for this interface.
• service_name (string): Required. The CDAP service name (e.g. “Greeting”).
• service_endpoint (string): Required. The CDAP service endpoint for this service_name (e.g. “/greet”).
• verb (string): Required. ‘GET’, ‘PUT’, or ‘POST’.
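Filling in the skeleton above, a hypothetical CDAP provides entry (the data formats are borrowed from the earlier examples); per the comment in the skeleton, this service would be reachable at /services/Greeting/methods/greet:

"provides": [
    {
        "request": { "format": "dcae.vnf.meta", "version": "1.0.0" },
        "response": { "format": "yourapp.format.integerClassification", "version": "1.0.0" },
        "service_name": "Greeting",
        "service_endpoint": "greet",
        "verb": "GET"
    }
]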

Parameters

parameters is where to specify the component’s application configuration parameters that are not connection information.

• parameters (JSON array): Each entry is a parameter object.

A parameter object has the following available properties:

• name (string): Required. The property name that will be used as the key in the generated config.
• value (any): Required. The default value for the given parameter.
• description (string): Required. Human-readable text describing the parameter, such as what it is for.
• type (string): The required data type for the parameter.
• required (boolean): An optional key that declares a parameter as required (true) or not (false). Default: true.
• constraints (array): The optional list of sequenced constraint clauses for the parameter. See below.
• entry_schema (string): The optional key that is used to declare the name of the Datatype definition for entries of set types such as the TOSCA ‘list’ or ‘map’. Only one level is supported at this time.
• designer_editable (boolean): An optional key that declares a parameter to be editable by the designer (true) or not (false).
• sourced_at_deployment (boolean): An optional key that declares that a parameter's value is assigned at deployment time (true).
• policy_editable (boolean): An optional key that declares a parameter to be editable by policy (true) or not (false).
• policy_schema (array): The optional list of schema definitions used for policy. See below.

Example:

"parameters": [
    {
        "name": "threshold",
        "value": 0.75,
        "description": "Probability threshold to exceed to be anomalous"
    }
]

Many of the parameter properties have been copied from TOSCA model property definitions and are to be used for service design composition and policy creation. See section 3.5.8 *Property definition*.

The property constraints is a list of objects, where each constraint object is one of:

• equal: Constrains a property or parameter to a value equal to (‘=’) the value declared.
• greater_than (number): Constrains a property or parameter to a value greater than (‘>’) the value declared.
• greater_or_equal (number): Constrains a property or parameter to a value greater than or equal to (‘>=’) the value declared.
• less_than (number): Constrains a property or parameter to a value less than (‘<’) the value declared.
• less_or_equal (number): Constrains a property or parameter to a value less than or equal to (‘<=’) the value declared.
• valid_values (array): Constrains a property or parameter to a value that is in the list of declared values.
• length (number): Constrains the property or parameter to a value of a given length.
• min_length (number): Constrains the property or parameter to a value of a minimum length.
• max_length (number): Constrains the property or parameter to a value of a maximum length.

In the example above, threshold is the configuration parameter and will be set to 0.75 when the configuration is generated.
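A fuller, hypothetical parameter entry showing constraints together with the designer_editable, sourced_at_deployment, and policy_editable flags that the onboarding JSON schema later on this page marks as required (the values are illustrative):

"parameters": [
    {
        "name": "threshold",
        "value": 0.75,
        "description": "Probability threshold to exceed to be anomalous",
        "type": "number",
        "constraints": [
            { "greater_or_equal": 0 },
            { "less_or_equal": 1 }
        ],
        "designer_editable": false,
        "sourced_at_deployment": false,
        "policy_editable": true
    }
]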

The property policy_schema is a list of objects, where each policy_schema object has the following properties:

• name (string): Required. Parameter name.
• value (string): Default value for the parameter.
• description (string): Parameter description.
• type (enum): Required. Data type of the parameter: ‘string’, ‘number’, ‘boolean’, ‘datetime’, ‘list’, or ‘map’.
• required (boolean): Is the parameter required or not? Default: true.
• constraints (array): The optional list of sequenced constraint clauses for the parameter. See above.
• entry_schema (string): The optional key that is used to declare the name of the Datatype definition for entries of certain types. entry_schema must be defined when the type is either list or map. If the type is list and the entry type is a simple type (string, number, boolean, datetime), follow with a string describing the entry. If the type is list and the entry type is a map, follow with an array describing the keys for the entry map. If the type is list and the entry type is also a list, that is not currently supported. If the type is map, follow with an array describing the keys for the map.
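As an illustration of the rules above, a hypothetical policy_schema describing a list whose entries are maps with two keys (all names are illustrative):

"policy_schema": [
    {
        "name": "peers",
        "description": "List of peer entries, each a map",
        "type": "list",
        "entry_schema": [
            { "name": "host", "type": "string" },
            { "name": "port", "type": "number" }
        ]
    }
]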

Artifacts

artifacts contains a list of artifacts associated with this component. For Docker, this is the full path (including the registry) to the Docker image. For CDAP, this is the full path to the CDAP jar.

• artifacts (JSON array): Each entry is an artifact object.

artifact Schema:

• uri (string): Required. URI to the artifact (full path).
• type (string): Required. docker image or jar.
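Example (excerpted from the complete Docker example later on this page):

"artifacts": [{
    "uri": "fake.nexus.att.com/dcae/kpi_anomaly:1.0.0",
    "type": "docker image"
}]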

Auxiliary
Health check

Component developers are required to provide a way for the platform to periodically check the health of their running components. The details of the definition used by your component are provided through the Docker auxiliary specification.

The information contained in the auxilary_docker field follows the Docker component specification schema. Some of its properties include:

• healthcheck: Defines the health check that Consul should perform for this component.
• log_info: Component-specific details for logging, including the path in the container where logs are written, which can also be used for logstash/kibana style reporting.
• ports: Port mapping to be used for Docker containers. Each entry is of the format <container port>:<host port>.
• tls_info: Component information for the use of TLS certificates: where they will be available, and whether or not to use them.
• policy: Information for Policy configuration and reconfiguration.
• databases: Information about databases the application should connect to.
• volumes: Contains information on volume mapping for the Docker containers.

Schema portion:

"auxilary_docker": {
  "title": "Docker component specification schema",
  "type": "object",
  "properties": {
    "healthcheck": {
      "description": "Define the health check that Consul should perfom for this component",
      "type": "object",
      "oneOf": [
        { "$ref": "#/definitions/docker_healthcheck_http" },
        { "$ref": "#/definitions/docker_healthcheck_script" }
      ]
    },
    "ports": {
      "description": "Port mapping to be used for Docker containers. Each entry is of the format <container port>:<host port>.",
      "type": "array",
      "items": {
        "type": "string"
      }
    },
    "log_info": {
      "description": "Component specific details for logging",
      "type": "object",
      "properties": {
        "log_directory": {
          "description": "The path in the container where the component writes its logs. If the component is following the EELF requirements, this would be the directory where the four EELF files are being written. (Other logs can be placed in the directory--if their names in '.log', they'll also be sent into ELK.)",
          "type": "string"
        },
        "alternate_fb_path": {
          "description": "By default, the log volume is mounted at /var/log/onap/<component_type> in the sidecar container's file system. 'alternate_fb_path' allows overriding the default.  Will affect how the log data can be found in the ELK system.",
          "type": "string"
        }
      },
      "additionalProperties": false
    },
    "tls_info": {
      "description": "Component information to use tls certificates",
      "type": "object",
      "properties": {
        "cert_directory": {
          "description": "The path in the container where the component certificates will be placed by the init container",
          "type": "string"
        },
        "use_tls": {
          "description": "Boolean flag to determine if the application is using tls certificates",
          "type": "boolean"
        },
        "use_external_tls": {
          "description": "Boolean flag to determine if the application is using tls certificates for external communication",
          "type": "boolean"
        }
      },
      "required": [
        "cert_directory","use_tls"
      ],
      "additionalProperties": false
    },
    "databases": {
      "description": "The databases the application is connecting to using the pgaas",
      "type": "object",
      "additionalProperties": {
        "type": "string",
        "enum": [
          "postgres"
        ]
      }
    },
    "policy": {
      "properties": {
        "trigger_type": {
          "description": "Only value of docker is supported at this time.",
          "type": "string",
          "enum": ["docker"]
        },
        "script_path": {
          "description": "Script command that will be executed for policy reconfiguration",
          "type": "string"
        }
      },
      "required": [
        "trigger_type","script_path"
      ],
      "additionalProperties": false
    },
    "volumes": {
      "description": "Volume mapping to be used for Docker containers. Each entry is of the format below",
      "type": "array",
      "items": {
        "type": "object",
        "oneOf": [
          { "$ref": "#/definitions/host_path_volume" },
          { "$ref": "#/definitions/config_map_volume" }
        ]
      }
    }
  },
  "required": [
    "healthcheck"
  ],
  "additionalProperties": false
}

Component JSON Schema Definition

The schema file used for DCAE onboarding is maintained in Gerrit. It is reproduced below for documentation reference.

{
 "$schema": "http://json-schema.org/draft-04/schema#",
 "title": "Component specification schema",
 "type": "object",
 "properties": {
   "self": {
     "type": "object",
     "properties": {
       "version": {
         "$ref": "#/definitions/version"
       },
       "description": {
         "type": "string"
       },
       "component_type": {
         "type": "string",
         "enum": [
           "docker",
           "cdap"
         ]
       },
       "name": {
         "$ref": "#/definitions/name"
       }
     },
     "required": [
       "version",
       "name",
       "description",
       "component_type"
     ]
   },
   "streams": {
     "type": "object",
     "properties": {
       "publishes": {
         "type": "array",
         "uniqueItems": true,
         "items": {
           "oneOf": [
             { "$ref": "#/definitions/publisher_http" },
             { "$ref": "#/definitions/publisher_message_router" },
             { "$ref": "#/definitions/publisher_data_router" },
             { "$ref": "#/definitions/publisher_kafka" }
           ]
         }
       },
       "subscribes": {
         "type": "array",
         "uniqueItems": true,
         "items": {
           "oneOf": [
             { "$ref": "#/definitions/subscriber_http" },
             { "$ref": "#/definitions/subscriber_message_router" },
             { "$ref": "#/definitions/subscriber_data_router" },
             { "$ref": "#/definitions/subscriber_kafka" }
           ]
         }
       }
     },
     "required": [
       "publishes",
       "subscribes"
     ]
   },
   "services": {
     "type": "object",
     "properties": {
       "calls": {
         "type": "array",
         "uniqueItems": true,
         "items": {
           "$ref": "#/definitions/caller"
         }
       },
       "provides": {
         "type": "array",
         "uniqueItems": true,
         "items": {
           "$ref": "#/definitions/provider"
         }
       }
     },
     "required": [
       "calls",
       "provides"
     ]
   },
   "parameters" : {
     "anyOf" : [
       {"$ref": "#/definitions/docker-parameters"},
       {"$ref": "#/definitions/cdap-parameters"}
     ]
   },
   "auxilary": {
     "oneOf" : [
       {"$ref": "#/definitions/auxilary_cdap"},
       {"$ref": "#/definitions/auxilary_docker"}
     ]
   },
   "artifacts": {
     "type": "array",
     "description": "List of component artifacts",
     "items": {
       "$ref": "#/definitions/artifact"
     }
   },
   "policy_info": {
     "type": "object",
     "properties": {
       "policy":
       {
         "type": "array",
         "items":
         {
           "type": "object",
           "properties":
           {
             "node_label":
             {
               "type": "string"
             },
             "policy_id":
             {
               "type": "string"
             },
             "policy_model_id":
             {
               "type": "string"
             }
           },
           "required": ["node_label", "policy_model_id"]
         }
       }
     },
     "additionalProperties": false
   }
 },
 "required": [
   "self",
   "streams",
   "services",
   "parameters",
   "auxilary",
   "artifacts"
 ],
 "additionalProperties": false,
 "definitions": {
   "cdap-parameters": {
     "description" : "There are three seperate ways to pass parameters to CDAP: app config, app preferences, program preferences. These are all treated as optional.",
     "type": "object",
     "properties" : {
       "program_preferences": {
         "description" : "A list of {program_id, program_type, program_preference} objects where program_preference is an object passed into program_id of type program_type",
         "type": "array",
         "uniqueItems": true,
         "items": {
           "$ref": "#/definitions/program_preference"
         }
       },
       "app_preferences" : {
         "description" : "Parameters Passed down to the CDAP preference API",
         "type": "array",
         "uniqueItems": true,
         "items": {
           "$ref": "#/definitions/parameter"
         }
       },
       "app_config" : {
         "description" : "Parameters Passed down to the CDAP App Config",
         "type": "array",
         "uniqueItems": true,
         "items": {
           "$ref": "#/definitions/parameter"
         }
       }
     }
   },
   "program_preference": {
     "type": "object",
     "properties": {
       "program_type": {
         "$ref": "#/definitions/program_type"
       },
       "program_id": {
         "type": "string"
       },
       "program_pref":{
         "description" : "Parameters that the CDAP developer wants pushed to this program's preferences API. Optional",
         "type": "array",
         "uniqueItems": true,
         "items": {
           "$ref": "#/definitions/parameter"
         }
       }
     },
     "required": ["program_type", "program_id", "program_pref"]
   },
   "program_type": {
     "type": "string",
     "enum": ["flows","mapreduce","schedules","spark","workflows","workers","services"]
   },
   "docker-parameters": {
     "type": "array",
     "uniqueItems": true,
     "items": {
       "$ref": "#/definitions/parameter"
     }
   },
   "parameter": {
     "oneOf": [
       {"$ref": "#/definitions/parameter-list"},
       {"$ref": "#/definitions/parameter-other"}
     ]
   },
   "parameter-list": {
     "properties": {
       "name": {
         "type": "string"
       },
       "value": {
         "description": "Default value for the parameter"
       },
       "description": {
         "description": "Description for the parameter.",
         "type": "string"
       },
       "type": {
         "description": "Only valid type is list, the entry_schema is required - which contains the type of the list element. All properties set for the parameter apply to all elements in the list at this time",
         "type": "string",
         "enum": ["list"]
       },
       "required": {
         "description": "An optional key that declares a parameter as required (true) or not (false). Default is true.",
         "type": "boolean",
         "default": true
       },
       "constraints": {
         "description": "The optional list of sequenced constraint clauses for the parameter.",
         "type": "array",
         "items": {
           "$ref": "#/definitions/parameter-constraints"
         }
       },
       "entry_schema": {
         "description": "The optional property used to declare the name of the Datatype definition for entries of certain types. entry_schema must be defined when the type is list.  This is the only type it is currently supported for.",
         "type": "object",
         "uniqueItems": true,
         "items": {"$ref": "#/definitions/list-parameter"}
       },
       "designer_editable": {
         "description": "A required property that declares a parameter as editable by designer in SDC Tool (true) or not (false).",
         "type": "boolean"
       },
       "sourced_at_deployment": {
         "description": "A required property that declares that a parameter is assigned at deployment time (true) or not (false).",
         "type": "boolean"
       },
       "policy_editable": {
         "description": "A required property that declares a parameter as editable by DevOps in Policy UI (true) or not (false).",
         "type": "boolean"
       },
       "policy_group": {
         "description": "An optional property used to group policy_editable parameters into groups. Each group will become it's own policy model. Any parameters without this property will be grouped together to form their own policy model",
         "type": "string"
       },
       "policy_schema" :{
         "type": "array",
         "uniqueItems": true,
         "items": {"$ref": "#/definitions/policy_schema_parameter"}
       }
     },
     "required": [
       "name",
       "value",
       "description",
       "designer_editable",
       "policy_editable",
       "sourced_at_deployment",
       "entry_schema"
     ],
     "additionalProperties": false,
     "dependencies": {
       "policy_schema": ["policy_editable"]
     }
   },
   "parameter-other": {
     "properties": {
       "name": {
         "type": "string"
       },
       "value": {
         "description": "Default value for the parameter"
       },
       "description": {
         "description": "Description for the parameter.",
         "type": "string"
       },
       "type": {
         "description": "The required data type for the parameter.",
         "type": "string",
         "enum": [ "string", "number", "boolean", "datetime" ]
       },
       "required": {
         "description": "An optional key that declares a parameter as required (true) or not (false). Default is true.",
         "type": "boolean",
         "default": true
       },
       "constraints": {
         "description": "The optional list of sequenced constraint clauses for the parameter.",
         "type": "array",
         "items": {
           "$ref": "#/definitions/parameter-constraints"
         }
       },
       "designer_editable": {
         "description": "A required property that declares a parameter as editable by designer in SDC Tool (true) or not (false).",
         "type": "boolean"
       },
       "sourced_at_deployment": {
         "description": "A required property that declares that a parameter is assigned at deployment time (true) or not (false).",
         "type": "boolean"
       },
       "policy_editable": {
         "description": "A required property that declares a parameter as editable in Policy UI (true) or not (false).",
         "type": "boolean"
       },
       "policy_group": {
         "description": "An optional property used to group policy_editable parameters into groups. Each group will become it's own policy model. Any parameters without this property will be grouped together to form their own policy model",
         "type": "string"
       },
       "policy_schema" :{
         "description": "An optional property used to define policy_editable parameters as lists or maps",
         "type": "array",
         "uniqueItems": true,
         "items": {"$ref": "#/definitions/policy_schema_parameter"}
       }
     },
     "required": [
       "name",
       "value",
       "description",
       "designer_editable",
       "sourced_at_deployment",
       "policy_editable"
     ],
     "additionalProperties": false,
     "dependencies": {
       "policy_schema": ["policy_editable"]
     }
   },
   "list-parameter": {
     "type": "object",
     "properties": {
       "type": {
         "description": "The required data type for each parameter in the list.",
         "type": "string",
         "enum": ["string", "number"]
       }
     },
     "required": [
       "type"
     ],
     "additionalProperties": false
   },
   "policy_schema_parameter": {
     "type": "object",
     "properties": {
       "name": {
         "type": "string"
       },
       "value": {
         "description": "Default value for the parameter"
       },
       "description": {
         "description": "Description for the parameter.",
         "type": "string"
       },
       "type": {
         "description": "The required data type for the parameter.",
         "type": "string",
         "enum": [ "string", "number", "boolean", "datetime", "list", "map" ]
       },
       "required": {
         "description": "An optional key that declares a parameter as required (true) or not (false). Default is true.",
         "type": "boolean",
         "default": true
       },
       "constraints": {
         "description": "The optional list of sequenced constraint clauses for the parameter.",
         "type": "array",
         "items": {
           "$ref": "#/definitions/parameter-constraints"
         }
       },
       "entry_schema": {
         "description": "The optional key that is used to declare the name of the Datatype definition for entries of certain types. entry_schema must be defined when the type is either list or map. If the type is list and the entry type is a simple type (string, number, boolean, datetime), follow with a simple string to describe the entry type. If the type is list and the entry type is a map, follow with an array to describe the keys for the entry map. If the type is list and the entry type is also list, this is not currently supported here. If the type is map, then follow with an array to describe the keys for this map. ",
         "type": "array", "uniqueItems": true, "items": {"$ref": "#/definitions/policy_schema_parameter"}
       }
     },
     "required": [
       "name",
       "type"
     ],
     "additionalProperties": false
   },
   "parameter-constraints": {
     "type": "object",
     "additionalProperties": false,
     "properties": {
       "equal": {
         "description": "Constrains a property or parameter to a value equal to (‘=’) the value declared."
       },
       "greater_than": {
         "description": "Constrains a property or parameter to a value greater than (‘>’) the value declared.",
         "type": "number"
       },
       "greater_or_equal": {
         "description": "Constrains a property or parameter to a value greater than or equal to (‘>=’) the value declared.",
         "type": "number"
       },
       "less_than": {
         "description": "Constrains a property or parameter to a value less than (‘<’) the value declared.",
         "type": "number"
       },
       "less_or_equal": {
         "description": "Constrains a property or parameter to a value less than or equal to (‘<=’) the value declared.",
         "type": "number"
       },
       "valid_values": {
         "description": "Constrains a property or parameter to a value that is in the list of declared values.",
         "type": "array"
       },
       "length": {
         "description": "Constrains the property or parameter to a value of a given length.",
         "type": "number"
       },
       "min_length": {
         "description": "Constrains the property or parameter to a value to a minimum length.",
         "type": "number"
       },
       "max_length": {
         "description": "Constrains the property or parameter to a value to a maximum length.",
         "type": "number"
       }
     }
   },
   "stream_message_router": {
     "type": "object",
     "properties": {
       "format": {
         "$ref": "#/definitions/name"
       },
       "version": {
         "$ref": "#/definitions/version"
       },
       "config_key": {
         "type": "string"
       },
       "type": {
         "description": "Type of stream to be used",
         "type": "string",
         "enum": [
           "message router", "message_router"
         ]
       }
     },
     "required": [
       "format",
       "version",
       "config_key",
       "type"
     ]
   },
   "stream_kafka": {
     "type": "object",
     "properties": {
       "format": {
         "$ref": "#/definitions/name"
       },
       "version": {
         "$ref": "#/definitions/version"
       },
       "config_key": {
         "type": "string"
       },
       "type": {
         "description": "Type of stream to be used",
         "type": "string",
         "enum": [
           "kafka"
         ]
       }
     },
     "required": [
       "format",
       "version",
       "config_key",
       "type"
     ]
   },
   "publisher_http": {
     "type": "object",
     "properties": {
       "format": {
         "$ref": "#/definitions/name"
       },
       "version": {
         "$ref": "#/definitions/version"
       },
       "config_key": {
         "type": "string"
       },
       "type": {
         "description": "Type of stream to be used",
         "type": "string",
         "enum": [
           "http",
           "https"
         ]
       }
     },
     "required": [
       "format",
       "version",
       "config_key",
       "type"
     ]
   },
   "publisher_message_router": {
     "$ref": "#/definitions/stream_message_router"
   },
   "publisher_data_router": {
     "type": "object",
     "properties": {
       "format": {
         "$ref": "#/definitions/name"
       },
       "version": {
         "$ref": "#/definitions/version"
       },
       "config_key": {
         "type": "string"
       },
       "type": {
         "description": "Type of stream to be used",
         "type": "string",
         "enum": [
           "data router", "data_router"
         ]
       }
     },
     "required": [
       "format",
       "version",
       "config_key",
       "type"
     ]
   },
   "publisher_kafka": {
     "$ref": "#/definitions/stream_kafka"
   },
   "subscriber_http": {
     "type": "object",
     "properties": {
       "format": {
         "$ref": "#/definitions/name"
       },
       "version": {
         "$ref": "#/definitions/version"
       },
       "route": {
         "type": "string"
       },
       "type": {
         "description": "Type of stream to be used",
         "type": "string",
         "enum": [
           "http",
           "https"
         ]
       }
     },
     "required": [
       "format",
       "version",
       "route",
       "type"
     ]
   },
   "subscriber_message_router": {
     "$ref": "#/definitions/stream_message_router"
   },
   "subscriber_data_router": {
     "type": "object",
     "properties": {
       "format": {
         "$ref": "#/definitions/name"
       },
       "version": {
         "$ref": "#/definitions/version"
       },
       "route": {
         "type": "string"
       },
       "type": {
         "description": "Type of stream to be used",
         "type": "string",
         "enum": [
           "data router", "data_router"
         ]
       },
       "config_key": {
         "description": "Data router subscribers require config info to setup their endpoints to handle requests. For example, needs username and password",
         "type": "string"
       }
     },
     "required": [
       "format",
       "version",
       "route",
       "type",
       "config_key"
     ]
   },
   "subscriber_kafka": {
     "$ref": "#/definitions/stream_kafka"
   },
   "provider" : {
     "oneOf" : [
       {"$ref": "#/definitions/docker-provider"},
       {"$ref": "#/definitions/cdap-provider"}
     ]
   },
   "cdap-provider" : {
     "type": "object",
     "properties" : {
       "request": {
         "$ref": "#/definitions/formatPair"
       },
       "response": {
         "$ref": "#/definitions/formatPair"
       },
       "service_name" : {
         "type" : "string"
       },
       "service_endpoint" : {
         "type" : "string"
       },
       "verb" : {
         "type": "string",
         "enum": ["GET", "PUT", "POST", "DELETE"]
       }
     },
     "required" : [
       "request",
       "response",
       "service_name",
       "service_endpoint",
       "verb"
     ]
   },
   "docker-provider": {
     "type": "object",
     "properties": {
       "request": {
         "$ref": "#/definitions/formatPair"
       },
       "response": {
         "$ref": "#/definitions/formatPair"
       },
       "route": {
         "type": "string"
       },
       "verb": {
         "type": "string",
         "enum": ["GET", "PUT", "POST", "DELETE"]
       }
     },
     "required": [
       "request",
       "response",
       "route"
     ]
   },
   "caller": {
     "type": "object",
     "properties": {
       "request": {
         "$ref": "#/definitions/formatPair"
       },
       "response": {
         "$ref": "#/definitions/formatPair"
       },
       "config_key": {
         "type": "string"
       }
     },
     "required": [
       "request",
       "response",
       "config_key"
     ]
   },
   "formatPair": {
     "type": "object",
     "properties": {
       "format": {
         "$ref": "#/definitions/name"
       },
       "version": {
         "$ref": "#/definitions/version"
       }
     }
   },
   "name": {
     "type": "string"
   },
   "version": {
     "type": "string",
     "pattern": "^(\\d+\\.)(\\d+\\.)(\\*|\\d+)$"
   },
   "artifact": {
     "type": "object",
     "description": "Component artifact object",
     "properties": {
       "uri": {
         "type": "string",
         "description": "Uri to artifact"
       },
       "type": {
         "type": "string",
         "enum": ["jar", "docker image"]
       }
     },
     "required": ["uri", "type"]
   },

   "auxilary_cdap": {
     "title": "cdap component specification schema",
     "type": "object",
     "properties": {
       "streamname": {
         "type": "string"
       },
       "artifact_name" : {
         "type": "string"
       },
       "artifact_version" : {
         "type": "string",
         "pattern": "^(\\d+\\.)(\\d+\\.)(\\*|\\d+)$"
       },
       "namespace":{
         "type": "string",
         "description" : "optional"
       },
       "programs": {
         "type": "array",
         "uniqueItems": true,
         "items": {
           "$ref": "#/definitions/cdap_program"
         }
       }
     },
     "required": [
       "streamname",
       "programs",
       "artifact_name",
       "artifact_version"
     ]
   },
   "cdap_program_type": {
     "type": "string",
     "enum": ["flows","mapreduce","schedules","spark","workflows","workers","services"]
   },
   "cdap_program": {
     "type": "object",
     "properties": {
       "program_type": {
         "$ref": "#/definitions/cdap_program_type"
       },
       "program_id": {
         "type": "string"
       }
     },
     "required": ["program_type", "program_id"]
   },

   "auxilary_docker": {
     "title": "Docker component specification schema",
     "type": "object",
     "properties": {
       "healthcheck": {
         "description": "Define the health check that Consul should perfom for this component",
         "type": "object",
         "oneOf": [
           { "$ref": "#/definitions/docker_healthcheck_http" },
           { "$ref": "#/definitions/docker_healthcheck_script" }
         ]
       },
       "ports": {
         "description": "Port mapping to be used for Docker containers. Each entry is of the format <container port>:<host port>.",
         "type": "array",
         "items": {
           "type": "string"
         }
       },
       "log_info": {
         "description": "Component specific details for logging",
         "type": "object",
         "properties": {
           "log_directory": {
             "description": "The path in the container where the component writes its logs. If the component is following the EELF requirements, this would be the directory where the four EELF files are being written. (Other logs can be placed in the directory--if their names in '.log', they'll also be sent into ELK.)",
             "type": "string"
           },
           "alternate_fb_path": {
             "description": "By default, the log volume is mounted at /var/log/onap/<component_type> in the sidecar container's file system. 'alternate_fb_path' allows overriding the default.  Will affect how the log data can be found in the ELK system.",
             "type": "string"
           }
         },
         "additionalProperties": false
       },
       "tls_info": {
         "description": "Component information to use tls certificates",
         "type": "object",
         "properties": {
           "cert_directory": {
             "description": "The path in the container where the component certificates will be placed by the init container",
             "type": "string"
           },
           "use_tls": {
             "description": "Boolean flag to determine if the application is using tls certificates",
             "type": "boolean"
           },
           "use_external_tls": {
             "description": "Boolean flag to determine if the application is using tls certificates for external communication",
             "type": "boolean"
           }
         },
         "required": [
           "cert_directory","use_tls"
         ],
         "additionalProperties": false
       },
       "databases": {
         "description": "The databases the application is connecting to using the pgaas",
         "type": "object",
         "additionalProperties": {
           "type": "string",
           "enum": [
             "postgres"
           ]
         }
       },
       "policy": {
         "properties": {
           "trigger_type": {
             "description": "Only value of docker is supported at this time.",
             "type": "string",
             "enum": ["docker"]
           },
           "script_path": {
             "description": "Script command that will be executed for policy reconfiguration",
             "type": "string"
           }
         },
         "required": [
           "trigger_type","script_path"
         ],
         "additionalProperties": false
       },
       "volumes": {
         "description": "Volume mapping to be used for Docker containers. Each entry is of the format below",
         "type": "array",
         "items": {
           "type": "object",
           "oneOf": [
             { "$ref": "#/definitions/host_path_volume" },
             { "$ref": "#/definitions/config_map_volume" }
           ]
         }
       }
     },
     "required": [
       "healthcheck"
     ],
     "additionalProperties": false
   },
   "host_path_volume": {
     "type": "object",
     "properties": {
       "host": {
         "type": "object",
         "path": {
           "type": "string"
         }
       },
       "container": {
         "type": "object",
         "bind": {
           "type": "string"
         },
         "mode": {
           "type": "string"
         }
       }
     },
     "required": ["host", "container"]
   },
   "config_map_volume": {
     "type": "object",
     "properties": {
       "config_volume": {
         "type": "object",
         "name": {
           "type": "string"
         }
       },
       "container": {
         "type": "object",
         "bind": {
           "type": "string"
         },
         "mode": {
           "type": "string"
         }
       }
     },
     "required": ["config_volume", "container"]
   },
   "docker_healthcheck_http": {
     "properties": {
       "type": {
         "description": "Consul health check type",
         "type": "string",
         "enum": [
           "http",
           "https"
         ]
       },
       "interval": {
         "description": "Interval duration in seconds i.e. 10s",
         "default": "15s",
         "type": "string"
       },
       "timeout": {
         "description": "Timeout in seconds i.e. 10s",
         "default": "1s",
         "type": "string"
       },
       "endpoint": {
         "description": "Relative endpoint used by Consul to check health by making periodic HTTP GET calls",
         "type": "string"
       }
     },
     "required": [
       "type",
       "endpoint"
     ]
   },
   "docker_healthcheck_script": {
     "properties": {
       "type": {
         "description": "Consul health check type",
         "type": "string",
         "enum": [
           "script",
           "docker"
         ]
       },
       "interval": {
         "description": "Interval duration in seconds i.e. 10s",
         "default": "15s",
         "type": "string"
       },
       "timeout": {
         "description": "Timeout in seconds i.e. 10s",
         "default": "1s",
         "type": "string"
       },
       "script": {
         "description": "Script command that will be executed by Consul to check health",
         "type": "string"
       }
     },
     "required": [
       "type",
       "script"
     ]
   }
 }
}

Component Spec Requirements

The component specification contains the following groups of information.

Auxiliary Details

The auxilary section contains Docker-specific details like health check, port mapping, volume mapping, and policy reconfiguration script details.

• healthcheck (JSON object): Required. Health check definition details.
• ports (JSON array): Each array item maps a container port to a host port. See the example below.
• volumes (JSON array): Each array item contains a volume definition: either a host path volume or a config map volume.
• policy (JSON object): Required. Policy reconfiguration script details.
• tls_info (JSON object): Optional. Information about the usage of TLS certificates.

Health Check Definition

The platform currently supports http and docker script based health checks.

When choosing a value for interval, consider that overly frequent health checks will put unnecessary load on the platform. If there is a problematic resource, then more frequent health checks are warranted (e.g. 15s or 60s), but as stability increases these values can increase as well (e.g. 300s).

When choosing a value for timeout, consider that too small a value will result in increasing timeout failures, and too large a value will delay notification of a resource problem. A suggestion is to start with 5s and adjust from there.

http

• type (string): Required. http.
• interval (string): Interval duration in seconds, e.g. 60s.
• timeout (string): Timeout in seconds, e.g. 5s.
• endpoint (string): Required. GET endpoint provided by the component for checking health.

Example:

"auxilary": {
    "healthcheck": {
        "type": "http",
        "interval": "15s",
        "timeout": "1s",
        "endpoint": "/my-health"
    }
}
docker script example

• type (string): Required. docker.
• interval (string): Interval duration in seconds, e.g. 15s.
• timeout (string): Timeout in seconds, e.g. 1s.
• script (string): Required. Full path of a script that exists in the Docker container to be executed.

During deployment, the K8S plugin maps the healthcheck defined here into a Kubernetes readiness probe.

Kubernetes execs the script in the container (using the docker exec API) and examines the script's result to determine whether your component is healthy. Your component is considered healthy when the script returns 0; otherwise it is considered unhealthy.

Example:

"auxilary": {
    "healthcheck": {
        "type": "docker",
        "script": "/app/resources/check_health.py",
        "timeout": "30s",
        "interval": "180s"
    }
}
Ports

This method of exposing/mapping a local port to a host port is NOT RECOMMENDED because of the possibility of port conflicts: if multiple instances of a Docker container run on the same host, there will be port conflicts. Use at your own risk. (The preferred way to expose a port is to do so in the Dockerfile, as described here.)

"auxilary": {
    "ports": ["8080:8000"]
}

In the example above, container port 8080 maps to host port 8000.

Volume Mapping
"auxilary": {
    "volumes": [
        {
           "container": {
               "bind": "/tmp/docker.sock",
               "mode": "ro"
            },
            "host": {
                "path": "/var/run/docker.sock"
            }
        },
        {
           "container": {
               "bind": "/tmp/mount_path"
               "mode": "ro"
            },
            "config_volume": {
                "name": "config_map_name"
            }
        }
    ]
}

At the top-level:

• volumes (array): Each entry pairs a container object with either a host or a config_volume object.

The container object contains:

• bind (string): Path to the container volume.
• mode (string): ro indicates a read-only volume; rw indicates that the container can write into the bind mount.

The host object contains:

• path (string): Path to the host volume.

The config_volume object contains:

• name (string): Name of the config map.

Here is an example of the minimal JSON with host path volume that must be provided as an input:

"auxilary": {
    "volumes": [
        {
           "container": {
               "bind": "/tmp/docker.sock"
            },
            "host": {
                "path": "/var/run/docker.sock"
            }
        }
    ]
}

In the example above, the container volume “/tmp/docker.sock” maps to host volume “/var/run/docker.sock”.

Here is an example of the minimal JSON with config map volume that must be provided as an input:

"auxilary": {
    "volumes": [
        {
           "container": {
               "bind": "/tmp/mount_path"
            },
            "config_volume": {
                "name": "config_map_name"
            }
        }
    ]
}

In the example above, config map named “config_map_name” is mounted at “/tmp/mount_path”.

Policy

Policy changes made in the Policy UI will be provided to the Docker component by triggering a script that is defined here.

• reconfigure_type (string): Required. The current value supported is policy.
• script_path (string): Required. The current value for the ‘policy’ reconfigure_type must be “/opt/app/reconfigure.sh”.

Example:

"auxilary": {
    "policy": {
        "reconfigure_type": "policy",
        "script_path": "/opt/app/reconfigure.sh"
    }
}

The Docker script interface is as follows: /opt/app/reconfigure.sh $reconfigure_type '{"updated_policies": ..., "updated_appl_config": ...}', with the arguments described below:

• reconfigure_type (string): policy.
• updated_policies (json): TBD.
• updated_appl_config (json): The complete generated app_config; not fully resolved, but policy-enabled parameters have been updated. To get the complete updated app_config, the component would have to call config-binding-service.

TLS Info

TLS Info is used to trigger the addition of init containers that provide the main application containers with certificates for internal and external communication.

• cert_directory (string): Required. Directory where certificates should be created, e.g. /opt/app/dcae-certificate.
• use_tls (boolean): Required. Indicates whether server certificates for internal communication should be added to the main container, e.g. true.
• use_external_tls (boolean): Optional. Indicates whether the component uses OOM CertService to acquire an operator certificate to protect external (between xNFs and ONAP) traffic. For the time being, only operator certificates from a CMPv2 server are supported, e.g. true.

Example:

"auxilary": {
        "tls_info": {
                "cert_directory": "/opt/app/dcae-certificate",
                "use_tls": true
                "use_external_tls": true,
        }
},
Docker Component Spec - Complete Example
{
    "self": {
        "version": "1.0.0",
        "name": "yourapp.component.kpi_anomaly",
        "description": "Classifies VNF KPI data as anomalous",
        "component_type": "docker"
    },
    "streams": {
        "subscribes": [{
            "format": "dcae.vnf.kpi",
            "version": "1.0.0",
            "route": "/data",
            "type": "http"
        }],
        "publishes": [{
            "format": "yourapp.format.integerClassification",
            "version": "1.0.0",
            "config_key": "prediction",
            "type": "http"
        }]
    },
    "services": {
        "calls": [{
            "config_key": "vnf-db",
            "request": {
                "format": "dcae.vnf.meta",
                "version": "1.0.0"
                },
            "response": {
                "format": "dcae.vnf.kpi",
                "version": "1.0.0"
                }
        }],
        "provides": [{
            "route": "/score-vnf",
            "request": {
                "format": "dcae.vnf.meta",
                "version": "1.0.0"
                },
            "response": {
                "format": "yourapp.format.integerClassification",
                "version": "1.0.0"
                }
        }]
    },
    "parameters": [
        {
            "name": "threshold",
            "value": 0.75,
            "description": "Probability threshold to exceed to be anomalous"
        }
    ],
    "auxilary": {
        "healthcheck": {
            "type": "http",
            "interval": "15s",
            "timeout": "1s",
            "endpoint": "/my-health"
        }
    },
    "artifacts": [{
        "uri": "fake.nexus.att.com/dcae/kpi_anomaly:1.0.0",
        "type": "docker image"
    }]
}

DMaaP connection objects

DMaaP Connection objects are generated by the DCAE Platform at runtime and passed to the component in its application_configuration.

Message Router

Publishers and subscribers have the same generated DMaaP Connection Object structure. Here’s an example for any given config_key (this is what will be in application_configuration):

{
    "type": "message_router",
    "aaf_username": "some-user",
    "aaf_password": "some-password",
    "dmaap_info": {
        "client_role": "com.dcae.member",
        "client_id": "1500462518108",
        "location": "mtc00",
        "topic_url": "https://we-are-message-router.us:3905/events/some-topic"
    }
}

At the top-level:

• type (string): Required as input. Must be message_router for message router topics.
• aaf_username (string): AAF username message router clients use to authenticate with secure topics.
• aaf_password (string): AAF password message router clients use to authenticate with secure topics.
• dmaap_info (JSON object): Required as input. Contains the topic connection details.

The dmaap_info object contains:

  • client_role (string) - AAF client role that’s requesting publish or subscribe access to the topic

  • client_id (string) - Client id for the given AAF client

  • location (string) - DCAE location for the publisher or subscriber, used to set up routing

  • topic_url (string) - Required as input. URL for accessing the topic to publish or receive events

Data Router
Publisher

Here’s an example of what the generated Dmaap Connection Object for Data Router Publisher looks like: (This is what will be in application_configuration)

{
    "type": "data_router",
    "dmaap_info": {
        "location": "mtc00",
        "publish_url": "https://we-are-data-router.us/feed/xyz",
        "log_url": "https://we-are-data-router.us/feed/xyz/logs",
        "username": "some-user",
        "password": "some-password",
        "publisher_id": "123456"
    }
}

At the top-level:

  • type (string) - Required as input. Must be data_router for data router feeds

  • dmaap_info (JSON object) - Required as input. Contains the feed connection details

The dmaap_info object contains:

  • location (string) - DCAE location for the publisher, used to set up routing

  • publish_url (string) - Required as input. URL to which the publisher makes Data Router publish requests

  • log_url (string) - URL from which log data for the feed can be obtained

  • username (string) - Username the publisher uses to authenticate to Data Router

  • password (string) - Password the publisher uses to authenticate to Data Router

  • publisher_id (string) - Publisher id in Data Router

Subscriber

Here’s an example of what the generated Dmaap Connection Object for a Data Router Subscriber looks like: (This is what will be passed in application_configuration)

{
    "type": "data_router",
    "dmaap_info": {
        "location": "mtc00",
        "delivery_url": "https://my-subscriber-app.dcae:8080/target-path",
        "username": "some-user",
        "password": "some-password",
        "subscriber_id": "789012"
    }
}

At the top-level:

  • type (string) - Required as input. Must be data_router for data router feeds

  • dmaap_info (JSON object) - Required as input. Contains the feed connection details

The dmaap_info object contains:

  • location (string) - DCAE location for the subscriber, used to set up routing

  • delivery_url (string) - URL to which Data Router should deliver files

  • username (string) - Username Data Router uses to authenticate to the subscriber when delivering files

  • password (string) - Password Data Router uses to authenticate to the subscriber when delivering files

  • subscriber_id (string) - Subscriber id in Data Router

Streams Formatting Quick Reference

Each of the following tables represents an example of a publisher and its subscriber, which are, of course, different components. The tables focus on the fields that differ for each of these types, to illustrate the relationship between config_key, the Dmaap Connection Object, and the generated configuration. Some notes on specific properties:

  • config_key is an arbitrary string, chosen by the component developer. It is returned in the generated configuration where it contains specific values for the target connection

  • format, version, and type properties in the subscriber would match these properties in the publisher

  • aaf_username and aaf_password may be different between the publisher and the subscriber

Using http
Publishing Component

component spec:

    "streams": {
        "publishes": [{
            "config_key": "prediction",
            "format": "some-format",
            "type": "http",
            "version": "0.1.0"
        }]
    }

runtime platform generated config:

    "streams_publishes": {
        "prediction": "10.100.1.100:32567/data"
    }

Subscribing Component

component spec:

    "streams": {
        "subscribes": [{
            "route": "/data",
            "format": "some-format",
            "type": "http",
            "version": "0.1.0"
        }]
    }

runtime platform generated config: N/A

Using Message Router
Publishing Component

Note: When deploying, this component should be deployed first to satisfy downstream dependencies. Refer to the --force option of the component ‘run’ command for more information.

component spec:

    "streams": {
        "publishes": [{
            "config_key": "mr_output",
            "type": "message_router",
            ...
        }]
    }

Dmaap Connection Object (for message router, this object is identical for the publisher and the subscriber):

    {
        "dmaap_info": {...}
    }

runtime platform generated config:

    "streams_publishes": {
        "mr_output": {
            "aaf_username": "pub-user",
            ...
            "type": "message_router",
            "dmaap_info": {
                "topic_url": "https://we-are-message-router.us:3905/events/some-topic"
            }
        }
    },
    "streams_subscribes": {...}

Subscribing Component

component spec:

    "streams": {
        "subscribes": [{
            "config_key": "mr_input",
            "type": "message_router",
            ...
        }]
    }

Dmaap Connection Object (for message router, this object is identical for the publisher and the subscriber):

    {
        "dmaap_info": {...}
    }

runtime platform generated config:

    "streams_publishes": {...},
    "streams_subscribes": {
        "mr_input": {
            "aaf_username": "sub-user",
            ...
            "type": "message_router",
            "dmaap_info": {
                "topic_url": "https://we-are-message-router.us:3905/events/some-topic"
            }
        }
    }

Using Data Router
Publishing Component

component spec:

    "streams": {
        "publishes": [{
            "config_key": "dr_output",
            "type": "data_router",
            ...
        }]
    }

Dmaap Connection Object:

    {
        "dmaap_info": {
            "location": "mtc00",
            "publish_url": "https://we-are-data-router.us/feed/xyz",
            "log_url": "https://we-are-data-router.us/feed/xyz/logs",
            "username": "pub-user",
            "password": "pub-password",
            "publisher_id": "123456"
        }
    }

runtime platform generated config:

    "streams_publishes": {
        "dr_output": {
            "type": "data_router",
            "dmaap_info": {
                "location": "mtc00",
                "publish_url": "https://we-are-data-router.us/feed/xyz",
                "log_url": "https://we-are-data-router.us/feed/xyz/logs",
                "username": "pub-user",
                "password": "pub-password",
                "publisher_id": "123456"
            }
        }
    },
    "streams_subscribes": { ... }

Subscribing Component

component spec:

    "streams": {
        "subscribes": [{
            "config_key": "dr_input",
            "type": "data_router",
            "route": "/target-path"
        }]
    }

Dmaap Connection Object:

    {
        "dmaap_info": {
            "location": "mtc00",
            "delivery_url": "https://my-subscriber-app.dcae:8080/target-path",
            "username": "sub-user",
            "password": "sub-password",
            "subscriber_id": "789012"
        }
    }

runtime platform generated config:

    "streams_publishes": { ... },
    "streams_subscribes": {
        "dr_input": {
            "type": "data_router",
            "dmaap_info": {
                "location": "mtc00",
                "delivery_url": "https://my-subscriber-app.dcae:8080/target-path",
                "username": "sub-user",
                "password": "sub-password",
                "subscriber_id": "789012"
            }
        }
    }

Configuration Quick Reference

Default Values

The component developer can provide default values for any parameter in the component specification. These defaults will be passed to the component in its generated configuration.
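For example, the “threshold” parameter from the complete component spec above declares a default:

    "parameters": [
        {
            "name": "threshold",
            "value": 0.75,
            "description": "Probability threshold to exceed to be anomalous"
        }
    ]

and, unless overridden, the component receives that default in its generated configuration, roughly as (a sketch; the exact envelope depends on the platform version):

    "threshold": 0.75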

Overridden/Entered Values

Depending on the other properties set for the parameter, the default value can be overridden at ‘design-time’, ‘deploy-time’ or once the microservice is running (‘run-time’).

Design-Time Input

  • Applies to: self-service components

  • Input provided by: the Service Designer

  • How it is provided: in the SDC/MOD UI

  • Component specification details: ‘designer_editable’ set to ‘true’

CLAMP Input

  • Applies to: components deployed by CLAMP

  • Input provided by: CLAMP

  • How it is provided: in the CLAMP UI

  • Component specification details: none. The developer provides CLAMP an email with the parameters to be supported.

Policy Input

  • Applies to: (not yet supported)

  • Input provided by: Operations

  • How it is provided: in the POLICY GUI

  • Component specification details: ‘policy_editable’ must be set to ‘true’ and a ‘policy_schema’ must be provided

  • Additional info for the component developer (Docker only): in the auxiliary section, add {"policy": {"trigger_type": "policy", "script_path": "/opt/app/reconfigure.sh"}}. The script interface is then "/opt/app/reconfigure.sh $trigger_type $updated_policy", where $updated_policy is JSON provided by the Policy Handler.

Deploy-Time Input

  • Applies to: manually deployed services

  • Input provided by: DevOps

  • How it is provided: in the DCAE Dashboard (or a Jenkins job)

  • Component specification details: ‘sourced_at_deployment’ must be set to ‘true’
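To make the policy reconfiguration script interface above concrete, here is a minimal sketch of such a reconfigure script. It is illustrative only - the path and behavior are whatever the component developer chooses, not a platform-provided script:

    #!/bin/sh
    # /opt/app/reconfigure.sh - illustrative sketch of a policy reconfigure hook.
    # $1: trigger type (e.g. "policy"); $2: updated policy JSON from the Policy Handler.
    trigger_type="$1"
    updated_policy="$2"

    # Persist the new policy so the application can re-read it.
    echo "$updated_policy" > /opt/app/config/updated_policy.json

    # Signal the application to reload its configuration (mechanism is component-specific).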

Data Formats

Data formats are descriptions of data; they are the data contract between your component and other components. When components are ‘composed’ into services in the Design tool, they can only be matched with components that have compatible data formats. Data formats are onboarded to the Design tool and assigned a UUID at that time. This UUID is then used to ensure compatibility among components. (If component X outputs data format ‘DF-Y’, and another component Z specifies ‘DF-Y’ as its input data format, then X is said to be composable with component Z.)

Since data formats will be shared across components, the onboarding catalog should be checked first to see if the desired data format is available before creating one. The vision is to have a repository of shared data formats that developers and teams can re-use and also provide them the means to extend and create new custom data formats. A data format is referenced by its data format id and version number.

JSON schema

The data format specification is represented by (and validated against) the Data Format JSON schema, described below:

Meta Schema Definition

The “Meta Schema” implementation defines how data format JSON schemas can be written to define user input. It is itself a JSON schema (thus a “meta schema”). It requires the name of the data format entry and the data format entry version, and allows a description, under the “self” object. The meta schema version must be specified as the value of the “dataformatversion” key. The input schema itself is then described as one of the four types listed below:

  • jsonschema - inline standard JSON Schema definition of JSON inputs

  • delimitedschema - delimited data input using a JSON description and a defined delimiter

  • unstructured - unstructured text

  • reference - a pointer to another artifact for a schema; allows for XML and Protocol Buffers schemas, but can be used to reference other JSON, delimitedschema and unstructured schemas as well

Example Schemas

By reference example - Common Event Format

First the full JSON schema description of the Common Event Format would be loaded with a name of “Common Event Format” and the current version of “25.0.0”.

Then the data format description is loaded by this schema:

{
    "self": {
        "name": "Common Event Format Definition",
        "version": "25.0.0",
        "description": "Common Event Format Definition"

    },
    "dataformatversion": "1.0.0",
    "reference": {
        "name": "Common Event Format",
        "format": "JSON",
        "version": "25.0.0"
   }
}
Simple JSON Example
{
    "self": {
        "name": "Simple JSON Example",
        "version": "1.0.0",
        "description": "An example of unnested JSON schema for Input and output"

    },
    "dataformatversion": "1.0.0",
    "jsonschema": {
        "$schema": "http://json-schema.org/draft-04/schema#",
        "type": "object",
        "properties": {
            "raw-text": {
                "type": "string"
            }
        },
        "required": ["raw-text"],
        "additionalProperties": false
    }
}
Nested JSON Example
{
    "self": {
        "name": "Nested JSON Example",
        "version": "1.0.0",
        "description": "An example of nested JSON schema for Input and output"
    },
    "dataformatversion": "1.0.0",
    "jsonschema": {
        "$schema": "http://json-schema.org/draft-04/schema#",
        "properties": {
            "numFound": {
                "type": "integer"
            },
            "start": {
                "type": "integer"
            },
            "engagements": {
                "type": "array",
                "items": {
                    "properties": {
                        "engagementID": {
                            "type": "string"
                        },
                        "transcript": {
                            "type": "array",
                            "items": {
                                "properties": {
                                    "type": {
                                        "type": "string"
                                    },
                                    "content": {
                                        "type": "string"
                                    },
                                    "senderName": {
                                        "type": "string"
                                    },
                                    "iso": {
                                        "type": "string"
                                    },
                                    "timestamp": {
                                        "type": "integer"
                                    },
                                    "senderId": {
                                        "type": "string"
                                    }
                                }
                            }
                        }
                    }
                }
            }
        },
        "additionalProperties": false
    }
}
Unstructured Example
{
    "self": {
        "name": "Unstructured Text Example",
        "version": "25.0.0",
        "description": "An example of unstructured text used for both input and output"
    },
    "dataformatversion": "1.0.0",
    "unstructured": {
        "encoding": "UTF-8"
    }
}

An example of a delimited schema

{
    "self": {
        "name": "Delimited Format Example",
        "version": "1.0.0",
        "description": "Delimited format example just for testing"

    },
    "dataformatversion": "1.0.0",
    "delimitedschema": {
        "delimiter": "|",
        "fields": [{
            "name": "field1",
            "description": "test field1",
            "fieldtype": "string"
        }, {
            "name": "field2",
            "description": "test field2",
            "fieldtype": "boolean"
        }]
    }
}

Note: The referenced data format (in this case, a schema named “Common Event Format” with version of “25.0.0”) must already exist in the onboarding catalog.

Working with Data Formats

Data formats can be validated against the Data Format JSON schema (see the sketch below). Once validated, the data format can be onboarded using DCAE-MOD.
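As a sketch, a data format file can be checked against the Data Format JSON schema with any JSON Schema validator, for example the CLI of the Python jsonschema package (the file names are illustrative, and the Data Format schema must be available locally):

    pip install jsonschema
    jsonschema -i my-data-format.json data-format-schema.json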

Glossary

A&AI - Active and Available Inventory

Inventory DB for all network components

CLAMP

Non DCAE Platform Component - Controls the input and processing for Closed Loop services.

Closed Loop

Services designed to monitor and report back to a controlling function that automatically deals with the event reported without human interaction.

Cloudify

Open Source application and network orchestration framework, based on TOSCA, used in DCAE to deploy platform and service components from Cloudify Blueprints.

Cloudify Blueprints

YAML formatted file used by Cloudify to deploy platform and service components. Contains all the information needed for installation.

Consul

Opensource Platform Component that supports Service Discovery, Configuration, and Healthcheck. Refer to Architecture for more information.

Component

Refers to a DCAE service component, which is a single microservice that is written to be run by the DCAE platform and to be composable to form a DCAE service. That composition occurs in the SDC.

Config Binding Service

DCAE Platform Component - Service Components use Config Binding Service to access Consul and retrieve configuration variables.

Component Specification

JSON formatted file that fully describes a component and its interfaces

Data Format / Data Format Specification

JSON formatted file that fully describes a component’s input or output

Deployment Handler

DCAE Platform Component - Receives Input from DTI Handler, and talks to Cloudify to deploy components.

Design-Time

Refers to when the System Designer uses a design tool to compose services from components in the catalog. The Designer can provide input to assign/override defaults for configuration for any parameter with the property ‘designer_editable’ set to ‘true’.

Deploy-Time

Refers to when a service is being deployed. This can be done automatically via the SDC Tool, or manually via the DCAE Dashboard or CLAMP UI. When manually deployed, DevOps can provide input to assign/override defaults for configuration for any parameter with the property ‘sourced_at_deployment’ set to ‘true’.

Docker

Opensource Platform for development of containerized applications in the cloud. Many DCAE service components and all DCAE collectors are written utilizing Docker.

Dmaap

AT&T data transportation service platform that supports message-based topics and file-based feeds. Runs locally at the Edge and Centrally.

Inventory

DCAE Platform Component - Postgres DB containing Cloudify Blueprints for platform and service components.

Policy

Refers to the setting of configuration parameters for a component, by Operations via the Policy UI.

Policy Handler

DCAE Platform Component that receives Policy updates from the Policy UI

Policy UI

Non DCAE Component - Policy User Interface where Operations assigns values to the configuration parameters specified for this purpose.

Run-Time

Refers to when a service is running on the platform. Often used in conjunction with DTI events, which occur at Run-Time.

SCH - Service Change Handler

DCAE Platform Component - Receives updates from SDC and updates Inventory

SDC - Service Design and Creation

ONAP design catalog for onboarding VNF/PNF packages

Self-Service

Refers to services that are supported by SDC, and that are automatically installed as a result of a Service Designer’s composition and submission of a service. Only a handful of services are ‘self-service’ currently. Most require manual effort to generate the Tosca Model files and Cloudify Blueprints.

Service Component

Microservice that provides network monitoring or analytic function on the DCAE platform.

Service

Generally composed of multiple service components, which is deployed to the DCAE platform.

VNF - Virtualized Network Function

A network function that runs on one or more virtualized machines.

DCAE Service components

Collectors

DataFile Collector(DFC)

Architecture
Introduction

DataFile Collector (DFC) is a part of DCAEGEN2. Some information about DFC and the reasons for its implementation can be found here: 5G bulk PM wiki page.

DFC will handle the collection of bulk PM data flow:
  1. Subscribes to fileReady DMaaP topic

  2. Collects the files from the xNF

  3. Sends the collected files and files’ data to DataRouter.

DFC is delivered as one Docker container which hosts the application server. See Delivery for more information about the Docker container.

Functionality
_images/DFC.png
Interaction

DFC interacts with the DMaaP Message Router using JSON, and with the Data Router using metadata in the header and the file in the body, via a secured protocol. So far, the implemented protocols to communicate with xNFs are HTTP, HTTPS, SFTP and FTPES. When the HTTP protocol is used, the following authentication methods are supported: basic authentication and bearer token (e.g. JWT) authentication. When the HTTPS protocol is used, the following authentication methods are supported: client certificate authentication, basic authentication, bearer token (e.g. JWT) authentication and no authentication.

Retry mechanism

DFC is designed to retry downloading and publishing of files in order to recover from temporary faults. Each time an event is received, DFC tries to download and publish each previously unpublished file in the event. The event is received from the Message Router (MR), the files are fetched from a PNF, and the files are published to the Data Router (DR). Both fetching and publishing of a file are retried a number of times with an increasing delay between attempts. After a number of attempts, DFC logs an error message and gives up. Failure to process one file does not affect the handling of others.

Generalized DFC

From version 1.2.1 onwards, DFC has a more general use. Instead of handling only PM files, it can handle any kind of file. The ‘changeIdentifier’ field in the fileReady VES event (which is reported from the PNFs) identifies the file type; this is mapped to a publishing stream in the DR, as sketched below.
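A minimal sketch of that mapping, using the default change identifier (the blueprint mechanics are covered in the Configuration section further below). The fileReady event carries:

    "notificationFields": {
        "changeIdentifier": "PM_MEAS_FILES",
        ...
    }

and the DFC application configuration keys its publishing streams by the same identifier:

    "streams_publishes": {
        "PM_MEAS_FILES": {
            "type": "data_router",
            "dmaap_info": <<feed0>>
        }
    }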

Delivery
Docker Container

DFC is delivered as a docker container. The latest released version can be downloaded from nexus:

docker pull nexus3.onap.org:10001/onap/org.onap.dcaegen2.collectors.datafile.datafile-app-server:1.2.2

For another version, replace the tag ‘1.2.2’ with any other suitable version. Available images can be seen by following this link.

ONAP Gerrit

It is possible to clone the Gerrit repository of DFC at this link. Choose your preferred settings (ssh, http or https, with or without hook) and run the command in your terminal.

If using Cloudify to deploy DFC, the blueprints are needed, and can be found here.

Logging

Logging is controlled by the configuration provided to datafile in the application.yaml file located in datafile-app-server/config folder.

To activate logging, please follow the instructions on this page.

Where is the log file?

The log file is located under /var/log/ONAP/ and called application.log.

Certificates (From AAF)

DCAE service components use common certificates generated from the AAF/test instance and made available during deployment by the DCAE TLS init container.

DCAE has a generalized process of certificate distribution as documented here - https://docs.onap.org/projects/onap-dcaegen2/en/latest/sections/tls_enablement.html

The updated certificates are located in https://git.onap.org/dcaegen2/deployments/tree/tls-init-container/tls

Certificates (Manual configuration of self-signed certificates)

Configuration of certificates in a test environment (for FTP over TLS):

DFC supports two protocols: FTPES and SFTP. For FTPES, mutual authentication with certificates is used. In our test environment, we use vsftpd to simulate the xNF, and we generate self-signed keys & certificates on both the vsftpd server and DFC.

1. Generate key/certificate with openssl for DFC:
openssl genrsa -out dfc.key 2048
openssl req -new -out dfc.csr -key dfc.key
openssl x509 -req -days 365 -in dfc.csr -signkey dfc.key -out dfc.crt
2. Generate key & certificate with openssl for vsftpd:
openssl genrsa -out ftp.key 2048
openssl req -new -out ftp.csr -key ftp.key
openssl x509 -req -days 365 -in ftp.csr -signkey ftp.key -out ftp.crt
3. Configure java keystore in DFC:

We have two keystore files, one for TrustManager, one for KeyManager.

For TrustManager:

  1. First, convert your certificate to DER format:

openssl x509 -outform der -in ftp.crt -out ftp.der

  2. Then copy the existing keystore and its password from the container:

kubectl cp <DFC pod>:/opt/app/datafile/etc/cert/trust.jks trust.jks
kubectl cp <DFC pod>:/opt/app/datafile/etc/cert/trust.pass trust.pass

  3. Import the DER certificate into the keystore:

keytool -import -alias ftp -keystore trust.jks -file ftp.der

For KeyManager:

  1. Import dfc.crt and dfc.key into dfc.jks. This is a bit troublesome: first convert the x509 certificate and key to a PKCS12 file:

openssl pkcs12 -export -in dfc.crt -inkey dfc.key -out cert.p12 -name dfc

Note: Make sure you put a password on the p12 file - otherwise you’ll get a null reference exception when you try to import it.

  2. Create a password file for cert.p12:

    printf "[your password]" > p12.pass

4. Update existing KeyStore files

Copy the new trust.jks, cert.p12 and the password files from the local environment to the DFC container.
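For example, mirroring the copy commands used above (a sketch; the exact target paths may differ per deployment):

    kubectl cp trust.jks <DFC pod>:/opt/app/datafile/etc/cert/trust.jks
    kubectl cp trust.pass <DFC pod>:/opt/app/datafile/etc/cert/trust.pass
    kubectl cp cert.p12 <DFC pod>:/opt/app/datafile/etc/cert/cert.p12
    kubectl cp p12.pass <DFC pod>:/opt/app/datafile/etc/cert/p12.pass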

5. Update the configuration in Consul
Change the path in Consul.
Consul’s address: http://<worker external IP>:<Consul external port>
_images/consule-certificate-update.png
6. Configure vsftpd:

update /etc/vsftpd/vsftpd.conf:

rsa_cert_file=/etc/ssl/private/ftp.crt
rsa_private_key_file=/etc/ssl/private/ftp.key
ssl_enable=YES
allow_anon_ssl=NO
force_local_data_ssl=YES
force_local_logins_ssl=YES

ssl_tlsv1=YES
ssl_sslv2=YES
ssl_sslv3=YES

require_ssl_reuse=NO
ssl_ciphers=HIGH

require_cert=YES
ssl_request_cert=YES
ca_certs_file=/home/vsftpd/myuser/dfc.crt
7. Other conditions

This has been tested with vsftpd and DFC, with self-signed certificates. In a real deployment, an ONAP-CA signed certificate should be used for DFC, and a vendor-CA signed certificate for the xNF.

Configuration and Performance

The DataFile Collector (DFC) gets fileReady messages from the Message Router (MR), sent from xNFs via the VES Collector. These messages contain data about files that are ready to be fetched from the xNF. DFC then collects these files from the xNF and publishes them to the Data Router (DR) on a feed. Consumers can subscribe to the feed from DR and process the files for their specific purposes. The connection between a file type and the feed it will be published to is the changeIdentifier. DFC can handle multiple changeIdentifier/feed combinations, see the picture below.

_images/DFC_config.png
Configuration

By default, DFC handles the “PM_MEAS_FILES” change identifier and publishes these files on the “bulk_pm_feed” feed. But it can also be configured to handle more/other change identifiers and publish them to more/other feeds. The configuration of DFC is controlled via a blueprint.

Blueprint Configuration Explained

For the communication with the Message Router, the user must provide the host name, port, and protocol of the DMaaP Message router.

  inputs:
    dmaap_mr_host:
      type: string
      description: dmaap messagerouter host
      default: message-router.onap.svc.cluster.local
    dmaap_mr_port:
      type: integer
      description: dmaap messagerouter port
      default: 3904
    dmaap_mr_protocol:
      type: string
      description: dmaap messagerouter protocol
      default: "http"

The user can also specify which version of DFC to use.

  inputs:
    tag_version:
      type: string
      description: DFC image tag/version
      default: "nexus3.onap.org:10001/onap/org.onap.dcaegen2.collectors.datafile.datafile-app-server:1.2.0"

The user can also enable secure communication with the DMaaP Message Router.

  inputs:
    secureEnableCert:
      type: boolean
      description: enable certificate based connection with DMaap
      default: false

DFC can handle multiple change identifiers. For each change identifier/feed combination the user must provide the change identifier, feed name, and feed location.

Note! The feed name provided should be used by the consumer(s) to set up the subscription to the feed.

The feed name and feed location are defined as inputs for the user to provide.

  inputs:
    feed0_name:
      type: string
      description: The name of the feed the files will be published to. Should be used by the subscriber.
      default: "bulk_pm_feed"
    feed0_location:
      type: string
      description: The location of the feed.
      default: "loc00"

The feed name shall be used in the definition of the feed for the DMaaP plugin under the “node_templates” section under a tag for the internal “feed identifier” for the feed (feed0 in the example).

  feed0:
    type: ccsdk.nodes.Feed
    properties:
      feed_name:
        get_input: feed0_name
      useExisting: true

The feed location shall be used under the streams_publishes section under a tag for the internal “feed identifier” for the feed.

    streams_publishes:
    - name: feed0
      location:
        get_input: feed0_location
      type: data_router

The change identifier shall be defined as an item under the streams_publishes tag in the “application_config” section. Under this tag the internal “feed identifier” for the feed shall also be added to get the info about the feed substituted in by CBS (that’s what the <<>> tags are for).

    application_config:
      service_calls: []
      streams_publishes:
        PM_MEAS_FILES:
          dmaap_info: <<feed0>>
          type: data_router

And, lastly, to set up the publication relationship for the feed, the “feed identifier” must be added to the “relationships” section of the blueprint.

 relationships:
  - type: ccsdk.relationships.publish_files
    target: feed0
Sample blueprint configuration

The format of the blueprint configuration that drives all behavior of DFC is probably best described using an example. The blueprint below configures DFC to handle the two feeds shown in the picture above.

inputs:
  dmaap_mr_host:
    type: string
    description: dmaap messagerouter host
    default: message-router.onap.svc.cluster.local
  dmaap_mr_port:
    type: integer
    description: dmaap messagerouter port
    default: 3904
  dmaap_mr_protocol:
    type: string
    description: dmaap messagerouter protocol
    default: "http"
  tag_version:
    type: string
    description: DFC image tag/version
    default: "nexus3.onap.org:10001/onap/org.onap.dcaegen2.collectors.datafile.datafile-app-server:1.2.0"
  replicas:
    type: integer
    description: number of instances
    default: 1
  secureEnableCert:
    type: boolean
    description: enable certificate based connection with DMaap
    default: false
  envs:
    default: {}
  feed0_name:
    type: string
    description: The name of the feed the files will be published to. Should be used by the subscriber.
    default: "bulk_pm_feed"
  feed0_location:
    type: string
    description: The location of the feed.
    default: "loc00"
  feed1_name:
    type: string
    description: The name of the feed the files will be published to. Should be used by the subscriber.
    default: "log_feed"
  feed1_location:
    type: string
    description: The location of the feed.
    default: "loc00"
node_templates:
  datafile-collector:
    type: dcae.nodes.ContainerizedServiceComponentUsingDmaap
    interfaces:
      cloudify.interfaces.lifecycle:
        start:
          inputs:
            envs:
              get_input: envs
    properties:
      application_config:
        service_calls: []
        dmaap.security.enableDmaapCertAuth: { get_input: secureEnableCert }
        streams_subscribes:
          dmaap_subscriber:
            dmaap_info:
              topic_url:
                { concat: [{ get_input: dmaap_mr_protocol },"://",{ get_input: dmaap_mr_host },
                           ":",{ get_input: dmaap_mr_port },"/events/unauthenticated.VES_NOTIFICATION_OUTPUT/OpenDcae-c12/C12"]}
        streams_publishes:
          PM_MEAS_FILES:
            dmaap_info: <<feed0>>
            type: data_router
          LOG_FILES:
            dmaap_info: <<feed1>>
            type: data_router
      image:
        get_input: tag_version
      service_component_type: datafile-collector
      streams_publishes:
      - name: feed0
        location:
          get_input: feed0_location
        type: data_router
      - name: feed1
        location:
          get_input: feed1_location
        type: data_router
    relationships:
      - type: ccsdk.relationships.publish_files
        target: feed0
      - type: ccsdk.relationships.publish_files
        target: feed1
  feed0:
    type: ccsdk.nodes.Feed
    properties:
      feed_name:
        get_input: feed0_name
      useExisting: true
  feed1:
    type: ccsdk.nodes.Feed
    properties:
      feed_name:
        get_input: feed1_name
      useExisting: true
Turn On/Off StrictHostChecking

StrictHostChecking is an SSH connection option which prevents man-in-the-middle (MitM) attacks. If it is enabled, the client checks the host name and public key provided by the server and compares them with the keys stored locally; the SSH connection can only be established if a matching entry is found. In the DataFile Collector this option is enabled (true) by default and requires a known_hosts list to be provided to the DFC container.

Important: DFC requires public keys in the ssh-rsa key algorithm

The known_hosts file is a list of entries in the following format:

<HostName/HostIP> <KeyAlgorithms> <Public Key>

e.g:

172.17.0.3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDRibxPenQC//2hzTuscdQDUA7P3gB9k4E8IgwCJxZM8YrJ2vqHomN8boByubebvo0L8+DWqzAtjy0nvgzsoEme9Y3lLWZ/2g9stlsOurwm+nFmWn/RPnwjqsAGNQjukV8C9D82rPMOYRES6qSGactFw4i8ZWLH8pmuJ3js1jb91HSlwr4zbZZd2XPKHk3nudyh8/Mwf3rndCU5FSnzjpBo55m48nsl2M1Tb6Xj1R0jQc5LWN0fsbrm5m+szsk4ccgHw6Vj9dr0Jh4EaIpNwA68k4LzrWb/N20bW8NzUsyDSQK8oEo1dvsiw8G9/AogBjQu9N4bqKWcrk5DOLCZHiCTSbbvdMWAMHXBdxEt9GZ0V53Fzwm8fI2EmIHdLhI4BWKZajumsfHRnd6UUxxna9ySt6qxVYZTyrPvfOFR3hRxVaxHL3EXplGeHT8fnoj+viai+TeSDdjMNwqU4MrngzrNKNLBHIl705uASpHUaRYQxUfWw/zgKeYlIbH+aGgE+4Q1vnh10Y35pATePRZgBIu+h2KsYBAtrP88LqW562OQ6T7VkfoAYwOjx9WV3/y5qonsStPhhzmJHDF22oBh5E5tZQxRcIlQF+5kHmXnFRUZtWshFnQATBh3yhOzJbh66CXn7aPj5Kl8TuuSN48zuI2lulVVqcv7GmTS0tWNpbxpzw==

The HostName can also be hashed, e.g.:

|1|FwSOxXYeJyZMAQM3jREjLSIcxRw=|o/b+CHEeHuED7WZS6sb3Y1IyHjk= ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDRibxPenQC//2hzTuscdQDUA7P3gB9k4E8IgwCJxZM8YrJ2vqHomN8boByubebvo0L8+DWqzAtjy0nvgzsoEme9Y3lLWZ/2g9stlsOurwm+nFmWn/RPnwjqsAGNQjukV8C9D82rPMOYRES6qSGactFw4i8ZWLH8pmuJ3js1jb91HSlwr4zbZZd2XPKHk3nudyh8/Mwf3rndCU5FSnzjpBo55m48nsl2M1Tb6Xj1R0jQc5LWN0fsbrm5m+szsk4ccgHw6Vj9dr0Jh4EaIpNwA68k4LzrWb/N20bW8NzUsyDSQK8oEo1dvsiw8G9/AogBjQu9N4bqKWcrk5DOLCZHiCTSbbvdMWAMHXBdxEt9GZ0V53Fzwm8fI2EmIHdLhI4BWKZajumsfHRnd6UUxxna9ySt6qxVYZTyrPvfOFR3hRxVaxHL3EXplGeHT8fnoj+viai+TeSDdjMNwqU4MrngzrNKNLBHIl705uASpHUaRYQxUfWw/zgKeYlIbH+aGgE+4Q1vnh10Y35pATePRZgBIu+h2KsYBAtrP88LqW562OQ6T7VkfoAYwOjx9WV3/y5qonsStPhhzmJHDF22oBh5E5tZQxRcIlQF+5kHmXnFRUZtWshFnQATBh3yhOzJbh66CXn7aPj5Kl8TuuSN48zuI2lulVVqcv7GmTS0tWNpbxpzw==

To provide the known_hosts list to DFC, execute the following steps:

  1. Create file called known_hosts with desired entries.

  2. Mount file using Kubernetes Config Map.

kubectl -n <ONAP NAMESPACE> create cm <config map name> --from-file <path to known_hosts file>

e.g:

kubectl -n onap create cm onap-dcae-dfc-known-hosts --from-file /home/ubuntu/.ssh/known_hosts
  3. Mount the newly created Config Map as a volume to DFC by editing the DFC deployment. The DFC deployment contains 3 containers; pay attention to mount the file to the appropriate container.

...
kind: Deployment
metadata:
...
spec:
  ...
  template:
    ...
    spec:
      containers:
      - image: <DFC image>
        ...
        volumeMounts:
          ...
        - mountPath: /home/datafile/.ssh/
          name: onap-dcae-dfc-known-hosts
          ...
      volumes:
      ...
      - configMap:
          name: <config map name, same as in step 2, e.g. onap-dcae-dfc-known-hosts>
        name: onap-dcae-dfc-known-hosts
    ...

The known_hosts file path can be controlled by the environment variable KNOWN_HOSTS_FILE_PATH. A full (absolute) path has to be provided. A sample deployment with a changed known_hosts file path can be seen below.

...
kind: Deployment
metadata:
...
spec:
  ...
  template:
    ...
    spec:
      containers:
      - image: <DFC image>
        env:
          - name: KNOWN_HOSTS_FILE_PATH
            value: /home/datafile/.ssh/new/path/<known_hosts file name, e.g. my_custom_keys>
        ...
        volumeMounts:
          ...
        - mountPath: /home/datafile/.ssh/new/path
          name: onap-dcae-dfc-known-hosts
          ...
      volumes:
      ...
      - configMap:
          name: <config map name, same as in step 2, e.g. onap-dcae-dfc-known-hosts>
        name: onap-dcae-dfc-known-hosts
    ...

To change the mounted known_hosts list, edit the existing Config Map, or delete and re-create it. The DFC container may pick up the changes with a delay. Neither a pod nor a container restart is required.

To edit Config Map execute:

kubectl -n <ONAP NAMESPACE> edit cm <config map name>

e.g:

kubectl -n onap edit cm onap-dcae-dfc-known-hosts

To delete and re-create the Config Map, execute:

kubectl -n <ONAP NAMESPACE> delete cm <config map name>
kubectl -n <ONAP NAMESPACE> create cm <config map name> --from-file <path to known_hosts file>

e.g:

kubectl -n onap delete cm onap-dcae-dfc-known-hosts
kubectl -n onap create cm onap-dcae-dfc-known-hosts --from-file /home/ubuntu/.ssh/known_hosts

To turn off StrictHostChecking, set the option below to false. It can be changed in the DCAE Config Binding Service (CBS).

WARNING: this operation is not recommended, as it decreases DFC security and exposes DFC to MitM attacks.

"sftp.security.strictHostKeyChecking": false
Performance

To see the performance of DFC, see “Datafile Collector (DFC) performance baseline results”.

API
GET /events/unauthenticated.VES_NOTIFICATION_OUTPUT
Description

Reads fileReady events from DMaaP (Data Movement as a Platform)

Responses

  • 200 - successful response

GET /FEEDLOG_TOPIC/DEFAULT_FEED_ID?type=pub&filename=FILENAME
Description

Querying the Data Router to check whether a file has been published previously.

Responses

  • 400 (body: NA) - error in query

  • 200 (body: []) - not published yet

  • 200 (body: [$FILENAME]) - already published
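As a sketch, such a query can be issued with curl (host, feed id and file name are illustrative):

    curl -k "https://<data router host>/FEEDLOG_TOPIC/DEFAULT_FEED_ID?type=pub&filename=<file name>"

An empty array ([]) in the response body means the file has not been published yet.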

POST /publish
Description
Publish the collected file(s) as a stream to the Data Router, with the following:
  • file as stream

  • compression

  • fileFormatType

  • fileFormatVersion

  • productName

  • vendorName

  • lastEpochMicrosec

  • sourceName

  • startEpochMicrosec

  • timeZoneOffset

Responses

  • 200 - successful response

Administration

DFC has healthcheck functionality. The service can be started and stopped through an API, and the liveness of the service can also be checked.

Main API Endpoints
Running with dev-mode of DFC
  • Heartbeat: http://<container_address>:8100/heartbeat or https://<container_address>:8433/heartbeat

  • Start DFC: http://<container_address>:8100/start or https://<container_address>:8433/start

  • Stop DFC: http://<container_address>:8100/stopDatafile or https://<container_address>:8433/stopDatafile

The external port allocated for 8100 (http) is 30245.
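For example, the heartbeat can be checked from outside the cluster using a node IP and that external port (addresses are deployment-specific):

    curl http://<node IP>:30245/heartbeat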

HTTP/HTTPS notes
HTTP Basic Authentication in FileReady messages

The file ready message for an HTTP server is the same as for other protocols; the only difference is that the scheme is set to “http”. The processed URI has the form:

scheme://userinfo@host:port/path
e.g.
http://demo:demo123456!@example.com:80/C20200502.1830+0200-20200502.1845+0200_195500.xml.gz

If no port number is provided, port 80 is used by default.

An example file ready message is as follows:

curl --location --request POST 'https://portal.api.simpledemo.onap.org:30417/eventListener/v7' \
--header 'Content-Type: application/json' \
--header 'Authorization: Basic c2FtcGxlMTpzYW1wbGUx' \
--data-raw '{
  "event": {
    "commonEventHeader": {
      "version": "4.0.1",
      "vesEventListenerVersion": "7.0.1",
      "domain": "notification",
      "eventName": "Notification_gnb-Nokia_FileReady",
      "eventId": "FileReady_1797490e-10ae-4d48-9ea7-3d7d790b25e1",
      "lastEpochMicrosec": 8745745764578,
      "priority": "Normal",
      "reportingEntityName": "NOK6061ZW3",
      "sequence": 0,
      "sourceName": "NOK6061ZW3",
      "startEpochMicrosec": 8745745764578,
      "timeZoneOffset": "UTC+05.30"
    },
    "notificationFields": {
      "changeIdentifier": "PM_MEAS_FILES",
      "changeType": "FileReady",
      "notificationFieldsVersion": "2.0",
      "arrayOfNamedHashMap": [
        {
          "name": "C_28532_measData_file.xml",
          "hashMap": {
            "location": "http://login:password@server.com:80/C20200502.1830+0200-20200502.1845+0200_195500.xml.gz",
            "compression": "gzip",
            "fileFormatType": "org.3GPP.32.435#measCollec",
            "fileFormatVersion": "V10"
          }
        }
      ]
    }
  }
}'

Note: more than one file from the same location can be added to the “arrayOfNamedHashMap”. If so, they are downloaded from the endpoint through a single HTTP connection.

HTTPS connection with DFC

The file ready message for an HTTPS server is the same as that used for other protocols and HTTP. The only difference is that the scheme is set to “https”:

...
"arrayOfNamedHashMap": [
        {
          "name": "C_28532_measData_file.xml",
          "hashMap": {
            "location": "https://login:password@server.com:443/file.xml.gz",
...

The processed URI depends on the type of HTTPS connection to be established (client certificate authentication, basic authentication, or no authentication).

For client certificate authentication:

scheme://host:port/path
e.g.
https://example.com:443/C20200502.1830+0200-20200502.1845+0200_195500.xml.gz

Authentication is based on the certificate used by the DFC.

For basic authentication:

scheme://userinfo@host:port/path
e.g.
https://demo:demo123456!@example.com:443/C20200502.1830+0200-20200502.1845+0200_195500.xml.gz

Authentication is based on the “userinfo” applied within the link.

If no authentication is required:

scheme://host:port/path
e.g.
https://example.com:443/C20200502.1830+0200-20200502.1845+0200_195500.xml.gz

Note: the effective authentication method depends on the URI provided and the HTTP server configuration.

If no port number is supplied, port 443 is used by default. Every file is sent through a separate HTTPS connection.

JWT token in HTTP/HTTPS connection

A JWT token is processed if it is provided as an access_token in the query part of the location entry:

scheme://host:port/path?access_token=<token>
i.e.
https://example.com:443/C20200502.1830+0200-20200502.1845+0200_195500.xml.gz?access_token=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiJkZW1vIiwiaWF0IjoxNTE2MjM5MDIyfQ.MWyG1QSymi-RtG6pkiYrXD93ZY9NJzaPI-wS4MEpUto

JWT tokens are consumed in both HTTP and HTTPS connections. Using a JWT token is optional; if one is provided, its validity is not verified. The token is extracted into the HTTP header as Authorization: Bearer <token> and is NOT used in the URL of the HTTP GET call, as sketched below. Only a single JWT token entry in the query is acceptable; if more than one ‘access_token’ entry is found in the query, this is reported as an error and DFC tries to download the file without a token. Other query parameters are not modified and are used as-is in the URL of the HTTP GET call.
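For the example location above, the resulting request looks roughly like this sketch; the token moves from the query string into the Authorization header:

    GET /C20200502.1830+0200-20200502.1845+0200_195500.xml.gz HTTP/1.1
    Host: example.com
    Authorization: Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiJkZW1vIiwiaWF0IjoxNTE2MjM5MDIyfQ.MWyG1QSymi-RtG6pkiYrXD93ZY9NJzaPI-wS4MEpUto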

If both a JWT token and basic authentication are provided, the JWT token has priority. This situation is considered a fault and is logged at warning level.

Troubleshooting

In order to find the origin of an error, we suggest using the logs produced by tracing, which needs to be activated.

Using the DFC REST API

The DFC supports a REST API which includes features to facilitate troubleshooting.

One REST primitive, status, returns statistics and status information for the DFC processing. Here follows an example of how to use it (curl is used here, but a web browser can also be used; if you are logged in to a container, wget can probably be used):

curl http://127.0.0.1:8100/status  -i -X GET

The following features are implemented by enabling so-called ‘actuators’ in the Spring Boot framework used:

  • loggers - used to control the logging level of different loggers (so you can enable debug tracing on a certain logger)

  • logfile - get logged information

  • health - get health check info; there is currently no info here, but the endpoint is enabled

  • metrics - read metrics from the Java execution environment, such as memory consumption, number of threads, open file descriptors, etc.

Here follow some examples. Activate debug tracing on all classes in the DFC:

curl http://127.0.0.1:8100/actuator/loggers/org.onap.dcaegen2.collectors.datafile -i -X POST  -H 'Content-Type: application/json' -d '{"configuredLevel":"debug"}'

Read the log file:

curl http://127.0.0.1:8100/actuator/logfile  -i -X GET

Get build information:

curl http://127.0.0.1:8100/actuator/info

Get metric from the JVM. This lists the metrics that are available:

curl http://127.0.0.1:8100/actuator/metrics  -i -X GET

To see the value of a particular metric, just add /[nameOfTheMetric] in the end of address, for example:

curl http://127.0.0.1:8100/actuator/metrics/process.cpu.usage  -i -X GET

Certificate failure

If there is an error linked to the certificate, it is possible to get information about it. A possible cause for the error can be that the expiry date of the certificate is past.

keytool -list -v -keystore dfc.jks
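To check only the validity dates (an expired “until” date indicates the problem), the output can be filtered, for example:

    keytool -list -v -keystore dfc.jks | grep -i "valid from"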

The command to decode the b64 jks file for local execution is shown below (the *.jks.b64 file is in the repo and the Dockerfile decodes it into a .jks, so when you pull from Nexus this is not needed - only when you check out from git and run with java/mvn):

base64 -d dfc.jks.b64 > dfc.jks

Common logs due to configuration errors

Do not rely on exact log messages or their presence, as they are often subject to change.

DFC uses a number of configuration parameters. You can find below the kind of reply you get if any parameter is not valid:

-Wrong trustedCaPassword:

org.onap.dcaegen2.collectors.datafile.tasks.FileCollector     |2019-04-24T14:05:54.494Z     |WARN     |Failed to download file: PNF0 A20000626.2315+0200-2330+0200_PNF0-0-1MB.tar.gz, reason: org.onap.dcaegen2.collectors.datafile.exceptions.DatafileTaskException: Could not open connection: java.io.IOException: Keystore was tampered with, or password was incorrect     |RequestID=A20000626.2315+0200-2330+0200_PNF0-0-1MB.tar.gz     |     |     |FileCollectorWorker-2     |
\...     |WARN     |Failed to download file: ..., reason: org.onap.dcaegen2.collectors.datafile.exceptions.DatafileTaskException: Could not open connection: java.io.IOException: Keystore was tampered with, or password was incorrect     ...
\...     |WARN     |Failed to download file: ..., reason: org.onap.dcaegen2.collectors.datafile.exceptions.DatafileTaskException: Could not open connection: java.io.IOException: Keystore was tampered with, or password was incorrect     ...
\...     |WARN     |Failed to download file: ..., reason: org.onap.dcaegen2.collectors.datafile.exceptions.DatafileTaskException: Could not open connection: java.io.IOException: Keystore was tampered with, or password was incorrect     ...
org.onap.dcaegen2.collectors.datafile.tasks.ScheduledTasks     |2019-04-24T14:06:40.609Z     |ERROR     |File fetching failed, fileData

-Wrong trustedCa:

org.onap.dcaegen2.collectors.datafile.tasks.FileCollector     |2019-04-24T14:11:22.584Z     |WARN     |Failed to download file: PNF0 A20000626.2315+0200-2330+0200_PNF0-0-1MB.tar.gz, reason: org.onap.dcaegen2.collectors.datafile.exceptions.DatafileTaskException: Could not open connection: java.io.FileNotFoundException: **WRONGconfig/ftp.jks**     |RequestID=A20000626.2315+0200-2330+0200_PNF0-0-1MB.tar.gz     |     |     |FileCollectorWorker-2     |
\...     |WARN     |Failed to download file: ..., reason: org.onap.dcaegen2.collectors.datafile.exceptions.DatafileTaskException: Could not open connection: java.io.FileNotFoundException: WRONGconfig/ftp.jks     ...
\...     |WARN     |Failed to download file: ..., reason: org.onap.dcaegen2.collectors.datafile.exceptions.DatafileTaskException: Could not open connection: java.io.FileNotFoundException: WRONGconfig/ftp.jks     ...
\...     |WARN     |Failed to download file: ..., reason: org.onap.dcaegen2.collectors.datafile.exceptions.DatafileTaskException: Could not open connection: java.io.FileNotFoundException: WRONGconfig/ftp.jks     ...
org.onap.dcaegen2.collectors.datafile.tasks.ScheduledTasks     |2019-04-24T14:11:58.963Z     |ERROR     |File fetching failed, fileData

-Wrong keyPassword:

org.onap.dcaegen2.collectors.datafile.tasks.FileCollector     |2019-04-24T14:15:40.694Z     |WARN     |Failed to download file: PNF0 A20000626.2315+0200-2330+0200_PNF0-0-1MB.tar.gz, reason: org.onap.dcaegen2.collectors.datafile.exceptions.DatafileTaskException: Could not open connection: java.io.IOException: Keystore was tampered with, or password was incorrect     |RequestID=A20000626.2315+0200-2330+0200_PNF0-0-1MB.tar.gz     |     |     |FileCollectorWorker-2     |
\...     |WARN     |Failed to download file: ..., reason: org.onap.dcaegen2.collectors.datafile.exceptions.DatafileTaskException: Could not open connection: java.io.IOException: Keystore was tampered with, or password was incorrect     ...
\...     |WARN     |Failed to download file: ..., reason: org.onap.dcaegen2.collectors.datafile.exceptions.DatafileTaskException: Could not open connection: java.io.IOException: Keystore was tampered with, or password was incorrect     ...
\...     |WARN     |Failed to download file: ..., reason: org.onap.dcaegen2.collectors.datafile.exceptions.DatafileTaskException: Could not open connection: java.io.IOException: Keystore was tampered with, or password was incorrect     ...
org.onap.dcaegen2.collectors.datafile.tasks.ScheduledTasks     |2019-04-24T14:16:08.292Z     |ERROR     |File fetching failed, fileData

-Wrong keyCert:

org.onap.dcaegen2.collectors.datafile.tasks.FileCollector     |2019-04-24T14:20:46.308Z     |WARN     |Failed to download file: PNF0 A20000626.2315+0200-2330+0200_PNF0-0-1MB.tar.gz, reason: org.onap.dcaegen2.collectors.datafile.exceptions.DatafileTaskException: Could not open connection: java.io.FileNotFoundException: **WRONGconfig/dfc.jks (No such file or directory)**     |RequestID=A20000626.2315+0200-2330+0200_PNF0-0-1MB.tar.gz     |     |     |FileCollectorWorker-2     |
\...     |WARN     |Failed to download file: ..., reason: org.onap.dcaegen2.collectors.datafile.exceptions.DatafileTaskException: Could not open connection: java.io.FileNotFoundException: WRONGconfig/dfc.jks (No such file or directory)     ...
\...     |WARN     |Failed to download file: ..., reason: org.onap.dcaegen2.collectors.datafile.exceptions.DatafileTaskException: Could not open connection: java.io.FileNotFoundException: WRONGconfig/dfc.jks (No such file or directory)     ...
\...     |WARN     |Failed to download file: ..., reason: org.onap.dcaegen2.collectors.datafile.exceptions.DatafileTaskException: Could not open connection: java.io.FileNotFoundException: WRONGconfig/dfc.jks (No such file or directory)     ...
org.onap.dcaegen2.collectors.datafile.tasks.ScheduledTasks     |2019-04-24T14:21:16.447Z     |ERROR     |File fetching failed, fileData

-Wrong consumer dmaapHostName:

org.onap.dcaegen2.collectors.datafile.tasks.ScheduledTasks     |2019-04-24T14:27:06.578Z     |ERROR     |Polling for file ready message failed, exception: java.net.UnknownHostException: **WRONGlocalhost**: Try again, config: DmaapConsumerConfiguration{consumerId=C12, consumerGroup=OpenDcae-c12, timeoutMs=-1, messageLimit=1, **dmaapHostName=WRONGlocalhost**, dmaapPortNumber=2222, dmaapTopicName=/events/unauthenticated.VES_NOTIFICATION_OUTPUT, dmaapProtocol=http, dmaapUserName=, dmaapUserPassword=, dmaapContentType=application/json, trustStorePath=change it, trustStorePasswordPath=change it, keyStorePath=change it, keyStorePasswordPath=change it, enableDmaapCertAuth=false}     |RequestID=90fe7450-0bc2-4bf6-a2f0-2aeef6f196ae     |     |     |reactor-http-epoll-3     |
\...     |ERROR     |Polling for file ready message failed, exception: java.net.UnknownHostException: *WRONGlocalhost*, config: DmaapConsumerConfiguration{..., dmaapHostName=*WRONGlocalhost*, ...}     ...
\...     |ERROR     |Polling for file ready message failed, exception: java.net.UnknownHostException: *WRONGlocalhost*: Try again, config: DmaapConsumerConfiguration{..., dmaapHostName=*WRONGlocalhost*, ...}     ...
\...     |ERROR     |Polling for file ready message failed, exception: java.net.UnknownHostException: *WRONGlocalhost*: Try again, config: DmaapConsumerConfiguration{..., dmaapHostName=*WRONGlocalhost*, ...}     ...
\...     |ERROR     |Polling for file ready message failed, exception: java.net.UnknownHostException: *WRONGlocalhost*: Try again, config: DmaapConsumerConfiguration{..., dmaapHostName=*WRONGlocalhost*, ...}     ...
\...     |ERROR     |Polling for file ready message failed, exception: java.net.UnknownHostException: *WRONGlocalhost*: Try again, config: DmaapConsumerConfiguration{..., dmaapHostName=*WRONGlocalhost*, ...}     ...

-Wrong consumer dmaapPortNumber:

org.onap.dcaegen2.collectors.datafile.tasks.ScheduledTasks     |2019-04-24T14:33:35.286Z     |ERROR     |Polling for file ready message failed, exception: io.netty.channel.AbstractChannel$AnnotatedConnectException: syscall:getsockopt(..) failed: Connection refused: localhost/127.0.0.1:**WRONGport**, config: DmaapConsumerConfiguration{consumerId=C12, consumerGroup=OpenDcae-c12, timeoutMs=-1, messageLimit=1, dmaapHostName=localhost, **dmaapPortNumber=WRONGport**, dmaapTopicName=/events/unauthenticated.VES_NOTIFICATION_OUTPUT, dmaapProtocol=http, dmaapUserName=, dmaapUserPassword=, dmaapContentType=application/json, trustStorePath=change it, trustStorePasswordPath=change it, keyStorePath=change it, keyStorePasswordPath=change it, enableDmaapCertAuth=false}     |RequestID=b57c68fe-84bf-442f-accd-ea821a5a321f     |     |     |reactor-http-epoll-3     |
\...     |ERROR     |Polling for file ready message failed, exception: io.netty.channel.AbstractChannel$AnnotatedConnectException: syscall:getsockopt(..) failed: Connection refused: localhost/127.0.0.1:*WRONGport*, config: DmaapConsumerConfiguration{..., dmaapPortNumber=*WRONGport*, ...}     ...
\...     |ERROR     |Polling for file ready message failed, exception: io.netty.channel.AbstractChannel$AnnotatedConnectException: syscall:getsockopt(..) failed: Connection refused: localhost/127.0.0.1:*WRONGport*, config: DmaapConsumerConfiguration{..., dmaapPortNumber=*WRONGport*, ...}     ...
\...     |ERROR     |Polling for file ready message failed, exception: io.netty.channel.AbstractChannel$AnnotatedConnectException: syscall:getsockopt(..) failed: Connection refused: localhost/127.0.0.1:*WRONGport*, config: DmaapConsumerConfiguration{..., dmaapPortNumber=*WRONGport*, ...}     ...
\...     |ERROR     |Polling for file ready message failed, exception: io.netty.channel.AbstractChannel$AnnotatedConnectException: syscall:getsockopt(..) failed: Connection refused: localhost/127.0.0.1:*WRONGport*, config: DmaapConsumerConfiguration{..., dmaapPortNumber=*WRONGport*, ...}     ...
\...     |ERROR     |Polling for file ready message failed, exception: io.netty.channel.AbstractChannel$AnnotatedConnectException: syscall:getsockopt(..) failed: Connection refused: localhost/127.0.0.1:*WRONGport*, config: DmaapConsumerConfiguration{..., dmaapPortNumber=*WRONGport*, ...}     ...

-Wrong consumer dmaapTopicName:

org.onap.dcaegen2.collectors.datafile.tasks.ScheduledTasks     |2019-04-24T14:38:07.097Z     |ERROR     |Polling for file ready message failed, exception: java.lang.RuntimeException: DmaaPConsumer HTTP 404 NOT_FOUND, config: DmaapConsumerConfiguration{consumerId=C12, consumerGroup=OpenDcae-c12, timeoutMs=-1, messageLimit=1, dmaapHostName=localhost, dmaapPortNumber=2222, **dmaapTopicName=/events/unauthenticated.VES_NOTIFICATION_OUTPUTWRONG**, dmaapProtocol=http, dmaapUserName=, dmaapUserPassword=, dmaapContentType=application/json, trustStorePath=change it, trustStorePasswordPath=change it, keyStorePath=change it, keyStorePasswordPath=change it, enableDmaapCertAuth=false}     |RequestID=8bd71bac-68af-494b-9518-3ab4478371cf     |     |     |reactor-http-epoll-4     |
\...     |ERROR     |Polling for file ready message failed, exception: java.lang.RuntimeException: DmaaPConsumer HTTP 404 NOT_FOUND, config: DmaapConsumerConfiguration{..., dmaapTopicName=*/events/unauthenticated.VES_NOTIFICATION_OUTPUTWRONG*, ...}     ...
\...     |ERROR     |Polling for file ready message failed, exception: java.lang.RuntimeException: DmaaPConsumer HTTP 404 NOT_FOUND, config: DmaapConsumerConfiguration{..., dmaapTopicName=*/events/unauthenticated.VES_NOTIFICATION_OUTPUTWRONG*, ...}     ...
\...     |ERROR     |Polling for file ready message failed, exception: java.lang.RuntimeException: DmaaPConsumer HTTP 404 NOT_FOUND, config: DmaapConsumerConfiguration{..., dmaapTopicName=*/events/unauthenticated.VES_NOTIFICATION_OUTPUTWRONG*, ...}     ...
\...     |ERROR     |Polling for file ready message failed, exception: java.lang.RuntimeException: DmaaPConsumer HTTP 404 NOT_FOUND, config: DmaapConsumerConfiguration{..., dmaapTopicName=*/events/unauthenticated.VES_NOTIFICATION_OUTPUTWRONG*, ...}     ...
\...     |ERROR     |Polling for file ready message failed, exception: java.lang.RuntimeException: DmaaPConsumer HTTP 404 NOT_FOUND, config: DmaapConsumerConfiguration{..., dmaapTopicName=*/events/unauthenticated.VES_NOTIFICATION_OUTPUTWRONG*, ...}     ...

-Consumer dmaapProtocol: Not configurable.

Missing known_hosts file

When StrictHostKeyChecking is enabled and DFC cannot find a known_hosts file, the warning shown below is visible in the logfile. In this case, DFC acts as if StrictHostKeyChecking were disabled.

org.onap.dcaegen2.collectors.datafile.ftp.SftpClient     |2020-07-24T06:32:56.010Z
|WARN     |StrictHostKeyChecking is enabled but environment variable KNOWN_HOSTS_FILE_PATH is not set or points to not existing file [/home/datafile/.ssh/known_hosts]  -->  falling back to StrictHostKeyChecking='no'.

To resolve this warning, provide a known_hosts file or disable StrictHostKeyChecking, see DFC config page - Turn On/Off StrictHostChecking.

Inability to download file from xNF due to certificate problem

When collecting files using HTTPS and DFC contains certs from a CMPv2 server, an exception like “unable to find valid certification path to requested target” may occur. Apart from obvious certificate problems, make sure that the xNFs connecting to DFC are supplied with certificates coming from the same CMPv2 server and the same CA that is configured on the ONAP side and used by DFC.

Inability to properly run DFC (v1.5.3 and above)

Note: since DFC 1.5.3, the FTPeS/HTTPS configuration in the blueprint was slightly changed.

"dmaap.ftpesConfig.*"

was replaced with

"dmaap.certificateConfig.*"

Updating the container without updating the DFC config (or blueprint) will leave DFC unable to run with FTPeS and HTTPS.
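
As a hedged illustration only (not an official migration tool), a small Python sketch like the one below could rename the old keys in a flat JSON config; the file name dfc_config.json is hypothetical.

import json

with open("dfc_config.json") as fh:          # hypothetical config file name
    cfg = json.load(fh)

# Rename every "dmaap.ftpesConfig.*" key to "dmaap.certificateConfig.*".
for key in list(cfg):
    if key.startswith("dmaap.ftpesConfig."):
        suffix = key[len("dmaap.ftpesConfig."):]
        cfg["dmaap.certificateConfig." + suffix] = cfg.pop(key)

with open("dfc_config.json", "w") as fh:
    json.dump(cfg, fh, indent=2)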

Release Notes
Version 1.2.1

The DFC is now generalized: it can handle any kind of files, not only PM files.

Version 1.1.3

Messages are now handled in parallel

Retry mechanism implemented

Adapting to ONAP logging standard

Deployment using Cloudify made available

Bug fix: Too old files (thus not existing anymore) are ignored

Version: 1.1.1
Release Date

2019-01-30 (Casablanca Maintenance fixes)

Bug Fixes

DCAEGEN2-940 - Larger files of size 100Kb publish to DR

DCAEGEN2-941 - DFC error after running over 12 hours

DCAEGEN2-1001 - Multiple Fileready notification not handled

Version: 1.0.4
Release Date

2018-11-08 (Casablanca)

New Features

All DFC features from v1.0.4 are new.

Bug Fixes

This is the initial release.

Known Issues

No known issues.

Known limitations

  • DFC has only been tested successfully with one node.

  • The certificates are distributed hand to hand, no automated process.

Security Issues

No known security issues.

Upgrade Notes

This is the initial release.

RestConf Collector

Overview

RestConf Collector is a microservice in ONAP DCAE. It subscribes to external controllers and receives event data. After receiving event data, it may modify it per the use case’s requirements and produce a DMaaP event. This DMaaP event is usually consumed by the VES mapper. RestConf Collector can subscribe to multiple events from multiple controllers.
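
To illustrate the consuming side, the Python sketch below polls a DMaaP Message Router topic the way a downstream component (such as the VES mapper) might. The Message Router address, topic, and consumer names are assumptions for illustration.

import requests

MR_BASE = "http://message-router:3904"        # assumed DMaaP MR address
TOPIC = "unauthenticated.DCAE_RCC_OUTPUT"     # assumed collector output topic

# GET /events/{topic}/{consumerGroup}/{consumerId} returns a JSON array
# of messages published since this consumer's last poll.
url = f"{MR_BASE}/events/{TOPIC}/demo-group/demo-consumer-1"
resp = requests.get(url, params={"timeout": 15000}, timeout=30)
resp.raise_for_status()
for event in resp.json():
    print(event)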

Functionality

RestconfCollector interaction with DCAE components.

_images/rcc_diag.png

RestconfCollector interaction with an external controller.

_images/rcc_diag_interact.png

For more details about the RestConf Collector, visit https://wiki.onap.org/pages/viewpage.action?pageId=60891182

Compiling RestConf Collector

RestConf Collector is a sub-project of dcaegen2/collectors (https://gerrit.onap.org/r/dcaegen2/collectors/restconf). To build the RestConf Collector component, run the following Maven command from within the collectors/restconf directory: mvn clean install

Maven GroupId:

org.onap.dcaegen2.collectors.restconf

Maven Parent ArtifactId:

org.onap.oparen:oparent:1.2.0

SNMP Trap Collector

Architecture

The ONAP SNMPTRAP project (referred to as “trapd” - as in “trap daemon” - throughout this documentation) is a network-facing ONAP platform component.

The simple network management protocol (or “SNMP”, for short) is a pervasive communication protocol standard used between managed devices and a management system. It is used to relay data that can be valuable in the operation, fault identification and planning processes of all networks.

SNMP utilizes a message called a “trap” to inform SNMP managers of abnormal or changed conditions on a resource that is running a SNMP agent. These agents can run on physical or virtual resources (no difference in reporting) and can notify on anything from hardware states, resource utilization, software processes or anything else specific to the agent’s environment.

Capabilities

trapd receives SNMP traps and publishes them to a message router (DMAAP/MR) instance based on attributes obtained from configuration binding service (“CBS”).

_images/ONAP_trapd.png
Interactions

Traps are published to DMAAP/MR in a json format. Once traps are published to a DMAAP/MR instance, they are available to consumers that are subscribed to the topic they were published to.

Usage Scenarios

trapd runs in a docker container based on python 3.6. Running an instance of trapd will result in arriving traps being published to the topic specified by config binding services. If CBS is not present, SNMPTRAP will look for a JSON configuration file specified via the environment variable CBS_SIM_JSON at startup (see “CONFIGURATION” link for details).

Delivery
Docker Container

trapd is delivered as a docker container that can be downloaded from onap:

docker run --detach -t --rm -p 162:6162/udp -P --name=SNMPTRAP nexus3.onap.org:10001/onap/org.onap.dcaegen2.collectors.snmptrap:2.0.3 ./bin/snmptrapd.sh start

Standalone

trapd can also be run outside of a docker environment (for details, see “Installation” link) by downloading the source image from:

gerrit.onap.org:29418/dcaegen2/collectors/snmptrap
Offered APIs

trapd supports the Simple Network Management Protocol (SNMP) standard. It is a well documented and pervasive protocol, used in all networks worldwide.

As an API offering, the only way to interact with trapd is to send traps that conform to the industry standard specification (RFC1215 - available at https://tools.ietf.org/html/rfc1215 ) to a running instance. To accomplish this, you may:

  1. Configure SNMP agents to send native traps to a SNMPTRAP instance. In SNMP agent configurations, this is usually accomplished by setting the “trap target” or “snmp manager” to the IP address of the running VM/container hosting SNMPTRAP.

  2. Simulate a SNMP trap using various freely available utilities. Two examples are provided below; be sure to change the target (“localhost”) and port (“162”) to applicable values in your environment.

NetSNMP snmptrap

One way to simulate an arriving SNMP trap is to use the Net-SNMP utility/command snmptrap. This command can send V1, V2c or V3 traps to a manager based on the parameters provided.

The example below sends a SNMP V1 trap to the specified host. Prior to running this command, export the values of to_ip_address (set it to the IP of the VM hosting the ONAP trapd container) and to_port (typically set to “162”):

export to_ip_address=192.168.1.1

export to_port=162

Then run the Net-SNMP command/utility:

snmptrap -d -v 1 -c not_public ${to_ip_address}:${to_port} .1.3.6.1.4.1.99999 localhost 6 1 '55' .1.11.12.13.14.15  s "test trap"

Note

This will display some “read_config_store open failure” errors; they can be ignored, as the trap has been successfully sent to the specified destination.

python using pysnmp

Another way to simulate an arriving SNMP trap is to send one with the python pysnmp module. (Note that this is the same module that ONAP trapd is based on).

To do this, create a python script called “send_trap.py” with the following contents. You’ll need to change the target (from “localhost” to whatever the destination IP/hostname of the trap receiver is) before saving:

from pysnmp.hlapi import *
from pysnmp import debug

# debug.setLogger(debug.Debug('msgproc'))

errorIndication, errorStatus, errorIndex, varbinds = next(sendNotification(SnmpEngine(),
     CommunityData('not_public'),
     UdpTransportTarget(('localhost', 162)),
     ContextData(),
     'trap',
     [ObjectType(ObjectIdentity('.1.3.6.1.4.1.999.1'), OctetString('test trap - ignore')),
      ObjectType(ObjectIdentity('.1.3.6.1.4.1.999.2'), OctetString('ONAP pytest trap'))])
)

if errorIndication:
    print(errorIndication)
else:
    print("successfully sent trap")

To run the pysnmp example:

python ./send_trap.py

Logging

Logging is controlled by the configuration provided to trapd by CBS, or via the fallback config file specified as the environment variable “CBS_SIM_JSON” at startup. The section of the JSON configuration that influences the various forms of application logging is referenced throughout this document, with examples.

Using the JSON configuration, a base directory is specified for application data and EELF log files. Specific filenames (again, from the JSON config) are appended to the base directory value to create a full-path filename for use by SNMPTRAP.

Also available is the ability to modify how frequently logs are rolled to time-stamped versions (with a new empty file started) as well as the minimum severity level to log in the program diagnostic logs. The actual archival (to a timestamped filename) occurs when the first trap is received in a new hour (or minute, or day, depending on the “roll_frequency” value).

Defaults are shown below:

"files": {
    <other json data>
    ...
    "roll_frequency": "day",
    "minimum_severity_to_log": 3
    <other json data>
    ...
},
Roll Frequency

Roll frequency can be modified based on your environment (e.g. if trapd is handling a heavy trap load, you will probably want files to roll more frequently). Valid “roll_frequency” values are:

  • minute

  • hour

  • day

Minimum Severity To Log

Logging levels should be modified based on your needs. Log levels in lab environments should be “lower” (e.g. a minimum severity to log of “0” creates verbose logging), while in production values of “3” and above are a good choice.

Valid “minimum_severity_to_log” values are:

  • “1” (debug mode - everything you want to know about process, and more. NOTE: Not recommended for production environments)

  • “2” (info - verbose logging. NOTE: Not recommended for production environments)

  • “3” (warnings - functionality not impacted, but abnormal/uncommon event)

  • “4” (critical - functionality impacted, but remains running)

  • “5” (fatal - causing runtime exit)

WHERE ARE THE LOG FILES?
APPLICATION DATA

trapd produces application-specific logs (e.g. trap logs/payloads, etc) as well as various other statistical and diagnostic logs. The location of these logs is controlled by the JSON config, using these values:

"files": {
    "runtime_base_dir": "/opt/app/snmptrap",
    "log_dir": "logs",
    "data_dir": "data",
    "pid_dir": "tmp",
    "arriving_traps_log": "snmptrapd_arriving_traps.log",
    "snmptrapd_diag": "snmptrapd_prog_diag.log",
    "traps_stats_log": "snmptrapd_stats.csv",
    "perm_status_file": "snmptrapd_status.log",
    "roll_frequency": "hour",
    "minimum_severity_to_log": 2
    <other json data>
    ...
},

The base directory for all data logs is specified with:

runtime_base_dir

Remaining log file references are appended to the runtime_base_dir value to specify a logfile location. The result using the above example would create the files listed below; a short sketch of this naming rule follows the list:

/opt/app/snmptrap/logs/snmptrapd_arriving_traps.log
/opt/app/snmptrap/logs/snmptrapd_prog_diag.log
/opt/app/snmptrap/logs/snmptrapd_stats.csv
/opt/app/snmptrap/logs/snmptrapd_status.log
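
The path construction can be expressed in a few lines of Python; this is a sketch of the naming rule described above, not trapd’s actual code.

import os

files_cfg = {
    "runtime_base_dir": "/opt/app/snmptrap",
    "log_dir": "logs",
    "arriving_traps_log": "snmptrapd_arriving_traps.log",
}

# Full path is <runtime_base_dir>/<log_dir>/<filename>.
log_dir = os.path.join(files_cfg["runtime_base_dir"], files_cfg["log_dir"])
print(os.path.join(log_dir, files_cfg["arriving_traps_log"]))
# -> /opt/app/snmptrap/logs/snmptrapd_arriving_traps.log
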
ARRIVING TRAPS

trapd logs all arriving traps. These traps are saved in a filename created by appending runtime_base_dir, log_dir and arriving_traps_log from the JSON config. Using the example above, the resulting arriving trap log would be:

/opt/app/snmptrap/logs/snmptrapd_arriving_traps.log

An example from this log is shown below:

1529960544.4896748 Mon Jun 25 17:02:24 2018; Mon Jun 25 17:02:24 2018 com.att.dcae.dmaap.IST3.DCAE-COLLECTOR-UCSNMP 15299605440000 1.3.6.1.4.1.999.0.1 server001 127.0.0.1 server001 v2c 751564798 0f40196a-78bb-11e8-bac7-005056865aac , "varbinds": [{"varbind_oid": "1.3.6.1.4.1.999.0.1.1", "varbind_type": "OctetString", "varbind_value": "TEST TRAP"}]

NOTE: The format of this log will change with 1.5.0; specifically, the “varbinds” section will be reformatted (the JSON struct removed) and replaced with a flat file format.

PUBLISHED TRAPS

SNMPTRAP’s main purpose is to receive and decode SNMP traps, then publish the results to a configured DMAAP/MR message bus. Traps that are successfully published (e.g. publish attempt gets a “200/ok” response from the DMAAP/MR server) are logged to a file named by the technology being used combined with the topic being published to.

If you find a trap in this published log, it has been acknowledged as received by DMAAP/MR. If consumers complain of “missing traps”, the source of the problem will be downstream (not with SNMPTRAP) if the trap has been logged here.

For example, with a json config of:

"dmaap_info": {
    "location": "mtl5",
    "client_id": null,
    "client_role": null,
    "topic_url": "http://172.17.0.1:3904/events/ONAP-COLLECTOR-SNMPTRAP"

and

"files": {
    "**runtime_base_dir**": "/opt/app/snmptrap",

result in traps that are confirmed as published (200/ok response from DMAAP/MR) logged to the file:

/opt/app/snmptrap/logs/DMAAP_ONAP-COLLECTOR-SNMPTRAP.json

An example from this JSON log is shown below:

{
    "uuid": "0f40196a-78bb-11e8-bac7-005056865aac",
    "agent address": "127.0.0.1",
    "agent name": "server001",
    "cambria.partition": "server001",
    "community": "",
    "community len": 0,
    "epoch_serno": 15299605440000,
    "protocol version": "v2c",
    "time received": 1529960544.4896748,
    "trap category": "DCAE-COLLECTOR-UCSNMP",
    "sysUptime": "751564798",
    "notify OID": "1.3.6.1.4.1.999.0.1",
    "notify OID len": 9,
    "varbinds": [
        {
            "varbind_oid": "1.3.6.1.4.1.999.0.1.1",
            "varbind_type": "OctetString",
            "varbind_value": "TEST TRAP"
        }
    ]
}
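
As a quick sanity check when chasing “missing traps”, a hypothetical Python helper like the sketch below can scan that published-traps log for a given trap UUID; the log path is taken from the example above.

uuid = "0f40196a-78bb-11e8-bac7-005056865aac"
log = "/opt/app/snmptrap/logs/DMAAP_ONAP-COLLECTOR-SNMPTRAP.json"

# The published log is JSON text; a substring scan is enough to confirm
# whether this UUID was acknowledged (200/ok) by DMAAP/MR.
with open(log) as fh:
    published = any(uuid in line for line in fh)

print("published to MR" if published else "not in published log - check SNMPTRAP")
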
EELF

For program/operational logging, trapd follows the EELF logging convention. Please be aware that the EELF specification results in messages spread across various files. Some work may be required to find the right location (file) that contains the message you are looking for.

EELF logging is controlled by the configuration provided to trapd by CBS, or via the fallback config file specified as an environment variable “CBS_SIM_JSON” at startup. The section of that JSON configuration that influences EELF logging is:

"files": {
    <other json data>
    ...
    "**eelf_base_dir**": "/opt/app/snmptrap/logs",
    "eelf_error": "error.log",
    "eelf_debug": "debug.log",
    "eelf_audit": "audit.log",
    "eelf_metrics": "metrics.log",
    "roll_frequency": "hour",
},
<other json data>
...

The base directory for all EELF logs is specified with:

eelf_base_dir

Remaining eelf_<file> references are appended to the eelf_base_dir value to specify a logfile location. The result using the above example would create the files:

/opt/app/snmptrap/logs/error.log
/opt/app/snmptrap/logs/debug.log
/opt/app/snmptrap/logs/audit.log
/opt/app/snmptrap/logs/metrics.log

Again using the above example configuration, these files will be rolled to an archived/timestamped version hourly. The actual archival (to a timestamped filename) occurs when the first trap is received in a new hour (or minute, or day, depending on the “roll_frequency” value).

Error / Warning Messages
Program Diagnostics

Detailed application log messages can be found in “snmptrapd_diag” (JSON config reference). These can be very verbose and roll quickly depending on trap arrival rates, number of varbinds encountered, minimum_severity_to_log setting in JSON config, etc.

In the default config, this file can be found at:

/opt/app/snmptrap/logs/snmptrapd_prog_diag.log

Messages will be in the general format of:

2018-04-25T17:28:10,305|<module>|snmptrapd||||INFO|100||arriving traps logged to: /opt/app/snmptrap/logs/snmptrapd_arriving_traps.log
2018-04-25T17:28:10,305|<module>|snmptrapd||||INFO|100||published traps logged to: /opt/app/snmptrap/logs/DMAAP_com.att.dcae.dmaap.IST3.DCAE-COLLECTOR-UCSNMP.json
2018-04-25T17:28:10,306|<module>|snmptrapd||||INFO|100||Runtime PID file: /opt/app/snmptrap/tmp/snmptrapd.py.pid
2018-04-25T17:28:48,019|snmp_engine_observer_cb|snmptrapd||||DETAILED|100||snmp trap arrived from 192.168.1.139, assigned uuid: 1cd77e98-48ae-11e8-98e5-005056865aac
2018-04-25T17:28:48,023|snmp_engine_observer_cb|snmptrapd||||DETAILED|100||dns cache expired or missing for 192.168.1.139 - refreshing
2018-04-25T17:28:48,027|snmp_engine_observer_cb|snmptrapd||||DETAILED|100||cache for server001 (192.168.1.139) updated - set to expire at 1524677388
2018-04-25T17:28:48,034|snmp_engine_observer_cb|snmptrapd||||DETAILED|100||snmp trap arrived from 192.168.1.139, assigned uuid: 0f40196a-78bb-11e8-bac7-005056
2018-04-25T17:28:48,036|notif_receiver_cb|snmptrapd||||DETAILED|100||processing varbinds for 0f40196a-78bb-11e8-bac7-005056
2018-04-25T17:28:48,040|notif_receiver_cb|snmptrapd||||DETAILED|100||adding 0f40196a-78bb-11e8-bac7-005056 to buffer

2018-06-25T21:02:24,491|notif_receiver_cb|snmptrapd||||DETAILED|100||trap 0f40196a-78bb-11e8-bac7-005056865aac : {"uuid": "0f40196a-78bb-11e8-bac7-005056865aac", "agent address": "192.168.1.139", "agent name": "server001", "cambria.partition": "server001", "community": "", "community len": 0, "epoch_serno": 15299605440000, "protocol version": "v2c", "time received": 1529960544.4896748, "trap category": "com.companyname.dcae.dmaap.location.DCAE-COLLECTOR-UCSNMP", "sysUptime": "751564798", "notify OID": "1.3.6.1.4.1.999.0.1", "notify OID len": 9, "varbinds": [{"varbind_oid": "1.3.6.1.4.1.999.0.1.1", "varbind_type": "OctetString", "varbind_value": "TEST TRAP"}]}
2018-06-25T21:02:24,496|post_dmaap|snmptrapd||||DETAILED|100||post_data_enclosed: {"uuid": "0f40196a-78bb-11e8-bac7-005056865aac", "agent address": "192.168.1.139", "agent name": "server001", "cambria.partition": "server001", "community": "", "community len": 0, "epoch_serno": 15299605440000, "protocol version": "v2c", "time received": 1529960544.4896748, "trap category": "com.att.dcae.dmaap.IST3.DCAE-COLLECTOR-UCSNMP", "sysUptime": "751564798", "notify OID": "1.3.6.1.4.1.999.0.1", "notify OID len": 9, "varbinds": [{"varbind_oid": "1.3.6.1.4.1.999.0.1.1", "varbind_type": "OctetString", "varbind_value": "TEST TRAP"}]}
Platform Status

A permanent status file (archiving/compressing it is left to the user) is maintained in the file referenced by:

perm_status_file

"perm_status_file": "snmptrapd_status.log",

Combined with the runtime_base_dir and log_dir settings from snmptrapd.json, the perm_status_file in default installations can be found at:

/opt/app/snmptrap/logs/snmptrapd_status.log
Configuration

trapd configuration is controlled via a single JSON ‘transaction’. This transaction can be:

  • a reply from Config Binding Services

  • a locally hosted JSON file

The format of this message is described in the SNMPTRAP package, under:

<base install dir>/spec/snmptrap-collector-component-spec.json

There will also be a template JSON file with example/default values found at:

<base install dir>/etc/snmptrapd.json

If you are going to use a local file, the env variable below must be defined before SNMPTRAP runs. There is a default value set in the SNMPTRAP startup script (bin/snmptrapd.sh):

export CBS_SIM_JSON=../etc/snmptrapd.json

In either scenario, the format of the config message/transaction is the same. An example is described below.
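
The fallback behaviour amounts to reading that JSON file from the path in CBS_SIM_JSON; a minimal Python sketch of this (not trapd’s actual startup code) is:

import json
import os

# Use the env variable if set, otherwise the default from the startup script.
cfg_path = os.environ.get("CBS_SIM_JSON", "../etc/snmptrapd.json")
with open(cfg_path) as fh:
    config = json.load(fh)

print(config["protocols"]["ipv4_port"])   # 6162 in the sample config below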

JSON CONFIGURATION EXPLAINED

Variables of interest (e.g. variables that should be inspected/modified for a specific runtime environment) are listed below for convenience. The entire file is provided later in this page for reference.

Potential Config Changes in your environment
in protocols section:

   "ipv4_interface": "0.0.0.0",    # IPv4 address of interface to listen on - "0.0.0.0" == "all"
   "ipv4_port": 6162,              # UDP port to listen for IPv4 traps on (6162 used in docker environments when forwarding has been enabled)
   "ipv6_interface": "::1",        # IPv6 address of interface to listen on - "::1" == "all"
   "ipv6_port": 6162               # UDP port to listen for IPv6 traps on (6162 used in docker environments when forwarding has been enabled)

in cache section:

   "dns_cache_ttl_seconds": 60     # number of seconds trapd will cache IP-to-DNS-name values before checking for update

in files section:

   "minimum_severity_to_log": 2    # minimum message level to log; 0 recommended for debugging, 3+ recommended for runtime/production

in snmpv3_config section:

   (see detailed snmpv3_config discussion below)
snmpv3_config

SNMPv3 added significant authorization and privacy capabilities to the SNMP standard. As it relates to traps, this means providing the proper privacy, authorization, engine and user criteria for each agent that would like to send traps to a particular trapd instance.

This is done by adding blocks of valid configuration data to the “snmpv3_config” section of the JSON config/transaction. These blocks are recurring sets of:

{
"user": "<userId>",
"engineId": "<engineId>",
"<authProtocol>": "<authorizationKeyValue>",
"<privProtocol>": "<privacyKeyValue>"
}
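
A small Python sketch of building one such entry programmatically (values are placeholders; the valid protocol names are listed below):

def usm_user(user, engine_id, auth_protocol, auth_key, priv_protocol, priv_key):
    # The auth and priv protocol names become the JSON keys, as in the block above.
    return {
        "user": user,
        "engineId": engine_id,
        auth_protocol: auth_key,
        priv_protocol: priv_key,
    }

entry = usm_user("user1", "8000000000000001",
                 "usmHMACMD5AuthProtocol", "authkey1",
                 "usmDESPrivProtocol", "privkey1")
print(entry)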

Valid values for authProtocol in JSON configuration:

usmHMACMD5AuthProtocol
usmHMACSHAAuthProtocol
usmHMAC128SHA224AuthProtocol
usmHMAC192SHA256AuthProtocol
usmHMAC256SHA384AuthProtocol
usmHMAC384SHA512AuthProtocol
usmNoAuthProtocol

Valid values for privProtocol in JSON configuration:

usm3DESEDEPrivProtocol
usmAesCfb128Protocol
usmAesCfb192Protocol
usmAesBlumenthalCfb192Protocol
usmAesCfb256Protocol
usmAesBlumenthalCfb256Protocol
usmDESPrivProtocol
usmNoPrivProtocol

User and engineId values are left up to the administrator, and must conform to SNMPv3 specifications as explained at https://tools.ietf.org/html/rfc3414 .

Sample JSON configuration

The format of the JSON configuration that drives all behavior of SNMPTRAP is probably best described using an example:

{
    "snmptrapd": {
        "version": "1.4.0",
        "title": "ONAP SNMP Trap Receiver"
    },
    "protocols": {
        "transport": "udp",
        "ipv4_interface": "0.0.0.0",
        "ipv4_port": 6162,
        "ipv6_interface": "::1",
        "ipv6_port": 6162

    },
    "cache": {
        "dns_cache_ttl_seconds": 60
    },
    "publisher": {
        "http_timeout_milliseconds": 1500,
        "http_retries": 3,
        "http_milliseconds_between_retries": 750,
        "http_primary_publisher": "true",
        "http_peer_publisher": "unavailable",
        "max_traps_between_publishes": 10,
        "max_milliseconds_between_publishes": 10000
    },
    "streams_publishes": {
        "sec_fault_unsecure": {
            "type": "message_router",
            "aaf_password": null,
            "dmaap_info": {
                "location": "mtl5",
                "client_id": null,
                "client_role": null,
                "topic_url": "http://localhost:3904/events/ONAP-COLLECTOR-SNMPTRAP"
            },
            "aaf_username": null
        }
    },
    "files": {
        "runtime_base_dir": "/opt/app/snmptrap",
        "log_dir": "logs",
        "data_dir": "data",
        "pid_dir": "tmp",
        "arriving_traps_log": "snmptrapd_arriving_traps.log",
        "snmptrapd_diag": "snmptrapd_prog_diag.log",
        "traps_stats_log": "snmptrapd_stats.csv",
        "perm_status_file": "snmptrapd_status.log",
        "eelf_base_dir": "/opt/app/snmptrap/logs",
        "eelf_error": "error.log",
        "eelf_debug": "debug.log",
        "eelf_audit": "audit.log",
        "eelf_metrics": "metrics.log",
        "roll_frequency": "hour",
        "minimum_severity_to_log": 3
    },
    "snmpv3_config": {
        "usm_users": [
            {
                "engineId": "8000000000000001",
                "user": "user1",
                "usmDESPrivProtocol": "privkey1",
                "usmHMACMD5AuthProtocol": "authkey1"
            },
            {
                "engineId": "8000000000000002",
                "user": "user2",
                "usm3DESEDEPrivProtocol": "privkey2",
                "usmHMACMD5AuthProtocol": "authkey2"
            },
            {
                "engineId": "8000000000000003",
                "user": "user3",
                "usmAesCfb128Protocol": "privkey3",
                "usmHMACMD5AuthProtocol": "authkey3"
            },
            {
                "engineId": "8000000000000004",
                "user": "user4",
                "usmAesBlumenthalCfb192Protocol": "privkey4",
                "usmHMACMD5AuthProtocol": "authkey4"
            },
            {
                "engineId": "8000000000000005",
                "user": "user5",
                "usmAesBlumenthalCfb256Protocol": "privkey5",
                "usmHMACMD5AuthProtocol": "authkey5"
            },
            {
                "engineId": "8000000000000006",
                "user": "user6",
                "usmAesCfb192Protocol": "privkey6",
                "usmHMACMD5AuthProtocol": "authkey6"
            },
            {
                "engineId": "8000000000000007",
                "user": "user7",
                "usmAesCfb256Protocol": "privkey7",
                "usmHMACMD5AuthProtocol": "authkey7"
            },
            {
                "engineId": "8000000000000009",
                "user": "user9",
                "usmDESPrivProtocol": "privkey9",
                "usmHMACSHAAuthProtocol": "authkey9"
            },
            {
                "engineId": "8000000000000010",
                "user": "user10",
                "usm3DESEDEPrivProtocol": "privkey10",
                "usmHMACSHAAuthProtocol": "authkey10"
            },
            {
                "engineId": "8000000000000011",
                "user": "user11",
                "usmAesCfb128Protocol": "privkey11",
                "usmHMACSHAAuthProtocol": "authkey11"
            },
            {
                "engineId": "8000000000000012",
                "user": "user12",
                "usmAesBlumenthalCfb192Protocol": "privkey12",
                "usmHMACSHAAuthProtocol": "authkey12"
            },
            {
                "engineId": "8000000000000013",
                "user": "user13",
                "usmAesBlumenthalCfb256Protocol": "privkey13",
                "usmHMACSHAAuthProtocol": "authkey13"
            },
            {
                "engineId": "8000000000000014",
                "user": "user14",
                "usmAesCfb192Protocol": "privkey14",
                "usmHMACSHAAuthProtocol": "authkey14"
            },
            {
                "engineId": "8000000000000015",
                "user": "user15",
                "usmAesCfb256Protocol": "privkey15",
                "usmHMACSHAAuthProtocol": "authkey15"
            },
            {
                "engineId": "8000000000000017",
                "user": "user17",
                "usmDESPrivProtocol": "privkey17",
                "usmHMAC128SHA224AuthProtocol": "authkey17"
            },
            {
                "engineId": "8000000000000018",
                "user": "user18",
                "usm3DESEDEPrivProtocol": "privkey18",
                "usmHMAC128SHA224AuthProtocol": "authkey18"
            },
            {
                "engineId": "8000000000000019",
                "user": "user19",
                "usmAesCfb128Protocol": "privkey19",
                "usmHMAC128SHA224AuthProtocol": "authkey19"
            },
            {
                "engineId": "8000000000000020",
                "user": "user20",
                "usmAesBlumenthalCfb192Protocol": "privkey20",
                "usmHMAC128SHA224AuthProtocol": "authkey20"
            },
            {
                "engineId": "8000000000000021",
                "user": "user21",
                "usmAesBlumenthalCfb256Protocol": "privkey21",
                "usmHMAC128SHA224AuthProtocol": "authkey21"
            },
            {
                "engineId": "8000000000000022",
                "user": "user22",
                "usmAesCfb192Protocol": "privkey22",
                "usmHMAC128SHA224AuthProtocol": "authkey22"
            },
            {
                "engineId": "8000000000000023",
                "user": "user23",
                "usmAesCfb256Protocol": "privkey23",
                "usmHMAC128SHA224AuthProtocol": "authkey23"
            },
            {
                "engineId": "8000000000000025",
                "user": "user25",
                "usmDESPrivProtocol": "privkey25",
                "usmHMAC192SHA256AuthProtocol": "authkey25"
            },
            {
                "engineId": "8000000000000026",
                "user": "user26",
                "usm3DESEDEPrivProtocol": "privkey26",
                "usmHMAC192SHA256AuthProtocol": "authkey26"
            },
            {
                "engineId": "8000000000000027",
                "user": "user27",
                "usmAesCfb128Protocol": "privkey27",
                "usmHMAC192SHA256AuthProtocol": "authkey27"
            },
            {
                "engineId": "8000000000000028",
                "user": "user28",
                "usmAesBlumenthalCfb192Protocol": "privkey28",
                "usmHMAC192SHA256AuthProtocol": "authkey28"
            },
            {
                "engineId": "8000000000000029",
                "user": "user29",
                "usmAesBlumenthalCfb256Protocol": "privkey29",
                "usmHMAC192SHA256AuthProtocol": "authkey29"
            },
            {
                "engineId": "8000000000000030",
                "user": "user30",
                "usmAesCfb192Protocol": "privkey30",
                "usmHMAC192SHA256AuthProtocol": "authkey30"
            },
            {
                "engineId": "8000000000000031",
                "user": "user31",
                "usmAesCfb256Protocol": "privkey31",
                "usmHMAC192SHA256AuthProtocol": "authkey31"
            },
            {
                "engineId": "8000000000000033",
                "user": "user33",
                "usmDESPrivProtocol": "privkey33",
                "usmHMAC256SHA384AuthProtocol": "authkey33"
            },
            {
                "engineId": "8000000000000034",
                "user": "user34",
                "usm3DESEDEPrivProtocol": "privkey34",
                "usmHMAC256SHA384AuthProtocol": "authkey34"
            },
            {
                "engineId": "8000000000000035",
                "user": "user35",
                "usmAesCfb128Protocol": "privkey35",
                "usmHMAC256SHA384AuthProtocol": "authkey35"
            },
            {
                "engineId": "8000000000000036",
                "user": "user36",
                "usmAesBlumenthalCfb192Protocol": "privkey36",
                "usmHMAC256SHA384AuthProtocol": "authkey36"
            },
            {
                "engineId": "8000000000000037",
                "user": "user37",
                "usmAesBlumenthalCfb256Protocol": "privkey37",
                "usmHMAC256SHA384AuthProtocol": "authkey37"
            },
            {
                "engineId": "8000000000000038",
                "user": "user38",
                "usmAesCfb192Protocol": "privkey38",
                "usmHMAC256SHA384AuthProtocol": "authkey38"
            },
            {
                "engineId": "8000000000000039",
                "user": "user39",
                "usmAesCfb256Protocol": "privkey39",
                "usmHMAC256SHA384AuthProtocol": "authkey39"
            },
            {
                "engineId": "8000000000000041",
                "user": "user41",
                "usmDESPrivProtocol": "privkey41",
                "usmHMAC384SHA512AuthProtocol": "authkey41"
            },
            {
                "engineId": "8000000000000042",
                "user": "user42",
                "usm3DESEDEPrivProtocol": "privkey42",
                "usmHMAC384SHA512AuthProtocol": "authkey42"
            },
            {
                "engineId": "8000000000000043",
                "user": "user43",
                "usmAesCfb128Protocol": "privkey43",
                "usmHMAC384SHA512AuthProtocol": "authkey43"
            },
            {
                "engineId": "8000000000000044",
                "user": "user44",
                "usmAesBlumenthalCfb192Protocol": "privkey44",
                "usmHMAC384SHA512AuthProtocol": "authkey44"
            },
            {
                "engineId": "8000000000000045",
                "user": "user45",
                "usmAesBlumenthalCfb256Protocol": "privkey45",
                "usmHMAC384SHA512AuthProtocol": "authkey45"
            },
            {
                "engineId": "8000000000000046",
                "user": "user46",
                "usmAesCfb192Protocol": "privkey46",
                "usmHMAC384SHA512AuthProtocol": "authkey46"
            },
            {
                "engineId": "8000000000000047",
                "user": "user47",
                "usmAesCfb256Protocol": "privkey47",
                "usmHMAC384SHA512AuthProtocol": "authkey47"
            }
        ]
    }
}
Administration
Processes

trapd runs as a single (python) process inside (or outside) the container. You can monitor it using the commands documented below.

NOTE: Familiarity with docker environments is assumed below. For example, if you stop a running instance of snmptrapd that was started using the default snmptrapd docker configuration, the container itself will exit. Similarly, if you start an instance of snmptrapd inside a container, it will not run in the background (this is a dependency between docker and the application: if the command registered to run the service inside the container terminates, docker assumes the application has failed and terminates the container itself).

Actions
Starting snmptrapd

The trapd service can be started by running the command:

/opt/app/snmptrap/bin/snmptrapd.sh start

Output from this command will be two-fold. First will be the textual response:

2018-10-16T15:14:59,461 Starting snmptrapd...
2018-10-16T19:15:01,966 ONAP controller not present, trying json config override via CBS_SIM_JSON env variable
2018-10-16T19:15:01,966 ONAP controller override specified via CBS_SIM_JSON: ../etc/snmptrapd.json
2018-10-16T19:15:01,973 ../etc/snmptrapd.json loaded and parsed successfully
2018-10-16T19:15:02,038 load_all_configs|snmptrapd||||INFO|100||current config logged to : /opt/app/snmptrap/tmp/current_config.json
2018-10-16T19:15:02,048 snmptrapd.py : ONAP SNMP Trap Receiver version 1.4.0 starting
2018-10-16T19:15:02,049 arriving traps logged to: /opt/app/snmptrap/logs/snmptrapd_arriving_traps.log
2018-10-16T19:15:02,050 published traps logged to: /opt/app/snmptrap/logs/DMAAP_unauthenticated.ONAP-COLLECTOR-SNMPTRAP.json

NOTE: This command will remain in the foreground for reasons explained above.

Checking Status

The trapd container can be monitored for status by running this command from inside the container:

/opt/app/snmptrap/bin/snmptrapd.sh status

If SNMPTRAPD is present/running, output from this command will be:

2018-10-16T15:01:47,705 Status: snmptrapd running
ucsnmp    16109  16090  0 Oct08 ?        00:07:16 python ./snmptrapd.py

and the return code presented to the shell upon exit:

0 -> if command executed successfully and the process was found
1 -> if the command failed, and/or the process is not running

$ echo $?

0

If trapd is not present, output from this command will be:

2018-10-16T15:10:47,815 PID file /opt/app/snmptrap/tmp/snmptrapd.py.pid does not exist or not readable - unable to check status of snmptrapd
2018-10-16T15:10:47,816 Diagnose further at command line as needed.

and the return code presented to the shell upon exit:

$ echo $?

1

Stopping trapd

trapd can be stopped by running the command:

/opt/app/snmptrap/bin/snmptrapd.sh stop

Output from this command will be two-fold. First will be the textual response:

2018-10-16T15:10:07,808 Stopping snmptrapd PID 16109...
2018-10-16T15:10:07,810 Stopped

Second will be the return code presented to the shell upon exit:

0 - if command executed successfully
1 - if the request to stop failed

$ echo $?

0

Other commands of interest
Checking for snmptrapd inside a container

ps -ef | grep snmptrapd.py | grep -v grep

Checking for snmptrapd outside the container

docker exec -it <container name> ps -ef | grep snmptrapd.py | grep -v grep

Human Interfaces
Graphical

There are no graphical interfaces for snmptrap.

Command Line

There is a command line interface available, which is a shell script that provides all needed interactions with trapd.

Usage

bin/snmptrapd.sh [start|stop|restart|status|reloadCfg]

start - start an instance of snmptrapd inside the container

stop - terminate the snmptrapd process currently running inside container

restart - restart an instance of snmptrapd inside current container (NOTE: this may cause container to exit depending on how it was started!)

status - check and display status of snmptrapd inside container

reloadCfg - signal current instance of snmptrapd to re-request configuration from Config Binding Service (NOTE: Known issue for configurations that include SNMPv3 credentials, this option will not work as expected)

Release Notes
Version: 2.3.0
Release Date

2020-04-01

New Features

  • https://jira.onap.org/browse/DCAEGEN2-2020 Eliminate use of consul service discovery in snmptrap

  • https://jira.onap.org/browse/DCAEGEN2-2068 Updated dependency library version; stormwatch support

Bug Fixes

Known Issues

Security Issues
  • None

Upgrade Notes

Deprecation Notes

Other

Version: 1.4.0
Release Date

2018-10-01

New Features

  • https://jira.onap.org/browse/DCAEGEN2-630 Added support for SNMPv3 traps with varying levels of privacy and authentication support.

Bug Fixes
  • https://jira.onap.org/browse/DCAEGEN2-842 Remove additional RFC3584 (Sec 3.1 (4)) varbinds from published/logged SNMPv1 messages, fix DMAAP publish error for traps with no varbinds present.

Known Issues

Security Issues
  • None

Upgrade Notes

Deprecation Notes

Other


Version: 1.3.0
Release Date

2018-05-02

New Features

Support for config binding services.

Bug Fixes
  • https://jira.onap.org/browse/DCAEGEN2-465

Known Issues
  • https://jira.onap.org/browse/DCAEGEN2-465 Default config causes standalone instance startup failure.

Security Issues
  • None

Upgrade Notes

Deprecation Notes

Other

Event Processor

BBS-EventProcessor

Overview

BBS-ep is responsible for handling two types of events for the BBS use case.

First are the PNF re-registration internal events published by PRH. BBS-ep must process these internal events to determine whether they actually constitute ONT (CPE) relocation events. In the relocation case, it publishes an event to the unauthenticated.DCAE_CL_OUTPUT DMaaP topic to trigger further Policy actions related to the BBS use case.

The second type of events is CPE authentication events, originally published by the Edge SDN M&C component of the BBS use case architecture. Through the RestConf Collector or VES Collector, these events are consumed by BBS-ep and forwarded to the unauthenticated.DCAE_CL_OUTPUT DMaaP topic to trigger further Policy actions related to the BBS use case.

BBS-ep periodically polls for the two events. The polling interval is configurable and can be changed dynamically from Consul. Its implementation is based on Reactive Streams (the Reactor library), so it is fully asynchronous and non-blocking.

Functionality

PNF re-registration processing logic

_images/bbs-ep-pnf-relocation.png

CPE authentication processing logic

_images/bbs-ep-cpe-authentication.png

For more details about the exact flows and where BBS-ep fits in the overall BBS use case flows, visit https://wiki.onap.org/display/DW/BBS+Notifications

Compiling BBS-EP

BBS-ep is a sub-project of dcaegen2/services (inside the components directory). To build just the BBS-ep component, run the following Maven command from within the components/bbs-event-processor directory: mvn clean install

API Endpoints
Running with dev-mode of BBS-EP
  • Heartbeat: GET http://<container_address>:8100/heartbeat

  • Start Polling for events: POST http://<container_address>:8100/start-tasks

  • Stop Polling for events: POST http://<container_address>:8100/cancel-tasks

  • Execute just one polling for PNF re-registration internal events: POST http://<container_address>:8100/poll-reregistration-events

  • Execute just one polling for CPE authentication events: POST http://<container_address>:8100/poll-cpe-authentication-events

  • Change application logging level: POST http://<container_address>:8100/logging/{level}

More detailed API specifications can be found in ../../apis/swagger-bbs-event-processor.
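
A short Python sketch exercising these dev-mode endpoints follows; the host "localhost" and the "DEBUG" logging level are assumptions for illustration.

import requests

BASE = "http://localhost:8100"   # assumed <container_address>

print(requests.get(f"{BASE}/heartbeat", timeout=5).text)        # liveness check
requests.post(f"{BASE}/start-tasks", timeout=5)                  # start polling
requests.post(f"{BASE}/poll-reregistration-events", timeout=5)   # one-shot PNF poll
requests.post(f"{BASE}/logging/DEBUG", timeout=5)                # change log level
requests.post(f"{BASE}/cancel-tasks", timeout=5)                 # stop polling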

Maven GroupId:

org.onap.dcaegen2.services.components

Maven Parent ArtifactId:

org.onap.oparen:oparent:1.2.3

DataLake-Handler MS

DataLake-Handler MS is a software component of ONAP that can systematically persist events from DMaaP into supported Big Data storage systems. It has an Admin UI, where a system administrator configures which topics are to be monitored and to which data storage the data should be written; it is also used to manage the settings of the storage and the associated data analytics tools. The second part is the Feeder, which does the data transfer work and is horizontally scalable. The third part is the Data Extraction Service (DES), which exposes the data in the data storage via REST API for other ONAP components and external systems to consume.

_images/DL-DES.PNG
DataLake-Handler MS overview and functions
Architecture
Background

There is a large amount of data flowing among ONAP components, mostly via DMaaP and Web Services. For example, all events/feeds collected by DCAE collectors go through DMaaP. DMaaP is backed by Kafka, a publish-subscribe system where data is not meant to be permanent and gets deleted after a certain retention period. Kafka is not a database, which means the data there is not meant to be queried. Though some components may store processed results in their local databases, most of the raw data will eventually be lost. We should provide a systematic way to store this raw data, and even the processed results, to serve as the source for data analytics and machine learning, providing insight into network operation.

Relations with Other ONAP Components

The architecture below depicts the DataLake MS as a part of ONAP. Only the relevant interactions and components are shown.

_images/arch.PNG
Note that not all data storage systems in the picture are supported. In R6, the following storage systems are supported:
  • MongoDB

  • Couchbase

  • Elasticsearch and Kibana

  • HDFS

Depending on demand, new systems may be added to the supported list. In the following, we use the term database for the storage, even though HDFS is a file system (with simple settings it can be treated as a database, e.g. via Hive).

Note that once the data is stored in databases, other ONAP components and systems will directly query data from the databases, without interacting with DataLake Handler.

Description

DataLake Handler’s main function is to monitor and persist data flowing through DMaaP and to provide a query API for other components or external services. The databases are outside of ONAP’s scope, since the data is expected to be huge and a database may be a complicated cluster consisting of thousands of nodes.

Admin UI
A system administrator uses DataLake Admin UI to:
  • Configure external database connections, such as host, port, login.

  • Configure which Topics to monitor, which databases to store the data for each Topic.

  • Pre-configure 3rd-party tools’ dashboards and templates.

This UI tool is used to manage all the DataLake settings stored in Postgres. Here is the database schema:

_images/dbschema.PNG
Feeder

Architecture

./feeder-arch.PNG

Features

  • Read data directly from Kafka for performance.

  • Support for pluggable databases. To add a new database, we only need to implement its corresponding service.

  • Support REST API for inter-component communication. Besides managing DataLake settings in Postgres, the Admin UI also uses this API to start/stop the Feeder and to query Feeder status and statistics.

  • Use postgres to store settings.

  • Support data processing features. Before persisting, data can be massaged in the Feeder. Currently two features are implemented: Correlate Cleared Message (in org.onap.datalake.feeder.service.db.ElasticsearchService) and Flatten JSON Array (org.onap.datalake.feeder.service.StoreService); the latter is illustrated in the sketch after this list.

  • Connections to Kafka and the databases are secured.
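
A conceptual Python sketch of the Flatten JSON Array idea (not DataLake’s actual implementation): a message whose top level is a JSON array is split into individual records before persisting.

import json

raw = '[{"id": 1}, {"id": 2}]'         # one DMaaP message containing an array
parsed = json.loads(raw)
records = parsed if isinstance(parsed, list) else [parsed]
for record in records:
    print(json.dumps(record))          # each element persisted as its own record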

DES

Architecture

./des-arch.PNG

Features

  • Provide a data query API for other components to consume.

  • Integrate with Presto to do data queries via SQL templates (see the sketch after this list).
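
An illustrative sketch (not DES’s internal code) of querying through Presto with the presto-python-client package; the host, port, catalog, schema, and table names are assumptions.

import prestodb

conn = prestodb.dbapi.connect(
    host="dl-presto", port=9000, user="onap",   # matches the dl-presto service below
    catalog="mongodb", schema="datalake",        # hypothetical catalog/schema
)
cur = conn.cursor()
cur.execute("SELECT * FROM events LIMIT 10")     # hypothetical table name
for row in cur.fetchall():
    print(row)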

DataLake-Handler MS Installation Steps and Configurations
Helm Installation

DL-handler consists of three pods: the Feeder, the Admin UI, and DES. It can be deployed using Helm charts. The following steps guide you through launching DataLake via Helm.

Pre-requisites
  • Datalake postgres should be properly deployed and functional.

  • Presto service should be deployed for the DES deployment. Here is a sample of how Presto is deployed in the environment.

    Deploying presto service:

    The package of the Presto version we are using is v0.0.2: presto-v0.0.2.tar.gz

    #docker build -t presto:v0.0.2 .
    #docker tag presto:v0.0.2 registry.baidubce.com/onap/presto:v0.0.2
    #docker push registry.baidubce.com/onap/presto:v0.0.2

    Note: Replace the repository path with your own repository.

    #kubectl -n onap run dl-presto --image=registry.baidubce.com/onap/presto:v0.0.2 --env="MongoDB_IP=192.168.235.11" --env="MongoDB_PORT=27017"
    #kubectl -n onap expose deployment dl-presto --port=9000 --target-port=9000 --type=NodePort

    Note: You can replace the MongoDB_IP and MongoDB_PORT values with your own configuration.

  • The environment should have helm and kubernetes installed.

  • Check whether all the charts mentioned in the requirements.yaml file are present in the charts/ folder. If not, package the respective chart and put it in the charts/ folder.

For example:
helm package <dcaegen2-services-common>
Deployment steps
Validate the charts using the commands below
helm lint <dcae-datalake-admin-ui>
helm lint <dcae-datalake-feeder>
helm lint <dcae-datalake-des>
Deploy the charts using the commands below
helm install <datalake-admin-ui> <dcae-datalake-admin-ui> --namespace onap --set global.masterPassword=<password>
helm install <datalake-feeder> <dcae-datalake-feeder> --namespace onap --set global.masterPassword=<password>
helm install <datalake-des> <dcae-datalake-des> --namespace onap --set global.masterPassword=<password>
To check the logs of the containers
kubectl logs -f -n onap <dev-dcae-datalake-admin-ui-843bfsk4f4-btd7s> -c <dcae-datalake-admin-ui>
kubectl logs -f -n onap <dev-dcae-datalake-feeder-758bbf547b-ctf6s> -c <dcae-datalake-feeder>
kubectl logs -f -n onap <dev-dcae-datalake-des-56465d86fd-2w56c> -c <dcae-datalake-des>
To un-deploy
helm uninstall <datalake-admin-ui>
helm uninstall <datalake-feeder>
helm uninstall <datalake-des>
Application configurations

Datalake-admin-ui:

  Configuration   Description
  FEEDER_ADDR     Host where the dl-feeder is running

Datalake-feeder:

  Configuration   Description
  PRESTO_HOST     Host where the Presto application is running
  PG_HOST         Host where the Postgres application is running
  CONSUL_HOST     Host where the Consul loader container is running
  PG_DB           Postgres database name

Datalake-des:

  Configuration   Description
  PRESTO_HOST     Host where the Presto application is running
  PG_HOST         Host where the Postgres application is running
  PG_DB           Postgres database name

DataLake-Handler MS Admin UI User Guide
Admin UI User Guide
Introduction

DataLake Admin UI aims to provide a user-friendly dashboard to easily monitor and manage DataLake configurations for the involved components, ONAP topics, databases, and 3rd-party tools. The Admin UI portal can be accessed via http://datalake-admin-ui:30479

DataLake Feeder Management

_images/adminui-feeder.png

Click “DataLake Feeder” on the menu bar; the dashboard shows overview information about the DataLake Feeder, such as the number of topics. You can also enable or disable the DataLake Feeder backend process using the toggle switch.

Kafka Management

_images/adminui-kafka.png

Click “Kafka” on the menu bar; the page provides Kafka resource management, allowing you to add, modify, and delete Kafka resources as needed.

_images/adminui-kafka-edit.png

You can modify a Kafka resource by clicking its card, and add a new Kafka resource by clicking the plus button. You will then need to fill in the required information, such as an identifying name, the message router address, the ZooKeeper address, and so on.

Topics Management

_images/adminui-topics.png
_images/adminui-topic-edit1.png
_images/adminui-topic-edit2.png
_images/adminui-topic-edit3.png

The Topic page lists all the topics that have been configured through topic management. You can edit a topic’s settings by double-clicking its row. The settings include the DataLake Feeder status (whether to catch the topic or not), the data format, and the topic’s time-to-live. You must also choose one or more Kafka items as the topic source and define the databases in which to store the topic data.

_images/adminui-topic-config.png

For the default configuration of topics, click the “Default configurations” button. When you add a new topic, these configurations will be filled into the form automatically.

_images/adminui-topic-new.png

To add a new topic for the DataLake Feeder, click the “plus icon” button to start catching the topic’s data into the 3rd-party databases. Please note that only topics that already exist in Kafka can be added.

Database Management

_images/adminui-dbs.png
_images/adminui-dbs-edit.png

The Database Management page allows you to add, modify, and delete the database resources where messages from topics will be stored. DataLake supports a number of databases, including Couchbase, Apache Druid, Elasticsearch, HDFS, and MongoDB.

3rd-Party Tools Management

_images/adminui-tools.png

The Tools page allows you to manage the resources of 3rd-party tools for data visualization. Currently, DataLake supports two tools: Kibana and Apache Superset.

3rd-Party Design Tools Management

_images/adminui-design.png
_images/adminui-design-edit.png

After setting up the 3rd-party tools, you can import templates in JSON, YAML, or other formats for data exploration, data visualization, and dashboarding. DataLake supports Kibana dashboards, Kibana searches, Kibana visualizations, Elasticsearch field mapping templates, and the Apache Druid Kafka indexing service.

VES-Mapper

Different VNF vendors generate event and telemetry data in different formats. Out of the box, not all VNF vendors support the VES format. VES-Mapper provides a generic adapter to convert the different formats of event and telemetry data into the VES structure, which can then be consumed by existing DCAE analytics applications.

Note: Currently, mapping files are available for the SNMP Collector and the RestConf Collector.

VES-Mapper converts the telemetry data into the required VES format and publishes it to DMaaP for further action by the DCAE analytics applications.

Flow for converting RestConf Collector notification

  1. RestConf Collector generates an rcc-notification in JSON format and publishes it on the DMaaP topic unauthenticated.DCAE_RCC_OUTPUT.

  2. The Universal VES Adapter (UVA) microservice has subscribed to this DMaaP topic.

  3. On receiving an event from DMaaP, the adapter uses the corresponding mapping file and converts the received notification into a VES event. It uses the notification-id from the received notification to find the required mapping file.

  4. For notifications for which no mapping file is identified, a default mapping file with generic mappings is used to create the VES event.

  5. The VES-formatted event is then published on the DMaaP topic unauthenticated.VES_PNFREG_OUTPUT.

RestConf flow
Flow for converting SNMP Collector notification

  1. A VNF submits SNMP traps to the SNMP collector.

  2. The collector converts the trap into JSON format and publishes it on the DMaaP topic unauthenticated.ONAP-COLLECTOR-SNMPTRAP.

  3. The Universal VES Adapter (UVA) microservice has subscribed to this DMaaP topic.

  4. On receiving an event from DMaaP, the adapter uses the corresponding mapping file and converts the received event into a VES event. It uses the enterprise ID from the received event to find the required mapping file.

  5. For SNMP traps for which no mapping file is identified, a default mapping file with generic mappings is used to create the VES event.

  6. The VES-formatted event is then published on the DMaaP topic unauthenticated.SEC_FAULT_OUTPUT.

SNMP flow
Delivery

Mapper is delivered as one Docker container containing the Spring Boot microservice UniversalVesAdapter, which converts telemetry data to VES.

In the current release, the UniversalVesAdapter is integrated with DCAE’s Config Binding Service. On startup, it fetches its initial configuration from CBS and uses it. It currently has no functionality to refresh configuration changes made in the Consul KV store.
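
The initial fetch is a single REST call to CBS; a minimal Python sketch (host, port, and component name are assumptions for illustration) is:

import requests

CBS = "http://config-binding-service:10000"   # assumed CBS address
COMPONENT = "dcae-ves-mapper"                  # assumed ServiceComponentName

# GET /service_component/{name} returns the resolved JSON configuration.
cfg = requests.get(f"{CBS}/service_component/{COMPONENT}", timeout=10).json()
print(cfg)
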
Docker Containers

Docker images can be pulled from ONAP Nexus repository with below commands:

docker pull nexus3.onap.org:10001/onap/org.onap.dcaegen2.services.mapper.vesadapter.universalvesadaptor:latest

Mapping File

Mapping file is needed by Universal VES Adapter to convert the telemetry data into the VES format. The Adapter uses Smooks Framework to do the data format conversion by using the mapping files.

To learn more about the Smooks framework, see the following link:
SNMP Collector Default Mapping File

Following is the default SNMP mapping file, which is used when no mapping file is found while processing an event from the SNMP Trap Collector.

<?xml version="1.0" encoding="UTF-8"?><smooks-resource-list xmlns="http://www.milyn.org/xsd/smooks-1.1.xsd" xmlns:jb="http://www.milyn.org/xsd/smooks/javabean-1.4.xsd" xmlns:json="http://www.milyn.org/xsd/smooks/json-1.1.xsd">
  <json:reader rootName="vesevent" keyWhitspaceReplacement="-">
     <json:keyMap>
        <json:key from="date&amp;time" to="date-and-time" />
     </json:keyMap>
  </json:reader>
  <jb:bean class="org.onap.dcaegen2.ves.domain.ves54.VesEvent" beanId="vesEvent" createOnElement="vesevent">
     <jb:wiring property="event" beanIdRef="event" />
  </jb:bean>
  <jb:bean class="org.onap.dcaegen2.ves.domain.ves54.Event" beanId="event" createOnElement="vesevent">
     <jb:wiring property="commonEventHeader" beanIdRef="commonEventHeader" />
     <jb:wiring property="faultFields" beanIdRef="faultFields" />
  </jb:bean>
  <jb:bean class="org.onap.dcaegen2.ves.domain.ves54.CommonEventHeader" beanId="commonEventHeader" createOnElement="vesevent">
     <jb:expression property="version">'3.0'</jb:expression>
     <jb:expression property="eventType">'FaultField'</jb:expression>
     <jb:expression property="eventId" execOnElement="vesevent">'XXXX'</jb:expression>
     <jb:expression property="reportingEntityName">'VESMapper'</jb:expression>
     <jb:expression property="domain">org.onap.dcaegen2.ves.domain.ves54.CommonEventHeader.Domain.FAULT</jb:expression>
     <jb:expression property="eventName" execOnElement="vesevent">commonEventHeader.domain</jb:expression>
     <jb:value property="sequence" data="0" default="0" decoder="Long" />
     <jb:value property="lastEpochMicrosec" data="#/time-received" />
     <jb:value property="startEpochMicrosec" data="#/time-received" />
     <jb:expression property="priority">org.onap.dcaegen2.ves.domain.ves54.CommonEventHeader.Priority.NORMAL</jb:expression>
     <jb:expression property="sourceName">'VesAdapter'</jb:expression>
  </jb:bean>
  <jb:bean class="org.onap.dcaegen2.ves.domain.ves54.FaultFields" beanId="faultFields" createOnElement="vesevent">
     <jb:value property="faultFieldsVersion" data="2.0" default="2.0" decoder="Double" />
     <jb:value property="alarmCondition" data="#/trap-category" />
     <jb:expression property="specificProblem">'SNMP Fault'</jb:expression>
     <jb:expression property="vfStatus">org.onap.dcaegen2.ves.domain.ves54.FaultFields.VfStatus.ACTIVE</jb:expression>
     <jb:expression property="eventSeverity">org.onap.dcaegen2.ves.domain.ves54.FaultFields.EventSeverity.MINOR</jb:expression>
     <jb:wiring property="alarmAdditionalInformation" beanIdRef="alarmAdditionalInformationroot" />
  </jb:bean>
  <jb:bean class="java.util.ArrayList" beanId="alarmAdditionalInformationroot" createOnElement="vesevent">
     <jb:wiring beanIdRef="alarmAdditionalInformation" />
  </jb:bean>
  <jb:bean class="org.onap.dcaegen2.ves.domain.ves54.AlarmAdditionalInformation" beanId="alarmAdditionalInformation" createOnElement="varbinds/element">
     <jb:value property="name" data="#/varbind_oid" />
     <jb:value property="value" data="#/varbind_value" />
  </jb:bean></smooks-resource-list>
RestConf Collector Default Mapping File

Following is the default RestConf Collector mapping file, which is used when no mapping file is found while processing a notification from the RestConf Collector.

<?xml version="1.0" encoding="UTF-8"?><smooks-resource-list xmlns="http://www.milyn.org/xsd/smooks-1.1.xsd" xmlns:jb="http://www.milyn.org/xsd/smooks/javabean-1.4.xsd" xmlns:json="http://www.milyn.org/xsd/smooks/json-1.1.xsd">
  <json:reader rootName="vesevent" keyWhitspaceReplacement="-">
     <json:keyMap>
        <json:key from="date&amp;time" to="date-and-time" />
     </json:keyMap>
  </json:reader>
  <jb:bean class="org.onap.dcaegen2.ves.domain.ves70.VesEvent" beanId="vesEvent" createOnElement="vesevent">
     <jb:wiring property="event" beanIdRef="event" />
  </jb:bean>
  <jb:bean class="org.onap.dcaegen2.ves.domain.ves70.Event" beanId="event" createOnElement="vesevent">
     <jb:wiring property="commonEventHeader" beanIdRef="commonEventHeader" />
     <jb:wiring property="pnfRegistrationFields" beanIdRef="pnfRegistrationFields" />
  </jb:bean>
  <jb:bean class="org.onap.dcaegen2.ves.domain.ves70.CommonEventHeader" beanId="commonEventHeader" createOnElement="vesevent">
     <jb:expression property="version">org.onap.dcaegen2.ves.domain.ves70.CommonEventHeader.Version._4_0_1</jb:expression>
     <jb:expression property="eventType">'pnfRegistration'</jb:expression>
     <jb:expression property="vesEventListenerVersion">org.onap.dcaegen2.ves.domain.ves70.CommonEventHeader.VesEventListenerVersion._7_0_1</jb:expression>
     <jb:expression property="eventId" execOnElement="vesevent">'registration_'+commonEventHeader.ts1</jb:expression>
     <jb:expression property="reportingEntityName">'VESMapper'</jb:expression>
     <jb:expression property="domain">org.onap.dcaegen2.ves.domain.ves70.CommonEventHeader.Domain.PNF_REGISTRATION</jb:expression>
     <jb:expression property="eventName" execOnElement="vesevent">commonEventHeader.domain</jb:expression>
     <jb:value property="sequence" data="0" default="0" decoder="Long" />
     <jb:expression property="lastEpochMicrosec" execOnElement="vesevent">commonEventHeader.ts1</jb:expression>
     <jb:expression property="startEpochMicrosec" execOnElement="vesevent">commonEventHeader.ts1</jb:expression>
     <jb:expression property="priority">org.onap.dcaegen2.ves.domain.ves70.CommonEventHeader.Priority.NORMAL</jb:expression>
     <jb:expression property="sourceName" execOnElement="vesevent">pnfRegistrationFields.vendorName+'-'+pnfRegistrationFields.serialNumber</jb:expression>
  </jb:bean>
  <jb:bean class="org.onap.dcaegen2.ves.domain.ves70.PnfRegistrationFields" beanId="pnfRegistrationFields" createOnElement="vesevent">
     <jb:expression property="pnfRegistrationFieldsVersion">org.onap.dcaegen2.ves.domain.ves70.PnfRegistrationFields.PnfRegistrationFieldsVersion._2_0</jb:expression>
     <jb:value property="serialNumber" data="pnfRegistration/serialNumber" />
     <jb:value property="lastServiceDate" data="pnfRegistration/lastServiceDate" />
     <jb:value property="manufactureDate" data="pnfRegistration/manufactureDate" />
     <jb:value property="modelNumber" data="pnfRegistration/modelNumber" />
     <jb:value property="oamV4IpAddress" data="pnfRegistration/oamV4IpAddress" />
     <jb:value property="oamV6IpAddress" data="pnfRegistration/oamV6IpAddress" />
     <jb:value property="softwareVersion" data="pnfRegistration/softwareVersion" />
     <jb:value property="unitFamily" data="pnfRegistration/unitFamily" />
     <jb:value property="unitType" data="pnfRegistration/unitType" />
     <jb:value property="vendorName" data="pnfRegistration/vendorName" />
     <jb:wiring property="additionalFields" beanIdRef="alarmAdditionalInformation" />
  </jb:bean>
  <jb:bean class="org.onap.dcaegen2.ves.domain.ves70.AlarmAdditionalInformation" beanId="alarmAdditionalInformation" createOnElement="vesevent">
     <jb:wiring property="additionalProperties" beanIdRef="additionalFields2" />
  </jb:bean>
  <jb:bean beanId="additionalFields2" class="java.util.HashMap" createOnElement="vesevent/pnfRegistration/additionalFields">
     <jb:value data="pnfRegistration/additionalFields/*" />
  </jb:bean></smooks-resource-list>
Sample SNMP Trap Conversion

Following is a sample SNMP trap as received by the Universal VES Adapter from the SNMP Trap Collector:

   {
  "cambria.partition":"10.53.172.132",
  "trap category":"ONAP-COLLECTOR-SNMPTRAP",
  "community len":0,
  "protocol version":"v2c",
  "varbinds":[
     {
        "varbind_value":"CLEARED and CRITICAL severities have the same name",
        "varbind_oid":"1.3.6.1.4.1.19444.12.2.0.2.0",
        "varbind_type":"OctetString"
     },
     {
        "varbind_value":"1.3",
        "varbind_oid":"1.3.6.1.4.1.19444.12.2.0.3.0",
        "varbind_type":"ObjectIdentifier"
     },
     {
        "varbind_value":"1.3",
        "varbind_oid":"1.3.6.1.4.1.19444.12.2.0.4.0",
        "varbind_type":"ObjectIdentifier"
     },
     {
        "varbind_value":"CLEARED",
        "varbind_oid":"1.3.6.1.4.1.19444.12.2.0.5.0",
        "varbind_type":"OctetString"
     },
     {
        "varbind_value":"Queue manager: Process failure cleared",
        "varbind_oid":"1.3.6.1.4.1.19444.12.2.0.6.0",
        "varbind_type":"OctetString"
     },
     {
        "varbind_value":"The queue manager process has been restored to normal operation",
        "varbind_oid":"1.3.6.1.4.1.19444.12.2.0.7.0",
        "varbind_type":"OctetString"
     },
     {
        "varbind_value":"The queue manager process has been restored to normal operation. The previously issued alarm has been cleared",
        "varbind_oid":"1.3.6.1.4.1.19444.12.2.0.8.0",
        "varbind_type":"OctetString"
     },
     {
        "varbind_value":"Changes to shared config will be synchronized across the cluster",
        "varbind_oid":"1.3.6.1.4.1.19444.12.2.0.9.0",
        "varbind_type":"OctetString"
     },
     {
        "varbind_value":"No action",
        "varbind_oid":"1.3.6.1.4.1.19444.12.2.0.10.0",
        "varbind_type":"OctetString"
     },
     {
        "varbind_value":"sprout-1.example.com",
        "varbind_oid":"1.3.6.1.4.1.19444.12.2.0.12.0",
        "varbind_type":"OctetString"
     }
  ],
  "notify OID":"1.3.6.1.6.3.1.1.5.3",
  "community":"",
  "uuid":"1fad4802-a6d0-11e8-a349-0242ac110002",
  "epoch_serno":15350273450000,
  "agent name":"10.53.172.132",
  "sysUptime":"0",
  "time received":1.535027345042007E9,
  "agent address":"10.53.172.132",
  "notify OID len":10
}

Following is the VES form of the above sample SNMP trap, converted using the default SNMP trap mapping file:

{
  "event":{
     "commonEventHeader":{
        "startEpochMicrosec":1.5350269902625413E9,
        "eventId":"XXXX",
        "sequence":0,
        "domain":"fault",
        "lastEpochMicrosec":1.5350269902625413E9,
        "eventName":"fault__ONAP-COLLECTOR-SNMPTRAP",
        "sourceName":"10.53.172.132",
        "priority":"Medium",
        "version":3,
        "reportingEntityName":"VesAdapter"
     },
     "faultFields":{
        "eventSeverity":"MINOR",
        "alarmCondition":"ONAP-COLLECTOR-SNMPTRAP",
        "faultFieldsVersion":2,
        "specificProblem":"SNMP Fault",
        "alarmAdditionalInformation":[
           {
              "name":"1.3.6.1.4.1.19444.12.2.0.2.0",
              "value":"CLEARED and CRITICAL severities have the same name"
           },
           {
              "name":"1.3.6.1.4.1.19444.12.2.0.3.0",
              "value":"1.3"
           },
           {
              "name":"1.3.6.1.4.1.19444.12.2.0.4.0",
              "value":"1.3"
           },
           {
              "name":"1.3.6.1.4.1.19444.12.2.0.5.0",
              "value":"CLEARED"
           },
           {
              "name":"1.3.6.1.4.1.19444.12.2.0.6.0",
              "value":"Queue manager: Process failure cleared"
           },
           {
              "name":"1.3.6.1.4.1.19444.12.2.0.7.0",
              "value":"The queue manager process has been restored to normal operation"
           },
           {
              "name":"1.3.6.1.4.1.19444.12.2.0.8.0",
              "value":"The queue manager process has been restored to normal operation. The previously issued alarm has been cleared"
           },
           {
              "name":"1.3.6.1.4.1.19444.12.2.0.9.0",
              "value":"Changes to shared config will be synchronized across the cluster"
           },
           {
              "name":"1.3.6.1.4.1.19444.12.2.0.10.0",
              "value":"No action"
           },
           {
              "name":"1.3.6.1.4.1.19444.12.2.0.12.0",
              "value":"sprout-1.example.com"
           }
        ],
        "eventSourceType":"SNMP Agent",
        "vfStatus":"Active"
     }
  }
}
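
To make the default mapping concrete, here is a minimal Python sketch that reproduces the same field assignments as the input/output pair above. It is illustrative only: the actual conversion is performed by the Smooks mapping file shown earlier, and this function simply mirrors the sample output.

import time

def snmp_trap_to_ves(trap):
    """Illustrative re-statement of the default SNMP trap mapping (not the Smooks engine)."""
    received = trap.get("time received", time.time())
    return {
        "event": {
            "commonEventHeader": {
                "version": 3,
                "domain": "fault",
                "eventId": "XXXX",  # fixed placeholder in the default mapping
                "eventName": "fault__" + trap["trap category"],
                "sequence": 0,
                "priority": "Medium",
                "reportingEntityName": "VesAdapter",
                "sourceName": trap["agent address"],
                "startEpochMicrosec": received,
                "lastEpochMicrosec": received,
            },
            "faultFields": {
                "faultFieldsVersion": 2,
                "alarmCondition": trap["trap category"],
                "specificProblem": "SNMP Fault",
                "eventSeverity": "MINOR",
                "vfStatus": "Active",
                "eventSourceType": "SNMP Agent",
                # every varbind becomes a name/value pair, as in the sample above
                "alarmAdditionalInformation": [
                    {"name": vb["varbind_oid"], "value": vb["varbind_value"]}
                    for vb in trap.get("varbinds", [])
                ],
            },
        }
    }
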
Troubleshooting

NOTE

According to ONAP logging policy, Mapper logs contain all required markers as well as service- and client-specific Mapped Diagnostic Context (referred to below as MDC).

Default console log pattern:

|%date{&quot;HH:mm:ss.SSSXXX&quot;, UTC}\t[ %thread\t] %highlight(%-5level)\t - %msg\t

A sample, fully qualified message implementing this pattern:

|11:10:13.230 [rcc-notification] INFO metricsLogger - fetch and publish from and to Dmaap started:rcc-notification
For simplicity, all log messages in this section are shortened to contain only:
  • logger name

  • log level

  • message

Error and warning logs contain also:
  • exception message

  • stack trace

Do not rely on exact log messages or their presence, as they are often subject to change.

Deployment/Installation errors

Missing Default Config File in case of using local config instead of Consul

|13:04:37.535 [main] ERROR errorLogger - Default Config file kv.json is missing
|13:04:37.537 [main] ERROR errorLogger - Application stoped due to missing default Config file
|13:04:37.538 [main] INFO  o.s.s.c.ThreadPoolTaskExecutor - Shutting down ExecutorService 'applicationTaskExecutor'
|15:40:43.982 [main] WARN  debugLogger - All Smooks objects closed

These log messages are printed when the default configuration file “kv.json” is not present.

Invalid Default Config File in case of using local config instead of Consul

If the default config file contains invalid JSON, the following exception is raised:

|15:19:52.489 [main] ERROR o.s.boot.SpringApplication - Application run failed
|java.lang.IllegalStateException: Failed to execute CommandLineRunner
       at org.springframework.boot.SpringApplication.callRunner(SpringApplication.java:816)
       at org.springframework.boot.SpringApplication.callRunners(SpringApplication.java:797)
       at org.springframework.boot.SpringApplication.run(SpringApplication.java:324)
       at org.springframework.boot.SpringApplication.run(SpringApplication.java:1260)
       at org.springframework.boot.SpringApplication.run(SpringApplication.java:1248)
       at org.onap.universalvesadapter.Application.main(Application.java:29)
|Caused by: org.json.JSONException: Expected a ',' or '}' at 8100 [character 2 line 54]
       at org.json.JSONTokener.syntaxError(JSONTokener.java:433)
       at org.json.JSONObject.<init>(JSONObject.java:229)
       at org.json.JSONObject.<init>(JSONObject.java:321)
       at org.onap.universalvesadapter.utils.FetchDynamicConfig.verifyConfigChange(FetchDynamicConfig.java:97)
       at org.onap.universalvesadapter.utils.FetchDynamicConfig.cbsCall(FetchDynamicConfig.java:66)
       at org.onap.universalvesadapter.service.VESAdapterInitializer.run(VESAdapterInitializer.java:83)
       at org.springframework.boot.SpringApplication.callRunner(SpringApplication.java:813)
       ... 5 common frames omitted
|15:19:52.492 [main] INFO  o.s.s.c.ThreadPoolTaskExecutor - Shutting down ExecutorService 'applicationTaskExecutor'
|15:19:52.493 [main] WARN  debugLogger - All Smooks objects closed

Invalid Smooks mapping file

If the VES-Mapper blueprint or the local config file contains an invalid Smooks mapping file, a SAXException / JsonProcessingException / JsonSyntaxException / JsonParseException is raised while processing incoming notifications, and each such notification is dropped without being converted into the required VES event. All dropped notifications are logged in the error log file.

3GPP PM Mapper Service

Architecture
Introduction

3GPP PM Mapper is a part of DCAEGEN2. Background information about PM Mapper is available on the 5G bulk PM wiki page.

3GPP PM Mapper will process 3GPP PM XML files to produce perf3gpp VES PM Events.

[Figure: PM Mapper architecture (pm-mapper.png)]
Functionality

The 3GPP PM Mapper micro-service will extract selected measurements from a 3GPP XML file and publish them as VES events on a DMaaP Message Router topic for consumers that prefer such data in VES format. The mapper receives the files by subscribing to a Data Router feed.

[Figure: PM Mapper file-to-event flow (pmmapper-flow.png)]
Interaction

PM Mapper interacts with the Config Binding Service to get configuration information.

Delivery
Docker Container

PM Mapper is delivered as a Docker image that can be downloaded from the ONAP Docker registry:

   docker run -d --name pmmapper -e CONFIG_BINDING_SERVICE_SERVICE_HOST=<IP Required> -e CONFIG_BINDING_SERVICE_SERVICE_PORT=<Port Required> -e HOSTNAME=<HOSTNAME> nexus3.onap.org:10001/onap/org.onap.dcaegen2.services.pm-mapper
Logging

There are two separate log files in the PM Mapper container.

The main log file is located under /var/log/ONAP/dcaegen2/services/pm-mapper/pm-mapper_output.log.

The human-readable log file, which contains less information, is located under /var/log/ONAP/dcaegen2/services/pm-mapper/pm-mapper_output_readable.log.

Log Level

The PM Mapper log level is set to INFO by default. This can be changed in the running container by editing the logLevel variable in the logback.xml file located under /opt/app/pm-mapper/etc/logback.xml. Changes to this file will be picked up every 30 seconds.

Configuration and Performance
Files Processing Configuration

The PM Mapper consumes 3GPP XML files from DMaaP-DR and processes them. Files can be processed in parallel; to enable and tune parallel processing, the following configuration environment variables have been introduced:

  • PROCESSING_LIMIT_RATE (optional, default value: 1) - limits the rate at which files are processed through the channel.

  • THREADS_MULTIPLIER (optional, default value: 1) - specifies a multiplier used to calculate the number of processing threads.

  • PROCESSING_THREADS_COUNT (optional, default value: number of threads available to the JVM) - specifies the number of threads used for file processing.

These variables should be specified in the "envs:" section of the blueprint. An example blueprint fragment:

...
pm-mapper:
  type: dcae.nodes.ContainerizedServiceComponentUsingDmaap
  interfaces:
    cloudify.interfaces.lifecycle:
      create:
        inputs:
          ports:
            - '8443:0'
            - '8081:0'
          envs:
            PROCESSING_LIMIT_RATE: "1"
            THREADS_MULTIPLIER: "2"
            PROCESSING_THREADS_COUNT: "3"
  relationships:
    - type: dcaegen2.relationships.subscribe_to_files
      target: pm-feed
    - type: dcaegen2.relationships.publish_events
      target: pm-topic
...
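
The documentation does not spell out exactly how the three variables combine, so the sketch below is one plausible reading rather than the mapper's actual implementation: the worker pool size is PROCESSING_THREADS_COUNT (defaulting to the number of available processors) scaled by THREADS_MULTIPLIER.

import os

def processing_pool_size(env):
    # Assumption, not confirmed by the docs: pool size = thread count * multiplier.
    threads = int(env.get("PROCESSING_THREADS_COUNT", os.cpu_count() or 1))
    multiplier = int(env.get("THREADS_MULTIPLIER", "1"))
    return threads * multiplier

# With the blueprint values above: 3 threads * multiplier 2 = pool of 6.
print(processing_pool_size({"THREADS_MULTIPLIER": "2", "PROCESSING_THREADS_COUNT": "3"}))
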
PM Mapper Filtering

The PM Mapper performs data reduction by filtering the PM telemetry data it receives. This filtering information is provided to the service as part of its configuration and is used to identify the desired PM measurements (measType) contained within the data. The service accepts either an exact match on the measType or a regex (java.util.regex) identifying multiple measTypes; both kinds can be used simultaneously. If a filter is provided, any measurement that does not match the filter is ignored and a warning is logged. PM Mapper expects the filter in the following JSON format:

"filters":[{
   "pmDefVsn": "1.3",
   "nfType": "gnb",
   "vendor": "Ericsson",
   "measTypes": [ "attTCHSeizures", "succTCHSeizures", "att.*", ".*Seizures" ]
}]

Field     | Description                                                                                 | Type
----------|---------------------------------------------------------------------------------------------|---------------------------------------
pmDefVsn  | PM Dictionary version.                                                                      | String
vendor    | Vendor of the xNF type.                                                                     | String
nfType    | nfType is vendor defined and should match the string used in the file ready eventName.     | String
measTypes | Measurement name used in the PM file, in 3GPP format where specified, else vendor defined. | List of Strings / regular expressions
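
The matching semantics described above (exact measType names and java.util.regex patterns, usable together) can be sketched in a few lines of Python. Python's re module stands in for java.util.regex here, so this is illustrative rather than the mapper's actual code.

import re

def meas_type_matches(meas_type, meas_types):
    """True if meas_type matches any filter entry, exactly or as a regex."""
    return any(meas_type == entry or re.fullmatch(entry, meas_type)
               for entry in meas_types)

filters = ["attTCHSeizures", "succTCHSeizures", "att.*", ".*Seizures"]
print(meas_type_matches("attTCHSeizures", filters))  # True: exact match
print(meas_type_matches("succHandovers", filters))   # False: ignored, warning logged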

Message Router Topic Name

PM Mapper publishes the perf3gpp VES PM Events to the following authenticated MR topic:

org.onap.dmaap.mr.PERFORMANCE_MEASUREMENTS
Performance

For PM Mapper performance figures, see “PM Mapper performance baseline results”.

Troubleshooting

NOTE

According to ONAP logging policy, PM Mapper logs contain all required markers as well as service- and client-specific Mapped Diagnostic Context (referred to below as MDC).

Default console log pattern:

| %date{&quot;yyyy-MM-dd'T'HH:mm:ss.SSSXXX&quot;, UTC}\t| %thread\t| %highlight(%-5level)\t| %msg\t| %marker\t| %rootException\t| %mdc\t| %thread

A sample, fully qualified message implementing this pattern:

| 2018-12-18T13:12:44.369Z       | p.dcae | DEBUG        | Client connection request received    | ENTRY         |       | RequestID=d7762b18-854c-4b8c-84aa-95762c6f8e62, InstanceID=9b9799ca-33a5-4f61-ba33-5c7bf7e72d07, InvocationID=b13d34ba-e1cd-4816-acda-706415308107, PartnerName=C=PL, ST=DL, L=Wroclaw, O=Nokia, OU=MANO, CN=dcaegen2-hvves-client, StatusCode=INPROGRESS, ClientIPAddress=192.168.0.9, ServerFQDN=a4ca8f96c7e5       | reactor-tcp-nio-2
For simplicity, all log messages in this section are shortened to contain only:
  • logger name

  • log level

  • message

Error and warning logs contain also:
  • exception message

  • stack trace

Do not rely on exact log messages or their presence, as they are often subject to change.

Configuration errors

Config binding service not responding

2019-02-19T17:25:17.499Z        main    INFO    org.onap.dcaegen2.services.pmmapper.config.ConfigHandler                Fetching pm-mapper configuration from Configbinding Service             ENTRY
2019-02-19T17:25:17.502Z        main    INFO    org.onap.dcaegen2.services.pmmapper.utils.RequestSender         ee5ff670-accd-4c30-8689-0a1d12491b51            INVOKE [ SYNCHRONOUS ]
2019-02-19T17:25:17.509Z        main    INFO    org.onap.dcaegen2.services.pmmapper.utils.RequestSender         Sending:\n{X-ONAP-PartnerName=[pm-mapper], X-ONAP-InvocationID=[ee5ff670-accd-4c30-8689-0a1d12491b51], X-ONAP-RequestID=[2778e346-590a-4ade-8f45-358d1adf048b]}
2019-02-19T17:25:18.515Z        main    INFO    org.onap.dcaegen2.services.pmmapper.utils.RequestSender         Sending:\n{X-ONAP-PartnerName=[pm-mapper], X-ONAP-InvocationID=[ee5ff670-accd-4c30-8689-0a1d12491b51], X-ONAP-RequestID=[2778e346-590a-4ade-8f45-358d1adf048b]}
2019-02-19T17:25:19.516Z        main    INFO    org.onap.dcaegen2.services.pmmapper.utils.RequestSender         Sending:\n{X-ONAP-PartnerName=[pm-mapper], X-ONAP-InvocationID=[ee5ff670-accd-4c30-8689-0a1d12491b51], X-ONAP-RequestID=[2778e346-590a-4ade-8f45-358d1adf048b]}
2019-02-19T17:25:20.518Z        main    INFO    org.onap.dcaegen2.services.pmmapper.utils.RequestSender         Sending:\n{X-ONAP-PartnerName=[pm-mapper], X-ONAP-InvocationID=[ee5ff670-accd-4c30-8689-0a1d12491b51], X-ONAP-RequestID=[2778e346-590a-4ade-8f45-358d1adf048b]}
2019-02-19T17:25:21.519Z        main    INFO    org.onap.dcaegen2.services.pmmapper.utils.RequestSender         Sending:\n{X-ONAP-PartnerName=[pm-mapper], X-ONAP-InvocationID=[ee5ff670-accd-4c30-8689-0a1d12491b51], X-ONAP-RequestID=[2778e346-590a-4ade-8f45-358d1adf048b]}
2019-02-19T17:25:21.520Z        main    INFO    org.onap.dcaegen2.services.pmmapper.config.ConfigHandler                Received pm-mapper configuration from ConfigBinding Service:\n          EXIT
Exception in thread "main" org.onap.dcaegen2.services.pmmapper.exceptions.CBSServerError: Error connecting to Configbinding Service:
at org.onap.dcaegen2.services.pmmapper.config.ConfigHandler.getMapperConfig(ConfigHandler.java:78)
at org.onap.dcaegen2.services.pmmapper.App.main(App.java:58)
caused by: java.net.ConnectException: Connection refused (Connection refused)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at sun.net.www.protocol.http.HttpURLConnection$10.run(HttpURLConnection.java:1944)
at sun.net.www.protocol.http.HttpURLConnection$10.run(HttpURLConnection.java:1939)
at java.security.AccessController.doPrivileged(Native Method)
at sun.net.www.protocol.http.HttpURLConnection.getChainedException(HttpURLConnection.java:1938)
at sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1508)
at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1492)
at java.net.HttpURLConnection.getResponseCode(HttpURLConnection.java:480)
at java.net.HttpURLConnection.getResponseMessage(HttpURLConnection.java:546)
at org.onap.dcaegen2.services.pmmapper.utils.RequestSender.send(RequestSender.java:80)
at org.onap.dcaegen2.services.pmmapper.config.ConfigHandler.getMapperConfig(ConfigHandler.java:76)
... 1 more
Caused by: java.net.ConnectException: Connection refused (Connection refused)
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:589)
t java.net.Socket.connect(Socket.java:538)
at sun.net.NetworkClient.doConnect(NetworkClient.java:180)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:463)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:558)
at sun.net.www.http.HttpClient.<init>(HttpClient.java:242)
at sun.net.www.http.HttpClient.New(HttpClient.java:339)
at sun.net.www.http.HttpClient.New(HttpClient.java:357)
at sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:1220)
at sun.net.www.protocol.http.HttpURLConnection.plainConnect0(HttpURLConnection.java:1156)
at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:1050)
at sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:984)
at sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1564)
at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1492)
at org.onap.dcaegen2.services.pmmapper.utils.RequestSender.send(RequestSender.java:66)

Make sure the Config Binding Service is up and running and that the IP and port combination is correct.


Missing configuration on Consul

2019-02-19T17:36:32.664Z        main    INFO    org.onap.dcaegen2.services.pmmapper.config.ConfigHandler                Fetching pm-mapper configuration from Configbinding Service             ENTRY
2019-02-19T17:36:32.666Z        main    INFO    org.onap.dcaegen2.services.pmmapper.utils.RequestSender         9fa1b84f-05ce-4e27-bba9-4ea477c1baa7            INVOKE [ SYNCHRONOUS ]
2019-02-19T17:36:32.671Z        main    INFO    org.onap.dcaegen2.services.pmmapper.utils.RequestSender         Sending:\n{X-ONAP-PartnerName=[pm-mapper], X-ONAP-InvocationID=[9fa1b84f-05ce-4e27-bba9-4ea477c1baa7], X-ONAP-RequestID=[6e861d17-3f4b-4a2e-9ea8-a31bb9dbb7e8]}
2019-02-19T17:36:32.696Z        main    INFO    org.onap.dcaegen2.services.pmmapper.utils.RequestSender         Received:\n{"pm-mapper-filter": "{ \"filters\":[]}", "3GPP.schema.file": "{\"3GPP_Schema\":\"./etc/3GPP_relaxed_schema.xsd\"}", "streams_subscribes": {"dmaap_subscriber": {"type": "data_router", "aaf_username": null, "aaf_password": null, "dmaap_infooooo": {"location": "csit-pmmapper", "delivery_url": "3gpppmmapper", "username": "username", "password": "password", "subscriber_id": "subsriber_id"}}}, "streams_publishes": {"pm_mapper_handle_out": {"type": "message_router", "aaf_password": null, "dmaap_info": {"topic_url": "https://message-router:3904/events/org.onap.dmaap.onapCSIT.pm_mapper", "client_role": "org.onap.dmaap.client.pub", "location": "csit-pmmapper", "client_id": null}, "aaf_username": null}}, "buscontroller_feed_subscription_endpoint": "http://dmaap-bc:8080/webapi/dr_subs", "services_calls": {}}
2019-02-19T17:36:32.696Z        main    INFO    org.onap.dcaegen2.services.pmmapper.config.ConfigHandler                Received pm-mapper configuration from ConfigBinding Service:\n{"pm-mapper-filter": "{ \"filters\":[]}", "3GPP.schema.file": "{\"3GPP_Schema\":\"./etc/3GPP_relaxed_schema.xsd\"}", "streams_subscribes": {"dmaap_subscriber": {"type": "data_router", "aaf_username": null, "aaf_password": null, "dmaap_infooooo": {"location": "csit-pmmapper", "delivery_url": "3gpppmmapper", "username": "username", "password": "password", "subscriber_id": "subsriber_id"}}}, "streams_publishes": {"pm_mapper_handle_out": {"type": "message_router", "aaf_password": null, "dmaap_info": {"topic_url": "https://message-router:3904/events/org.onap.dmaap.onapCSIT.pm_mapper", "client_role": "org.onap.dmaap.client.pub", "location": "csit-pmmapper", "client_id": null}, "aaf_username": null}}, "buscontroller_feed_subscription_endpoint": "http://dmaap-bc:8080/webapi/dr_subs", "services_calls": {}}          EXIT
Exception in thread "main" org.onap.dcaegen2.services.pmmapper.exceptions.MapperConfigException: Error parsing mapper configuration:
{}{"pm-mapper-filter": "{ \"filters\":[]}", "3GPP.schema.file": "{\"3GPP_Schema\":\"./etc/3GPP_relaxed_schema.xsd\"}", "streams_subscribes": {"dmaap_subscriber": {"type": "data_router", "aaf_username": null, "aaf_password": null, "dmaap_infooooo": {"location": "csit-pmmapper", "delivery_url": "3gpppmmapper", "username": "username", "password": "password", "subscriber_id": "subsriber_id"}}}, "streams_publishes": {"pm_mapper_handle_out": {"type": "message_router", "aaf_password": null, "dmaap_info": {"topic_url": "https://message-router:3904/events/org.onap.dmaap.onapCSIT.pm_mapper", "client_role": "org.onap.dmaap.client.pub", "location": "csit-pmmapper", "client_id": null}, "aaf_username": null}}, "buscontroller_feed_subscription_endpoint": "http://dmaap-bc:8080/webapi/dr_subs", "services_calls": {}}
at org.onap.dcaegen2.services.pmmapper.config.ConfigHandler.convertMapperConfigToObject(ConfigHandler.java:94)
at org.onap.dcaegen2.services.pmmapper.config.ConfigHandler.getMapperConfig(ConfigHandler.java:83)
at org.onap.dcaegen2.services.pmmapper.App.main(App.java:58)
Caused by: com.google.gson.JsonParseException: Failed to check fields.
at org.onap.dcaegen2.services.pmmapper.utils.RequiredFieldDeserializer.deserialize(RequiredFieldDeserializer.java:49)
at com.google.gson.internal.bind.TreeTypeAdapter.read(TreeTypeAdapter.java:69)
at com.google.gson.Gson.fromJson(Gson.java:927)
at com.google.gson.Gson.fromJson(Gson.java:892)
at com.google.gson.Gson.fromJson(Gson.java:841)
at com.google.gson.Gson.fromJson(Gson.java:813)
at org.onap.dcaegen2.services.pmmapper.config.ConfigHandler.convertMapperConfigToObject(ConfigHandler.java:92)
... 2 more
Caused by: com.google.gson.JsonParseException: Field: 'busControllerFeedId', is required but not found.
at org.onap.dcaegen2.services.pmmapper.utils.RequiredFieldDeserializer.deserialize(RequiredFieldDeserializer.java:46)

PM Mapper logs this information when it can connect to Consul but does not find a valid JSON configuration there.

Analytics

Heartbeat Microservice

The main objective of the Heartbeat Microservice is to receive periodic heartbeats from the configured eventNames and to report the loss of heartbeat onto DMaaP if the number of consecutive missed heartbeats exceeds the configured missed heartbeat count.

Heartbeat Microservice overview and functions
High-level architecture of Heartbeat Microservice

The Heartbeat Microservice startup script (misshtbtd.py) gets the configuration from CBS, parses the entries, and saves them in the postgres table vnf_table_1. Each entry in the configuration is for a particular eventName and contains the missed heartbeat count, heartbeat interval, control loop name and several other parameters.

Whenever a heartbeat event is received, the sourceName, lastEpochTime and other information are stored in another postgres table, vnf_table_2. The service is designed to process heartbeat events with different sourceNames that share the same eventName; in that case a sourceName count is maintained in vnf_table_1, giving the number of sourceNames that have the same eventName. Whenever a new sourceName is received, the sourceName count in vnf_table_1 is incremented.

The Heartbeat Microservice is designed to support multiple instances running simultaneously. The first instance assumes the role of active instance, and instances started later become inactive instances. If the active instance stops responding or is killed, an inactive instance takes over the active role. To achieve this, one more postgres table, hb_common, is introduced; it holds parameters specific to the active instance, such as the process ID/hostname of the active instance and the last accessed time updated by the active instance.

Heartbeat Microservice supports the periodic download of CBS configuration. The periodicity of download can be configured.

Heartbeat Microservice also supports downloading the CBS configuration whenever the configuration changes; in this case the Docker container calls a function/method to download the CBS configuration.

The heartbeat microservice has two states:

Reconfiguration state – downloading the configuration from CBS and updating vnf_table_1 is in progress.

Running state – normal operation, comprising the receipt of HB events and the sending of control loop events when the required conditions are met.

Design

Four processes are created, as described below.

Main process

This is the initial process which does the following.

  • Download CBS configuration and update the vnf_table_1

  • Spawns HB worker process, DB Monitoring process and CBS polling process (if required)

  • Periodically update the hb_common table

HB worker process

This process is created by the main process and does the following.

  • It waits for HB JSON event messages from the DMaaP message router

  • It receives the HB JSON message and retrieves the sourceName, lastEpochTime and eventName from the incoming message

  • It checks the received eventName against the eventNames in vnf_table_1. If the eventName does not match, the message is discarded.

  • It checks for the received sourceName in vnf_table_2. If the sourceName is already in vnf_table_2, the received HB JSON message is recorded against that sourceName. If the sourceName is not in vnf_table_2, an entry is added to vnf_table_2 for that eventName and the sourceName count in vnf_table_1 is incremented. The sketch below illustrates this flow.
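
A minimal Python sketch of that flow, using plain dicts in place of the postgres tables (illustrative only, not the service's actual code):

def handle_heartbeat(event, vnf_table_1, vnf_table_2):
    name, source = event["eventName"], event["sourceName"]
    if name not in vnf_table_1:
        return  # unknown eventName: discard the message
    key = (name, source)
    if key in vnf_table_2:
        # known sourceName: refresh its last heartbeat time
        vnf_table_2[key]["last_epo_time"] = event["lastEpochMicrosec"]
    else:
        # new sourceName: add a row and bump the count in vnf_table_1
        vnf_table_2[key] = {"last_epo_time": event["lastEpochMicrosec"], "cl_flag": 0}
        vnf_table_1[name]["source_name_count"] += 1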

DB Monitoring process

This process is created by the main process and does the following.

  • The DB monitoring process scans through each entry of vnf_table_1, looks at the corresponding vnf_table_2 entries, and checks whether the condition for a control loop event is met

  • If it finds that multiple consecutive HBs have been missed, it raises a control loop event.

  • It also clears the control loop event when a recent HB message has been received.

  • Because of the reconfiguration procedure, some existing entries in vnf_table_1 may become invalid. The DB monitoring process cleans the DB by looking at the validity flag maintained in each vnf_table_1 entry; if an entry is not valid, it is removed from vnf_table_1 along with the corresponding entries in vnf_table_2. The missed-heartbeat check is sketched below.
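
A sketch of the missed-heartbeat condition, under the assumption (consistent with the testing section later) that a control loop event fires once the silence exceeds the missed count times the heartbeat interval. Timestamps are treated as epoch seconds for simplicity, and the publish callbacks are placeholders:

import time

def scan_for_missed_heartbeats(vnf_table_1, vnf_table_2, publish_onset, publish_abated):
    now = time.time()
    for (name, source), row in vnf_table_2.items():
        cfg = vnf_table_1.get(name)
        if cfg is None:
            continue
        deadline = cfg["heartbeat_missed_count"] * cfg["heartbeat_interval"]
        silent_for = now - row["last_epo_time"]
        if row["cl_flag"] == 0 and silent_for > deadline:
            row["cl_flag"] = 1          # remember that ONSET was raised
            publish_onset(cfg, source)
        elif row["cl_flag"] == 1 and silent_for <= deadline:
            row["cl_flag"] = 0          # heartbeat is back: clear the event
            publish_abated(cfg, source)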

CBS polling process

If the local configuration file (config/hbproperties.yaml) indicates that CBS polling is required, the main process creates the CBS polling process, which does the following.

  • It takes the CBS polling interval from the configuration file.

  • For every CBS polling interval, it sets the state in hb_common to reconfiguration, indicating that the main process should download the CBS configuration

CBS configuration download support

Apart from the above, a function/method is provided to the Docker container that downloads the CBS configuration whenever the configuration changes. This method reads the hb_common state and changes it to reconfiguration.

Heartbeat Microservice multi-instance support

In order to work smoothly in an environment with multiple HB microservice instances, the processes behave differently, as described below.

Main Process:

Active Instance:
  • Download CBS configuration and process it

  • Spawns processes

  • Periodically updates hb_common with the last accessed time to indicate that the active instance is alive.

Inactive Instance:
  • Spawns processes

  • Constantly checks the hb_common entry for the last accessed time

  • If the last accessed time is more than a minute or so old, it assumes the role of active instance

HB worker process: Both active and inactive instances behave the same, as mentioned in the Design section.

DB Monitoring process: Both active and inactive instances periodically check their process ID/hostname against the hb_common data to determine whether they are the active instance. An inactive instance does nothing; the active instance behaves as mentioned in the Design section.

CBS Polling process: Periodically checks its process ID/hostname against the hb_common data to determine whether it belongs to the active instance. If inactive, it does nothing; if active, it behaves as mentioned in the Design section. The takeover check itself is sketched below.
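
A sketch of that takeover check, with hb_common as a plain dict and a hypothetical 60-second threshold standing in for "more than a minute or so":

import time

TAKEOVER_AFTER_SECONDS = 60  # hypothetical value for "a minute or so"

def is_active_instance(hb_common, my_pid, my_source_name):
    if (hb_common["process_id"], hb_common["source_name"]) == (my_pid, my_source_name):
        return True  # we already hold the active role
    if time.time() - hb_common["last_accessed_time"] > TAKEOVER_AFTER_SECONDS:
        # active instance looks dead: claim the role
        hb_common.update(process_id=my_pid,
                         source_name=my_source_name,
                         last_accessed_time=time.time())
        return True
    return False  # stay inactive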

Handling of some of the failure scenarios

Failure to download the configuration from CBS – in this case, the local configuration file etc/config.json is used as the configuration file and vnf_table_1 is updated accordingly.

The reconfiguration procedure is as follows:
  • If the state is Reconfiguration, the HB worker process, DB monitoring process and CBS polling process wait for reconfiguration to complete.

  • Mark each entry as invalid using the validity flag in vnf_table_1.

  • Download the JSON file from CBS.

  • Set the validity flag to valid as each entry is updated.

Postgres Database

Three tables are maintained.

Vnf_table_1 table: This table is indexed by eventName. Each entry has the following parameters:

  • eventName

  • Configured heartbeat Missed Count

  • Configured Heartbeat Interval

  • Number of sourceNames having the same eventName

  • Validity flag that indicates whether the VNF entry is valid

  • It also has the following parameters related to the control loop event
    • policyVersion

    • policyName

    • policyScope

    • target_type

    • target

    • closedLoopControlName

    • version

Vnf_table_2 table: For each sourceName there is an entry in vnf_table_2. The table is indexed by eventName and sourceName. Each entry has the following parameters:

  • SourceName

  • Last received heartbeat epoch time

  • Control loop event raised flag. 0 indicates not raised, 1 indicates CL event raised

hb_common table: This is a single-entry table.

  • The configuration status, which has one of the following values:
    • RECONFIGURATION – indicates CBS configuration processing is in progress.

    • RUNNING – CBS configuration is completed; ready to process HB events and send CL events.

  • The process ID – this is the main process ID of the active HB instance, which is responsible for handling reconfiguration

  • The source name – it has two parts, hostname and service name. The hostname is the Docker container ID; the service name is the value of the SERVICE_NAME environment variable

  • The last accessed time – The time last accessed by the main process having the above process ID.

Build and Setup procedure
ONAP Repository

The Heartbeat Microservice code lives in the dcaegen2/services/heartbeat repository on ONAP Gerrit (see the clone command below).

POD 25 access

To run the Heartbeat Microservice in a development environment, POD25 access is required. Obtain access and install OpenVPN.

Connect to the POD25 setup using OpenVPN and the credentials obtained.

Docker build procedure

Clone the code using the below command:

   git clone --depth 1 https://gerrit.onap.org/r/dcaegen2/services/heartbeat

Give executable permission to mvn-phase-script.sh if it is not already set:

   chmod +x mvn-phase-script.sh

Set up the postgres DB, group/consumer IDs, CBS download and CBS polling by setting the following environment variables.

For postgres and CBS download, an environment settings file is passed when running the Docker container. The file contains the following parameters (sample values shown for reference):

pg_ipAddress=10.0.4.1
pg_portNum=5432
pg_userName=postgres
pg_passwd=abc
# Below parameters are for CBS download
SERVICE_NAME=mvp-dcaegen2-heartbeat-static
CONSUL_HOST=10.12.6.50
HOSTNAME=mvp-dcaegen2-heartbeat-static
# Below parameters are for the heartbeat worker process to receive messages
groupID=group1
consumerID=1

If the postgres parameters are not in the environment settings file, the values from the miss_htbt_service/config/hbproperties.yaml file are used. Make sure that postgres is running on the machine referenced by the pg_ipAddress parameter.

Run the netstat command below to verify the postgres port number and IP address:

   netstat -ant

If the CBS parameters are not in the environment settings file, the local config file (etc/config.json) is used as the default configuration file.

For CBS polling, CBS_polling_allowed and CBS_polling_interval must be set appropriately in the miss_htbt_service/config/hbproperties.yaml file.

Sample values in the miss_htbt_service/config/hbproperties.yaml file are as follows:
        pg_ipAddress: 10.0.4.1
        pg_portNum: 5432
        pg_userName: postgres
        pg_passwd: postgres
        pg_dbName: hb_vnf
        CBS_polling_allowed: True
        CBS_polling_interval: 300

PS: Change the groupID and consumerID in the environment accordingly for each HB instance so that the HB worker process receives HB events correctly. Usually the groupID remains the same for all instances of HB, whereas the consumerID is changed for each instance of the HB microservice. If groupID and consumerID are not provided, they default to “DefaultGroup” and “1” respectively.
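
As visible in the startup log later in this section, the worker polls the subscribe topic at <topic_url>/<groupID>/<consumerID>?timeout=..., so these IDs determine which consumer slot receives each event. A small sketch of that URL construction (illustrative; the service builds this internally):

def subscribe_url(topic_url, group_id="DefaultGroup", consumer_id="1", timeout_ms=15000):
    # Example result, matching the startup log later in this section:
    # http://10.12.5.252:3904/events/unauthenticated.SEC_HEARTBEAT_INPUT/group1/1?timeout=15000
    return "%s/%s/%s?timeout=%d" % (topic_url.rstrip("/"), group_id, consumer_id, timeout_ms)

print(subscribe_url("http://10.12.5.252:3904/events/unauthenticated.SEC_HEARTBEAT_INPUT/", "group1", "1"))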

Set the CBS configuration parameters using the Consul KV URL.

A sample Consul KV URL is shown below.

http://10.12.6.50:8500/ui/#/dc1/kv/mvp-dcaegen2-heartbeat-static

Go to the above link and click on the KEY/VALUE tab.

Click on mvp-dcaegen2-heartbeat-static.

Copy the configuration into the box provided and click on Update.

A sample configuration is shown below:

{
        "heartbeat_config": {
                "vnfs": [{
                                "eventName": "Heartbeat_S",
                                "heartbeatcountmissed": 3,
                                "heartbeatinterval": 60,
                                "closedLoopControlName": "ControlLoopEvent1",
                                "policyVersion": "1.0.0.5",
                                "policyName": "vFireWall",
                                "policyScope": "resource=sampleResource,type=sampletype,CLName=sampleCLName",
                                "target_type": "VM",
                                "target": "genVnfName",
                                "version": "2.0"
                        },
                        {
                                "eventName": "Heartbeat_vFW",
                                "heartbeatcountmissed": 3,
                                "heartbeatinterval": 60,
                                "closedLoopControlName": "ControlLoopEvent1",
                                "policyVersion": "1.0.0.5",
                                "policyName": "vFireWall",
                                "policyScope": "resource=sampleResource,type=sampletype,CLName=sampleCLName",
                                "target_type": "VNF",
                                "target": "genVnfName",
                                "version": "2.0"
                        }
                ]
        },

        "streams_publishes": {
                "ves_heartbeat": {
                        "dmaap_info": {
                                "topic_url": "http://10.12.5.252:3904/events/unauthenticated.DCAE_CL_OUTPUT/"
                        },
                        "type": "message_router"
                }
        },
        "streams_subscribes": {
                "ves_heartbeat": {
                        "dmaap_info": {
                                "topic_url": "http://10.12.5.252:3904/events/unauthenticated.SEC_HEARTBEAT_INPUT/"
                        },
                        "type": "message_router"
                }
        }
}
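
As an illustration of how this configuration maps onto vnf_table_1, here is a small Python sketch; the column names mirror the sample SELECT output later in this section, and the real parsing lives in misshtbtd.py.

import json

def vnf_rows_from_cbs(cbs_json_text):
    """Sketch: turn the heartbeat_config above into vnf_table_1-style rows."""
    cfg = json.loads(cbs_json_text)
    rows = {}
    for vnf in cfg["heartbeat_config"]["vnfs"]:
        rows[vnf["eventName"]] = {
            "heartbeat_missed_count": vnf["heartbeatcountmissed"],
            "heartbeat_interval": vnf["heartbeatinterval"],
            "closed_control_loop_name": vnf["closedLoopControlName"],
            "policy_version": vnf["policyVersion"],
            "policy_name": vnf["policyName"],
            "policy_scope": vnf["policyScope"],
            "target_type": vnf["target_type"],
            "target": vnf["target"],
            "version": vnf["version"],
            "source_name_count": 0,   # incremented as sourceNames arrive
            "validity_flag": 1,       # marked valid during (re)configuration
        }
    return rows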

Build the Docker image using the command below, giving it an image name:

    sudo docker build --no-cache --network=host -f ./Dockerfile -t heartbeat.test1:latest .

To check whether the image was built, run the command below:

    sudo docker images | grep heartbeat.test1

Run the Docker container using the command below, which uses the environment file described in the section above:

    sudo docker run -d --name hb1 --env-file env.list heartbeat.test1:latest

To check the logs, run the command below:

    sudo docker logs -f hb1

To stop the Docker container:

Get the Docker container ID from the command below:

    sudo docker ps -a | grep heartbeat.test1

Run the commands below to stop and remove the container:

    sudo docker stop <Docker container ID>
    sudo docker rm -f hb1

Initiate the maven build

To run the maven build, execute either of the following commands:

   sudo mvn -s settings.xml deploy
   OR
   sudo mvn -s settings.xml -X deploy

If there is a libxml-xpath related issue, install libxml-xpath as below. If the issue is something else, follow the link given as part of the build failure.

   sudo apt install libxml-xpath-perl

Test procedures and Postgres Database access
Postgres DB access

Log in to the postgres DB

Run the commands below to log in to postgres and connect to the HB microservice DB.

sudo su postgres
psql
\l
\c hb_vnf

Sample output is as below

ubuntu@r3-dcae:~$ sudo su postgres
postgres@r3-dcae:/home/ubuntu$ psql
psql (9.5.14)
Type "help" for help.

postgres=# \l
                                                                  List of databases
   Name    |  Owner   | Encoding |   Collate   |    Ctype    |   Access privileges
-----------+----------+----------+-------------+-------------+-----------------------
 hb_vnf    | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 |
 postgres  | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 |
 template0 | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres          +
                   |          |          |             |             | postgres=CTc/postgres
 template1 | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres          +
                   |          |          |             |             | postgres=CTc/postgres
(4 rows)

postgres=# \c hb_vnf
You are now connected to database "hb_vnf" as user "postgres".
hb_vnf=#
Delete all tables before starting a Docker run or local run

After logging in to postgres and connecting to hb_vnf as described above, use the commands below to delete the tables if they exist.

DROP TABLE vnf_table_1; DROP TABLE vnf_table_2; DROP TABLE hb_common;

The sample output is as below

hb_vnf=# DROP TABLE vnf_table_1;
DROP TABLE
hb_vnf=# DROP TABLE vnf_table_2;
DROP TABLE
hb_vnf=# DROP TABLE hb_common;
DROP TABLE
hb_vnf=#
Use SELECT statements to check the contents of vnf_table_1, vnf_table_2 and hb_common:

SELECT * FROM vnf_table_1; SELECT * FROM vnf_table_2; SELECT * FROM hb_common;

The sample output is as below

hb_vnf=# SELECT * FROM vnf_table_1;

  event_name   | heartbeat_missed_count | heartbeat_interval | closed_control_loop_name | policy_version | policy_name |                        policy_scope                         | target_type |   target   | version | source_name_count | validity_flag
---------------+------------------------+--------------------+--------------------------+----------------+-------------+-------------------------------------------------------------+-------------+------------+---------+-------------------+---------------
 Heartbeat_S   |                      4 |                 60 | ControlLoopEvent1        | 1.0.0.5        | vFireWall   | resource=sampleResource,type=sampletype,CLName=sampleCLName | VM          | genVnfName | 2.0     |                 0 |             1
 Heartbeat_vFW |                      4 |                 50 | ControlLoopEvent1        | 1.0.0.5        | vFireWall   | resource=sampleResource,type=sampletype,CLName=sampleCLName | VNF         | genVnfName | 2.0     |                 0 |             1
(2 rows)

hb_vnf=# SELECT * FROM vnf_table_2;
  event_name   | source_name_key | last_epo_time | source_name  | cl_flag
---------------+-----------------+---------------+--------------+---------
 Heartbeat_vFW |               1 | 1544705272479 | SOURCE_NAME1 |       0
(1 row)

hb_vnf=#

hb_vnf=# SELECT * FROM hb_common;
 process_id |                source_name                 | last_accessed_time | current_state
------------+--------------------------------------------+--------------------+---------------
                  8 | 21d744ae8cd5-mvp-dcaegen2-heartbeat-static |         1544710271 | RUNNING
(1 row)

hb_vnf=#
Testing procedures
Injecting events into the HB microservice

After starting the Docker container (or a local run), the commands below, run from the tests/ directory, send events to the HB worker process:

curl -i -X POST -d '{"test":"msg"}' --header "Content-Type: application/json" http://10.12.5.252:3904/events/unauthenticated.SEC_HEARTBEAT_INPUT
curl -i -X POST -d @test1.json --header "Content-Type: application/json" http://10.12.5.252:3904/events/unauthenticated.SEC_HEARTBEAT_INPUT
curl -i -X POST -d @test2.json --header "Content-Type: application/json" http://10.12.5.252:3904/events/unauthenticated.SEC_HEARTBEAT_INPUT
curl -i -X POST -d @test3.json --header "Content-Type: application/json" http://10.12.5.252:3904/events/unauthenticated.SEC_HEARTBEAT_INPUT

The sample output is as below

ubuntu@r3-aai-inst2:~/heartbeat12Dec/heartbeat/tests$ curl -i -X POST -d @test1.json --header "Content-Type: application/json" http://10.12.5.252:3904/events/unauthenticated.SEC_HEARTBEAT_INPUT
HTTP/1.1 200 OK
Date: Wed, 12 Dec 2018 12:41:26 GMT
Content-Type: application/json
Accept: */*
breadcrumbId: ID-22f076777975-37104-1543559663227-0-563929
User-Agent: curl/7.47.0
X-CSI-Internal-WriteableRequest: true
Content-Length: 41
Server: Jetty(9.3.z-SNAPSHOT)

{
        "serverTimeMs": 0,
        "count": 1
}



ubuntu@r3-aai-inst2:~/heartbeat12Dec/heartbeat/tests$ curl -i -X POST -d @test2.json --header "Content-Type: application/json" http://10.12.5.252:3904/events/unauthenticated.SEC_HEARTBEAT_INPUT
HTTP/1.1 200 OK
Date: Wed, 12 Dec 2018 12:41:39 GMT
Content-Type: application/json
Accept: */*
breadcrumbId: ID-22f076777975-37104-1543559663227-0-563937
User-Agent: curl/7.47.0
X-CSI-Internal-WriteableRequest: true
Content-Length: 41
Server: Jetty(9.3.z-SNAPSHOT)

{
        "serverTimeMs": 0,
        "count": 1
}


ubuntu@r3-aai-inst2:~/heartbeat12Dec/heartbeat/tests$ curl -i -X POST -d @test3.json --header "Content-Type: application/json" http://10.12.5.252:3904/events/unauthenticated.SEC_HEARTBEAT_INPUT
HTTP/1.1 200 OK
Date: Wed, 12 Dec 2018 12:41:39 GMT
Content-Type: application/json
Accept: */*
breadcrumbId: ID-22f076777975-37104-1543559663227-0-563937
User-Agent: curl/7.47.0
X-CSI-Internal-WriteableRequest: true
Content-Length: 41
Server: Jetty(9.3.z-SNAPSHOT)

{
        "serverTimeMs": 0,
        "count": 1
}
Testing Control loop event
  • Modify the JSON as below:

    Set the lastEpochTime and startEpochTime to the current time in test1.json. Set the eventName in test1.json to one of the eventNames in vnf_table_1.

  • Inject test1.json as described in the section above.

  • Get the missed heartbeat count (e.g. 3) and heartbeat interval (e.g. 60 seconds) for the eventName from vnf_table_1. Wait for the heartbeat to be missed multiple times, i.e. 3 * 60 seconds = 180 seconds.

After waiting for the specified period, you should see the control loop event. A sample is shown below.

2018-12-13 12:51:13,016 | __main__ | db_monitoring | db_monitoring | 95 |  INFO | ('DBM:Time to raise Control Loop Event for target type - ', 'VNF')
2018-12-13 12:51:13,016 | __main__ | db_monitoring | db_monitoring | 132 |  INFO | ('DBM: CL Json object is', '{"closedLoopEventClient": "DCAE_Heartbeat_MS", "policyVersion": "1.0.0.5", "policyName": "vFireWall", "policyScope": "resource=sampleResource,type=sampletype,CLName=sampleCLName", "target_type": "VNF", "AAI": {"generic-vnf.vnf-name": "SOURCE_NAME1"}, "closedLoopAlarmStart": 1544705473016, "closedLoopEventStatus": "ONSET", "closedLoopControlName": "ControlLoopEvent1", "version": "2.0", "target": "genVnfName", "requestID": "8c1b8bd8-06f7-493f-8ed7-daaa4cc481bc", "from": "DCAE"}')

The postgres DB also has the cl_flag set, indicating that a control loop event with ONSET has been raised.

hb_vnf=# SELECT * FROM vnf_table_2;
  event_name   | source_name_key | last_epo_time | source_name  | cl_flag
---------------+-----------------+---------------+--------------+---------
 Heartbeat_vFW |               1 | 1544705272479 | SOURCE_NAME1 |       1
(1 row)

hb_vnf=#

The sample log from startup is as below

ubuntu@r3-aai-inst2:~/heartbeat12Dec/heartbeat$ sudo docker run -d --name hb1 --env-file env.list heartbeat.test1:latest
102413e8af4ab754e008cee43a01bf3d5439820aa91cfb4e099a140a7931fd71
ubuntu@r3-aai-inst2:~/heartbeat12Dec/heartbeat$ sudo docker logs -f hb1
/usr/local/lib/python3.6/site-packages/psycopg2/__init__.py:144: UserWarning: The psycopg2 wheel package will be renamed from release 2.8; in order to keep installing from binary please use "pip install --no-cache-dir psycopg2-binary" instead. For details see: <http://initd.org/psycopg/docs/install.html#binary-install-from-pypi>.
  """)
2018-12-12 12:39:58,968 | __main__ | misshtbtd | main | 309 |  INFO | MSHBD:Execution Started
2018-12-12 12:39:58,970 | __main__ | misshtbtd | main | 314 |  INFO | ('MSHBT:HB Properties -', '10.0.4.1', '5432', 'postgres', 'abc', 'hb_vnf', True, 300)
2018-12-12 12:39:58,970 | onap_dcae_cbs_docker_client.client | client | _get_uri_from_consul | 36 |  DEBUG | Trying to lookup service: http://10.12.6.50:8500/v1/catalog/service/config_binding_service
2018-12-12 12:39:58,974 | urllib3.connectionpool | connectionpool | _new_conn | 208 |  DEBUG | Starting new HTTP connection (1): 10.12.6.50
2018-12-12 12:39:58,976 | urllib3.connectionpool | connectionpool | _make_request | 396 |  DEBUG | http://10.12.6.50:8500 "GET /v1/catalog/service/config_binding_service HTTP/1.1" 200 375
2018-12-12 12:39:58,979 | urllib3.connectionpool | connectionpool | _new_conn | 208 |  DEBUG | Starting new HTTP connection (1): 10.12.6.50
2018-12-12 12:39:58,988 | urllib3.connectionpool | connectionpool | _make_request | 396 |  DEBUG | http://10.12.6.50:10000 "GET /service_component/mvp-dcaegen2-heartbeat-static HTTP/1.1" 200 1015
2018-12-12 12:39:58,989 | onap_dcae_cbs_docker_client.client | client | _get_path | 83 |  INFO | get_config returned the following configuration: {"heartbeat_config": {"vnfs": [{"eventName": "Heartbeat_S", "heartbeatcountmissed": 3, "heartbeatinterval": 60, "closedLoopControlName": "ControlLoopEvent1", "policyVersion": "1.0.0.5", "policyName": "vFireWall", "policyScope": "resource=sampleResource,type=sampletype,CLName=sampleCLName", "target_type": "VM", "target": "genVnfName", "version": "2.0"}, {"eventName": "Heartbeat_vFW", "heartbeatcountmissed": 3, "heartbeatinterval": 60, "closedLoopControlName": "ControlLoopEvent1", "policyVersion": "1.0.0.5", "policyName": "vFireWall", "policyScope": "resource=sampleResource,type=sampletype,CLName=sampleCLName", "target_type": "VNF", "target": "genVnfName", "version": "2.0"}]}, "streams_publishes": {"ves_heartbeat": {"dmaap_info": {"topic_url": "http://10.12.5.252:3904/events/unauthenticated.DCAE_CL_OUTPUT/"}, "type": "message_router"}}, "streams_subscribes": {"ves_heartbeat": {"dmaap_info": {"topic_url": "http://10.12.5.252:3904/events/unauthenticated.SEC_HEARTBEAT_INPUT/"}, "type": "message_router"}}}
2018-12-12 12:39:58,989 | __main__ | misshtbtd | fetch_json_file | 254 |  INFO | MSHBD:current config logged to : ../etc/download.json
2018-12-12 12:39:58,996 | __main__ | misshtbtd | fetch_json_file | 272 |  INFO | ('MSHBT: The json file is - ', '../etc/config.json')
2018-12-12 12:39:59,028 | __main__ | misshtbtd | create_database | 79 |  INFO | ('MSHBT:Create_database:DB not exists? ', (False,))
2018-12-12 12:39:59,030 | __main__ | misshtbtd | create_database | 86 |  INFO | MSHBD:Database already exists
2018-12-12 12:39:59,032 | __main__ | misshtbtd | create_update_db | 281 |  INFO | ('MSHBT: DB parameters -', '10.0.4.1', '5432', 'postgres', 'abc', 'hb_vnf')
2018-12-12 12:39:59,099 | __main__ | misshtbtd | main | 325 |  INFO | ('MSHBD:Current process id is', 7)
2018-12-12 12:39:59,099 | __main__ | misshtbtd | main | 326 |  INFO | MSHBD:Now be in a continuous loop
2018-12-12 12:39:59,111 | __main__ | misshtbtd | main | 331 |  INFO | ('MSHBT: hb_common values ', 6, 'RUNNING', '8909e4332e34-mvp-dcaegen2-heartbeat-static', 1544618286)
2018-12-12 12:39:59,111 | __main__ | misshtbtd | main | 335 |  INFO | ('MSHBD:pid,srcName,state,time,ctime,timeDiff is', 6, '8909e4332e34-mvp-dcaegen2-heartbeat-static', 'RUNNING', 1544618286, 1544618399, 113)
2018-12-12 12:39:59,111 | __main__ | misshtbtd | main | 378 |  INFO | MSHBD:Active instance is inactive for long time: Time to switchover
2018-12-12 12:39:59,111 | __main__ | misshtbtd | main | 380 |  INFO | MSHBD:Initiating to become Active Instance
2018-12-12 12:39:59,111 | onap_dcae_cbs_docker_client.client | client | _get_uri_from_consul | 36 |  DEBUG | Trying to lookup service: http://10.12.6.50:8500/v1/catalog/service/config_binding_service
2018-12-12 12:39:59,114 | urllib3.connectionpool | connectionpool | _new_conn | 208 |  DEBUG | Starting new HTTP connection (1): 10.12.6.50
2018-12-12 12:39:59,118 | urllib3.connectionpool | connectionpool | _make_request | 396 |  DEBUG | http://10.12.6.50:8500 "GET /v1/catalog/service/config_binding_service HTTP/1.1" 200 375
2018-12-12 12:39:59,120 | urllib3.connectionpool | connectionpool | _new_conn | 208 |  DEBUG | Starting new HTTP connection (1): 10.12.6.50
2018-12-12 12:39:59,129 | urllib3.connectionpool | connectionpool | _make_request | 396 |  DEBUG | http://10.12.6.50:10000 "GET /service_component/mvp-dcaegen2-heartbeat-static HTTP/1.1" 200 1015
2018-12-12 12:39:59,129 | onap_dcae_cbs_docker_client.client | client | _get_path | 83 |  INFO | get_config returned the following configuration: {"heartbeat_config": {"vnfs": [{"eventName": "Heartbeat_S", "heartbeatcountmissed": 3, "heartbeatinterval": 60, "closedLoopControlName": "ControlLoopEvent1", "policyVersion": "1.0.0.5", "policyName": "vFireWall", "policyScope": "resource=sampleResource,type=sampletype,CLName=sampleCLName", "target_type": "VM", "target": "genVnfName", "version": "2.0"}, {"eventName": "Heartbeat_vFW", "heartbeatcountmissed": 3, "heartbeatinterval": 60, "closedLoopControlName": "ControlLoopEvent1", "policyVersion": "1.0.0.5", "policyName": "vFireWall", "policyScope": "resource=sampleResource,type=sampletype,CLName=sampleCLName", "target_type": "VNF", "target": "genVnfName", "version": "2.0"}]}, "streams_publishes": {"ves_heartbeat": {"dmaap_info": {"topic_url": "http://10.12.5.252:3904/events/unauthenticated.DCAE_CL_OUTPUT/"}, "type": "message_router"}}, "streams_subscribes": {"ves_heartbeat": {"dmaap_info": {"topic_url": "http://10.12.5.252:3904/events/unauthenticated.SEC_HEARTBEAT_INPUT/"}, "type": "message_router"}}}
2018-12-12 12:39:59,129 | __main__ | misshtbtd | fetch_json_file | 254 |  INFO | MSHBD:current config logged to : ../etc/download.json
2018-12-12 12:39:59,139 | __main__ | misshtbtd | fetch_json_file | 272 |  INFO | ('MSHBT: The json file is - ', '../etc/config.json')
2018-12-12 12:39:59,139 | __main__ | misshtbtd | main | 386 |  INFO | ('MSHBD: Creating HB and DBM threads. The param pssed %d and %s', '../etc/config.json', 7)
2018-12-12 12:39:59,142 | __main__ | misshtbtd | create_process | 301 |  INFO | ('MSHBD:jobs list is', [<Process(Process-2, started)>, <Process(Process-3, started)>])
2018-12-12 12:39:59,221 | __main__ | misshtbtd | create_update_hb_common | 143 |  INFO | MSHBT:Updated  hb_common DB with new values
/usr/local/lib/python3.6/site-packages/psycopg2/__init__.py:144: UserWarning: The psycopg2 wheel package will be renamed from release 2.8; in order to keep installing from binary please use "pip install --no-cache-dir psycopg2-binary" instead. For details see: <http://initd.org/psycopg/docs/install.html#binary-install-from-pypi>.
  """)
2018-12-12 12:39:59,815 | __main__ | htbtworker | <module> | 243 |  INFO | HBT:HeartBeat thread Created
2018-12-12 12:39:59,815 | __main__ | htbtworker | <module> | 245 |  INFO | ('HBT:The config file name passed is -%s', '../etc/config.json')
/usr/local/lib/python3.6/site-packages/psycopg2/__init__.py:144: UserWarning: The psycopg2 wheel package will be renamed from release 2.8; in order to keep installing from binary please use "pip install --no-cache-dir psycopg2-binary" instead. For details see: <http://initd.org/psycopg/docs/install.html#binary-install-from-pypi>.
  """)
2018-12-12 12:39:59,931 | __main__ | cbs_polling | pollCBS | 39 |  INFO | ('CBSP:Main process ID in hb_common is %d', 7)
2018-12-12 12:39:59,931 | __main__ | cbs_polling | pollCBS | 41 |  INFO | ('CBSP:My parent process ID is %d', '7')
2018-12-12 12:39:59,931 | __main__ | cbs_polling | pollCBS | 43 |  INFO | ('CBSP:CBS Polling interval is %d', 300)
/usr/local/lib/python3.6/site-packages/psycopg2/__init__.py:144: UserWarning: The psycopg2 wheel package will be renamed from release 2.8; in order to keep installing from binary please use "pip install --no-cache-dir psycopg2-binary" instead. For details see: <http://initd.org/psycopg/docs/install.html#binary-install-from-pypi>.
  """)
2018-12-12 12:39:59,937 | __main__ | db_monitoring | <module> | 231 |  INFO | DBM: DBM Process started
2018-12-12 12:39:59,939 | __main__ | db_monitoring | <module> | 236 |  INFO | ('DBM:Parent process ID and json file name', '7', '../etc/config.json')
2018-12-12 12:40:09,860 | __main__ | htbtworker | process_msg | 71 |  INFO | ('\n\nHBT:eventnameList values ', ['Heartbeat_S', 'Heartbeat_vFW'])
2018-12-12 12:40:09,860 | __main__ | htbtworker | process_msg | 77 |  INFO | HBT:Getting :http://10.12.5.252:3904/events/unauthenticated.SEC_HEARTBEAT_INPUT/group1/1?timeout=15000
2018-12-12 12:40:09,864 | urllib3.connectionpool | connectionpool | _new_conn | 208 |  DEBUG | Starting new HTTP connection (1): 10.12.5.252
2018-12-12 12:40:19,968 | __main__ | db_monitoring | db_monitoring | 53 |  INFO | DBM: Active DB Monitoring Instance
2018-12-12 12:40:24,259 | __main__ | misshtbtd | main | 331 |  INFO | ('MSHBT: hb_common values ', 7, 'RUNNING', '102413e8af4a-mvp-dcaegen2-heartbeat-static', 1544618399)
2018-12-12 12:40:24,260 | __main__ | misshtbtd | main | 335 |  INFO | ('MSHBD:pid,srcName,state,time,ctime,timeDiff is', 7, '102413e8af4a-mvp-dcaegen2-heartbeat-static', 'RUNNING', 1544618399, 1544618424, 25)
2018-12-12 12:40:24,260 | __main__ | misshtbtd | main | 351 |  INFO | ('MSHBD:config status is', 'RUNNING')
2018-12-12 12:40:24,267 | __main__ | misshtbtd | create_update_hb_common | 143 |  INFO | MSHBT:Updated  hb_common DB with new values
2018-12-12 12:40:24,810 | urllib3.connectionpool | connectionpool | _make_request | 396 |  DEBUG | http://10.12.5.252:3904 "GET /events/unauthenticated.SEC_HEARTBEAT_INPUT/group1/1?timeout=15000 HTTP/1.1" 200 2
2018-12-12 12:40:24,812 | __main__ | htbtworker | process_msg | 92 |  INFO | ('HBT:', '[]')
2018-12-12 12:40:34,837 | __main__ | htbtworker | process_msg | 71 |  INFO | ('\n\nHBT:eventnameList values ', ['Heartbeat_S', 'Heartbeat_vFW'])
2018-12-12 12:40:34,838 | __main__ | htbtworker | process_msg | 77 |  INFO | HBT:Getting :http://10.12.5.252:3904/events/unauthenticated.SEC_HEARTBEAT_INPUT/group1/1?timeout=15000
2018-12-12 12:40:34,839 | urllib3.connectionpool | connectionpool | _new_conn | 208 |  DEBUG | Starting new HTTP connection (1): 10.12.5.252
2018-12-12 12:40:39,994 | __main__ | db_monitoring | db_monitoring | 53 |  INFO | DBM: Active DB Monitoring Instance
2018-12-12 12:40:49,304 | __main__ | misshtbtd | main | 331 |  INFO | ('MSHBT: hb_common values ', 7, 'RUNNING', '102413e8af4a-mvp-dcaegen2-heartbeat-static', 1544618424)
2018-12-12 12:40:49,304 | __main__ | misshtbtd | main | 335 |  INFO | ('MSHBD:pid,srcName,state,time,ctime,timeDiff is', 7, '102413e8af4a-mvp-dcaegen2-heartbeat-static', 'RUNNING', 1544618424, 1544618449, 25)
2018-12-12 12:40:49,304 | __main__ | misshtbtd | main | 351 |  INFO | ('MSHBD:config status is', 'RUNNING')
2018-12-12 12:40:49,314 | __main__ | misshtbtd | create_update_hb_common | 143 |  INFO | MSHBT:Updated  hb_common DB with new values
2018-12-12 12:40:49,681 | urllib3.connectionpool | connectionpool | _make_request | 396 |  DEBUG | http://10.12.5.252:3904 "GET /events/unauthenticated.SEC_HEARTBEAT_INPUT/group1/1?timeout=15000 HTTP/1.1" 200 2
2018-12-12 12:40:49,682 | __main__ | htbtworker | process_msg | 92 |  INFO | ('HBT:', '[]')
2018-12-12 12:40:59,719 | __main__ | htbtworker | process_msg | 71 |  INFO | ('\n\nHBT:eventnameList values ', ['Heartbeat_S', 'Heartbeat_vFW'])
2018-12-12 12:40:59,720 | __main__ | htbtworker | process_msg | 77 |  INFO | HBT:Getting :http://10.12.5.252:3904/events/unauthenticated.SEC_HEARTBEAT_INPUT/group1/1?timeout=15000
2018-12-12 12:40:59,721 | urllib3.connectionpool | connectionpool | _new_conn | 208 |  DEBUG | Starting new HTTP connection (1): 10.12.5.252
2018-12-12 12:41:00,036 | __main__ | db_monitoring | db_monitoring | 53 |  INFO | DBM: Active DB Monitoring Instance
2018-12-12 12:41:00,225 | urllib3.connectionpool | connectionpool | _make_request | 396 |  DEBUG | http://10.12.5.252:3904 "GET /events/unauthenticated.SEC_HEARTBEAT_INPUT/group1/1?timeout=15000 HTTP/1.1" 200 22
2018-12-12 12:41:00,226 | __main__ | htbtworker | process_msg | 92 |  INFO | ('HBT:', '["{\\"test\\":\\"msg\\"}"]')
2018-12-12 12:41:00,226 | __main__ | htbtworker | process_msg | 122 |  ERROR | ('HBT message process error - ', KeyError('event',))
2018-12-12 12:41:10,255 | __main__ | htbtworker | process_msg | 71 |  INFO | ('\n\nHBT:eventnameList values ', ['Heartbeat_S', 'Heartbeat_vFW'])
2018-12-12 12:41:10,255 | __main__ | htbtworker | process_msg | 77 |  INFO | HBT:Getting :http://10.12.5.252:3904/events/unauthenticated.SEC_HEARTBEAT_INPUT/group1/1?timeout=15000
2018-12-12 12:41:10,256 | urllib3.connectionpool | connectionpool | _new_conn | 208 |  DEBUG | Starting new HTTP connection (1): 10.12.5.252
2018-12-12 12:41:14,350 | __main__ | misshtbtd | main | 331 |  INFO | ('MSHBT: hb_common values ', 7, 'RUNNING', '102413e8af4a-mvp-dcaegen2-heartbeat-static', 1544618449)
2018-12-12 12:41:14,350 | __main__ | misshtbtd | main | 335 |  INFO | ('MSHBD:pid,srcName,state,time,ctime,timeDiff is', 7, '102413e8af4a-mvp-dcaegen2-heartbeat-static', 'RUNNING', 1544618449, 1544618474, 25)
2018-12-12 12:41:14,350 | __main__ | misshtbtd | main | 351 |  INFO | ('MSHBD:config status is', 'RUNNING')
2018-12-12 12:41:14,359 | __main__ | misshtbtd | create_update_hb_common | 143 |  INFO | MSHBT:Updated  hb_common DB with new values
2018-12-12 12:41:20,075 | __main__ | db_monitoring | db_monitoring | 53 |  INFO | DBM: Active DB Monitoring Instance
2018-12-12 12:41:25,193 | urllib3.connectionpool | connectionpool | _make_request | 396 |  DEBUG | http://10.12.5.252:3904 "GET /events/unauthenticated.SEC_HEARTBEAT_INPUT/group1/1?timeout=15000 HTTP/1.1" 200 2
2018-12-12 12:41:25,193 | __main__ | htbtworker | process_msg | 92 |  INFO | ('HBT:', '[]')
2018-12-12 12:41:35,222 | __main__ | htbtworker | process_msg | 71 |  INFO | ('\n\nHBT:eventnameList values ', ['Heartbeat_S', 'Heartbeat_vFW'])
2018-12-12 12:41:35,222 | __main__ | htbtworker | process_msg | 77 |  INFO | HBT:Getting :http://10.12.5.252:3904/events/unauthenticated.SEC_HEARTBEAT_INPUT/group1/1?timeout=15000
2018-12-12 12:41:35,223 | urllib3.connectionpool | connectionpool | _new_conn | 208 |  DEBUG | Starting new HTTP connection (1): 10.12.5.252
2018-12-12 12:41:35,838 | urllib3.connectionpool | connectionpool | _make_request | 396 |  DEBUG | http://10.12.5.252:3904 "GET /events/unauthenticated.SEC_HEARTBEAT_INPUT/group1/1?timeout=15000 HTTP/1.1" 200 662
2018-12-12 12:41:35,839 | __main__ | htbtworker | process_msg | 92 |  INFO | ('HBT:', '["{\\"event\\":{\\"commonEventHeader\\":{\\"startEpochMicrosec\\":1548313727714,\\"sourceId\\":\\"VNFA_SRC1\\",\\"eventId\\":\\"mvfs10\\",\\"nfcNamingCode\\":\\"VNFA\\",\\"timeZoneOffset\\":\\"UTC-05:30\\",\\"reportingEntityId\\":\\"cc305d54-75b4-431b-adb2-eb6b9e541234\\",\\"eventType\\":\\"platform\\",\\"priority\\":\\"Normal\\",\\"version\\":\\"4.0.2\\",\\"reportingEntityName\\":\\"ibcx0001vm002oam001\\",\\"sequence\\":1000,\\"domain\\":\\"heartbeat\\",\\"lastEpochMicrosec\\":1548313727714,\\"eventName\\":\\"Heartbeat_vDNS\\",\\"vesEventListenerVersion\\":\\"7.0.2\\",\\"sourceName\\":\\"SOURCE_NAME1\\",\\"nfNamingCode\\":\\"VNFA\\"},\\"heartbeatFields\\":{\\"heartbeatInterval\\":20,\\"heartbeatFieldsVersion\\":\\"3.0\\"}}}"]')
2018-12-12 12:41:35,839 | __main__ | htbtworker | process_msg | 125 |  INFO | ('HBT:Newly received HB event values ::', 'Heartbeat_vDNS', 1548313727714, 'SOURCE_NAME1')
2018-12-12 12:41:35,842 | __main__ | htbtworker | process_msg | 132 |  INFO | HBT:vnf_table_2 is already there
2018-12-12 12:41:35,842 | __main__ | htbtworker | process_msg | 183 |  INFO | HBT:eventName is not being monitored, Igonoring JSON message
2018-12-12 12:41:39,407 | __main__ | misshtbtd | main | 331 |  INFO | ('MSHBT: hb_common values ', 7, 'RUNNING', '102413e8af4a-mvp-dcaegen2-heartbeat-static', 1544618474)
2018-12-12 12:41:39,407 | __main__ | misshtbtd | main | 335 |  INFO | ('MSHBD:pid,srcName,state,time,ctime,timeDiff is', 7, '102413e8af4a-mvp-dcaegen2-heartbeat-static', 'RUNNING', 1544618474, 1544618499, 25)
2018-12-12 12:41:39,407 | __main__ | misshtbtd | main | 351 |  INFO | ('MSHBD:config status is', 'RUNNING')
2018-12-12 12:41:39,418 | __main__ | misshtbtd | create_update_hb_common | 143 |  INFO | MSHBT:Updated  hb_common DB with new values
2018-12-12 12:41:40,118 | __main__ | db_monitoring | db_monitoring | 53 |  INFO | DBM: Active DB Monitoring Instance
2018-12-12 12:41:45,864 | __main__ | htbtworker | process_msg | 71 |  INFO | ('\n\nHBT:eventnameList values ', ['Heartbeat_S', 'Heartbeat_vFW'])
2018-12-12 12:41:45,864 | __main__ | htbtworker | process_msg | 77 |  INFO | HBT:Getting :http://10.12.5.252:3904/events/unauthenticated.SEC_HEARTBEAT_INPUT/group1/1?timeout=15000
2018-12-12 12:41:45,865 | urllib3.connectionpool | connectionpool | _new_conn | 208 |  DEBUG | Starting new HTTP connection (1): 10.12.5.252
2018-12-12 12:41:46,482 | urllib3.connectionpool | connectionpool | _make_request | 396 |  DEBUG | http://10.12.5.252:3904 "GET /events/unauthenticated.SEC_HEARTBEAT_INPUT/group1/1?timeout=15000 HTTP/1.1" 200 661
2018-12-12 12:41:46,483 | __main__ | htbtworker | process_msg | 92 |  INFO | ('HBT:', '["{\\"event\\":{\\"commonEventHeader\\":{\\"startEpochMicrosec\\":1544608845841,\\"sourceId\\":\\"VNFB_SRC5\\",\\"eventId\\":\\"mvfs10\\",\\"nfcNamingCode\\":\\"VNFB\\",\\"timeZoneOffset\\":\\"UTC-05:30\\",\\"reportingEntityId\\":\\"cc305d54-75b4-431b-adb2-eb6b9e541234\\",\\"eventType\\":\\"platform\\",\\"priority\\":\\"Normal\\",\\"version\\":\\"4.0.2\\",\\"reportingEntityName\\":\\"ibcx0001vm002oam001\\",\\"sequence\\":1000,\\"domain\\":\\"heartbeat\\",\\"lastEpochMicrosec\\":1544608845841,\\"eventName\\":\\"Heartbeat_vFW\\",\\"vesEventListenerVersion\\":\\"7.0.2\\",\\"sourceName\\":\\"SOURCE_NAME2\\",\\"nfNamingCode\\":\\"VNFB\\"},\\"heartbeatFields\\":{\\"heartbeatInterval\\":20,\\"heartbeatFieldsVersion\\":\\"3.0\\"}}}"]')
2018-12-12 12:41:46,483 | __main__ | htbtworker | process_msg | 125 |  INFO | ('HBT:Newly received HB event values ::', 'Heartbeat_vFW', 1544608845841, 'SOURCE_NAME2')
2018-12-12 12:41:46,486 | __main__ | htbtworker | process_msg | 132 |  INFO | HBT:vnf_table_2 is already there
2018-12-12 12:41:46,486 | __main__ | htbtworker | process_msg | 136 |  INFO | ('HBT:', "Select source_name_count from vnf_table_1 where event_name='Heartbeat_vFW'")
2018-12-12 12:41:46,487 | __main__ | htbtworker | process_msg | 153 |  INFO | ('HBT:event name, source_name & source_name_count are', 'Heartbeat_vFW', 'SOURCE_NAME2', 1)
2018-12-12 12:41:46,487 | __main__ | htbtworker | process_msg | 157 |  INFO | ('HBT:eppc query is', "Select source_name from vnf_table_2 where event_name= 'Heartbeat_vFW' and source_name_key=1")
2018-12-12 12:41:46,487 | __main__ | htbtworker | process_msg | 165 |  INFO | ('HBT: Update vnf_table_2 : ', 0, [('SOURCE_NAME2',)])
2018-12-12 12:41:46,488 | __main__ | htbtworker | process_msg | 173 |  INFO | ('HBT: The source_name_key and source_name_count are ', 1, 1)
2018-12-12 12:41:56,508 | __main__ | htbtworker | process_msg | 71 |  INFO | ('\n\nHBT:eventnameList values ', ['Heartbeat_S', 'Heartbeat_vFW'])
2018-12-12 12:41:56,508 | __main__ | htbtworker | process_msg | 77 |  INFO | HBT:Getting :http://10.12.5.252:3904/events/unauthenticated.SEC_HEARTBEAT_INPUT/group1/1?timeout=15000
2018-12-12 12:41:56,509 | urllib3.connectionpool | connectionpool | _new_conn | 208 |  DEBUG | Starting new HTTP connection (1): 10.12.5.252
2018-12-12 12:42:00,160 | __main__ | db_monitoring | db_monitoring | 53 |  INFO | DBM: Active DB Monitoring Instance
2018-12-12 12:42:04,456 | __main__ | misshtbtd | main | 331 |  INFO | ('MSHBT: hb_common values ', 7, 'RUNNING', '102413e8af4a-mvp-dcaegen2-heartbeat-static', 1544618499)
2018-12-12 12:42:04,456 | __main__ | misshtbtd | main | 335 |  INFO | ('MSHBD:pid,srcName,state,time,ctime,timeDiff is', 7, '102413e8af4a-mvp-dcaegen2-heartbeat-static', 'RUNNING', 1544618499, 1544618524, 25)
2018-12-12 12:42:04,456 | __main__ | misshtbtd | main | 351 |  INFO | ('MSHBD:config status is', 'RUNNING')
2018-12-12 12:42:04,464 | __main__ | misshtbtd | create_update_hb_common | 143 |  INFO | MSHBT:Updated  hb_common DB with new values
2018-12-12 12:42:11,463 | urllib3.connectionpool | connectionpool | _make_request | 396 |  DEBUG | http://10.12.5.252:3904 "GET /events/unauthenticated.SEC_HEARTBEAT_INPUT/group1/1?timeout=15000 HTTP/1.1" 200 2
2018-12-12 12:42:11,464 | __main__ | htbtworker | process_msg | 92 |  INFO | ('HBT:', '[]')
2018-12-12 12:42:20,199 | __main__ | db_monitoring | db_monitoring | 53 |  INFO | DBM: Active DB Monitoring Instance
2018-12-12 12:42:21,489 | __main__ | htbtworker | process_msg | 71 |  INFO | ('\n\nHBT:eventnameList values ', ['Heartbeat_S', 'Heartbeat_vFW'])
2018-12-12 12:42:21,489 | __main__ | htbtworker | process_msg | 77 |  INFO | HBT:Getting :http://10.12.5.252:3904/events/unauthenticated.SEC_HEARTBEAT_INPUT/group1/1?timeout=15000
2018-12-12 12:42:21,491 | urllib3.connectionpool | connectionpool | _new_conn | 208 |  DEBUG | Starting new HTTP connection (1): 10.12.5.252
2018-12-12 12:42:29,490 | __main__ | misshtbtd | main | 331 |  INFO | ('MSHBT: hb_common values ', 7, 'RUNNING', '102413e8af4a-mvp-dcaegen2-heartbeat-static', 1544618524)
2018-12-12 12:42:29,490 | __main__ | misshtbtd | main | 335 |  INFO | ('MSHBD:pid,srcName,state,time,ctime,timeDiff is', 7, '102413e8af4a-mvp-dcaegen2-heartbeat-static', 'RUNNING', 1544618524, 1544618549, 25)
2018-12-12 12:42:29,490 | __main__ | misshtbtd | main | 351 |  INFO | ('MSHBD:config status is', 'RUNNING')
2018-12-12 12:42:29,503 | __main__ | misshtbtd | create_update_hb_common | 143 |  INFO | MSHBT:Updated  hb_common DB with new values
2018-12-12 12:42:36,431 | urllib3.connectionpool | connectionpool | _make_request | 396 |  DEBUG | http://10.12.5.252:3904 "GET /events/unauthenticated.SEC_HEARTBEAT_INPUT/group1/1?timeout=15000 HTTP/1.1" 200 2
2018-12-12 12:42:36,433 | __main__ | htbtworker | process_msg | 92 |  INFO | ('HBT:', '[]')
2018-12-12 12:42:40,235 | __main__ | db_monitoring | db_monitoring | 53 |  INFO | DBM: Active DB Monitoring Instance
2018-12-12 12:42:46,467 | __main__ | htbtworker | process_msg | 71 |  INFO | ('\n\nHBT:eventnameList values ', ['Heartbeat_S', 'Heartbeat_vFW'])
2018-12-12 12:42:46,467 | __main__ | htbtworker | process_msg | 77 |  INFO | HBT:Getting :http://10.12.5.252:3904/events/unauthenticated.SEC_HEARTBEAT_INPUT/group1/1?timeout=15000
2018-12-12 12:42:46,468 | urllib3.connectionpool | connectionpool | _new_conn | 208 |  DEBUG | Starting new HTTP connection (1): 10.12.5.252
2018-12-12 12:42:54,539 | __main__ | misshtbtd | main | 331 |  INFO | ('MSHBT: hb_common values ', 7, 'RUNNING', '102413e8af4a-mvp-dcaegen2-heartbeat-static', 1544618549)
2018-12-12 12:42:54,539 | __main__ | misshtbtd | main | 335 |  INFO | ('MSHBD:pid,srcName,state,time,ctime,timeDiff is', 7, '102413e8af4a-mvp-dcaegen2-heartbeat-static', 'RUNNING', 1544618549, 1544618575, 26)
2018-12-12 12:42:54,539 | __main__ | misshtbtd | main | 351 |  INFO | ('MSHBD:config status is', 'RUNNING')
2018-12-12 12:42:54,555 | __main__ | misshtbtd | create_update_hb_common | 143 |  INFO | MSHBT:Updated  hb_common DB with new values
2018-12-12 12:43:00,273 | __main__ | db_monitoring | db_monitoring | 53 |  INFO | DBM: Active DB Monitoring Instance
2018-12-12 12:43:01,415 | urllib3.connectionpool | connectionpool | _make_request | 396 |  DEBUG | http://10.12.5.252:3904 "GET /events/unauthenticated.SEC_HEARTBEAT_INPUT/group1/1?timeout=15000 HTTP/1.1" 200 2
2018-12-12 12:43:01,416 | __main__ | htbtworker | process_msg | 92 |  INFO | ('HBT:', '[]')
2018-12-12 12:43:11,439 | __main__ | htbtworker | process_msg | 71 |  INFO | ('\n\nHBT:eventnameList values ', ['Heartbeat_S', 'Heartbeat_vFW'])
2018-12-12 12:43:11,439 | __main__ | htbtworker | process_msg | 77 |  INFO | HBT:Getting :http://10.12.5.252:3904/events/unauthenticated.SEC_HEARTBEAT_INPUT/group1/1?timeout=15000
2018-12-12 12:43:11,440 | urllib3.connectionpool | connectionpool | _new_conn | 208 |  DEBUG | Starting new HTTP connection (1): 10.12.5.252
2018-12-12 12:43:19,592 | __main__ | misshtbtd | main | 331 |  INFO | ('MSHBT: hb_common values ', 7, 'RUNNING', '102413e8af4a-mvp-dcaegen2-heartbeat-static', 1544618575)
2018-12-12 12:43:19,593 | __main__ | misshtbtd | main | 335 |  INFO | ('MSHBD:pid,srcName,state,time,ctime,timeDiff is', 7, '102413e8af4a-mvp-dcaegen2-heartbeat-static', 'RUNNING', 1544618575, 1544618600, 25)
2018-12-12 12:43:19,593 | __main__ | misshtbtd | main | 351 |  INFO | ('MSHBD:config status is', 'RUNNING')
2018-12-12 12:43:19,601 | __main__ | misshtbtd | create_update_hb_common | 143 |  INFO | MSHBT:Updated  hb_common DB with new values
2018-12-12 12:43:20,309 | __main__ | db_monitoring | db_monitoring | 53 |  INFO | DBM: Active DB Monitoring Instance
2018-12-12 12:43:26,383 | urllib3.connectionpool | connectionpool | _make_request | 396 |  DEBUG | http://10.12.5.252:3904 "GET /events/unauthenticated.SEC_HEARTBEAT_INPUT/group1/1?timeout=15000 HTTP/1.1" 200 2
2018-12-12 12:43:26,384 | __main__ | htbtworker | process_msg | 92 |  INFO | ('HBT:', '[]')
2018-12-12 12:43:36,399 | __main__ | htbtworker | process_msg | 71 |  INFO | ('\n\nHBT:eventnameList values ', ['Heartbeat_S', 'Heartbeat_vFW'])
2018-12-12 12:43:36,400 | __main__ | htbtworker | process_msg | 77 |  INFO | HBT:Getting :http://10.12.5.252:3904/events/unauthenticated.SEC_HEARTBEAT_INPUT/group1/1?timeout=15000
2018-12-12 12:43:36,401 | urllib3.connectionpool | connectionpool | _new_conn | 208 |  DEBUG | Starting new HTTP connection (1): 10.12.5.252
2018-12-12 12:43:40,346 | __main__ | db_monitoring | db_monitoring | 53 |  INFO | DBM: Active DB Monitoring Instance
2018-12-12 12:43:44,635 | __main__ | misshtbtd | main | 331 |  INFO | ('MSHBT: hb_common values ', 7, 'RUNNING', '102413e8af4a-mvp-dcaegen2-heartbeat-static', 1544618600)
2018-12-12 12:43:44,635 | __main__ | misshtbtd | main | 335 |  INFO | ('MSHBD:pid,srcName,state,time,ctime,timeDiff is', 7, '102413e8af4a-mvp-dcaegen2-heartbeat-static', 'RUNNING', 1544618600, 1544618625, 25)
2018-12-12 12:43:44,636 | __main__ | misshtbtd | main | 351 |  INFO | ('MSHBD:config status is', 'RUNNING')
2018-12-12 12:43:44,645 | __main__ | misshtbtd | create_update_hb_common | 143 |  INFO | MSHBT:Updated  hb_common DB with new values
2018-12-12 12:43:51,339 | urllib3.connectionpool | connectionpool | _make_request | 396 |  DEBUG | http://10.12.5.252:3904 "GET /events/unauthenticated.SEC_HEARTBEAT_INPUT/group1/1?timeout=15000 HTTP/1.1" 200 2
2018-12-12 12:43:51,343 | __main__ | htbtworker | process_msg | 92 |  INFO | ('HBT:', '[]')
2018-12-12 12:44:00,385 | __main__ | db_monitoring | db_monitoring | 53 |  INFO | DBM: Active DB Monitoring Instance
2018-12-12 12:44:01,369 | __main__ | htbtworker | process_msg | 71 |  INFO | ('\n\nHBT:eventnameList values ', ['Heartbeat_S', 'Heartbeat_vFW'])
2018-12-12 12:44:01,369 | __main__ | htbtworker | process_msg | 77 |  INFO | HBT:Getting :http://10.12.5.252:3904/events/unauthenticated.SEC_HEARTBEAT_INPUT/group1/1?timeout=15000
2018-12-12 12:44:01,371 | urllib3.connectionpool | connectionpool | _new_conn | 208 |  DEBUG | Starting new HTTP connection (1): 10.12.5.252
2018-12-12 12:44:09,678 | __main__ | misshtbtd | main | 331 |  INFO | ('MSHBT: hb_common values ', 7, 'RUNNING', '102413e8af4a-mvp-dcaegen2-heartbeat-static', 1544618625)
2018-12-12 12:44:09,679 | __main__ | misshtbtd | main | 335 |  INFO | ('MSHBD:pid,srcName,state,time,ctime,timeDiff is', 7, '102413e8af4a-mvp-dcaegen2-heartbeat-static', 'RUNNING', 1544618625, 1544618650, 25)
2018-12-12 12:44:09,679 | __main__ | misshtbtd | main | 351 |  INFO | ('MSHBD:config status is', 'RUNNING')
2018-12-12 12:44:09,687 | __main__ | misshtbtd | create_update_hb_common | 143 |  INFO | MSHBT:Updated  hb_common DB with new values
2018-12-12 12:44:16,313 | urllib3.connectionpool | connectionpool | _make_request | 396 |  DEBUG | http://10.12.5.252:3904 "GET /events/unauthenticated.SEC_HEARTBEAT_INPUT/group1/1?timeout=15000 HTTP/1.1" 200 2
2018-12-12 12:44:16,313 | __main__ | htbtworker | process_msg | 92 |  INFO | ('HBT:', '[]')
2018-12-12 12:44:20,422 | __main__ | db_monitoring | db_monitoring | 53 |  INFO | DBM: Active DB Monitoring Instance
2018-12-12 12:44:26,338 | __main__ | htbtworker | process_msg | 71 |  INFO | ('\n\nHBT:eventnameList values ', ['Heartbeat_S', 'Heartbeat_vFW'])
2018-12-12 12:44:26,338 | __main__ | htbtworker | process_msg | 77 |  INFO | HBT:Getting :http://10.12.5.252:3904/events/unauthenticated.SEC_HEARTBEAT_INPUT/group1/1?timeout=15000
2018-12-12 12:44:26,339 | urllib3.connectionpool | connectionpool | _new_conn | 208 |  DEBUG | Starting new HTTP connection (1): 10.12.5.252
2018-12-12 12:44:34,721 | __main__ | misshtbtd | main | 331 |  INFO | ('MSHBT: hb_common values ', 7, 'RUNNING', '102413e8af4a-mvp-dcaegen2-heartbeat-static', 1544618650)
2018-12-12 12:44:34,721 | __main__ | misshtbtd | main | 335 |  INFO | ('MSHBD:pid,srcName,state,time,ctime,timeDiff is', 7, '102413e8af4a-mvp-dcaegen2-heartbeat-static', 'RUNNING', 1544618650, 1544618675, 25)
2018-12-12 12:44:34,721 | __main__ | misshtbtd | main | 351 |  INFO | ('MSHBD:config status is', 'RUNNING')
2018-12-12 12:44:34,730 | __main__ | misshtbtd | create_update_hb_common | 143 |  INFO | MSHBT:Updated  hb_common DB with new values
2018-12-12 12:44:40,448 | __main__ | db_monitoring | db_monitoring | 53 |  INFO | DBM: Active DB Monitoring Instance
2018-12-12 12:44:41,287 | urllib3.connectionpool | connectionpool | _make_request | 396 |  DEBUG | http://10.12.5.252:3904 "GET /events/unauthenticated.SEC_HEARTBEAT_INPUT/group1/1?timeout=15000 HTTP/1.1" 200 2
2018-12-12 12:44:41,288 | __main__ | htbtworker | process_msg | 92 |  INFO | ('HBT:', '[]')
2018-12-12 12:44:51,316 | __main__ | htbtworker | process_msg | 71 |  INFO | ('\n\nHBT:eventnameList values ', ['Heartbeat_S', 'Heartbeat_vFW'])
2018-12-12 12:44:51,316 | __main__ | htbtworker | process_msg | 77 |  INFO | HBT:Getting :http://10.12.5.252:3904/events/unauthenticated.SEC_HEARTBEAT_INPUT/group1/1?timeout=15000
2018-12-12 12:44:51,317 | urllib3.connectionpool | connectionpool | _new_conn | 208 |  DEBUG | Starting new HTTP connection (1): 10.12.5.252
2018-12-12 12:44:59,764 | __main__ | misshtbtd | main | 331 |  INFO | ('MSHBT: hb_common values ', 7, 'RUNNING', '102413e8af4a-mvp-dcaegen2-heartbeat-static', 1544618675)
2018-12-12 12:44:59,764 | __main__ | misshtbtd | main | 335 |  INFO | ('MSHBD:pid,srcName,state,time,ctime,timeDiff is', 7, '102413e8af4a-mvp-dcaegen2-heartbeat-static', 'RUNNING', 1544618675, 1544618700, 25)
2018-12-12 12:44:59,764 | __main__ | misshtbtd | main | 351 |  INFO | ('MSHBD:config status is', 'RUNNING')
2018-12-12 12:44:59,773 | __main__ | misshtbtd | create_update_hb_common | 143 |  INFO | MSHBT:Updated  hb_common DB with new values
2018-12-12 12:45:00,038 | __main__ | cbs_polling | pollCBS | 52 |  INFO | CBSP:ACTIVE Instance:Change the state to RECONFIGURATION
2018-12-12 12:45:00,046 | misshtbtd | misshtbtd | create_update_hb_common | 143 |  INFO | MSHBT:Updated  hb_common DB with new values
2018-12-12 12:45:00,055 | __main__ | cbs_polling | pollCBS | 39 |  INFO | ('CBSP:Main process ID in hb_common is %d', 7)
2018-12-12 12:45:00,055 | __main__ | cbs_polling | pollCBS | 41 |  INFO | ('CBSP:My parent process ID is %d', '7')
2018-12-12 12:45:00,055 | __main__ | cbs_polling | pollCBS | 43 |  INFO | ('CBSP:CBS Polling interval is %d', 300)
2018-12-12 12:45:00,485 | __main__ | db_monitoring | db_monitoring | 225 |  INFO | DBM:Inactive instance or hb_common state is not RUNNING
2018-12-12 12:45:06,290 | urllib3.connectionpool | connectionpool | _make_request | 396 |  DEBUG | http://10.12.5.252:3904 "GET /events/unauthenticated.SEC_HEARTBEAT_INPUT/group1/1?timeout=15000 HTTP/1.1" 200 2
2018-12-12 12:45:06,291 | __main__ | htbtworker | process_msg | 92 |  INFO | ('HBT:', '[]')
2018-12-12 12:45:16,308 | __main__ | htbtworker | process_msg | 57 |  INFO | HBT:Waiting for hb_common state to become RUNNING
2018-12-12 12:45:20,517 | __main__ | db_monitoring | db_monitoring | 225 |  INFO | DBM:Inactive instance or hb_common state is not RUNNING
2018-12-12 12:45:24,806 | __main__ | misshtbtd | main | 331 |  INFO | ('MSHBT: hb_common values ', 7, 'RECONFIGURATION', '102413e8af4a-mvp-dcaegen2-heartbeat-static', 1544618700)
2018-12-12 12:45:24,806 | __main__ | misshtbtd | main | 335 |  INFO | ('MSHBD:pid,srcName,state,time,ctime,timeDiff is', 7, '102413e8af4a-mvp-dcaegen2-heartbeat-static', 'RECONFIGURATION', 1544618700, 1544618725, 25)
2018-12-12 12:45:24,806 | __main__ | misshtbtd | main | 351 |  INFO | ('MSHBD:config status is', 'RECONFIGURATION')
2018-12-12 12:45:24,806 | __main__ | misshtbtd | main | 357 |  INFO | MSHBD:Reconfiguration is in progress,Starting new processes by killing the present processes
2018-12-12 12:45:24,806 | onap_dcae_cbs_docker_client.client | client | _get_uri_from_consul | 36 |  DEBUG | Trying to lookup service: http://10.12.6.50:8500/v1/catalog/service/config_binding_service
2018-12-12 12:45:24,808 | urllib3.connectionpool | connectionpool | _new_conn | 208 |  DEBUG | Starting new HTTP connection (1): 10.12.6.50
2018-12-12 12:45:24,810 | urllib3.connectionpool | connectionpool | _make_request | 396 |  DEBUG | http://10.12.6.50:8500 "GET /v1/catalog/service/config_binding_service HTTP/1.1" 200 375
2018-12-12 12:45:24,814 | urllib3.connectionpool | connectionpool | _new_conn | 208 |  DEBUG | Starting new HTTP connection (1): 10.12.6.50
2018-12-12 12:45:24,820 | urllib3.connectionpool | connectionpool | _make_request | 396 |  DEBUG | http://10.12.6.50:10000 "GET /service_component/mvp-dcaegen2-heartbeat-static HTTP/1.1" 200 1015
2018-12-12 12:45:24,821 | onap_dcae_cbs_docker_client.client | client | _get_path | 83 |  INFO | get_config returned the following configuration: {"heartbeat_config": {"vnfs": [{"eventName": "Heartbeat_S", "heartbeatcountmissed": 3, "heartbeatinterval": 60, "closedLoopControlName": "ControlLoopEvent1", "policyVersion": "1.0.0.5", "policyName": "vFireWall", "policyScope": "resource=sampleResource,type=sampletype,CLName=sampleCLName", "target_type": "VM", "target": "genVnfName", "version": "2.0"}, {"eventName": "Heartbeat_vFW", "heartbeatcountmissed": 3, "heartbeatinterval": 60, "closedLoopControlName": "ControlLoopEvent1", "policyVersion": "1.0.0.5", "policyName": "vFireWall", "policyScope": "resource=sampleResource,type=sampletype,CLName=sampleCLName", "target_type": "VNF", "target": "genVnfName", "version": "2.0"}]}, "streams_publishes": {"ves_heartbeat": {"dmaap_info": {"topic_url": "http://10.12.5.252:3904/events/unauthenticated.DCAE_CL_OUTPUT/"}, "type": "message_router"}}, "streams_subscribes": {"ves_heartbeat": {"dmaap_info": {"topic_url": "http://10.12.5.252:3904/events/unauthenticated.SEC_HEARTBEAT_INPUT/"}, "type": "message_router"}}}
2018-12-12 12:45:24,821 | __main__ | misshtbtd | fetch_json_file | 254 |  INFO | MSHBD:current config logged to : ../etc/download.json
2018-12-12 12:45:24,828 | __main__ | misshtbtd | fetch_json_file | 272 |  INFO | ('MSHBT: The json file is - ', '../etc/config.json')
2018-12-12 12:45:24,829 | __main__ | misshtbtd | create_update_db | 281 |  INFO | ('MSHBT: DB parameters -', '10.0.4.1', '5432', 'postgres', 'abc', 'hb_vnf')
2018-12-12 12:45:24,840 | __main__ | misshtbtd | create_update_vnf_table_1 | 162 |  INFO | MSHBT:Set Validity flag to zero in vnf_table_1 table
2018-12-12 12:45:24,841 | __main__ | misshtbtd | create_update_vnf_table_1 | 191 |  INFO | MSHBT:Updated vnf_table_1 as per the json configuration file
2018-12-12 12:45:24,843 | __main__ | misshtbtd | main | 362 |  INFO | ('MSHBD: parameters  passed to DBM and HB are %d and %s', 7)
2018-12-12 12:45:24,852 | __main__ | misshtbtd | create_update_hb_common | 143 |  INFO | MSHBT:Updated  hb_common DB with new values
2018-12-12 12:45:26,325 | __main__ | htbtworker | process_msg | 71 |  INFO | ('\n\nHBT:eventnameList values ', ['Heartbeat_S', 'Heartbeat_vFW'])
2018-12-12 12:45:26,325 | __main__ | htbtworker | process_msg | 77 |  INFO | HBT:Getting :http://10.12.5.252:3904/events/unauthenticated.SEC_HEARTBEAT_INPUT/group1/1?timeout=15000
2018-12-12 12:45:26,326 | urllib3.connectionpool | connectionpool | _new_conn | 208 |  DEBUG | Starting new HTTP connection (1): 10.12.5.252
2018-12-12 12:45:40,549 | __main__ | db_monitoring | db_monitoring | 53 |  INFO | DBM: Active DB Monitoring Instance
2018-12-12 12:45:41,267 | urllib3.connectionpool | connectionpool | _make_request | 396 |  DEBUG | http://10.12.5.252:3904 "GET /events/unauthenticated.SEC_HEARTBEAT_INPUT/group1/1?timeout=15000 HTTP/1.1" 200 2
2018-12-12 12:45:41,268 | __main__ | htbtworker | process_msg | 92 |  INFO | ('HBT:', '[]')
2018-12-12 12:45:49,885 | __main__ | misshtbtd | main | 331 |  INFO | ('MSHBT: hb_common values ', 7, 'RUNNING', '102413e8af4a-mvp-dcaegen2-heartbeat-static', 1544618725)
2018-12-12 12:45:49,886 | __main__ | misshtbtd | main | 335 |  INFO | ('MSHBD:pid,srcName,state,time,ctime,timeDiff is', 7, '102413e8af4a-mvp-dcaegen2-heartbeat-static', 'RUNNING', 1544618725, 1544618750, 25)
2018-12-12 12:45:49,886 | __main__ | misshtbtd | main | 351 |  INFO | ('MSHBD:config status is', 'RUNNING')
2018-12-12 12:45:49,894 | __main__ | misshtbtd | create_update_hb_common | 143 |  INFO | MSHBT:Updated  hb_common DB with new values
2018-12-12 12:45:51,291 | __main__ | htbtworker | process_msg | 71 |  INFO | ('\n\nHBT:eventnameList values ', ['Heartbeat_S', 'Heartbeat_vFW'])
2018-12-12 12:45:51,291 | __main__ | htbtworker | process_msg | 77 |  INFO | HBT:Getting :http://10.12.5.252:3904/events/unauthenticated.SEC_HEARTBEAT_INPUT/group1/1?timeout=15000
2018-12-12 12:45:51,292 | urllib3.connectionpool | connectionpool | _new_conn | 208 |  DEBUG | Starting new HTTP connection (1): 10.12.5.252
2018-12-12 12:46:00,585 | __main__ | db_monitoring | db_monitoring | 53 |  INFO | DBM: Active DB Monitoring Instance

Kpi Computation MS

Kpi Computation MS overview and functions
Introduction
Kpi Computation MS is a software component of ONAP that performs calculations in accordance with formulas that are defined dynamically. The service includes the following features:

  • Subscribe to original PM data from DMaaP.
  • Perform KPI computation based on KPI formulas obtained from config policies; the formulas can be configured dynamically.
  • Publish KPI results on DMaaP.
  • Receive requests for specific KPI computation (future scope) on specific 'objects' (e.g., S-NSSAI, Service).

Architecture

The internal architecture of Kpi Computation MS is shown below.

_images/arch1.PNG
Functionality

Kpi Computation MS performs calculations based on PM data in VES format and publishes the KPI results as VES events on a DMaaP Message Router topic, for consumers that prefer such data in VES format. Kpi Computation MS receives the PM data by subscribing to a Message Router topic.

Flows:

  1. KPI Computation MS gets PM data in VES format from DMaaP.
  2. Other modules (e.g., SO/OOF/Slice Analysis MS) can also request KPI-MS for KPI calculation (future scope beyond H-release).
  3. KPI Computation MS supports periodical KPI computation. A period may optionally be specified by a requestor; if nothing is specified, KPI Computation MS continues computation until an explicit stop trigger is received.
  4. The KPI results generated by the KPI computation are published to DMaaP.
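
As an illustration, each KPI formula entry in the config policy (shown in full in the monitoring policy under Helm Installation below) is a small JSON object; the values in this sketch are taken from that example and sum the GTP.InDataOctN3UPF counters into an UpstreamThr KPI:

{
  "eventName": "perf3gpp_CORE-UPF_pmMeasResult",
  "controlLoopSchemaType": "SLICE",
  "kpis": [
    {
      "measType": "UpstreamThr",
      "operation": "SUM",
      "operands": "GTP.InDataOctN3UPF"
    }
  ]
}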

Verification
Publish a file to the PM-Mapper using the following example curl:

curl -k -X PUT https://dcae-pm-mapper:8443/delivery/<filename> -H 'X-DMAAP-DR-META:{"productName": "AcmeNode","vendorName": "Acme","lastEpochMicrosec": "1538478000000","sourceName": "oteNB5309","startEpochMicrosec": "1538478900000","timeZoneOffset": "UTC+05:00","location": "ftpes://127.0.0.1:22/ftp/rop/A20161224.1045-1100.bin.gz","compression": "gzip","fileFormatType": "org.3GPP.32.435#measCollec","fileFormatVersion": "V9"}' -H "Content-Type:application/xml" --data-binary @<filename> -H 'X-ONAP-RequestID: 12345' -H 'X-DMAAP-DR-PUBLISH-ID: 12345'

Example type A file:

<?xml version="1.0" encoding="utf-8"?>
<measCollecFile xmlns="http://www.3gpp.org/ftp/specs/archive/32_series/32.435#measCollec">
  <fileHeader dnPrefix="www.google.com" vendorName="CMCC" fileFormatVersion="32.435 V10.0">
    <fileSender localDn="some sender name"/>
    <measCollec beginTime="2020-06-02T12:00:00Z"/>
  </fileHeader>
  <measData>
    <managedElement swVersion="r0.1" localDn="UPFMeasurement"/>
    <measInfo measInfoId="UPFFunction0">
      <job jobId="job10"/>
      <granPeriod endTime="2020-06-02T12:15:00Z" duration="PT900S"/>
      <repPeriod duration="PT900S"/>
      <measType p="1">GTP.InDataOctN3UPF.08_010101</measType>
      <measType p="2">GTP.OutDataOctN3UPF.08_010101</measType>
      <measValue measObjLdn="some measObjLdn">
        <r p="1">10</r>
        <r p="2">20</r>
        <suspect>false</suspect>
      </measValue>
    </measInfo>
    <measInfo measInfoId="UPFFunction1">
      <job jobId="job10"/>
      <granPeriod endTime="2020-06-02T12:15:00Z" duration="PT900S"/>
      <repPeriod duration="PT900S"/>
      <measType p="1">GTP.InDataOctN3UPF.08_010101</measType>
      <measType p="2">GTP.OutDataOctN3UPF.08_010101</measType>
      <measValue measObjLdn="some measObjLdn">
        <r p="1">30</r>
        <r p="2">40</r>
        <suspect>false</suspect>
      </measValue>
    </measInfo>
  </measData>
  <fileFooter>
    <measCollec endTime="2020-06-02T12:15:00Z"/>
  </fileFooter>
</measCollecFile>

Curl the topic on Message Router to retrieve the published event:
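
For example, to consume the published event from a Message Router topic (the host, topic name, consumer group, and consumer ID below are placeholders; 3904 is the default unauthenticated Message Router port, as also seen in the heartbeat logs above):

curl 'http://<message-router-host>:3904/events/<topic-name>/<consumer-group>/<consumer-id>?timeout=15000'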

Example message output:
{
  "event": {
    "commonEventHeader": {
      "domain": "perf3gpp",
      "eventId": "432fa910-feed-4c64-9532-bd63201080d8",
      "eventName": "perf3gpp_AcmeNode-Acme_pmMeasResult",
      "lastEpochMicrosec": 1591100100000,
      "priority": "Normal",
      "reportingEntityName": "",
      "sequence": 0,
      "sourceName": "oteNB5309",
      "startEpochMicrosec": 1591099200000,
      "version": 4.0,
      "vesEventListenerVersion": "7.1",
      "timeZoneOffset": "UTC+05:00"
    },
    "perf3gppFields": {
      "perf3gppFieldsVersion": "1.0",
      "measDataCollection": {
        "granularityPeriod": 1591100100000,
        "measuredEntityUserName": "",
        "measuredEntityDn": "UPFMeasurement",
        "measuredEntitySoftwareVersion": "r0.1",
        "measInfoList": [
          {
            "measInfoId": {
              "sMeasTypesList": "SLICE"
            },
            "measTypes": {
              "sMeasTypesList": ["UpstreamThr08_010101"]
            },
            "measValuesList": [
              {
                "suspectFlag": false,
                "measResults": [
                  {
                    "p": 1,
                    "sValue": "40"
                  }
                ]
              }
            ]
          }
        ]
      }
    }
  }
}

Interaction

Kpi Computation MS interacts with the Config Binding Service to get configuration information.

Kpi Computation MS Installation Steps
Installation

Kpi Computation MS can be deployed using a Cloudify blueprint from the bootstrap container of an existing DCAE deployment.

Deployment Pre-requisites
  • DCAE and DMaaP pods should be up and running.

  • PM mapper service should be running.

  • Make sure that cfy is installed and configured to work with the Cloudify deployment.

Deployment steps

Execute bash on the bootstrap Kubernetes pod.

kubectl -n onap exec -it <dcaegen2-dcae-bootstrap> bash

Validate Blueprint
Before uploading the blueprint to Cloudify Manager, validate it with the following command.
#cfy blueprint validate /blueprints/k8s-kpi-ms.yaml
Upload the Blueprint to Cloudify Manager.
After validation, upload the blueprint.
#cfy blueprint upload -b kpi-ms /blueprints/k8s-kpi-ms.yaml
Verify Uploaded Blueprints
Use "cfy blueprint list" to verify your work.
#cfy blueprint list
The returned listing should confirm that the blueprint has been uploaded correctly.
_images/blueprint-list1.png
Verify Plugin Versions
If the version of the plugin used is different, update the blueprint import to match.
#cfy plugins list
Create Deployment
Here we are going to create the deployment for the KPI-MS.
#cfy deployments create -b kpi-ms kpi-ms
Launch Service
Next, we are going to launch the KPI-MS.
#cfy executions start -d kpi-ms install
Verify the Deployment Result

The following command can be used to list the kpi-ms logs.

#kubectl logs <kpi-pod> -n onap
The output should look like:
_images/kpi-log.PNG
Uninstall
Uninstall the running component and delete the deployment:
#cfy uninstall kpi-ms
Delete Blueprint
#cfy blueprints delete kpi-ms
Helm Installation

The Kpi Computation microservice can be deployed using Helm charts in the OOM repository.

Deployment Pre-requisites
  • DMaaP pods should be up and running.

  • PM mapper service should be running.

  • Policy pods should be running.

  • Required policies should be created and pushed to the policy component. Steps for creating and pushing policy models:

    1. Log in to policy-drools-pdp-0 container

      kubectl exec -ti --namespace <namespace> policy-drools-pdp-0 bash
      
    2. Create policy type:

      curl -k --silent --user 'healthcheck:zb!XztG34' -X POST "https://policy-api:6969/policy/api/v1/policytypes" -H "Accept: application/json" -H "Content-Type: application/json" --data '{"policy_types":{"onap.policies.monitoring.docker.kpims.app":{"derived_from":"onap.policies.Monitoring:1.0.0","description":"KPI ms policy type","properties":{"domain":{"required":true,"type":"string"},"methodForKpi":{"type":"list","required":true,"entry_schema":{"type":"policy.data.methodForKpi_properties"}}},"version":"1.0.0"}},"data_types":{"policy.data.methodForKpi_properties":{"derived_from":"tosca.nodes.Root","properties":{"eventName":{"type":"string","required":true},"controlLoopSchemaType":{"type":"string","required":true},"policyScope":{"type":"string","required":true},"policyName":{"type":"string","required":true},"policyVersion":{"type":"string","required":true},"kpis":{"type":"list","required":true,"entry_schema":{"type":"policy.data.kpis_properties"}}}},"policy.data.kpis_properties":{"derived_from":"tosca.nodes.Root","properties":{"measType":{"type":"string","required":true},"operation":{"type":"string","required":true},"operands":{"type":"string","required":true}}}},"tosca_definitions_version":"tosca_simple_yaml_1_1_0"}'
      
    3. Create monitoring policy:

      curl -k --user 'healthcheck:zb!XztG34' -X POST "https://policy-api:6969/policy/api/v1/policytypes/onap.policies.monitoring.docker.kpims.app/versions/1.0.0/policies" -H "Accept: application/json" -H "Content-Type: application/json" --data '{"name":"ToscaServiceTemplateSimple","topology_template":{"policies":[{"com.Config_KPIMS_CONFIG_POLICY":{"type":"onap.policies.monitoring.docker.kpims.app","type_version":"1.0.0","version":"1.0.0","metadata":{"policy-id":"com.Config_KPIMS_CONFIG_POLICY","policy-version":"1"},"name":"com.Config_KPIMS_CONFIG_POLICY","properties":{"domain":"measurementsForKpi","methodForKpi":[{"eventName":"perf3gpp_CORE-AMF_pmMeasResult","controlLoopSchemaType":"SLICE","policyScope":"resource=networkSlice;type=configuration","policyName":"configuration.dcae.microservice.kpi-computation","policyVersion":"v0.0.1","kpis":[{"measType":"AMFRegNbr","operation":"SUM","operands":"RM.RegisteredSubNbrMean"}]},{"eventName":"perf3gpp_CORE-UPF_pmMeasResult","controlLoopSchemaType":"SLICE","policyScope":"resource=networkSlice;type=configuration","policyName":"configuration.dcae.microservice.kpi-computation","policyVersion":"v0.0.1","kpis":[{"measType":"UpstreamThr","operation":"SUM","operands":"GTP.InDataOctN3UPF"},{"measType":"DownstreamThr","operation":"SUM","operands":"GTP.OutDataOctN3UPF"}]}]}}}]},"tosca_definitions_version":"tosca_simple_yaml_1_1_0","version":"1.0.0"}'
      
    4. Push monitoring policy:

      curl --silent -k --user 'healthcheck:zb!XztG34' -X POST "https://policy-pap:6969/policy/pap/v1/pdps/policies" -H "Accept: application/json" -H "Content-Type: application/json" --data '{"policies":[{"policy-id":"com.Config_KPIMS_CONFIG_POLICY","policy-version":1}]}'
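
      Optionally, as a sanity check (a sketch, assuming the same healthcheck credentials as above), the PDP groups and their deployed policies can be listed by querying PAP:

      curl -k --silent --user 'healthcheck:zb!XztG34' -X GET "https://policy-pap:6969/policy/pap/v1/pdps" -H "Accept: application/json"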
      
Deployment steps
  • Default app config values can be updated in oom/kubernetes/dcaegen2-services/components/dcae-kpi-ms/values.yaml.

  • Update the monitoring policy ID in the configuration below, which enables the policy-sync sidecar container to be deployed and to retrieve the active policy configuration.

    dcaePolicySyncImage: onap/org.onap.dcaegen2.deployments.dcae-services-policy-sync:1.0.1
    policies:
      policyID: |
        '["com.Config_KPIMS_CONFIG_POLICY"]'
    
  • Enable KPI MS component in oom/kubernetes/dcaegen2-services/values.yaml

    dcae-kpi-ms:
      enabled: true
    
  • Make the chart and deploy using the following command:

    cd oom/kubernetes/
    make dcaegen2-services
    helm install dev-dcaegen2-services dcaegen2-services --namespace <namespace> --set global.masterPassword=<password>
    
  • To deploy only KPI MS:

    helm install dev-dcae-kpi-ms dcaegen2-services/components/dcae-kpi-ms --namespace <namespace> --set global.masterPassword=<password>
    
  • To uninstall:

    helm uninstall dev-dcae-kpi-ms
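
  • To verify the deployment (a quick check; pod and namespace names depend on your environment), confirm that the KPI MS pod is running:

    kubectl get pods -n <namespace> | grep kpi-ms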
    
Application Configurations

Configuration            Description
---------------------    ------------------------------------------------------
streams_subscribes       DMaaP topics from which the MS consumes messages
streams_publishes        DMaaP topics to which the MS publishes messages
cbsPollingInterval       Polling interval for consuming config data from CBS
pollingInterval          Polling interval for consuming DMaaP messages
pollingTimeout           Polling timeout for consuming DMaaP messages
dmaap.server             Location of message routers
cg                       DMaaP consumer group for subscription
cid                      DMaaP consumer ID for subscription
trust_store_path         Location of trust.jks file
trust_store_pass_path    Location of trust.pass file

Kpi Computation MS Configurations
Configuration

KPI Computation MS expects to be able to fetch its configuration directly from the Consul service in the following JSON format:
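
A minimal sketch of the expected shape, assembled from the parameters listed under Application Configurations above; the stream names, topic URLs, and values below are illustrative placeholders, not the authoritative schema:

{
  "pollingInterval": 20,
  "pollingTimeout": 60,
  "cbsPollingInterval": 300,
  "dmaap.server": ["message-router"],
  "cg": "kpi-cg",
  "cid": "kpi-cid",
  "trust_store_path": "/opt/app/kpims/etc/cert/trust.jks",
  "trust_store_pass_path": "/opt/app/kpims/etc/cert/trust.pass",
  "streams_subscribes": {
    "performance_management_topic": {
      "type": "message_router",
      "dmaap_info": {
        "topic_url": "http://message-router:3904/events/<input-topic>"
      }
    }
  },
  "streams_publishes": {
    "kpi_topic": {
      "type": "message_router",
      "dmaap_info": {
        "topic_url": "http://message-router:3904/events/<output-topic>"
      }
    }
  }
}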

During ONAP OOM/Kubernetes deployment this configuration is created from KPI Computation MS Cloudify blueprint.

PM Subscription Handler

Overview
Introduction

The PM Subscription Handler (PMSH) is a Python-based microservice that allows for the definition and activation of PM subscriptions on one or more network function (NF) instances.

Functionality

PMSH allows for the definition of subscriptions on a network level, which enables the configuration of PM data on a set of NF instances. During creation of a subscription, the PM reporting configuration and a network function filter are defined. This filter is then used to produce a subset of NFs to which the subscription will be applied. The NFs in question must have an Active orchestration-status in A&AI. If an NF matching the filter is registered in ONAP after the microservice has been deployed, the subscription will be applied to that NF.

Interaction
Config Binding Service

PMSH interacts with the Config Binding Service to retrieve its configuration information, including the subscription information.

DMaaP

PMSH subscribes and publishes to various DMaaP Message Router topics (See Topics for more information on which topics are used).

A&AI

PMSH interacts with A&AI to fetch data about network functions. The nfFilter is then applied to this data to produce a targeted subset of NFs.

Policy

PMSH interacts indirectly with Policy via DMaaP Message Router to trigger an action on an operational policy defined by the operator. The operational policy must align with the inputs provided in the event sent from PMSH.

CDS

The operational policy will be used to make a request to CDS, which will apply/remove the subscription to/from the NF. The CDS blueprint processor will execute the action over NETCONF towards the NF. (See DCAE_CL_OUTPUT_Topic for more details.)

Multiple CDS Blueprint support

When PMSH applies the nfFilter during the parsing of the NF data, it will attempt to retrieve the relevant blueprint information defined in A&AI related to that model. These are optional parameters in SDC (sdnc_model_name, sdnc_model_version), and can be defined as properties assignment inputs, then pushed to A&AI during distribution.

If no blueprint information is available, the NF will be skipped and no subscription event sent.

If successful, the sdnc_model_name and sdnc_model_version will be sent as part of the event to the policy framework as blueprintName and blueprintVersion respectively. This in turn will be sent from the operational policy towards CDS blueprint processor, to trigger the action for the relevant blueprint.

Delivery
Docker Container

The PMSH is delivered as a Docker image that can be downloaded from the ONAP Docker registry:

nexus3.onap.org:10001/onap/org.onap.dcaegen2.services.pm-subscription-handler:1.0.3
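
For example, the image can be pulled directly from the registry:

docker pull nexus3.onap.org:10001/onap/org.onap.dcaegen2.services.pm-subscription-handler:1.0.3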

The PMSH logs are rotated when the log file reaches 10 MB; up to 10 backup files are kept.

Logging

The PMSH application writes logs at INFO level to STDOUT, and also to the following file:

/var/log/ONAP/dcaegen2/services/pmsh/application.log
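
To follow this log inside a running PMSH pod (the pod name and namespace below are placeholders), something like the following can be used:

kubectl -n onap exec -it <pmsh-pod> -- tail -f /var/log/ONAP/dcaegen2/services/pmsh/application.log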

To configure PMSH log level, the configuration yaml needs to be altered:

vi /opt/app/pmsh/log_config.yaml

The onap_logger level should be changed from INFO to DEBUG in order to capture debug logs. This will affect both the STDOUT logs and the logs written to the application.log file:

loggers:
    onap_logger:
        level: INFO
Configuration

The PMSH is configured and deployed via the DCAE dashboard.

Application specific configuration

The application config is the basic information that PMSH needs to run. The following parameters are required; they are specified in the dashboard deployment GUI.

tag_version (string, required)
    Docker image to be used.
    Default: nexus3.onap.org:10001/onap/org.onap.dcaegen2.services.pm-subscription-handler:1.0.3

replicas (integer, required)
    Number of instances.
    Default: 1

operational_policy_name (string, required)
    Name of the operational policy to be executed.
    Default: pmsh-operational-policy

control_loop_name (string, required)
    Name of the control loop.
    Default: pmsh-control-loop

pm_publish_topic_name (string, required)
    The topic that PMSH will publish to, and which policy will subscribe to.
    Default: unauthenticated.DCAE_CL_OUTPUT

policy_feedback_topic_name (string, required)
    The topic that PMSH will subscribe to, and which policy will publish to.
    Default: unauthenticated.PMSH_CL_INPUT

aai_notification_topic_name (string, required)
    The topic that PMSH will subscribe to, and which AAI will publish change events to.
    Default: AAI-EVENT

publisher_client_role (string, required)
    The client role used to publish to the topic that policy will subscribe to.
    Default: org.onap.dcae.pmPublisher

subscriber_client_role (string, required)
    The client role used to subscribe to the topic that AAI will publish change events to.
    Default: org.onap.dcae.pmSubscriber

dcae_location (string, required)
    Location of the DCAE cluster.
    Default: san-francisco

cpu_limit (string, required)
    CPU limit for the PMSH service.
    Default: 1000m

cpu_request (string, required)
    Requested CPU for the PMSH service.
    Default: 1000m

memory_limit (string, required)
    Memory limit for the PMSH service.
    Default: 1024Mi

memory_request (string, required)
    Requested memory for the PMSH service.
    Default: 1024Mi

pgaas_cluster_name (string, required)
    Cluster name for Postgres as a Service.
    Default: dcae-pg-primary.onap

enable_tls (boolean, required)
    Boolean flag to toggle HTTPS cert auth support.
    Default: true

protocol (string, required)
    HTTP protocol for PMSH. If 'enable_tls' is false, protocol must be set to http.
    Default: https

Subscription configuration

The subscription is configured within the monitoring policy. The subscription model schema is as follows:

subscription

{
   "subscription":{
      "subscriptionName":"someExtraPM-All-gNB-R2B",
      "administrativeState":"UNLOCKED",
      "fileBasedGP":15,
      "fileLocation":"/pm/pm.xml",
      "nfFilter":{
         "nfNames":[
            "^pnf1.*"
         ],
         "modelInvariantIDs":[
            "5845y423-g654-6fju-po78-8n53154532k6",
            "7129e420-d396-4efb-af02-6b83499b12f8"
         ],
         "modelVersionIDs":[
            "e80a6ae3-cafd-4d24-850d-e14c084a5ca9"
         ],
        "modelNames": [
            "pnf102"
        ]
      },
      "measurementGroups":[
         {
            "measurementGroup":{
               "measurementTypes":[
                  {
                     "measurementType":"EutranCell.*"
                  },
                  {
                     "measurementType":"EutranCellRelation.pmCounter1"
                  },
                  {
                     "measurementType":"EutranCellRelation.pmCounter2"
                  }
               ],
               "managedObjectDNsBasic":[
                  {
                     "DN":"ManagedElement=1,ENodeBFunction=1,EUtranCell=CityCenter1"
                  },
                  {
                     "DN":"ManagedElement=1,ENodeBFunction=1,EUtranCell=CityCenter1, EUtranCellRelation=CityCenter2"
                  },
                  {
                     "DN":"ManagedElement=1,ENodeBFunction=1,EUtranCell=CityCenter1, EUtranCellRelation=CityCenter3"
                  }
               ]
            }
         }
      ]
   }
}

subscriptionName
    Name of the subscription.

administrativeState
    Setting a subscription to UNLOCKED will apply the subscription to the NF instances immediately. If it is set to LOCKED, it will not be applied until it is later unlocked.

fileBasedGP
    The frequency at which measurements are produced.

fileLocation
    Location of the Report Output Period file.

nfFilter
    The network function filter will be used to filter the list of NFs stored in A&AI to produce a subset.

measurementGroups
    List containing measurementGroup.

nfFilter

The nfFilter will be used in order to filter the list of NFs retrieved from A&AI. There are four criteria that can be filtered on: nfNames, modelInvariantIDs, modelVersionIDs, and/or modelNames. All four of these are optional fields, but at least one must be present for the filter to work.

"nfFilter": {
    "nfNames":[
       "^pnf.*",
       "^vnf.*"
    ],
    "modelInvariantIDs": [
       "5845y423-g654-6fju-po78-8n53154532k6",
       "7129e420-d396-4efb-af02-6b83499b12f8"
    ],
    "modelVersionIDs": [
       "e80a6ae3-cafd-4d24-850d-e14c084a5ca9"
    ],
    "modelNames": [
        "pnf102"
    ]
}

nfNames (list, optional)
    List of NF names. These names are regexes, which will be parsed by the PMSH.

modelInvariantIDs (list, optional)
    List of modelInvariantIDs. These UUIDs will be checked for exact matches with AAI entities.

modelVersionIDs (list, optional)
    List of modelVersionIDs. These IDs will be checked for exact matches with AAI entities.

modelNames (list, optional)
    List of modelNames. These names will be checked for exact matches with AAI entities.

measurementGroup

measurementGroup is used to specify the group of measurements that will be collected.

"measurementGroup": {
   "measurementTypes": [
     {
       "measurementType": "EutranCell.*"
     },
     {
       "measurementType": "EutranCellRelation.pmCounter1"
     },
     {
       "measurementType": "EutranCellRelation.pmCounter2"
     }
   ],
   "managedObjectDNsBasic": [
     {
       "DN": "ManagedElement=1,ENodeBFunction=1,EUtranCell=CityCenter1"
     },
     {
       "DN": "ManagedElement=1,ENodeBFunction=1,EUtranCell=CityCenter1, EUtranCellRelation=CityCenter2"
     },
     {
       "DN": "ManagedElement=1,ENodeBFunction=1,EUtranCell=CityCenter1, EUtranCellRelation=CityCenter3"
     }
   ]
}

measurementTypes (list, required)
    List of measurement types. These are regexes; it is expected that either the CDS blueprint or the NF can parse them, as the PMSH will not do so.

managedObjectDNsBasic (list, required)
    List of managed object distinguished names.

MR Topics
Subscriber:
AAI-EVENT

This topic is used so that the PMSH can listen for new NFs getting added or deleted. If the NF matches the NF filter (See Configuration) it will be added to the relevant subscription.

unauthenticated.PMSH_CL_INPUT

This topic enables the operational policy to provide feedback on the status of a subscription attempt, back to PMSH, with a message of either success or failed.

Example of successful CREATE event sent from policy:

{
    "name": "ResponseEvent",
    "nameSpace": "org.onap.policy.apex.onap.pmcontrol",
    "source": "APEX",
    "target": "DCAE",
    "version": "0.0.1",
    "status": {
        "subscriptionName": "subscriptiona",
        "nfName": "PNF104",
        "changeType": "CREATE",
        "message": "success"
    }
}
Publisher:
unauthenticated.DCAE_CL_OUTPUT

PMSH publishes subscriptions to this topic. They will be consumed by an operational policy which will make a request to CDS to change the state of the subscription.

Example event sent from PMSH:

{
   "nfName":"PNF104",
   "ipv4Address": "10.12.13.12",
   "policyName":"pmsh-operational-policy",
   "closedLoopControlName":"pmsh-control-loop",
   "blueprintName":"pm_control",
   "blueprintVersion":"1.2.4",
   "changeType":"CREATE",
   "subscription":{
      "administrativeState":"UNLOCKED",
      "subscriptionName":"subscriptiona",
      "fileBasedGP":15,
      "fileLocation":"/pm/pm.xml",
      "measurementGroups":[
         {
            "measurementGroup":{
               "measurementTypes":[
                  {
                     "measurementType":"countera"
                  },
                  {
                     "measurementType":"counterb"
                  }
               ],
               "managedObjectDNsBasic":[
                  {
                     "DN":"dna"
                  },
                  {
                     "DN":"dnb"
                  }
               ]
            }
         },
         {
            "measurementGroup":{
               "measurementTypes":[
                  {
                     "measurementType":"counterc"
                  },
                  {
                     "measurementType":"counterd"
                  }
               ],
               "managedObjectDNsBasic":[
                  {
                     "DN":"dnc"
                  },
                  {
                     "DN":"dnd"
                  }
               ]
            }
         }
      ]
   }
}
Troubleshooting
Configuration Errors

If the PMSH fails to start and is in CrashLoopBackOff, it is likely due to a configuration error.
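
To inspect the logs of the crashed container (the pod name is a placeholder), the previous container's output can be retrieved:

kubectl -n onap logs <pmsh-pod> --previous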

Unable to connect to Config Binding Service

The PMSH may not be able to reach the Config Binding Service. If this is the case, you will see an error connecting to the Config Binding Service when checking the logs in Kibana.

Invalid Configuration

If the PMSH is able to connect to the Config Binding Service but is still failing to start, it may be due to invalid configuration. Check Kibana for an incorrect-configuration error.

Slice Analysis MS

Slice Analysis MS is introduced in ONAP for:

  (a) Analyzing the FM/PM data (reported from the xNFs) and KPI data (computed from PM data) related to various slice instances (NSIs), slice sub-net instances (NSSIs) and services catered to by the slices (S-NSSAIs).
  (b) Determining and triggering appropriate Control Loop actions based on the analysis above.
  (c) Receiving recommendations for closed loop actions from ML or Analytics engines, performing validity checks, etc. to determine if the actions can be carried out, and then triggering the appropriate Control Loop.

In Guilin, this MS:

  • Performs simple Closed Loop control actions for the RAN slice sub-net instances based on simple analysis of a set of RAN PM data
  • Initiates simple control loop actions in the RAN based on recommendations from an ML engine for RAN slice sub-net instance re-configuration

For the control loops, SO, VES Collector, Policy, DMaaP, CCSDK/SDN-R, AAI, PM-Mapper and DFC are involved, apart from this MS.

Flow diagrams are available at: https://wiki.onap.org/display/DW/Closed+Loop https://wiki.onap.org/display/DW/Intelligent+Slicing+flow

Slice Analysis MS overview and functions
Architecture

The internal architecture of Slice Analysis MS is shown below.

_images/slice_analysis_ms_arch.jpg

The Slice Analysis MS has a DMaaP interface towards Policy and VES-Collector, and a REST interface towards Config DB. It also has a DMaaP interface to receive any recommendations for Closed Loop updates from an ML engine, which is then used to trigger a control loop message to Policy.

  • DMaaP Client creates a thread pool for every DMaaP topic consumer. The thread polls the DMaaP topic at the configured time interval and, whenever a message is received, stores that message in the Postgres DB.

  • PM Thread reads the PM event from the database and puts the PM sample in the internal queue in the format which is needed for further processing.

  • Consumer Thread consumes PM samples from the internal queue and make all the required Config DB calls, perform the analysis, and puts the onset message to the DMaaP topic.

  • Database is a PG DB.
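
For reference, each DMaaP poll is a plain HTTP GET against the Message Router events API; a minimal sketch with placeholder host, topic, consumer group and consumer ID:

    # returns a JSON array of messages (empty if none arrive within the timeout, in ms)
    curl -X GET "http://<DMAAP_HOST>:3904/events/<TOPIC_NAME>/<consumer-group>/<consumer-id>?timeout=15000"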

Detailed flow diagrams are available at:

Closed Loop: https://wiki.onap.org/display/DW/Closed+Loop

Intelligent Slicing: https://wiki.onap.org/display/DW/Intelligent+Slicing+flow

Functional Description
  • Slice Analysis MS consumes PM messages from the PERFORMANCE_MEASUREMENTS topic.

  • For the analysis, Slice Analysis MS consumes various data from Config DB, including the list of Network Functions which serve the S-NSSAI, the list of Near-RT RICs and the corresponding cell mappings of the S-NSSAI, the current configuration of the Near-RT RICs, the Slice Profile associated with the S-NSSAI, and the subscriber details of the S-NSSAI (for sending the onset message to Policy).

  • Based on the collected PM data, Slice Analysis MS computes the DLThptPerSlice and ULThptPerSlice for the Near-RT RICs relevant for the S-NSSAI, and the computed values are compared with the current configuration of the Near-RT RICs. If the change in configuration exceeds the minimum percentage value, which is kept as a configuration parameter, the closed loop is triggered by posting the onset message to DMaaP (see the sketch after this list).

  • Upon reception of a recommendation to update the configuration of the RAN from, e.g., an ML MS, the Slice Analysis MS prepares and sends a control loop onset message.
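
The trigger condition described above is a simple relative-change check; an illustrative shell sketch, with hypothetical values standing in for the computed throughput and the configured minimumPercentageChange:

    current=100   # currently configured value for the Near-RT RIC
    computed=120  # value computed from the collected PM samples
    min_pct=10    # minimumPercentageChange from the app config
    change=$(( (computed - current) * 100 / current ))
    # post the onset message only when the absolute change exceeds the threshold
    if [ "${change#-}" -ge "$min_pct" ]; then
        echo "trigger closed loop: post onset message to DMaaP"
    fi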

Deployment aspects

The Slice Analysis MS will be deployed on DCAE as an on-demand component. Details of the installation steps are available at ./installation.rst. Further details can be obtained from: https://wiki.onap.org/pages/viewpage.action?pageId=92998809

Known Issues and Resolutions

The assumptions of functionality in the Guilin release are documented in: https://wiki.onap.org/display/DW/Assumptions+for+Guilin+release

Slice Analysis MS Installation Steps
Installation

Slice Analysis MS can be deployed via a Cloudify blueprint using the bootstrap container of an existing DCAE deployment.

Deployment Pre-requisites
  • DCAE and DMaaP pods should be up and running.

  • DMaaP Bus Controller PostInstalls job should have completed successfully (executed as part of an OOM install).

  • PM mapper service should be running.

  • Config DB service should be running.

  • Make sure that cfy is installed and configured to work with the Cloudify deployment.

Deployment steps
  1. Execute bash on the bootstrap Kubernetes pod.

    kubectl -n onap exec -it <dcaegen2-dcae-bootstrap> bash

  2. Go to the /blueprints directory.

Check that the tag_version in the slice-analysis-ms blueprint is correct for the release of ONAP that it is being installed on; see the Nexus link below for the slice-analysis-ms tag_versions. Nexus link: https://nexus3.onap.org/#browse/browse:docker.public:v2%2Fonap%2Forg.onap.dcaegen2.services.components.slice-analysis-ms%2Ftags

  3. Create an input file (a sketch follows).
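
    The required inputs vary by blueprint version; a minimal hypothetical k8s-slice-input.yaml, with tag_version as an assumed input name (check the blueprint's inputs section for the exact keys):

     cat > k8s-slice-input.yaml <<EOF
     # hypothetical input; the tag must match an available image tag from the Nexus link above
     tag_version: nexus3.onap.org:10001/onap/org.onap.dcaegen2.services.components.slice-analysis-ms:1.0.6
     EOF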

  4. Run the Cloudify install command to install the slice-analysis-ms with the blueprint and the newly created input file k8s-slice-input.yaml.

    $ cfy install k8s-slice-analysis-ms.yaml -i k8s-slice-input.yaml --blueprint-id sliceanalysisms

    Details of the sample output are available at: https://wiki.onap.org/pages/viewpage.action?pageId=92998809.

  5. To un-deploy:

$ cfy uninstall sliceanalysisms

Application configurations

samples
    Minimum number of samples that must be present for analysis

minimumPercentageChange
    Minimum percentage of configuration change above which the control loop should be triggered

initialDelaySeconds
    Initial delay in milliseconds for the consumer thread to start after the application startup

config_db
    Host where the Config DB application is running

performance_management_topicurl
    DMaaP topic URL to which PM data are posted by network functions

dcae_cl_topic_url
    DMaaP topic URL to which onset messages that trigger the control loop are posted

dcae_cl_response_topic_url
    DMaaP topic URL to which Policy posts the message after a successful control loop trigger

intelligent_slicing_topic_url
    DMaaP topic URL to which the ML MS posts messages

dmaap_polling_interval
    DMaaP polling interval in milliseconds

Helm Installation

Slice Analysis MS can be deployed using Helm charts as a Kubernetes application.

Deployment Pre-requisites
  • DCAE and DMaaP pods should be up and running.

  • PM mapper service should be running.

  • Config DB service, CPS and AAI should be running.

  • The environment should have helm and kubernetes installed.

  • Check whether all the charts mentioned in the requirements.yaml file are present in the charts/ folder. If not present, package the respective chart and put it in the charts/ folder.

    For example:
    helm package <dcaegen2-services-common>
    
Deployment steps
  1. Go to the directory where the dcae-slice-analysis-ms chart is present and execute the below command.
    helm install <slice_analysis_ms> <dcae-slice-analysis-ms> --namespace onap --set global.masterPassword=guilin2021
    
  2. We can check the logs of the slice-analysis-ms container by using the below command
    kubectl logs -f -n onap <dev-dcae-slice-analysis-ms-9fd8495f7-zmnlw> -c <dcae-slice-analysis-ms>
    
  3. To un-deploy
    helm uninstall <slice_analysis_ms>
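
    To confirm that the release was removed, list the remaining releases; a minimal sketch, assuming the onap namespace:

     # no output means the release is gone
     helm ls --namespace onap | grep slice-analysis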
    
Application configurations

postgres host
    Host where the Postgres application is running

pollingInterval
    DMaaP polling interval in milliseconds

pollingTimeout
    DMaaP polling timeout in milliseconds

configDb service
    Host where the Config DB application is running

configDbEnabled
    Whether to use Config DB or CPS & AAI

aai url
    Host where the AAI application is running

cps url
    Host where the CPS TBDMT application is running

samples
    Minimum number of samples that must be present for analysis

minimumPercentageChange
    Minimum percentage of configuration change above which the control loop should be triggered

initialDelaySeconds
    Initial delay in milliseconds for the consumer thread to start after the application startup

cl_topic
    DMaaP topic URL to which onset messages that trigger the control loop are posted

performance_management_topic
    DMaaP topic URL to which PM data are posted by network functions

intelligent_slicing_topic
    DMaaP topic URL to which the ML MS posts messages

dcae_cl_response_topic
    DMaaP topic URL to which Policy posts the message after a successful control loop trigger

Slice Analysis MS Troubleshooting Steps
Troubleshooting steps
  1. Microservice stops and restarts during startup

    Possible reason & Solution: The microservice is not registered with Consul
    • Check Consul to verify that the microservice is registered and that the MS is able to fetch the app config from CBS. Check whether CBS and Consul are deployed properly and try to redeploy the MS.

      The below logs will be seen if CBS is not reachable by the MS

      15:14:13.861 [main] WARN org.postgresql.Driver - JDBC URL port: 0 not valid (1:65535)
      15:14:13.862 [main] WARN o.s.b.w.s.c.AnnotationConfigServletWebServerApplicationContext - Exception encountered during context initialization - cancelling refresh attempt: org.springframework.beans.factory.UnsatisfiedDependencyException: Error creating bean with name 'org.springframework.boot.autoconfigure.orm.jpa.HibernateJpaConfiguration': Unsatisfied dependency expressed through constructor parameter 0; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'dataSource' defined in org.onap.dcaegen2.services.sliceanalysisms.Application: Initialization of bean failed; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'org.springframework.boot.autoconfigure.jdbc.DataSourceInitializerInvoker': Invocation of init method failed; nested exception is org.springframework.jdbc.datasource.init.UncategorizedScriptException: Failed to execute database script; nested exception is java.lang.RuntimeException: Driver org.postgresql.Driver claims to not accept jdbcUrl, jdbc:postgresql://null:0/sliceanalysisms
      15:14:13.865 [main] INFO o.a.catalina.core.StandardService - Stopping service [Tomcat]
      15:14:13.877 [main] INFO o.s.b.a.l.ConditionEvaluationReportLoggingListener - Error starting ApplicationContext. To display the conditions report re-run your application with 'debug' enabled.
      15:14:13.880 [main] ERROR o.s.boot.SpringApplication - Application run failed
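
      One way to check registration and config retrieval by hand; a sketch with placeholder in-cluster hostnames and the component's service name:

       # list instances of the component registered in Consul
       curl "http://<consul-host>:8500/v1/catalog/service/<service-component-name>"
       # fetch the resolved app config from Config Binding Service
       curl "http://<config-binding-service>:10000/service_component/<service-component-name>"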

  2. No PostgreSQL clusters have been deployed on this manager

    Solution:

    kubectl exec -ti -n onap dev-dcaemod-db-primary-56ff585cf7-dxkkx bash
    psql
    ALTER ROLE "postgres" WITH PASSWORD 'onapdemodb';
    \q

    kubectl exec -ti -n onap dev-dcae-bootstrap-b47854569-dnrqf bash
    cfy blueprints upload -b pgaas_initdb /blueprints/k8s-pgaas-initdb.yaml
    cfy deployments create -b pgaas_initdb -i k8s-pgaas-initdb-inputs.yaml pgaas_initdb
    cfy executions start -d pgaas_initdb install

Logging

Since the Slice Analysis MS is deployed as a pod in Kubernetes, we can check the logs by using the following command:

$ kubectl logs <pod-name> --namespace onap

SON-Handler MS

SON-Handler MS is introduced in ONAP for implementing the pre-processing and co-ordination actions of various RAN SON use cases. PCI optimization and centralized ANR updates are handled in the Frankfurt release.

OOF, DCAE (SON-Handler MS and VES Collector), Policy, DMaaP and SDN-C (SDN-R) are involved in the realization of this use case.

_images/flowdiagram.jpg
SON-Handler MS overview and functions
Architecture

The architecture below depicts the SON-Handler MS as a part of DCAE. Only the relevant interactions and components are shown.

_images/dcae_new.jpg

The internal architecture of SON-Handler MS is shown below.

_images/son_handler.jpg
Description

The SON-Handler MS has a REST interface towards OOF as well as DMaaP interface towards Policy, VES-Collector and SDN-R. It has a database and core logic.

Core logic

The core logic is implemented as 3 threads: a main thread, child thread(s) for handling neighbor-list updates and collision/confusion alarms from the RAN (via SDN-R), and a separate child thread for handling handover measurement (PM) inputs from the RAN (via VES-Collector). The Main Thread is responsible for spawning and terminating the Child Threads. The core logic is responsible for:

  • Performing all the pre-processing that is required before triggering OOF for PCI as well as PCI/ANR joint-optimization

  • Autonomously taking actions for ANR updates

  • Preparing the message contents required by SDN-R to re-configure the RAN nodes with PCI/ANR updates

The logic may not be 100% fool-proof (i.e., it may not cover all possible scenarios and boundary conditions seen in real field deployments), nor the most efficient. An attempt has been made to balance usefulness for a PoC against the complexity of handling all possible scenarios. It is intended to provide a good base for the community/users to enhance further as required.

The details of the state machines of all the threads in the core logic are available in https://wiki.onap.org/pages/viewpage.action?pageId=56131985.

In the Frankfurt release, adaptive SON functionality was introduced for PCI optimization. While determining the optimum PCI values to resolve PCI collision and confusion, the optimizer also takes into consideration a set of cells whose PCI values may not be changed during the optimization. Such situations could arise, for example, when the PCI value of a cell could not be updated in the past (for whatever reason), or when a configuration policy specifies that certain cells' PCI values should never be changed. The SON-Handler MS therefore keeps track of cells whose PCI values cannot be changed, and when triggering OOF for PCI optimization it also provides the list of those cells.

Details of Frankfurt implementation are available in https://wiki.onap.org/display/DW/SON-Handler+MS+%28DCAE%29+Impacts.

Database

This is a PostgreSQL DB, and is intended to persist information such as the following:

  • PCI-Handler MS Config information (e.g., thresholds, timer values, OOF algorithm name, etc.)

  • Pre-processing results and other related information (e.g., neighbor list)

  • Buffered notifications (i.e., notifications not yet processed at all)

  • State information

  • Association between PNF-name and CellId

  • Aggregated PM/FM data

  • List of cells whose PCI values are fixed

  • Etc.

DMaaP Client

This is responsible for registering with DMaaP for the notifications from SDN-R and VES-Collector, and for sending messages to Policy.

Deployment aspects

The SON-Handler MS will be deployed on DCAE as an on-demand component. Details of the installation steps are available at ./installation.rst. Further details can be obtained from https://wiki.onap.org/pages/viewpage.action?pageId=76875778

Known Issues and Resolutions

The scope and scenarios addressed are documented in the SON use case page - https://wiki.onap.org/display/DW/OOF-PCI+Use+Case+-+Dublin+Release+-+ONAP+based+SON+for+PCI+and+ANR. The enhancements and limitations in Frankfurt release are documented in the SON use case page for Frankfurt - https://wiki.onap.org/display/DW/OOF+%28SON%29+in+R5+El+Alto%2C+OOF+%28SON%29+in+R6+Frankfurt.

SON-Handler MS Installation Steps, Configurations, Troubleshooting Tips and Logging
Helm Installation

The SON handler microservice can be deployed using Helm charts from the OOM repository.

Deployment Prerequisites
  • SON-Handler service requires the config-binding-service, policy, dmaap and aaf components to be running.

  • The following topics must be created in DMaaP:

    curl --header "Content-type: application/json" --request POST --data '{"topicName": "DCAE_CL_RSP"}' http://<DMAAP_IP>:3904/events/DCAE_CL_RSP
    curl --header "Content-type: application/json" --request POST --data '{"topicName": "unauthenticated.SEC_FAULT_OUTPUT"}' http://<DMAAP_IP>:3904/events/unauthenticated.SEC_FAULT_OUTPUT
    curl --header "Content-type: application/json" --request POST --data '{"topicName": "unauthenticated.VES_MEASUREMENT_OUTPUT"}' http://<DMAAP_IP>:3904/events/unauthenticated.VES_MEASUREMENT_OUTPUT
    curl --header "Content-type: application/json" --request POST --data '{"topicName": "unauthenticated.DCAE_CL_OUTPUT"}' http://<DMAAP_IP>:3904/events/unauthenticated.DCAE_CL_OUTPUT
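
    Optionally, confirm that the topics exist by listing them via the Message Router topics API; a minimal check with a placeholder host:

     curl "http://<DMAAP_IP>:3904/topics"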
    
  • Policies required for SON-handler service should be created and pushed to the policy component. Steps for creating and pushing policy models:

    1. Login to the policy-drools-pdp-0 container:

    kubectl exec -ti --namespace <namespace> policy-pdp-0 bash
    
    2. Create Modify Config policy:

      curl -k --silent --user 'healthcheck:zb!XztG34' -X POST "https://policy-api:6969/policy/api/v1/policytypes/onap.policies.controlloop.operational.common.Drools/versions/1.0.0/policies" -H "Accept: application/json" -H "Content-Type: application/json" --data-raw '{"tosca_definitions_version":"tosca_simple_yaml_1_1_0","topology_template":{"policies":[{"operational.pcihandler":{"type":"onap.policies.controlloop.operational.common.Drools","type_version":"1.0.0","name":"operational.pcihandler","version":"1.0.0","metadata":{"policy-id":"operational.pcihandler"},"properties":{"controllerName":"usecases","id":"ControlLoop-vPCI-fb41f388-a5f2-11e8-98d0-529269fb1459","timeout":900,"abatement":false,"trigger":"unique-policy-id-123-modifyconfig","operations":[{"id":"unique-policy-id-123-modifyconfig","description":"Modify the packet generator","operation":{"actor":"SDNR","operation":"ModifyConfig","target":{"targetType":"PNF"}},"timeout":300,"retries":0,"success":"final_success","failure":"final_failure","failure_timeout":"final_failure_timeout","failure_retries":"final_failure_retries","failure_exception":"final_failure_exception","failure_guard":"final_failure_guard"}]}}}]}}'
      
    3. Push Modify Config policy:

      curl --silent -k --user 'healthcheck:zb!XztG34' -X POST "https://policy-pap:6969/policy/pap/v1/pdps/policies" -H "Accept: application/json" -H "Content-Type: application/json" --data-raw '{"policies":[{"policy-id":"operational.pcihandler","policy-version":1}]}'
      
    4. Create Modify Config ANR policy:

      curl -k --silent --user 'healthcheck:zb!XztG34' -X POST "https://policy-api:6969/policy/api/v1/policytypes/onap.policies.controlloop.operational.common.Drools/versions/1.0.0/policies" -H "Accept: application/json" -H "Content-Type: application/json" --data-raw '{"tosca_definitions_version":"tosca_simple_yaml_1_1_0","topology_template":{"policies":[{"operational.sonhandler":{"type":"onap.policies.controlloop.operational.common.Drools","type_version":"1.0.0","name":"operational.sonhandler","version":"1.0.0","metadata":{"policy-id":"operational.sonhandler"},"properties":{"controllerName":"usecases","id":"ControlLoop-vSONH-7d4baf04-8875-4d1f-946d-06b874048b61","timeout":900,"abatement":false,"trigger":"unique-policy-id-123-modifyconfig","operations":[{"id":"unique-policy-id-123-modifyconfig","description":"Modify the packet generator","operation":{"actor":"SDNR","operation":"ModifyConfigANR","target":{"targetType":"PNF"}},"timeout":300,"retries":0,"success":"final_success","failure":"final_failure","failure_timeout":"final_failure_timeout","failure_retries":"final_failure_retries","failure_exception":"final_failure_exception","failure_guard":"final_failure_guard"}]}}}]}}'
      
    5. Push Modify Config ANR policy:

      curl --silent -k --user 'healthcheck:zb!XztG34' -X POST "https://policy-pap:6969/policy/pap/v1/pdps/policies" -H "Accept: application/json" -H "Content-Type: application/json" --data-raw '{"policies":[{"policy-id":"operational.sonhandler","policy-version":1}]}'
      
    6. Create policy type:

      curl -k --silent --user 'healthcheck:zb!XztG34' -X POST "https://policy-api:6969/policy/api/v1/policytypes" -H "Accept: application/json" -H "Content-Type: application/json" --data-raw '{"policy_types":{"onap.policies.monitoring.docker.sonhandler.app":{"derived_from":"onap.policies.Monitoring:1.0.0","description":"son handler policy type","properties":{"PCI_MODCONFIGANR_POLICY_NAME":{"required":true,"type":"string"},"PCI_MODCONFIG_POLICY_NAME":{"required":true,"type":"string"},"PCI_NEIGHBOR_CHANGE_CLUSTER_TIMEOUT_IN_SECS":{"required":true,"type":"string"},"PCI_OPTMIZATION_ALGO_CATEGORY_IN_OOF":{"required":true,"type":"string"},"PCI_SDNR_TARGET_NAME":{"required":true,"type":"string"}},"version":"1.0.0"}},"tosca_definitions_version":"tosca_simple_yaml_1_1_0"}'
      
    7. Create monitoring policy:

      curl -k --silent --user 'healthcheck:zb!XztG34' -X POST "https://policy-api:6969/policy/api/v1/policytypes/onap.policies.monitoring.docker.sonhandler.app/versions/1.0.0/policies" -H "Accept: application/json" -H "Content-Type: application/json" --data-raw '{"name":"ToscaServiceTemplateSimple","topology_template":{"policies":[{"com.Config_PCIMS_CONFIG_POLICY":{"metadata":{"policy-id":"com.Config_PCIMS_CONFIG_POLICY","policy-version":"1"},"name":"com.Config_PCIMS_CONFIG_POLICY","properties":{"PCI_MODCONFIGANR_POLICY_NAME":"ControlLoop-vSONH-7d4baf04-8875-4d1f-946d-06b874048b61","PCI_MODCONFIG_POLICY_NAME":"ControlLoop-vPCI-fb41f388-a5f2-11e8-98d0-529269fb1459","PCI_NEIGHBOR_CHANGE_CLUSTER_TIMEOUT_IN_SECS":60,"PCI_OPTMIZATION_ALGO_CATEGORY_IN_OOF":"OOF-PCI-OPTIMIZATION","PCI_SDNR_TARGET_NAME":"SDNR"},"type":"onap.policies.monitoring.docker.sonhandler.app","type_version":"1.0.0","version":"1.0.0"}}]},"tosca_definitions_version":"tosca_simple_yaml_1_1_0","version":"1.0.0"}'
      
    8. Push monitoring policy:

      curl --silent -k --user 'healthcheck:zb!XztG34' -X POST "https://policy-pap:6969/policy/pap/v1/pdps/policies" -H "Accept: application/json" -H "Content-Type: application/json" --data-raw '{"policies":[{"policy-id":"com.Config_PCIMS_CONFIG_POLICY","policy-version":1}]}'
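
      To verify that the policies were pushed, the PDP group state can be queried from PAP; a minimal sketch using the same healthcheck credentials as above:

       curl -k --silent --user 'healthcheck:zb!XztG34' "https://policy-pap:6969/policy/pap/v1/pdps" -H "Accept: application/json"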
      
Deployment Steps
  • Default app config values can be updated in oom/kubernetes/dcaegen2-services/components/dcae-son-handler/values.yaml

  • Update the monitoring policy ID in the below configuration, which enables the Policy-Sync sidecar container to be deployed; the sidecar retrieves the active policy configuration.

    dcaePolicySyncImage: onap/org.onap.dcaegen2.deployments.dcae-services-policy-sync:1.0.1
    policies:
      policyID: |
       '["com.Config_PCIMS_CONFIG_POLICY"]'
    
  • Update Config db IP address:

    sonhandler.configDb.service: http://<configDB-IPAddress>:8080
    
  • Enable sonhandler component in oom/kubernetes/dcaegen2-services/values.yaml

    dcae-son-handler:
        enabled: true
    
  • Make the chart and deploy using the following command:

    cd oom/kubernetes/
    make dcaegen2-services
    helm install dev-dcaegen2-services dcaegen2-services --namespace <namespace> --set global.masterPassword=<password>
    
  • To deploy only son-handler:

    helm install dev-son-handler dcaegen2-services/components/dcae-son-handler --namespace <namespace> --set global.masterPassword=<password>
    
  • To uninstall:

    helm uninstall dev-son-handler
    
Application Configurations

streams_subscribes
    DMaaP topics from which the MS will consume messages

streams_publishes
    DMaaP topics to which the MS will publish messages

postgres.host
    Host where the Postgres database is running

postgres.port
    Port where the Postgres database is running

postgres.username
    Postgres username

postgres.password
    Postgres password

sonhandler.pollingInterval
    Polling interval for consuming DMaaP messages

sonhandler.pollingTimeout
    Polling timeout for consuming DMaaP messages

sonhandler.numSolutions
    Number of solutions for OOF optimization

sonhandler.minCollision
    Minimum collision criteria to trigger OOF

sonhandler.minConfusion
    Minimum confusion criteria to trigger OOF

sonhandler.maximumClusters
    Maximum number of clusters the MS can process

sonhandler.badThreshold
    Bad threshold for handover success rate

sonhandler.poorThreshold
    Poor threshold for handover success rate

sonhandler.namespace
    Namespace where the MS is going to be deployed

sonhandler.sourceId
    Source ID of the microservice (to OOF)

sonhandler.dmaap.server
    Location of message routers

sonhandler.bufferTime
    Buffer time for the MS to wait for notifications

sonhandler.cg
    DMaaP consumer group for subscription

sonhandler.cid
    DMaaP consumer ID for subscription

sonhandler.configDbService
    Location of Config DB (protocol, host & port)

sonhandler.oof.service
    Location of OOF (protocol, host & port)

sonhandler.optimizers
    Optimizer to trigger in OOF

sonhandler.poorCountThreshold
    Threshold for the number of times poorThreshold can be recorded for the cell

sonhandler.badCountThreshold
    Threshold for the number of times badThreshold can be recorded for the cell

sonhandler.oofTriggerCountTimer
    Timer for OOF triggered count, in minutes

sonhandler.policyRespTimer
    Timer to wait for notification from Policy

sonhandler.policyNegativeAckThreshold
    Maximum number of negative acknowledgements from Policy for a given cell

sonhandler.policyFixedPciTimeInterval
    Time interval to trigger OOF with fixed PCI cells

sonhandler.nfNamingCode
    Parameter to filter FM and PM notifications coming from VES

Troubleshooting steps
  1. Microservice stops and restarts during startup

    Possible reasons & Solutions:
    1. The microservice is not registered with Consul
      • Check Consul to verify that the microservice is registered and that the MS is able to fetch the app config from CBS. Check whether CBS and Consul are deployed properly and try to redeploy the MS.

      The below logs will be seen if CBS is not reachable by the MS

    15:14:13.861 [main] WARN org.postgresql.Driver - JDBC URL port: 0 not valid (1:65535)

    15:14:13.862 [main] WARN o.s.b.w.s.c.AnnotationConfigServletWebServerApplicationContext - Exception encountered during context initialization - cancelling refresh attempt: org.springframework.beans.factory.UnsatisfiedDependencyException: Error creating bean with name 'org.springframework.boot.autoconfigure.orm.jpa.HibernateJpaConfiguration': Unsatisfied dependency expressed through constructor parameter 0; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'dataSource' defined in org.onap.dcaegen2.services.sonhms.Application: Initialization of bean failed; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'org.springframework.boot.autoconfigure.jdbc.DataSourceInitializerInvoker': Invocation of init method failed; nested exception is org.springframework.jdbc.datasource.init.UncategorizedScriptException: Failed to execute database script; nested exception is java.lang.RuntimeException: Driver org.postgresql.Driver claims to not accept jdbcUrl, jdbc:postgresql://null:0/sonhms
    15:14:13.865 [main] INFO o.a.catalina.core.StandardService - Stopping service [Tomcat]
    15:14:13.877 [main] INFO o.s.b.a.l.ConditionEvaluationReportLoggingListener - Error starting ApplicationContext. To display the conditions report re-run your application with 'debug' enabled.
    15:14:13.880 [main] ERROR o.s.boot.SpringApplication - Application run failed

    2. The MS is not able to fetch the config policies from the policy handler.
      • Check whether the config policy for the MS is created and pushed into the policy module. The below logs will be seen if the config policies are not available.

      2019-05-16 14:48:48.651 LOG <sonhms> [son_policy_widelm.create] INFO: latest policy for policy_id(com.Config_PCIMS_CONFIG_POLICY.1.xml) status(404) response: {}
      2019-05-16 14:48:49.661 LOG <sonhms> [son_policy_widelm.create] INFO: exit policy_get
      2019-05-16 14:48:49.661 LOG <sonhms> [son_policy_widelm.create] INFO: policy not found for policy_id com.Config_PCIMS_CONFIG_POLICY.1.xml
      2019-05-16 14:48:49.456 CFY <sonhms> [son_policy_widelm.create] Task succeeded 'dcaepolicyplugin.policy_get'
      2019-05-16 14:48:50.283 CFY <sonhms> [son_policy_widelm] Configuring node
      2019-05-16 14:48:50.283 CFY <sonhms> [son_policy_widelm] Configuring node
      2019-05-16 14:48:51.333 CFY <sonhms> [son_policy_widelm] Starting node
      2019-05-16 14:50:02.996 LOG <sonhms> [pgaasvm_fb20w3.create] WARNING: All done
      2019-05-16 14:50:02.902 CFY <sonhms> [pgaasvm_fb20w3.create] Task succeeded 'pgaas.pgaas_plugin.create_database'
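
      A direct way to check that the config policy exists is to fetch it from the Policy API; a sketch using the sample policy ID created in the installation steps above:

       curl -k --silent --user 'healthcheck:zb!XztG34' "https://policy-api:6969/policy/api/v1/policytypes/onap.policies.monitoring.docker.sonhandler.app/versions/1.0.0/policies/com.Config_PCIMS_CONFIG_POLICY/versions/1.0.0" -H "Accept: application/json"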

Logging
  1. Logs can be found either from the Kubernetes UI or from kubectl. Since the MS is deployed as a pod in Kubernetes, you can check the logs by using the command

    kubectl logs <pod-name> --namespace onap

Miscellaneous Services

VES OpenAPI Manager

VES OpenAPI Manager has been created to validate the presence of OpenAPI schemas declared in VES_EVENT type artifacts within the DCAE run-time environment during Service Model distribution in SDC. When deployed, it automatically listens to Service Model distribution events by using the SDC Distribution Client in order to read the declared OpenAPI descriptions. The purpose of this component is to partially validate artifacts of type VES_EVENT from Resources of distributed services. During the validation phase it checks whether stndDefined events defined in a VES_EVENT type artifact contain only schemaReferences whose local copies are accessible by the DCAE VES Collector. If any schemaReference is absent from the local externalSchema repository, the VES OpenAPI Manager informs the ONAP user which schemas need to be uploaded to the DCAE run-time environment.

VES OpenAPI Manager overview and functions
VES OpenAPI Manager architecture

Functionalities of VES OpenAPI Manager require communication with other ONAP components. Because of that, the SDC Distribution Client has been used as a library to achieve such communication. There are two components required for the application to work: SDC BE and Message Router. The SDC Distribution Client provides communication with both of them when it's properly configured (for the application configuration instructions refer to: VES OpenAPI Manager deployment).

_images/architecture.png
VES OpenAPI Manager workflow

VES OpenAPI Manager workflow can be split into phases:

  1. Listening for Service Model distribution events

  2. Optional downloading of artifacts, depending on Service Model contents. At least one Service Model resource must contain VES_EVENT type artifacts.

  3. Optional validation of artifacts, depending on the content of the downloaded artifacts. The artifact must contain a stndDefined event declaration.

VES OpenAPI Manager workflow is presented in the diagram below.

_images/workflow.png
VES OpenAPI Manager artifacts and delivery

VES OpenAPI Manager is delivered as a Docker container and published in the ONAP Nexus repository following the image naming convention.

Image

Full image name is onap/org.onap.dcaegen2.platform.ves-openapi-manager.

Versioning

VES OpenAPI Manager keeps its Changelog in the repository. It’s available here: Changelog

Use latest image tag to get the most recent version of VES OpenAPI Manager.

Repository

Repository with the code of VES OpenAPI Manager is available on ONAP Gerrit: Gerrit

VES OpenAPI Manager deployment

VES OpenAPI Manager is a simple Java application which requires only Java 11+ to run, yet it has some prerequisites to work correctly:

  1. File with OpenAPI schemas mappings.

  2. Access to two ONAP services: SDC BE and Message Router.

These prerequisites are met by default when using the Helm charts created for VES OpenAPI Manager in OOM. This is described in more detail in the Helm chart section.

There is also a simple configuration via environment variables, which are optional. It's described in more detail in the Environment variables section.

File with OpenAPI schemas mappings

VES OpenAPI Manager checks whether the schemaReferences of a distributed service align with the stndDefined schemas from VES Collector. To achieve that, the application should receive a file with the mappings used by VES. Because there are a few ways to run the application, it contains its own default file which assures that the application will work. The default file may be overwritten or edited at any time, even during application runtime.

The Helm charts which are used to deploy the application in an ONAP cluster are configured to overwrite the default mapping file with the file from a predefined ConfigMap (named dcae-external-repo-configmap-schema-map), which is also used by VES Collector. Using the ConfigMap ensures that both the VES OpenAPI Manager and the VES Collector use the exact same file.

Warning

VES OpenAPI Manager does not check if the used mapping file is the same file that VES uses. Within ONAP, the working assumption is that both the VES OpenAPI Manager and the VES Collector leverage the same Kubernetes ConfigMaps, which contain the schema-mapping file and the respective OpenAPI descriptions.

VES OpenAPI Manager has a configurable property which contains the path to the mapping file. It has to be set before the application startup. This can be done by setting the environment variable SCHEMA_MAP_PATH. The Helm charts are preconfigured to set this variable.

Environment variables

There are environment variables which must be used for configuration. The Helm chart contains predefined values which are valid when running VES OpenAPI Manager from its released image in the ONAP cluster.

SCHEMA_MAP_PATH
    Path to the mapping file. Helm chart value: /app/schema-map.json

ASDC_ADDRESS
    URL to SDC BE. Helm chart value: sdc-be:8443
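
For a local run outside Helm, the variables can simply be exported before starting the application; a sketch with an assumed jar name (use the artifact produced by the project build):

    export SCHEMA_MAP_PATH=/app/schema-map.json
    export ASDC_ADDRESS=sdc-be:8443
    # hypothetical jar name
    java -jar ves-openapi-manager.jar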

Helm chart

By default VES OpenAPI Manager is deployed via Helm as a DCAE subcomponent in the ONAP cluster. The Helm chart is configured to deploy the application with all prerequisites met. It achieves that by:

  1. Mounting ConfigMap with mapping file under /app/schema-map.json path.

  2. Properly setting environment variables to the values described in the Environment variables section. The mapping file path is set to point to the mounted file and the SDC BE URL is set to an internal port available only from the Kubernetes cluster.

  3. Setting a readiness check. It waits for other ONAP components to start: SDC BE and Message Router. The VES OpenAPI Manager Pod will not start until they are ready.

Local deployment

It's possible to run VES OpenAPI Manager in a local environment which connects to an external lab with ONAP. This approach requires exposing ports of some services on the lab, creating local port tunneling and running VES OpenAPI Manager (using docker-compose or an IDE, e.g. IntelliJ).

It’s described in more detail in the README in project repository (README).

VES OpenAPI Manager validation use-cases

The main VES OpenAPI Manager use case is to verify that the schemaReferences declared in VES_EVENT type artifacts are present in the local DCAE run-time externalSchemaRepo, and to show validation results to the user in the SDC UI.

The general flow of VES OpenAPI Manager is available here VES OpenAPI Manager workflow.

Based on the referenced flow, there are a few possible behaviours of VES OpenAPI Manager. In this section the two main flows, successful and unsuccessful validation, are described step by step.

Validation prerequisites

The validation phase takes place only when the following conditions are met.

  1. VES OpenAPI Manager is properly configured: the client is connected to SDC, and the mapping file is present and referenced in the configuration. Configuration is described in detail here: VES OpenAPI Manager deployment.

  2. Distribution of a Service Model takes place in SDC.

  3. The Service contains a VES_EVENT type artifact.

  4. The artifact content is correctly downloaded.

Validation description

When the schemaReference field from an artifact is being validated, only the part of the URI that indicates the public OpenAPI description file location is taken into consideration.

For example, when a schemaReference with value https://forge.3gpp.org/rep/sa5/MnS/blob/SA88-Rel16/OpenAPI/faultMnS.yaml#/components/schemas/NotifyNewAlarm is found in an artifact, only the part before the # sign (the public OpenAPI description file location URI part) is validated. In this case the validated part is https://forge.3gpp.org/rep/sa5/MnS/blob/SA88-Rel16/OpenAPI/faultMnS.yaml (see the shell sketch below).
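
The split itself is just a truncation at the first # character; an illustrative shell one-liner (not the component's actual Java implementation):

    ref="https://forge.3gpp.org/rep/sa5/MnS/blob/SA88-Rel16/OpenAPI/faultMnS.yaml#/components/schemas/NotifyNewAlarm"
    # keep everything before the first '#', i.e. the public OpenAPI file location
    echo "${ref%%#*}"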

The mapping file must have a predefined JSON format: a list of objects (mappings) with publicURL and localURL fields. An example with 3 mappings:

[
  {
    "publicURL": "https://forge.3gpp.org/rep/sa5/MnS/tree/SA88-Rel16/OpenAPI/comDefs.yaml",
    "localURL": "3gpp/rep/sa5/MnS/tree/SA88-Rel16/OpenAPI/comDefs.yaml"
  },
  {
    "publicURL": "https://forge.3gpp.org/rep/sa5/MnS/tree/SA88-Rel16/OpenAPI/coslaNrm.yaml",
    "localURL": "3gpp/rep/sa5/MnS/tree/SA88-Rel16/OpenAPI/coslaNrm.yaml"
  },
  {
    "publicURL": "https://forge.3gpp.org/rep/sa5/MnS/blob/SA88-Rel16/OpenAPI/faultMnS.yaml",
    "localURL": "3gpp/rep/sa5/MnS/tree/SA88-Rel16/OpenAPI/faultMnS.yaml"
  }
]

When a schemaReference is split, it's compared to each publicURL from the mapping file. If no publicURL in the mapping file matches the schemaReference, the schemaReference is marked as invalid. This process is executed for all stndDefined events defined in the VES_EVENT artifact which declare a schemaReference. All invalid references are returned to the user via the SDC UI when validation of the complete artifact ends.

Based on the returned information about invalid references, the user can take action, e.g. add mappings and schemas to the DCAE run-time environment by editing the ConfigMaps which store them (see the example after the table below).

dcae-external-repo-configmap-schema-map
    Mapping file

dcae-external-repo-configmap-sa88-rel16
    OpenAPI schemas content; this example stores 3GPP SA88-Rel16 schemas
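
For example, the mapping ConfigMap can be edited in place; a minimal sketch, assuming the onap namespace:

    kubectl -n onap edit configmap dcae-external-repo-configmap-schema-map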

Successful validation case

There are a few ways to get a successful validation status - DEPLOY_OK.

  1. When the VES_EVENT artifact does not contain stndDefined event definitions. Only stndDefined events are validated.

  2. When the VES_EVENT artifact contains stndDefined event definitions but schemaReference fields are not present.

  3. When the VES_EVENT artifact contains stndDefined event definitions and each schemaReference of the event is present in the mapping file.

A VES_EVENT artifact may contain more than one event definition. Examples of valid artifacts with single events are below.

Example of valid artifact without stndDefined event definition (case 1):

---
event:
  presence: required
  structure:
    commonEventHeader:
      presence: required
      structure:
        domain: {presence: required, value: notification}
        eventName: {presence: required, value: Noti_MyPnf-Acme_FileReady}
        priority: {presence: required, value: Normal}
        eventId: {presence: required}
        reportingEntityId: {presence: required}
        reportingEntityName: {presence: required}
        sequence: {presence: required, value: 0}
        sourceId: {presence: required}
        sourceName: {presence: required}
        version: {presence: required, value: 4.0.1}
        vesEventListenerVersion: {presence: required, value: 7.0.1}
        startEpochMicrosec: {presence: required}
        lastEpochMicrosec: {presence: required}
    notificationFields:
      presence: required
      structure:
        changeIdentifier: {presence: required, value: PM_MEAS_FILES}
        changeType: {presence: required, value: fileReady}
        notificationFieldsVersion: {presence: required, value: 2.0}
        arrayOfNamedHashMap:
          presence: required
          array:
            - name: {presence: required}
              hashMap: {presence: required, structure: {
                keyValuePair: {presence: required, structure: {key: {presence: required, value: location}, value: {presence: required}}},
                keyValuePair: {presence: required, structure: {key: {presence: required, value: compression}, value: {presence: required, value: gzip}}},
                keyValuePair: {presence: required, structure: {key: {presence: required, value: fileFormatType}, value: {presence: required, value: org.3GPP.32.435}}},
                keyValuePair: {presence: required, structure: {key: {presence: required, value: fileFormatVersion}, value: {presence: required, value: V10}}}}
              }
...

Example of valid artifact with stndDefined event definition, but without schemaReference field (case 2):

---
event:
  presence: required
  comment: "stndDefined event to support 3GPP FaultSupervision NotifyNewAlarm notification"
  structure:
    commonEventHeader:
      presence: required
      structure:
        domain: {presence: required, value: stndDefined}
        eventName: {presence: required, value: stndDefined-gNB-Nokia-Notification}
        priority: {presence: required, value: Normal}
        eventId: {presence: required}
        reportingEntityId: {presence: required}
        reportingEntityName: {presence: required}
        sequence: {presence: required, value: 0}
        sourceId: {presence: required}
        sourceName: {presence: required}
        version: {presence: required, value: 4.1}
        vesEventListenerVersion: {presence: required, value: 7.2}
        startEpochMicrosec: {presence: required}
        lastEpochMicrosec: {presence: required}
        stndDefinedNamespace: {presence: required, value: "3GPP-FaultSupervision"}
    stndDefinedFields:
      presence: required
      structure:
        data: {presence: required}
        stndDefinedFieldsVersion: {presence: required, value: "1.0"}

...

Example of artifact with stndDefined event definition (case 3):

---
event:
  presence: required
  comment: "stndDefined event to support 3GPP FaultSupervision NotifyNewAlarm notification"
  structure:
    commonEventHeader:
      presence: required
      structure:
        domain: {presence: required, value: stndDefined}
        eventName: {presence: required, value: stndDefined-gNB-Nokia-Notification}
        priority: {presence: required, value: Normal}
        eventId: {presence: required}
        reportingEntityId: {presence: required}
        reportingEntityName: {presence: required}
        sequence: {presence: required, value: 0}
        sourceId: {presence: required}
        sourceName: {presence: required}
        version: {presence: required, value: 4.1}
        vesEventListenerVersion: {presence: required, value: 7.2}
        startEpochMicrosec: {presence: required}
        lastEpochMicrosec: {presence: required}
        stndDefinedNamespace: {presence: required, value: "3GPP-FaultSupervision"}
    stndDefinedFields:
      presence: required
      structure:
        schemaReference: { presence: required, value: "https://forge.3gpp.org/rep/sa5/MnS/blob/SA88-Rel16/OpenAPI/faultMnS.yaml#/components/schemas/NotifyNewAlarm" }
        data: {presence: required}
        stndDefinedFieldsVersion: {presence: required, value: "1.0"}

...

which is valid when the mapping file contains a mapping for the schemaReference field. Example of mapping file content which makes the example artifact valid:

[
  {
    "publicURL": "https://forge.3gpp.org/rep/sa5/MnS/blob/SA88-Rel16/OpenAPI/faultMnS.yaml",
    "localURL": "3gpp/rep/sa5/MnS/tree/SA88-Rel16/OpenAPI/faultMnS.yaml"
  }
]
Unsuccessful validation case

Another case is the unsuccessful validation case, which sends the status DEPLOY_ERROR with an error message listing the schemaReferences that are missing from the mapping file. The fail case occurs:

  1. When the VES_EVENT artifact contains stndDefined event definitions and any schemaReference is not present in the mapping file.

Example of artifact with stndDefined event definition:

---
event:
  presence: required
  comment: "stndDefined event to support 3GPP FaultSupervision NotifyNewAlarm notification"
  structure:
    commonEventHeader:
      presence: required
      structure:
        domain: {presence: required, value: stndDefined}
        eventName: {presence: required, value: stndDefined-gNB-Nokia-Notification}
        priority: {presence: required, value: Normal}
        eventId: {presence: required}
        reportingEntityId: {presence: required}
        reportingEntityName: {presence: required}
        sequence: {presence: required, value: 0}
        sourceId: {presence: required}
        sourceName: {presence: required}
        version: {presence: required, value: 4.1}
        vesEventListenerVersion: {presence: required, value: 7.2}
        startEpochMicrosec: {presence: required}
        lastEpochMicrosec: {presence: required}
        stndDefinedNamespace: {presence: required, value: "3GPP-FaultSupervision"}
    stndDefinedFields:
      presence: required
      structure:
        schemaReference: { presence: required, value: "https://forge.3gpp.org/rep/sa5/MnS/blob/SA88-Rel16/OpenAPI/faultMnS.yaml#/components/schemas/NotifyNewAlarm" }
        data: {presence: required}
        stndDefinedFieldsVersion: {presence: required, value: "1.0"}

...

which is invalid when the mapping file does not contain a mapping for the schemaReference field. Example of a mapping file which makes the example artifact invalid:

[
  {
    "publicURL": "https://forge.3gpp.org/rep/sa5/MnS/tree/SA88-Rel16/OpenAPI/streamingDataMnS.yaml",
    "localURL": "3gpp/rep/sa5/MnS/tree/SA88-Rel16/OpenAPI/streamingDataMnS.yaml"
  }
]
Validation results

There are two ways to receive validation results.

  1. Via the SDC UI. Results are available in the Service->Distributions view. To see results in the SDC UI the user may have to wait up to a few minutes.

  2. In VES OpenAPI Manager logs. They are printed right after validation.

DCAE Release Notes

Version: 9.0.1

Abstract

This document provides the release notes for the Istanbul Maintenance release

Summary

This maintenance release is primarily to resolve bugs identified during Istanbul release testing.

Release Data

Project: DCAE

Docker images: See Istanbul Maintenance Release Deliverables (below)

Release designation: Istanbul Maintenance Release

Release date: 2022/01/31

New features

None

Bug fixes

Known Issues

None

Security Notes

Known Vulnerabilities in Used Modules

dcaegen2/services/mapper includes a transitive dependency on log4j 1.2.17; this will be addressed in a later release (DCAEGEN2-3105)

Istanbul Maintenance Release Deliverables

Software Deliverables

dcaegen2/collectors/restconf
    onap/org.onap.dcaegen2.collectors.restconfcollector:1.2.7

dcaegen2/collectors/ves
    onap/org.onap.dcaegen2.collectors.ves.vescollector:1.10.3

dcaegen2/services/mapper
    onap/org.onap.dcaegen2.services.mapper.vesadapter.universalvesadaptor:1.3.2

Version: 9.0.0

Abstract

This document provides the release notes for the Istanbul release.

Summary

The following DCAE components are available with the default ONAP/DCAE installation.

  • Platform components

    • Cloudify Manager (helm chart)*

    • Bootstrap container (helm chart)*

    • Configuration Binding Service (helm chart)

    • Deployment Handler (helm chart)*

    • Policy Handler (helm chart)*

    • Service Change Handler (helm chart)*

    • Inventory API (helm chart)*

    • Dashboard (helm chart)*

    • VES OpenAPI Manager (helm chart)

  • Service components

    • VES Collector (helm chart & cloudify blueprint)

    • HV-VES Collector (helm chart & cloudify blueprint)

    • PNF-Registration Handler (helm chart & cloudify blueprint)

    • Docker based Threshold Crossing Analytics (TCA-Gen2) (helm chart & cloudify blueprint)

  • Additional resources that DCAE utilizes deployed using ONAP common charts:

    • Postgres Database

    • Mongo Database

    • Consul Cluster

* These components will be retired in the next ONAP release, as Cloudify deployments will be disabled after Istanbul.

Below service components (mS) are available to be deployed on-demand (helm chart & Cloudify Blueprint)

  • SNMPTrap Collector

  • RESTConf Collector

  • DataFile Collector

  • PM-Mapper

  • BBS-EventProcessor

  • VES Mapper

  • Heartbeat mS

  • SON-Handler

  • PM-Subscription Handler

  • DataLake Handler (Admin and Feeder)

  • Slice Analysis mS

  • DataLake Extraction Service

  • KPI-Ms

Under OOM, all DCAE component containers are deployed as Kubernetes Pods/Deployments/Services into the Kubernetes cluster. With the DCAE transformation to Helm in the Istanbul release, all DCAE components are available to be deployed under Helm; Cloudify blueprint deployment is provided for backward compatibility support in this release.

For Helm managed microservices, the dependencies/prerequisites are identified in each chart individually. In general, most DCAE microservices rely on Consul/Config Binding Service for sourcing configuration updates (this dependency will be removed in the next release). Each microservice can be deployed independently and, based on the dcaegen2-services-common template, features can be enabled or disabled via configuration override during deployment. For the list of supported features in Helm refer to Using Helm to deploy DCAE Microservices.

DCAE continues to provide Cloudify deployment through plugins (cloudify) that are capable of expanding a Cloudify blueprint node specification for a service component to a full Kubernetes specification, with additional enhancements such as replica scaling and a sidecar for logging to the ONAP ELK stack.

Release Data

Project: DCAE

Docker images: Refer Deliverables

Release designation: 9.0.0 Istanbul

Release date: 2021-11-18

New features

DCAE Enhancements

DCAEGEN2-2771 DCAE Impacts for E2E Network Slicing in Istanbul release
  • Enhance SliceAnalysis and KPI-Computation MS to interface with CPS; integration with new CBS client SDK and support policy sidecar

DCAEGEN2-2703 Add stndDefined domain to HV-VES
  • Adapt HV-VES to support the stndDefined domain introduced under the VES 7.2.1 spec

DCAEGEN2-2630 DCAE Helm Transformation (Phase 2)
  • Since Honolulu, 13 additional MS have been delivered for Helm deployment

  • DCAE Service helm deployment is supported through implementing common functions as named templates/functions defined in the dcaegen2-services-common charts. Several new common features have been added in a generic fashion, and components/mS can enable required features via configuration override:
    • K8S Secret/Environment mapping

    • CMPv2 Certificate support

    • Policy Sidecar

    • Mount data from configmap through PV/PVC

    • Topic/feed provisioning support

  • SDK libraries (Java and Python) have been enhanced to support configuration retrieval from files

  • Helm-generator tool is available for generating a DCAE component helm chart from a given component spec

DCAEGEN2-2541 Bulk PM (PMSH) - Additional use cases, deployment and documentation enhancements
  • Enhanced the PMSH microservice to support subscription property updates, config updates supporting 'n' subscriptions, and resource names in filters

DCAEGEN2-2522 Enhancements for OOF SON use case
  • Implemented CPS client and switched to new CBS client SDK for removing consul dependency and enabling policy configuration through sidecar.

Non-Functional

  • Removed GPLv3 license from software by switching to onap/integration base images (DCAEGEN2-2455)

  • CII Badging improvements (DCAEGEN2-2622)

  • Healthcheck container Py3 upgrade (DCAEGEN2-2737)

  • Vulnerability updates for several DCAE MS (TCA-gen2, DataFileCollector, RESTConf, VES, Mapper, PM-Mapper, PRH, SON-handler, KPI-MS, Slice-Analysis MS) (DCAEGEN2-2768)

Bug Fixes

  • BPGenerator yaml Fixes are different for yaml file and string (DCAEGEN2-2489)

  • Slice Analysis - Avoid removal of data when insufficient samples are present (DCAEGEN2-2509)

Deliverables

Software Deliverables

dcaegen2/analytics/tca-gen2
    onap/org.onap.dcaegen2.analytics.tca-gen2.dcae-analytics-tca-web:1.3.1

dcaegen2/collectors/datafile
    onap/org.onap.dcaegen2.collectors.datafile.datafile-app-server:1.6.1

dcaegen2/collectors/hv-ves
    onap/org.onap.dcaegen2.collectors.hv-ves.hv-collector-main:1.9.1

dcaegen2/collectors/restconf
    onap/org.onap.dcaegen2.collectors.restconfcollector:1.2.5

dcaegen2/collectors/snmptrap
    onap/org.onap.dcaegen2.collectors.snmptrap:2.0.5

dcaegen2/collectors/ves
    onap/org.onap.dcaegen2.collectors.ves.vescollector:1.10.1

dcaegen2/deployments (cm-container)
    onap/org.onap.dcaegen2.deployments.cm-container:4.6.1

dcaegen2/deployments (consul-loader-container)
    onap/org.onap.dcaegen2.deployments.consul-loader-container:1.1.1

dcaegen2/deployments (dcae-k8s-cleanup-container)
    onap/org.onap.dcaegen2.deployments.dcae-k8s-cleanup-container:1.0.0

dcaegen2/deployments (healthcheck-container)
    onap/org.onap.dcaegen2.deployments.healthcheck-container:2.2.0

dcaegen2/deployments (tls-init-container)
    onap/org.onap.dcaegen2.deployments.tls-init-container:2.1.0

dcaegen2/deployments (dcae-services-policy-sync)
    onap/org.onap.dcaegen2.deployments.dcae-services-policy-sync:1.0.1

dcaegen2/platform (mod/onboardingapi)
    onap/org.onap.dcaegen2.platform.mod.onboardingapi:2.12.5

dcaegen2/platform (mod/distributorapi)
    onap/org.onap.dcaegen2.platform.mod.distributorapi:1.1.0

dcaegen2/platform (mod/designtool)
    onap/org.onap.dcaegen2.platform.mod.designtool-web:1.0.2

dcaegen2/platform (mod/genprocessor)
    onap/org.onap.dcaegen2.platform.mod.genprocessor-http:1.0.2

dcaegen2/platform (mod/genprocessor)
    onap/org.onap.dcaegen2.platform.mod.genprocessor-job:1.0.2

dcaegen2/platform (mod/designtool/mod-registry)
    onap/org.onap.dcaegen2.platform.mod.mod-registry:1.0.0

dcaegen2/platform (mod/runtimeapi)
    onap/org.onap.dcaegen2.platform.mod.runtime-web:1.2.3

dcaegen2/platform (adapter/acumos)
    onap/org.onap.dcaegen2.platform.adapter.acumos:1.0.6

dcaegen2/platform/blueprints
    onap/org.onap.dcaegen2.deployments.k8s-bootstrap-container:3.3.5

dcaegen2/platform/configbinding
    onap/org.onap.dcaegen2.platform.configbinding:2.5.4

dcaegen2/platform/deployment-handler
    onap/org.onap.dcaegen2.platform.deployment-handler:4.4.1

dcaegen2/platform/inventory-api
    onap/org.onap.dcaegen2.platform.inventory-api:3.5.2

dcaegen2/platform/policy-handler
    onap/org.onap.dcaegen2.platform.policy-handler:5.1.3

dcaegen2/platform/servicechange-handler
    onap/org.onap.dcaegen2.platform.servicechange-handler:1.4.0

dcaegen2/platform/ves-openapi-manager
    onap/org.onap.dcaegen2.platform.ves-openapi-manager:1.0.1

dcaegen2/services (components/datalake-handler)
    onap/org.onap.dcaegen2.services.datalakefeeder:1.1.1

dcaegen2/services (components/datalake-handler)
    onap/org.onap.dcaegen2.services.datalakeadminui:1.1.1

dcaegen2/services (components/datalake-handler)
    onap/org.onap.dcaegen2.services.datalake.exposure.service:1.1.1

dcaegen2/services (components/pm-subscription-handler)
    onap/org.onap.dcaegen2.services.pmsh:1.3.2

dcaegen2/services (components/slice-analysis-ms)
    onap/org.onap.dcaegen2.services.components.slice-analysis-ms:1.0.6

dcaegen2/services (components/bbs-event-processor)
    onap/org.onap.dcaegen2.services.components.bbs-event-processor:2.1.1

dcaegen2/services (components/kpi-ms)
    onap/org.onap.dcaegen2.services.components.kpi-ms:1.0.1

dcaegen2/services/heartbeat
    onap/org.onap.dcaegen2.services.heartbeat:2.3.1

dcaegen2/services/mapper
    onap/org.onap.dcaegen2.services.mapper.vesadapter.universalvesadaptor:1.3.0

dcaegen2/services/pm-mapper
    onap/org.onap.dcaegen2.services.pm-mapper:1.7.2

dcaegen2/services/prh
    onap/org.onap.dcaegen2.services.prh.prh-app-server:1.7.1

dcaegen2/services/son-handler
    onap/org.onap.dcaegen2.services.son-handler:2.1.5

dcaegen2/platform (mod/bpgenerator)
    Blueprint Generator 1.8.0 (jar)

dcaegen2/services/sdk
    DCAE SDK 1.8.7 (jar)

ccsdk/dashboard
    onap/org.onap.ccsdk.dashboard.ccsdk-app-os:1.4.4

Known Limitations, Issues and Workarounds

DCAEGEN2-2861 - Topics/feeds provisioned through Helm require manual cleanup once the Helm-deployed services are uninstalled. Refer to the document Using Helm to deploy DCAE Microservices for steps to remove topics/feeds provisioned in DMaaP.

Known Vulnerabilities

None

Workarounds

Documented under the corresponding JIRA if applicable.

Security Notes

Fixed Security Issues

Documented in an earlier section.

Known Security Issues

None

Known Vulnerabilities in Used Modules

None

DCAE code has been formally scanned during build time using NexusIQ, and all Critical vulnerabilities have been addressed; items that remain open have been assessed for risk and determined to be false positives. The DCAE open Critical security vulnerabilities and their risk assessment have been documented as part of the project.

Test Results

References

For more information on the ONAP Istanbul release, please see:

  1. ONAP Home Page

  2. ONAP Documentation

  3. ONAP Release Downloads

  4. ONAP Wiki Page

Quick Links:

Version: 8.0.1

Abstract

This document provides the release notes for the Honolulu Maintenance release

Summary

This maintenance release is primarily to resolve bugs identified during Honolulu release testing.

Release Data

Project: DCAE

Docker images: onap/org.onap.ccsdk.dashboard.ccsdk-app-os:1.4.4

Release designation: Honolulu Maintenance Release

Release date: 2021/06/01

New features

None

Bug fixes

  • DCAEGEN2-2751 Dashboard login issue due to oom/common PG upgrade to centos8-13.2-4.6.1

  • CCSDK-3233 Switch to integration base image & vulnerability updates fixes

  • DCAEGEN2-2800 DCAE Healthcheck failure due to Dashboard

  • DCAEGEN2-2869 Fix PRH aai lookup url config

Known Issues

None

Version: 8.0.0

Abstract

This document provides the release notes for the Honolulu release.

Summary

The following DCAE components are available with the default ONAP/DCAE installation.

  • Platform components

    • Cloudify Manager (helm chart)

    • Bootstrap container (helm chart)

    • Configuration Binding Service (helm chart)

    • Deployment Handler (helm chart)

    • Policy Handler (helm chart)

    • Service Change Handler (helm chart)

    • Inventory API (helm chart)

    • Dashboard (helm chart)

    • VES OpenAPI Manager (helm chart)

  • Service components

    • VES Collector (helm chart & cloudify blueprint)

    • HV-VES Collector (helm chart & cloudify blueprint)

    • PNF-Registration Handler (helm chart & cloudify blueprint)

    • Docker based Threshold Crossing Analytics (TCA-Gen2) (helm chart & cloudify blueprint)

    • Holmes Rule Management (helm chart & cloudify blueprint)

    • Holmes Engine Management (helm chart & cloudify blueprint)

  • Additional resources that DCAE utilizes deployed using ONAP common charts:

    • Postgres Database

    • Mongo Database

    • Consul Cluster

Below service components (mS) are available to be deployed on-demand (through Cloudify Blueprint)

  • SNMPTrap Collector

  • RESTConf Collector

  • DataFile Collector

  • PM-Mapper

  • BBS-EventProcessor

  • VES Mapper

  • Heartbeat mS

  • SON-Handler

  • PM-Subscription Handler

  • DataLake Handler (Admin and Feeder)

  • Slice Analysis mS

  • DataLake Extraction Service

  • KPI MS

Notes:

* These components are delivered by the Holmes project.

Under OOM (Kubernetes), all DCAE component containers are deployed as Kubernetes Pods/Deployments/Services into the Kubernetes cluster. DCAE components are deployed using a combination of Helm charts and Cloudify blueprints, as noted above. DCAE provides a Cloudify Manager plugin (k8splugin) that is capable of expanding a Cloudify blueprint node specification for a service component into a full Kubernetes specification, with additional enhancements such as replica scaling, a sidecar for logging to the ONAP ELK stack, registering services to MSB, etc.
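
For illustration, a minimal k8splugin-based blueprint node might look like the following sketch (the node type and property names follow the k8splugin documentation; the component name, image tag, and configuration values are illustrative assumptions):

    node_templates:
      ves-collector:
        type: dcae.nodes.ContainerizedServiceComponent
        properties:
          service_component_type: dcae-ves-collector
          image: nexus3.onap.org:10001/onap/org.onap.dcaegen2.collectors.ves.vescollector:1.8.0
          replicas: 1
          application_config:
            collector.service.port: 8080

At deploy time the k8splugin expands such a node into a Kubernetes Deployment and Service, applying the enhancements listed above.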

Release Data

Project: DCAE

Docker images: Refer to the Deliverables section below

Release designation: 8.0.0 Honolulu

Release date: 2021-04-29

New features

DCAE Enhancements

Functional Updates

  • New service VES-OpenAPI-Manager added to DCAE, which notifies of missing OpenAPI descriptions at the xNF distribution phase (DCAEGEN2-2571)

  • Added VES 7.2.1 support in VESCollector (DCAEGEN2-2539, DCAEGEN2-2477); a sample event submission is sketched after this list

  • DCAE MS deployment through Helm, with the introduction of a common dcae-service template to standardize charts during migration (DCAEGEN2-2488)

  • New service KPI-Computation MS introduced to support the E2E Slicing use case (DCAEGEN2-2521)

  • K8S configMap support through onboarding/design/deployment via DCAE-MOD and DCAE-Platform (DCAEGEN2-2539)

  • BP-generation Enhancements - support Native-kafka & Config-map through onboarding (DCAEGEN2-2458)

  • CFY plugin enhancements - support IPV6 service exposure + Config-Map + Cert-Manager’s CMPv2 issuer integration (DCAEGEN2-2539, DCAEGEN2-2458, DCAEGEN2-2388)

  • DCAE SDK enhancement - Dmaap Client update for timeout/retry + CBS client update (DCAEGEN2-1483)

  • DFC enhancement - support for HTTP/HTTPS and certificate enrollment from a CMPv2 server (DCAEGEN2-2517)
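
As a reference for the VES 7.2.1 support noted in this list, a minimal event submission might look like the following sketch (host, port, credentials, and all field values are illustrative assumptions, not defaults guaranteed by this release):

    curl -k -u <user>:<password> -H 'Content-Type: application/json' \
      -X POST https://dcae-ves-collector:8443/eventListener/v7 \
      -d '{"event": {"commonEventHeader": {"version": "4.1",
            "vesEventListenerVersion": "7.2.1", "domain": "heartbeat",
            "eventName": "Heartbeat_vDCAE", "eventId": "heartbeat-0001",
            "sequence": 0, "priority": "Normal",
            "reportingEntityName": "demo-vnf", "sourceName": "demo-vnf",
            "startEpochMicrosec": 1619000000000000,
            "lastEpochMicrosec": 1619000000000000}}}'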

Non-Functional

  • DCAE Cloudify py3 upgrade including plugins/bootstrap cli (DCAEGEN2-1546)

  • CII Badging improvements (DCAEGEN2-2570)

  • Policy-Handler Py3 upgrade (DCAEGEN2-2494)

  • Vulnerability updates for several DCAE MS (DataFile Collector, RESTConf, VESCollector, InventoryAPI, MOD/RuntimeAPI, VES-mapper, PM-Mapper, PRH, SON-Handler) (DCAEGEN2-2551)

  • Code Coverage improvement (DataFile, SDK, Blueprint-generator, Plugins, Acumos Adapter) (DCAEGEN2-2382)

  • Documentation/user-guide updates

Bug Fixes

  • BPGenerator yaml Fixes are different for yaml file and string (DCAEGEN2-2489)

  • Slice Analysis - Avoid removal of data when insufficient samples are present (DCAEGEN2-2509)

  • The following new services are delivered in this release:
    • VES OpenAPI Manager

    • KPI MS (Analytics/RCA)

Deliverables

Software Deliverables

Repository (SubModule): Version & Docker Image (if applicable)

  • dcaegen2/analytics/tca-gen2: onap/org.onap.dcaegen2.analytics.tca-gen2.dcae-analytics-tca-web:1.2.1

  • dcaegen2/collectors/datafile: onap/org.onap.dcaegen2.collectors.datafile.datafile-app-server:1.5.5

  • dcaegen2/collectors/hv-ves: onap/org.onap.dcaegen2.collectors.hv-ves.hv-collector-main:1.6.0

  • dcaegen2/collectors/restconf: onap/org.onap.dcaegen2.collectors.restconfcollector:1.2.4

  • dcaegen2/collectors/snmptrap: onap/org.onap.dcaegen2.collectors.snmptrap:2.0.4

  • dcaegen2/collectors/ves: onap/org.onap.dcaegen2.collectors.ves.vescollector:1.8.0

  • dcaegen2/deployments (cm-container): onap/org.onap.dcaegen2.deployments.cm-container:4.4.2

  • dcaegen2/deployments (consul-loader-container): onap/org.onap.dcaegen2.deployments.consul-loader-container:1.1.0

  • dcaegen2/deployments (dcae-k8s-cleanup-container): onap/org.onap.dcaegen2.deployments.dcae-k8s-cleanup-container:1.0.0

  • dcaegen2/deployments (healthcheck-container): onap/org.onap.dcaegen2.deployments.healthcheck-container:2.1.0

  • dcaegen2/deployments (tls-init-container): onap/org.onap.dcaegen2.deployments.tls-init-container:2.1.0

  • dcaegen2/deployments (dcae-services-policy-sync): onap/org.onap.dcaegen2.deployments.dcae-services-policy-sync:1.0.0

  • dcaegen2/platform (mod/onboardingapi): onap/org.onap.dcaegen2.platform.mod.onboardingapi:2.12.5

  • dcaegen2/platform (mod/distributorapi): onap/org.onap.dcaegen2.platform.mod.distributorapi:1.1.0

  • dcaegen2/platform (mod/designtool): onap/org.onap.dcaegen2.platform.mod.designtool-web:1.0.2

  • dcaegen2/platform (mod/genprocessor): onap/org.onap.dcaegen2.platform.mod.genprocessor-http:1.0.2

  • dcaegen2/platform (mod/genprocessor): onap/org.onap.dcaegen2.platform.mod.genprocessor-job:1.0.2

  • dcaegen2/platform (mod/designtool/mod-registry): onap/org.onap.dcaegen2.platform.mod.mod-registry:1.0.0

  • dcaegen2/platform (mod/runtimeapi): onap/org.onap.dcaegen2.platform.mod.runtime-web:1.2.3

  • dcaegen2/platform (adapter/acumos): onap/org.onap.dcaegen2.platform.adapter.acumos:1.0.4

  • dcaegen2/platform/blueprints: onap/org.onap.dcaegen2.deployments.k8s-bootstrap-container:3.0.4

  • dcaegen2/platform/configbinding: onap/org.onap.dcaegen2.platform.configbinding:2.5.3

  • dcaegen2/platform/deployment-handler: onap/org.onap.dcaegen2.platform.deployment-handler:4.4.1

  • dcaegen2/platform/inventory-api: onap/org.onap.dcaegen2.platform.inventory-api:3.5.2

  • dcaegen2/platform/policy-handler: onap/org.onap.dcaegen2.platform.policy-handler:5.1.2

  • dcaegen2/platform/servicechange-handler: onap/org.onap.dcaegen2.platform.servicechange-handler:1.4.0

  • dcaegen2/platform/ves-openapi-manager: onap/org.onap.dcaegen2.platform.ves-openapi-manager:1.0.1

  • dcaegen2/services (components/datalake-handler): onap/org.onap.dcaegen2.services.datalakefeeder:1.1.0

  • dcaegen2/services (components/datalake-handler): onap/org.onap.dcaegen2.services.datalakeadminui:1.1.0

  • dcaegen2/services (components/datalake-handler): onap/org.onap.dcaegen2.services.datalake.exposure.service:1.1.0

  • dcaegen2/services (components/pm-subscription-handler): onap/org.onap.dcaegen2.services.pmsh:1.1.2

  • dcaegen2/services (components/slice-analysis-ms): onap/org.onap.dcaegen2.services.components.slice-analysis-ms:1.0.4

  • dcaegen2/services (components/bbs-event-processor): onap/org.onap.dcaegen2.services.components.bbs-event-processor:2.0.1

  • dcaegen2/services (components/kpi-ms): onap/org.onap.dcaegen2.services.components.kpi-ms:1.0.0

  • dcaegen2/services/heartbeat: onap/org.onap.dcaegen2.services.heartbeat:2.1.1

  • dcaegen2/services/mapper: onap/org.onap.dcaegen2.services.mapper.vesadapter.universalvesadaptor:1.2.0

  • dcaegen2/services/pm-mapper: onap/org.onap.dcaegen2.services.pm-mapper:1.5.2

  • dcaegen2/services/prh: onap/org.onap.dcaegen2.services.prh.prh-app-server:1.5.6

  • dcaegen2/services/son-handler: onap/org.onap.dcaegen2.services.son-handler:2.1.3

  • dcaegen2/platform (mod/bpgenerator): Blueprint Generator 1.7.3 (jar)

  • dcaegen2/services/sdk: DCAE SDK 1.7.0 (jar)

  • ccsdk/dashboard: onap/org.onap.ccsdk.dashboard.ccsdk-app-os:1.4.0

Known Limitations, Issues and Workarounds

The new Helm-based installation mechanism for collectors does not yet support certain features available with the traditional Cloudify orchestration-based mechanism:
  • Obtaining X.509 certificates from external CMP v2 server for secure xNF connections

  • Exposing the Collector port in Dual Stack IPv4/IPv6 networks.

These features are available when the collectors are installed using the Cloudify mechanisms. Refer to the collector installation pages for more details.

Known Vulnerabilities

None

Workarounds

Documented under the corresponding JIRA, if applicable.

Security Notes

Fixed Security Issues

Documented in an earlier section.

Known Security Issues

None

Known Vulnerabilities in Used Modules

None

DCAE code has been formally scanned during build time using NexusIQ, and all Critical vulnerabilities have been addressed; items that remain open have been assessed for risk and determined to be false positives. The DCAE open Critical security vulnerabilities and their risk assessment have been documented as part of the project.

Test Results

References

For more information on the ONAP Honolulu release, please see:

  1. ONAP Home Page

  2. ONAP Documentation

  3. ONAP Release Downloads

  4. ONAP Wiki Page


Version: 7.0.1

Abstract

This document provides the release notes for the Guilin Maintenance release.

Summary

This maintenance release is primarily to resolve bugs identified during Guilin release testing.

Release Data

Project: DCAE

Docker images: onap/org.onap.dcaegen2.collectors.hv-ves.hv-collector-main:1.5.1

Release designation: Guilin Maintenance Release

Release date: 2021/04/19

New features

None

Bug fixes

  • DCAEGEN2-2516 HV-VES Pod recovery when config-fetch fails

  • OOM-2641 Fix DCAEMOD paths based on Guilin ingress template

Known Issues

Same as Guilin Release

Version: 7.0.0

Abstract

This document provides the release notes for the Guilin release.

Summary

The following DCAE components are available with the default ONAP/DCAE installation.

  • Platform components

    • Cloudify Manager (helm chart)

    • Bootstrap container (helm chart)

    • Configuration Binding Service (helm chart)

    • Deployment Handler (helm chart)

    • Policy Handler (helm chart)

    • Service Change Handler (helm chart)

    • Inventory API (helm chart)

    • Dashboard (helm chart)

  • Service components

    • VES Collector

    • HV-VES Collector

    • PNF-Registration Handler

    • Docker based Threshold Crossing Analytics (TCA-Gen2)

    • Holmes Rule Management *

    • Holmes Engine Management *

  • Additional resources that DCAE utilizes deployed using ONAP common charts:

    • Postgres Database

    • Mongo Database

    • Redis Cluster Database

    • Consul Cluster

The following service components (mS) can be deployed on demand:

  • SNMPTrap Collector

  • RESTConf Collector

  • DataFile Collector

  • PM-Mapper

  • BBS-EventProcessor

  • VES Mapper

  • Heartbeat mS

  • SON-Handler

  • PM-Subscription Handler

  • DataLake Handler (Admin and Feeder)

  • Slice Analysis

  • DataLake Extraction Service

Notes:

* These components are delivered by the Holmes project.

Under OOM (Kubernetes), all DCAE component containers are deployed as Kubernetes Pods/Deployments/Services into the Kubernetes cluster. DCAE platform components are deployed using Helm charts, while DCAE service components are deployed using Cloudify blueprints. DCAE provides a Cloudify Manager plugin (k8splugin) that is capable of expanding a Cloudify blueprint node specification for a service component into a full Kubernetes specification, with additional enhancements such as replica scaling, a sidecar for logging to the ONAP ELK stack, registering services to MSB, etc.

Release Data

Project: DCAE

Docker images: Refer to the Deliverables section below

Release designation: 7.0.0 Guilin

Release date: 2020-11-19

New features

  • DCAE Enhancements

    • Cloudify Container upgraded with new base image; plugins load optimized (DCAEGEN2-2236, DCAEGEN2-2207, DCAEGEN2-2262)

    • Bootstrap container optimization (DCAEGEN2-1791)

    • MOD/Runtime – Enable configuration for dynamic topic support (DCAEGEN2-1996)

    • MOD/OnboardingAPI - Support for offline install (DCAEGEN2-2221)

    • DCAE Dashboard UI Optimization and bugfixes (DCAEGEN2-2223, DCAEGEN2-2364, DCAEGEN2-1638, DCAEGEN2-2298, DCAEGEN2-1857)

    • Blueprint generator tool and K8Splugin enhancement to support External Certificate (DCAEGEN2-2250)

    • K8S v1.17 support through DCAE Cloudify K8S plugins (DCAEGEN2-2309)

    • Python 3.8 support enabled for several DCAE components - Heartbeat mS, PMSH mS, MOD/DistributorAPI mS, MOD/OnboardingAPI mS, Policy Library (DCAEGEN2-2292)

    • Java 11 upgrade complete for the following modules - RESTConf, PM-Mapper, DFC, VES-Mapper, SON-handler, TCA-gen2, DL-Feeder, InventoryAPI, ServiceChangeHandler, MOD/RuntimeAPI, MOD/Bp-gen (DCAEGEN2-2223)

    • Hardcoded passwords removed from OOM charts (Cloudify, Bootstrap, DeploymentHandler, Dashboard); now managed dynamically through K8S secrets (DCAEGEN2-1972, DCAEGEN2-1975)

    • Best practice compliance
      • STDOUT log compliance for DCAE Containers (DCAEGEN2-2324)

      • No more than one main process (DCAEGEN2-2327/REQ-365)

      • Container must crash when failure is noted (DCAEGEN2-2326/REQ-366)

      • All containers must run as non-root (REQ-362)

      • Code coverage >55% (DCAEGEN2-2333)

    • All vulnerabilities identified by SECCOM have been resolved (DCAEGEN2-2242)

  • The following new services are delivered in this release:

    • Event Processors
      • DataLake Extraction Service

    • Analytics/RCA
      • Slice Analysis MS

Deliverables

Software Deliverables

Repository (SubModule): Version & Docker Image (if applicable)

  • dcaegen2/analytics/tca-gen2: onap/org.onap.dcaegen2.analytics.tca-gen2.dcae-analytics-tca-web:1.2.1

  • dcaegen2/collectors/datafile: onap/org.onap.dcaegen2.collectors.datafile.datafile-app-server:1.4.3

  • dcaegen2/collectors/hv-ves: onap/org.onap.dcaegen2.collectors.hv-ves.hv-collector-main:1.5.0

  • dcaegen2/collectors/restconf: onap/org.onap.dcaegen2.collectors.restconfcollector:1.2.2

  • dcaegen2/collectors/snmptrap: onap/org.onap.dcaegen2.collectors.snmptrap:2.0.3

  • dcaegen2/collectors/ves: onap/org.onap.dcaegen2.collectors.ves.vescollector:1.7.9

  • dcaegen2/deployments (cm-container): onap/org.onap.dcaegen2.deployments.cm-container:3.3.4

  • dcaegen2/deployments (consul-loader-container): onap/org.onap.dcaegen2.deployments.consul-loader-container:1.0.0

  • dcaegen2/deployments (dcae-k8s-cleanup-container): onap/org.onap.dcaegen2.deployments.dcae-k8s-cleanup-container:1.0.0

  • dcaegen2/deployments (healthcheck-container): onap/org.onap.dcaegen2.deployments.healthcheck-container:2.1.0

  • dcaegen2/deployments (multisite-init-container): onap/org.onap.dcaegen2.deployments.multisite-init-container:1.0.0

  • dcaegen2/deployments (tls-init-container): onap/org.onap.dcaegen2.deployments.tls-init-container:2.1.0

  • dcaegen2/platform (mod/onboardingapi): onap/org.onap.dcaegen2.platform.mod.onboardingapi:2.12.3

  • dcaegen2/platform (mod/distributorapi): onap/org.onap.dcaegen2.platform.mod.distributorapi:1.1.0

  • dcaegen2/platform (mod/designtool): onap/org.onap.dcaegen2.platform.mod.designtool-web:1.0.2

  • dcaegen2/platform (mod/genprocessor): onap/org.onap.dcaegen2.platform.mod.genprocessor-http:1.0.2

  • dcaegen2/platform (mod/genprocessor): onap/org.onap.dcaegen2.platform.mod.genprocessor-job:1.0.2

  • dcaegen2/platform (mod/designtool/mod-registry): onap/org.onap.dcaegen2.platform.mod.mod-registry:1.0.0

  • dcaegen2/platform (mod/runtimeapi): onap/org.onap.dcaegen2.platform.mod.runtime-web:1.1.1

  • dcaegen2/platform (adapter/acumos): onap/org.onap.dcaegen2.platform.adapter.acumos:1.0.3

  • dcaegen2/platform/blueprints: onap/org.onap.dcaegen2.deployments.k8s-bootstrap-container:2.1.8

  • dcaegen2/platform/configbinding: onap/org.onap.dcaegen2.platform.configbinding:2.5.3

  • dcaegen2/platform/deployment-handler: onap/org.onap.dcaegen2.platform.deployment-handler:4.4.1

  • dcaegen2/platform/inventory-api: onap/org.onap.dcaegen2.platform.inventory-api:3.5.1

  • dcaegen2/platform/policy-handler: onap/org.onap.dcaegen2.platform.policy-handler:5.1.0

  • dcaegen2/platform/servicechange-handler: onap/org.onap.dcaegen2.platform.servicechange-handler:1.4.0

  • dcaegen2/services (components/datalake-handler): onap/org.onap.dcaegen2.services.datalakefeeder:1.1.0

  • dcaegen2/services (components/datalake-handler): onap/org.onap.dcaegen2.services.datalakeadminui:1.1.0

  • dcaegen2/services (components/datalake-handler): onap/org.onap.dcaegen2.services.datalake.exposure.service:1.1.0

  • dcaegen2/services (components/pm-subscription-handler): onap/org.onap.dcaegen2.services.pmsh:1.1.2

  • dcaegen2/services (components/slice-analysis-ms): onap/org.onap.dcaegen2.services.components.slice-analysis-ms:1.0.1

  • dcaegen2/services (components/bbs-event-processor): onap/org.onap.dcaegen2.services.components.bbs-event-processor:2.0.1

  • dcaegen2/services/heartbeat: onap/org.onap.dcaegen2.services.heartbeat:2.1.1

  • dcaegen2/services/mapper: onap/org.onap.dcaegen2.services.mapper.vesadapter.universalvesadaptor:1.1.0

  • dcaegen2/services/pm-mapper: onap/org.onap.dcaegen2.services.pm-mapper:1.4.1

  • dcaegen2/services/prh: onap/org.onap.dcaegen2.services.prh.prh-app-server:1.5.4

  • dcaegen2/services/son-handler: onap/org.onap.dcaegen2.services.son-handler:2.1.2

  • dcaegen2/platform (mod/bpgenerator): Blueprint Generator 1.5.2 (jar)

  • dcaegen2/services/sdk: DCAE SDK 1.4.3 (jar)

  • ccsdk/dashboard: onap/org.onap.ccsdk.dashboard.ccsdk-app-os:1.4.0

Known Limitations, Issues and Workarounds

  • BPGenerator yaml Fixes are different for yaml file and string (DCAEGEN2-2489)

  • Slice Analysis - Avoid removal of data when insufficient samples are present (DCAEGEN2-2509)

  • HV-VES - Pod recovery when config-fetch fails (DCAEGEN2-2516)

System Limitations

None

Known Vulnerabilities

None

Workarounds

Documented under the corresponding JIRA, if applicable.

Security Notes

Fixed Security Issues

Listed above

Known Security Issues

None

Known Vulnerabilities in Used Modules

None

DCAE code has been formally scanned during build time using NexusIQ, and all Critical vulnerabilities have been addressed; items that remain open have been assessed for risk and determined to be false positives. The DCAE open Critical security vulnerabilities and their risk assessment have been documented as part of the project.

Test Results

References

For more information on the ONAP Guilin release, please see:

  1. ONAP Home Page

  2. ONAP Documentation

  3. ONAP Release Downloads

  4. ONAP Wiki Page


Version: 5.0.2

Abstract

This document provides the release notes for the El-Alto Maintenance release.

Summary

This maintenance release is primarily to update expired certificates from the original El-Alto TLS-init container.

This patch is not required for the Frankfurt release (and beyond), as certificates are dynamically retrieved from AAF at deployment time for all DCAE components.

Release Data

Project: DCAE

Docker images: onap/org.onap.dcaegen2.deployments.tls-init-container:1.0.4

Release designation: El-Alto Maintenance Release

Release date: 2020/08/24

New features

None

Bug fixes

  • DCAEGEN2-2206 DCAE TLS Container : Address certificate expiration

Known Issues

Same as El-Alto Release

Version: 6.0.1

Abstract

This document provides the release notes for the Frankfurt Maintenance release.

Summary

The focus of this release is to correct issues found in the Frankfurt release.

Release Data

Project: DCAE

Docker images: onap/org.onap.dcaegen2.services.son-handler:2.0.4

Release designation: Frankfurt Maintenance Release 1

Release date: 2020/08/17

New features

None

Bug fixes

  • DCAEGEN2-2249 SON-Handler: Fix networkId issue while making call to oof

  • DCAEGEN2-2216 SON-Handler: Change Policy notification to align with policy component updates

Known Issues

Same as Frankfurt Release

Version: 6.0.0

Abstract

This document provides the release notes for the Frankfurt release.

Summary

The following DCAE components are available with the default ONAP/DCAE installation.

  • Platform components

    • Cloudify Manager (helm chart)

    • Bootstrap container (helm chart)

    • Configuration Binding Service (helm chart)

    • Deployment Handler (helm chart)

    • Policy Handler (helm chart)

    • Service Change Handler (helm chart)

    • Inventory API (helm chart)

    • Dashboard (helm chart)

  • Service components

    • VES Collector

    • Threshold Crossing Analytics (TCA/CDAP)

    • HV-VES Collector

    • PNF-Registration Handler

    • Docker based Threshold Crossing Analytics (TCA-Gen2)

    • Holmes Rule Management *

    • Holmes Engine Management *

  • Additional resources that DCAE utilizes deployed using ONAP common charts:

    • Postgres Database

    • Mongo Database

    • Redis Cluster Database

    • Consul Cluster

The following service components (mS) can be deployed on demand:

  • SNMPTrap Collector

  • RESTConf Collector

  • DataFile Collector

  • PM-Mapper

  • BBS-EventProcessor

  • VES Mapper

  • Heartbeat mS

  • SON-Handler

  • PM-Subscription Handler

Notes:

* These components are delivered by the Holmes project.

Under OOM (Kubernetes) deployment, all DCAE component containers are deployed as Kubernetes Pods/Deployments/Services into the Kubernetes cluster. DCAE platform components are deployed using Helm charts, while DCAE service components are deployed using Cloudify blueprints. DCAE provides a Cloudify Manager plugin (k8splugin) that is capable of expanding a Cloudify blueprint node specification for a service component into a full Kubernetes specification, with additional enhancements such as replica scaling, a sidecar for logging to the ONAP ELK stack, registering services to MSB, etc.

Release Data

Project: DCAE

Docker images: Refer to the Deliverables section below

Release designation: 6.0.0 Frankfurt

Release date: 2020-06-04

New features

  • DCAE Platform Enhancement

    • Introduction of Microservice and Onboarding Design (MOD) platform

    • Policy Notification support for DCAE components

    • Dynamic AAF certificate creation during component instantiation

    • Helm chart optimization to control each platform component separately

    • Dashboard Optimization

    • Blueprint generator tool to simplify deployment artifact creation

  • The following new services are delivered in this release:

    • Event Processors

      • PM Subscription Handler

      • DataLake Handlers

    • Analytics/RCA

      • TCA-GEN2

      • Acumos Adapter (PoC)

Deliverables

Software Deliverables

Repository (SubModule): Version & Docker Image (if applicable)

  • dcaegen2/analytics/tca-gen2: onap/org.onap.dcaegen2.analytics.tca-gen2.dcae-analytics-tca-web:1.0.1

  • dcaegen2/collectors/datafile: onap/org.onap.dcaegen2.collectors.datafile.datafile-app-server:1.3.0

  • dcaegen2/collectors/hv-ves: onap/org.onap.dcaegen2.collectors.hv-ves.hv-collector-main:1.4.0

  • dcaegen2/collectors/restconf: onap/org.onap.dcaegen2.collectors.restconfcollector:1.1.1

  • dcaegen2/collectors/snmptrap: onap/org.onap.dcaegen2.collectors.snmptrap:2.0.3

  • dcaegen2/collectors/ves: onap/org.onap.dcaegen2.collectors.ves.vescollector:1.5.4

  • dcaegen2/deployments (cm-container): onap/org.onap.dcaegen2.deployments.cm-container:2.1.0

  • dcaegen2/deployments (consul-loader-container): onap/org.onap.dcaegen2.deployments.consul-loader-container:1.0.0

  • dcaegen2/deployments (dcae-k8s-cleanup-container): onap/org.onap.dcaegen2.deployments.dcae-k8s-cleanup-container:1.0.0

  • dcaegen2/deployments (healthcheck-container): onap/org.onap.dcaegen2.deployments.healthcheck-container:1.3.1

  • dcaegen2/deployments (multisite-init-container): onap/org.onap.dcaegen2.deployments.multisite-init-container:1.0.0

  • dcaegen2/deployments (redis-cluster-container): onap/org.onap.dcaegen2.deployments.redis-cluster-container:1.0.0

  • dcaegen2/deployments (tca-cdap-container): onap/org.onap.dcaegen2.deployments.tca-cdap-container:1.2.2

  • dcaegen2/deployments (tls-init-container): onap/org.onap.dcaegen2.deployments.tls-init-container:2.1.0

  • dcaegen2/platform (mod/onboardingapi): onap/org.onap.dcaegen2.platform.mod.onboardingapi:2.12.1

  • dcaegen2/platform (mod/distributorapi): onap/org.onap.dcaegen2.platform.mod.distributorapi:1.0.1

  • dcaegen2/platform (mod/designtool): onap/org.onap.dcaegen2.platform.mod.designtool-web:1.0.2

  • dcaegen2/platform (mod/genprocessor): onap/org.onap.dcaegen2.platform.mod.genprocessor-http:1.0.1

  • dcaegen2/platform (mod/genprocessor): onap/org.onap.dcaegen2.platform.mod.genprocessor-job:1.0.1

  • dcaegen2/platform (mod/designtool/mod-registry): onap/org.onap.dcaegen2.platform.mod.mod-registry:1.0.0

  • dcaegen2/platform (mod/runtimeapi): onap/org.onap.dcaegen2.platform.mod.runtime-web:1.0.3

  • dcaegen2/platform/blueprints: onap/org.onap.dcaegen2.deployments.k8s-bootstrap-container:1.12.6

  • dcaegen2/platform/configbinding: onap/org.onap.dcaegen2.platform.configbinding:2.5.2

  • dcaegen2/platform/deployment-handler: onap/org.onap.dcaegen2.platform.deployment-handler:4.3.0

  • dcaegen2/platform/inventory-api: onap/org.onap.dcaegen2.platform.inventory-api:3.4.1

  • dcaegen2/platform/policy-handler: onap/org.onap.dcaegen2.platform.policy-handler:5.1.0

  • dcaegen2/platform/servicechange-handler: onap/org.onap.dcaegen2.platform.servicechange-handler:1.3.2

  • dcaegen2/services (components/datalake-handler): onap/org.onap.dcaegen2.services.datalakefeeder:1.0.2

  • dcaegen2/services (components/datalake-handler): onap/org.onap.dcaegen2.services.datalakeadminui:1.0.2

  • dcaegen2/services (components/pm-subscription-handler): onap/org.onap.dcaegen2.services.pmsh:1.0.3

  • dcaegen2/services (components/bbs-event-processor): onap/org.onap.dcaegen2.services.components.bbs-event-processor:2.0.0

  • dcaegen2/services/heartbeat: onap/org.onap.dcaegen2.services.heartbeat:2.1.0

  • dcaegen2/services/mapper: onap/org.onap.dcaegen2.services.mapper.vesadapter.universalvesadaptor:1.0.1

  • dcaegen2/services/pm-mapper: onap/org.onap.dcaegen2.services.pm-mapper:1.3.1

  • dcaegen2/services/prh: onap/org.onap.dcaegen2.services.prh.prh-app-server:1.5.2

  • dcaegen2/services/son-handler: onap/org.onap.dcaegen2.services.son-handler:2.0.2

  • dcaegen2/platform (adapter/acumos): onap/org.onap.dcaegen2.platform.adapter.acumos:1.0.2

  • dcaegen2/platform (mod/bpgenerator): Blueprint Generator 1.3.1 (jar)

  • dcaegen2/services/sdk: DCAE SDK 1.3.5 (jar)

  • ccsdk/dashboard: onap/org.onap.ccsdk.dashboard.ccsdk-app-os:1.3.2

Known Limitations, Issues and Workarounds

  • Blueprint generator escape char issue (DCAEGEN2-2140)

  • TCAgen2 Policy configuration support (DCAEGEN2-2198)

  • TCA/CDAP config refresh causes duplicate events (DCAEGEN2-2241)

System Limitations

None

Known Vulnerabilities

None

Workarounds

Documented under the corresponding JIRA, if applicable.

Security Notes

Fixed Security Issues

  • Unsecured Swagger UI Interface in xdcae-ves-collector. [OJSI-30]

  • In default deployment DCAEGEN2 (xdcae-ves-collector) exposes HTTP port 30235 outside of cluster. [OJSI-116]

  • In default deployment DCAEGEN2 (xdcae-dashboard) exposes HTTP port 30418 outside of cluster. [OJSI-159]

  • In default deployment DCAEGEN2 (dcae-redis) exposes redis port 30286 outside of cluster. [OJSI-187]

  • In default deployment DCAEGEN2 (config-binding-service) exposes HTTP port 30415 outside of cluster. [OJSI-195]

Known Security Issues

None

Known Vulnerabilities in Used Modules

None

DCAE code has been formally scanned during build time using NexusIQ, and all Critical vulnerabilities have been addressed; items that remain open have been assessed for risk and determined to be false positives. The DCAE open Critical security vulnerabilities and their risk assessment have been documented as part of the project.

Test Results

References

For more information on the ONAP Frankfurt release, please see:

  1. ONAP Home Page

  2. ONAP Documentation

  3. ONAP Release Downloads

  4. ONAP Wiki Page


Version: 5.0.1

The official El-Alto release (which rolls up all 5.0.0 early-drop deliverables) focused on technical debt and SECCOM priority work items.

The following is a summary of updates for DCAEGEN2.

Security

The following platform components were enabled for HTTPS:
  • ConfigBindingService (CBS) - CBS is used by all DCAE MS to fetch their configuration from Consul. To mitigate the impact on DCAE MS, the CBS deployment through OOM/Helm was modified to support CBS over both HTTP and HTTPS (see: Design for CBS TLS migration). A sample configuration fetch is sketched below.

  • Cloudify Manager

  • InventoryAPI

  • Non-root container process (ConfigBindingService, InventoryAPI, ServiceChangeHandler, HV-VES, PRH, Son-handler)

All components interfacing with the platform components were modified to support the TLS interface.
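
For reference, a service component (or an operator debugging one) fetches its resolved configuration from CBS with a plain HTTP GET. The sketch below assumes the in-cluster service name config-binding-service, the commonly used CBS ports (10000 for HTTP, 10443 for HTTPS), and an illustrative component name:

    # Legacy HTTP path
    curl http://config-binding-service:10000/service_component/<ServiceComponentName>

    # HTTPS path introduced by the TLS migration described above
    curl --cacert cacert.pem https://config-binding-service:10443/service_component/<ServiceComponentName>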

Miscellaneous
  • DCAE Dashboard deployment migration from cloudify blueprint to OOM/Chart

  • Dynamic Topic support via Dmaap plugin integration for DataFileCollector MS

  • Dynamic Topic support via Dmaap plugin integration for PM-Mapper service

  • CBS client libraries updated to remove consul service lookup

  • Image Optimization (ConfigBindingService, InventoryAPI, ServiceChangeHandler, HV-VES, PRH, Son-handler)

With this release, all DCAE platform components have been migrated to Helm charts. The following is the complete list of DCAE components available as part of the default ONAP/DCAE installation.
  • Platform components
    • Cloudify Manager (helm chart)

    • Bootstrap container (helm chart)

    • Configuration Binding Service (helm chart)

    • Deployment Handler (helm chart)

    • Policy Handler (helm chart)

    • Service Change Handler (helm chart)

    • Inventory API (helm chart)

    • Dashboard (helm chart)

  • Service components
    • VES Collector

    • SNMP Collector

    • Threshold Crossing Analytics

    • HV-VES Collector

    • PNF-Registration Handler

    • Holmes Rule Management *

    • Holmes Engine Management *

  • Additional resources that DCAE utilizes:
    • Postgres Database

    • Redis Cluster Database

    • Consul Cluster *

Notes:

* These components are delivered by an external ONAP project.

DCAE also includes the following MS, which can be deployed on demand (via the Dashboard, the Cloudify CLI, or CLAMP); a CLI sketch follows this list.

  • Collectors
    • RESTConf collector

    • DataFile collector

  • Event Processors
    • VES Mapper

    • 3gpp PM-Mapper

    • BBS Event processor

  • Analytics/RCA
    • SON-Handler

    • Missing Heartbeat mS

  • All DCAE components are designed to support platform maturity requirements.
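
As a rough sketch of the Cloudify CLI path (run from the bootstrap container; the blueprint file name, blueprint/deployment ids, and inputs file are illustrative assumptions):

    # Upload the blueprint, create a deployment, and run the install workflow in one step
    cfy install k8s-restconf.yaml -b restconf -d restconf -i restconf-inputs.yaml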

Source Code

The source code of DCAE components is released under the following repositories on gerrit.onap.org (no new component is introduced for El-Alto Early-drop):
  • dcaegen2

  • dcaegen2.analytics.tca

  • dcaegen2.collectors.snmptrap

  • dcaegen2.collectors.ves

  • dcaegen2.collectors.hv-ves

  • dcaegen2.collectors.datafile

  • dcaegen2.collectors.restconf

  • dcaegen2.deployments

  • dcaegen2.platform.blueprints

  • dcaegen2.platform.cli

  • dcaegen2.platform.configbinding

  • dcaegen2.platform.deployment-handler

  • dcaegen2.platform.inventory-api

  • dcaegen2.platform.plugins

  • dcaegen2.platform.policy-handler

  • dcaegen2.platform.servicechange-handler

  • dcaegen2.services.heartbeat

  • dcaegen2.services.mapper

  • dcaegen2.services.pm-mapper

  • dcaegen2.services.prh

  • dcaegen2.services.son-handler

  • dcaegen2.services

  • dcaegen2.services.sdk

  • dcaegen2.utils

  • ccsdk.platform.plugins

  • ccsdk.dashboard

Bug Fixes
  • k8splugin can generate deployment name > 63 chars (DCAEGEN2-1667)

  • CM container loading invalid Cloudify types file (DCAEGEN2-1685)

Known Issues
  • Healthcheck/Readiness probe VES Collector when authentication is enabled (DCAEGEN2-1594)

Security Notes

Fixed Security Issues
  • Unsecured Swagger UI Interface in xdcae-datafile-collector. [OJSI-28]

  • In default deployment DCAEGEN2 (xdcae-datafile-collector) exposes HTTP port 30223 outside of cluster. [OJSI-109]

  • In default deployment DCAEGEN2 (xdcae-tca-analytics) exposes HTTP port 32010 outside of cluster. [OJSI-161]

  • In default deployment DCAEGEN2 (dcae-datafile-collector) exposes HTTP port 30262 outside of cluster. [OJSI-131]

  • CVE-2019-12126 - DCAE TCA exposes unprotected APIs/UIs on port 32010. [OJSI-201]

Known Security Issues
  • Unsecured Swagger UI Interface in xdcae-ves-collector. [OJSI-30]

  • In default deployment DCAEGEN2 (xdcae-ves-collector) exposes HTTP port 30235 outside of cluster. [OJSI-116]

  • In default deployment DCAEGEN2 (xdcae-dashboard) exposes HTTP port 30418 outside of cluster. [OJSI-159]

  • In default deployment DCAEGEN2 (dcae-redis) exposes redis port 30286 outside of cluster. [OJSI-187]

  • In default deployment DCAEGEN2 (config-binding-service) exposes HTTP port 30415 outside of cluster. [OJSI-195]

Known Vulnerabilities in Used Modules

DCAE code has been formally scanned during build time using NexusIQ, and all Critical vulnerabilities have been addressed; items that remain open have been assessed for risk and determined to be false positives. The DCAE open Critical security vulnerabilities and their risk assessment have been documented as part of the project.


Upgrade Notes

The following components are upgraded from the Dublin/R4 and El-Alto Early-drop deliverables.
  • K8S Bootstrap container:
    • Docker container tag: onap/org.onap.dcaegen2.deployments.k8s-bootstrap-container:1.6.4

    • Description: K8s bootstrap container updated to interface with Cloudify using HTTPS; new k8s and Dmaap plugin version included; Dashboard deployment was removed.

  • Configuration Binding Service:
    • Docker container tag: onap/org.onap.dcaegen2.platform.configbinding.app-app:2.5.2

    • Description: HTTPS support, Image optimization and non-root user

  • Inventory API
    • Docker container image tag: onap/org.onap.dcaegen2.platform.inventory-api:3.4.0

    • Description: HTTPS support, container optimization and non-root user

  • DataFile Collector
    • Docker container tag: onap/org.onap.dcaegen2.collectors.datafile.datafile-app-server:1.2.3

    • Description : Code optimization, bug fixes, dmaap plugin integration

  • SON Handler MS
    • Docker container tag: onap/org.onap.dcaegen2.services.son-handler:1.1.1

    • Description : Image optimization, bug fixes, CBS integration

  • VES Adapter/Mapper MS
    • Docker container tag: onap/org.onap.dcaegen2.services.mapper.vesadapter.universalvesadaptor:1.0.1

    • Description : Image optimization & CBS periodic polling

  • PRH MS
    • Docker container tag: onap/org.onap.dcaegen2.services.prh.prh-app-server:1.3.1

    • Description : Code optimization, bug fixes and SDK alignment

  • HV-VES MS
    • Docker container tag: onap/org.onap.dcaegen2.collectors.hv-ves.hv-collector-main:1.3.0

    • Description : Code optimization, bug fixes and SDK alignment

Version: 5.0.0

El-Alto Early-drop focused on technical debt and SECCOM priority work items.

The following is a summary of updates for DCAEGEN2.

Security

The following platform components were enabled for HTTPS:
  • ConfigBindingService (CBS) - CBS is used by all DCAE MS to fetch their configuration from Consul. To mitigate the impact on DCAE MS, the CBS deployment through OOM/Helm was modified to support CBS over both HTTP and HTTPS (see: Design for CBS TLS migration).

  • Cloudify Manager

  • InventoryAPI

All components interfacing with the platform components were modified to support the TLS interface.

Miscellaneous
  • DCAE Dashboard deployment migration from cloudify blueprint to OOM/Chart

  • Dynamic Topic support via Dmaap plugin integration for DataFileCollector MS

  • Dynamic Topic support via Dmaap plugin integration for PM-Mapper service

  • CBS client libraries updated to remove consul service lookup

Bug Fixes
  • k8splugin can generate deployment name > 63 chars (DCAEGEN2-1667)

  • CM container loading invalid Cloudify types file (DCAEGEN2-1685)

Known Issues
  • Healthcheck/Readiness probe VES Collector when authentication is enabled (DCAEGEN2-1594)

Security Notes

Fixed Security Issues

Known Security Issues

  • Unsecured Swagger UI Interface in xdcae-datafile-collector. [OJSI-28]

  • Unsecured Swagger UI Interface in xdcae-ves-collector. [OJSI-30]

  • In default deployment DCAEGEN2 (xdcae-datafile-collector) exposes HTTP port 30223 outside of cluster. [OJSI-109]

  • In default deployment DCAEGEN2 (xdcae-ves-collector) exposes HTTP port 30235 outside of cluster. [OJSI-116]

  • In default deployment DCAEGEN2 (dcae-datafile-collector) exposes HTTP port 30262 outside of cluster. [OJSI-131]

  • In default deployment DCAEGEN2 (xdcae-dashboard) exposes HTTP port 30418 outside of cluster. [OJSI-159]

  • In default deployment DCAEGEN2 (xdcae-tca-analytics) exposes HTTP port 32010 outside of cluster. [OJSI-161]

  • In default deployment DCAEGEN2 (dcae-redis) exposes redis port 30286 outside of cluster. [OJSI-187]

  • In default deployment DCAEGEN2 (config-binding-service) exposes HTTP port 30415 outside of cluster. [OJSI-195]

  • CVE-2019-12126 - DCAE TCA exposes unprotected APIs/UIs on port 32010. [OJSI-201]

Known Vulnerabilities in Used Modules

DCAE code has been formally scanned during build time using NexusIQ, and all Critical vulnerabilities have been addressed; items that remain open have been assessed for risk and determined to be false positives. The DCAE open Critical security vulnerabilities and their risk assessment have been documented as part of the project.


Upgrade Notes

The following components are upgraded from Dublin/R4.
  • Cloudify Manager:
    • Docker container tag: onap/org.onap.dcaegen2.deployments.cm-container:2.0.2

    • Description: DCAE’s Cloudify Manager container is based on Cloudify Manager Community Version 19.01.24, which is based on Cloudify Manager 4.5. The container was updated to support TLS.

  • K8S Bootstrap container:
    • Docker container tag: onap/org.onap.dcaegen2.deployments.k8s-bootstrap-container:1.6.2

    • Description: K8s bootstrap container updated to interface with Cloudify using HTTPS; new k8s and Dmaap plugin version included; Dashboard deployment was removed.

  • Configuration Binding Service:
    • Docker container tag: onap/org.onap.dcaegen2.platform.configbinding.app-app:2.5.1

    • Description: HTTPS support, Image optimization and non-root user

  • Deployment Handler
    • Docker container image tag: onap/org.onap.dcaegen2.platform.deployment-handler:4.2.0

    • Description: Update to node10, uninstall workflow updates

  • Service Change Handler
    • Docker container image tag: onap/org.onap.dcaegen2.platform.servicechange-handler:1.3.2

    • Description: HTTPS InventoryAPI support, container optimization and non-root user

  • Inventory API
    • Docker container image tag: onap/org.onap.dcaegen2.platform.inventory-api:3.4.0

    • Description: HTTPS support, container optimization and non-root user

  • DataFile Collector
    • Docker container tag: onap/org.onap.dcaegen2.collectors.datafile.datafile-app-server:1.2.2

    • Description : Code optimization, bug fixes, dmaap plugin integration

  • 3gpp PM-Mapper
    • Docker container tag: onap/org.onap.dcaegen2.services.pm-mapper:1.1.3

    • Description: Code optimization, bug fixes, dmaap plugin integration

Version: 4.0.0

Release Date

2019-06-06

New Features

DCAE R4 improves upon the previous release with the following new features:

  • DCAE Platform Enhancement
    • Multisite K8S cluster deployment support for DCAE services (via K8S plugin)

    • Support helm chart deployment in DCAE using new Helm cloudify plugin

    • DCAE Healthcheck enhancement to cover static and dynamic deployments

    • Dynamic AAF based topic provisioning support through Dmaap cloudify plugin

    • Dashboard Integration (UI for deployment/verification)

    • PolicyHandler Enhancement to support new Policy Lifecycle API’s

    • Blueprint generator tool to simplify deployment artifact creation

    • Cloudify Manager resiliency

  • The following new services are delivered with Dublin:
    • Collectors
      • RESTConf collector

    • Event Processors
      • VES Mapper

      • 3gpp PM-Mapper

      • BBS Event processor

    • Analytics/RCA
      • SON-Handler

      • Heartbeat MS

Most platform components have been migrated to Helm charts. The following is the complete list of DCAE components available as part of the default ONAP/DCAE installation.
  • Platform components
    • Cloudify Manager (helm chart)

    • Bootstrap container (helm chart)

    • Configuration Binding Service (helm chart)

    • Deployment Handler (helm chart)

    • Policy Handler (helm chart

    • Service Change Handler (helm chart)

    • Inventory API (helm chart)

    • Dashboard (Cloudify Blueprint)

  • Service components
    • VES Collector

    • SNMP Collector

    • Threshold Crossing Analytics

    • HV-VES Collector

    • PNF-Registration Handler

    • Holmes Rule Management *

    • Holmes Engine Management *

  • Additional resources that DCAE utilizes:
    • Postgres Database

    • Redis Cluster Database

    • Consul Cluster *

Notes:

* These components are delivered by the Holmes project.

Under OOM (Kubernetes) deployment, all DCAE component containers are deployed as Kubernetes Pods/Deployments/Services into the Kubernetes cluster. DCAE R4 includes enhancements to the Cloudify Manager plugin (k8splugin) that is capable of expanding a blueprint node specification written for a Docker container into a full Kubernetes specification, with additional enhancements such as replica scaling, a sidecar for logging to the ONAP ELK stack, registering services to MSB, etc.

  • All DCAE components are designed to support platform maturity requirements.

Source Code

The source code of DCAE components is released under the following repositories on gerrit.onap.org:
  • dcaegen2

  • dcaegen2.analytics.tca

  • dcaegen2.collectors.snmptrap

  • dcaegen2.collectors.ves

  • dcaegen2.collectors.hv-ves

  • dcaegen2.collectors.datafile

  • dcaegen2.collectors.restconf

  • dcaegen2.deployments

  • dcaegen2.platform.blueprints

  • dcaegen2.platform.cli

  • dcaegen2.platform.configbinding

  • dcaegen2.platform.deployment-handler

  • dcaegen2.platform.inventory-api

  • dcaegen2.platform.plugins

  • dcaegen2.platform.policy-handler

  • dcaegen2.platform.servicechange-handler

  • dcaegen2.services.heartbeat

  • dcaegen2.services.mapper

  • dcaegen2.services.pm-mapper

  • dcaegen2.services.prh

  • dcaegen2.services.son-handler

  • dcaegen2.services

  • dcaegen2.services.sdk

  • dcaegen2.utils

  • ccsdk.platform.plugins

  • ccsdk.dashboard

Bug Fixes

Known Issues
  • Healthcheck/Readiness probe VES Collector when authentication is enabled (DCAEGEN2-1594)

Security Notes

Fixed Security Issues

Known Security Issues

  • Unsecured Swagger UI Interface in xdcae-datafile-collector. [OJSI-28]

  • Unsecured Swagger UI Interface in xdcae-ves-collector. [OJSI-30]

  • In default deployment DCAEGEN2 (xdcae-datafile-collector) exposes HTTP port 30223 outside of cluster. [OJSI-109]

  • In default deployment DCAEGEN2 (xdcae-ves-collector) exposes HTTP port 30235 outside of cluster. [OJSI-116]

  • In default deployment DCAEGEN2 (dcae-datafile-collector) exposes HTTP port 30262 outside of cluster. [OJSI-131]

  • In default deployment DCAEGEN2 (xdcae-dashboard) exposes HTTP port 30418 outside of cluster. [OJSI-159]

  • In default deployment DCAEGEN2 (xdcae-tca-analytics) exposes HTTP port 32010 outside of cluster. [OJSI-161]

  • In default deployment DCAEGEN2 (dcae-redis) exposes redis port 30286 outside of cluster. [OJSI-187]

  • In default deployment DCAEGEN2 (config-binding-service) exposes HTTP port 30415 outside of cluster. [OJSI-195]

  • CVE-2019-12126 - DCAE TCA exposes unprotected APIs/UIs on port 32010. [OJSI-201]

Known Vulnerabilities in Used Modules

DCAE code has been formally scanned during build time using NexusIQ, and all Critical vulnerabilities have been addressed; items that remain open have been assessed for risk and determined to be false positives. The DCAE open Critical security vulnerabilities and their risk assessment have been documented as part of the project.


New Component Notes

The following components are introduced in R4:

  • Dashboard
    • Docker container tag: onap/org.onap.ccsdk.dashboard.ccsdk-app-os:1.1.0

    • Description: Dashboard provides a UI for users/operations to deploy and manage service components in DCAE

  • Blueprint generator
    • Java artifact : /org/onap/dcaegen2/platform/cli/blueprint-generator/1.0.0/blueprint-generator-1.0.0.jar

    • Description: Tool to generate the deployment artifact (cloudify blueprints) based on component spec

  • RESTConf collector
    • Docker container tag: onap/org.onap.dcaegen2.collectors.restconfcollector:1.1.1

    • Description: Provides RESTConf interfaces to collect events from external domain controllers

  • VES/Universal Mapper
    • Docker container tag: onap/org.onap.dcaegen2.services.mapper.vesadapter.universalvesadaptor:1.0.0

    • Description: Standardizes events received from the SNMP and RESTConf collectors into VES for further processing by DCAE analytics services

  • 3gpp PM-Mapper
    • Docker container tag: onap/org.onap.dcaegen2.services.pm-mapper:1.0.1

    • Description: Transforms the 3gpp data feed received from DMAAP-DR into VES events

  • BBS Event processor
    • Docker container tag: onap/org.onap.dcaegen2.services.components.bbs-event-processor:1.0.0

    • Description: Handles PNF-Reregistration and CPE authentication events and generates CL events

  • SON-Handler
    • Docker container tag: onap/org.onap.dcaegen2.services.son-handler:1.0.3

    • Description: Supports PC-ANR optimization analysis and generates CL event output

  • Heartbeat MS
    • Docker container tag: onap/org.onap.dcaegen2.services.heartbeat:2.1.0

    • Description: Generates missing heartbeat CL events based on configured threshold for VES heartbeats/VNF type.

Upgrade Notes

The following components are upgraded from R3
  • Cloudify Manager:
    • Docker container tag: onap/org.onap.dcaegen2.deployments.cm-container:1.6.2

    • Description: DCAE’s Cloudify Manager container is based on Cloudify Manager Community Version 19.01.24, which is based on Cloudify Manager 4.5.

  • K8S Bootstrap container:
    • Docker container tag: onap/org.onap.dcaegen2.deployments.k8s-bootstrap-container:1.4.18

    • Description: K8s bootstrap container updated to include new plugin and remove DCAE Controller components which have been migrated to Helm chart.

  • Configuration Binding Service:
    • Docker container tag: onap/org.onap.dcaegen2.platform.configbinding.app-app:2.3.0

    • Description: Code optimization and bug fixes

  • Deployment Handler
    • Docker container image tag: onap/org.onap.dcaegen2.platform.deployment-handler:4.0.1

    • Include updates for health and service endpoint check and bug fixes

  • Policy Handler
    • Docker container image tag: onap/org.onap.dcaegen2.platform.policy-handler:5.0.0

    • Description: Policy Handler supports the new lifecycle API’s from Policy framework

  • Service Change Handler
    • Docker container image tag: onap/org.onap.dcaegen2.platform.servicechange-handler:1.1.5

    • Description: No update from R3

  • Inventory API
    • Docker container image tag: onap/org.onap.dcaegen2.platform.inventory-api:3.2.0

    • Description: Refactoring and updates for health and service endpoint check

  • VES Collector
    • Docker container image tag: onap/org.onap.dcaegen2.collectors.ves.vescollector:1.4.5

    • Description : Authentication enhancement, refactoring and bug-fixes

  • Threshold Crossing Analytics
    • Docker container image tag: onap/org.onap.dcaegen2.deployments.tca-cdap-container:1.1.2

    • Description: Config updates. Replaced Hadoop VM Cluster based file system with regular host file system; repackaged full TCA-CDAP stack into Docker container; transactional state separation from TCA in-memory to off-node Redis cluster for supporting horizontal scaling.

  • DataFile Collector
    • Docker container tag: onap/org.onap.dcaegen2.collectors.datafile.datafile-app-server:1.1.3

    • Description : Code optimization, bug fixes, logging and performance improvement

  • PNF Registrator handler
    • Docker container tag: onap/org.onap.dcaegen2.services.prh.prh-app-server:1.2.4

    • Description : Code optimization, SDK integration, PNF-UPDATE flow support

  • HV-VES Collector
    • Docker container tag: onap/org.onap.dcaegen2.collectors.hv-ves.hv-collector-main:1.1.0

    • Description : Code optimization, bug fixes, and enables SASL for kafka interface

  • SNMP Trap Collector
    • Docker container tag: onap/org.onap.dcaegen2.collectors.snmptrap:1.4.0

    • Description : Code coverage improvements

Version: 3.0.1

Release Date

2019-01-31

DCAE R3 Maintenance release includes following fixes

Bug Fixes

  • DataFileCollector
    • DCAEGEN2-940 Larger files of size 100Kb publish to DR

    • DCAEGEN2-941 DFC error after running over 12 hours

    • DCAEGEN2-1001 Multiple Fileready notification not handled

  • HighVolume VES Collector (protobuf/tcp)
    • DCAEGEN2-976 HV-VES not fully compliant with RTPM protocol (issue with CommonEventHeader.sequence)

  • VESCollector (http)
    • DCAEGEN2-1035 Issue with VES batch event publish

  • Heat deployment
    • DCAEGEN2-1007 Removing obsolete services configuration

The following containers are updated in R3.0.1

  • DataFile Collector
    • Docker container tag: onap/org.onap.dcaegen2.collectors.datafile.datafile-app-server:1.0.5

  • HV-VES Collector
    • Docker container tag: onap/org.onap.dcaegen2.collectors.hv-ves.hv-collector-main:1.0.2

  • VES Collector
    • Docker container tag: onap/org.onap.dcaegen2.collectors.ves.vescollector:1.3.2

Known Issues

  • An issue related to VESCollector basic authentication was noted and is tracked under DCAEGEN2-1130. This configuration is not enabled by default for R3.0.1; the fix will be handled in Dublin

  • Certificates under onap/org.onap.dcaegen2.deployments.tls-init-container:1.0.0 expired in March 2019, impacting CL deployment from CLAMP. Follow the workaround below to update the certificate:

    kubectl get deployments -n onap | grep deployment-handler
    kubectl edit deployment -n onap dev-dcaegen2-dcae-deployment-handler
    # In the editor, search for the tag onap/org.onap.dcaegen2.deployments.tls-init-container:1.0.0
    # and change it to onap/org.onap.dcaegen2.deployments.tls-init-container:1.0.3

Version: 3.0.0

Release Date

2018-11-30

New Features

DCAE R3 improves upon the previous release with the following new features:

  • All DCAE R3 components are delivered as Docker container images. The list of components is as follows.
    • Platform components
      • Cloudify Manager

      • Bootstrap container

      • Configuration Binding Service

      • Deployment Handler

      • Policy Handler

      • Service Change Handler

      • Inventory API

    • Service components
      • VES Collector

      • SNMP Collector

      • Threshold Crossing Analytics

      • Holmes Rule Management *

      • Holmes Engine Management *

    • Additional resources that DCAE utilizes:
      • Postgres Database

      • Redis Cluster Database

      • Consul Cluster

    Notes:

    * These components are delivered by the Holmes project.

  • DCAE R3 supports both OpenStack Heat Orchestration Template based deployment and OOM (Kubernetes) based deployment.

    • Under Heat based deployment all DCAE component containers are deployed onto a single Docker host VM that is launched from an OpenStack Heat Orchestration Template as part of “stack creation”.

    • Under OOM (Kubernetes) deployment all DCAE component containers are deployed as Kubernetes Pods/Deployments/Services into Kubernetes cluster.

  • DCAE R3 includes a new Cloudify Manager plugin (k8splugin) that is capable of expanding a Blueprint node specification written for Docker container to a full Kubernetes specification, with additional enhancements such as replica scaling, sidecar for logging to ONAP ELK stack, registering services to MSB, etc.

  • All DCAE components are designed to support platform maturity requirements.

Source Code

The source code of DCAE components is released under the following repositories on gerrit.onap.org:
  • dcaegen2

  • dcaegen2.analytics

  • dcaegen2.analytics.tca

  • dcaegen2.collectors

  • dcaegen2.collectors.snmptrap

  • dcaegen2.collectors.ves

  • dcaegen2.collectors.hv-ves

  • dcaegen2.collectors.datafile

  • dcaegen2.deployments

  • dcaegen2.platform

  • dcaegen2.platform.blueprints

  • dcaegen2.platform.cli

  • dcaegen2.platform.configbinding

  • dcaegen2.platform.deployment-handler

  • dcaegen2.platform.inventory-api

  • dcaegen2.platform.plugins

  • dcaegen2.platform.policy-handler

  • dcaegen2.platform.servicechange-handler

  • dcaegen2.services.heartbeat

  • dcaegen2.services.mapper

  • dcaegen2.services.prh

  • dcaegen2.utils

Bug Fixes

Known Issues

  • DCAE utilizes Cloudify Manager as its declarative model based resource deployment engine. Cloudify Manager is an open source upstream technology provided by Cloudify Inc. as a Docker image. DCAE R3 does not provide additional enhancements towards Cloudify Manager’s platform maturity.

Security Notes

DCAE code has been formally scanned during build time using NexusIQ, and all Critical vulnerabilities have been addressed; items that remain open have been assessed for risk and determined to be false positives. The DCAE open Critical security vulnerabilities and their risk assessment have been documented as part of the project.


New Component Notes

The following components are introduced in R3:

  • DataFile Collector
    • Docker container tag: onap/org.onap.dcaegen2.collectors.datafile.datafile-app-server:1.0.4

    • Description : Bulk data file collector to fetch non-realtime PM data

  • PNF Registrator handler
    • Docker container tag: onap/org.onap.dcaegen2.services.prh.prh-app-server:1.1.1

    • Description : Receives VES registration events and updates AAI and SO

  • HV-VES Collector
    • Docker container tag: onap/org.onap.dcaegen2.collectors.hv-ves.hv-collector-main:1.0.0

    • Description : High Volume VES Collector for fetching real-time PM measurement data

  • SNMP Trap Collector
    • Docker container tag: onap/org.onap.dcaegen2.collectors.snmptrap:1.4.0

    • Description : Receives SNMP traps and publishes them to a message router (DMAAP/MR) in JSON structure
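
For illustration, the published traps can be read back from Message Router using its REST subscription API (the router address, topic name, and consumer group/id below are illustrative assumptions):

    curl 'http://message-router:3904/events/unauthenticated.ONAP-COLLECTOR-SNMPTRAP/group1/consumer1?timeout=15000'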

Upgrade Notes

The following components are upgraded from R2:
  • Cloudify Manager:
    • Docker container tag: onap/org.onap.dcaegen2.deployments.cm-container:1.4.2

    • Description: R3 DCAE’s Cloudify Manager container is based on Cloudify Manager Community Version 18.7.23, which is based on Cloudify Manager 4.3.

  • Bootstrap container:
    • Docker container tag: onap/org.onap.dcaegen2.deployments.k8s-bootstrap-container:1.4.5

    • Description: R3 DCAE no longer uses the bootstrap container for Heat based deployment; deployment is done through cloud-init scripts and docker-compose specifications. The bootstrap container is for OOM (Kubernetes) based deployment.

  • Configuration Binding Service:
    • Docker container tag: onap/org.onap.dcaegen2.platform.configbinding.app-app:2.2.3

    • Description: Configuration Binding Service now supports the new configuration policy format and TLS

  • Deployment Handler
    • Docker container image tag: onap/org.onap.dcaegen2.platform.deployment-handler:3.0.3

  • Policy Handler
    • Docker container image tag: onap/org.onap.dcaegen2.platform.policy-handler:4.4.0

    • Description: Policy Handler now supports the new configuration policy format and TLS

  • Service Change Handler
    • Docker container image tag: onap/org.onap.dcaegen2.platform.servicechange-handler:1.1.5

    • Description: Refactoring.

  • Inventory API
    • Docker container image tag: onap/org.onap.dcaegen2.platform.inventory-api:3.0.4

    • Description: Refactoring.

  • VES Collector
    • Docker container image tag: onap/org.onap.dcaegen2.collectors.ves.vescollector:1.3.1

    • Description : Refactoring

  • Threshold Crossing Analytics
    • Docker container image tag: onap/org.onap.dcaegen2.deployments.tca-cdap-container:1.1.0

    • Description: Replaced Hadoop VM Cluster based file system with regular host file system; repackaged full TCA-CDAP stack into Docker container; transactional state separation from TCA in-memory to off-node Redis cluster for supporting horizontal scaling.

Version: 2.0.0

Release Date

2018-06-07

New Features

DCAE R2 improves upon the previous release with the following new features:

  • All DCAE R2 components are delivered as Docker container images. The list of components is as follows.
    • Platform components
      • Cloudify Manager

      • Bootstrap container

      • Configuration Binding Service

      • Deployment Handler

      • Policy Handler

      • Service Change Handler

      • Inventory API

    • Service components
      • VES Collector

      • SNMP Collector

      • Threshold Crossing Analytics

      • Holmes Rule Management *

      • Holmes Engine Management *

    • Additional resources that DCAE utilizes:
      • Postgres Database

      • Redis Cluster Database

      • Consul Cluster

    Notes:

    * These components are delivered by the Holmes project and used as a DCAE analytics component in R2.

  • DCAE R2 supports both OpenStack Heat Orchestration Template based deployment and OOM (Kubernetes) based deployment.

    • Under Heat based deployment all DCAE component containers are deployed onto a single Docker host VM that is launched from an OpenStack Heat Orchestration Template as part of “stack creation”.

    • Under OOM (Kubernetes) deployment all DCAE component containers are deployed as Kubernetes Pods/Deployments/Services into Kubernetes cluster.

  • DCAE R2 includes a new Cloudify Manager plugin (k8splugin) that is capable of expanding a Blueprint node specification written for Docker container to a full Kubernetes specification, with additional enhancements such as replica scaling, sidecar for logging to ONAP ELK stack, registering services to MSB, etc.

  • All DCAE components are designed to support platform maturity requirements.
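
The following is a minimal sketch, not the real k8splugin, of the kind of expansion it performs: a Docker-oriented blueprint node (image, replicas, ports) becomes a complete Kubernetes Deployment manifest. The keys in node_properties are illustrative assumptions, not the plugin's actual blueprint schema.

    # A sketch of expanding a Docker-style node spec into a Kubernetes
    # Deployment manifest. The node_properties keys are assumptions.
    def expand_to_deployment(node_properties: dict) -> dict:
        name = node_properties["name"]
        return {
            "apiVersion": "apps/v1",
            "kind": "Deployment",
            "metadata": {"name": name},
            "spec": {
                "replicas": node_properties.get("replicas", 1),  # replica scaling
                "selector": {"matchLabels": {"app": name}},
                "template": {
                    "metadata": {"labels": {"app": name}},
                    "spec": {
                        "containers": [{
                            "name": name,
                            "image": node_properties["image"],
                            "ports": [{"containerPort": p}
                                      for p in node_properties.get("ports", [])],
                        }],
                    },
                },
            },
        }

    # Example: a collector node spec becomes a two-replica Deployment.
    manifest = expand_to_deployment({
        "name": "ves-collector",
        "image": "onap/org.onap.dcaegen2.collectors.ves.vescollector:1.3.1",
        "replicas": 2,
        "ports": [8080],
    })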

Source Code

Source code of DCAE components is released under the following repositories on gerrit.onap.org:
  • dcaegen2

  • dcaegen2.analytics

  • dcaegen2.analytics.tca

  • dcaegen2.collectors

  • dcaegen2.collectors.snmptrap

  • dcaegen2.collectors.ves

  • dcaegen2.deployments

  • dcaegen2.platform

  • dcaegen2.platform.blueprints

  • dcaegen2.platform.cli

  • dcaegen2.platform.configbinding

  • dcaegen2.platform.deployment-handler

  • dcaegen2.platform.inventory-api

  • dcaegen2.platform.plugins

  • dcaegen2.platform.policy-handler

  • dcaegen2.platform.servicechange-handler

  • dcaegen2.services.heartbeat

  • dcaegen2.services.mapper

  • dcaegen2.services.prh

  • dcaegen2.utils

Bug Fixes

Known Issues

  • DCAE utilizes Cloudify Manager as its declarative model based resource deployment engine. Cloudify Manager is an open source upstream technology provided by Cloudify Inc. as a Docker image. DCAE R2 does not provide additional enhancements towards Cloudify Manager’s platform maturity.

Security Notes

DCAE code has been formally scanned during build time using NexusIQ, and all Critical vulnerabilities have been addressed; items that remain open have been assessed for risk and determined to be false positives. The open DCAE Critical security vulnerabilities and their risk assessments have been documented as part of the project.

Upgrade Notes

The following components are upgraded from R1:
  • Cloudify Manager:
    • Docker container tag: onap/org.onap.dcaegen2.deployments.cm-container:1.3.0

    • Description: R2 DCAE’s Cloudify Manager container is based on Cloudify Manager Community Version 18.2.28, which is based on Cloudify Manager 4.3.

  • Bootstrap container:
    • Docker container tag: onap/org.onap.dcaegen2.deployments.k8s-bootstrap-container:1.1.11

    • Description: R2 DCAE no longer uses the bootstrap container for Heat-based deployment; deployment is done through cloud-init scripts and docker-compose specifications. The bootstrap container is used only for OOM (Kubernetes) based deployment.

  • Configuration Binding Service:
    • Docker container tag: onap/org.onap.dcaegen2.platform.configbinding:2.1.5

    • Description: Configuration Binding Service now supports the new configuration policy format.

  • Deployment Handler
    • Docker container image tag: onap/org.onap.dcaegen2.platform.deployment-handler:2.1.5

  • Policy Handler
    • Docker container image tag: onap/org.onap.dcaegen2.platform.policy-handler:2.4.5

    • Description: Policy Handler now supports the new configuration policy format.

  • Service Change Handler
    • Docker container image tag: onap/org.onap.dcaegen2.platform.servicechange-handler:1.1.4

    • Description: Refactoring.

  • Inventory API
    • Docker container image tag: onap/org.onap.dcaegen2.platform.inventory-api:3.0.1

    • Description: Refactoring.

  • VES Collector
    • Docker container image tag: onap/org.onap.dcaegen2.collectors.ves.vescollector:1.2.0

  • Threshold Crossing Analytics
    • Docker container image tag: onap/org.onap.dcaegen2.deployments.tca-cdap-container:1.1.0

    • Description: Replaced the Hadoop VM cluster based file system with the regular host file system; repackaged the full TCA-CDAP stack into a Docker container; and moved transactional state from TCA in-memory storage to an off-node Redis cluster to support horizontal scaling.

Version: 1.0.0

Release Date

2017-11-16

New Features

DCAE is the data collection and analytics sub-system of ONAP. Under ONAP Release 1 the DCAE sub-system includes both platform components and DCAE service components. Collectively, the ONAP R1 DCAE components support the data collection and analytics functions for the R1 use cases, i.e. vFW, vDNS, vCPE, and vVoLTE.

Specifically, DCAE R1 includes the following components:

  • Core platform
    • Cloudify manager

    • Consul cluster

  • Extended platform
    • Platform component docker host

    • Service component docker host

    • CDAP cluster

    • PostgreSQL database (*)

  • Platform docker container components
    • Configuration binding service

    • Deployment handler

    • Service change handler

    • Inventory

    • Policy handler

    • CDAP broker

  • Service components
    • Docker container components
      • VNF Event Streaming (VES) collector

      • Holmes (engine and rule management) **

    • CDAP analytics component
      • Threshold Crossing Analytics (TCA)

(*) Note: This component is delivered under the CCSDK project and is deployed by DCAE in a single-VM configuration as a shared PostgreSQL database for the R1 demos. (CCSDK PostgreSQL supports other deployment configurations not used in the R1 demos.)

(**) Note: This component is delivered under the Holmes project and used as a DCAE analytics component in R1.

Source code of DCAE is released under the following repositories on gerrit.onap.org:

  • dcaegen2

  • dcaegen2/analytics

  • dcaegen2/analytics/tca

  • dcaegen2/collectors

  • dcaegen2/collectors/snmptrap

  • dcaegen2/collectors/ves

  • dcaegen2/deployments

  • dcaegen2/platform

  • dcaegen2/platform/blueprints

  • dcaegen2/platform/cdapbroker

  • dcaegen2/platform/cli

  • dcaegen2/platform/configbinding

  • dcaegen2/platform/deployment-handler

  • dcaegen2/platform/inventory-api

  • dcaegen2/platform/plugins

  • dcaegen2/platform/policy-handler

  • dcaegen2/platform/servicechange-handler

  • dcaegen2/utils

Bug Fixes

This is the initial release.

Known Issues

  • Need to test and integrate DCAE in OpenStack environments other than Intel/Windriver Pod25.

  • Need to provide a development (dev) configuration of DCAE.

Security Issues

  • The DCAE Bootstrap container needs a secret key for accessing the VMs that it launches. This key is currently passed in as a Heat template parameter. Tracked by JIRA issue DCAEGEN2-178.

  • The RESTful API calls are generally not secure: they are made either over HTTP, or over HTTPS without certificate verification. Once there is an ONAP-wide solution for handling certificates, DCAE will switch to verified HTTPS (see the sketch after this list).
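
The following is a minimal sketch of the difference this issue describes, using the Python requests library; the endpoint URL and CA bundle path are hypothetical placeholders, not real DCAE values.

    # Both the endpoint URL and the CA bundle path below are hypothetical.
    import requests

    URL = "https://dcae-inventory:8443/dcae-service-types"

    # Insecure pattern described above: HTTPS without certificate
    # verification, which is open to man-in-the-middle attacks.
    insecure = requests.get(URL, verify=False)

    # Hardened pattern once certificates are managed ONAP-wide: verify
    # the server against a trusted CA bundle.
    verified = requests.get(URL, verify="/opt/dcae/cacert.pem")
    print(verified.status_code)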

Upgrade Notes

This is the initial release.

Deprecation Notes

A GEN1 DCAE sub-system implementation exists in the pre-R1 ONAP Gerrit system. GEN1 DCAE is deprecated as of the R1 release; the DCAE included in ONAP R1 is also known as DCAE GEN2. The following Gerrit repos are voided and have already been locked as read-only.

  • dcae

  • dcae/apod

  • dcae/apod/analytics

  • dcae/apod/buildtools

  • dcae/apod/cdap

  • dcae/collectors

  • dcae/collectors/ves

  • dcae/controller

  • dcae/controller/analytics

  • dcae/dcae-inventory

  • dcae/demo

  • dcae/demo/startup

  • dcae/demo/startup/aaf

  • dcae/demo/startup/controller

  • dcae/demo/startup/message-router

  • dcae/dmaapbc

  • dcae/operation

  • dcae/operation/utils

  • dcae/orch-dispatcher

  • dcae/pgaas

  • dcae/utils

  • dcae/utils/buildtools

  • ncomp

  • ncomp/cdap

  • ncomp/core

  • ncomp/docker

  • ncomp/maven

  • ncomp/openstack

  • ncomp/sirius

  • ncomp/sirius/manager

  • ncomp/utils

Other

The SNMP trap collector is delivered as seed code only.
