MultiCloud Plugin for Wind River Titanium Cloud

The following guides describe tasks that an ONAP user may need to perform when operating ONAP to orchestrate VNFs onto an instance of Wind River Titanium Cloud.

Supported Features

Proxy endpoints for OpenStack services

The MultiCloud plugin for Wind River Titanium Cloud supports proxying of OpenStack services. The catalog of proxied services is exactly the same as the catalog of OpenStack services.
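For illustration, a proxied OpenStack service can be reached through the plugin's northbound endpoint instead of the OpenStack endpoint directly. The call below is a minimal sketch only: it assumes the multicloud-titaniumcloud v1 identity proxy path, the MSB variables defined in the tutorial later in this guide, and a cloud region CloudOwner/RegionOne; the request body is an assumption based on the broker using the credentials registered in AAI, so consult the MultiCloud API reference of your release.

### sketch: request a Keystone token via the proxied identity service
### (path and body are assumptions; $ONAP_MSB_IP/$ONAP_MSB_PORT are defined in the tutorial below)
curl -X POST \
  http://$ONAP_MSB_IP:$ONAP_MSB_PORT/api/multicloud-titaniumcloud/v1/CloudOwner/RegionOne/identity/v3/auth/tokens \
  -H 'Content-Type: application/json' \
  -d '{"auth": {"scope": {"project": {"name": "<your openstack project name>"}}}}'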

VFC specific Northbound API

The MultiCloud plugin for Wind River Titanium Cloud supports VFC through the legacy APIs inherited from the OPEN-O MultiVIM project.

Support enhanced SO/OOF workflow

The MultiCloud plugin for Wind River Titanium Cloud supports the infra_workload APIs from the Casablanca Release.

These APIs enhance the workflow of Heat-based VNF orchestration by:

  • Offloading Heat template/parameter updating from SO to MultiCloud plugins

  • Enabling the “Centralized Representation of Cloud Regions”

  • Automating the heatbridge action by updating AAI with deployed Heat stack resources

A sketch of invoking the infra_workload API is shown after this list.
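The infra_workload payload is release-specific. The request below is only a hedged sketch: the v1 path follows the endpoint patterns used elsewhere in this guide, while the body fields ("generic-vnf-id", "vf-module-id", "template_type", "template_data") are illustrative assumptions rather than the authoritative schema.

### sketch: ask MultiCloud to instantiate a Heat workload on CloudOwner/RegionOne
### (body fields are assumptions; $ONAP_MSB_IP/$ONAP_MSB_PORT are defined in the tutorial below)
curl -X POST \
  http://$ONAP_MSB_IP:$ONAP_MSB_PORT/api/multicloud-titaniumcloud/v1/CloudOwner/RegionOne/infra_workload \
  -H 'Content-Type: application/json' \
  -d '{
      "generic-vnf-id": "<generic vnf uuid in AAI>",
      "vf-module-id": "<vf module uuid in AAI>",
      "template_type": "heat",
      "template_data": {
          "template": "<heat template body>",
          "files": {},
          "parameters": {}
      }
  }'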

Support OOF

The MultiCloud plugin for Wind River Titanium Cloud supports OOF capacity checks from the Beijing Release.
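OOF queries MultiCloud to check whether a cloud region can host a candidate VNF. The call below is a hypothetical sketch only: both the capacity_check path and the resource-requirement payload are assumptions and may differ in your release.

### sketch: ask MultiCloud whether CloudOwner_RegionOne can host the requested resources
### (endpoint and payload are assumptions; check the MultiCloud API reference of your release)
curl -X POST \
  http://$ONAP_MSB_IP:$ONAP_MSB_PORT/api/multicloud-titaniumcloud/v0/CloudOwner_RegionOne/capacity_check \
  -H 'Content-Type: application/json' \
  -d '{"vCPU": 4, "Memory": 4096, "Storage": 40}'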

Conform to Consistent ID of a Cloud Region

Northbound API v1 supports the composite key {cloud-owner}/{cloud-region-id} as the ID of a cloud region.
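For example, the registry API of this plugin can be addressed either in the legacy v0 style (cloud-owner and cloud-region-id joined by an underscore) or in the v1 style (the composite key as separate path segments), matching the endpoint patterns used in the tutorials below; the base path may be routed via MSB as either /api/multicloud or /api/multicloud-titaniumcloud.

### legacy v0 style: {cloud-owner}_{cloud-region-id}
http://$ONAP_MSB_IP:$ONAP_MSB_PORT/api/multicloud-titaniumcloud/v0/CloudOwner_RegionOne/registry
### v1 style: {cloud-owner}/{cloud-region-id}
http://$ONAP_MSB_IP:$ONAP_MSB_PORT/api/multicloud-titaniumcloud/v1/CloudOwner/RegionOne/registry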

Decoupling between cloud-region-id and OpenStack Region ID

{cloud-region-id} is populated by users while on-boarding a cloud region. In the ONAP Amsterdam and Beijing releases, it had to be the same as the “OpenStack Region ID” of the represented OpenStack instance. From the Casablanca release, this restriction has been removed.

Backward compatibility is maintained, so users can still populate {cloud-region-id} with the “OpenStack Region ID”.

Users can also specify the “OpenStack Region ID” while on-boarding a cloud region from a multi-region instance.

Note

There are still restrictions on populating {cloud-owner} and {cloud-region-id}; please refer to the section “On-board a Cloud Region”.

Support on-boarding of Multi-Region instances

Multiple OpenStack instances federated with the “multi-region” feature can be on-boarded into ONAP with a single click. The ONAP user only needs to register the primary region into ONAP, and the MultiCloud plugin for Wind River Titanium Cloud will discover and on-board all other secondary regions automatically.

This feature also supports the Titanium Cloud “Distributed Cloud” feature, on-boarding all subclouds with a single click.

This feature can be controlled by the user through configuration options while on-boarding a cloud region, as illustrated below.
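Concretely, the option is carried in the "cloud-extra-info" attribute of the cloud region object during on-boarding, as shown in the on-boarding curl commands later in this guide:

### enable multi-region discovery and pin the OpenStack Region ID of the primary region
"cloud-extra-info": "{\"multi-region-discovery\": true, \"openstack-region-id\": \"RegionOne\"}"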

HPA discovery

The MultiCloud plugin for Wind River Titanium Cloud supports discovery and registration into AAI of the following HPA capabilities: CPU Pinning, HugePages, …

Cloud Region decommission

The MultiCloud plugin for Wind River Titanium Cloud supports the decommissioning of a cloud region with a single API request.

This API is not yet integrated with ESR GUI portal.

VESagent

The MultiCloud plugin for Wind River Titanium Cloud supports the VESagent, which can be configured to monitor VM status and assert or abate fault events to the VES collector for closed-loop control over infrastructure resources.

LOGGING

The MultiCloud plugin for Wind River Titanium Cloud supports centralized logging with OOM-deployed ONAP.

Supported Use Cases

vFW/vDNS

The vFW/vDNS are VNFs modeled with Heat templates. The MultiCloud plugin for Wind River Titanium Cloud has been tested with the vFW/vDNS use cases since the Amsterdam Release.

vCPE

vCPE (HEAT VNF) without HPA orchestration

vCPE is a VNF modeled with Heat templates. The basic use case from the Amsterdam Release does not include any HPA orchestration.

vCPE (HEAT VNF) with HPA orchestration

From the Beijing Release, a variation of the vCPE use case includes HPA orchestration.

vCPE (TOSCA VNF) with HPA orchestration

From the Casablanca Release (with MultiCloud release version 1.2.2), the vCPE use case expands to support TOSCA VNFs and includes HPA orchestration.

The MultiCloud plugin for Wind River Titanium Cloud has been tested with both cases.

vVoLTE

The MultiCloud plugin for Wind River Titanium Cloud has been tested with the vVoLTE use case.

Known Issues:

1. MULTICLOUD-359: The image uploading API from the VFC-specific NBI does not work with large image files.

Tutorial: Onboard instance of Wind River Titanium Cloud

Prerequisites

Collect ONAP Access Info

With Heat based ONAP:
export ONAP_AAI_IP=<floating IP of VM with name "onap-aai-inst1">
export ONAP_AAI_PORT=8443
export ONAP_MSB_IP=<floating IP of VM with name "onap-multi-service">
export ONAP_MSB_PORT=80
With OOM based ONAP:
export ONAP_AAI_IP=<floating IP of VM with name "k8s_1">
export ONAP_AAI_PORT=30233
export ONAP_MSB_IP=<floating IP of VM with name "k8s_1">
export ONAP_MSB_PORT=30280

Determine the ID of the cloud region:

A cloud region is ONAP’s representation of the on-boarded VIM/Cloud instance (a Titanium Cloud instance in this case). The ID of a cloud region is specified by the ONAP user while on-boarding the VIM/Cloud instance; this ID is internal to ONAP only and comprises the composite keys “cloud-owner” and “cloud-region-id”.

export CLOUD_OWNER="CloudOwner"
export CLOUD_REGION_ID="RegionOne"
Notes:

1. It is suggested to populate “cloud-owner” with “CloudOwner”. The restriction is that the underscore “_” cannot be used.

2. The ONAP Amsterdam Release had a restriction that “cloud-region-id” must be the same as the OpenStack Region ID in case the cloud region represents an OpenStack instance. From the Casablanca Release, this restriction has been removed by the MultiCloud plugin for Wind River Titanium Cloud; it is no longer mandatory to populate “cloud-region-id” with the OpenStack Region ID.

The geographic location of the cloud region

Make sure there is a complex object to represent the geographic location of the cloud region. In case you need to create a complex object “clli1”:

curl -X PUT \
https://$ONAP_AAI_IP:$ONAP_AAI_PORT/aai/v13/cloud-infrastructure/complexes/complex/clli1 \
-H 'Accept: application/json' \
-H 'Authorization: Basic QUFJOkFBSQ==' \
-H 'Cache-Control: no-cache' \
-H 'Content-Type: application/json' \
-H 'Postman-Token: 2b272126-aa65-41e6-aa5d-46bc70b9eb4f' \
-H 'Real-Time: true' \
-H 'X-FromAppId: jimmy-postman' \
-H 'X-TransactionId: 9999' \
-d '{
    "physical-location-id": "clli1",
    "data-center-code": "example-data-center-code-val-5556",
    "complex-name": "clli1",
    "identity-url": "example-identity-url-val-56898",
    "physical-location-type": "example-physical-location-type-val-7608",
    "street1": "example-street1-val-34205",
    "street2": "example-street2-val-99210",
    "city": "Beijing",
    "state": "example-state-val-59487",
    "postal-code": "100000",
    "country": "example-country-val-94173",
    "region": "example-region-val-13893",
    "latitude": "39.9042",
    "longitude": "106.4074",
    "elevation": "example-elevation-val-30253",
    "lata": "example-lata-val-46073"
    }'

On-board Wind River Titanium Cloud Instance

You can on-board an instance of Wind River Titanium Cloud in either of the ways below.

With curl commands

Step 1: Create a cloud region to represent the instance
### on-board a single OpenStack region
### you can specify the Openstack Region ID by extra inputs: {"openstack-region-id":"RegionOne"}


curl -X PUT \
https://$ONAP_AAI_IP:$ONAP_AAI_PORT/aai/v13/cloud-infrastructure/cloud-regions/cloud-region/${CLOUD_OWNER}/${CLOUD_REGION_ID} \
-H 'Accept: application/json' \
-H 'Authorization: Basic QUFJOkFBSQ==' \
-H 'Cache-Control: no-cache' \
-H 'Content-Type: application/json' \
-H 'Postman-Token: 8b9b95ae-91d6-4436-90fa-69cb4d2db99c' \
-H 'Real-Time: true' \
-H 'X-FromAppId: jimmy-postman' \
-H 'X-TransactionId: 9999' \
-d '{
    "cloud-owner": "<${CLOUD_OWNER}>",
    "cloud-region-id": "<${CLOUD_REGION_ID}>",
    "cloud-type": "openstack",
    "owner-defined-type": "t1",
    "cloud-region-version": "titanium_cloud",
    "complex-name": "clli1",
    "cloud-zone": "CloudZone",
    "sriov-automation": false,
    "identity-url": "WillBeUpdatedByMultiCloud",
    "cloud-extra-info":"{\"openstack-region-id\":\"RegionOne\"}"
    "esr-system-info-list": {
        "esr-system-info": [
            {
            "esr-system-info-id": "<random UUID, e.g. 5c85ce1f-aa78-4ebf-8d6f-4b62773e9bde>",
            "service-url": "http://<your openstack keystone endpoint, e.g. http://10.12.25.2:5000/v3>",
            "user-name": "<your openstack user>",
            "password": "<your openstack password>",
            "system-type": "VIM",
            "ssl-insecure": true,
            "cloud-domain": "Default",
            "default-tenant": "<your openstack project name>",
            "system-status": "active"
            }
        ]
      }
    }'
### on-board multiple OpenStack regions with a single request by indicating {"multi-region-discovery":true}
### you can specify the Openstack Region ID by extra inputs: {"openstack-region-id":"RegionOne"}

curl -X PUT \
https://$ONAP_AAI_IP:$ONAP_AAI_PORT/aai/v13/cloud-infrastructure/cloud-regions/cloud-region/${CLOUD_OWNER}/${CLOUD_REGION_ID} \
-H 'Accept: application/json' \
-H 'Authorization: Basic QUFJOkFBSQ==' \
-H 'Cache-Control: no-cache' \
-H 'Content-Type: application/json' \
-H 'Postman-Token: 8b9b95ae-91d6-4436-90fa-69cb4d2db99c' \
-H 'Real-Time: true' \
-H 'X-FromAppId: jimmy-postman' \
-H 'X-TransactionId: 9999' \
-d '{
    "cloud-owner": "<${CLOUD_OWNER}>",
    "cloud-region-id": "<${CLOUD_REGION_ID}>",
    "cloud-type": "openstack",
    "owner-defined-type": "t1",
    "cloud-region-version": "titanium_cloud",
    "complex-name": "clli1",
    "cloud-zone": "CloudZone",
    "sriov-automation": false,
    "identity-url": "WillBeUpdatedByMultiCloud",
    "cloud-extra-info":"{\"multi-region-discovery\":true, \"openstack-region-id\":\"RegionOne\"}"
    "esr-system-info-list": {
        "esr-system-info": [
            {
            "esr-system-info-id": "<random UUID, e.g. 5c85ce1f-aa78-4ebf-8d6f-4b62773e9bde>",
            "service-url": "http://<your openstack keystone endpoint, e.g. http://10.12.25.2:5000/v3>",
            "user-name": "<your openstack user>",
            "password": "<your openstack password>",
            "system-type": "VIM",
            "ssl-insecure": true,
            "cloud-domain": "Default",
            "default-tenant": "<your openstack project name>",
            "system-status": "active"
            }
        ]
      }
    }'
Step 2: Associate the cloud region with the location object

This association between the cloud region and the location is required for OOF homing/placement of VNFs.

curl -X PUT \
https://$ONAP_AAI_IP:$ONAP_AAI_PORT/aai/v13/cloud-infrastructure/cloud-regions/cloud-region/${CLOUD_OWNER}/${CLOUD_REGION_ID}/relationship-list/relationship \
-H 'Authorization: Basic QUFJOkFBSQ==' \
-H 'Cache-Control: no-cache' \
-H 'Content-Type: application/json' \
-H 'Postman-Token: 7407d60c-8ce7-45de-ada3-4a7a9e88ebd4' \
-H 'Real-Time: true' \
-H 'X-FromAppId: jimmy-postman' \
-H 'X-TransactionId: 9999' \
-d '{
    "related-to": "complex",
    "related-link": "/aai/v13/cloud-infrastructure/complexes/complex/clli1",
    "relationship-data": [
        {
        "relationship-key": "complex.physical-location-id",
        "relationship-value": "clli1"
        }
        ]
    }'
Step 3: Trigger the MultiCloud Plugin registration process

Make sure to trigger the MultiCloud plugin to discover and register infrastructure resources, including HPA capabilities:

curl -X POST \
http://$ONAP_MSB_IP:$ONAP_MSB_PORT/api/multicloud/v0/${CLOUD_OWNER}_${CLOUD_REGION_ID}/registry \
-H 'Accept: application/json' \
-H 'Cache-Control: no-cache' \
-H 'Content-Type: application/json' \
-H 'Postman-Token: 8577e1cc-1038-471d-8b3b-d36fe44ae023'

With ESR GUI Portal

ESR will conduct all steps mentioned above with a single click.

The URL of the ESR GUI Portal is:

http://$ONAP_MSB_IP:$ONAP_MSB_PORT/iui/aai-esr-gui/extsys/vim/vimView.html

ESR VIM Register GUI Portal

Verification

You may want to verify that the cloud region was registered properly (with HPA information populated) to represent the instance of Wind River Titanium Cloud. You can do so with the curl command below:

curl -X GET \
https://$ONAP_AAI_IP:$ONAP_AAI_PORT/aai/v13/cloud-infrastructure/cloud-regions/cloud-region/${CLOUD_OWNER}/${CLOUD_REGION_ID}?depth=all \
-H 'Accept: application/json' \
-H 'Authorization: Basic QUFJOkFBSQ==' \
-H 'Cache-Control: no-cache' \
-H 'Content-Type: application/json' \
-H 'Postman-Token: 2899359f-871b-4e61-a307-ecf8b3144e3f' \
-H 'Real-Time: true' \
-H 'X-FromAppId: jimmy-postman' \
-H 'X-TransactionId: 9999'

Note:

The query of a cloud region above should return a comprehensive cloud region object; you should find “hpa-capabilities” under the flavor objects whose names are prefixed with “onap.”.
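A quick way to confirm this from the shell (a sketch only; it assumes a python interpreter is available for pretty-printing) is to re-run the query and filter for the HPA entries:

### pretty-print the cloud region object and search for HPA capability entries
curl -sk -H 'Accept: application/json' -H 'Authorization: Basic QUFJOkFBSQ==' \
  -H 'X-FromAppId: jimmy-postman' -H 'X-TransactionId: 9999' \
  "https://$ONAP_AAI_IP:$ONAP_AAI_PORT/aai/v13/cloud-infrastructure/cloud-regions/cloud-region/${CLOUD_OWNER}/${CLOUD_REGION_ID}?depth=all" \
  | python -m json.tool | grep -i "hpa"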

Tutorial: Cloud Region Decommission

The ESR GUI Portal cannot decommission a cloud region that has been updated by the MultiCloud Plugin for Wind River Titanium Cloud, and it does not yet request MultiCloud to help with that. So it is required to issue a REST API request to MultiCloud with a single curl command:

curl -X DELETE \
"http://$ONAP_MSB_IP:$ONAP_MSB_PORT/api/multicloud-titaniumcloud/v0/CloudOwner_RegionOne" \
-H 'Accept: application/json' \
-H 'Cache-Control: no-cache' \
-H 'Content-Type: application/json' \
-H 'Postman-Token: 8577e1cc-1038-471d-8b3b-d36fe44ae023'

Tutorial: Enable ONAP HPA Orchestration to Wind River Titanium Cloud

To fulfil the functional requirement of HPA enablement, MultiCloud plugin for Wind River Titanium Cloud expects the administrator to provision the Titanium Cloud instance conforming to certain conventions.

This tutorial demonstrates how to enable ONAP HPA orchestration to Wind River Titanium Cloud.

Architecture & Policies & Mappings

Please refer to the linked documentation for more architecture details.

Please refer to the linked documentation for more details on Policies & Mappings.

Provision Flavors

Configure OpenStack with proper flavors (with names prefixed by “onap.” to carry HPA information to ONAP). Example flavor:

nova flavor-create onap.hpa.medium 110 4096 0 6
#cpu pinning
nova flavor-key onap.hpa.medium set hw:cpu_policy=dedicated
nova flavor-key onap.hpa.medium set hw:cpu_thread_policy=prefer
#cpu topology
nova flavor-key onap.hpa.medium set hw:cpu_sockets=2
nova flavor-key onap.hpa.medium set hw:cpu_cores=4
nova flavor-key onap.hpa.medium set hw:cpu_threads=8
#hugepage
nova flavor-key onap.hpa.medium set hw:mem_page_size=large
#numa
nova flavor-key onap.hpa.medium set hw:numa_nodes=2
nova flavor-key onap.hpa.medium set hw:numa_cpus.0=0,1 hw:numa_cpus.1=2,3,4,5 hw:numa_mem.0=2048 hw:numa_mem.1=2048
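Before triggering MultiCloud registration, you may want to double-check that the extra specs were applied as intended; a quick way is to inspect the flavor with the standard OpenStack client (nova flavor-show onap.hpa.medium is the equivalent with the nova CLI used above):

### verify the HPA-related extra specs on the flavor
openstack flavor show onap.hpa.medium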

Access configuration of Titanium Cloud Instance

Collect the following information for on-boarding this cloud instance to ONAP:

your openstack project name
your openstack user
your openstack password
your openstack keystone endpoint
your openstack Region ID: e.g. RegionOne
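For example, you might capture these values as shell variables before composing the on-boarding request; the values below are hypothetical placeholders (the keystone endpoint reuses the example from the on-boarding tutorial), not defaults of any Titanium Cloud installation:

### hypothetical example values; substitute your own
export VIM_PROJECT_NAME=admin
export VIM_USERNAME=admin
export VIM_PASSWORD=<your openstack password>
export VIM_KEYSTONE_URL=http://10.12.25.2:5000/v3
export VIM_OS_REGION_ID=RegionOne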

On-board the Titanium Cloud instance

Now you can on-board this Titanium Cloud instance; make sure the MultiCloud registration process is triggered.

See Tutorial: Onboard instance of Wind River Titanium Cloud

Tutorial: VESagent configuration and Testing

The VESagent is an FCAPS relaying service offered by the MultiCloud Plugin for Wind River Titanium Cloud. It allows the user to monitor the status of specified VMs and report onset or abatement events of the fault “Fault_MultiCloud_VMFailure” to the VES collector.

VESagent provisioning APIs

### assume an OOM deployment with the endpoints below:

  • OOM k8s Node IP, e.g. 10.12.5.184

  • OOM k8s Node port for multicloud-titaniumcloud POD: 30294

  • On-boarded cloud region with {cloud-owner}/{cloud-region-id} : CloudOwner/pod01

  • VES collector endpoint: 10.12.6.79:8081

#!/bin/bash
export MC_EP_IP=10.12.5.184
export MC_EP_PORT=30294

export MC_EPv0=http://$MC_EP_IP:$MC_EP_PORT/api/multicloud-titaniumcloud/v0/CloudOwner_pod01
export MC_EPv1=http://$MC_EP_IP:$MC_EP_PORT/api/multicloud-titaniumcloud/v1/CloudOwner/pod01

1. Setup VESagent backlogs

Option 1: monitor all VMs of a tenant

curl -v -s -H "Content-Type: application/json" -d '{"vesagent_config":
     {"backlogs":[ {"domain":"fault","type":"vm","tenant":"VIM"}],
     "poll_interval_default":10,"ves_subscription":
     {"username":"admin","password":"admin","endpoint":"http://10.12.6.79:8081/eventListener/v5"}}}' \
      -X POST  $MC_EPv0/vesagent

Option 2: monitor specified VMs

### zdfw1lb01dns01, zdfw1lb01dns02
curl -v -s -H "Content-Type: application/json" -d '{"vesagent_config":
     {"backlogs":[ {"source":"zdfw1lb01dns01", "domain":"fault","type":"vm","tenant":"VIM"},
      {"source":"zdfw1lb01dns02", "domain":"fault","type":"vm","tenant":"VIM"}],
     "poll_interval_default":10,"ves_subscription":
     {"username":"admin","password":"admin","endpoint":"http://10.12.6.79:8081/eventListener/v5"}}}' \
     -X POST  $MC_EPv0/vesagent

2. Dump the VESagent backlogs

curl -v -s -H "Content-Type: application/json" -X GET  $MC_EPv0/vesagent

3. Delete the VESagent backlogs

curl -v -s -H "Content-Type: application/json" -X DELETE  $MC_EPv0/vesagent

VESagent exercises

Step 1: Monitor the DMaaP events

Subscribe to and keep polling the DMaaP topic “unauthenticated.SEC_FAULT_OUTPUT” with the curl command below:

curl -X GET \
      "http://$DMAAP_IP:3904/events/unauthenticated.SEC_FAULT_OUTPUT/EVENT-LISTENER-POSTMAN/304?timeout=6000&limit=10&filter=" \
      -H 'Cache-Control: no-cache' \
      -H 'Content-Type: application/json' \
      -H 'Postman-Token: 4e2e3589-d742-48c7-8d48-d1b3577df259' \
      -H 'X-FromAppId: 121' \
      -H 'X-TransactionId: 9999'

Step 2: Setup VESagent backlog

### zdfw1lb01dns01
curl -v -s -H "Content-Type: application/json" -d '{"vesagent_config":
     {"backlogs":[ {"source":"zdfw1lb01dns01", "domain":"fault","type":"vm","tenant":"VIM"}],
     "poll_interval_default":10,"ves_subscription":
     {"username":"admin","password":"admin","endpoint":"http://10.12.6.79:8081/eventListener/v5"}}}' \
     -X POST  $MC_EPv0/vesagent

Step 3: Simulate the Faults

Manually stop the monitored VMs, e.g. the VM named ‘zdfw1lb01dns01’.
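For instance, on the Titanium Cloud instance you can stop the VM with the standard OpenStack client (nova stop zdfw1lb01dns01 is the nova CLI equivalent):

### stop the monitored VM to simulate a failure
openstack server stop zdfw1lb01dns01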

Step 4: Observe DMaaP event: “Fault_MultiCloud_VMFailure”

Poll the subscribed DMaaP topic “unauthenticated.SEC_FAULT_OUTPUT” with the curl command; you should be able to observe the following VES fault event from DMaaP:

[

    "{\"event\":{\"commonEventHeader\":{\"startEpochMicrosec\":1537233558255872,\"sourceId\":\"8e606aa7-39c8-4df7-b2f4-1f6785b9f682\",\"eventId\":\"a236f561-f0fa-48a3-96cd-3a61ccfdf421\",\"reportingEntityId\":\"CloudOwner_pod01\",\"internalHeaderFields\":{\"collectorTimeStamp\":\"Tue, 09 18 2018 01:19:19 GMT\"},\"eventType\":\"\",\"priority\":\"High\",\"version\":3,\"reportingEntityName\":\"CloudOwner_pod01\",\"sequence\":0,\"domain\":\"fault\",\"lastEpochMicrosec\":1537233558255872,\"eventName\":\"Fault_MultiCloud_VMFailure\",\"sourceName\":\"zdfw1lb01dns01\"},\"faultFields\":{\"eventSeverity\":\"CRITICAL\",\"alarmCondition\":\"Guest_Os_Failure\",\"faultFieldsVersion\":2,\"specificProblem\":\"Fault_MultiCloud_VMFailure\",\"alarmInterfaceA\":\"aaaa\",\"alarmAdditionalInformation\":[{\"name\":\"objectType\",\"value\":\"VIM\"},{\"name\":\"eventTime\",\"value\":\"2018-09-18 01:19:18.255937\"}],\"eventSourceType\":\"virtualMachine\",\"vfStatus\":\"Active\"}}}",

]

Step 5: Simulate the Recovery

Manually restart the stopped VM ‘zdfw1lb01dns01’.
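Again with the standard OpenStack client (nova start zdfw1lb01dns01 is the nova CLI equivalent):

### restart the VM to simulate recovery
openstack server start zdfw1lb01dns01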

Step 6: Observe DMaaP event: “Fault_MultiCloud_VMFailureCleared”

[
    "{\"event\":{\"commonEventHeader\":{\"startEpochMicrosec\":1537233558255872,\"sourceId\":\"8e606aa7-39c8-4df7-b2f4-1f6785b9f682\",\"eventId\":\"a236f561-f0fa-48a3-96cd-3a61ccfdf421\",\"reportingEntityId\":\"CloudOwner_pod01\",\"internalHeaderFields\":{\"collectorTimeStamp\":\"Tue, 09 18 2018 01:19:31 GMT\"},\"eventType\":\"\",\"priority\":\"Normal\",\"version\":3,\"reportingEntityName\":\"CloudOwner_pod01\",\"sequence\":1,\"domain\":\"fault\",\"lastEpochMicrosec\":1537233570150714,\"eventName\":\"Fault_MultiCloud_VMFailureCleared\",\"sourceName\":\"zdfw1lb01dns01\"},\"faultFields\":{\"eventSeverity\":\"NORMAL\",\"alarmCondition\":\"Vm_Restart\",\"faultFieldsVersion\":2,\"specificProblem\":\"Fault_MultiCloud_VMFailure\",\"alarmInterfaceA\":\"aaaa\",\"alarmAdditionalInformation\":[{\"name\":\"objectType\",\"value\":\"VIM\"},{\"name\":\"eventTime\",\"value\":\"2018-09-18 01:19:30.150736\"}],\"eventSourceType\":\"virtualMachine\",\"vfStatus\":\"Active\"}}}"

]