OOM Documentation Repository

ONAP Operations Manager Project

The ONAP Operations Manager (OOM) is responsible for life-cycle management of the ONAP platform itself: components such as SO, SDNC, etc. It is not responsible for the management of services, VNFs or infrastructure instantiated by ONAP, or used by ONAP to host such services or VNFs. OOM uses the open-source Kubernetes container management system to manage the Docker containers that compose ONAP, where the containers are hosted either directly on bare-metal servers or on VMs hosted by a 3rd-party management system. OOM ensures that ONAP is easily deployable and maintainable throughout its life cycle while using hardware resources efficiently.


In summary OOM provides the following capabilities:

  • Deploy - with built-in component dependency management

  • Configure - unified configuration across all ONAP components

  • Monitor - real-time health monitoring feeding to a Consul UI and Kubernetes

  • Heal - failed ONAP containers are recreated automatically

  • Scale - cluster ONAP services to enable seamless scaling

  • Upgrade - change-out containers or configuration with little or no service impact

  • Delete - cleanup individual containers or entire deployments

OOM supports a wide variety of Kubernetes private clouds - built with Rancher, Kubeadm or Cloudify - and public cloud infrastructures such as Microsoft Azure, Amazon AWS, Google GCD, VMware VIO, and OpenStack.

The OOM documentation is broken into four different areas, each targeted at a different user:

The ONAP Operations Manager Release Notes for OOM describe the incremental features per release.

Component Orchestration Overview

Multiple technologies, templates, and extensible plug-in frameworks are used in ONAP to orchestrate platform instances of software component artifacts. A few standard configurations are provided that may be suitable for test, development, and some production deployments by substitution of local or platform-wide parameters. Larger and more automated deployments may require integrating the component technologies, templates, and frameworks with a higher level of automated orchestration and control software. Design guidelines are provided to ensure the component-level templates and frameworks can be easily integrated and maintained. The following diagram provides an overview of these with links to examples and templates for describing new ones.

digraph COO {
   rankdir="LR";

   {
      node      [shape=folder]
      oValues   [label="values"]
      cValues   [label="values"]
      comValues [label="values"]
      sValues   [label="values"]
      oCharts   [label="charts"]
      cCharts   [label="charts"]
      comCharts [label="charts"]
      sCharts   [label="charts"]
      blueprint [label="TOSCA blueprint"]
   }
   {oom [label="ONAP Operations Manager"]}
   {hlo [label="High Level Orchestrator"]}


   hlo -> blueprint
   hlo -> oom
   oom -> oValues
   oom -> oCharts
   oom -> component
   oom -> common
   common -> comValues
   common -> comCharts
   component -> cValues
   component -> cCharts
   component -> subcomponent
   subcomponent -> sValues
   subcomponent -> sCharts
   blueprint -> component
}

OOM Quick Start Guide


Once a Kubernetes environment is available (follow the instructions in the OOM Cloud Setup Guide if you don’t have a cloud environment available), follow the instructions below to deploy ONAP.

Step 1. Clone the OOM repository from ONAP gerrit:

> git clone -b <BRANCH> http://gerrit.onap.org/r/oom --recurse-submodules
> cd oom/kubernetes

where <BRANCH> can be an official release tag, such as

  • 4.0.0-ONAP for Dublin

  • 5.0.1-ONAP for El Alto

  • 6.0.0 for Frankfurt

  • 7.0.0 for Guilin

  • 8.0.0 for Honolulu

  • 9.0.0 for Istanbul
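
For example, to pull the Istanbul version of the charts using the 9.0.0 tag listed above:

> git clone -b 9.0.0 http://gerrit.onap.org/r/oom --recurse-submodules
> cd oom/kubernetes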

Step 2. Install Helm Plugins required to deploy ONAP:

> cp -R ~/oom/kubernetes/helm/plugins/ ~/.local/share/helm/plugins
> helm plugin install https://github.com/chartmuseum/helm-push.git \
    --version 0.9.0

Note

The --version 0.9.0 flag is required because newer versions of Helm (3.7.0 and up) provide a native push command, and the helm-push plugin renamed its command to cm-push starting with version 0.10.0.
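
To confirm that the plugins are visible to Helm, a quick sanity check is:

> helm plugin list

The deploy, undeploy and push plugins installed above should appear in the listing.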

Step 3. Install Chartmuseum:

> curl -LO https://s3.amazonaws.com/chartmuseum/release/latest/bin/linux/amd64/chartmuseum
> chmod +x ./chartmuseum
> mv ./chartmuseum /usr/local/bin
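
You can verify that the binary is on your PATH by printing its usage information:

> chartmuseum --help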

Step 4. Install Cert-Manager:

> kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.2.0/cert-manager.yaml

More details can be found here.
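
Before continuing, you may want to confirm that the Cert-Manager pods are up (the manifest above installs them into the cert-manager namespace):

> kubectl get pods --namespace cert-manager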

Step 5. Customize the Helm charts like oom/kubernetes/onap/values.yaml or an override file like onap-all.yaml, onap-vfw.yaml or openstack.yaml to suit your deployment with items like the OpenStack tenant information.

Note

Standard and example override files (e.g. onap-all.yaml, openstack.yaml) can be found in the oom/kubernetes/onap/resources/overrides/ directory.

  1. You may want to selectively enable or disable ONAP components by changing the enabled: true/false flags.

  2. Encrypt the OpenStack password using the shell tool for Robot and put it in the Robot Helm charts or Robot section of openstack.yaml

  3. Encrypt the OpenStack password using the java based script for SO Helm charts or SO section of openstack.yaml.

  4. Update the OpenStack parameters that will be used by Robot, SO and APPC Helm charts or use an override file to replace them.

  5. Provide a value for the global master password (global.masterPassword) on the command line.

a. Enabling/Disabling Components: Here is an example of the nominal entries that need to be provided. Different values files are available for different contexts.

# Copyright © 2019 Amdocs, Bell Canada
# Copyright (c) 2020 Nordix Foundation, Modifications
# Modifications Copyright © 2020-2021 Nokia
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#       http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

#################################################################
# Global configuration overrides.
#
# These overrides will affect all helm charts (ie. applications)
# that are listed below and are 'enabled'.
#################################################################
global:
  # Change to an unused port prefix range to prevent port conflicts
  # with other instances running within the same k8s cluster
  nodePortPrefix: 302
  nodePortPrefixExt: 304


  # Install test components
  # test components are out of the scope of ONAP but allow to have a entire
  # environment to test the different features of ONAP
  # Current tests environments provided:
  #  - netbox (needed for CDS IPAM)
  #  - AWX (needed for XXX)
  #  - EJBCA Server (needed for CMPv2 tests)
  # Today, "contrib" chart that hosting these components must also be enabled
  # in order to make it work. So `contrib.enabled` must have the same value than
  # addTestingComponents
  addTestingComponents: &testing false

  # ONAP Repository
  # Four different repositories are used
  # You can change individually these repositories to ones that will serve the
  # right images. If credentials are needed for one of them, see below.
  repository: nexus3.onap.org:10001
  dockerHubRepository: &dockerHubRepository docker.io
  elasticRepository: &elasticRepository docker.elastic.co
  googleK8sRepository: k8s.gcr.io
  githubContainerRegistry: ghcr.io

  #/!\ DEPRECATED /!\
  # Legacy repositories which will be removed at the end of migration.
  # Please don't use
  loggingRepository: *elasticRepository
  busyboxRepository: *dockerHubRepository

  # Default credentials
  # they're optional. If the target repository doesn't need them, comment them
  repositoryCred:
    user: docker
    password: docker
  # If you want / need authentication on the repositories, please set
  # Don't set them if the target repo is the same than others
  # so id you've set repository to value `my.private.repo` and same for
  # dockerHubRepository, you'll have to configure only repository (exclusive) OR
  # dockerHubCred.
  # dockerHubCred:
  #   user: myuser
  #   password: mypassord
  # elasticCred:
  #   user: myuser
  #   password: mypassord
  # googleK8sCred:
  #   user: myuser
  #   password: mypassord


  # common global images
  # Busybox for simple shell manipulation
  busyboxImage: busybox:1.32

  # curl image
  curlImage: curlimages/curl:7.69.1

  # env substitution image
  envsubstImage: dibi/envsubst:1

  # generate htpasswd files image
  # there's only latest image for htpasswd
  htpasswdImage: xmartlabs/htpasswd:latest

  # kubenretes client image
  kubectlImage: bitnami/kubectl:1.19

  # logging agent
  loggingImage: beats/filebeat:5.5.0

  # mariadb client image
  mariadbImage: bitnami/mariadb:10.5.8

  # nginx server image
  nginxImage: bitnami/nginx:1.18-debian-10

  # postgreSQL client and server image
  postgresImage: crunchydata/crunchy-postgres:centos8-13.2-4.6.1

  # readiness check image
  readinessImage: onap/oom/readiness:3.0.1

  # image pull policy
  pullPolicy: Always

  # default java image
  jreImage: onap/integration-java11:7.2.0

  # default clusterName
  # {{ template "common.fullname" . }}.{{ template "common.namespace" . }}.svc.{{ .Values.global.clusterName }}
  clusterName: cluster.local

  # default mount path root directory referenced
  # by persistent volumes and log files
  persistence:
    mountPath: /dockerdata-nfs
    enableDefaultStorageclass: false
    parameters: {}
    storageclassProvisioner: kubernetes.io/no-provisioner
    volumeReclaimPolicy: Retain

  # override default resource limit flavor for all charts
  flavor: unlimited

  # flag to enable debugging - application support required
  debugEnabled: false

  # default password complexity
  # available options: phrase, name, pin, basic, short, medium, long, maximum security
  # More datails: https://www.masterpasswordapp.com/masterpassword-algorithm.pdf
  passwordStrength: long

  # configuration to set log level to all components (the one that are using
  # "common.log.level" to set this)
  # can be overrided per components by setting logConfiguration.logLevelOverride
  # to the desired value
  # logLevel: DEBUG

  # Global ingress configuration
  ingress:
    enabled: false
    virtualhost:
      baseurl: "simpledemo.onap.org"

  # Global Service Mesh configuration
  # POC Mode, don't use it in production
  serviceMesh:
    enabled: false
    tls: true

  # metrics part
  # If enabled, exporters (for prometheus) will be deployed
  # if custom resources set to yes, CRD from prometheus operartor will be
  # created
  # Not all components have it enabled.
  #
  metrics:
    enabled: true
    custom_resources: false

  # Disabling AAF
  # POC Mode, only for use in development environment
  # Keep it enabled in production
  aafEnabled: true
  aafAgentImage: onap/aaf/aaf_agent:2.1.20

  # Disabling MSB
  # POC Mode, only for use in development environment
  msbEnabled: true

  # default values for certificates
  certificate:
    default:
      renewBefore: 720h #30 days
      duration:    8760h #365 days
      subject:
        organization: "Linux-Foundation"
        country: "US"
        locality: "San-Francisco"
        province: "California"
        organizationalUnit: "ONAP"
      issuer:
        group: certmanager.onap.org
        kind: CMPv2Issuer
        name: cmpv2-issuer-onap

  # Enabling CMPv2
  cmpv2Enabled: true
  platform:
    certificates:
      clientSecretName: oom-cert-service-client-tls-secret
      keystoreKeyRef: keystore.jks
      truststoreKeyRef: truststore.jks
      keystorePasswordSecretName: oom-cert-service-certificates-password
      keystorePasswordSecretKey: password
      truststorePasswordSecretName: oom-cert-service-certificates-password
      truststorePasswordSecretKey: password

  # Indicates offline deployment build
  # Set to true if you are rendering helm charts for offline deployment
  # Otherwise keep it disabled
  offlineDeploymentBuild: false

  # TLS
  # Set to false if you want to disable TLS for NodePorts. Be aware that this
  # will loosen your security.
  # if set this element will force or not tls even if serviceMesh.tls is set.
  # tlsEnabled: false

  # Logging
  # Currently, centralized logging is not in best shape so it's disabled by
  # default
  centralizedLoggingEnabled: &centralizedLogging false

  # Example of specific for the components where you want to disable TLS only for
  # it:
  # if set this element will force or not tls even if global.serviceMesh.tls and
  # global.tlsEnabled is set otherwise.
  # robot:
  #   tlsOverride: false

  # Global storage configuration
  #    Set to "-" for default, or with the name of the storage class
  #    Please note that if you use AAF, CDS, SDC, Netbox or Robot, you need a
  #    storageclass with RWX capabilities (or set specific configuration for these
  #    components).
  # persistence:
  #   storageClass: "-"

# Example of specific for the components which requires RWX:
# aaf:
#   persistence:
#     storageClassOverride: "My_RWX_Storage_Class"
# contrib:
#   netbox:
#     netbox-app:
#       persistence:
#         storageClassOverride: "My_RWX_Storage_Class"
# cds:
#   cds-blueprints-processor:
#     persistence:
#       storageClassOverride: "My_RWX_Storage_Class"
# sdc:
#   sdc-onboarding-be:
#     persistence:
#       storageClassOverride: "My_RWX_Storage_Class"

#################################################################
# Enable/disable and configure helm charts (ie. applications)
# to customize the ONAP deployment.
#################################################################
aaf:
  enabled: false
aai:
  enabled: false
appc:
  enabled: false
  config:
    openStackType: OpenStackProvider
    openStackName: OpenStack
    openStackKeyStoneUrl: http://localhost:8181/apidoc/explorer/index.html
    openStackServiceTenantName: default
    openStackDomain: default
    openStackUserName: admin
    openStackEncryptedPassword: admin
cassandra:
  enabled: false
cds:
  enabled: false
clamp:
  enabled: false
cli:
  enabled: false
consul:
  enabled: false
# Today, "contrib" chart that hosting these components must also be enabled
# in order to make it work. So `contrib.enabled` must have the same value than
# addTestingComponents
contrib:
  enabled: *testing
cps:
  enabled: false
dcaegen2:
  enabled: false
dcaegen2-services:
  enabled: false
dcaemod:
  enabled: false
holmes:
  enabled: false
dmaap:
  enabled: false
# Today, "logging" chart that perform the central part of logging must also be
# enabled in order to make it work. So `logging.enabled` must have the same
# value than centralizedLoggingEnabled
log:
  enabled: *centralizedLogging
sniro-emulator:
  enabled: false
oof:
  enabled: false
mariadb-galera:
  enabled: false
msb:
  enabled: false
multicloud:
  enabled: false
nbi:
  enabled: false
  config:
    # openstack configuration
    openStackRegion: "Yolo"
    openStackVNFTenantId: "1234"
policy:
  enabled: false
pomba:
  enabled: false
portal:
  enabled: false
robot:
  enabled: false
  config:
    # openStackEncryptedPasswordHere should match the encrypted string used in SO and APPC and overridden per environment
    openStackEncryptedPasswordHere: "c124921a3a0efbe579782cde8227681e"
sdc:
  enabled: false
sdnc:
  enabled: false

  replicaCount: 1

  mysql:
    replicaCount: 1
so:
  enabled: false

  replicaCount: 1

  liveness:
    # necessary to disable liveness probe when setting breakpoints
    # in debugger so K8s doesn't restart unresponsive container
    enabled: false

  # so server configuration
  config:
    # message router configuration
    dmaapTopic: "AUTO"
    # openstack configuration
    openStackUserName: "vnf_user"
    openStackRegion: "RegionOne"
    openStackKeyStoneUrl: "http://1.2.3.4:5000"
    openStackServiceTenantName: "service"
    openStackEncryptedPasswordHere: "c124921a3a0efbe579782cde8227681e"

  # in order to enable static password for so-monitoring uncomment:
  # so-monitoring:
  #   server:
  #     monitoring:
  #       password: demo123456!
uui:
  enabled: false
vfc:
  enabled: false
vid:
  enabled: false
vnfsdk:
  enabled: false
modeling:
  enabled: false
platform:
  enabled: false
a1policymanagement:
  enabled: false

cert-wrapper:
  enabled: true
repository-wrapper:
  enabled: true
roles-wrapper:
  enabled: true
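
The same enabled flags can also be set at deploy time (see Step 10) instead of editing the values file; for example, to additionally enable SDNC and SO on the command line (an illustrative sketch):

>  helm deploy dev local/onap --namespace onap --set sdnc.enabled=true --set so.enabled=true --set global.masterPassword=myAwesomePasswordThatINeedToChange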

b. Generating ROBOT Encrypted Password: The Robot encrypted password uses the same encryption.key as SO, but an openssl algorithm that works with the Python-based Robot Framework.

Note

To generate Robot openStackEncryptedPasswordHere:

cd oom/kubernetes/so/resources/config/mso/
echo -n "<openstack tenant password>" | openssl aes-128-ecb -e -K `cat encryption.key` -nosalt | xxd -c 256 -p

c. Generating SO Encrypted Password: The SO encrypted password uses a Java-based encryption utility, since the Java encryption library is not easy to integrate with the openssl/python tooling that Robot uses in Dublin and later versions.

Note

To generate SO openStackEncryptedPasswordHere and openStackSoEncryptedPassword, ensure default-jdk is installed:

apt-get update; apt-get install default-jdk

Then execute:

SO_ENCRYPTION_KEY=`cat ~/oom/kubernetes/so/resources/config/mso/encryption.key`
OS_PASSWORD=XXXX_OS_CLEARTESTPASSWORD_XXXX

git clone http://gerrit.onap.org/r/integration
cd integration/deployment/heat/onap-rke/scripts

javac Crypto.java
java Crypto "$OS_PASSWORD" "$SO_ENCRYPTION_KEY"

d. Update the OpenStack parameters:

There are assumptions in the demonstration VNF Heat templates about the networking available in the environment. To get the most value out of these templates and the automation that can help confirm the setup is correct, please observe the following constraints.

openStackPublicNetId:

This network should allow Heat templates to add interfaces. This need not be an external network; floating IPs can be assigned to the ports on the VMs that are created by the Heat template, but it is important that Neutron allows ports to be created on them.

openStackPrivateNetCidr: "10.0.0.0/16"

This IP address block is used to assign OA&M addresses to VNFs to allow ONAP connectivity. The demonstration Heat templates assume that the 10.0 prefix can be used by the VNFs, and the demonstration IP addressing plan embodied in the preload template prevents conflicts when instantiating the various VNFs. If you need to change this, you will need to modify the preload data in the Robot Helm chart (e.g. integration_preload_parameters.py) and the demo/heat/preload_data in the Robot container. The size of the CIDR should be sufficient for ONAP and the VMs you expect to create.

openStackOamNetworkCidrPrefix: "10.0"

This IP prefix must match openStackPrivateNetCidr and is a helper variable for some of the Robot demonstration scripts. A production deployment need not worry about this setting, but for the demonstration VNFs the IP assignment strategy assumes the 10.0 prefix.
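
The network, subnet, tenant and security group identifiers referenced in the override examples below can be collected with the OpenStack CLI for your tenant, for example:

> openstack network list
> openstack project show <tenantName> | grep -w id
> openstack security group list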

Example Keystone v2.0

#################################################################
# This override file configures openstack parameters for ONAP
#################################################################
appc:
  config:
    enableClustering: false
    openStackType: "OpenStackProvider"
    openStackName: "OpenStack"
    # OS_AUTH_URL from the openstack .RC file
    openStackKeyStoneUrl: "http://10.12.25.2:5000/v2.0"
    openStackServiceTenantName: "OPENSTACK_TENANTNAME_HERE"
    # OS_USER_DOMAIN_NAME from the openstack .RC file
    openStackDomain: "Default"
    openStackUserName: "OPENSTACK_USERNAME_HERE"
    openStackEncryptedPassword: "XXXXXXXXXXXXXXXXXXXXXXXX_OPENSTACK_PASSWORD_HERE_XXXXXXXXXXXXXXXX"
robot:
  appcUsername: "appc@appc.onap.org"
  appcPassword: "demo123456!"
  # OS_AUTH_URL without the /v2.0 from the openstack .RC file
  openStackKeyStoneUrl: "http://10.12.25.2:5000"
  # From openstack network list output
  openStackPublicNetId: "971040b2-7059-49dc-b220-4fab50cb2ad4"
  # tenantID=`openstack project show $tenantName | grep -w id | awk '{print $4}'`
  # where "tenantName" is OS_PROJECT_NAME from openstack .RC file
  openStackTenantId: "09d8566ea45e43aa974cf447ed591d77"
  openStackUserName: "OPENSTACK_USERNAME_HERE"
  ubuntu14Image: "ubuntu-14-04-cloud-amd64"
  ubuntu16Image: "ubuntu-16-04-cloud-amd64"
  # From openstack network list output
  openStackPrivateNetId: "c7824f00-bef7-4864-81b9-f6c3afabd313"
  # From openstack network list output
  openStackPrivateSubnetId: "2a0e8888-f93e-4615-8d28-fc3d4d087fc3"
  openStackPrivateNetCidr: "10.0.0.0/16"
  # From openstack security group list output
  openStackSecurityGroup: "3a7a1e7e-6d15-4264-835d-fab1ae81e8b0"
  openStackOamNetworkCidrPrefix: "10.0"
  # Control node IP
  dcaeCollectorIp: "10.12.6.88"
  # SSH public key
  vnfPubKey: "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDKXDgoo3+WOqcUG8/5uUbk81+yczgwC4Y8ywTmuQqbNxlY1oQ0YxdMUqUnhitSXs5S/yRuAVOYHwGg2mCs20oAINrP+mxBI544AMIb9itPjCtgqtE2EWo6MmnFGbHB4Sx3XioE7F4VPsh7japsIwzOjbrQe+Mua1TGQ5d4nfEOQaaglXLLPFfuc7WbhbJbK6Q7rHqZfRcOwAMXgDoBqlyqKeiKwnumddo2RyNT8ljYmvB6buz7KnMinzo7qB0uktVT05FH9Rg0CTWH5norlG5qXgP2aukL0gk1ph8iAt7uYLf1ktp+LJI2gaF6L0/qli9EmVCSLr1uJ38Q8CBflhkh"
  demoArtifactsVersion: "1.4.0-SNAPSHOT"
  demoArtifactsRepoUrl: "https://nexus.onap.org/content/repositories/releases"
  scriptVersion: "1.4.0-SNAPSHOT"
  # rancher node IP where RKE is configured
  rancherIpAddress: "10.12.5.127"
  config:
    # openStackEncryptedPasswordHere should match the encrypted string used in SO and APPC and overridden per environment
    openStackEncryptedPasswordHere: "XXXXXXXXXXXXXXXXXXXXXXXX_OPENSTACK_ENCRYPTED_PASSWORD_HERE_XXXXXXXXXXXXXXXX"
so:
  # so server configuration
  so-catalog-db-adapter:
    config:
      openStackUserName: "OPENSTACK_USERNAME_HERE"
      # OS_AUTH_URL from the openstack .RC file
      openStackKeyStoneUrl: "http://10.12.25.2:5000/v2.0"
      openStackEncryptedPasswordHere: "XXXXXXXXXXXXXXXXXXXXXXXX_OPENSTACK_ENCRYPTED_PASSWORD_HERE_XXXXXXXXXXXXXXXX"

Example Keystone v3 (required for Rocky and later releases)

#################################################################
# This override file configures openstack parameters for ONAP
#################################################################
robot:
  enabled: true
  flavor: large
  appcUsername: "appc@appc.onap.org"
  appcPassword: "demo123456!"
  # KEYSTONE Version 3  Required for Rocky and beyond
  openStackKeystoneAPIVersion: "v3"
  # OS_AUTH_URL without the /v3 from the openstack .RC file
  openStackKeyStoneUrl: "http://10.12.25.2:5000"
  # tenantID=`openstack project show $tenantName | grep -w id | awk '{print $4}'`
  # where "tenantName" is OS_PROJECT_NAME from openstack .RC file
  openStackTenantId: "09d8566ea45e43aa974cf447ed591d77"
  # OS_USERNAME from the openstack .RC file
  openStackUserName: "OS_USERNAME_HERE"
  #  OS_PROJECT_DOMAIN_ID from the openstack .RC file
  #  in some environments it is a string but in other environments it may be numeric
  openStackDomainId:  "default"
  #  OS_USER_DOMAIN_NAME from the openstack .RC file
  openStackUserDomain:  "Default"
  openStackProjectName: "OPENSTACK_PROJECT_NAME_HERE"
  ubuntu14Image: "ubuntu-14-04-cloud-amd64"
  ubuntu16Image: "ubuntu-16-04-cloud-amd64"
  # From openstack network list output
  openStackPublicNetId: "971040b2-7059-49dc-b220-4fab50cb2ad4"
  # From openstack network list output
  openStackPrivateNetId: "83c84b68-80be-4990-8d7f-0220e3c6e5c8"
  # From openstack network list output
  openStackPrivateSubnetId: "e571c1d1-8ac0-4744-9b40-c3218d0a53a0"
  openStackPrivateNetCidr: "10.0.0.0/16"
  openStackOamNetworkCidrPrefix: "10.0"
  # From openstack security group list output
  openStackSecurityGroup: "bbe028dc-b64f-4f11-a10f-5c6d8d26dc89"
  dcaeCollectorIp: "10.12.6.109"
  # SSH public key
  vnfPubKey: "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDKXDgoo3+WOqcUG8/5uUbk81+yczgwC4Y8ywTmuQqbNxlY1oQ0YxdMUqUnhitSXs5S/yRuAVOYHwGg2mCs20oAINrP+mxBI544AMIb9itPjCtgqtE2EWo6MmnFGbHB4Sx3XioE7F4VPsh7japsIwzOjbrQe+Mua1TGQ5d4nfEOQaaglXLLPFfuc7WbhbJbK6Q7rHqZfRcOwAMXgDoBqlyqKeiKwnumddo2RyNT8ljYmvB6buz7KnMinzo7qB0uktVT05FH9Rg0CTWH5norlG5qXgP2aukL0gk1ph8iAt7uYLf1ktp+LJI2gaF6L0/qli9EmVCSLr1uJ38Q8CBflhkh"
  demoArtifactsVersion: "1.4.0"
  demoArtifactsRepoUrl: "https://nexus.onap.org/content/repositories/releases"
  scriptVersion: "1.4.0"
  # rancher node IP where RKE is configured
  rancherIpAddress: "10.12.6.160"
  config:
    # use the python utility to encrypt the OS_PASSWORD for the OS_USERNAME
    openStackEncryptedPasswordHere: "XXXXXXXXXXXXXXXXXXXXXXXX_OPENSTACK_PYTHON_PASSWORD_HERE_XXXXXXXXXXXXXXXX"
    openStackSoEncryptedPassword:  "YYYYYYYYYYYYYYYYYYYYYYYY_OPENSTACK_JAVA_PASSWORD_HERE_YYYYYYYYYYYYYYYY"
so:
  enabled: true
  so-catalog-db-adapter:
    config:
      openStackUserName: "OS_USERNAME_HERE"
      # OS_AUTH_URL (keep the /v3) from the openstack .RC file
      openStackKeyStoneUrl: "http://10.12.25.2:5000/v3"
      # use the SO Java utility to encrypt the OS_PASSWORD for the OS_USERNAME
      openStackEncryptedPasswordHere: "YYYYYYYYYYYYYYYYYYYYYYYY_OPENSTACK_JAVA_PASSWORD_HERE_YYYYYYYYYYYYYYYY"
appc:
  enabled: true
  replicaCount: 3
  config:
    enableClustering: true
    openStackType: "OpenStackProvider"
    openStackName: "OpenStack"
    # OS_AUTH_URL from the openstack .RC file
    openStackKeyStoneUrl: "http://10.12.25.2:5000/v3"
    openStackServiceTenantName: "OPENSTACK_PROJECT_NAME_HERE"
    openStackDomain: "OPEN_STACK_DOMAIN_NAME_HERE"
    openStackUserName: "OS_USER_NAME_HERE"
    openStackEncryptedPassword: "OPENSTACK_CLEAR_TEXT_PASSWORD_HERE"

Step 6. To set up a local Helm server to serve the ONAP charts:

> chartmuseum --storage local --storage-local-rootdir ~/helm3-storage -port 8879 &

Note the port number that is listed and use it in the Helm repo add as follows:

> helm repo add local http://127.0.0.1:8879

Step 7. Verify your Helm repository setup with:

> helm repo list
NAME   URL
local  http://127.0.0.1:8879

Step 8. Build a local Helm repository (from the kubernetes directory):

> make SKIP_LINT=TRUE [HELM_BIN=<HELM_PATH>] all ; make SKIP_LINT=TRUE [HELM_BIN=<HELM_PATH>] onap
HELM_BIN

Sets the helm binary to be used. The default value uses helm from PATH.
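
For example, if your Helm 3 binary is installed as helm3 rather than helm (a common setup when multiple Helm versions are present; adjust to your environment):

> make SKIP_LINT=TRUE HELM_BIN=helm3 all ; make SKIP_LINT=TRUE HELM_BIN=helm3 onap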

Step 9. Display the onap charts that are available to be deployed:

> helm repo update
> helm search repo onap
NAME                    CHART VERSION    APP VERSION    DESCRIPTION
local/onap                    9.0.0      Istanbul      Open Network Automation Platform (ONAP)
local/aaf                     9.0.0                    ONAP Application Authorization Framework
local/aai                     9.0.0                    ONAP Active and Available Inventory
local/appc                    9.0.0                    Application Controller
local/cassandra               9.0.0                    ONAP cassandra
local/cds                     9.0.0                    ONAP Controller Design Studio (CDS)
local/clamp                   9.0.0                    ONAP Clamp
local/cli                     9.0.0                    ONAP Command Line Interface
local/common                  9.0.0                    Common templates for inclusion in other charts
local/consul                  9.0.0                    ONAP Consul Agent
local/contrib                 9.0.0                    ONAP optional tools
local/cps                     9.0.0                    ONAP Configuration Persistene Service (CPS)
local/dcaegen2                9.0.0                    ONAP DCAE Gen2
local/dgbuilder               9.0.0                    D.G. Builder application
local/dmaap                   9.0.0                    ONAP DMaaP components
local/log                     9.0.0                    ONAP Logging ElasticStack
local/mariadb-galera          9.0.0                    Chart for MariaDB Galera cluster
local/mongo                   9.0.0                    MongoDB Server
local/msb                     9.0.0                    ONAP MicroServices Bus
local/multicloud              9.0.0                    ONAP multicloud broker
local/music                   9.0.0                    MUSIC - Multi-site State Coordination Service
local/mysql                   9.0.0                    MySQL Server
local/nbi                     9.0.0                    ONAP Northbound Interface
local/network-name-gen        9.0.0                    Name Generation Micro Service
local/nfs-provisioner         9.0.0                    NFS provisioner
local/oof                     9.0.0                    ONAP Optimization Framework
local/policy                  9.0.0                    ONAP Policy Administration Point
local/pomba                   9.0.0                    ONAP Post Orchestration Model Based Audit
local/portal                  9.0.0                    ONAP Web Portal
local/postgres                9.0.0                    ONAP Postgres Server
local/robot                   9.0.0                    A helm Chart for kubernetes-ONAP Robot
local/sdc                     9.0.0                    Service Design and Creation Umbrella Helm charts
local/sdnc                    9.0.0                    SDN Controller
local/sdnc-prom               9.0.0                    ONAP SDNC Policy Driven Ownership Management
local/sniro-emulator          9.0.0                    ONAP Mock Sniro Emulator
local/so                      9.0.0                    ONAP Service Orchestrator
local/uui                     9.0.0                    ONAP uui
local/vfc                     9.0.0                    ONAP Virtual Function Controller (VF-C)
local/vid                     9.0.0                    ONAP Virtual Infrastructure Deployment
local/vnfsdk                  9.0.0                    ONAP VNF SDK

Note

The setup of the Helm repository is a one-time activity. If you make changes to your deployment charts or values, be sure to use make to update your local Helm repository.
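
In other words, after changing a chart or its values, re-run the build from the oom/kubernetes directory using the same targets as in Step 8 (recent OOM Makefiles also expose per-chart targets such as make so, but this is version dependent):

> make SKIP_LINT=TRUE all ; make SKIP_LINT=TRUE onap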

Step 10. Once the repo is set up, installation of ONAP can be done with a single command.

Note

The --timeout 900s value is currently required in Dublin and later versions to address long-running initialization tasks for DMaaP and SO. Without this timeout value both applications may fail to deploy.

Danger

The master password is passed on the command line. For safety reasons you shouldn’t put it in a file, and please don’t forget to change the value to something random.

A space is also added in front of the command so that the shell “history” doesn’t record it. This masterPassword is very sensitive, please be careful!

To deploy all ONAP applications use this command:

> cd oom/kubernetes
>  helm deploy dev local/onap --namespace onap --create-namespace --set global.masterPassword=myAwesomePasswordThatINeedToChange -f onap/resources/overrides/onap-all.yaml -f onap/resources/overrides/environment.yaml -f onap/resources/overrides/openstack.yaml --timeout 900s

All override files may be customized (or replaced by other overrides) as per needs.

onap-all.yaml

Enables the modules in the ONAP deployment. As ONAP is very modular, it is possible to customize ONAP and disable some components through this configuration file.

onap-all-ingress-nginx-vhost.yaml

Alternative version of onap-all.yaml but with the global ingress controller enabled. It requires a cluster configured with the nginx ingress controller and a load balancer. Please use this file instead of onap-all.yaml if you want to use the experimental ingress controller feature.

environment.yaml

Includes configuration values specific to the deployment environment.

Example: adapt readiness and liveness timers to the level of performance of your infrastructure

openstack.yaml

Includes all the OpenStack related information for the default target tenant you want to use to deploy VNFs from ONAP and/or additional parameters for the embedded tests.

Step 11. Verify ONAP installation

Use the following to monitor your deployment and determine when ONAP is ready for use:

> kubectl get pods -n onap -o=wide

Note

While all pods may be in a Running state, it is not a guarantee that all components are running fine.

Launch the healthcheck tests using Robot to verify that the components are healthy:

> ~/oom/kubernetes/robot/ete-k8s.sh onap health

Step 12. Undeploy ONAP

> helm undeploy dev

More examples of using the deploy and undeploy plugins can be found here: https://wiki.onap.org/display/DW/OOM+Helm+%28un%29Deploy+plugins
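
As one such example, assuming the deploy plugin’s default <release>-<component> release naming (e.g. dev-so), a single component can be removed without undeploying the rest of the platform:

> helm undeploy dev-so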

OOM User Guide helm3 (experimental)

The ONAP Operations Manager (OOM) provides the ability to manage the entire life-cycle of an ONAP installation, from the initial deployment to final decommissioning. This guide provides instructions for users of ONAP to use the Kubernetes/Helm system as a complete ONAP management system.

This guide provides many examples of Helm command line operations. For a complete description of these commands please refer to the Helm Documentation.


The following sections describe the life-cycle operations:

  • Deploy - with built-in component dependency management

  • Configure - unified configuration across all ONAP components

  • Monitor - real-time health monitoring feeding to a Consul UI and Kubernetes

  • Heal - failed ONAP containers are recreated automatically

  • Scale - cluster ONAP services to enable seamless scaling

  • Upgrade - change-out containers or configuration with little or no service impact

  • Delete - cleanup individual containers or entire deployments


Deploy

The OOM team, with assistance from the ONAP project teams, has built a comprehensive set of Helm charts (YAML files very similar to TOSCA files) that describe the composition of each of the ONAP components and the relationships within and between components. Using this model, Helm is able to deploy all of ONAP with a few simple commands.

Pre-requisites

Your environment must have the Kubernetes kubectl client, Cert-Manager and Helm set up as a one-time activity.

Install Kubectl

Enter the following to install kubectl (on Ubuntu, there are slight differences on other O/Ss), the Kubernetes command line interface used to manage a Kubernetes cluster:

> curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.15.11/bin/linux/amd64/kubectl
> chmod +x ./kubectl
> sudo mv ./kubectl /usr/local/bin/kubectl
> mkdir ~/.kube

Paste kubectl config from Rancher (see the OOM Cloud Setup Guide for alternative Kubernetes environment setups) into the ~/.kube/config file.

Verify that the Kubernetes config is correct:

> kubectl get pods --all-namespaces

At this point you should see Kubernetes pods running.
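
You can also confirm that the client and cluster versions are consistent with what you installed:

> kubectl version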

Install Cert-Manager

Details on how to install Cert-Manager can be found here.

Install Helm

Helm is used by OOM for package and configuration management. To install Helm, enter the following:

> wget https://get.helm.sh/helm-v3.5.2-linux-amd64.tar.gz
> tar -zxvf helm-v3.5.2-linux-amd64.tar.gz
> sudo mv linux-amd64/helm /usr/local/bin/helm

Verify the Helm version with:

> helm version

Install the Helm Repo

Once kubectl and Helm are set up, one needs to set up a local Helm server to serve the ONAP charts:

> helm install osn/onap

Note

The osn repo is not currently available so creation of a local repository is required.

Helm is able to use charts served up from a repository and comes set up with a default CNCF-provided repository of curated applications for Kubernetes called stable, which should be removed to avoid confusion:

> helm repo remove stable

To prepare your system for an installation of ONAP, you’ll need to:

> git clone -b guilin --recurse-submodules -j2 http://gerrit.onap.org/r/oom
> cd oom/kubernetes

To install a local Helm server:

> curl -LO https://s3.amazonaws.com/chartmuseum/release/latest/bin/linux/amd64/chartmuseum
> chmod +x ./chartmuseum
> mv ./chartmuseum /usr/local/bin

To set up a local Helm server to serve the ONAP charts:

> mkdir -p ~/helm3-storage
> chartmuseum --storage local --storage-local-rootdir ~/helm3-storage -port 8879 &

Note the port number that is listed and use it in the Helm repo add as follows:

> helm repo add local http://127.0.0.1:8879

To get a list of all of the available Helm chart repositories:

> helm repo list
NAME   URL
local  http://127.0.0.1:8879

Then build your local Helm repository:

> make SKIP_LINT=TRUE [HELM_BIN=<HELM_PATH>] all
HELM_BIN

Sets the helm binary to be used. The default value uses helm from PATH.

The Helm search command reads through all of the repositories configured on the system, and looks for matches:

> helm search repo local
NAME                    VERSION    DESCRIPTION
local/appc              2.0.0      Application Controller
local/clamp             2.0.0      ONAP Clamp
local/common            2.0.0      Common templates for inclusion in other charts
local/onap              2.0.0      Open Network Automation Platform (ONAP)
local/robot             2.0.0      A helm Chart for kubernetes-ONAP Robot
local/so                2.0.0      ONAP Service Orchestrator

In any case, setup of the Helm repository is a one time activity.

Next, install Helm Plugins required to deploy the ONAP release:

> cp -R ~/oom/kubernetes/helm/plugins/ ~/.local/share/helm/plugins

Once the repo is setup, installation of ONAP can be done with a single command:

> helm deploy development local/onap --namespace onap --set global.masterPassword=password

This will install ONAP from a local repository in a ‘development’ Helm release. As described below, to override the default configuration values provided by OOM, an environment file can be provided on the command line as follows:

> helm deploy development local/onap --namespace onap -f overrides.yaml --set global.masterPassword=password

Note

Refer to the Configure section for how to update overrides.yaml and values.yaml.
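
A minimal overrides.yaml might simply toggle a couple of components, reusing any of the enabled flags from onap/values.yaml shown in the Configure section (an illustrative sketch):

> cat > overrides.yaml <<EOF
> so:
>   enabled: true
> robot:
>   enabled: true
> EOF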

To get a summary of the status of all of the pods (containers) running in your deployment:

> kubectl get pods --namespace onap -o=wide

Note

The Kubernetes namespace concept allows for multiple instances of a component (such as all of ONAP) to co-exist with other components in the same Kubernetes cluster by isolating them entirely. Namespaces share only the hosts that form the cluster thus providing isolation between production and development systems as an example.

Note

The Helm --name option refers to a release name and not a Kubernetes namespace.

To install a specific version of a single ONAP component (so in this example) with the given release name enter:

> helm deploy so onap/so --version 9.0.0 --set global.masterPassword=password --set global.flavor=unlimited --namespace onap

Note

The dependent components should be installed for the component being installed.

To display details of a specific resource or group of resources type:

> kubectl describe pod so-1071802958-6twbl

where the pod identifier refers to the auto-generated pod identifier.
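
The logs of that same pod can be tailed in a similar way, which is often the quickest way to diagnose a failing component:

> kubectl logs -f so-1071802958-6twbl --namespace onap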


Configure

Each project within ONAP has its own configuration data generally consisting of: environment variables, configuration files, and database initial values. Many technologies are used across the projects resulting in significant operational complexity and an inability to apply global parameters across the entire ONAP deployment. OOM solves this problem by introducing a common configuration technology, Helm charts, that provide a hierarchical configuration with the ability to override values with higher level charts or command line options.

The structure of the configuration of ONAP is shown in the following diagram. Note that key/value pairs of a parent will always take precedence over those of a child. Also note that values set on the command line have the highest precedence of all.

digraph config {
   {
      node     [shape=folder]
      oValues  [label="values.yaml"]
      demo     [label="onap-demo.yaml"]
      prod     [label="onap-production.yaml"]
      oReq     [label="requirements.yaml"]
      soValues [label="values.yaml"]
      soReq    [label="requirements.yaml"]
      mdValues [label="values.yaml"]
   }
   {
      oResources  [label="resources"]
   }
   onap -> oResources
   onap -> oValues
   oResources -> environments
   oResources -> oReq
   oReq -> so
   environments -> demo
   environments -> prod
   so -> soValues
   so -> soReq
   so -> charts
   charts -> mariadb
   mariadb -> mdValues

}

The top level onap/values.yaml file contains the values required to be set before deploying ONAP. Here are the contents of this file:

# Copyright © 2019 Amdocs, Bell Canada
# Copyright (c) 2020 Nordix Foundation, Modifications
# Modifications Copyright © 2020-2021 Nokia
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#       http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

#################################################################
# Global configuration overrides.
#
# These overrides will affect all helm charts (ie. applications)
# that are listed below and are 'enabled'.
#################################################################
global:
  # Change to an unused port prefix range to prevent port conflicts
  # with other instances running within the same k8s cluster
  nodePortPrefix: 302
  nodePortPrefixExt: 304


  # Install test components
  # test components are out of the scope of ONAP but allow to have a entire
  # environment to test the different features of ONAP
  # Current tests environments provided:
  #  - netbox (needed for CDS IPAM)
  #  - AWX (needed for XXX)
  #  - EJBCA Server (needed for CMPv2 tests)
  # Today, "contrib" chart that hosting these components must also be enabled
  # in order to make it work. So `contrib.enabled` must have the same value than
  # addTestingComponents
  addTestingComponents: &testing false

  # ONAP Repository
  # Four different repositories are used
  # You can change individually these repositories to ones that will serve the
  # right images. If credentials are needed for one of them, see below.
  repository: nexus3.onap.org:10001
  dockerHubRepository: &dockerHubRepository docker.io
  elasticRepository: &elasticRepository docker.elastic.co
  googleK8sRepository: k8s.gcr.io
  githubContainerRegistry: ghcr.io

  #/!\ DEPRECATED /!\
  # Legacy repositories which will be removed at the end of migration.
  # Please don't use
  loggingRepository: *elasticRepository
  busyboxRepository: *dockerHubRepository

  # Default credentials
  # they're optional. If the target repository doesn't need them, comment them
  repositoryCred:
    user: docker
    password: docker
  # If you want / need authentication on the repositories, please set
  # Don't set them if the target repo is the same than others
  # so id you've set repository to value `my.private.repo` and same for
  # dockerHubRepository, you'll have to configure only repository (exclusive) OR
  # dockerHubCred.
  # dockerHubCred:
  #   user: myuser
  #   password: mypassord
  # elasticCred:
  #   user: myuser
  #   password: mypassord
  # googleK8sCred:
  #   user: myuser
  #   password: mypassord


  # common global images
  # Busybox for simple shell manipulation
  busyboxImage: busybox:1.32

  # curl image
  curlImage: curlimages/curl:7.69.1

  # env substitution image
  envsubstImage: dibi/envsubst:1

  # generate htpasswd files image
  # there's only latest image for htpasswd
  htpasswdImage: xmartlabs/htpasswd:latest

  # kubenretes client image
  kubectlImage: bitnami/kubectl:1.19

  # logging agent
  loggingImage: beats/filebeat:5.5.0

  # mariadb client image
  mariadbImage: bitnami/mariadb:10.5.8

  # nginx server image
  nginxImage: bitnami/nginx:1.18-debian-10

  # postgreSQL client and server image
  postgresImage: crunchydata/crunchy-postgres:centos8-13.2-4.6.1

  # readiness check image
  readinessImage: onap/oom/readiness:3.0.1

  # image pull policy
  pullPolicy: Always

  # default java image
  jreImage: onap/integration-java11:7.2.0

  # default clusterName
  # {{ template "common.fullname" . }}.{{ template "common.namespace" . }}.svc.{{ .Values.global.clusterName }}
  clusterName: cluster.local

  # default mount path root directory referenced
  # by persistent volumes and log files
  persistence:
    mountPath: /dockerdata-nfs
    enableDefaultStorageclass: false
    parameters: {}
    storageclassProvisioner: kubernetes.io/no-provisioner
    volumeReclaimPolicy: Retain

  # override default resource limit flavor for all charts
  flavor: unlimited

  # flag to enable debugging - application support required
  debugEnabled: false

  # default password complexity
  # available options: phrase, name, pin, basic, short, medium, long, maximum security
  # More datails: https://www.masterpasswordapp.com/masterpassword-algorithm.pdf
  passwordStrength: long

  # configuration to set log level to all components (the one that are using
  # "common.log.level" to set this)
  # can be overrided per components by setting logConfiguration.logLevelOverride
  # to the desired value
  # logLevel: DEBUG

  # Global ingress configuration
  ingress:
    enabled: false
    virtualhost:
      baseurl: "simpledemo.onap.org"

  # Global Service Mesh configuration
  # POC Mode, don't use it in production
  serviceMesh:
    enabled: false
    tls: true

  # metrics part
  # If enabled, exporters (for prometheus) will be deployed
  # if custom resources set to yes, CRD from prometheus operartor will be
  # created
  # Not all components have it enabled.
  #
  metrics:
    enabled: true
    custom_resources: false

  # Disabling AAF
  # POC Mode, only for use in development environment
  # Keep it enabled in production
  aafEnabled: true
  aafAgentImage: onap/aaf/aaf_agent:2.1.20

  # Disabling MSB
  # POC Mode, only for use in development environment
  msbEnabled: true

  # default values for certificates
  certificate:
    default:
      renewBefore: 720h #30 days
      duration:    8760h #365 days
      subject:
        organization: "Linux-Foundation"
        country: "US"
        locality: "San-Francisco"
        province: "California"
        organizationalUnit: "ONAP"
      issuer:
        group: certmanager.onap.org
        kind: CMPv2Issuer
        name: cmpv2-issuer-onap

  # Enabling CMPv2
  cmpv2Enabled: true
  platform:
    certificates:
      clientSecretName: oom-cert-service-client-tls-secret
      keystoreKeyRef: keystore.jks
      truststoreKeyRef: truststore.jks
      keystorePasswordSecretName: oom-cert-service-certificates-password
      keystorePasswordSecretKey: password
      truststorePasswordSecretName: oom-cert-service-certificates-password
      truststorePasswordSecretKey: password

  # Indicates offline deployment build
  # Set to true if you are rendering helm charts for offline deployment
  # Otherwise keep it disabled
  offlineDeploymentBuild: false

  # TLS
  # Set to false if you want to disable TLS for NodePorts. Be aware that this
  # will loosen your security.
  # if set this element will force or not tls even if serviceMesh.tls is set.
  # tlsEnabled: false

  # Logging
  # Currently, centralized logging is not in best shape so it's disabled by
  # default
  centralizedLoggingEnabled: &centralizedLogging false

  # Example of specific for the components where you want to disable TLS only for
  # it:
  # if set this element will force or not tls even if global.serviceMesh.tls and
  # global.tlsEnabled is set otherwise.
  # robot:
  #   tlsOverride: false

  # Global storage configuration
  #    Set to "-" for default, or with the name of the storage class
  #    Please note that if you use AAF, CDS, SDC, Netbox or Robot, you need a
  #    storageclass with RWX capabilities (or set specific configuration for these
  #    components).
  # persistence:
  #   storageClass: "-"

# Example of specific for the components which requires RWX:
# aaf:
#   persistence:
#     storageClassOverride: "My_RWX_Storage_Class"
# contrib:
#   netbox:
#     netbox-app:
#       persistence:
#         storageClassOverride: "My_RWX_Storage_Class"
# cds:
#   cds-blueprints-processor:
#     persistence:
#       storageClassOverride: "My_RWX_Storage_Class"
# sdc:
#   sdc-onboarding-be:
#     persistence:
#       storageClassOverride: "My_RWX_Storage_Class"

#################################################################
# Enable/disable and configure helm charts (ie. applications)
# to customize the ONAP deployment.
#################################################################
aaf:
  enabled: false
aai:
  enabled: false
appc:
  enabled: false
  config:
    openStackType: OpenStackProvider
    openStackName: OpenStack
    openStackKeyStoneUrl: http://localhost:8181/apidoc/explorer/index.html
    openStackServiceTenantName: default
    openStackDomain: default
    openStackUserName: admin
    openStackEncryptedPassword: admin
cassandra:
  enabled: false
cds:
  enabled: false
clamp:
  enabled: false
cli:
  enabled: false
consul:
  enabled: false
# Today, "contrib" chart that hosting these components must also be enabled
# in order to make it work. So `contrib.enabled` must have the same value than
# addTestingComponents
contrib:
  enabled: *testing
cps:
  enabled: false
dcaegen2:
  enabled: false
dcaegen2-services:
  enabled: false
dcaemod:
  enabled: false
holmes:
  enabled: false
dmaap:
  enabled: false
# Today, "logging" chart that perform the central part of logging must also be
# enabled in order to make it work. So `logging.enabled` must have the same
# value than centralizedLoggingEnabled
log:
  enabled: *centralizedLogging
sniro-emulator:
  enabled: false
oof:
  enabled: false
mariadb-galera:
  enabled: false
msb:
  enabled: false
multicloud:
  enabled: false
nbi:
  enabled: false
  config:
    # openstack configuration
    openStackRegion: "Yolo"
    openStackVNFTenantId: "1234"
policy:
  enabled: false
pomba:
  enabled: false
portal:
  enabled: false
robot:
  enabled: false
  config:
    # openStackEncryptedPasswordHere should match the encrypted string used in SO and APPC and overridden per environment
    openStackEncryptedPasswordHere: "c124921a3a0efbe579782cde8227681e"
sdc:
  enabled: false
sdnc:
  enabled: false

  replicaCount: 1

  mysql:
    replicaCount: 1
so:
  enabled: false

  replicaCount: 1

  liveness:
    # necessary to disable liveness probe when setting breakpoints
    # in debugger so K8s doesn't restart unresponsive container
    enabled: false

  # so server configuration
  config:
    # message router configuration
    dmaapTopic: "AUTO"
    # openstack configuration
    openStackUserName: "vnf_user"
    openStackRegion: "RegionOne"
    openStackKeyStoneUrl: "http://1.2.3.4:5000"
    openStackServiceTenantName: "service"
    openStackEncryptedPasswordHere: "c124921a3a0efbe579782cde8227681e"

  # in order to enable static password for so-monitoring uncomment:
  # so-monitoring:
  #   server:
  #     monitoring:
  #       password: demo123456!
uui:
  enabled: false
vfc:
  enabled: false
vid:
  enabled: false
vnfsdk:
  enabled: false
modeling:
  enabled: false
platform:
  enabled: false
a1policymanagement:
  enabled: false

cert-wrapper:
  enabled: true
repository-wrapper:
  enabled: true
roles-wrapper:
  enabled: true

One may wish to create a values file that is specific to a given deployment such that it can be differentiated from other deployments. For example, an onap-development.yaml file may create a minimal environment for development while onap-production.yaml might describe a production deployment that operates independently of the developer version.

For example, if the production OpenStack instance was different from a developer’s instance, the onap-production.yaml file may contain a different value for the vnfDeployment/openstack/oam_network_cidr key as shown below.

nsPrefix: onap
nodePortPrefix: 302
apps: consul msb mso message-router sdnc vid robot portal policy appc aai
sdc dcaegen2 log cli multicloud clamp vnfsdk aaf kube2msb
dataRootDir: /dockerdata-nfs

# docker repositories
repository:
  onap: nexus3.onap.org:10001
  oom: oomk8s
  aai: aaionap
  filebeat: docker.elastic.co

image:
  pullPolicy: Never

# vnf deployment environment
vnfDeployment:
  openstack:
    ubuntu_14_image: "Ubuntu_14.04.5_LTS"
    public_net_id: "e8f51956-00dd-4425-af36-045716781ffc"
    oam_network_id: "d4769dfb-c9e4-4f72-b3d6-1d18f4ac4ee6"
    oam_subnet_id: "191f7580-acf6-4c2b-8ec0-ba7d99b3bc4e"
    oam_network_cidr: "192.168.30.0/24"
<...>

To deploy ONAP with this environment file, enter:

> helm deploy local/onap -n onap -f onap/resources/environments/onap-production.yaml --set global.masterPassword=password

#################################################################
# Global configuration overrides.
#
# These overrides will affect all helm charts (ie. applications)
# that are listed below and are 'enabled'.
#################################################################
global:
  # Change to an unused port prefix range to prevent port conflicts
  # with other instances running within the same k8s cluster
  nodePortPrefix: 302

  # image repositories
  repository: nexus3.onap.org:10001
  repositorySecret: eyJuZXh1czMub25hcC5vcmc6MTAwMDEiOnsidXNlcm5hbWUiOiJkb2NrZXIiLCJwYXNzd29yZCI6ImRvY2tlciIsImVtYWlsIjoiQCIsImF1dGgiOiJaRzlqYTJWeU9tUnZZMnRsY2c9PSJ9fQ==
  # readiness check
  readinessImage: onap/oom/readiness:3.0.1
  # logging agent
  loggingRepository: docker.elastic.co

  # image pull policy
  pullPolicy: IfNotPresent

  # override default mount path root directory
  # referenced by persistent volumes and log files
  persistence:
    mountPath: /dockerdata

  # flag to enable debugging - application support required
  debugEnabled: true

#################################################################
# Enable/disable and configure helm charts (ie. applications)
# to customize the ONAP deployment.
#################################################################
aaf:
  enabled: false
aai:
  enabled: false
appc:
  enabled: false
clamp:
  enabled: true
cli:
  enabled: false
consul: # Consul Health Check Monitoring
  enabled: false
cps:
  enabled: false
dcaegen2:
  enabled: false
log:
  enabled: false
message-router:
  enabled: false
mock:
  enabled: false
msb:
  enabled: false
multicloud:
  enabled: false
policy:
  enabled: false
portal:
  enabled: false
robot: # Robot Health Check
  enabled: true
sdc:
  enabled: false
sdnc:
  enabled: false
so: # Service Orchestrator
  enabled: true

  replicaCount: 1

  liveness:
    # necessary to disable liveness probe when setting breakpoints
    # in debugger so K8s doesn't restart unresponsive container
    enabled: true

  # so server configuration
  config:
    # message router configuration
    dmaapTopic: "AUTO"
    # openstack configuration
    openStackUserName: "vnf_user"
    openStackRegion: "RegionOne"
    openStackKeyStoneUrl: "http://1.2.3.4:5000"
    openStackServiceTenantName: "service"
    openStackEncryptedPasswordHere: "c124921a3a0efbe579782cde8227681e"

  # configure embedded mariadb
  mariadb:
    config:
      mariadbRootPassword: password
uui:
  enabled: false
vfc:
  enabled: false
vid:
  enabled: false
vnfsdk:
  enabled: false

When deploying all of ONAP, a requirements.yaml file controls which ONAP components are included and at what version. Here is an excerpt of this file:

# Referencing a named repo called 'local'.
# Can add this repo by running commands like:
# > helm serve
# > helm repo add local http://127.0.0.1:8879
dependencies:
<...>
  - name: so
    version: ~9.0.0
    repository: '@local'
    condition: so.enabled
<...>

The ~ operator in the so version value indicates that the latest “9.X.X” version of so shall be used, thus allowing the chart to accommodate minor upgrades that don’t impact the so API; hence, a version such as 9.0.1 would be installed in this case.
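
To see which chart versions are actually available to satisfy such a range, the local repository built earlier can be queried with:

> helm search repo local/so --versions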

The onap/resources/environments/dev.yaml file (see the excerpt below) enables fine-grained control over which components are included as part of this deployment. By changing this so line to enabled: false the so component will not be deployed. If this change is part of an upgrade the existing so component will be shut down. Other so parameters, and even so child values, can be modified; for example the so liveness probe could be disabled (which is not recommended, as this change would disable auto-healing of so). A command-line equivalent is sketched after the excerpt below.

#################################################################
# Global configuration overrides.
#
# These overrides will affect all helm charts (ie. applications)
# that are listed below and are 'enabled'.
#################################################################
global:
<...>

#################################################################
# Enable/disable and configure helm charts (ie. applications)
# to customize the ONAP deployment.
#################################################################
aaf:
  enabled: false
<...>
so: # Service Orchestrator
  enabled: true

  replicaCount: 1

  liveness:
    # necessary to disable liveness probe when setting breakpoints
    # in debugger so K8s doesn't restart unresponsive container
    enabled: true

<...>

Accessing the ONAP Portal using OOM and a Kubernetes Cluster

The ONAP deployment created by OOM operates in a private IP network that isn’t publicly accessible (i.e. OpenStack VMs with private internal network) which blocks access to the ONAP Portal. To enable direct access to this Portal from a user’s own environment (a laptop etc.) the portal application’s port 8989 is exposed through a Kubernetes LoadBalancer object.

Typically, to be able to access the Kubernetes nodes publicly a public address is assigned. In OpenStack this is a floating IP address.

When the portal-app chart is deployed a Kubernetes service is created that instantiates a load balancer. The LB chooses the private interface of one of the nodes as in the example below (10.0.0.4 is private to the K8s cluster only). Then to be able to access the portal on port 8989 from outside the K8s & OpenStack environment, the user needs to assign/get the floating IP address that corresponds to the private IP as follows:

> kubectl -n onap get services|grep "portal-app"
portal-app  LoadBalancer   10.43.142.201   10.0.0.4   8989:30215/TCP,8006:30213/TCP,8010:30214/TCP   1d   app=portal-app,release=dev

In this example, use the 10.0.0.4 private address as a key to find the corresponding public address, which in this example is 10.12.6.155. If you're using OpenStack you can do the lookup with the Horizon GUI or the OpenStack CLI for your tenant (openstack server list). That IP is then used in your /etc/hosts to map the fixed DNS aliases required by the ONAP Portal as shown below:

10.12.6.155 portal.api.simpledemo.onap.org
10.12.6.155 vid.api.simpledemo.onap.org
10.12.6.155 sdc.api.fe.simpledemo.onap.org
10.12.6.155 sdc.workflow.plugin.simpledemo.onap.org
10.12.6.155 sdc.dcae.plugin.simpledemo.onap.org
10.12.6.155 portal-sdk.simpledemo.onap.org
10.12.6.155 policy.api.simpledemo.onap.org
10.12.6.155 aai.api.sparky.simpledemo.onap.org
10.12.6.155 cli.api.simpledemo.onap.org
10.12.6.155 msb.api.discovery.simpledemo.onap.org
10.12.6.155 msb.api.simpledemo.onap.org
10.12.6.155 clamp.api.simpledemo.onap.org
10.12.6.155 so.api.simpledemo.onap.org

Ensure you have disabled any proxy settings in the browser you are using to access the portal, then access the new SSL-encrypted URL: https://portal.api.simpledemo.onap.org:30225/ONAPPORTAL/login.htm

Note

When using the HTTPS-based Portal URL, the browser needs to be configured to accept insecure credentials. Additionally, when opening an application inside the Portal, the browser might block the content; in that case, disable the blocking and reload the page.

Note

Besides the ONAP Portal, the components can provide additional user interfaces; please check the component-specific documentation.

Note

Alternatives Considered:
  • Kubernetes port forwarding was considered but discarded as it would require the end user to run a script that opens up port forwarding tunnels to each of the pods that provides a portal application widget.

  • Reverting to a VNC server similar to what was deployed in the Amsterdam release was also considered but there were many issues with resolution, lack of volume mount, /etc/hosts dynamic update, file upload that were a tall order to solve in time for the Beijing release.

Observations:

  • If you are not using floating IPs in your Kubernetes deployment and are directly attaching a public IP address (i.e. by using your public provider network) to your K8S node VMs' network interface, then the output of 'kubectl -n onap get services | grep "portal-app"' will show your public IP instead of the private network's IP. Therefore, you can grab this public IP directly (instead of trying to find the floating IP first) and map this IP in /etc/hosts.

_images/oomLogoV2-Monitor.png

Monitor

All highly available systems include at least one facility to monitor the health of components within the system. Such health monitors are often used as inputs to distributed coordination systems (such as etcd, Zookeeper, or Consul) and monitoring systems (such as Nagios or Zabbix). OOM provides two mechanisms to monitor the real-time health of an ONAP deployment:

  • a Consul GUI for a human operator or downstream monitoring systems, and

  • a set of Kubernetes liveness probes, described in the Heal section, which feed into the Kubernetes manager and enable automatic healing of failed containers.

Within ONAP, Consul is the monitoring system of choice and deployed by OOM in two parts:

  • a three-way, centralized Consul server cluster is deployed as a highly available monitor of all of the ONAP components, and

  • a number of Consul agents.

The Consul server provides a user interface that allows a user to graphically view the current health status of all of the ONAP components for which agents have been created - a sample from the ONAP Integration labs follows:

_images/consulHealth.png

To see the real-time health of a deployment go to: http://<kubernetes IP>:30270/ui/ where a GUI much like the following will be found:

Note

If the Consul GUI is not accessible, you can use the kubectl port-forward method to access the application, as sketched below.
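
A minimal sketch, assuming the Consul server UI service is named consul-server-ui and listens on its default port 8500 (the service name and port may differ between OOM releases):

> kubectl -n onap port-forward svc/consul-server-ui 8500:8500

The GUI is then reachable at http://localhost:8500/ui/ for as long as the port-forward is running.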

_images/oomLogoV2-Heal.png

Heal

The ONAP deployment is defined by Helm charts as mentioned earlier. These Helm charts are also used to implement automatic recoverability of ONAP components when individual components fail. Once ONAP is deployed, a “liveness” probe starts checking the health of the components after a specified startup time.

Should a liveness probe indicate a failed container it will be terminated and a replacement will be started in its place - containers are ephemeral. Should the deployment specification indicate that there are one or more dependencies to this container or component (for example a dependency on a database) the dependency will be satisfied before the replacement container/component is started. This mechanism ensures that, after a failure, all of the ONAP components restart successfully.

To test healing, the following command can be used to delete a pod:

> kubectl delete pod [pod name] -n [pod namespace]

One could then use the following command to monitor the pods and observe the pod being terminated and the service being automatically healed with the creation of a replacement pod:

> kubectl get pods --all-namespaces -o=wide
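
To stream changes instead of repeatedly polling, the standard kubectl watch flag can be added:

> kubectl get pods --all-namespaces -o=wide -w
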
_images/oomLogoV2-Scale.png

Scale

Many of the ONAP components are horizontally scalable which allows them to adapt to expected offered load. During the Beijing release scaling is static, that is during deployment or upgrade a cluster size is defined and this cluster will be maintained even in the presence of faults. The parameter that controls the cluster size of a given component is found in the values.yaml file for that component. Here is an excerpt that shows this parameter:

# default number of instances
replicaCount: 1

In order to change the size of a cluster, an operator could use a helm upgrade (described in detail in the next section) as follows:

> helm upgrade [RELEASE] [CHART] [flags]

The RELEASE argument can be obtained from the following command:

> helm list

Below is the example for the same:

> helm list
  NAME                    REVISION        UPDATED                         STATUS          CHART                   APP VERSION     NAMESPACE
  dev                     1               Wed Oct 14 13:49:52 2020        DEPLOYED        onap-9.0.0              Istanbul        onap
  dev-cassandra           5               Thu Oct 15 14:45:34 2020        DEPLOYED        cassandra-9.0.0                         onap
  dev-contrib             1               Wed Oct 14 13:52:53 2020        DEPLOYED        contrib-9.0.0                           onap
  dev-mariadb-galera      1               Wed Oct 14 13:55:56 2020        DEPLOYED        mariadb-galera-9.0.0                    onap

Here the NAME column shows the release name. In our case we want to perform the scale operation on cassandra, so the release name is dev-cassandra.

Now we need to obtain the chart name for cassandra. Use the below command to get the chart name:

> helm search cassandra

Below is the example for the same:

> helm search cassandra
  NAME                    CHART VERSION   APP VERSION     DESCRIPTION
  local/cassandra         9.0.0                           ONAP cassandra
  local/portal-cassandra  9.0.0                           Portal cassandra
  local/aaf-cass          9.0.0                           ONAP AAF cassandra
  local/sdc-cs            9.0.0                           ONAP Service Design and Creation Cassandra

Here the NAME column shows the chart name. As we want to perform the scale operation on cassandra, the corresponding chart name is local/cassandra.

Now that we have both of the command's arguments, we can perform the scale operation for cassandra as follows:

> helm upgrade dev-cassandra local/cassandra --set replicaCount=3

Using this command we can scale up or scale down the cassandra db instances.
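
To verify the result, one can list the cassandra pods and confirm that three instances eventually reach the Running state (a sketch, assuming the onap namespace used above):

> kubectl get pods -n onap | grep cassandra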

The ONAP components use Kubernetes provided facilities to build clustered, highly available systems including: Services with load-balancers, ReplicaSet, and StatefulSet. Some of the open-source projects used by the ONAP components directly support clustered configurations, for example ODL and MariaDB Galera.

The Kubernetes Services abstraction provides a consistent access point for each of the ONAP components, independent of the pod or container architecture of that component. For example, SDN-C uses OpenDaylight clustering with a default cluster size of three, but a Kubernetes service abstracts this cluster from the other ONAP components such that the cluster could change size and the change would be isolated from the other ONAP components by the load balancer implemented in the ODL service abstraction.

A ReplicaSet is a construct that is used to describe the desired state of the cluster. For example ‘replicas: 3’ indicates to Kubernetes that a cluster of 3 instances is the desired state. Should one of the members of the cluster fail, a new member will be automatically started to replace it.

Some of the ONAP components may need a more deterministic deployment; for example to enable intra-cluster communication. For these applications the component can be deployed as a Kubernetes StatefulSet which will maintain a persistent identifier for the pods and thus a stable network id for the pods. For example: the pod names might be web-0, web-1, web-{N-1} for N 'web' pods with corresponding DNS entries such that intra service communication is simple even if the pods are physically distributed across multiple nodes. An example of how these capabilities can be used is described in the Running Consul on Kubernetes tutorial.

_images/oomLogoV2-Upgrade.png

Upgrade

Helm has built-in capabilities to enable the upgrade of pods without causing a loss of the service being provided by that pod or pods (if configured as a cluster). As described in the OOM Developer’s Guide, ONAP components provide an abstracted ‘service’ end point with the pods or containers providing this service hidden from other ONAP components by a load balancer. This capability is used during upgrades to allow a pod with a new image to be added to the service before removing the pod with the old image. This ‘make before break’ capability ensures minimal downtime.

Prior to doing an upgrade, determine the status of the deployed charts:

> helm list
NAME REVISION UPDATED                  STATUS    CHART     NAMESPACE
so   1        Mon Feb 5 10:05:22 2020  DEPLOYED  so-9.0.0  onap

When upgrading a cluster a parameter controls the minimum size of the cluster during the upgrade while another parameter controls the maximum number of nodes in the cluster. For example, SDNC configured as a 3-way ODL cluster might require that during the upgrade no fewer than 2 pods are available at all times to provide service while no more than 5 pods are ever deployed across the two versions at any one time to avoid depleting the cluster of resources. In this scenario, the SDNC cluster would start with 3 old pods then Kubernetes may add a new pod (3 old, 1 new), delete one old (2 old, 1 new), add two new pods (2 old, 3 new) and finally delete the 2 old pods (3 new). During this sequence the constraints of the minimum of two pods and maximum of five would be maintained while providing service the whole time.
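
The exact fields vary by workload type and chart; as a hedged illustration for a Kubernetes Deployment (values are illustrative, not taken from an OOM chart), such minimum and maximum constraints map to the maxUnavailable and maxSurge fields of the rolling update strategy:

# Illustrative Deployment strategy fragment - not from an OOM chart.
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # at least 2 of the 3 replicas stay available
      maxSurge: 2         # never more than 3 + 2 = 5 pods across both versions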

Initiation of an upgrade is triggered by changes in the Helm charts. For example, if the image specified for one of the pods in the SDNC deployment specification were to change (i.e. point to a new Docker image in the nexus3 repository - commonly through the change of a deployment variable), the sequence of events described in the previous paragraph would be initiated.

For example, to upgrade a container by changing configuration, specifically an environment value:

> helm upgrade so onap/so --version 9.0.1 --set enableDebug=true

Issuing this command will result in the appropriate container being stopped by Kubernetes and replaced with a new container with the new environment value.

To upgrade a component to a new version with a new configuration file enter:

> helm upgrade so onap/so --version 9.0.1 -f environments/demo.yaml

To fetch release history enter:

> helm history so
REVISION UPDATED                  STATUS     CHART     DESCRIPTION
1        Mon Feb 5 10:05:22 2020  SUPERSEDED so-9.0.0  Install complete
2        Mon Feb 5 10:10:55 2020  DEPLOYED   so-9.0.1  Upgrade complete

Unfortunately, not all upgrades are successful. In recognition of this the lineup of pods within an ONAP deployment is tagged such that an administrator may force the ONAP deployment back to the previously tagged configuration or to a specific configuration, say to jump back two steps if an incompatibility between two ONAP components is discovered after the two individual upgrades succeeded.

This rollback functionality gives the administrator confidence that in the unfortunate circumstance of a failed upgrade the system can be rapidly brought back to a known good state. This process of rolling upgrades while under service is illustrated in this short YouTube video showing a Zero Downtime Upgrade of a web application while under a 10 million transaction per second load.

For example, to roll back to the previous system revision enter:

> helm rollback so 1

> helm history so
REVISION UPDATED                  STATUS     CHART     DESCRIPTION
1        Mon Feb 5 10:05:22 2020  SUPERSEDED so-9.0.0  Install complete
2        Mon Feb 5 10:10:55 2020  SUPERSEDED so-9.0.1  Upgrade complete
3        Mon Feb 5 10:14:32 2020  DEPLOYED   so-9.0.0  Rollback to 1

Note

The description field can be overridden to document actions taken or include tracking numbers.
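
For example, assuming a Helm 3 client that supports the --description flag, a tracking number (hypothetical here) could be recorded at upgrade time:

> helm upgrade so onap/so --version 9.0.1 --description "JIRA-1234: enable debug"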

Many of the ONAP components contain their own databases which are used to record configuration or state information. The schemas of these databases may change from version to version in such a way that data stored within the database needs to be migrated between versions. If such a migration script is available it can be invoked during the upgrade (or rollback) by Container Lifecycle Hooks. Two such hooks are available, PostStart and PreStop, which containers can access by registering a handler against one or both. Note that it is the responsibility of the ONAP component owners to implement the hook handlers - which could be a shell script or a call to a specific container HTTP endpoint - following the guidelines listed on the Kubernetes site. Lifecycle hooks are not restricted to database migration or even upgrades but can be used anywhere specific operations need to be taken during lifecycle operations.
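
As a sketch only (the script paths are hypothetical; each ONAP component team supplies its own handlers), registering such hooks in a container spec looks roughly like this:

# Illustrative container spec fragment - hook scripts are hypothetical.
containers:
  - name: example-component
    image: "{{ .Values.image }}"
    lifecycle:
      postStart:
        exec:
          command: ["/bin/sh", "-c", "/opt/app/bin/db-migrate.sh upgrade"]
      preStop:
        exec:
          command: ["/bin/sh", "-c", "/opt/app/bin/db-migrate.sh checkpoint"]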

OOM uses the Helm K8S package manager to deploy ONAP components. Each component is arranged in a packaging format called a chart - a collection of files that describe a set of k8s resources. Helm allows for rolling upgrades of a deployed ONAP component. To upgrade a component's Helm release you will need an updated Helm chart. The chart might have modified, deleted or added values, deployment yamls, and more. To get the release name use:

> helm ls

To easily upgrade the release use:

> helm upgrade [RELEASE] [CHART]

To roll back to a previous release version use:

> helm rollback [flags] [RELEASE] [REVISION]

For example, to upgrade the onap-so helm release to the latest SO container release v1.1.2:

  • Edit so values.yaml which is part of the chart

  • Change “so: nexus3.onap.org:10001/openecomp/so:v1.1.1” to “so: nexus3.onap.org:10001/openecomp/so:v1.1.2”

  • From the chart location run:

    > helm upgrade onap-so
    

The previous so pod will be terminated and a new so pod with an updated so container will be created.

_images/oomLogoV2-Delete.png

Delete

Existing deployments can be partially or fully removed once they are no longer needed. To minimize errors it is recommended that before deleting components from a running deployment the operator perform a ‘dry-run’ to display exactly what will happen with a given command prior to actually deleting anything. For example:

> helm undeploy onap --dry-run

will display the outcome of deleting the ‘onap’ release from the deployment. To completely delete a release and remove it from the internal store enter:

> helm undeploy onap

Once the undeploy is complete, delete the namespace as well using the following command:

>  kubectl delete namespace <name of namespace>

Note

You need to provide the namespace name that you used during deployment; below is an example:

>  kubectl delete namespace onap

One can also remove individual components from a deployment by changing the ONAP configuration values. For example, to remove so from a running deployment enter:

> helm undeploy onap-so

will remove so, as the configuration indicates it is no longer part of the deployment. This might be useful if one wanted to replace just so by installing a custom version.

ONAP PaaS set-up

Starting from the Honolulu release, Cert-Manager and Prometheus Stack are part of the k8s PaaS for ONAP operations and can be installed to provide additional functionality for ONAP engineers.

The versions of PaaS components that are supported by OOM are as follows:

ONAP PaaS components

Release     Cert-Manager   Prometheus Stack
honolulu    1.2.0          13.x
istanbul    1.5.4          19.x

This guide provides instructions on how to install the PaaS components for ONAP.

Cert-Manager

Cert-Manager is a native Kubernetes certificate management controller. It can help with issuing certificates from a variety of sources, such as Let's Encrypt, HashiCorp Vault, Venafi, a simple signing key pair, self-signed certificates or external issuers. It ensures certificates are valid and up to date, and attempts to renew certificates at a configured time before expiry.

Installation steps

The recommended version of Cert-Manager for Kubernetes 1.19 is v1.5.4. Cert-Manager is deployed using regular YAML manifests which include all the needed resources (the CustomResourceDefinitions, cert-manager, namespace, and the webhook component).

Full installation instructions, including details on how to configure extra functionality in Cert-Manager can be found in the Cert-Manager Installation documentation.

There is also a kubectl plugin (kubectl cert-manager) that can help you to manage cert-manager resources inside your cluster. For installation steps, please refer to Cert-Manager kubectl plugin documentation.

Installation can be as simple as:

> kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.5.4/cert-manager.yaml
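
To confirm the installation, the controller pods can be listed (cert-manager is the namespace created by the manifest):

> kubectl get pods --namespace cert-manager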

Prometheus Stack (optional)

Prometheus is an open-source systems monitoring and alerting toolkit with an active ecosystem.

Kube Prometheus Stack is a collection of Kubernetes manifests, Grafana dashboards, and Prometheus rules combined with documentation and scripts to provide easy to operate end-to-end Kubernetes cluster monitoring with Prometheus using the Prometheus Operator. As it includes both the Prometheus Operator and Grafana dashboards, there is no need to set them up separately.

Installation steps

The recommended version of kube-prometheus-stack chart for Kubernetes 1.19 is 19.x (which is currently the latest major chart version), for example 19.0.2.

In order to install Prometheus Stack, you must follow these steps:

  • Create the namespace for Prometheus Stack:

    > kubectl create namespace prometheus
    
  • Add the prometheus-community Helm repository:

    > helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
    
  • Update your local Helm chart repository cache:

    > helm repo update
    
  • To install the kube-prometheus-stack Helm chart in latest version:

    > helm install prometheus prometheus-community/kube-prometheus-stack --namespace=prometheus
    

    To install the kube-prometheus-stack Helm chart in specific version, for example 19.0.2:

    > helm install prometheus prometheus-community/kube-prometheus-stack --namespace=prometheus --version=19.0.2
    
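After either command completes, the stack can be verified by listing the pods in the prometheus namespace (pod names will vary):

> kubectl get pods --namespace=prometheus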

OOM Developer Guide

_images/oomLogoV2-medium.png

ONAP consists of a large number of components, each of which are substantial projects within themselves, which results in a high degree of complexity in deployment and management. To cope with this complexity the ONAP Operations Manager (OOM) uses a Helm model of ONAP - Helm being the primary management system for Kubernetes container systems - to drive all user driven life-cycle management operations. The Helm model of ONAP is composed of a set of hierarchical Helm charts that define the structure of the ONAP components and the configuration of these components. These charts are fully parameterized such that a single environment file defines all of the parameters needed to deploy ONAP. A user of ONAP may maintain several such environment files to control the deployment of ONAP in multiple environments such as development, pre-production, and production.

The following sections describe how the ONAP Helm charts are constructed.

Container Background

Linux containers allow for an application and all of its operating system dependencies to be packaged and deployed as a single unit without including a guest operating system as done with virtual machines. The most popular container solution is Docker which provides tools for container management like the Docker Host (dockerd) which can create, run, stop, move, or delete a container. Docker has a very popular registry of container images that can be used by any Docker system; however, in the ONAP context, Docker images are built by the standard CI/CD flow and stored in Nexus repositories. OOM uses the "standard" ONAP docker containers and three new ones specifically created for OOM.

Containers are isolated from each other primarily via name spaces within the Linux kernel without the need for multiple guest operating systems. As such, multiple containers can be deployed with so little overhead that all of ONAP can be deployed on a single host. With some optimization of the ONAP components (e.g. elimination of redundant database instances) it may be possible to deploy ONAP on a single laptop computer.

Helm Charts

A Helm chart is a collection of files that describe a related set of Kubernetes resources. A simple chart might be used to deploy something simple, like a memcached pod, while a complex chart might contain many micro-services arranged in a hierarchy as found in the aai ONAP component.

Charts are created as files laid out in a particular directory tree, then they can be packaged into versioned archives to be deployed. There is a public archive of Helm Charts on GitHub that includes many technologies applicable to ONAP. Some of these charts have been used in ONAP and all of the ONAP charts have been created following the guidelines provided.

The top level of the ONAP charts is shown below:

common
├── cassandra
│   ├── Chart.yaml
│   ├── requirements.yaml
│   ├── resources
│   │   ├── config
│   │   │   └── docker-entrypoint.sh
│   │   ├── exec.py
│   │   └── restore.sh
│   ├── templates
│   │   ├── backup
│   │   │   ├── configmap.yaml
│   │   │   ├── cronjob.yaml
│   │   │   ├── pv.yaml
│   │   │   └── pvc.yaml
│   │   ├── configmap.yaml
│   │   ├── pv.yaml
│   │   ├── service.yaml
│   │   └── statefulset.yaml
│   └── values.yaml
├── common
│   ├── Chart.yaml
│   ├── templates
│   │   ├── _createPassword.tpl
│   │   ├── _ingress.tpl
│   │   ├── _labels.tpl
│   │   ├── _mariadb.tpl
│   │   ├── _name.tpl
│   │   ├── _namespace.tpl
│   │   ├── _repository.tpl
│   │   ├── _resources.tpl
│   │   ├── _secret.yaml
│   │   ├── _service.tpl
│   │   ├── _storage.tpl
│   │   └── _tplValue.tpl
│   └── values.yaml
├── ...
└── postgres-legacy
    ├── Chart.yaml
    ├── requirements.yaml
    ├── charts
    └── configs

The common section of charts consists of a set of templates that assist with parameter substitution (_name.tpl, _namespace.tpl and others) and a set of charts for components used throughout ONAP. When the common components are used by other charts they are either instantiated each time or a shared instance is deployed for several components.

All of the ONAP components have charts that follow the pattern shown below:

name-of-my-component
├── Chart.yaml
├── requirements.yaml
├── component
│   └── subcomponent-folder
├── charts
│   └── subchart-folder
├── resources
│   ├── folder1
│   │   ├── file1
│   │   └── file2
│   └── folder2
│       ├── file3
│       └── folder3
│           └── file4
├── templates
│   ├── NOTES.txt
│   ├── configmap.yaml
│   ├── deployment.yaml
│   ├── ingress.yaml
│   ├── job.yaml
│   ├── secrets.yaml
│   └── service.yaml
└── values.yaml

Note that the component charts / components may include a hierarchy of sub components and can themselves be quite complex.

You can use either the charts or the components folder for your subcomponents. The charts folder means that the subcomponent will always be deployed.

The components folder means we can choose if we want to deploy the subcomponent.

This choice is done in root values.yaml:

---
global:
  key: value

component1:
  enabled: true
component2:
  enabled: true

Then in requirements.yaml, you’ll use these values:

---
dependencies:
  - name: common
    version: ~x.y-0
    repository: '@local'
  - name: component1
    version: ~x.y-0
    repository: 'file://components/component1'
    condition: component1.enabled
  - name: component2
    version: ~x.y-0
    repository: 'file://components/component2'
    condition: component2.enabled

Configuration of the components varies somewhat from component to component but generally follows the pattern of one or more configmap.yaml files which can directly provide configuration to the containers in addition to processing configuration files stored in the config directory. It is the responsibility of each ONAP component team to update these configuration files when changes are made to the project containers that impact configuration.

The following section describes how the hierarchical ONAP configuration system is key to management of such a large system.

Configuration Management

ONAP is a large system composed of many components - each of which are complex systems in themselves - that needs to be deployed in a number of different ways. For example, within a single operator’s network there may be R&D deployments under active development, pre-production versions undergoing system testing and production systems that are operating live networks. Each of these deployments will differ in significant ways, such as the version of the software images deployed. In addition, there may be a number of application specific configuration differences, such as operating system environment variables. The following describes how the Helm configuration management system is used within the OOM project to manage both ONAP infrastructure configuration as well as ONAP components configuration.

One of the artifacts that OOM/Kubernetes uses to deploy ONAP components is the deployment specification, yet another yaml file. Within these deployment specs are a number of parameters as shown in the following example:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  labels:
    app.kubernetes.io/name: zookeeper
    helm.sh/chart: zookeeper
    app.kubernetes.io/component: server
    app.kubernetes.io/managed-by: Tiller
    app.kubernetes.io/instance: onap-oof
  name: onap-oof-zookeeper
  namespace: onap
spec:
  <...>
  replicas: 3
  selector:
    matchLabels:
      app.kubernetes.io/name: zookeeper
      app.kubernetes.io/component: server
      app.kubernetes.io/instance: onap-oof
  serviceName: onap-oof-zookeeper-headless
  template:
    metadata:
      labels:
        app.kubernetes.io/name: zookeeper
        helm.sh/chart: zookeeper
        app.kubernetes.io/component: server
        app.kubernetes.io/managed-by: Tiller
        app.kubernetes.io/instance: onap-oof
    spec:
      <...>
      affinity:
      containers:
      - name: zookeeper
        <...>
        image: gcr.io/google_samples/k8szk:v3
        imagePullPolicy: Always
        <...>
        ports:
        - containerPort: 2181
          name: client
          protocol: TCP
        - containerPort: 3888
          name: election
          protocol: TCP
        - containerPort: 2888
          name: server
          protocol: TCP
        <...>

Note that within the statefulset specification, one of the container arguments is the key/value pair image: gcr.io/google_samples/k8szk:v3 which specifies the version of the zookeeper software to deploy. Although the statefulset specifications greatly simplify deployment, maintenance of the statefulset specifications themselves becomes problematic as software versions change over time or as different versions are required for different statefulsets. For example, if the R&D team needs to deploy a newer version of mariadb than what is currently used in the production environment, they would need to clone the statefulset specification and change this value. Fortunately, this problem has been solved with the templating capabilities of Helm.

The following example shows how the statefulset specifications are modified to incorporate Helm templates such that key/value pairs can be defined outside of the statefulset specifications and passed during instantiation of the component.

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: {{ include "common.fullname" . }}
  namespace: {{ include "common.namespace" . }}
  labels: {{- include "common.labels" . | nindent 4 }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels: {{- include "common.matchLabels" . | nindent 6 }}
  # serviceName is only needed for StatefulSet
  # put the postfix part only if you have added a postfix on the service name
  serviceName: {{ include "common.servicename" . }}-{{ .Values.service.postfix }}
  <...>
  template:
    metadata:
      labels: {{- include "common.labels" . | nindent 8 }}
      annotations: {{- include "common.tplValue" (dict "value" .Values.podAnnotations "context" $) | nindent 8 }}
      name: {{ include "common.name" . }}
    spec:
      <...>
      containers:
        - name: {{ include "common.name" . }}
          image: {{ .Values.image }}
          imagePullPolicy: {{ .Values.global.pullPolicy | default .Values.pullPolicy }}
          ports:
          {{- range $index, $port := .Values.service.ports }}
            - containerPort: {{ $port.port }}
              name: {{ $port.name }}
          {{- end }}
          {{- range $index, $port := .Values.service.headlessPorts }}
            - containerPort: {{ $port.port }}
              name: {{ $port.name }}
          {{- end }}
          <...>

This version of the statefulset specification has gone through the process of templating values that are likely to change between statefulsets. Note that the image is now specified as: image: {{ .Values.image }} instead of the string used previously. During the deployment phase, Helm (actually the Helm sub-component Tiller) substitutes the {{ .. }} entries with a variable defined in a values.yaml file. The content of this file is as follows:

<...>
image: gcr.io/google_samples/k8szk:v3
replicaCount: 3
<...>

Within the values.yaml file there is an image key with the value gcr.io/google_samples/k8szk:v3 which is the same value used in the non-templated version. Once all of the substitutions are complete, the resulting statefulset specification is ready to be used by Kubernetes.

When creating a template consider the use of default values if appropriate. Helm templating has built-in support for default values; here is an example:

imagePullSecrets:
- name: "{{ .Values.nsPrefix | default "onap" }}-docker-registry-key"

The pipeline operator ("|") used here hints at the power of Helm templates in that, much like an operating system command line, the pipeline operator allows over 60 Helm functions to be embedded directly into the template (note that the Helm template language is a superset of the Go template language). These functions include simple string operations like upper and more complex flow control operations like if/else.
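
As a purely illustrative sketch (the value names debugEnabled and logLevel are hypothetical here, not taken from a specific chart), combining default, upper and an if block might look like:

{{- if .Values.debugEnabled }}
# emit an upper-cased log level, falling back to "debug" when unset
logLevel: {{ .Values.logLevel | default "debug" | upper }}
{{- end }}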

OOM is mainly helm templating. In order to have consistent deployment of the different components of ONAP, some rules must be followed.

Templates are provided in order to create Kubernetes resources (Secrets, Ingress, Services, …) or part of Kubernetes resources (names, labels, resources requests and limits, …).

A full list and brief description of these templates can be found in kubernetes/common/common/documentation.rst.

Service template

In order to create a Service for a component, you have to create a file (with service in the name). For a normal service, just put the following line:

{{ include "common.service" . }}

For headless service, the line to put is the following:

{{ include "common.headlessService" . }}

The configuration of the service is done in component values.yaml:

service:
 name: NAME-OF-THE-SERVICE
 postfix: MY-POSTFIX
 type: NodePort
 annotations:
   someAnnotationsKey: value
 ports:
 - name: tcp-MyPort
   port: 5432
   nodePort: 88
 - name: http-api
   port: 8080
   nodePort: 89
 - name: https-api
   port: 9443
   nodePort: 90

The annotations and postfix keys are optional. If service.type is NodePort, then you have to give a nodePort value for your service ports (which is the suffix of the computed nodePort, see the example).

It would render the following Service Resource (for a component named name-of-my-component, with version x.y.z, helm deployment name my-deployment and global.nodePortPrefix 302):

apiVersion: v1
kind: Service
metadata:
  annotations:
    someAnnotationsKey: value
  name: NAME-OF-THE-SERVICE-MY-POSTFIX
  labels:
    app.kubernetes.io/name: name-of-my-component
    helm.sh/chart: name-of-my-component-x.y.z
    app.kubernetes.io/instance: my-deployment-name-of-my-component
    app.kubernetes.io/managed-by: Tiller
spec:
  ports:
    - port: 5432
      targetPort: tcp-MyPort
      nodePort: 30288
    - port: 8080
      targetPort: http-api
      nodePort: 30289
    - port: 9443
      targetPort: https-api
      nodePort: 30290
  selector:
    app.kubernetes.io/name: name-of-my-component
    app.kubernetes.io/instance:  my-deployment-name-of-my-component
  type: NodePort

In the deployment or statefulSet file, you need to set the correct labels in order for the service to match the pods.

Here is an example to be sure it matches (for a statefulSet):

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: {{ include "common.fullname" . }}
  namespace: {{ include "common.namespace" . }}
  labels: {{- include "common.labels" . | nindent 4 }}
spec:
  selector:
    matchLabels: {{- include "common.matchLabels" . | nindent 6 }}
  # serviceName is only needed for StatefulSet
  # put the postfix part only if you have added a postfix on the service name
  serviceName: {{ include "common.servicename" . }}-{{ .Values.service.postfix }}
  <...>
  template:
    metadata:
      labels: {{- include "common.labels" . | nindent 8 }}
      annotations: {{- include "common.tplValue" (dict "value" .Values.podAnnotations "context" $) | nindent 8 }}
      name: {{ include "common.name" . }}
    spec:
     <...>
     containers:
       - name: {{ include "common.name" . }}
         ports:
         {{- range $index, $port := .Values.service.ports }}
         - containerPort: {{ $port.port }}
           name: {{ $port.name }}
         {{- end }}
         {{- range $index, $port := .Values.service.headlessPorts }}
         - containerPort: {{ $port.port }}
           name: {{ $port.name }}
         {{- end }}
         <...>

The configuration of the service is done in component values.yaml:

service:
 name: NAME-OF-THE-SERVICE
 headless:
   postfix: NONE
   annotations:
     anotherAnnotationsKey : value
   publishNotReadyAddresses: true
 headlessPorts:
 - name: tcp-MyPort
   port: 5432
 - name: http-api
   port: 8080
 - name: https-api
   port: 9443

headless.annotations, headless.postfix and headless.publishNotReadyAddresses keys are optional.

If headless.postfix is not set, then -headless will be appended to the service name.

If it is set to NONE, there will be no postfix.

And if it is set to something else, -something will be appended to the service name.

It would render the following Service Resource (for a component named name-of-my-component, with version x.y.z, helm deployment name my-deployment and global.nodePortPrefix 302):

apiVersion: v1
kind: Service
metadata:
  annotations:
    anotherAnnotationsKey: value
  name: NAME-OF-THE-SERVICE
  labels:
    app.kubernetes.io/name: name-of-my-component
    helm.sh/chart: name-of-my-component-x.y.z
    app.kubernetes.io/instance: my-deployment-name-of-my-component
    app.kubernetes.io/managed-by: Tiller
spec:
  clusterIP: None
  ports:
    - port: 5432
      targetPort: tcp-MyPort
      nodePort: 30288
    - port: 8080
      targetPort: http-api
      nodePort: 30289
    - port: 9443
      targetPort: https-api
      nodePort: 30290
  publishNotReadyAddresses: true
  selector:
    app.kubernetes.io/name: name-of-my-component
    app.kubernetes.io/instance:  my-deployment-name-of-my-component
  type: ClusterIP

The previous StatefulSet example would also match (except for the postfix part, obviously).

Creating Deployment or StatefulSet

Deployments and StatefulSets should use the apps/v1 API version (which appeared in Kubernetes v1.9). As seen in the service part, the following parts are mandatory:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: {{ include "common.fullname" . }}
  namespace: {{ include "common.namespace" . }}
  labels: {{- include "common.labels" . | nindent 4 }}
spec:
  selector:
    matchLabels: {{- include "common.matchLabels" . | nindent 6 }}
  # serviceName is only needed for StatefulSet
  # put the postfix part only if you have added a postfix on the service name
  serviceName: {{ include "common.servicename" . }}-{{ .Values.service.postfix }}
  <...>
  template:
    metadata:
      labels: {{- include "common.labels" . | nindent 8 }}
      annotations: {{- include "common.tplValue" (dict "value" .Values.podAnnotations "context" $) | nindent 8 }}
      name: {{ include "common.name" . }}
    spec:
      <...>
      containers:
        - name: {{ include "common.name" . }}

ONAP Application Configuration

Dependency Management

These Helm charts describe the desired state of an ONAP deployment and instruct the Kubernetes container manager as to how to maintain the deployment in this state. These dependencies dictate the order in which the containers are started for the first time such that the dependencies are always met without arbitrary sleep times between container startups. For example, the SDC back-end container requires the Elastic-Search, Cassandra and Kibana containers within SDC to be ready and is also dependent on DMaaP (or the message-router) to be ready - where ready implies the built-in "readiness" probes succeeded - before becoming fully operational. When an initial deployment of ONAP is requested the current state of the system is NULL so ONAP is deployed by the Kubernetes manager as a set of Docker containers on one or more predetermined hosts. The hosts could be physical machines or virtual machines. When deploying on virtual machines the resulting system will be very similar to "Heat" based deployments, i.e. Docker containers running within a set of VMs, the primary difference being that the allocation of containers to VMs is done dynamically with OOM and statically with "Heat". The example SO deployment descriptor file below shows SO's dependency on its mariadb database component:

SO deployment specification excerpt:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "common.fullname" . }}
  namespace: {{ include "common.namespace" . }}
  labels: {{- include "common.labels" . | nindent 4 }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels: {{- include "common.matchLabels" . | nindent 6 }}
  template:
    metadata:
      labels:
        app: {{ include "common.name" . }}
        release: {{ .Release.Name }}
    spec:
      initContainers:
      - command:
        - /app/ready.py
        args:
        - --container-name
        - so-mariadb
        env:
...

Kubernetes Container Orchestration

The ONAP components are managed by the Kubernetes container management system which maintains the desired state of the container system as described by one or more deployment descriptors - similar in concept to OpenStack HEAT Orchestration Templates. The following sections describe the fundamental objects managed by Kubernetes, the network these components use to communicate with each other and other entities outside of ONAP and the templates that describe the configuration and desired state of the ONAP components.

Name Spaces

Within the namespaces are Kubernetes services that provide external connectivity to pods that host Docker containers.

ONAP Components to Kubernetes Object Relationships

Kubernetes deployments consist of multiple objects:

  • nodes - a worker machine - either physical or virtual - that hosts multiple containers managed by Kubernetes.

  • services - an abstraction of a logical set of pods that provide a micro-service.

  • pods - one or more (but typically one) container(s) that provide specific application functionality.

  • persistent volumes - One or more permanent volumes need to be established to hold non-ephemeral configuration and state data.

The relationship between these objects is shown in the following figure:

_images/kubernetes_objects.png

OOM uses these Kubernetes objects as described in the following sections.

Nodes

OOM works with both physical and virtual worker machines.

  • Virtual Machine Deployments - If ONAP is to be deployed onto a set of virtual machines, the creation of the VMs is outside of the scope of OOM and could be done in many ways, such as

    • manually, for example by a user using the OpenStack Horizon dashboard or AWS EC2, or

    • automatically, for example with the use of a OpenStack Heat Orchestration Template which builds an ONAP stack, Azure ARM template, AWS CloudFormation Template, or

    • orchestrated, for example with Cloudify creating the VMs from a TOSCA template and controlling their life cycle for the life of the ONAP deployment.

  • Physical Machine Deployments - If ONAP is to be deployed onto physical machines there are several options but the recommendation is to use Rancher along with Helm to associate hosts with a Kubernetes cluster.

Pods

A group of containers with shared storage and networking can be grouped together into a Kubernetes pod. All of the containers within a pod are co-located and co-scheduled so they operate as a single unit. Within ONAP Amsterdam release, pods are mapped one-to-one to docker containers although this may change in the future. As explained in the Services section below the use of Pods within each ONAP component is abstracted from other ONAP components.

Services

OOM uses the Kubernetes service abstraction to provide a consistent access point for each of the ONAP components independent of the pod or container architecture of that component. For example, the SDNC component may introduce OpenDaylight clustering as some point and change the number of pods in this component to three or more but this change will be isolated from the other ONAP components by the service abstraction. A service can include a load balancer on its ingress to distribute traffic between the pods and even react to dynamic changes in the number of pods if they are part of a replica set.

Persistent Volumes

To enable ONAP to be deployed into a wide variety of cloud infrastructures a flexible persistent storage architecture, built on Kubernetes persistent volumes, provides the ability to define the physical storage in a central location and have all ONAP components securely store their data.

When deploying ONAP into a public cloud, available storage services such as AWS Elastic Block Store, Azure File, or GCE Persistent Disk are options. Alternatively, when deploying into a private cloud the storage architecture might consist of Fiber Channel, Gluster FS, or iSCSI. Many other storage options exist; refer to the Kubernetes Storage Class documentation for a full list of the options. The storage architecture may vary from deployment to deployment but in all cases a reliable, redundant storage system must be provided to ONAP with which the state information of all ONAP components will be securely stored. The Storage Class for a given deployment is a single parameter listed in the ONAP values.yaml file and therefore is easily customized. Operation of this storage system is outside the scope of OOM.

A minimal sketch of such a storage block in a values.yaml file is shown below; the key names follow common OOM/Kubernetes persistence conventions and are illustrative rather than copied from a specific chart:
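
# Illustrative only - key names are assumptions, check the actual chart.
persistence:
  enabled: true
  storageClass: glusterfs     # the Storage Class selected for this deployment
  accessMode: ReadWriteOnce
  size: 2Gi
  mountPath: /dockerdata      # matches the global mount path shown earlier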

Once the storage class is selected and the physical storage is provided, the ONAP deployment step creates a pool of persistent volumes within the given physical storage that is used by all of the ONAP components. ONAP components simply make a claim on these persistent volumes (PV), with a persistent volume claim (PVC), to gain access to their storage.

The following figure illustrates the relationships between the persistent volume claims, the persistent volumes, the storage class, and the physical storage.

digraph PV {
   label = "Persistence Volume Claim to Physical Storage Mapping"
   {
      node [shape=cylinder]
      D0 [label="Drive0"]
      D1 [label="Drive1"]
      Dx [label="Drivex"]
   }
   {
      node [shape=Mrecord label="StorageClass:ceph"]
      sc
   }
   {
      node [shape=point]
      p0 p1 p2
      p3 p4 p5
   }
   subgraph clusterSDC {
      label="SDC"
      PVC0
      PVC1
   }
   subgraph clusterSDNC {
      label="SDNC"
      PVC2
   }
   subgraph clusterSO {
      label="SO"
      PVCn
   }
   PV0 -> sc
   PV1 -> sc
   PV2 -> sc
   PVn -> sc

   sc -> {D0 D1 Dx}
   PVC0 -> PV0
   PVC1 -> PV1
   PVC2 -> PV2
   PVCn -> PVn

   # force all of these nodes to the same line in the given order
   subgraph {
      rank = same; PV0;PV1;PV2;PVn;p0;p1;p2
      PV0->PV1->PV2->p0->p1->p2->PVn [style=invis]
   }

   subgraph {
      rank = same; D0;D1;Dx;p3;p4;p5
      D0->D1->p3->p4->p5->Dx [style=invis]
   }

}

In order for an ONAP component to use a persistent volume it must make a claim against a specific persistent volume defined in the ONAP common charts. Note that there is a one-to-one relationship between a PVC and PV.

A minimal sketch of the kind of PVC a component chart defines is shown below; the name, storage class and size are illustrative placeholders rather than an excerpt from an actual OOM chart:
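
# Illustrative PVC - name, storage class and size are placeholders.
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: my-component-data
  namespace: onap
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: glusterfs   # the Storage Class chosen for the deployment
  resources:
    requests:
      storage: 2Gi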

OOM Networking with Kubernetes

  • DNS

  • Ports - Flattening the containers also exposes port conflicts between the containers which need to be resolved.

Node Ports

Pod Placement Rules

OOM will use the rich set of Kubernetes node and pod affinity / anti-affinity rules to minimize the chance of a single failure resulting in a loss of ONAP service. Node affinity / anti-affinity is used to guide the Kubernetes orchestrator in the placement of pods on nodes (physical or virtual machines). For example:

  • if a container used Intel DPDK technology the pod may state that it as affinity to an Intel processor based node, or

  • geographical based node labels (such as the Kubernetes standard zone or region labels) may be used to ensure placement of a DCAE complex close to the VNFs generating high volumes of traffic thus minimizing networking cost. Specifically, if nodes were pre-assigned labels East and West, the pod deployment spec to distribute pods to these nodes would be:

nodeSelector:
  failure-domain.beta.kubernetes.io/region: {{ .Values.location }}
  • “location: West” is specified in the values.yaml file used to deploy one DCAE cluster and “location: East” is specified in a second values.yaml file (see OOM Configuration Management for more information about configuration files like the values.yaml file).

Node affinity can also be used to achieve geographic redundancy if pods are assigned to multiple failure domains. For more information refer to Assigning Pods to Nodes.

Note

One could use Pod to Node assignment to totally constrain Kubernetes when doing initial container assignment to replicate the Amsterdam release OpenStack Heat based deployment. Should one wish to do this, each VM would need a unique node name which would be used to specify a node constaint for every component. These assignment could be specified in an environment specific values.yaml file. Constraining Kubernetes in this way is not recommended.

Kubernetes has a comprehensive system called Taints and Tolerations that can be used to force the container orchestrator to repel pods from nodes based on static events (an administrator assigning a taint to a node) or dynamic events (such as a node becoming unreachable or running out of disk space). There are no plans to use taints or tolerations in the ONAP Beijing release.

Pod affinity / anti-affinity is the concept of creating a spatial relationship between pods when the Kubernetes orchestrator does assignment (both initially and in operation) to nodes as explained in Inter-pod affinity and anti-affinity. For example, one might choose to co-locate all of the ONAP SDC containers on a single node as they are not critical runtime components and co-location minimizes overhead. On the other hand, one might choose to ensure that all of the containers in an ODL cluster (SDNC and APPC) are placed on separate nodes such that a node failure has minimal impact to the operation of the cluster. An example of how pod affinity / anti-affinity is used is shown below:

Pod Affinity / Anti-Affinity

apiVersion: v1
kind: Pod
metadata:
  name: with-pod-affinity
spec:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: security
            operator: In
            values:
            - S1
        topologyKey: failure-domain.beta.kubernetes.io/zone
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchExpressions:
            - key: security
              operator: In
              values:
              - S2
          topologyKey: kubernetes.io/hostname
  containers:
  - name: with-pod-affinity
    image: gcr.io/google_containers/pause:2.0

This example contains both podAffinity and podAntiAffinity rules: the first rule is a must (requiredDuringSchedulingIgnoredDuringExecution) while the second will be met pending other considerations (preferredDuringSchedulingIgnoredDuringExecution).

Preemption

Another feature that may assist in achieving a repeatable deployment in the presence of faults that may have reduced the capacity of the cloud is assigning priority to the containers such that mission critical components have the ability to evict less critical components. Kubernetes provides this capability with Pod Priority and Preemption. Prior to having more advanced production grade features available, the ability to at least be able to re-deploy ONAP (or a subset of it) reliably provides a level of confidence that should an outage occur the system can be brought back on-line predictably.

Health Checks

Monitoring of ONAP components is configured in the agents within JSON files and stored in gerrit under the consul-agent-config; here is an example from the AAI model loader (aai-model-loader-health.json):

{
  "service": {
    "name": "A&AI Model Loader",
    "checks": [
      {
        "id": "model-loader-process",
        "name": "Model Loader Presence",
        "script": "/consul/config/scripts/model-loader-script.sh",
        "interval": "15s",
        "timeout": "1s"
      }
    ]
  }
}

Liveness Probes

These liveness probes can simply check that a port is available, that a built-in health check is reporting good health, or that the Consul health check is positive. For example, to monitor the SDNC component, the following liveness probe can be found in the SDNC DB deployment specification:

sdnc db liveness probe

livenessProbe:
  exec:
    command: ["mysqladmin", "ping"]
  initialDelaySeconds: 30
  periodSeconds: 10
  timeoutSeconds: 5

The 'initialDelaySeconds' parameter controls the period of time between the container starting and the liveness probe starting, while 'periodSeconds' and 'timeoutSeconds' control the actual operation of the probe. Note that containers are inherently ephemeral so the healing action destroys failed containers and any state information within them. To avoid a loss of state, a persistent volume should be used to store all data that needs to be persisted over the re-creation of a container. Persistent volumes have been created for the database components of each of the projects and the same technique can be used for all persistent state information.

Environment Files

MSB Integration

The Microservices Bus Project provides facilities to integrate micro-services into ONAP and therefore needs to integrate into OOM - primarily through Consul which is the backend of MSB service discovery. The following is a brief description of how this integration will be done:

A registrator pushes the service endpoint info to MSB service discovery:

  • The needed service endpoint info is put into the Kubernetes yaml file as an annotation, including service name, protocol, version, visual range, LB method, IP, port, etc.

  • OOM deploys/starts/restarts/scales in/scales out/upgrades ONAP components.

  • The registrator watches Kubernetes events.

  • When an ONAP component instance has been started or destroyed by OOM, the registrator gets the notification from Kubernetes.

  • The registrator parses the service endpoint info from the annotation and registers/updates/unregisters it with MSB service discovery.

  • The MSB API Gateway uses the service endpoint info for service routing and load balancing.

Details of the registration service API can be found at Microservice Bus API Documentation.

ONAP Component Registration to MSB

The charts of all ONAP components intending to register against MSB must have an annotation in their service(s) template. An sdc example follows:

apiVersion: v1
kind: Service
metadata:
  labels:
    app: sdc-be
  name: sdc-be
  namespace: "{{ .Values.nsPrefix }}"
  annotations:
    msb.onap.org/service-info: '[
      {
          "serviceName": "sdc",
          "version": "v1",
          "url": "/sdc/v1",
          "protocol": "REST",
          "port": "8080",
          "visualRange":"1"
      },
      {
          "serviceName": "sdc-deprecated",
          "version": "v1",
          "url": "/sdc/v1",
          "protocol": "REST",
          "port": "8080",
          "visualRange":"1",
          "path":"/sdc/v1"
      }
      ]'
...

MSB Integration with OOM

A preliminary view of the OOM-MSB integration is as follows:

_images/MSB-OOM-Diagram.png

A message sequence chart of the registration process:

participant "OOM" as oom
participant "ONAP Component" as onap
participant "Service Discovery" as sd
participant "External API Gateway" as eagw
participant "Router (Internal API Gateway)" as iagw

box "MSB" #LightBlue
  participant sd
  participant eagw
  participant iagw
end box

== Deploy Service ==

oom -> onap: Deploy
oom -> sd:   Register service endpoints
sd -> eagw:  Services exposed to external system
sd -> iagw:  Services for internal use

== Component Life-cycle Management ==

oom -> onap: Start/Stop/Scale/Migrate/Upgrade
oom -> sd:   Update service info
sd -> eagw:  Update service info
sd -> iagw:  Update service info

== Service Health Check ==

sd -> onap: Check the health of service
sd -> eagw: Update service status
sd -> iagw: Update service status

MSB Deployment Instructions

MSB is a Helm-installable ONAP component that is often deployed automatically. To install it individually, enter:

> helm install <repo-name>/msb

Note

TBD: Validate whether the following procedure is still required.

Please note that a Kubernetes authentication token must be set in kubernetes/kube2msb/values.yaml so that the kube2msb registrator has access to watch Kubernetes events and read service annotations via the Kubernetes APIs. The token can be found in the kubectl configuration file ~/.kube/config.
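
As a sketch, the token can be extracted from the kubectl configuration and placed in the kube2msb values file; the key name shown below is indicative only and may differ between OOM versions:

> grep token ~/.kube/config

# kubernetes/kube2msb/values.yaml (key name is indicative only)
kubeMasterAuthToken: <token copied from ~/.kube/config>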

More details can be found here MSB installation.

_images/oomLogoV2-medium.png

OOM Cloud Setup Guide

OOM deploys and manages ONAP on a pre-established Kubernetes cluster - the creation of this cluster is outside of the scope of the OOM project as there are many options including public clouds with pre-established environments. However, this guide includes instructions for how to create and use some of the more popular environments which could be used to host ONAP. If creation of a Kubernetes cluster is required, the life-cycle of this cluster is independent of the life-cycle of the ONAP components themselves. Much like an OpenStack environment, the Kubernetes environment may be used for an extended period of time, possibly spanning multiple ONAP releases.

Note

Inclusion of a cloud technology or provider in this guide does not imply an endorsement.

Software Requirements

The versions of Kubernetes that are supported by OOM are as follows:

OOM Software Requirements

Release      Kubernetes   Helm      kubectl   Docker    Cert-Manager
amsterdam    1.7.x        2.3.x     1.7.x     1.12.x    -
beijing      1.8.10       2.8.2     1.8.10    17.03.x   -
casablanca   1.11.5       2.9.1     1.11.5    17.03.x   -
dublin       1.13.5       2.12.3    1.13.5    18.09.5   -
el alto      1.15.2       2.14.2    1.15.2    18.09.x   -
frankfurt    1.15.9       2.16.6    1.15.11   18.09.x   -
guilin       1.15.11      2.16.10   1.15.11   18.09.x   -
honolulu     1.19.9       3.5.2     1.19.9    19.03.x   1.2.0
Istanbul     1.19.11      3.6.3     1.19.11   19.03.x   1.5.4

Note

The Guilin release also supports Kubernetes versions up to 1.19.x and should work with Helm versions up to 3.3.x, but this has not been thoroughly tested.

Minimum Hardware Configuration

The hardware requirements are provided below. Note that this is for a full ONAP deployment (all components). Customizing ONAP to deploy only components that are needed will drastically reduce the requirements.

OOM Hardware Requirements

RAM      HD       vCores   Ports
224GB    160GB    112      0.0.0.0/0 (all open)

Note

Kubernetes supports a maximum of 110 pods per node by default. This limit is configurable via the --max-pods=n kubelet flag (the “additional kubelet flags” box in the Kubernetes template window described in the ‘ONAP Development - 110 pod limit Wiki’), but it does not need to be modified. The use of many small nodes is preferred over a few larger nodes (for example, 14 nodes with 16GB RAM and 8 vCores each). Subsets of ONAP may still be deployed on a single node.
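
For reference, with an RKE-based cluster the kubelet flag could be passed through the services section of cluster.yml as shown below. This is only an illustration; as noted, the default limit normally does not need to be changed:

services:
  kubelet:
    extra_args:
      # raise the default 110 pods-per-node limit (illustrative value)
      max-pods: "200"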

Cloud Installation

OOM can be deployed on a private set of physical hosts or VMs (or even a combination of the two). The following guide describes the recommended method to set up a Kubernetes cluster: ONAP on HA Kubernetes Cluster.

There are alternative deployment methods described on the Cloud Native Deployment Wiki.

ONAP Operations Manager Release Notes

Previous Release Notes

Abstract

This document provides the release notes for the Istanbul release.

Summary

Release Data

Project                 OOM
Docker images           N/A
Release designation     Istanbul
Release date

New features

Bug fixes

A list of issues resolved in this release can be found here: https://jira.onap.org/projects/OOM/versions/11074

Known Issues

Deliverables

Software Deliverables

OOM provides Helm charts that need to be “compiled” into a Helm package; see step 6 in the quick start guide.

Documentation Deliverables

Known Limitations, Issues and Workarounds

Known Vulnerabilities

Workarounds

  • OOM-2754 Because of the updateEndpoint property added to the cmpv2issuer CRD, it is impossible to upgrade the platform component from the Honolulu to the Istanbul release without manual steps. The following actions should be performed:

    1. Update the CRD definition:

      > kubectl -n onap apply -f oom/kubernetes/platform/components/cmpv2-cert-provider/crds/cmpv2issuer.yaml
      
    2. Upgrade the component:

      > helm -n onap upgrade dev-platform oom/kubernetes/platform
      
    3. Make sure that the cmpv2issuer contains the correct value for spec.updateEndpoint. The value should be v1/certificate-update. If it is not, edit the resource:

      > kubectl -n onap edit cmpv2issuer cmpv2-issuer-onap
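
      The current value can be checked with standard kubectl syntax before deciding whether the edit is needed:

      > kubectl -n onap get cmpv2issuer cmpv2-issuer-onap -o jsonpath='{.spec.updateEndpoint}'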
      

Security Notes

Fixed Security Issues

References

For more information on the ONAP Istanbul release, please see:

  1. ONAP Home Page

  2. ONAP Documentation

  3. ONAP Release Downloads

  4. ONAP Wiki Page

_images/oomLogoV2-medium.png

ONAP on HA Kubernetes Cluster

This guide provides instructions on how to set up a Highly-Available Kubernetes cluster. For this, we are hosting our cluster on OpenStack VMs and using the Rancher Kubernetes Engine (RKE) to deploy and manage our Kubernetes cluster.

The result at the end of this tutorial will be:

  1. Creation of a Key Pair to use with Open Stack and RKE

  2. Creation of OpenStack VMs to host Kubernetes Control Plane

  3. Creation of OpenStack VMs to host Kubernetes Workers

  4. Installation and configuration of RKE to set up an HA Kubernetes cluster

  5. Installation and configuration of kubectl

  6. Installation and configuration of Helm

  7. Creation of an NFS Server to be used by ONAP as shared persistence

There are many ways one can execute the above steps, including automation through the use of HEAT to set up the OpenStack VMs. To better illustrate the steps involved, we have captured the manual creation of such an environment using the ONAP Wind River Open Lab.

Create Key Pair

A Key Pair is required to access the created OpenStack VMs and will be used by RKE to configure the VMs for Kubernetes.

Use an existing key pair, import one, or create a new one to assign.

_images/key_pair_1.png

Note

If you’re creating a new Key Pair, be sure to create a local copy of the Private Key using “Copy Private Key to Clipboard”.

For the purpose of this guide, we will assume a new local key called “onap-key” has been downloaded and is copied into ~/.ssh/, from which it can be referenced.

Example:

> mv onap-key ~/.ssh

> chmod 600 ~/.ssh/onap-key
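
If you prefer the OpenStack CLI over the dashboard, an equivalent key pair creation could look like this (key name as used throughout this guide):

> openstack keypair create onap-key > ~/.ssh/onap-key

> chmod 600 ~/.ssh/onap-key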

Create Network

An internal network is required in order to deploy our VMs that will host Kubernetes.

_images/network_1.png _images/network_2.png _images/network_3.png

Note

It is better to have one network per deployment, and the name of this network should be unique.

Now we need to create a router to attach this network to the external network:

_images/network_4.png
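
The same network and router can be created with the OpenStack CLI; the names, subnet range and external network below are examples only:

> openstack network create onap-network

> openstack subnet create --network onap-network --subnet-range 10.0.0.0/24 onap-subnet

> openstack router create onap-router

> openstack router set --external-gateway <external-network> onap-router

> openstack router add subnet onap-router onap-subnet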

Create Security Group

A specific security group is also required.

_images/sg_1.png

Then click on “Manage Rules” for the newly created security group, click on “Add Rule”, and create the following rule:

_images/sg_2.png

Note

The security shown here is clearly too permissive; a tighter security group will be proposed in a future version.
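
For reference, a comparable (and equally permissive) security group could be created from the CLI as follows; tighten the rules for anything beyond a lab deployment:

> openstack security group create onap-sg

> openstack security group rule create --ingress --protocol tcp --dst-port 1:65535 --remote-ip 0.0.0.0/0 onap-sg

> openstack security group rule create --ingress --protocol udp --dst-port 1:65535 --remote-ip 0.0.0.0/0 onap-sg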

Create Kubernetes Control Plane VMs

The following instructions describe how to create 3 OpenStack VMs to host the Highly-Available Kubernetes Control Plane. ONAP workloads will not be scheduled on these Control Plane nodes.

Launch new VM instances

_images/control_plane_1.png

Select Ubuntu 18.04 as base image

Select “No” for “Create New Volume”

_images/control_plane_2.png

Select Flavor

The recommended flavor has at least 4 vCPUs and 8GB of RAM.

_images/control_plane_3.png

Networking

Use the created network:

_images/control_plane_4.png

Security Groups

Use the created security group:

_images/control_plane_5.png

Key Pair

Assign the key pair that was created/selected previously (e.g. onap_key).

_images/control_plane_6.png

Apply customization script for Control Plane VMs

Click openstack-k8s-controlnode.sh to download the script.

#!/bin/sh

DOCKER_VERSION=18.09.5

apt-get update

curl https://releases.rancher.com/install-docker/$DOCKER_VERSION.sh | sh
mkdir -p /etc/systemd/system/docker.service.d/
cat > /etc/systemd/system/docker.service.d/docker.conf << EOF
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H fd:// --insecure-registry=nexus3.onap.org:10001
EOF

sudo usermod -aG docker ubuntu

systemctl daemon-reload
systemctl restart docker
apt-mark hold docker-ce

IP_ADDR=$(ip address |grep ens|grep inet|awk '{print $2}'| awk -F / '{print $1}')
HOST_NAME=$(hostname)

echo "$IP_ADDR $HOST_NAME" >> /etc/hosts

docker login -u docker -p docker nexus3.onap.org:10001

sudo apt-get install make -y

#nfs server
sudo apt-get install nfs-kernel-server -y
sudo mkdir -p /dockerdata-nfs
sudo chown nobody:nogroup /dockerdata-nfs/


exit 0

This customization script will:

  • update Ubuntu

  • install and configure Docker (using the ONAP Nexus registry)

  • install make and an NFS kernel server

_images/control_plane_7.png

Launch Instance

_images/control_plane_8.png
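
The same control plane VMs can also be launched from the CLI, passing the customization script as user data; the flavor placeholder and resource names below refer to what was created earlier in this guide:

> openstack server create --image "Ubuntu 18.04" --flavor <4vcpu-8gb-flavor> \
    --key-name onap_key --network onap-network --security-group onap-sg \
    --user-data ./openstack-k8s-controlnode.sh onap-control-1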

Create Kubernetes Worker VMs

The following instructions describe how to create OpenStack VMs to host the Highly-Available Kubernetes Workers. ONAP workloads will only be scheduled on these nodes.

Launch new VM instances

The number and size of Worker VMs is dependent on the size of the ONAP deployment. By default, all ONAP applications are deployed. It’s possible to customize the deployment and enable a subset of the ONAP applications. For the purpose of this guide, however, we will deploy 12 Kubernetes Workers that have been sized to handle the entire ONAP application workload.

_images/worker_1.png

Select Ubuntu 18.04 as base image

Select “No” on “Create New Volume”

_images/worker_2.png

Select Flavor

The size of the Kubernetes hosts depends on the size of the ONAP deployment being installed.

If a small subset of ONAP applications are being deployed (e.g. for testing purposes), then 16GB or 32GB may be sufficient.

_images/worker_3.png

Networking

_images/worker_4.png

Security Group

_images/worker_5.png

Key Pair

Assign the key pair that was created/selected previously (e.g. onap_key).

_images/worker_6.png

Apply customization script for Kubernetes VM(s)

Click openstack-k8s-workernode.sh to download the script.

#!/bin/sh

DOCKER_VERSION=18.09.5

apt-get update

curl https://releases.rancher.com/install-docker/$DOCKER_VERSION.sh | sh
mkdir -p /etc/systemd/system/docker.service.d/
cat > /etc/systemd/system/docker.service.d/docker.conf << EOF
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H fd:// --insecure-registry=nexus3.onap.org:10001
EOF

sudo usermod -aG docker ubuntu

systemctl daemon-reload
systemctl restart docker
apt-mark hold docker-ce

IP_ADDR=$(ip address |grep ens|grep inet|awk '{print $2}'| awk -F / '{print $1}')
HOST_NAME=$(hostname)

echo "$IP_ADDR $HOST_NAME" >> /etc/hosts

docker login -u docker -p docker nexus3.onap.org:10001

sudo apt-get install make -y

# install nfs
sudo apt-get install nfs-common -y


exit 0

This customization script will:

  • update Ubuntu

  • install and configure Docker (using the ONAP Nexus registry)

  • install nfs-common

Launch Instance

_images/worker_7.png
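
Similarly, the 12 worker VMs can be created in a loop from the CLI (the flavor placeholder and resource names are examples):

> for i in $(seq 1 12); do \
    openstack server create --image "Ubuntu 18.04" --flavor <worker-flavor> \
      --key-name onap_key --network onap-network --security-group onap-sg \
      --user-data ./openstack-k8s-workernode.sh onap-k8s-$i; \
  done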

Assign Floating IP addresses

Assign Floating IPs to all Control Plane and Worker VMs. These addresses provide external access to the VMs and will be used by RKE to configure Kubernetes onto the VMs.

Repeat the following for each VM previously created:

_images/floating_1.png

Resulting floating IP assignments in this example.

_images/floating_2.png
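
The floating IP assignment can also be scripted; <external-network> is the external/provider network of your OpenStack cloud:

> openstack floating ip create <external-network>

> openstack server add floating ip onap-control-1 <allocated-floating-ip>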

Configure Rancher Kubernetes Engine (RKE)

Install RKE

Download and install RKE on a VM, desktop or laptop. Binaries can be found here for Linux and Mac: https://github.com/rancher/rke/releases/tag/v1.0.6

Note

There are several ways to install RKE. The remainder of this documentation assumes that the rke command is available. If you don’t know how to install RKE, you may follow the steps below (a consolidated sketch follows this list):

  • chmod +x ./rke_linux-amd64

  • sudo mv ./rke_linux-amd64 /usr/local/bin/rke
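
A consolidated sketch of those steps; the download URL follows the release page linked above, so verify it against the actual release assets:

> wget https://github.com/rancher/rke/releases/download/v1.0.6/rke_linux-amd64

> chmod +x ./rke_linux-amd64

> sudo mv ./rke_linux-amd64 /usr/local/bin/rke

> rke --version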

RKE requires a cluster.yml as input. An example file is shown below that describes a Kubernetes cluster that will be mapped onto the OpenStack VMs created earlier in this guide.

Click cluster.yml to download the configuration file.

# An example of an HA Kubernetes cluster for ONAP
nodes:
- address: 10.12.6.85
  port: "22"
  internal_address: 10.0.0.8
  role:
  - controlplane
  - etcd
  hostname_override: "onap-control-1"
  user: ubuntu
  ssh_key_path: "~/.ssh/onap-key"
- address: 10.12.6.90
  port: "22"
  internal_address: 10.0.0.11
  role:
  - controlplane
  - etcd
  hostname_override: "onap-control-2"
  user: ubuntu
  ssh_key_path: "~/.ssh/onap-key"
- address: 10.12.6.89
  port: "22"
  internal_address: 10.0.0.12
  role:
  - controlplane
  - etcd
  hostname_override: "onap-control-3"
  user: ubuntu
  ssh_key_path: "~/.ssh/onap-key"
- address: 10.12.5.165
  port: "22"
  internal_address: 10.0.0.14
  role:
  - worker
  hostname_override: "onap-k8s-1"
  user: ubuntu
  ssh_key_path: "~/.ssh/onap-key"
- address: 10.12.6.238
  port: "22"
  internal_address: 10.0.0.26
  role:
  - worker
  hostname_override: "onap-k8s-2"
  user: ubuntu
  ssh_key_path: "~/.ssh/onap-key"
- address: 10.12.6.126
  port: "22"
  internal_address: 10.0.0.5
  role:
  - worker
  hostname_override: "onap-k8s-3"
  user: ubuntu
  ssh_key_path: "~/.ssh/onap-key"
- address: 10.12.5.11
  port: "22"
  internal_address: 10.0.0.6
  role:
  - worker
  hostname_override: "onap-k8s-4"
  user: ubuntu
  ssh_key_path: "~/.ssh/onap-key"
- address: 10.12.6.244
  port: "22"
  internal_address: 10.0.0.9
  role:
  - worker
  hostname_override: "onap-k8s-5"
  user: ubuntu
  ssh_key_path: "~/.ssh/onap-key"
- address: 10.12.6.249
  port: "22"
  internal_address: 10.0.0.17
  role:
  - worker
  hostname_override: "onap-k8s-6"
  user: ubuntu
  ssh_key_path: "~/.ssh/onap-key"
- address: 10.12.5.191
  port: "22"
  internal_address: 10.0.0.20
  role:
  - worker
  hostname_override: "onap-k8s-7"
  user: ubuntu
  ssh_key_path: "~/.ssh/onap-key"
- address: 10.12.6.111
  port: "22"
  internal_address: 10.0.0.10
  role:
  - worker
  hostname_override: "onap-k8s-8"
  user: ubuntu
  ssh_key_path: "~/.ssh/onap-key"
- address: 10.12.6.195
  port: "22"
  internal_address: 10.0.0.4
  role:
  - worker
  hostname_override: "onap-k8s-9"
  user: ubuntu
  ssh_key_path: "~/.ssh/onap-key"
- address: 10.12.5.160
  port: "22"
  internal_address: 10.0.0.16
  role:
  - worker
  hostname_override: "onap-k8s-10"
  user: ubuntu
  ssh_key_path: "~/.ssh/onap-key"
- address: 10.12.6.74
  port: "22"
  internal_address: 10.0.0.18
  role:
  - worker
  hostname_override: "onap-k8s-11"
  user: ubuntu
  ssh_key_path: "~/.ssh/onap-key"
- address: 10.12.6.82
  port: "22"
  internal_address: 10.0.0.7
  role:
  - worker
  hostname_override: "onap-k8s-12"
  user: ubuntu
  ssh_key_path: "~/.ssh/onap-key"
services:
  kube-api:
    service_cluster_ip_range: 10.43.0.0/16
    pod_security_policy: false
    always_pull_images: false
  kube-controller:
    cluster_cidr: 10.42.0.0/16
    service_cluster_ip_range: 10.43.0.0/16
  kubelet:
    cluster_domain: cluster.local
    cluster_dns_server: 10.43.0.10
    fail_swap_on: false
network:
  plugin: canal
authentication:
  strategy: x509
ssh_key_path: "~/.ssh/onap-key"
ssh_agent_auth: false
authorization:
  mode: rbac
ignore_docker_version: false
kubernetes_version: "v1.15.11-rancher1-2"
private_registries:
- url: nexus3.onap.org:10001
  user: docker
  password: docker
  is_default: true
cluster_name: "onap"
restore:
  restore: false
  snapshot_name: ""

Prepare cluster.yml

Before this configuration file can be used, the external address and the internal_address must be set for each control plane and worker node in this file.

Run RKE

From within the same directory as the cluster.yml file, simply execute:

> rke up

The output will look something like:

INFO[0000] Initiating Kubernetes cluster
INFO[0000] [certificates] Generating admin certificates and kubeconfig
INFO[0000] Successfully Deployed state file at [./cluster.rkestate]
INFO[0000] Building Kubernetes cluster
INFO[0000] [dialer] Setup tunnel for host [10.12.6.82]
INFO[0000] [dialer] Setup tunnel for host [10.12.6.249]
INFO[0000] [dialer] Setup tunnel for host [10.12.6.74]
INFO[0000] [dialer] Setup tunnel for host [10.12.6.85]
INFO[0000] [dialer] Setup tunnel for host [10.12.6.238]
INFO[0000] [dialer] Setup tunnel for host [10.12.6.89]
INFO[0000] [dialer] Setup tunnel for host [10.12.5.11]
INFO[0000] [dialer] Setup tunnel for host [10.12.6.90]
INFO[0000] [dialer] Setup tunnel for host [10.12.6.244]
INFO[0000] [dialer] Setup tunnel for host [10.12.5.165]
INFO[0000] [dialer] Setup tunnel for host [10.12.6.126]
INFO[0000] [dialer] Setup tunnel for host [10.12.6.111]
INFO[0000] [dialer] Setup tunnel for host [10.12.5.160]
INFO[0000] [dialer] Setup tunnel for host [10.12.5.191]
INFO[0000] [dialer] Setup tunnel for host [10.12.6.195]
INFO[0002] [network] Deploying port listener containers
INFO[0002] [network] Pulling image [nexus3.onap.org:10001/rancher/rke-tools:v0.1.27] on host [10.12.6.85]
INFO[0002] [network] Pulling image [nexus3.onap.org:10001/rancher/rke-tools:v0.1.27] on host [10.12.6.89]
INFO[0002] [network] Pulling image [nexus3.onap.org:10001/rancher/rke-tools:v0.1.27] on host [10.12.6.90]
INFO[0011] [network] Successfully pulled image [nexus3.onap.org:10001/rancher/rke-tools:v0.1.27] on host [10.12.6.89]
. . . .
INFO[0309] [addons] Setting up Metrics Server
INFO[0309] [addons] Saving ConfigMap for addon rke-metrics-addon to Kubernetes
INFO[0309] [addons] Successfully saved ConfigMap for addon rke-metrics-addon to Kubernetes
INFO[0309] [addons] Executing deploy job rke-metrics-addon
INFO[0315] [addons] Metrics Server deployed successfully
INFO[0315] [ingress] Setting up nginx ingress controller
INFO[0315] [addons] Saving ConfigMap for addon rke-ingress-controller to Kubernetes
INFO[0316] [addons] Successfully saved ConfigMap for addon rke-ingress-controller to Kubernetes
INFO[0316] [addons] Executing deploy job rke-ingress-controller
INFO[0322] [ingress] ingress controller nginx deployed successfully
INFO[0322] [addons] Setting up user addons
INFO[0322] [addons] no user addons defined
INFO[0322] Finished building Kubernetes cluster successfully

Install Kubectl

Download and install kubectl. Binaries can be found here for Linux and Mac:

https://storage.googleapis.com/kubernetes-release/release/v1.15.11/bin/linux/amd64/kubectl https://storage.googleapis.com/kubernetes-release/release/v1.15.11/bin/darwin/amd64/kubectl

You only need to install kubectl on the machine from which you will run Kubernetes commands. This can be any machine of the Kubernetes cluster or a machine that has IP access to the APIs. Usually, we use the first controller, as it also has access to internal Kubernetes services, which can be convenient.
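
For example, on a Linux machine the binary linked above can be installed as follows:

> curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.15.11/bin/linux/amd64/kubectl

> chmod +x ./kubectl

> sudo mv ./kubectl /usr/local/bin/kubectl

> kubectl version --client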

Validate deployment

> mkdir -p ~/.kube

> cp kube_config_cluster.yml ~/.kube/config.onap

> export KUBECONFIG=~/.kube/config.onap

> kubectl config use-context onap

> kubectl get nodes -o=wide
NAME             STATUS   ROLES               AGE     VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE           KERNEL-VERSION      CONTAINER-RUNTIME
onap-control-1   Ready    controlplane,etcd   3h53m   v1.15.2   10.0.0.8      <none>        Ubuntu 18.04 LTS   4.15.0-22-generic   docker://18.9.5
onap-control-2   Ready    controlplane,etcd   3h53m   v1.15.2   10.0.0.11     <none>        Ubuntu 18.04 LTS   4.15.0-22-generic   docker://18.9.5
onap-control-3   Ready    controlplane,etcd   3h53m   v1.15.2   10.0.0.12     <none>        Ubuntu 18.04 LTS   4.15.0-22-generic   docker://18.9.5
onap-k8s-1       Ready    worker              3h53m   v1.15.2   10.0.0.14     <none>        Ubuntu 18.04 LTS   4.15.0-22-generic   docker://18.9.5
onap-k8s-10      Ready    worker              3h53m   v1.15.2   10.0.0.16     <none>        Ubuntu 18.04 LTS   4.15.0-22-generic   docker://18.9.5
onap-k8s-11      Ready    worker              3h53m   v1.15.2   10.0.0.18     <none>        Ubuntu 18.04 LTS   4.15.0-22-generic   docker://18.9.5
onap-k8s-12      Ready    worker              3h53m   v1.15.2   10.0.0.7      <none>        Ubuntu 18.04 LTS   4.15.0-22-generic   docker://18.9.5
onap-k8s-2       Ready    worker              3h53m   v1.15.2   10.0.0.26     <none>        Ubuntu 18.04 LTS   4.15.0-22-generic   docker://18.9.5
onap-k8s-3       Ready    worker              3h53m   v1.15.2   10.0.0.5      <none>        Ubuntu 18.04 LTS   4.15.0-22-generic   docker://18.9.5
onap-k8s-4       Ready    worker              3h53m   v1.15.2   10.0.0.6      <none>        Ubuntu 18.04 LTS   4.15.0-22-generic   docker://18.9.5
onap-k8s-5       Ready    worker              3h53m   v1.15.2   10.0.0.9      <none>        Ubuntu 18.04 LTS   4.15.0-22-generic   docker://18.9.5
onap-k8s-6       Ready    worker              3h53m   v1.15.2   10.0.0.17     <none>        Ubuntu 18.04 LTS   4.15.0-22-generic   docker://18.9.5
onap-k8s-7       Ready    worker              3h53m   v1.15.2   10.0.0.20     <none>        Ubuntu 18.04 LTS   4.15.0-22-generic   docker://18.9.5
onap-k8s-8       Ready    worker              3h53m   v1.15.2   10.0.0.10     <none>        Ubuntu 18.04 LTS   4.15.0-22-generic   docker://18.9.5
onap-k8s-9       Ready    worker              3h53m   v1.15.2   10.0.0.4      <none>        Ubuntu 18.04 LTS   4.15.0-22-generic   docker://18.9.5

Install Helm

Example Helm client install on Linux:

> wget https://get.helm.sh/helm-v2.16.6-linux-amd64.tar.gz

> tar -zxvf helm-v2.16.6-linux-amd64.tar.gz

> sudo mv linux-amd64/helm /usr/local/bin/helm

Initialize Kubernetes Cluster for use by Helm

> kubectl -n kube-system create serviceaccount tiller

> kubectl create clusterrolebinding tiller --clusterrole=cluster-admin --serviceaccount=kube-system:tiller

> helm init --service-account tiller

> kubectl -n kube-system  rollout status deploy/tiller-deploy

Setting up an NFS share for Multinode Kubernetes Clusters

Deploying applications to a Kubernetes cluster requires Kubernetes nodes to share a common, distributed filesystem. In this tutorial, we will set up an NFS Master and configure all Worker nodes of the Kubernetes cluster to play the role of NFS slaves.

It is recommended that a separate VM, outside of the Kubernetes cluster, be used. This is to ensure that the NFS Master does not compete for resources with the Kubernetes Control Plane or Worker Nodes.

Launch new NFS Server VM instance

_images/nfs_server_1.png

Select Ubuntu 18.04 as base image

Select “No” on “Create New Volume”

_images/nfs_server_2.png

Select Flavor

_images/nfs_server_3.png

Networking

_images/nfs_server_4.png

Security Group

_images/nfs_server_5.png

Key Pair

Assign the key pair that was created/selected previously (e.g. onap_key).

_images/nfs_server_6.png

Apply customization script for NFS Server VM

Click openstack-nfs-server.sh to download the script.

#!/bin/sh

apt-get update

IP_ADDR=$(ip address |grep ens|grep inet|awk '{print $2}'| awk -F / '{print $1}')
HOST_NAME=$(hostname)

echo "$IP_ADDR $HOST_NAME" >> /etc/hosts

sudo apt-get install make -y

# nfs server
sudo apt-get install nfs-kernel-server -y

sudo mkdir -p /nfs_share
sudo chown nobody:nogroup /nfs_share/

exit 0

This customization script will:

  • update ubuntu

  • install nfs server

Launch Instance

_images/nfs_server_7.png

Assign Floating IP addresses

_images/nfs_server_8.png

Resulting floating IP assignments in this example.

_images/nfs_server_9.png

To properly set up an NFS share on Master and Slave nodes, the user can run the scripts below.

Click master_nfs_node.sh to download the script.

#!/bin/sh

usage () {
  echo "Usage:"
  echo "   ./$(basename $0) node1_ip node2_ip ... nodeN_ip"
  exit 1
}

if [ "$#" -lt 1 ]; then
  echo "Missing NFS slave nodes"
  usage
fi

#Install NFS kernel
sudo apt-get update
sudo apt-get install -y nfs-kernel-server

#Create /dockerdata-nfs and set permissions
sudo mkdir -p /dockerdata-nfs
sudo chmod 777 -R /dockerdata-nfs
sudo chown nobody:nogroup /dockerdata-nfs/

#Update the /etc/exports
NFS_EXP=""
for i in $@; do
  NFS_EXP="${NFS_EXP}$i(rw,sync,no_root_squash,no_subtree_check) "
done
echo "/dockerdata-nfs "$NFS_EXP | sudo tee -a /etc/exports

#Restart the NFS service
sudo exportfs -a
sudo systemctl restart nfs-kernel-server

Click slave_nfs_node.sh to download the script.

#!/bin/sh

usage () {
  echo "Usage:"
  echo "   ./$(basename $0) nfs_master_ip"
  exit 1
}

if [ "$#" -ne 1 ]; then
  echo "Missing NFS mater node"
  usage
fi

MASTER_IP=$1

#Install NFS common
sudo apt-get update
sudo apt-get install -y nfs-common

#Create NFS directory
sudo mkdir -p /dockerdata-nfs

#Mount the remote NFS directory to the local one
sudo mount $MASTER_IP:/dockerdata-nfs /dockerdata-nfs/
echo "$MASTER_IP:/dockerdata-nfs /dockerdata-nfs nfs auto,nofail,noatime,nolock,intr,tcp,actimeo=1800 0 0" | sudo tee -a /etc/fstab

The master_nfs_node.sh script runs on the NFS Master node and needs the list of NFS Slave nodes as input, e.g.:

> sudo ./master_nfs_node.sh node1_ip node2_ip ... nodeN_ip

The slave_nfs_node.sh script runs on each NFS Slave node and needs the IP of the NFS Master node as input, e.g.:

> sudo ./slave_nfs_node.sh master_node_ip
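
To verify the share from a slave node (showmount is provided by the nfs-common package installed by the script):

> showmount -e <nfs_master_ip>

> df -h /dockerdata-nfs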

ONAP Deployment via OOM

Now that Kubernetes and Helm are installed and configured, you can prepare to deploy ONAP. Follow the instructions in the README.md or look at the official documentation to get started:

_images/oomLogoV2-medium.png

Ingress controller setup on HA Kubernetes Cluster

This guide provides instructions on how to set up the experimental ingress controller feature. For this, we are hosting our cluster on OpenStack VMs and using the Rancher Kubernetes Engine (RKE) to deploy and manage our Kubernetes cluster and ingress controller.

The result at the end of this tutorial will be:

  1. Customization of the cluster.yml file for ingress controller support

  2. Installation and configuration of a test DNS server for ingress host resolution on testing machines

  3. Installation and configuration of MetalLB (Metal Load Balancer), required for exposing the ingress service

  4. Installation and configuration of the NGINX ingress controller

  5. Additional information on how to deploy ONAP with services exposed via the ingress controller

Customize cluster.yml file

Before setting up the cluster for ingress purposes, the DNS cluster IP and the ingress provider should be configured as follows:

---
<...>
restore:
  restore: false
  snapshot_name: ""
ingress:
  provider: none
dns:
  provider: coredns
  upstreamnameservers:
    - <cluster_dns_ip>:31555

Where <cluster_dns_ip> should be set to the same IP as the control plane node.

For external load balancer purposes, at least one of the worker nodes should be configured with an external IP address accessible outside the cluster. This can be done using the following example node configuration:

---
<...>
- address: <external_ip>
  internal_address: <internal_ip>
  port: "22"
  role:
    - worker
  hostname_override: "onap-worker-0"
  user: ubuntu
  ssh_key_path: "~/.ssh/id_rsa"
  <...>

Where <external_ip> is the external worker node IP address, and <internal_ip> is the internal node IP address, if required.

DNS server configuration and installation

A DNS server deployed on the Kubernetes cluster makes it easy to use services exposed through the ingress controller because it resolves all subdomains related to the ONAP cluster to the load balancer IP. Testing an ONAP cluster otherwise requires a lot of entries in /etc/hosts on the target machines, which is problematic and error prone. The better way is to create a central DNS server with entries for all virtual hosts pointing to simpledemo.onap.org, and to add this custom DNS server as a target DNS server for the testing machines and/or as an external DNS for the Kubernetes cluster.

The DNS server has an automatic installation and configuration script, so installation is quite easy:

> cd kubernetes/contrib/dns-server-for-vhost-ingress-testing

> ./deploy_dns.sh

After the DNS server is deployed, you need to set up a DNS entry on the target testing machine. Because the DNS server listens on a non-standard port, the configuration requires iptables rules on the target machine. Please follow the configuration proposed by the deploy script. The exact output depends on the IP addresses in use and looks like the example below:

DNS server already deployed:
1. You can add the DNS server to the target machine using following commands:
  sudo iptables -t nat -A OUTPUT -p tcp -d 192.168.211.211 --dport 53 -j DNAT --to-destination 10.10.13.14:31555
  sudo iptables -t nat -A OUTPUT -p udp -d 192.168.211.211 --dport 53 -j DNAT --to-destination 10.10.13.14:31555
  sudo sysctl -w net.ipv4.conf.all.route_localnet=1
  sudo sysctl -w net.ipv4.ip_forward=1
2. Update /etc/resolv.conf file with nameserver 192.168.211.211 entry on your target machine

MetalLB Load Balancer installation and configuration

By default, a pure Kubernetes cluster requires an external load balancer if we want to expose an external port using LoadBalancer settings. For this purpose MetalLB can be used. Before installing MetalLB, you need to ensure that at least one worker node has an assigned IP accessible outside the cluster.

The MetalLB load balancer can be easily installed using the automatic install script:

> cd kubernetes/contrib/metallb-loadbalancer-inst

> ./install-metallb-on-cluster.sh

Configuration of the NGINX ingress controller

After installing the DNS server and the load balancer, we can install and configure the ingress controller. It can be done using the following commands:

> cd kubernetes/contrib/ingress-nginx-post-inst

> kubectl apply -f nginx_ingress_cluster_config.yaml

> kubectl apply -f nginx_ingress_enable_optional_load_balacer_service.yaml

After deploying the NGINX ingress controller, you can verify that the ingress port is exposed as a load balancer service with an external IP address:

> kubectl get svc -n ingress-nginx
NAME                   TYPE           CLUSTER-IP      EXTERNAL-IP      PORT(S)                      AGE
default-http-backend   ClusterIP      10.10.10.10   <none>           80/TCP                       25h
ingress-nginx          LoadBalancer   10.10.10.11    10.12.13.14   80:31308/TCP,443:30314/TCP   24h

ONAP with ingress exposed services

If you want to deploy ONAP with services exposed through the ingress controller, you can use the full ONAP override file:

> onap/resources/overrides/onap-all-ingress-nginx-vhost.yaml
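
For example, assuming the local Helm repository and the deploy plugin set up in the OOM Quick Start Guide, the override can be passed to the deployment as follows (release and repository names are illustrative):

> helm deploy dev local/onap --namespace onap -f onap/resources/overrides/onap-all-ingress-nginx-vhost.yaml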

Ingress can also be enabled in any ONAP setup override using the following code:

---
<...>
global:
<...>
  ingress:
    enabled: true