../../../_images/oomLogoV2-medium.png

ONAP on HA Kubernetes Cluster

This guide provides instructions on how to set up a Highly-Available Kubernetes Cluster. For this, we are hosting our cluster on OpenStack VMs and using the Rancher Kubernetes Engine (RKE) to deploy and manage our Kubernetes Cluster.

The result at the end of this tutorial will be:

1. Creation of a Key Pair to use with Open Stack and RKE

2. Creation of OpenStack VMs to host Kubernetes Control Plane

3. Creation of OpenStack VMs to host Kubernetes Workers

4. Installation and configuration of RKE to set up an HA Kubernetes Cluster

5. Installation and configuration of kubectl

6. Installation and configuration of helm

7. Creation of an NFS Server to be used by ONAP as shared persistence

There are many ways one can execute the above steps, including automation through the use of HEAT to setup the OpenStack VMs. To better illustrate the steps involved, we have captured the manual creation of such an environment using the ONAP Wind River Open Lab.

Create Key Pair

A Key Pair is required to access the created OpenStack VMs and will be used by RKE to configure the VMs for Kubernetes.

Use an existing key pair, import one, or create a new one to assign.

../../../_images/key_pair_1.png

Note

If you’re creating a new Key Pair, be sure to save a local copy of the Private Key through the use of “Copy Private Key to Clipboard”.

For the purpose of this guide, we will assume a new local key called “onap-key” has been downloaded and is copied into ~/.ssh/, from which it can be referenced.

Example:

> mv onap-key ~/.ssh

> chmod 600 ~/.ssh/onap-key
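A quick sanity check on the key path and permissions can save an RKE debugging session later. The helper below is illustrative, not part of the guide's scripts; note that `stat -c` is GNU/Linux syntax (on macOS use `stat -f %Lp` instead).

```shell
# check_key: verify a private key file exists and is readable only by
# its owner (mode 600), as SSH requires.
check_key() {
  local key="$1"
  [ -f "$key" ] && [ "$(stat -c %a "$key")" = "600" ]
}

# Usage:
# check_key ~/.ssh/onap-key && echo "key ok"
```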

Create Kubernetes Control Plane VMs

The following instructions describe how to create 3 OpenStack VMs to host the Highly-Available Kubernetes Control Plane. ONAP workloads will not be scheduled on these Control Plane nodes.

Launch new VM instances

../../../_images/control_plane_1.png

Select Ubuntu 18.04 as base image

Select “No” for “Create New Volume”

../../../_images/control_plane_2.png

Select Flavor

The recommended flavor is at least 4 vCPU and 8 GB of RAM.

../../../_images/control_plane_3.png

Networking

../../../_images/control_plane_4.png

Security Groups

../../../_images/control_plane_5.png

Key Pair

Assign the key pair that was created/selected previously (e.g. onap-key).

../../../_images/control_plane_6.png

Apply customization script for Control Plane VMs

Click openstack-k8s-controlnode.sh to download the script.

#!/bin/bash

DOCKER_VERSION=18.09.5

apt-get update

curl https://releases.rancher.com/install-docker/$DOCKER_VERSION.sh | sh
mkdir -p /etc/systemd/system/docker.service.d/
cat > /etc/systemd/system/docker.service.d/docker.conf << EOF
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H fd:// --insecure-registry=nexus3.onap.org:10001
EOF

sudo usermod -aG docker ubuntu

systemctl daemon-reload
systemctl restart docker
apt-mark hold docker-ce

IP_ADDR=`ip address |grep ens|grep inet|awk '{print $2}'| awk -F / '{print $1}'`
HOSTNAME=`hostname`

echo "$IP_ADDR $HOSTNAME" >> /etc/hosts

docker login -u docker -p docker nexus3.onap.org:10001

sudo apt-get install make -y


exit 0

This customization script will:

  • update ubuntu
  • install docker
../../../_images/control_plane_7.png

Launch Instance

../../../_images/control_plane_8.png
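Once a Control Plane VM is reachable over SSH, you can spot-check that the customization script completed. The helper below is a hypothetical sketch, assuming the ubuntu user and the onap-key pair used throughout this guide:

```shell
# check_docker: print the Docker server version running on a remote node.
# Assumes the "ubuntu" user and the key pair created earlier in this guide.
check_docker() {
  local ip="$1"
  ssh -i ~/.ssh/onap-key -o StrictHostKeyChecking=no "ubuntu@${ip}" \
    "docker version --format '{{.Server.Version}}'"
}

# Usage (the script pins Docker 18.09.5, so that is the expected output):
# check_docker <floating-ip>
```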

Create Kubernetes Worker VMs

The following instructions describe how to create OpenStack VMs to host the Highly-Available Kubernetes Workers. ONAP workloads will only be scheduled on these nodes.

Launch new VM instances

The number and size of Worker VMs depend on the size of the ONAP deployment. By default, all ONAP applications are deployed. It’s possible to customize the deployment and enable only a subset of the ONAP applications. For the purpose of this guide, however, we will deploy 12 Kubernetes Workers that have been sized to handle the entire ONAP application workload.

../../../_images/worker_1.png

Select Ubuntu 18.04 as base image

Select “No” on “Create New Volume”

../../../_images/worker_2.png

Select Flavor

The size of the Kubernetes hosts depends on the size of the ONAP deployment being installed.

If only a small subset of ONAP applications is being deployed (i.e. for testing purposes), then 16 GB or 32 GB of RAM may be sufficient.

../../../_images/worker_3.png

Networking

../../../_images/worker_4.png

Security Group

../../../_images/worker_5.png

Key Pair

Assign the key pair that was created/selected previously (e.g. onap-key).

../../../_images/worker_6.png

Apply customization script for Kubernetes VM(s)

Click openstack-k8s-workernode.sh to download the script.

#!/bin/bash

DOCKER_VERSION=18.09.5

apt-get update

curl https://releases.rancher.com/install-docker/$DOCKER_VERSION.sh | sh
mkdir -p /etc/systemd/system/docker.service.d/
cat > /etc/systemd/system/docker.service.d/docker.conf << EOF
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H fd:// --insecure-registry=nexus3.onap.org:10001
EOF

sudo usermod -aG docker ubuntu

systemctl daemon-reload
systemctl restart docker
apt-mark hold docker-ce

IP_ADDR=`ip address |grep ens|grep inet|awk '{print $2}'| awk -F / '{print $1}'`
HOSTNAME=`hostname`

echo "$IP_ADDR $HOSTNAME" >> /etc/hosts

docker login -u docker -p docker nexus3.onap.org:10001

sudo apt-get install make -y

# install nfs
sudo apt-get install nfs-common -y


exit 0

This customization script will:

  • update ubuntu
  • install docker
  • install nfs common

Launch Instance

../../../_images/worker_7.png

Assign Floating IP addresses

Assign Floating IPs to all Control Plane and Worker VMs. These addresses provide external access to the VMs and will be used by RKE to configure Kubernetes on the VMs.

Repeat the following for each VM previously created:

../../../_images/floating_1.png

Resulting floating IP assignments in this example.

../../../_images/floating_2.png

Configure Rancher Kubernetes Engine (RKE)

Install RKE

Download and install RKE on a VM, desktop or laptop. Binaries can be found here for Linux and Mac: https://github.com/rancher/rke/releases/tag/v0.2.1

RKE requires a cluster.yml as input. An example file is shown below that describes a Kubernetes cluster that will be mapped onto the OpenStack VMs created earlier in this guide.

Example: cluster.yml

../../../_images/rke_1.png

Click cluster.yml to download the configuration file.

# An example of an HA Kubernetes cluster for ONAP
nodes:
- address: 10.12.6.85
  port: "22"
  internal_address: 10.0.0.8
  role:
  - controlplane
  - etcd
  hostname_override: "onap-control-1"
  user: ubuntu
  ssh_key_path: "~/.ssh/onap-key"
- address: 10.12.6.90
  port: "22"
  internal_address: 10.0.0.11
  role:
  - controlplane
  - etcd
  hostname_override: "onap-control-2"
  user: ubuntu
  ssh_key_path: "~/.ssh/onap-key"
- address: 10.12.6.89
  port: "22"
  internal_address: 10.0.0.12
  role:
  - controlplane
  - etcd
  hostname_override: "onap-control-3"
  user: ubuntu
  ssh_key_path: "~/.ssh/onap-key"
- address: 10.12.5.165
  port: "22"
  internal_address: 10.0.0.14
  role:
  - worker
  hostname_override: "onap-k8s-1"
  user: ubuntu
  ssh_key_path: "~/.ssh/onap-key"
- address: 10.12.6.238
  port: "22"
  internal_address: 10.0.0.26
  role:
  - worker
  hostname_override: "onap-k8s-2"
  user: ubuntu
  ssh_key_path: "~/.ssh/onap-key"
- address: 10.12.6.126
  port: "22"
  internal_address: 10.0.0.5
  role:
  - worker
  hostname_override: "onap-k8s-3"
  user: ubuntu
  ssh_key_path: "~/.ssh/onap-key"
- address: 10.12.5.11
  port: "22"
  internal_address: 10.0.0.6
  role:
  - worker
  hostname_override: "onap-k8s-4"
  user: ubuntu
  ssh_key_path: "~/.ssh/onap-key"
- address: 10.12.6.244
  port: "22"
  internal_address: 10.0.0.9
  role:
  - worker
  hostname_override: "onap-k8s-5"
  user: ubuntu
  ssh_key_path: "~/.ssh/onap-key"
- address: 10.12.6.249
  port: "22"
  internal_address: 10.0.0.17
  role:
  - worker
  hostname_override: "onap-k8s-6"
  user: ubuntu
  ssh_key_path: "~/.ssh/onap-key"
- address: 10.12.5.191
  port: "22"
  internal_address: 10.0.0.20
  role:
  - worker
  hostname_override: "onap-k8s-7"
  user: ubuntu
  ssh_key_path: "~/.ssh/onap-key"
- address: 10.12.6.111
  port: "22"
  internal_address: 10.0.0.10
  role:
  - worker
  hostname_override: "onap-k8s-8"
  user: ubuntu
  ssh_key_path: "~/.ssh/onap-key"
- address: 10.12.6.195
  port: "22"
  internal_address: 10.0.0.4
  role:
  - worker
  hostname_override: "onap-k8s-9"
  user: ubuntu
  ssh_key_path: "~/.ssh/onap-key"
- address: 10.12.5.160
  port: "22"
  internal_address: 10.0.0.16
  role:
  - worker
  hostname_override: "onap-k8s-10"
  user: ubuntu
  ssh_key_path: "~/.ssh/onap-key"
- address: 10.12.6.74
  port: "22"
  internal_address: 10.0.0.18
  role:
  - worker
  hostname_override: "onap-k8s-11"
  user: ubuntu
  ssh_key_path: "~/.ssh/onap-key"
- address: 10.12.6.82
  port: "22"
  internal_address: 10.0.0.7
  role:
  - worker
  hostname_override: "onap-k8s-12"
  user: ubuntu
  ssh_key_path: "~/.ssh/onap-key"
services:
  kube-api:
    service_cluster_ip_range: 10.43.0.0/16
    pod_security_policy: false
    always_pull_images: false
  kube-controller:
    cluster_cidr: 10.42.0.0/16
    service_cluster_ip_range: 10.43.0.0/16
  kubelet:
    cluster_domain: cluster.local
    cluster_dns_server: 10.43.0.10
    fail_swap_on: false
network:
  plugin: canal
authentication:
  strategy: x509
ssh_key_path: "~/.ssh/onap-key"
ssh_agent_auth: false
authorization:
  mode: rbac
ignore_docker_version: false
kubernetes_version: "v1.13.5-rancher1-2"
private_registries:
- url: nexus3.onap.org:10001
  user: docker
  password: docker
  is_default: true
cluster_name: "onap"
restore:
  restore: false
  snapshot_name: ""

Prepare cluster.yml

Before this configuration file can be used, the external address and the internal_address must be mapped for each control plane and worker node in this file.
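If you keep a templated copy of cluster.yml with placeholder tokens, the per-node address mapping can be scripted. The token names below are illustrative, not part of RKE; this assumes GNU sed (on macOS, `sed -i` needs an explicit suffix argument).

```shell
# update_node_ips: replace illustrative placeholder tokens in a templated
# cluster.yml with a node's floating (external) and internal IPs.
update_node_ips() {
  local file="$1" external="$2" internal="$3"
  # GNU sed in-place edit; the placeholder names are hypothetical.
  sed -i \
    -e "s/EXTERNAL_IP_PLACEHOLDER/${external}/" \
    -e "s/INTERNAL_IP_PLACEHOLDER/${internal}/" \
    "$file"
}

# Usage:
# update_node_ips cluster.yml 10.12.6.85 10.0.0.8
```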

Run RKE

From within the same directory as the cluster.yml file, simply execute:

> rke up

RKE will provision Kubernetes on the nodes and, on success, write the cluster’s kubeconfig to kube_config_cluster.yml in the same directory.

Install Kubectl

Download and install kubectl. Binaries can be found here for Linux and Mac:

https://storage.googleapis.com/kubernetes-release/release/v1.13.5/bin/linux/amd64/kubectl
https://storage.googleapis.com/kubernetes-release/release/v1.13.5/bin/darwin/amd64/kubectl

Validate deployment

> cp kube_config_cluster.yml ~/.kube/config.onap

> export KUBECONFIG=~/.kube/config.onap

> kubectl config use-context onap

> kubectl get nodes -o=wide
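With the 3 control plane and 12 worker VMs created in this guide, `kubectl get nodes` should list 15 nodes with STATUS Ready. A small filter to count Ready nodes from that output (a sketch; it reads the command output on stdin):

```shell
# count_ready: count nodes reported as "Ready" in `kubectl get nodes`
# output. Skips the header line and checks the STATUS column.
count_ready() {
  awk 'NR > 1 && $2 == "Ready"' | wc -l
}

# Usage:
# kubectl get nodes | count_ready    # expect 15 for this guide's topology
```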

Install Helm

Example Helm client install on Linux:

> wget http://storage.googleapis.com/kubernetes-helm/helm-v2.12.3-linux-amd64.tar.gz

> tar -zxvf helm-v2.12.3-linux-amd64.tar.gz

> sudo mv linux-amd64/helm /usr/local/bin/helm

Initialize Kubernetes Cluster for use by Helm

> kubectl -n kube-system create serviceaccount tiller

> kubectl create clusterrolebinding tiller --clusterrole=cluster-admin --serviceaccount=kube-system:tiller

> helm init --service-account tiller

> kubectl -n kube-system  rollout status deploy/tiller-deploy
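Before deploying charts, it can be useful to confirm that Tiller is actually serving. The helper below is a sketch that checks the available replicas of the tiller-deploy Deployment created by `helm init`:

```shell
# tiller_ready: succeed when the tiller-deploy Deployment reports at
# least one available replica in the kube-system namespace.
tiller_ready() {
  local avail
  avail=$(kubectl -n kube-system get deploy tiller-deploy \
    -o jsonpath='{.status.availableReplicas}')
  [ "${avail:-0}" -ge 1 ]
}

# Usage:
# tiller_ready && helm version
```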

Setting up an NFS share for Multinode Kubernetes Clusters

Deploying applications to a Kubernetes cluster requires Kubernetes nodes to share a common, distributed filesystem. In this tutorial, we will set up an NFS Master and configure all Worker nodes of a Kubernetes cluster to play the role of NFS slaves.

It is recommended that a separate VM, outside of the Kubernetes cluster, be used. This ensures that the NFS Master does not compete for resources with the Kubernetes Control Plane or Worker Nodes.

Launch new NFS Server VM instance

../../../_images/nfs_server_1.png

Select Ubuntu 18.04 as base image

Select “No” on “Create New Volume”

../../../_images/nfs_server_2.png

Select Flavor

../../../_images/nfs_server_3.png

Networking

../../../_images/nfs_server_4.png

Security Group

../../../_images/nfs_server_5.png

Key Pair

Assign the key pair that was created/selected previously (e.g. onap-key).

../../../_images/nfs_server_6.png

Apply customization script for NFS Server VM

Click openstack-nfs-server.sh to download the script.

#!/bin/bash

DOCKER_VERSION=18.09.5

apt-get update

curl https://releases.rancher.com/install-docker/$DOCKER_VERSION.sh | sh
mkdir -p /etc/systemd/system/docker.service.d/
cat > /etc/systemd/system/docker.service.d/docker.conf << EOF
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H fd:// --insecure-registry=nexus3.onap.org:10001
EOF

sudo usermod -aG docker ubuntu

systemctl daemon-reload
systemctl restart docker
apt-mark hold docker-ce

IP_ADDR=`ip address |grep ens|grep inet|awk '{print $2}'| awk -F / '{print $1}'`
HOSTNAME=`hostname`

echo "$IP_ADDR $HOSTNAME" >> /etc/hosts

docker login -u docker -p docker nexus3.onap.org:10001

sudo apt-get install make -y

# install nfs
sudo apt-get install nfs-common -y


exit 0

This customization script will:

  • update ubuntu
  • install docker
  • install nfs common

Launch Instance

../../../_images/nfs_server_7.png

Assign Floating IP addresses

../../../_images/nfs_server_8.png

Resulting floating IP assignments in this example.

../../../_images/nfs_server_9.png

To properly set up an NFS share on Master and Slave nodes, the user can run the scripts below.

Click master_nfs_node.sh to download the script.

#!/bin/bash

usage () {
  echo "Usage:"
  echo "   ./$(basename $0) node1_ip node2_ip ... nodeN_ip"
  exit 1
}

if [ "$#" -lt 1 ]; then
  echo "Missing NFS slave nodes"
  usage
fi

#Install NFS kernel
sudo apt-get update
sudo apt-get install -y nfs-kernel-server

#Create /dockerdata-nfs and set permissions
sudo mkdir -p /dockerdata-nfs
sudo chmod 777 -R /dockerdata-nfs
sudo chown nobody:nogroup /dockerdata-nfs/

#Update the /etc/exports
NFS_EXP=""
for i in $@; do
  NFS_EXP+="$i(rw,sync,no_root_squash,no_subtree_check) "
done
echo "/dockerdata-nfs "$NFS_EXP | sudo tee -a /etc/exports

#Restart the NFS service
sudo exportfs -a
sudo systemctl restart nfs-kernel-server

Click slave_nfs_node.sh to download the script.

#!/bin/bash

usage () {
  echo "Usage:"
  echo "   ./$(basename $0) nfs_master_ip"
  exit 1
}

if [ "$#" -ne 1 ]; then
  echo "Missing NFS master node"
  usage
fi

MASTER_IP=$1

#Install NFS common
sudo apt-get update
sudo apt-get install -y nfs-common

#Create NFS directory
sudo mkdir -p /dockerdata-nfs

#Mount the remote NFS directory to the local one
sudo mount $MASTER_IP:/dockerdata-nfs /dockerdata-nfs/
echo "$MASTER_IP:/dockerdata-nfs /dockerdata-nfs  nfs auto,nofail,noatime,nolock,intr,tcp,actimeo=1800 0 0" | sudo tee -a /etc/fstab

The master_nfs_node.sh script runs on the NFS Master node and takes the list of NFS Slave nodes as input, e.g.:

> sudo ./master_nfs_node.sh node1_ip node2_ip ... nodeN_ip
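The exports line the script appends to /etc/exports can be previewed without touching the file. The helper below mirrors the script’s loop (illustrative only):

```shell
# build_exports: print the /etc/exports line that master_nfs_node.sh
# would append for a given list of slave-node IPs.
build_exports() {
  local exp=""
  for ip in "$@"; do
    exp="${exp}${ip}(rw,sync,no_root_squash,no_subtree_check) "
  done
  echo "/dockerdata-nfs ${exp}"
}

# Usage:
# build_exports 10.0.0.14 10.0.0.26
```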

The slave_nfs_node.sh script runs on each NFS Slave node and takes the IP of the NFS Master node as input, e.g.:

> sudo ./slave_nfs_node.sh master_node_ip
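Once mounted, each slave can be verified with a small filter over the output of `mount` (a sketch; the pattern matches the /dockerdata-nfs mount point followed by an nfs filesystem type):

```shell
# nfs_mounted: succeed when /dockerdata-nfs appears as an NFS mount
# in the mount listing supplied on stdin.
nfs_mounted() {
  grep -q " /dockerdata-nfs .*nfs"
}

# Usage:
# mount | nfs_mounted && echo "NFS share mounted"
```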

ONAP Deployment via OOM

Now that Kubernetes and Helm are installed and configured, you can prepare to deploy ONAP. Follow the instructions in the README.md or look at the official documentation to get started: