CLAMP participants (kubernetes, http) Smoke Tests
1. Introduction
The CLAMP participants (kubernetes and http) are used together in a kubernetes environment: the kubernetes participant interacts with the helm client to deploy microservices as helm charts, while the http participant configures those microservices over their REST endpoints. Both participants are often used together in the Automation Composition Management (ACM) workflow.
This document serves as a guide for smoke testing the components involved when working with these participants and outlines how they operate. It also shows a developer how to set up an environment for carrying out smoke tests on the participants.
2. Setup Guide
This article assumes that:
You are using Linux, macOS, or Windows as your operating system.
You are using a directory called git off your home directory (~/git) for your git repositories
Your local maven repository is in the location ~/.m2/repository
You have copied the settings.xml from oparent to the ~/.m2/ directory
You have added settings to access the ONAP Nexus to your M2 configuration, see Maven Settings Example (bottom of the linked page)
Your local helm is in the location /usr/local/bin/helm
Your local kubectl is in the location /usr/local/bin/kubectl
The procedure documented in this article has been verified on an Ubuntu 20.04 LTS VM.
2.1 Prerequisites
Java 17
Docker
Maven 3.9
Git
helm3
k8s cluster
Refer to the guide Setting up dev environment for basic environment setup.
2.2 Cloning CLAMP automation composition
Run the commands below to clone the required module from the ONAP git repository. This clones the CLAMP automation composition repository, which contains the runtime and all participants used in this guide.
cd ~/git
git clone https://gerrit.onap.org/r/policy/clamp clamp
Execution of the command above results in the following directory hierarchy in your ~/git directory:
~/git/clamp
2.3 Setting up the components
2.3.1 Running MariaDb and Kafka
We will use Docker to run the MariaDB instance and Kafka. MariaDB will host the clampacm database used by the ACM runtime. The easiest way to initialize it is with a SQL script. Create the mariadb.sql file in the ~/git directory:
create database clampacm;
CREATE USER 'policy'@'%' IDENTIFIED BY 'P01icY';
GRANT ALL PRIVILEGES ON clampacm.* TO 'policy'@'%';
Create the ‘docker-compose.yaml’ file in the same directory with the following content:
services:
mariadb:
image: mariadb:10.10.2
command: ['mysqld', '--lower_case_table_names=1']
volumes:
- type: bind
source: ./mariadb.sql
target: /docker-entrypoint-initdb.d/data.sql
environment:
- MYSQL_ROOT_PASSWORD=my-secret-pw
ports:
- "3306:3306"
zookeeper:
image: confluentinc/cp-zookeeper:latest
environment:
ZOOKEEPER_CLIENT_PORT: 2181
ZOOKEEPER_TICK_TIME: 2000
ports:
- 2181:2181
kafka:
image: confluentinc/cp-kafka:latest
container_name: kafka
depends_on:
- zookeeper
ports:
- 29092:29092
- 9092:9092
environment:
KAFKA_BROKER_ID: 1
KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092
KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
Run the docker composition:
cd ~/git/
docker compose up
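Once the containers are up, you can check in a second terminal that both services started and that the clampacm database was created (a quick sanity check, run from the same directory as the compose file; the user and password come from mariadb.sql above):
docker compose ps
docker compose exec mariadb mysql -upolicy -pP01icY -e "SHOW DATABASES;"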
2.3.2 Setting topicParameterGroup for kafka localhost
The topicCommInfrastructure needs to be set to ‘kafka’ and the server to ‘localhost:29092’. In the clamp repo, you will find the file ‘runtime-acm/src/main/resources/application.yaml’. This file (in the ‘runtime’ parameters section) may need to be altered as below:
runtime:
topics:
operationTopic: policy-acruntime-participant
syncTopic: acm-ppnt-sync
participantParameters:
heartBeatMs: 20000
maxStatusWaitMs: 150000
maxOperationWaitMs: 200000
topicParameterGroup:
topicSources:
- topic: ${runtime.topics.operationTopic}
servers:
- localhost:29092
topicCommInfrastructure: kafka
fetchTimeout: 15000
topicSinks:
- topic: ${runtime.topics.operationTopic}
servers:
- localhost:29092
topicCommInfrastructure: kafka
- topic: ${runtime.topics.syncTopic}
servers:
- localhost:29092
topicCommInfrastructure: kafka
acmParameters:
toscaElementName: org.onap.policy.clamp.acm.AutomationCompositionElement
toscaCompositionName: org.onap.policy.clamp.acm.AutomationComposition
The same changes (in the ‘participant’ parameters section) may need to be applied to the file ‘participant/participant-impl/participant-impl-http/src/main/resources/config/application.yaml’.
participant:
intermediaryParameters:
topics:
operationTopic: policy-acruntime-participant
syncTopic: acm-ppnt-sync
reportingTimeIntervalMs: 120000
description: Participant Description
participantId: 101c62b3-8918-41b9-a747-d21eb79c6c01
clampAutomationCompositionTopics:
topicSources:
- topic: ${participant.intermediaryParameters.topics.operationTopic}
servers:
- localhost:29092
topicCommInfrastructure: kafka
fetchTimeout: 15000
- topic: ${participant.intermediaryParameters.topics.syncTopic}
servers:
- localhost:29092
topicCommInfrastructure: kafka
fetchTimeout: 15000
topicSinks:
- topic: ${participant.intermediaryParameters.topics.operationTopic}
servers:
- localhost:29092
topicCommInfrastructure: kafka
participantSupportedElementTypes:
-
typeName: org.onap.policy.clamp.acm.HttpAutomationCompositionElement
typeVersion: 1.0.0
And to the file ‘participant/participant-impl/participant-impl-kubernetes/src/main/resources/config/application.yaml’:
participant:
localChartDirectory: /home/policy/local-charts
infoFileName: CHART_INFO.json
intermediaryParameters:
topics:
operationTopic: policy-acruntime-participant
syncTopic: acm-ppnt-sync
reportingTimeIntervalMs: 120000
description: Participant Description
participantId: 101c62b3-8918-41b9-a747-d21eb79c6c02
clampAutomationCompositionTopics:
topicSources:
- topic: ${participant.intermediaryParameters.topics.operationTopic}
servers:
- localhost:29092
topicCommInfrastructure: kafka
fetchTimeout: 15000
- topic: ${participant.intermediaryParameters.topics.syncTopic}
servers:
- localhost:29092
topicCommInfrastructure: kafka
fetchTimeout: 15000
topicSinks:
- topic: ${participant.intermediaryParameters.topics.operationTopic}
servers:
- localhost:29092
topicCommInfrastructure: kafka
participantSupportedElementTypes:
-
typeName: org.onap.policy.clamp.acm.K8SMicroserviceAutomationCompositionElement
typeVersion: 1.0.0
If the helm location is not ‘/usr/local/bin/helm’ or the kubectl location is not ‘/usr/local/bin/kubectl’, you have to update the file ‘participant/participant-impl/participant-impl-kubernetes/src/main/java/org/onap/policy/clamp/acm/participant/kubernetes/helm/HelmClient.java’.
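A quick way to locate the hard-coded paths before editing them (assuming the repository was cloned to ~/git/clamp):
cd ~/git/clamp
grep -n "/usr/local/bin" participant/participant-impl/participant-impl-kubernetes/src/main/java/org/onap/policy/clamp/acm/participant/kubernetes/helm/HelmClient.java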
2.3.3 Automation composition Runtime
To start the automation composition runtime service, we need to execute the following maven command from the “runtime-acm” directory in the clamp repo. Automation composition runtime uses the config file “src/main/resources/application.yaml” by default.
mvn spring-boot:run
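Once the runtime has started, a quick sanity check is to query the commissioning endpoint (a sketch; the port and REST credentials are defined in the same application.yaml, the default port typically being 6969, and the scheme may be http or https depending on your configuration):
curl -u '<user>:<password>' http://localhost:6969/onap/policy/clamp/acm/v2/compositions
This should return a JSON response with an empty list of commissioned compositions.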
2.3.4 Helm chart repository
The kubernetes participant consumes helm charts from its local chart database as well as from helm repositories. For this smoke test, we add the nginx-stable helm repository to the helm client with the following command:
helm repo add nginx-stable https://helm.nginx.com/stable
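To confirm that the repository was added and the ingress chart is visible to the helm client:
helm repo update
helm search repo nginx-stable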
2.3.5 Kubernetes and http participants
The participants can be started from the clamp repository by executing the following maven command in the appropriate directories. On startup, the participants register themselves with the Automation composition runtime.
Navigate to the directory “participant/participant-impl/participant-impl-kubernetes/” and start the kubernetes participant:
mvn spring-boot:run
Navigate to the directory “participant/participant-impl/participant-impl-http/” and start the http participant:
mvn spring-boot:run
For building docker images of runtime-acm and participants:
cd ~/git/clamp/
mvn clean install -P docker
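After the build completes, the generated images can be listed (the exact image names depend on the release being built):
docker images | grep policy-clamp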
3. Running Tests
In this section, we run through the sequence of steps in the ACM workflow. The workflow can be triggered via a Postman client or any other REST client such as curl.
3.1 Commissioning
Commission the Automation composition TOSCA definitions to the Runtime.
The Automation composition definitions are commissioned to runtime-acm, which populates the ACM runtime database. The sample TOSCA template commissioned to the runtime endpoint contains definitions for the kubernetes participant, which deploys the nginx ingress microservice helm chart, and an http POST request for the http participant.
Commissioning Endpoint:
POST: https://<Runtime ACM IP>:<Port>/onap/policy/clamp/acm/v2/compositions
A successful commissioning returns a 201 response in the Postman client.
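The same request can be sent with curl instead of Postman (a sketch, assuming the runtime listens on localhost:6969 and the sample TOSCA template has been saved locally as acm-tosca.yaml; substitute your own credentials):
curl -u '<user>:<password>' -X POST -H "Content-Type: application/yaml" --data-binary @acm-tosca.yaml http://localhost:6969/onap/policy/clamp/acm/v2/compositions
The compositionId returned in the response is used in the following requests.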
3.2 Prime an Automation composition definition
Once the template is commissioned, we can prime it. This connects the AC definition with the related participants.
Prime Endpoint:
PUT: https://<Runtime ACM IP>:<Port>/onap/policy/clamp/acm/v2/compositions/{compositionId}
Request body:
{
"primeOrder": "PRIME"
}
A successful prime request returns a 202 response in the Postman client.
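With curl (a sketch, using the compositionId returned during commissioning):
curl -u '<user>:<password>' -X PUT -H "Content-Type: application/json" -d '{"primeOrder": "PRIME"}' http://localhost:6969/onap/policy/clamp/acm/v2/compositions/{compositionId}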
3.3 Create New Instances of Automation composition
Once the AC definition is primed, we can instantiate automation composition instances. This creates the instances in the default state “UNDEPLOYED”.
Instantiation Endpoint:
POST: https://<Runtime ACM IP>:<Port>/onap/policy/clamp/acm/v2/compositions/{compositionId}/instances
Request body:
A successful creation of a new instance returns a 201 response in the Postman client.
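With curl, the instantiation request body can be posted from a file (a sketch; acm-instance.json here stands for your own instantiation JSON):
curl -u '<user>:<password>' -X POST -H "Content-Type: application/json" --data-binary @acm-instance.json http://localhost:6969/onap/policy/clamp/acm/v2/compositions/{compositionId}/instances
The instanceId returned in the response is used for the deploy and undeploy requests below.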
3.4 Change the State of the Instance
When the automation composition is updated to the state “DEPLOYED”, the kubernetes participant fetches the node template of each automation composition element and deploys the helm chart of each AC element into the cluster. The following sample JSON input is passed in the request body.
Automation Composition Update Endpoint:
PUT: https://<Runtime ACM IP>:<Port>/onap/policy/clamp/acm/v2/compositions/{compositionId}/instances/{instanceId}
Request body:
{
"deployOrder": "DEPLOY"
}
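The equivalent deploy request with curl (a sketch):
curl -u '<user>:<password>' -X PUT -H "Content-Type: application/json" -d '{"deployOrder": "DEPLOY"}' http://localhost:6969/onap/policy/clamp/acm/v2/compositions/{compositionId}/instances/{instanceId}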
A successful deploy request returns a 202 response in the Postman client. After the state changes to “DEPLOYED”, the nginx-ingress pod is deployed in the kubernetes cluster, and the http participant should have posted the dummy data to the URL configured in the TOSCA template.
The following commands can be used to verify that the pods were deployed successfully by the kubernetes participant.
helm ls -n onap | grep nginx
kubectl get po -n onap | grep nginx
The overall state of the automation composition should be “DEPLOYED”, indicating that both participants have successfully completed their operations. This can be verified via the following REST endpoint.
Verify automation composition state:
GET: https://<Runtime ACM IP>:<Port>/onap/policy/clamp/acm/v2/compositions/{compositionId}/instances/{instanceId}
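For example, with curl (a sketch):
curl -u '<user>:<password>' http://localhost:6969/onap/policy/clamp/acm/v2/compositions/{compositionId}/instances/{instanceId}
The response should report the deploy state of the instance as “DEPLOYED”.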
3.5 Automation Compositions can be “UNDEPLOYED” after deployment
When the state is changed to “UNDEPLOYED”, all helm deployments under the corresponding automation composition are uninstalled from the cluster.
Automation Composition Update Endpoint:
PUT: https://<Runtime ACM IP>:<Port>/onap/policy/clamp/acm/v2/compositions/{compositionId}/instances/{instanceId}
Request body:
{
"deployOrder": "UNDEPLOY"
}
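With curl (a sketch):
curl -u '<user>:<password>' -X PUT -H "Content-Type: application/json" -d '{"deployOrder": "UNDEPLOY"}' http://localhost:6969/onap/policy/clamp/acm/v2/compositions/{compositionId}/instances/{instanceId}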
The nginx pod should be deleted from the k8s cluster.
This concludes the required smoke tests for http and kubernetes participants.