CLAMP Policy Participant Smoke Tests

1. Introduction

Smoke testing of the policy participant is executed in a local CLAMP/Policy environment. The CLAMP-ACM interfaces interact with the Policy Framework to perform actions based on the state of the policy participant. The goal of the smoke tests is to ensure that the CLAMP Policy Participant and the Policy Framework work together as expected. All applications are run from the console, so they need to run on different ports, and the configuration files should be changed accordingly:

Application                     Port
----------------------------    -----
MariaDB                         3306
Zookeeper                       2181
Kafka                           29092
policy-api                      6968
policy-pap                      6970
policy-clamp-runtime-acm        6969
onap/policy-clamp-ac-pf-ppnt    8085

2. Setup Guide

This section shows the developer how to set up their environment to start testing, with some instructions on how to carry out the tests. There are several prerequisites. Note that this guide is written by a Linux user, although the majority of the steps shown will be exactly the same on Windows or other systems.

2.1 Prerequisites

The steps below assume that Git, Java, Maven, and Docker (with Docker Compose) are already installed, since they are used to clone, build, and run the components.

2.2 Cloning CLAMP automation composition and all dependencies

Run a script such as the one below to clone the required modules from the ONAP git repository. This script clones the CLAMP automation composition and all of its dependencies.

Typical ONAP Policy Framework Clone Script
#!/usr/bin/env bash

## script name for output
MOD_SCRIPT_NAME=`basename $0`

## the ONAP clone directory, defaults to "onap"
clone_dir="onap"

## the ONAP repos to clone
onap_repos="\
policy/api \
policy/clamp \
policy/pap "

##
## Help screen and exit condition (i.e. too few arguments)
##
Help()
{
    echo ""
    echo "$MOD_SCRIPT_NAME - clones all required ONAP git repositories"
    echo ""
    echo "       Usage:  $MOD_SCRIPT_NAME [-options]"
    echo ""
    echo "       Options"
    echo "         -d          - the ONAP clone directory, defaults to 'onap'"
    echo "         -h          - this help screen"
    echo ""
    exit 255;
}

##
## read command line
##
while [ $# -gt 0 ]
do
    case $1 in
        #-d ONAP clone directory
        -d)
            shift
            if [ -z "$1" ]; then
                echo "$MOD_SCRIPT_NAME: no clone directory"
                exit 1
            fi
            clone_dir=$1
            shift
        ;;

        #-h prints help and exits
        -h)
            Help;;

        *)    echo "$MOD_SCRIPT_NAME: undefined CLI option - $1"; exit 255;;
    esac
done

if [ -f "$clone_dir" ]; then
    echo "$MOD_SCRIPT_NAME: requested clone directory '$clone_dir' exists as file"
    exit 2
fi
if [ -d "$clone_dir" ]; then
    echo "$MOD_SCRIPT_NAME: requested clone directory '$clone_dir' exists as directory"
    exit 2
fi

mkdir "$clone_dir"
if [ $? != 0 ]
then
    echo "cannot clone ONAP repositories, could not create directory '$clone_dir'"
    exit 3
fi

for repo in $onap_repos
do
    ## create the intermediate directory (e.g. "policy") if the repo path has one;
    ## mkdir -p tolerates the directory already existing from a previous repo
    repoDir=`dirname "$repo"`

    if [ "$repoDir" != "." ]
    then
        mkdir -p "$clone_dir/$repoDir"
        if [ $? != 0 ]
        then
            echo "cannot clone ONAP repositories, could not create directory '$clone_dir/$repoDir'"
            exit 4
        fi
    fi

    git clone https://gerrit.onap.org/r/${repo} $clone_dir/$repo
done

echo "ONAP has been cloned into '$clone_dir'"

Executing the script above from your ~/git directory results in the following directory hierarchy:

  • ~/git/onap

  • ~/git/onap/policy

  • ~/git/onap/policy/api

  • ~/git/onap/policy/clamp

  • ~/git/onap/policy/pap

2.3 Building CLAMP automation composition and all dependencies

Step 1: Set the topicParameterGroup for the local Kafka instance in clamp and policy-participant: ‘kafka’ as the topicCommInfrastructure and ‘localhost:29092’ as the server. In the clamp repo, you should find the file ‘runtime-acm/src/main/resources/application.yaml’. This file (in the ‘runtime’ parameters section) may need to be altered as below:

runtime:
  topics:
    operationTopic: policy-acruntime-participant
    syncTopic: acm-ppnt-sync
  participantParameters:
    heartBeatMs: 20000
    maxStatusWaitMs: 150000
    maxOperationWaitMs: 200000
  topicParameterGroup:
    topicSources:
      - topic: ${runtime.topics.operationTopic}
        servers:
          - localhost:29092
        topicCommInfrastructure: kafka
        fetchTimeout: 15000
    topicSinks:
      - topic: ${runtime.topics.operationTopic}
        servers:
          - localhost:29092
        topicCommInfrastructure: kafka
      - topic: ${runtime.topics.syncTopic}
        servers:
          - localhost:29092
        topicCommInfrastructure: kafka
  acmParameters:
    toscaElementName: org.onap.policy.clamp.acm.AutomationCompositionElement
    toscaCompositionName: org.onap.policy.clamp.acm.AutomationComposition

A similar change, setting the topicParameterGroup for the local Kafka instance and the api/pap HTTP clients (in the ‘participant’ parameters section), may need to be applied to the file ‘participant/participant-impl/participant-impl-policy/src/main/resources/config/application.yaml’.

participant:
  pdpGroup: defaultGroup
  pdpType: apex
  policyApiParameters:
    clientName: api
    hostname: localhost
    port: 6968
    userName: policyadmin
    password: zb!XztG34
    useHttps: false
    allowSelfSignedCerts: false
  policyPapParameters:
    clientName: pap
    hostname: localhost
    port: 6970
    userName: policyadmin
    password: zb!XztG34
    useHttps: false
    allowSelfSignedCerts: false
  intermediaryParameters:
    topics:
      operationTopic: policy-acruntime-participant
      syncTopic: acm-ppnt-sync
    reportingTimeIntervalMs: 120000
    description: Participant Description
    participantId: 101c62b3-8918-41b9-a747-d21eb79c6c03
    clampAutomationCompositionTopics:
      topicSources:
        - topic: ${participant.intermediaryParameters.topics.operationTopic}
          servers:
            - localhost:29092
          topicCommInfrastructure: kafka
          fetchTimeout: 15000
        - topic: ${participant.intermediaryParameters.topics.syncTopic}
          servers:
            - localhost:29092
          topicCommInfrastructure: kafka
          fetchTimeout: 15000
      topicSinks:
        - topic: ${participant.intermediaryParameters.topics.operationTopic}
          servers:
            - localhost:29092
          topicCommInfrastructure: kafka
    participantSupportedElementTypes:
      - typeName: org.onap.policy.clamp.acm.PolicyAutomationCompositionElement
        typeVersion: 1.0.0

Step 2: Setting datasource.url, hibernate.ddl-auto and server.port in policy-api. In the api repo, you should find the file ‘main/src/main/resources/application.yaml’. This file may need to be altered as below:

spring:
  profiles:
    active: default
  security.user:
    name: policyadmin
    password: zb!XztG34
  mvc.converters.preferred-json-mapper: gson
  datasource:
    url: jdbc:mariadb://localhost:3306/policyadmin
    driverClassName: org.mariadb.jdbc.Driver
    username: policy_user
    password: policy_user
  jpa:
    hibernate:
      ddl-auto: none
      naming:
        physical-strategy: org.hibernate.boot.model.naming.PhysicalNamingStrategyStandardImpl
        implicit-strategy: org.onap.policy.common.spring.utils.CustomImplicitNamingStrategy

server:
  port: 6968
  servlet:
    context-path: /policy/api/v1

Step 3: Setting datasource.url, server.port, and api http client in policy-pap. In the pap repo, you should find the file ‘main/src/main/resources/application.yaml’. This file may need to be altered as below:

spring:
  security:
    user:
      name: policyadmin
      password: zb!XztG34
  datasource:
    url: jdbc:mariadb://localhost:3306/policyadmin
    driverClassName: org.mariadb.jdbc.Driver
    username: policy_user
    password: policy_user
  jpa:
    hibernate:
      ddl-auto: none
      naming:
        physical-strategy: org.hibernate.boot.model.naming.PhysicalNamingStrategyStandardImpl
        implicit-strategy: org.onap.policy.common.spring.utils.CustomImplicitNamingStrategy
  mvc:
    converters:
      preferred-json-mapper: gson

server:
  port: 6970
  servlet:
    context-path: /policy/pap/v1
pap:
  name: PapGroup
  topic:
    pdp-pap.name: POLICY-PDP-PAP
    notification.name: POLICY-NOTIFICATION
    heartbeat.name: POLICY-HEARTBEAT
  pdpParameters:
    heartBeatMs: 120000
    updateParameters:
      maxRetryCount: 1
      maxWaitMs: 30000
    stateChangeParameters:
      maxRetryCount: 1
      maxWaitMs: 30000
  topicParameterGroup:
    topicSources:
      - topic: ${pap.topic.pdp-pap.name}
        servers:
          - kafka
        topicCommInfrastructure: NOOP
        fetchTimeout: 15000
      - topic: ${pap.topic.heartbeat.name}
        effectiveTopic: ${pap.topic.pdp-pap.name}
        consumerGroup: policy-pap
        servers:
          - kafka
        topicCommInfrastructure: NOOP
        fetchTimeout: 15000
    topicSinks:
      - topic: ${pap.topic.pdp-pap.name}
        servers:
          - kafka
        topicCommInfrastructure: NOOP
      - topic: ${pap.topic.notification.name}
        servers:
          - kafka
        topicCommInfrastructure: NOOP
  healthCheckRestClientParameters:
    - clientName: api
      hostname: localhost
      port: 6968
      userName: policyadmin
      password: zb!XztG34
      useHttps: false
      basePath: policy/api/v1/healthcheck
    - clientName: distribution
      hostname: policy-distribution
      port: 6969
      userName: healthcheck
      password: zb!XztG34
      useHttps: true
      basePath: healthcheck
    - clientName: kafka
      hostname: kafka
      port: 3905
      useHttps: true
      basePath: topics

management:
  endpoints:
    web:
      base-path: /
      exposure:
        include: health, metrics, prometheus
      path-mapping:
        metrics: plain-metrics
        prometheus: metrics

Step 4: Optionally, for a completely clean build, remove the ONAP built modules from your local repository.

rm -fr ~/.m2/repository/org/onap

Step 5: A pom such as the one below can be used to build the ONAP Policy Framework modules. Create the pom.xml file in the directory ~/git/onap/policy.

Typical pom.xml to build the ONAP Policy Framework
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>org.onap</groupId>
    <artifactId>onap-policy</artifactId>
    <version>1.0.0-SNAPSHOT</version>
    <packaging>pom</packaging>
    <name>${project.artifactId}</name>
    <inceptionYear>2024</inceptionYear>
    <organization>
        <name>ONAP</name>
    </organization>

    <modules>
        <module>api</module>
        <module>clamp</module>
        <module>pap</module>
    </modules>
</project>

Step 6: You can now build the Policy framework.

Build java artifacts only:

cd ~/git/onap/policy
mvn clean install -DskipTests

Build with docker images:

cd ~/git/onap/policy/clamp/packages/
mvn clean install -P docker

cd ~/git/onap/policy/api/packages/
mvn clean install -P docker

cd ~/git/onap/policy/pap/packages/
mvn clean install -P docker

2.4 Setting up the components

2.4.1 MariaDB and Kafka Setup

We will be using Docker to run our MariaDB instance and Zookeeper/Kafka. MariaDB will host a total of two databases:

  • clampacm: the policy-clamp-runtime-acm db

  • policyadmin: the policy-api db

Step 1: Create the mariadb.sql file in the ~/git directory.

CREATE DATABASE clampacm;
CREATE USER 'policy'@'%' IDENTIFIED BY 'P01icY';
GRANT ALL PRIVILEGES ON clampacm.* TO 'policy'@'%';
CREATE DATABASE `policyadmin`;
CREATE USER 'policy_user'@'%' IDENTIFIED BY 'policy_user';
GRANT ALL PRIVILEGES ON policyadmin.* to 'policy_user'@'%';
CREATE DATABASE `migration`;
GRANT ALL PRIVILEGES ON migration.* to 'policy_user'@'%';
FLUSH PRIVILEGES;

Step 2: Create the init.sh file in the ~/git directory, with execution permission.

#!/bin/sh

export POLICY_HOME=/opt/app/policy
export SQL_USER=${MYSQL_USER}
export SQL_PASSWORD=${MYSQL_PASSWORD}
export SCRIPT_DIRECTORY=sql

/opt/app/policy/bin/prepare_upgrade.sh ${SQL_DB}
/opt/app/policy/bin/db-migrator -s ${SQL_DB} -o report
/opt/app/policy/bin/db-migrator -s ${SQL_DB} -o upgrade
rc=$?
/opt/app/policy/bin/db-migrator -s ${SQL_DB} -o report
nc -l -p 6824
exit $rc

Step 3: Create the wait_for_port.sh file in the ~/git directory, with execution permission.

#!/bin/sh

usage() {
  echo args: [-t timeout] [-c command] hostname1 port1 hostname2 port2 ... >&2
  exit 1
}
tmout=300
cmd=
while getopts c:t: opt
do
    case "$opt" in
        c)
            cmd="$OPTARG"
            ;;
        t)
            tmout="$OPTARG"
            ;;
        *)
            usage
            ;;
    esac
done
nargs=$((OPTIND-1))
shift "$nargs"
even_args=$(($#%2))
if [ $# -lt 2 ] || [ "$even_args" -ne 0 ]
then
    usage
fi
while [ $# -ge 2 ]
do
    export host="$1"
    export port="$2"
    shift
    shift
    echo "Waiting for $host port $port..."

    while [ "$tmout" -gt 0 ]
    do
        if command -v docker > /dev/null 2>&1
        then
            docker ps --format "table {{ .Names }}\t{{ .Status }}"
        fi
        nc -vz "$host" "$port"
        rc=$?
        if [ $rc -eq 0 ]
        then
            break
        else
            tmout=$((tmout-1))
            sleep 1
        fi
    done
    if [ $rc -ne 0 ]
    then
        echo "$host port $port cannot be reached"
        exit $rc
    fi
done
$cmd
exit 0

Step 4: Create the ‘docker-compose.yaml’ file using the following code:

services:
  mariadb:
    image: mariadb:10.10.2
    command: ['mysqld', '--lower_case_table_names=1']
    volumes:
      - type: bind
        source: ./mariadb.sql
        target: /docker-entrypoint-initdb.d/data.sql
    environment:
      - MYSQL_ROOT_PASSWORD=my-secret-pw
    ports:
      - "3306:3306"

  policy-db-migrator:
    image: nexus3.onap.org:10001/onap/policy-db-migrator:3.1.3-SNAPSHOT
    container_name: policy-db-migrator
    hostname: policy-db-migrator
    depends_on:
      - mariadb
    expose:
      - 6824
    environment:
      SQL_DB: policyadmin
      SQL_HOST: mariadb
      MYSQL_ROOT_PASSWORD: my-secret-pw
      MYSQL_USER: policy_user
      MYSQL_PASSWORD: policy_user
      MYSQL_CMD: mysql
    volumes:
      - ./init.sh:/opt/app/policy/bin/db_migrator_policy_init.sh:ro
      - ./wait_for_port.sh:/opt/app/policy/bin/wait_for_port.sh:ro
    entrypoint: /opt/app/policy/bin/wait_for_port.sh
    command: [
      '-c',
      '/opt/app/policy/bin/db_migrator_policy_init.sh',
      'mariadb', '3306'
    ]

  zookeeper:
    image: confluentinc/cp-zookeeper:latest
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
    ports:
      - 2181:2181

  kafka:
    image: confluentinc/cp-kafka:latest
    container_name: kafka
    depends_on:
      - zookeeper
    ports:
      - 29092:29092
      - 9092:9092
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1

Step 5: Run the docker composition:

cd ~/git/
docker compose up
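
Optionally, you can check that everything came up cleanly before starting the Policy Framework components. The sketch below assumes it is run from the directory containing the compose file, so that services can be addressed by their compose service names:

# all services should be listed as Up
docker compose ps

# the databases created by mariadb.sql should exist
docker compose exec mariadb mysql -upolicy_user -ppolicy_user -e "SHOW DATABASES;"

# Kafka should answer on its internal listener
docker compose exec kafka kafka-topics --bootstrap-server kafka:9092 --list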

2.4.2 Policy API

In the policy-api repo, navigate to the “/main” directory. You can then run the following command to start the policy api:

mvn spring-boot:run
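
Optionally, confirm that the API has started by calling its healthcheck endpoint with the credentials configured in its application.yaml above:

curl -u 'policyadmin:zb!XztG34' http://localhost:6968/policy/api/v1/healthcheck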

2.4.3 Policy PAP

In the policy-pap repo, navigate to the “/main” directory. You can then run the following command to start the policy pap:

mvn spring-boot:run
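
Optionally, confirm that the PAP has started by calling its healthcheck endpoint:

curl -u 'policyadmin:zb!XztG34' http://localhost:6970/policy/pap/v1/healthcheck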

2.4.4 ACM Runtime

To start the clampacm runtime we need to go to the “runtime-acm” directory in the clamp repo. You can then run the following command to start the clampacm runtime:

mvn spring-boot:run

2.4.5 ACM Policy Participant

To start the policy participant we need to go to the “participant/participant-impl/participant-impl-policy” directory in the clamp repo. You can then run the following command to start the policy-participant:

mvn spring-boot:run
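
Once started, the participant registers with the ACM runtime over Kafka. As a rough check (assuming the runtime's participant monitoring endpoint and the default runtimeUser credentials, both of which may differ in your environment), you can list the registered participants:

curl -u 'runtimeUser:zb!XztG34' http://localhost:6969/onap/policy/clamp/acm/v2/participants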

3. Testing Procedure

3.1 Testing Outline

To perform the Smoke testing of the policy-participant we will be verifying the behaviours of the participant when the ACM changes state. The scenarios are:

  • UNDEPLOYED to DEPLOYED: participant creates policies and policyTypes specified in the ToscaServiceTemplate using policy-api and deploys the policies using pap.

  • LOCK to UNLOCK: participant changes lock state to UNLOCK. No operation performed.

  • UNLOCK to LOCK: participant changes lock state to LOCK. No operation performed.

  • DEPLOYED to UNDEPLOYED: participant undeploys deployed policies and deletes policies and policyTypes which have been created.

3.2 Testing Steps

Creation of AC Definition:

An AC Definition is created by commissioning a TOSCA template. Using postman, commission the following TOSCA service template:

Tosca Service Template
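
If you prefer the command line to postman, a curl sketch of the same request is shown below. It assumes the service template has been saved as tosca-service-template.yaml and that the ACM runtime uses the default runtimeUser credentials; adjust both to your setup:

# commission the service template (the response contains the compositionId used below)
curl -u 'runtimeUser:zb!XztG34' -X POST \
  -H 'Content-Type: application/yaml' \
  --data-binary @tosca-service-template.yaml \
  http://localhost:6969/onap/policy/clamp/acm/v2/compositions

# list compositions; the new AC Definition should be in state COMMISSIONED
curl -u 'runtimeUser:zb!XztG34' http://localhost:6969/onap/policy/clamp/acm/v2/compositions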

To verify this, we check that the AC Definition has been created and is in state COMMISSIONED.

../../../_images/pol-part-clampacm-get-composition.png

Priming AC Definition:

The AC Definition state is changed from COMMISSIONED to PRIMED using postman:

{
    "primeOrder": "PRIME"
}
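
As a curl sketch (same assumptions as above, with {compositionId} taken from the commissioning response):

curl -u 'runtimeUser:zb!XztG34' -X PUT \
  -H 'Content-Type: application/json' \
  -d '{"primeOrder": "PRIME"}' \
  http://localhost:6969/onap/policy/clamp/acm/v2/compositions/{compositionId}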

To verify this, we check that the AC Definition has been primed.

../../../_images/pol-part-clampacm-get-primed-composition.png

Creation of AC Instance:

Using postman, instantiate the AC Definition using the following template:

Instantiate ACM
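
As a curl sketch (assuming the instantiation template is saved as instantiate-acm.json):

curl -u 'runtimeUser:zb!XztG34' -X POST \
  -H 'Content-Type: application/json' \
  -d @instantiate-acm.json \
  http://localhost:6969/onap/policy/clamp/acm/v2/compositions/{compositionId}/instances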

To verify this, we check that the AC Instance has been created and is in state UNDEPLOYED.

../../../_images/pol-part-clampacm-creation-ver.png

Creation and deploy of policies and policyTypes:

The AC Instance deploy state is changed from UNDEPLOYED to DEPLOYED using postman:

{
    "deployOrder": "DEPLOY"
}
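
As a curl sketch ({instanceId} is returned when the AC Instance is created):

curl -u 'runtimeUser:zb!XztG34' -X PUT \
  -H 'Content-Type: application/json' \
  -d '{"deployOrder": "DEPLOY"}' \
  http://localhost:6969/onap/policy/clamp/acm/v2/compositions/{compositionId}/instances/{instanceId}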

This state change will trigger the creation of policies and policyTypes using the policy-api and the deployment of the policies specified in the ToscaServiceTemplate. To verify this, we will check, using policy-api endpoints, that the onap.policies.native.apex.ac.element policy, which is specified in the service template, has been created.

../../../_images/pol-part-clampacm-ac-policy-ver.png
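
The same check can be made from the command line; this sketch assumes the policy-api ‘fetch all policies’ endpoint available in recent releases:

curl -u 'policyadmin:zb!XztG34' http://localhost:6968/policy/api/v1/policies | grep onap.policies.native.apex.ac.element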

And we will check that the apex onap.policies.native.apex.ac.element policy has been deployed to the defaultGroup. We check this using pap:

../../../_images/pol-part-clampacm-ac-deploy-ver.png
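
The same PdpGroup query can be made with curl:

curl -u 'policyadmin:zb!XztG34' http://localhost:6970/policy/pap/v1/pdps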

Undeployment and deletion of policies and policyTypes:

The AC Instance deploy state is changed from DEPLOYED to UNDEPLOYED using postman:

{
    "deployOrder": "UNDEPLOY"
}
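
As a curl sketch, this is the same request as the DEPLOY step, with the order changed to UNDEPLOY:

curl -u 'runtimeUser:zb!XztG34' -X PUT \
  -H 'Content-Type: application/json' \
  -d '{"deployOrder": "UNDEPLOY"}' \
  http://localhost:6969/onap/policy/clamp/acm/v2/compositions/{compositionId}/instances/{instanceId}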

This state change will trigger the undeployment of the onap.policies.native.apex.ac.element policy which was deployed previously and the deletion of the previously created policies and policyTypes. To verify this, we do a PdpGroup Query as before and check that the onap.policies.native.apex.ac.element policy has been undeployed and removed from the defaultGroup:

../../../_images/pol-part-clampacm-ac-undep-ver.png

As before, we can check that the Test Policy policyType is not found this time and likewise for the onap.policies.native.apex.ac.element policy:

../../../_images/pol-part-clampacm-test-policy-nf.png