Policy Framework Architecture

Abstract

This document describes the ONAP Policy Framework. It lays out the architecture of the framework and shows the APIs provided to other components that interwork with the framework. It describes the implementation of the framework, mapping out the components, software structure, and execution ecosystem of the framework.

TOSCA Policy Primer

This page gives a short overview of how Policy is modelled in the TOSCA Simple Profile in YAML.

TOSCA defines three concepts for Policy: Policy Type, Policy, and Trigger.

_images/TOSCAPolicyConcepts.svg

Policy Type

A Policy Type is used to specify the types of policies that may be used in a service. For a given policy type, the parameter definitions for policies of that type, the entity types to which they apply, and the triggers for policies of that type may be specified.

The types of policies that are used in a service are defined in the policy_types section of the TOSCA service template as a Policy Type. More formally, TOSCA defines a Policy Type as an artifact that “defines a type of requirement that affects or governs an application or service’s topology at some stage of its life cycle, but is not explicitly part of the topology itself”. In the definition of a Policy Type in TOSCA, you specify:

  • its properties, which define the type of configuration parameters that the policy takes

  • its targets, which define the node types and/or groups to which the policy type applies

  • its triggers, which specify the conditions in which policies of this type are fired

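As an illustrative sketch (all names here are hypothetical, not taken from the TOSCA specification), a Policy Type definition carrying these elements can look like this in a service template:

tosca_definitions_version: tosca_simple_yaml_1_1_0
policy_types:
  org.example.policies.SamplePolicyType:          # hypothetical policy type name
    derived_from: tosca.policies.Root
    version: 1.0.0
    description: an illustrative policy type
    properties:
      sample_threshold:
        type: integer
        required: true
        description: a configuration parameter that policies of this type take
    targets: [ org.example.nodes.SampleService ]  # hypothetical node type
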
Policy

A Policy is used to specify the actual instances of policies that are used in a service. The parameter values of the policy and the actual entities to which it applies may be specified.

The policies that are used in a service are defined in the policies section of the TOSCA topology template as a Policy. More formally, TOSCA defines a Policy as an artifact that “defines a policy that can be associated with a TOSCA topology or top-level entity definition”. In the definition of a Policy in TOSCA, you specify:

  • its properties, which define the values of the configuration parameters that the policy takes

  • its targets, which define the specific nodes and/or groups to which the policy applies

Note that policy triggers are specified on the Policy Type definition and are not specified on the Policy itself.
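
Continuing the sketch above (names remain hypothetical), a Policy that instantiates the sample Policy Type is defined in the topology template:

tosca_definitions_version: tosca_simple_yaml_1_1_0
topology_template:
  policies:
    - org.example.policies.samplePolicy:            # hypothetical policy name
        type: org.example.policies.SamplePolicyType
        version: 1.0.0
        properties:
          sample_threshold: 5                       # value for the type's property
        targets: [ sampleServiceNodeTemplate ]      # hypothetical node template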

Trigger

A Trigger defines an event, condition, and action that is used to initiate execution of a policy associated with it. The definition of the Trigger allows specification of the type of events to trigger on, the filters on those events, conditions and constraints for trigger firing, the action to perform on triggering, and various other parameters.

The triggers that are used in a service are defined as reusable modules in the TOSCA service template as a Trigger. More formally, TOSCA defines a Trigger as an artifact that “defines the event, condition and action that is used to “trigger” a policy it is associated with”. In the definition of a Trigger in TOSCA, you specify:

  • its event_type, which defines the name of the event that fires the policy

  • its schedule, which defines the time interval in which the trigger is active

  • its target_filter, which defines specific filters for firing such as specific characteristics of the nodes or relations for which the trigger should or should not fire

  • its condition, which defines extra conditions on the incoming event for firing the trigger

  • its constraint, which defines extra conditions on the incoming event for not firing the trigger

  • its period, which defines the period to use for evaluating conditions and constraints

  • its evaluations, which defines the number of evaluations that must be performed over the period to assert the condition or constraint exists

  • its method, the method to use for evaluation of conditions and constraints

  • its action, the workflow or operation to invoke when the trigger fires

Note that how a Trigger actually works with a Policy is not clear from the specification.
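
With that caveat, a trigger sketch following the TOSCA Simple Profile 1.1 grammar can be pictured as follows (all names are hypothetical and the field grammar is an approximation of the specification):

triggers:
  threshold_breach:                                  # hypothetical trigger name
    event_type: org.example.events.ThresholdCrossed  # event that fires the policy
    schedule:                                        # interval in which the trigger is active
      start_time: 2024-01-01T00:00:00Z
      end_time: 2024-12-31T23:59:59Z
    target_filter:
      node: org.example.nodes.SampleService          # only fire for this node type
    condition: { sample_metric: [ { greater_than: 100 } ] }
    period: 60 sec                                   # evaluation period
    evaluations: 3                                   # evaluations required over the period
    method: average                                  # evaluation method
    action:
      mitigate:                                      # operation or workflow to invoke
        delegate: mitigation_workflow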

End of Document

1. Overview

The ONAP Policy Framework is a comprehensive policy design, deployment, and execution environment. The Policy Framework is the decision making component in an ONAP system. It allows you to specify, deploy, and execute the governance of the features and functions in your ONAP system, be they closed loop, orchestration, or more traditional open loop use case implementations. The Policy Framework is the component that is the source of truth for all policy decisions.

One of the most important goals of the Policy Framework is to support Policy Driven Operational Management during the execution of ONAP control loops at run time. In addition, use case implementations such as orchestration and control benefit from the ONAP policy Framework because they can use the capabilities of the framework to manage and execute their policies rather than embedding the decision making in their applications.

The Policy Framework is deployment agnostic; it manages Policy Execution (in PDPs) and Enforcement (in PEPs) regardless of how the PDPs and PEPs are deployed. This allows policy execution and enforcement to be deployed in a manner that meets the performance requirements of a given application or use case. In one deployment, policy execution could be deployed in a separate executing entity in a Docker container. In another, policy execution could be co-deployed with an application to increase performance. An example of co-deployment is the Drools PDP Control Loop image, which is a Docker image that combines the ONAP Drools use case application and dependencies with the Drools PDP engine.

The ONAP Policy Framework architecture separates policies from the platform that is supporting them. The framework supports development, deployment, and execution of any type of policy in ONAP. The Policy Framework is metadata (model) driven so that policy development, deployment, and execution is as flexible as possible and can support modern rapid development ways of working such as DevOps. A metadata driven approach also allows the amount of programmed support required for policies to be reduced or ideally eliminated.

We have identified five capabilities as being essential for the framework:

  1. Most obviously, the framework must be capable of being triggered by an event or invoked, and making decisions at run time.

  2. It must be deployment agnostic, capable of managing policies for various Policy Decision Points (PDPs) or policy engines.

  3. It must be metadata driven, allowing policies to be deployed, modified, upgraded, and removed as the system executes.

  4. It must provide a flexible model driven policy design approach for policy type programming and specification of policies.

  5. It must be extensible, allowing straightforward integration of new PDPs, policy formats, and policy development environments.

Another important aim of the architecture of a model driven policy framework is that it enables much more flexible policy specification. The ONAP Policy Framework complies with the TOSCA modelling approach for policies; see the TOSCA Policy Primer for more information on how policies are modeled in TOSCA.

  1. A Policy Type describes the properties, targets, and triggers that the policy for a feature can have. A Policy type is implementation independent. It is the metadata that specifies:

  • the configuration data that the policy can take. The Policy Type describes each property that a policy of a given type can take. A Policy Type definition also allows the default value, optionality, and the ranges of properties to be defined.

  • the targets such as network element types, functions, services, or resources on which a policy of the given type can act.

  • the triggers such as the event type, filtered event, scheduled trigger, or conditions that can activate a policy of the given type.

Policy Types are hierarchical: a Policy Type can inherit from a parent Policy Type, inheriting the properties, targets, and triggers of its parent. Policy Types are developed by domain experts in consultation with the developers that implement the logic and rules for the Policy Type.
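
As a brief sketch of this inheritance (hypothetical names, reused in the VPN example below), a child Policy Type inherits everything its parent defines:

policy_types:
  org.example.policies.Vpn:                # hypothetical parent policy type
    derived_from: tosca.policies.Root
    version: 1.0.0
    properties:
      customerName:
        type: string
  org.example.policies.vpn.Sla:            # child: inherits customerName
    derived_from: org.example.policies.Vpn
    version: 1.0.0
    properties:
      maximumDowntime:
        type: string
      mitigationStrategy:
        type: string
        constraints:
          - valid_values: [ allocateMoreResources, report, ignore ]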

  2. A Policy is defined using a Policy Type. The Policy defines:

  • the values for each property of the policy type

  • the specific targets (network elements, functions, services, resources) on which this policy will act

  • the specific triggers that trigger this policy.

  3. A Policy Type Implementation, or Raw Policy, is the logic that implements the policy. It is implemented by a skilled policy developer in consultation with domain experts. The implementation has software that reads the Policy Type and parses the incoming configuration properties. The software has domain logic that is triggered when one of the triggers described in the Policy Type occurs. The software logic executes and acts on the targets specified in the Policy Type.

For example, a Policy Type could be written to describe how to manage Service Level Agreements for VPNs. The VPN Policy Type can be used to create VPN policies for a bank network, a car dealership network, or a university with many campuses. The Policy Type has two parameters:

  • The maximumDowntime parameter allows the maximum downtime allowed per year to be specified

  • The mitigationStrategy parameter allows one of three strategies to be selected for downtime breaches:

      • allocateMoreResources, which automatically allocates more resources to mitigate the problem

      • report, which reports the downtime breach to a trouble ticketing system

      • ignore, which logs the breach and takes no further action

The Policy Type defines a trigger event, an event that is received from an analytics system when the maximum downtime value for a VPN is breached. The target of the policy type is an instance of the VPN service.

The Policy Type Implementation is developed so that it can configure the maximum downtime parameter in an analytics system, receive a trigger from the analytics system when the maximum downtime is breached, and then either request more resources, report an issue to a trouble ticketing system, or log the breach.

VPN Policies are created by specifying values for the properties, triggers, and targets specified in the VPN Policy Type.

In the case of the bank network, the maximumDowntime threshold is specified as 5 minutes downtime per year and the mitigationStrategy is defined as allocateMoreResources, and the target is specified as being the bank’s VPN service ID. When a breach is detected by the analytics system, the policy is executed, the target is identified as being the bank’s network, and more resources are allocated by the policy.

For the car dealership VPN policy, a less stringent downtime threshold of 60 minutes per year is specified, and the mitigation strategy is to issue a trouble ticket. The university network is best effort, so a downtime of 4 days per year is specified. Breaches are logged and mitigated as routine network administration tasks.
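
Using the hypothetical VPN Policy Type sketched earlier, the bank and car dealership policies differ only in their property values and targets:

topology_template:
  policies:
    - org.example.policies.bankVpnSla:          # hypothetical policy names
        type: org.example.policies.vpn.Sla
        version: 1.0.0
        properties:
          maximumDowntime: 5min
          mitigationStrategy: allocateMoreResources
        targets: [ bankVpnService ]
    - org.example.policies.dealershipVpnSla:
        type: org.example.policies.vpn.Sla
        version: 1.0.0
        properties:
          maximumDowntime: 60min
          mitigationStrategy: report
        targets: [ dealershipVpnService ]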

In ONAP, specific ONAP Policy Types are used to create specific policies that drive the ONAP Platform and Components. For more detailed information on designing Policy Types and developing an implementation for that policy type, see Policy Design and Development.

The ONAP Policy Framework for building, configuring and deploying PDPs is extendable. It allows the use of ONAP PDPs as is, the extension of ONAP PDPs, and lastly provides the capability for users to create and deploy their own PDPs. The ONAP Policy Framework provides distributed policy management for all policies in ONAP at run time. Not only does this provide unified policy access and version control, it provides life cycle control for policies and allows detection of conflicts across all policies running in an ONAP installation.

2. Architecture

The diagram below shows the architecture of the ONAP Policy Framework at its highest level.

_images/PFHighestLevel.svg

The PolicyDevelopment component implements the functionality for development of policy types and policies. PolicyAdministration is responsible for the deployment life cycle of policies as well as interworking with the mechanisms required to orchestrate the nodes and containers on which policies run. PolicyAdministration is also responsible for the administration of policies at run time; ensuring that policies are available to users, that policies are executing correctly, and that the state and status of policies is monitored. PolicyExecution is the set of PDPs running in the ONAP system and is responsible for making policy decisions and for managing the administrative state of the PDPs as directed by PolicyAdministration.

PolicyDevelopment provides APIs that allow creation of policy artifacts and supporting information in the policy database. PolicyAdministration reads those artifacts and the supporting information from the policy database whilst deploying policy artifacts. Once the policy artifacts are deployed, PolicyAdministration handles the run-time management of the PDPs on which the policies are running. PolicyDevelopment interacts with the database, and has no programmatic interface with PolicyAdministration, PolicyExecution or any other run-time ONAP components.

The diagram below shows a more detailed view of the architecture, as inspired by RFC-2753 and RFC-3198.

_images/PFDesignAndAdmin.svg

PolicyDevelopment provides a CRUD API for policy types and policies. The policy types and policy artifacts and their metadata (information about policies, policy types, and their interrelations) are stored in the PolicyDB. The PolicyDevGUI, PolicyDistribution, and other applications such as CLAMP can use the PolicyDevelopment API to create, update, delete, and read policy types and policies.

PolicyAdministration has two important functions:

  • Management of the life cycle of PDPs in an ONAP installation. PDPs register with PolicyAdministration when they come up. PolicyAdministration handles the allocation of PDPs to PDP Groups and PDP Subgroups, so that they can be managed as microservices in infrastructure management systems such as Kubernetes.

  • Management of the deployment of policies to PDPs in an ONAP installation. PolicyAdministration gives each PDP group a set of domain policies to execute.

PolicyAdministration handles PDPs and policy allocation to PDPs using asynchronous messaging over DMaaP. It provides three APIs:

  • a CRUD API for policy groups and subgroups

  • an API that allows the allocation of policies to PDP groups and subgroups to be controlled

  • an API that allows policy execution to be managed, showing the status of policy execution on PDP Groups, subgroups, and individual PDPs as well as the life cycle state of PDPs

PolicyExecution is the set of running PDPs that are executing policies, logically partitioned into PDP groups and subgroups.

_images/PolicyExecution.svg

The figure above shows how PolicyExecution looks at run time with PDPs running in Kubernetes. A PDPGroup is a purely logical construct that collects all the PDPs that are running policies for a particular domain together. A PDPSubGroup is a group of PDPs of the same type that are running the same policies. A PDPSubGroup is deployed as a Kubernetes Deployment. PDPs are defined as Kubernetes Pods. At run time, the actual number of PDPs in each PDPSubGroup is specified in the configuration of the Deployment of that PDPSubGroup in Kubernetes. This structuring of PDPs is required because, in order to simplify deployment and scaling of PDPs in Kubernetes, we gather all the PDPs of the same type that are running the same policies together for deployment.

For example, assume we have policies for the SON (Self Organizing Network) and ACPS (Advanced Customer Premises Service) domains. For SON, we have XACML, Drools, and APEX policies, and for ACPS we have XACML and Drools policies. The table below shows the resulting PDPGroup, PDPSubGroup, and PDP allocations:

PDP Group | PDP Subgroup | Kubernetes Deployment | Kubernetes Deployment Strategy | PDPs in Pods
----------|--------------|-----------------------|--------------------------------|-------------
SON       | SON-XACML    | SON-XACML-Dep         | Always 2, be geo-redundant     | 2 PDP-X
SON       | SON-Drools   | SON-Drools-Dep        | At least 4, scale up on 70% load, scale down on 40% load, be geo-redundant | >= 4 PDP-D
SON       | SON-APEX     | SON-APEX-Dep          | At least 3, scale up on 70% load, scale down on 40% load, be geo-redundant | >= 3 PDP-A
ACPS      | ACPS-XACML   | ACPS-XACML-Dep        | Always 2                       | 2 PDP-X
ACPS      | ACPS-Drools  | ACPS-Drools-Dep       | At least 2, scale up on 80% load, scale down on 50% load | >= 2 PDP-D
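
As a sketch, the SON-Drools subgroup above could be realized as a Kubernetes Deployment along these lines (the manifest is illustrative; the image reference, labels, and names are assumptions, not the ONAP charts):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: son-drools-dep
  namespace: onap
spec:
  replicas: 4                            # actual PDP count for the subgroup
  selector:
    matchLabels:
      app: son-drools-pdp
  template:
    metadata:
      labels:
        app: son-drools-pdp
    spec:
      containers:
        - name: pdp-d
          image: onap/policy-pdpd:latest # illustrative image reference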

For more details on PolicyAdministration APIs and management of PDPGroup and PDPSubGroup, see the documentation for Policy Administration Point (PAP) Architecture.

2.1 Policy Framework Object Model

This section describes the structure of and relations between the main concepts in the Policy Framework. This model is implemented as a common model and is used by PolicyDevelopment, PolicyDeployment, and PolicyExecution.

_images/ClassStructure.svg

The UML class diagram above shows the Policy Framework Object Model.

2.2 Policy Design Architecture

This section describes the architecture of the model driven system used to develop policy types and to create policies using policy types. The output of Policy Design is deployment-ready artifacts and Policy metadata in the Policy Framework database.

Policy types that are expressed via natural language or a model require an implementation that allows them to be translated into runtime policies. Some Policy Type implementations are set up and available in the platform during startup, such as Control Loop Operational Policy Models, OOF placement Models, and DCAE microservice models. Policy type implementations can also be loaded and deployed at run time.

2.2.1 Policy Type Design

Policy Type Design is the task of creating policy types that capture the generic and vendor independent aspects of a policy for a particular domain use case.

All policy types are specified in TOSCA service templates. Once policy types are defined and created in the system, PolicyDevelopment manages them and uses them to allow policies to be created from these policy types in a uniform way regardless of the domain that the policy type is addressing or the PDP technology that will execute the policy.

A PolicyTypeImpl is developed for a policy type for a certain type of PDP (for example XACML oriented for decision policies, Drools rules or Apex state machines oriented for ECA policies). While a policy type is implementation independent, a policy type implementation for a policy type is specific for the technology of the PDP on which policies that use that policy type implementation will execute. A Policy Type may have many implementations. A PolicyTypeImpl is the specification of the specific rules or tasks, the flow of the policy, its internal states and data structures and other relevant information. A PolicyTypeImpl can be specific to a particular policy type or it can be more general, providing the implementation of a class of policy types. Further, the design environment and tool chain for implementing implementations of policy types is specific to the technology of the PDP on which the implementation will run.

In the xacml-pdp and drools-pdp, an application is written for a given category of policy types. Such an application may have logic written in Java or another programming language, and may have additional artifacts such as scripts and SQL queries. The application unmarshals and marshals events going into and out of policies as well as handling the sequencing of events for interactions of the policies with other components in ONAP. For example, drools-applications handles the interactions for operational policies running in the drools PDP. In the apex-pdp, all unmarshaling, marshaling, and component interactions are captured in the state machine, logic, and configuration of the policy; no separate Java application is used.

PolicyDevelopment provides the RESTful Policy Design API, which allows other components to query policy types. Those components can then create policies that specify values for the properties, triggers, and targets specified in a policy type. This API is used by components such as CLAMP and PolicyDistribution to create policies from policy types.

Consider a policy type created for managing faults on vCPE equipment in a vendor independent way. The policy type implementation captures the generic logic required to manage the faults and specifies the vendor specific information that must be supplied to the type for specific vendor vCPE VFs. The actual vCPE policy that is used for managing particular vCPE equipment is created by setting the properties specified in the policy type for that vendor model of vCPE.

2.2.1.1 Generating Policy Types

It is possible to generate policy types using MDD (Model Driven Development) techniques. Policy types are expressed using a DSL (Domain Specific Language) or a policy specification environment for a particular application domain. For example, policy types for specifying SLAs could be expressed in a SLA DSL and policy types for managing SON features could be generated from a visual SON management tool. The ONAP Policy framework provides an API that allows tool chains to create policy types, see the Policy Design and Development page.

_images/PolicyTypeDesign.svg

A GUI implementation in another ONAP component (a PolicyTypeDesignClient) may use the API_User API to create and edit ONAP policy types.

2.2.1.2 Programming Policy Type Implementations

For skilled developers, the most straightforward way to create a policy type is to program it. Programming a policy type might simply mean creating and editing text files, thus manually creating the TOSCA Policy Type YAML file and the policy type implementation for the policy type.

A more formal approach is preferred. For policy type implementations, programmers use a specific Eclipse project type, a Policy Type Implementation SDK, for developing each type of implementation. The project is under source control in git. This Eclipse project is structured correctly for creating implementations for a specific type of PDP. It includes the correct POM files for generating the policy type implementation and has editors and perspectives that aid programmers in their work.

2.2.2 Policy Design

The PolicyCreation function of PolicyDevelopment creates policies from a policy type. The information expressed during policy type design is used to parameterize a policy type to create an executable policy. A service designer and/or operations team can use tooling that reads the TOSCA Policy Type specifications to express and capture a policy at its highest abstraction level. Alternatively, the parameters for the policy can be expressed in a raw JSON or YAML file and posted over the policy design API described on the Policy Design and Development page.

A number of mechanisms for policy creation are supported in ONAP. The process in PolicyDevelopment for creating a policy is the same for all mechanisms. The most general mechanism for creating a policy is using the RESTful Policy Design API, which provides a full interface to the policy creation support of PolicyDevelopment. This API may be exercised directly using utilities such as curl.

In future releases, the Policy Framework may provide a command line tool that will be a loose wrapper around the API. It may also provide a general purpose Policy GUI in the ONAP Portal for policy creation, which again would be a general purpose wrapper around the policy creation API. The Policy GUI would interpret any TOSCA Model that has been loaded into it and flexibly presents a GUI for a user to create policies from. The development of these mechanisms will be phased over a number of ONAP releases.

A number of ONAP components use policy in manners which are specific to their particular needs. The manner in which the policy creation process is triggered and the way in which information required to create a policy is specified and accessed is specialized for these ONAP components.

For example, CLAMP provides a GUI for creation of Control Loop policies, which reads the Policy Type associated with a control loop, presents the properties as fields in its GUI, and creates a policy using the property values entered by the user.

The following subsections outline the mechanisms for policy creation and modification supported by the ONAP Policy Framework.

2.2.2.1 Policy Design in the ONAP Policy Framework

Policy creation in PolicyDevelopment follows the general sequence shown in the sequence diagram below. An API_USER is any component that wants to create a policy from a policy type. PolicyDevelopment supplies a REST interface that exposes the API and also provides a command line tool and general purpose client that wraps the API.

_images/PolicyDesign.svg

An API_User first gets a reference to and the metadata for the Policy type for the policy they want to work on from PolicyDevelopment. PolicyDevelopment reads the metadata and artifact for the policy type from the database. The API_User then asks for a reference and the metadata for the policy. PolicyDevelopment looks up the policy in the database. If the policy already exists, PolicyDevelopment reads the artifact and returns the reference of the existing policy to the API_User with the metadata for the existing policy. If the policy does not exist, PolicyDevelopment informs the API_User.

The API_User may now proceed with a policy specification session, where the parameters are set for the policy using the policy type specification. Once the API_User is happy that the policy is completely and correctly specified, it requests PolicyDevelopment to create the policy. PolicyDevelopment creates the policy and stores the created policy artifact and its metadata in the database.

2.2.2.2 Model Driven VF (Virtual Function) Policy Design via VNF SDK Packaging

VF vendors express policies such as SLA, Licenses, hardware placement, run-time metric suggestions, etc. These details are captured within the VNF SDK and uploaded into the SDC Catalog. The SDC Distribution APIs are used to interact with SDC. For example, SLA and placement policies may be captured via TOSCA specification. License policies can be captured via TOSCA or an XACML specification. Run-time metric vendor recommendations can be captured via the VES Standard specification.

The sequence diagram below is a high level view of SDC-triggered concrete policy generation for some arbitrary entity EntityA. The parameters to create a policy are read from a TOSCA Policy specification read from a CSAR received from SDC.

_images/ModelDrivenPolicyDesign.svg

PolicyDesign uses the PolicyDistribution component for managing SDC-triggered policy creation and update requests. PolicyDistribution is an API_User; it uses the Policy Design API for policy creation and update. It reads the information it needs to populate the policy type from a TOSCA specification in a CSAR received from SDC and then uses this information to automatically generate a policy.

Note that SDC provides a wrapper for the SDC API as a Java Client and also provides a TOSCA parser. See the documentation for the Policy Distribution Component.

In Step 4 above, the PolicyDesign must download the CSAR file. If the policy is to be composed from the TOSCA definition, it must also parse the TOSCA definition.

In Step 11 above, the PolicyDesign must send back/publish status events to SDC such as DOWNLOAD_OK, DOWNLOAD_ERROR, DEPLOY_OK, DEPLOY_ERROR, NOTIFIED.

2.2.2.3 Scripted Model Driven Policy Design

Service policies such as optimization and placement policies can be specified as a TOSCA Policy at design time. These policies use a TOSCA Policy Type specification as their schemas. Therefore, scripts can be used to create TOSCA policies using TOSCA Policy Types.

_images/ScriptedPolicyDesign.svg

One straightforward way of generating policies from policy types is to use commands specified in a script file, with a command line utility such as curl acting as the API_User. The script reads a policy type using the Policy Type API, parses the policy type, and uses the properties of the policy type to prepare a TOSCA Policy. It then issues further commands against the Policy API to create policies.
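
A minimal sketch of such a script's flow, with the REST paths shown only as examples (consult the Policy Design and Development page for the authoritative API):

# 1. Read the policy type, e.g. with curl against the Policy Type API:
#      curl https://policy-api/.../policytypes/org.example.policies.vpn.Sla/versions/1.0.0
# 2. Prepare a TOSCA Policy (policy.yaml) from the returned type's properties:
tosca_definitions_version: tosca_simple_yaml_1_1_0
topology_template:
  policies:
    - org.example.policies.bankVpnSla:       # hypothetical names throughout
        type: org.example.policies.vpn.Sla
        version: 1.0.0
        properties:
          maximumDowntime: 5min
          mitigationStrategy: allocateMoreResources
# 3. Create the policy, e.g. by POSTing policy.yaml to the Policy API with curl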

2.2.3 Policy Design Process

All policy types must be certified as being fit for deployment prior to run time deployment. Where design is executed using the SDC application, it is assumed the life cycle being implemented by SDC certifies any policy types that are declared within the ONAP Service CSAR. For other policy types and policy type implementations, the life cycle associated with the applied software development process suffices. Since policy types and their implementations are designed and implemented using software development best practices, they can be utilized and configured for various environments (eg. development, testing, production) as desired.

2.3 Policy Runtime Architecture

The Policy Framework platform components are themselves designed as microservices that are easy to configure and deploy via Docker images and Kubernetes, both supporting resiliency and scalability if required. PAPs and PDPs are deployed by the underlying ONAP management infrastructure and are designed to comply with the ONAP interfaces for deploying containers.

The PAPs keep track of PDPs, support the deployment of PDP groups, and support the deployment of a policy set across those PDP groups. A PAP is stateless in a RESTful sense. Therefore, if there is more than one PAP deployed, it does not matter which PAP a user contacts to handle a request. The PAP uses the database (persistent storage) to keep track of ongoing sessions with PDPs. Policy management on PDPs is the responsibility of PAPs; management of policy sets or policies in any other manner is not permitted.

In the ONAP Policy Framework, the interfaces to the PDP are designed to be as streamlined as possible. Because the PDP is the main unit of scalability in the Policy Framework, the framework is designed to allow PDPs in a PDP group to arbitrarily appear and disappear and for policy consistency across all PDPs in a PDP group to be easily maintained. Therefore, PDPs have just two interfaces: one that users can use to request policy decisions, and one to the PAP for administration, life cycle management, and monitoring. The PAP is responsible for controlling the state across the PDPs in a PDP group. The PAP interacts with the Policy database, transfers policy sets to PDPs, and may cache the policy sets for PDP groups.

See also Section 2 of the Policy Design and Development page, where the mechanisms for PDP Deployment and Registration with PAP are explained.

2.3.1 Policy Framework Services

The ONAP Policy Framework follows the architectural approach for microservices recommended by the ONAP Architecture Subcommittee.

The ONAP Policy Framework uses an infrastructure such as Kubernetes Services to manage the life cycle of Policy Framework executable components at runtime. A Kubernetes service allows, among other parameters, the number of instances (pods in Kubernetes terminology) that should be deployed for a particular service to be specified and a common endpoint for that service to be defined. Once the service is started in Kubernetes, Kubernetes ensures that the specified number of instances is always kept running. As requests are received on the common endpoint, they are distributed across the service instances. More complex call distribution and instance deployment strategies may be used; please see the Kubernetes Services documentation for those details.

For example, assume a service called policy-pdpd-control-loop is defined that runs 5 PDP-D instances. The service has the endpoint https://policy-pdpd-control-loop.onap/<service-specific-path>. When the service is started, Kubernetes spins up 5 PDP-Ds. Calls to the endpoint https://policy-pdpd-control-loop.onap/<service-specific-path> are distributed across the 5 PDP-D instances. Note that the .onap part of the service endpoint is the namespace being used and is specified for the full ONAP Kubernetes installation.
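
A sketch of such a service definition (illustrative only; the selector labels and ports are assumptions):

apiVersion: v1
kind: Service
metadata:
  name: policy-pdpd-control-loop
  namespace: onap
spec:
  selector:
    app: policy-pdpd-control-loop    # matches the pods of the 5 PDP-D instances
  ports:
    - port: 443                      # endpoint port for the service
      targetPort: 6969               # illustrative PDP container port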

The following services will be required for the ONAP Policy Framework:

Service      | Endpoint                   | Description
-------------|----------------------------|------------
PAP          | https://policy-pap         | The PAP service, used for policy administration and deployment. See Policy Design and Development for details of the API for this service.
PDP-X-domain | https://policy-pdpx-domain | A PDP service is defined for each PDP group. A PDP group is identified by the domain on which it operates. For example, there could be two PDP-X domains, one for admission policies for ONAP proper and another for admission policies for VNFs of operator Supacom, in which case two PDP-X services are defined.
PDP-D-domain | https://policy-pdpd-domain | As for PDP-X, one PDP-D service per PDP-D domain.
PDP-A-domain | https://policy-pdpa-domain | As for PDP-X, one PDP-A service per PDP-A domain.

There is one and only one PAP service, which handles policy deployment, administration, and monitoring for all policies in all PDPs and PDP groups in the system. There are multiple PDP services, one PDP service for each domain for which there are policies.

2.3.2 The Policy Framework Information Structure

The following diagram captures the relationship between Policy Framework concepts at run time.

_images/RuntimeRelationships.svg

There is a one-to-one relationship between a PDP subgroup, a Kubernetes PDP service, and the set of policies assigned to run in the PDP subgroup. Each PDP service runs a single PDP subgroup with multiple PDPs, which executes a specific Policy Set containing a number of policies that have been assigned to that PDP subgroup. Maintaining this principle makes policy deployment and administration much more straightforward than it would be if complex relationships existed between PDP services, PDP subgroups, and policy sets.

The topology of the PDPs and their policy sets is held in the Policy Framework database and is administered by the PAP service.

_images/PolicyDatabase.svg

The diagram above gives an indicative structure of the run time topology information in the Policy Framework database. Note that the PDP_SUBGROUP_STATE and PDP_STATE fields hold state information for life cycle management of PDP groups and PDPs.

2.3.3 Startup, Shutdown and Restart

This section describes the interactions between Policy Framework components themselves and with other ONAP components at startup, shutdown and restart.

2.3.3.1 PAP Startup and Shutdown

The sequence diagram below shows the actions of the PAP at startup.

_images/PAPStartStop.svg

The PAP is the run time point of coordination for the ONAP Policy Framework. When it is started, it initializes itself using data from the database. It then waits for periodic PDP status updates and for administration requests.

PAP shutdown is trivial. On receipt of a shutdown request, the PAP completes or aborts any ongoing operations and shuts down gracefully.

2.3.3.2 PDP Startup and Shutdown

The sequence diagram below shows the actions of the PDP at startup. See also Section 4 of the Policy Design and Development page for the API used to implement this sequence.

_images/PDPStartStop.svg

At startup, the PDP initializes itself. At this point it is in PASSIVE mode. The PDP begins sending periodic Status messages to the PAP. The first Status message triggers the PAP to begin the process of loading the correct Policy Set onto the PDP.

On receipt of a shutdown request, the PDP completes or aborts any ongoing policy executions and shuts down gracefully.

2.3.4 Policy Execution

Policy execution is the execution of a policy in a PDP. Policy enforcement occurs in the component that receives a policy decision.

_images/PolicyExecutionFlow.svg

Policy execution can be synchronous or asynchronous. In synchronous policy execution, the component requesting a policy decision requests a policy decision and waits for the result. The PDP-X and PDP-A implement synchronous policy execution. In asynchronous policy execution, the component that requests a policy decision does not wait for the decision. Indeed, the decision may be passed to another component. The PDP-D and PDP-A implement asynchronous policy execution.

Policy execution is carried out using the current life cycle mode of operation of the PDP. While the actual implementation of the mode may vary somewhat between PDPs of different types, the principles below hold true for all PDP types:

Lifecycle Mode | Behaviour
---------------|----------
PASSIVE MODE   | Policy execution is always rejected irrespective of PDP type.
ACTIVE MODE    | Policy execution is carried out in the live environment by the PDP.
SAFE MODE*     | Policy execution proceeds, but changes to domain state or context are not carried out. The PDP returns an indication that it is running in SAFE mode together with the action it would have performed if it was operating in ACTIVE mode. The PDP type and the policy types it is running must support SAFE mode operation.
TEST MODE*     | Policy execution proceeds and changes to domain and state are carried out in a test or sandbox environment. The PDP returns an indication that it is running in TEST mode together with the action it has performed on the test environment. The PDP type and the policy types it is running must support TEST mode operation.

* SAFE Mode and TEST Mode will be implemented in future versions of the Policy Framework.

2.3.5 Policy Lifecycle Management

Policy lifecycle management manages the deployment and life cycle of policies in PDP groups at run time. Policy sets can be deployed at run time without restarting PDPs or stopping policy execution. PDPs preserve state for minor/patch version upgrades and rollbacks.

2.3.5.1 Load/Update Policies on PDP

The sequence diagram below shows how policies are loaded or updated on a PDP.

_images/DownloadPoliciesToPDP.svg

This sequence can be initiated in two ways: from the PDP or from a user action.

  1. A PDP sends regular status update messages to the PAP. If such a message indicates that the PDP has no policies or outdated policies loaded, then this sequence is initiated.

  2. A user may explicitly trigger this sequence to load policies on a PDP.

The PAP controls the entire process. The PAP reads the current PDP metadata and the required policy and policy set artifacts from the database. It then builds the policy set for the PDP. Once the policies are ready, the PAP sets the mode of the PDP to PASSIVE. The Policy Set is transparently passed to the PDP by the PAP. The PDP loads all the policies in the policy set including any models, rules, tasks, or flows in the policy set in the policy implementations.
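
As an indicative sketch, the message carrying the policy set from the PAP to a PDP can be pictured as follows (the message and field names approximate the PAP-PDP protocol and are not authoritative):

messageName: PDP_UPDATE                    # assumption: update message from PAP
pdpGroup: SON
pdpSubgroup: SON-Drools
policiesToBeDeployed:                      # TOSCA policies for the PDP to load
  - name: org.example.policies.bankVpnSla  # hypothetical policy
    version: 1.0.0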

Once the Policy Set is loaded, the PAP orders the PDP to enter the life cycle mode that has been specified for it (ACTIVE/SAFE*/TEST*). The PDP begins to execute policies in the specified mode (see section 2.3.4).

* SAFE Mode and TEST Mode will be implemented in future versions of the Policy Framework.

2.3.5.2 Policy Rollout

A policy set steps through a number of life cycle modes when it is rolled out.

_images/PolicyRollout.svg

The user defines the set of policies for a PDP group. It is deployed to a PDP group and is initially in PASSIVE mode. The user sets the PDP Group into TEST mode. The policies are run in a test or sandboxed environment for a period of time. The test results are passed back to the user. The user may revert the policy set to PASSIVE mode a number of times and upgrade the policy set during test operation.

When the user is satisfied with policy set execution and when quality criteria have been reached for the policy set, the PDP group is set to run in SAFE mode. In this mode, the policies run on the target environment but do not actually exercise any actions or change any context in the target environment. Again, as in TEST mode, the operator may decide to revert back to TEST mode or even PASSIVE mode if issues arise with a policy set.

Finally, when the user is satisfied with policy set execution and when quality criteria have been reached, the PDP group is set into ACTIVE state and the policy set executes on the target environment. The results of target operation are reported. The PDP group can be reverted to SAFE, TEST, or even PASSIVE mode at any time if problems arise.

* SAFE Mode and TEST Mode will be implemented in future versions of the Policy Framework. In current versions, policies transition directly from PASSIVE mode to ACTIVE mode.

2.3.5.3 Policy Upgrade and Rollback

There are a number of approaches for managing policy upgrade and rollback. Upgrade and rollback will be implemented in future versions of the Policy Framework.

The most straightforward approach is to use the approach described in section 2.3.5.2 Policy Rollout for upgrading and rolling back policy sets. In order to upgrade a policy set, one follows the process in 2.3.5.2 Policy Rollout with the new policy set version. For rollback, one follows the process in 2.3.5.2 Policy Rollout with the older policy set, most probably setting the old policy set into ACTIVE mode immediately. The advantage of this approach is that the approach is straightforward. The obvious disadvantage is that the PDP group is not executing on the target environment while the new policy set is in PASSIVE, TEST, and SAFE mode.

A second way to tackle upgrade and rollback is to use a spare-wheel approach. A special upgrade PDP group service is set up as a Kubernetes service in parallel with the active one during the upgrade procedure. The spare-wheel service is used to execute the process described in 2.3.5.2 Policy Rollout. When the time comes to activate the policy set, the references for the active and spare-wheel services are simply swapped. The advantage of this approach is that the downtime during upgrade is minimized, the spare-wheel PDP group can be abandoned at any time without affecting the in-service PDP group, and the upgrade can be rolled back easily for a period simply by preserving the old service for a time. The disadvantage is that this approach is more complex and uses more resources than the first approach.

A third approach is to have two policy sets running in each PDP, an active set and a standby set. However such an approach would increase the complexity of implementation in PDPs significantly.

2.3.6 Policy Monitoring

PDPs provide a periodic report of their status to the PAP. All PDPs report using a standard reporting format that is extended to provide information for specific PDP types. PDPs provide at least the information below:

Field                 | Description
----------------------|------------
State                 | Lifecycle state (PASSIVE/TEST*/SAFE*/ACTIVE)
Timestamp             | Time the report record was generated
InvocationCount       | The number of execution invocations the PDP has processed since the last report
LastInvocationTime    | The time taken to process the last execution invocation
AverageInvocationTime | The average time taken to process an invocation since the last report
StartTime             | The start time of the PDP
UpTime                | The length of time the PDP has been executing
RealTimeInfo          | Real time information on running policies

* SAFE Mode and TEST Mode will be implemented in future versions of the Policy Framework.

Currently, policy monitoring is supported by PAP and by pdp-apex. Policy monitoring for all PDPs will be supported in future versions of the Policy Framework.

2.3.7 PEP Registration and Enforcement Guidelines

In ONAP there are several applications outside the Policy Framework that enforce policy decisions based on models provided to the Policy Framework. These applications are considered Policy Enforcement Engines (PEP) and roles will be provided to those applications using AAF/CADI to ensure only those applications can make calls to the Policy Decision APIs. Some example PEPs are: DCAE, OOF, and SDNC.

See Section 3.4 of the Policy Design and Development for more information on the Decision APIs.

2.3.8 Multi-Cluster Support

Multi-cluster support was added to the Policy Framework during the Istanbul release, enabling redundancy, load-sharing, and inter-site failover.

Note: multi-cluster support has only been minimally tested, and is thus still experimental.

2.3.8.1 Shared DB

Multi-cluster support requires a shared DB. Rather than spinning up a separate DB for each cluster, all of the clusters are pointed to a common DB. Policy-API adds policy types and policies, while Policy-PAP manages PDP Groups and Subgroups, as well as policy deployments. The information in these tables is not segregated, but is, instead, shared across the API and PAP components across all of the clusters.

_images/MCSharedDB.svg

2.3.8.2 DMaaP Arrangement

As in prior releases, communication between the PAPs and PDPs still takes place via DMaaP. Two arrangements, described below, are supported.

2.3.8.2.1 Local DMaaP

In this arrangement, each cluster is associated with its own, local DMaaP, and communication only happens between PAPs and PDPs within the same cluster.

_images/MCLocalDmaap.svg

The one limitation with this approach is that, when a PAP in cluster A deploys a policy, PAP is only able to inform the PDPs in the local cluster; the PDPs in the other clusters are not made aware of the new deployment until they generate a heartbeat, at which point, their local PAP will inform them of the new deployment. The same is true of changes made to the state of a PDP Group; changes only propagate to PDPs in other clusters in response to heartbeats generated by the PDPs.

_images/MCLocalHB.svg

2.3.8.2.2 Shared DMaaP

In this arrangement, the PAPs and PDPs in all of the clusters are pointed to a common DMaaP. Because the PAP and PDPs all communicate via the same DMaaP, when a PAP deploys a policy, all PDPs are made aware, rather than having to wait for a heartbeat.

_images/MCSharedDmaap.svg

2.3.8.3 Missed Heartbeat

To manage the removal of terminated PDPs from the DB, a record, containing a “last-updated” timestamp, is maintained within the DB for each PDP. Whether using a local or shared DMaaP, any PAP receiving a message from a PDP will update the timestamp in the associated record, thus keeping the records “current”.

_images/MCSharedHB.svg

Periodically, each PAP sweeps the DB of PDP records whose timestamp has not been updated recently. The sweep frequency is based on the value of the “heartbeatMs” configuration parameter, with a record considered expired if no heartbeat has been received for three cycles.
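
A fragment of the relevant PAP configuration might look like this (the parameter structure is an assumption; only heartbeatMs is named in this section):

pdpParameters:
  heartbeatMs: 120000    # expected PDP heartbeat interval
  # assumption: a PDP record is swept from the DB if no heartbeat
  # is received for roughly three of these intervals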

_images/MCMissedHB.svg

3. APIs Provided by the Policy Framework

See the Policy Design and Development page.

4. Terminology

PAP (Policy Administration Point)

A component that administers and manages policies

PDP (Policy Decision Point)

A component that makes policy decisions by executing one or more policy artifacts

PDP_<>

A specific type of PDP

PDP Group

A group of PDPs that execute the same set of policies

Policy Development

The development environment for policies

Policy Type

A generic prototype definition of a type of policy in TOSCA, see the TOSCA Policy Primer

Policy

An executable policy defined in TOSCA and created using a Policy Type, see the TOSCA Policy Primer

Policy Set

A set of policies that are deployed on a PDP group. One and only one Policy Set is deployed on a PDP group

End of Document

Policy Design and Development

This document describes the design principles that should be used to write, deploy, and run policies of various types using the Policy Framework. It explains the APIs that are available for Policy Framework users. It provides copious examples to illustrate policy design and API usage.

The figure below shows the Artifacts (Blue) in the ONAP Policy Framework, the Activities (Yellow) that manipulate them, and important components (Salmon) that interact with them. The Policy Framework is fully TOSCA compliant, and uses TOSCA to model policies. Please see the TOSCA Policy Primer page for an introduction to TOSCA policy concepts.

_images/APIsInPolicyFramework.svg

TOSCA defines the concept of a PolicyType, the definition of a type of policy that can be applied to a service. It also defines the concept of a Policy, an instance of a PolicyType. In the Policy Framework, we handle and manage these TOSCA definitions and tie them to real implementations of policies that can run on PDPs.

The diagram above outlines how this is achieved. Each TOSCA PolicyType must have a corresponding PolicyTypeImpl in the Policy Framework. The TOSCA PolicyType definition can be used to create a TOSCA Policy definition, either directly by the Policy Framework, by CLAMP, or by some other system. Once the Policy artifact exists, it can be used together with the PolicyTypeImpl artifact to create a PolicyImpl artifact. A PolicyImpl artifact is an executable policy implementation that can run on a PDP.

The TOSCA PolicyType artifact defines the external characteristics of the policy, defining its properties, the types of entities it acts on, and its triggers. A PolicyTypeImpl artifact is an XACML, Drools, or APEX implementation of that policy definition. PolicyType and PolicyTypeImpl artifacts may be preloaded, may be loaded manually, or may be created using the Lifecycle API. Alternatively, PolicyType definitions may be loaded over the Lifecycle API for preloaded PolicyTypeImpl artifacts. A TOSCA PolicyType artifact can be used by clients (such as CLAMP or CLI tools) to create, parse, serialize, and/or deserialize an actual Policy.

The TOSCA Policy artifact is used internally by the Policy Framework, or is input by CLAMP or other systems. This artifact specifies the values of the properties for the policy and specifies the specific entities the policy acts on. Policy Design uses the TOSCA Policy artifact and the PolicyTypeImpl artifact to create an executable PolicyImpl artifact.

ONAP Policy Types

Policy Type Design manages TOSCA PolicyType artifacts and their PolicyTypeImpl implementations.

A TOSCA PolicyType may ultimately be defined by the modeling team, but for now policy types are defined by the Policy Framework project. Various editors and GUIs are available for creating PolicyTypeImpl implementations. However, systematic integration of PolicyTypeImpl implementations is outside the scope of the ONAP Dublin release.

The PolicyType definitions and implementations listed below can be preloaded so that they are available for use in the Policy Framework upon platform installation. For a full listing of available preloaded policy types, see the Policy API Preloaded Policy Type List.

Base Policy Type                             | Description
---------------------------------------------|------------
onap.policies.Monitoring                     | Base model that supports Policy driven DCAE microservice components used in Control Loops
onap.policies.controlloop.operational.Common | Base Control Loop operational policy common definitions
onap.policies.controlloop.guard.Common       | Control Loop Guard Policy common definitions
onap.policies.Optimization                   | Base OOF Optimization Policy Type definition
onap.policies.Naming                         | Base SDNC Naming Policy Type definition
onap.policies.Native                         | Base Native Policy Type for PDPs to inherit from in order to provide their own native policy type

Note

The El Alto onap.policies.controlloop.Guard policy types were deprecated and removed in Frankfurt.

1 Base Policy Type: onap.policies.Monitoring

This is a base Policy Type that supports Policy driven DCAE microservice components used in Control Loops. The implementation of this Policy Type is done in the XACML PDP. The Decision API is used by the DCAE Policy Handler to retrieve a decision on which policy to enforce during runtime.

Base Policy Type definition for onap.policies.Monitoring
tosca_definitions_version: tosca_simple_yaml_1_1_0
policy_types:
  onap.policies.Monitoring:
    derived_from: tosca.policies.Root
    version: 1.0.0
    description: a base policy type for all policies that govern monitoring provision

The PolicyTypeImpl implementation of the onap.policies.Monitoring Policy Type is generic to support definition of TOSCA PolicyType artifacts in the Policy Framework using the Policy Type Design API. Therefore many TOSCA PolicyType artifacts will use the same PolicyTypeImpl implementation with different property types and towards different targets. This allows dynamically generated DCAE microservice component Policy Types to be created at Design Time.

Please be sure to name your Policy Type appropriately by prefixing it with onap.policies.monitoring., as in onap.policies.monitoring.Custom. Notice the lowercase “m” in monitoring, which follows TOSCA conventions, and the capitalized “C” for your analytics policy type name.

Example PolicyType onap.policies.monitoring.MyDCAEComponent derived from onap.policies.Monitoring
tosca_definitions_version: tosca_simple_yaml_1_1_0
policy_types:
  onap.policies.monitoring.MyDCAEComponent:
    derived_from: onap.policies.Monitoring
    version: 1.0.0
    properties:
      my_property_1:
        type: string
        description: A description of this property

For more examples of monitoring policy type definitions, please refer to the examples in the ONAP policy-models gerrit repository. Please note that some of the examples do not adhere to TOSCA naming conventions due to backward compatibility.

2 Base Policy Type onap.policies.controlloop.operational.Common

This is the new Operational Policy Type introduced in Frankfurt release to fully support TOSCA Policy Type. There are common properties and datatypes that are independent of the PDP engine used to enforce this Policy Type.

Operational Policy Type Inheritance

2.1 onap.policies.controlloop.operational.common.Drools

Drools PDP Control Loop Operational Policy definition extends the base common policy type by adding a property for controllerName.

Please see the definition of the Drools Operational Policy Type

2.2 onap.policies.controlloop.operational.common.Apex

Apex PDP Control Loop Operational Policy definition extends the base common policy type by adding additional properties.

Please see the definition of the Apex Operational Policy Type

3 Base Policy Type: onap.policies.controlloop.guard.Common

This base policy type is the type definition for Control Loop guard policies for frequency limiting, blacklisting, and min/max guards that help protect runtime Control Loop Actions from doing harm to the network. This policy type is developed using the XACML PDP to support question/answer Policy Decisions during runtime for the Drools and APEX onap.controlloop.Operational policy type implementations.

Guard Policy Type Inheritance

Please see the definition of the Common Guard Policy Type

3.1 Frequency Limiter Guard onap.policies.controlloop.guard.common.FrequencyLimiter

The frequency limiter supports limiting the frequency of actions being taken by an Actor.

Please see the definition of the Guard Frequency Limiter Policy Type

3.2 Min/Max Guard onap.policies.controlloop.guard.common.MinMax

The Min/Max Guard supports Min/Max number of entity for scaling operations.

Please see the definition of the Guard Min/Max Policy Type

3.3 Blacklist Guard onap.policies.controlloop.guard.common.Blacklist

The Blacklist Guard Supports blacklisting control loop actions from being performed on specific entity id’s.

Please see the definition of the Guard Blacklist Policy Type

3.4 Filter Guard onap.policies.controlloop.guard.common.Filter

The Filter Guard Supports filtering control loop actions from being performed on specific entity id’s.

Please see the definition of the Guard Filter Policy Type

4 Optimization onap.policies.Optimization

The Optimization Base Policy Type supports the OOF optimization policies. The Base policy Type has common properties shared by all its derived policy types.

Optimization Policy Type Inheritance

Please see the definition of the Base Optimization Policy Type.

These Policy Types are unique in that some properties have an additional metadata property matchable set to true, which indicates that the property can be used to support more fine-grained Policy Decisions. For more information, see the XACML Optimization application implementation.
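
A property carrying this metadata can be sketched as follows (the property name is hypothetical):

properties:
  services:
    type: list
    description: services this optimization policy applies to
    metadata:
      matchable: true      # exposes this property for fine-grained decisions
    entry_schema:
      type: string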

4.1 Optimization Service Policy Type onap.policies.optimization.Service

This policy type further extends the base onap.policies.Optimization type by defining additional properties specific to a service. For more information:

Service Optimization Base Policy Type

Several additional policy types inherit from the Service Optimization Policy Type. For more information, XACML Optimization application implementation.

4.2 Optimization Resource Policy Type onap.policies.optimization.Resource

This policy type further extends the base onap.policies.Optimization type by defining additional properties specific to a resource. For more information:

Resource Optimization Base Policy Type

Several additional policy types inherit from the Resource Optimization Policy Type. For more information, see the XACML Optimization application implementation.

5 Naming onap.policies.Naming

Naming policies are used in SDNC to enforce which naming policy should be used during instantiation.

Policies of this type are composed using the Naming Policy Type Model.

6 Native Policy Types onap.policies.Native

This is the Base Policy Type used by PDP engines to support their native language policies. PDP engines inherit from this base policy type to implement support for their own custom policy type:

tosca_definitions_version: tosca_simple_yaml_1_1_0
policy_types:
    onap.policies.Native:
        derived_from: tosca.policies.Root
        description: a base policy type for all native PDP policies
        version: 1.0.0
6.1 Policy Type: onap.policies.native.drools.Controller

This policy type supports creation of native PDP-D controllers via policy. A controller is an abstraction on the PDP-D that groups communication channels, message mapping rules, and any other arbitrary configuration data to realize an application.

Policies of this type are composed using the onap.policies.native.drools.Controller policy type specification.
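As a rough sketch under stated assumptions, a controller policy might look like the following; only controllerName is a property named in this document, and the remaining content is a hypothetical placeholder for the controller's configuration data.

topology_template:
  policies:
    - example.controller:                # hypothetical policy name
        type: onap.policies.native.drools.Controller
        type_version: 1.0.0
        version: 1.0.0
        properties:
          controllerName: exampleApp     # the PDP-D controller this policy realizes
          # communication channels, message mapping rules, and other arbitrary
          # configuration data for the controller would be given here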

6.2 Policy Type: onap.policies.native.drools.Artifact

This policy type supports the dynamic association of a native PDP-D controller with rules and dependent java libraries. This policy type is used in conjunction with the onap.policies.native.drools.Controller type to create or upgrade a drools application on a live PDP-D.

Policies of this type are composed against the onap.policies.native.drools.Controller policy type specification.

6.3 Policy Type: onap.policies.native.Xacml

This policy type supports XACML OASIS 3.0 XML policies. The policies are URL encoded so that they can be easily transported via the Lifecycle API JSON and YAML Content-Types. When deployed to the XACML PDP (PDP-X), they are managed by the native application. The PDP-X routes XACML Request/Response RESTful API calls to the native application, which manages those decisions.

XACML Native Policy Type
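Purely as an illustration of the URL encoding described above, such a policy might be wrapped as follows; the property name holding the encoded document is an assumption here, so check the XACML Native Policy Type definition for the actual schema.

topology_template:
  policies:
    - example.native.xacml:              # hypothetical policy name
        type: onap.policies.native.Xacml
        type_version: 1.0.0
        version: 1.0.0
        properties:
          # URL-encoded XACML XML document, truncated; the property name is an assumption
          policy: "%3C%3Fxml%20version%3D%221.0%22%20encoding%3D%22UTF-8%22%3F%3E..."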

6.4 Policy Type: onap.policies.native.Apex

This policy type supports Apex native policy types.

Apex Native Policy Type

Policy Offered APIs

The Policy Framework supports the public APIs listed in the links below:

Policy Life Cycle API

The purpose of this API is to support CRUD of TOSCA PolicyType and Policy entities. This API is provided by the PolicyDevelopment component of the Policy Framework; see The ONAP Policy Framework Architecture page. The Policy design API backend runs as an independent building block component of the policy framework that provides REST services for the aforementioned CRUD behaviors. The Policy design API component interacts with a policy database for storing and fetching new policies or policy types as needed. Apart from CRUD, an API is also exposed for clients to retrieve the healthcheck status of the API REST service and a statistics report that includes a variety of counters reflecting the history of API invocations.

We strictly follow the TOSCA specification to define policy types and policies. A policy type defines the schema for a policy, expressing the properties, targets, and triggers that a policy may have. The type (string, int, etc.) and constraints (such as the range of legal values) of each property are defined in the Policy Type. Both Policy Types and Policies are included in a TOSCA Service Template, which is used as the entity passed into an API POST call and the entity returned by API GET and DELETE calls. More details are presented in the following sections. Policy Types and Policies can be composed for any given domain of application. All Policy Types and Policies must be composed as well-formed TOSCA Service Templates. One Service Template can contain multiple policies and policy types.

Child policy types can inherit from parent policy types, so a hierarchy of policy types can be built up. For example, the HpaPolicy Policy Type in the listing below is a child of the Resource Policy Type, which is in turn a child of the Optimization Policy Type. See also the examples in GitHub.

onap.policies.Optimization.yaml
 onap.policies.optimization.Resource.yaml
  onap.policies.optimization.resource.AffinityPolicy.yaml
  onap.policies.optimization.resource.DistancePolicy.yaml
  onap.policies.optimization.resource.HpaPolicy.yaml
  onap.policies.optimization.resource.OptimizationPolicy.yaml
  onap.policies.optimization.resource.PciPolicy.yaml
  onap.policies.optimization.resource.Vim_fit.yaml
  onap.policies.optimization.resource.VnfPolicy.yaml
 onap.policies.optimization.Service.yaml
  onap.policies.optimization.service.QueryPolicy.yaml
  onap.policies.optimization.service.SubscriberPolicy.yaml

Custom data types can be defined in TOSCA for properties specified in Policy Types. Data types can also inherit from parents, so a hierarchy of data types can also be built up.
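A minimal sketch of both inheritance mechanisms, using hypothetical names throughout:

tosca_definitions_version: tosca_simple_yaml_1_1_0
data_types:
  onap.datatypes.example.Threshold:      # hypothetical custom data type
    derived_from: tosca.datatypes.Root
    properties:
      limit:
        type: integer
policy_types:
  onap.policies.example.Parent:          # hypothetical parent policy type
    derived_from: tosca.policies.Root
    version: 1.0.0
  onap.policies.example.Child:           # hypothetical child, inherits from Parent
    derived_from: onap.policies.example.Parent
    version: 1.0.0
    properties:
      threshold:
        type: onap.datatypes.example.Threshold   # property typed by the custom data type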

Warning

When creating a Policy Type, the ancestors of the Policy Type and all its custom Data Type definitions and ancestors MUST either already exist in the database or MUST also be defined in the incoming TOSCA Service Template. Requests with missing or bad references are rejected by the API.

Each Policy Type can have multiple Policy instances created from it. Therefore, many Policy instances of the HpaPolicy Policy Type above can be created. When a policy is created, its Policy Type is specified in the type and type_version fields of the policy.
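For instance, a policy instantiating the HpaPolicy Policy Type mentioned above would reference it in its type and type_version fields, along these lines; the policy name and empty properties are placeholders.

topology_template:
  policies:
    - example.hpa.policy:                # hypothetical policy name
        type: onap.policies.optimization.resource.HpaPolicy   # the Policy Type
        type_version: 1.0.0              # the Policy Type version
        version: 1.0.0
        properties: {}                   # values conforming to the HpaPolicy schema go here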

Warning

The Policy Type specified for a Policy MUST exist in the database before the policy can be created. Requests with missing or bad Policy Type references are rejected by the API.

The API allows applications to create, update, delete, and query PolicyType entities so that they become available for use in ONAP by applications such as CLAMP. Some Policy Type entities are preloaded in the Policy Framework.

Warning

If a TOSCA entity (Data Type, Policy Type, or Policy with a certain version) already exists in the database and an attempt is made to re-create the entity with different fields, the API will reject the request with the error message “entity in incoming fragment does not equal existing entity”. In such cases, delete the Policy or Policy Type and re-create it using the API.

The TOSCA fields below are valid on API calls:

Field         GET  POST  DELETE  Comment
(name)        M    M     M       The definition of the reference to the Policy Type; GET allows ranges to be specified
version       O    M     C       GET allows ranges to be specified; must be specified if more than one version of the Policy Type exists and a specific version is required
description   R    O     N/A     Description of the Policy Type
derived_from  R    C     N/A     Must be specified when a Policy Type is derived from another Policy Type, such as in the case of derived Monitoring Policy Types. The referenced Policy Type must either already exist in the database or be defined as another policy type in the incoming TOSCA service template
metadata      R    O     N/A     Metadata for the Policy Type
properties    R    M     N/A     Holds the specification of the specific Policy Type in ONAP. Any user-defined data types specified on properties must either already exist in the database or be defined in the incoming TOSCA service template
targets       R    O     N/A     A list of node types and/or group types to which the Policy Type can be applied
triggers      R    O     N/A     Specification of policy triggers; not currently supported in ONAP

Note

On this and subsequent tables, use the following legend: M-Mandatory, O-Optional, R-Read-only, C-Conditional. Conditional means the field is mandatory when some other field is present.

Note

Preloaded policy types may only be queried over this API; modification or deletion of preloaded policy type implementations is disabled.

Note

Policy types that are in use (referenced by defined Policies and/or child policy types) may not be deleted.

Note

The group types of targets in TOSCA are groups of TOSCA nodes, not PDP groups; the target concept in TOSCA is equivalent to the Policy Enforcement Point (PEP) concept

To ease policy creation, we preload several widely used policy types in the policy database. Below is a table listing the preloaded policy types.

Policy Type Name                                  Payload
Monitoring.TCA                                    onap.policies.monitoring.tcagen2.yaml
Monitoring.Collectors                             onap.policies.monitoring.dcaegen2.collectors.datafile.datafile-app-server.yaml
Optimization                                      onap.policies.Optimization.yaml
Optimization.Resource                             onap.policies.optimization.Resource.yaml
Optimization.Resource.AffinityPolicy              onap.policies.optimization.resource.AffinityPolicy.yaml
Optimization.Resource.DistancePolicy              onap.policies.optimization.resource.DistancePolicy.yaml
Optimization.Resource.HpaPolicy                   onap.policies.optimization.resource.HpaPolicy.yaml
Optimization.Resource.OptimizationPolicy          onap.policies.optimization.resource.OptimizationPolicy.yaml
Optimization.Resource.PciPolicy                   onap.policies.optimization.resource.PciPolicy.yaml
Optimization.Resource.Vim_fit                     onap.policies.optimization.resource.Vim_fit.yaml
Optimization.Resource.VnfPolicy                   onap.policies.optimization.resource.VnfPolicy.yaml
Optimization.Service                              onap.policies.optimization.Service.yaml
Optimization.Service.QueryPolicy                  onap.policies.optimization.service.QueryPolicy.yaml
Optimization.Service.SubscriberPolicy             onap.policies.optimization.service.SubscriberPolicy.yaml
Controlloop.Guard.Common                          onap.policies.controlloop.guard.Common.yaml
Controlloop.Guard.Common.Blacklist                onap.policies.controlloop.guard.common.Blacklist.yaml
Controlloop.Guard.Common.FrequencyLimiter         onap.policies.controlloop.guard.common.FrequencyLimiter.yaml
Controlloop.Guard.Common.MinMax                   onap.policies.controlloop.guard.common.MinMax.yaml
Controlloop.Guard.Common.Filter                   onap.policies.controlloop.guard.common.Filter.yaml
Controlloop.Guard.Coordination.FirstBlocksSecond  onap.policies.controlloop.guard.coordination.FirstBlocksSecond.yaml
Controlloop.Operational.Common                    onap.policies.controlloop.operational.Common.yaml
Controlloop.Operational.Common.Apex               onap.policies.controlloop.operational.common.Apex.yaml
Controlloop.Operational.Common.Drools             onap.policies.controlloop.operational.common.Drools.yaml
Naming                                            onap.policies.Naming.yaml
Native.Drools                                     onap.policies.native.Drools.yaml
Native.Xacml                                      onap.policies.native.Xacml.yaml
Native.Apex                                       onap.policies.native.Apex.yaml

We also preload a policy in the policy database. Below is a table listing the preloaded policy.

Policy Name  Payload
SDNC.Naming  sdnc.policy.naming.input.tosca.yaml

Below is a table containing sample well-formed TOSCA compliant policies.

Policy Name                         Payload
vCPE.Monitoring.Tosca               vCPE.policy.monitoring.input.tosca.yaml, vCPE.policy.monitoring.input.tosca.json
vCPE.Optimization.Tosca             vCPE.policies.optimization.input.tosca.yaml, vCPE.policies.optimization.input.tosca.json
vCPE.Operational.Tosca              vCPE.policy.operational.input.tosca.yaml, vCPE.policy.operational.input.tosca.json
vDNS.Guard.FrequencyLimiting.Tosca  vDNS.policy.guard.frequencylimiter.input.tosca.yaml
vDNS.Guard.MinMax.Tosca             vDNS.policy.guard.minmaxvnfs.input.tosca.yaml
vDNS.Guard.Blacklist.Tosca          vDNS.policy.guard.blacklist.input.tosca.yaml
vDNS.Monitoring.Tosca               vDNS.policy.monitoring.input.tosca.yaml, vDNS.policy.monitoring.input.tosca.json
vDNS.Operational.Tosca              vDNS.policy.operational.input.tosca.yaml, vDNS.policy.operational.input.tosca.json
vFirewall.Monitoring.Tosca          vFirewall.policy.monitoring.input.tosca.yaml, vFirewall.policy.monitoring.input.tosca.json
vFirewall.Operational.Tosca         vFirewall.policy.operational.input.tosca.yaml, vFirewall.policy.operational.input.tosca.json
vFirewallCDS.Operational.Tosca      vFirewallCDS.policy.operational.input.tosca.yaml

Below is a global API table from which the Swagger JSON for each type of policy design API can be downloaded.

Global API Table

API name               Swagger JSON
Healthcheck API        link
Statistics API         link
Tosca Policy Type API  link
Tosca Policy API       link

API Swagger

It is worth noting that we use basic authorization for API access, with username and password set to healthcheck and zb!XztG34, respectively. Also, the APIs support both HTTP and HTTPS.

For every API call, the client is encouraged to insert a UUID-type requestID as a parameter. It is helpful for tracking each HTTP transaction and facilitates debugging. Most importantly, it complies with Logging requirements v1.2. If a client does not provide the requestID in an API call, one will be randomly generated and attached to the response header x-onap-requestid.

In accordance with ONAP API Common Versioning Strategy Guidelines, in the response of each API call, several custom headers are added:

x-latestversion: 1.0.0
x-minorversion: 0
x-patchversion: 0
x-onap-requestid: e1763e61-9eef-4911-b952-1be1edd9812b
x-latestversion is used only to communicate an API's latest version.

x-minorversion is used to request or communicate a MINOR version back from the client to the server, and from the server back to the client.

x-patchversion is used only to communicate a PATCH version in a response for troubleshooting purposes only, and will not be provided by the client on request.

x-onap-requestid is used to track REST transactions for logging purpose, as described above.

HealthCheck

GET /policy/api/v1/healthcheck

Perform a system healthcheck

  • Description: Returns healthy status of the Policy API component

  • Produces: [‘application/json’, ‘application/yaml’]

Parameters

Name              Position  Description                     Type
X-ONAP-RequestID  header    RequestID for http transaction  string

Responses

200 - successful operation; Healthcheck report will be returned.

401 - Authentication Error

403 - Authorization Error

500 - Internal Server Error

Statistics

GET /policy/api/v1/statistics

Retrieve current statistics

  • Description: Returns current statistics including the counters of API invocation

  • Produces: [‘application/json’, ‘application/yaml’]

Parameters

Name              Position  Description                     Type
X-ONAP-RequestID  header    RequestID for http transaction  string

Responses

200 - successful operation; All statistics counters of API invocation will be returned.

401 - Authentication Error

403 - Authorization Error

500 - Internal Server Error

PolicyType

GET /policy/api/v1/policytypes

Retrieve existing policy types

  • Description: Returns a list of existing policy types stored in Policy Framework

  • Produces: [‘application/json’, ‘application/yaml’]

Parameters

Name              Position  Description                     Type
X-ONAP-RequestID  header    RequestID for http transaction  string

Responses

200 - successful operation; All policy types will be returned.

401 - Authentication Error

403 - Authorization Error

500 - Internal Server Error

POST /policy/api/v1/policytypes

Create a new policy type

  • Description: Create a new policy type. Client should provide TOSCA body of the new policy type

  • Consumes: [‘application/json’, ‘application/yaml’]

  • Produces: [‘application/json’, ‘application/yaml’]

Parameters

Name              Position  Description                     Type
body              body      Entity body of policy type      ToscaServiceTemplate
X-ONAP-RequestID  header    RequestID for http transaction  string

Responses

200 - successful operation; The newly created policy type will be returned.

400 - Invalid Body

401 - Authentication Error

403 - Authorization Error

406 - Not Acceptable Version

500 - Internal Server Error

GET /policy/api/v1/policytypes/{policyTypeId}

Retrieve all available versions of a policy type

  • Description: Returns a list of all available versions for the specified policy type

  • Produces: [‘application/json’, ‘application/yaml’]

Parameters

Name              Position  Description                     Type
policyTypeId      path      ID of policy type               string
X-ONAP-RequestID  header    RequestID for http transaction  string

Responses

200 - successful operation; All versions of specified policy type will be returned.

401 - Authentication Error

403 - Authorization Error

404 - Resource Not Found

500 - Internal Server Error

GET /policy/api/v1/policytypes/{policyTypeId}/versions/{versionId}

Retrieve one particular version of a policy type

  • Description: Returns a particular version for the specified policy type

  • Produces: [‘application/json’, ‘application/yaml’]

Parameters

Name              Position  Description                     Type
policyTypeId      path      ID of policy type               string
versionId         path      Version of policy type          string
X-ONAP-RequestID  header    RequestID for http transaction  string

Responses

200 - successful operation; One specified version of specified policy type will be returned.

401 - Authentication Error

403 - Authorization Error

404 - Resource Not Found

500 - Internal Server Error

DELETE /policy/api/v1/policytypes/{policyTypeId}/versions/{versionId}

Delete one version of a policy type

  • Description: Delete one version of a policy type. It must follow two rules. Rule 1: pre-defined policy types cannot be deleted; Rule 2: policy types that are in use (parameterized by a TOSCA policy) cannot be deleted. The parameterizing TOSCA policies must be deleted first.

  • Produces: [‘application/json’, ‘application/yaml’]

Parameters

Name              Position  Description                     Type
policyTypeId      path      ID of policy type               string
versionId         path      Version of policy type          string
X-ONAP-RequestID  header    RequestID for http transaction  string

Responses

200 - successful operation; Newly deleted policy type will be returned.

401 - Authentication Error

403 - Authorization Error

404 - Resource Not Found

409 - Delete Conflict, Rule Violation

500 - Internal Server Error

GET /policy/api/v1/policytypes/{policyTypeId}/versions/latest

Retrieve latest version of a policy type

  • Description: Returns latest version for the specified policy type

  • Produces: [‘application/json’, ‘application/yaml’]

Parameters

Name              Position  Description                     Type
policyTypeId      path      ID of policy type               string
X-ONAP-RequestID  header    RequestID for http transaction  string

Responses

200 - successful operation; Latest version of specified policy type will be returned.

401 - Authentication Error

403 - Authorization Error

404 - Resource Not Found

500 - Internal Server Error

Policy

GET /policy/api/v1/policytypes/{policyTypeId}/versions/{policyTypeVersion}/policies

Retrieve all versions of a policy created for a particular policy type version

  • Description: Returns a list of all versions of specified policy created for the specified policy type version

  • Produces: [‘application/json’, ‘application/yaml’]

Parameters

Name               Position  Description                     Type
policyTypeId       path      ID of policy type               string
policyTypeVersion  path      Version of policy type          string
X-ONAP-RequestID   header    RequestID for http transaction  string

Responses

200 - successful operation; All policies matching specified policy type will be returned.

401 - Authentication Error

403 - Authorization Error

404 - Resource Not Found

500 - Internal Server Error

POST /policy/api/v1/policytypes/{policyTypeId}/versions/{policyTypeVersion}/policies

Create a new policy for a policy type version

  • Description: Create a new policy for a policy type. Client should provide TOSCA body of the new policy

  • Consumes: [‘application/json’, ‘application/yaml’]

  • Produces: [‘application/json’, ‘application/yaml’]

Parameters

Name               Position  Description                     Type
policyTypeId       path      ID of policy type               string
policyTypeVersion  path      Version of policy type          string
X-ONAP-RequestID   header    RequestID for http transaction  string
body               body      Entity body of policy           ToscaServiceTemplate

Responses

200 - successful operation; Newly created policy matching specified policy type will be returned.

400 - Invalid Body

401 - Authentication Error

403 - Authorization Error

404 - Resource Not Found

406 - Not Acceptable Version

500 - Internal Server Error

GET /policy/api/v1/policytypes/{policyTypeId}/versions/{policyTypeVersion}/policies/{policyId}

Retrieve all version details of a policy created for a particular policy type version

  • Description: Returns a list of all version details of the specified policy

  • Produces: [‘application/json’, ‘application/yaml’]

Parameters

Name               Position  Description                     Type
policyTypeId       path      ID of policy type               string
policyTypeVersion  path      Version of policy type          string
policyId           path      ID of policy                    string
X-ONAP-RequestID   header    RequestID for http transaction  string

Responses

200 - successful operation; All versions of specified policy matching specified policy type will be returned.

401 - Authentication Error

403 - Authorization Error

404 - Resource Not Found

500 - Internal Server Error

GET /policy/api/v1/policytypes/{policyTypeId}/versions/{policyTypeVersion}/policies/{policyId}/versions/{policyVersion}

Retrieve one version of a policy created for a particular policy type version

  • Description: Returns a particular version of specified policy created for the specified policy type version

  • Produces: [‘application/json’, ‘application/yaml’]

Parameters

Name               Position  Description                     Type
policyTypeId       path      ID of policy type               string
policyTypeVersion  path      Version of policy type          string
policyId           path      ID of policy                    string
policyVersion      path      Version of policy               string
X-ONAP-RequestID   header    RequestID for http transaction  string

Responses

200 - successful operation; The specified policy matching specified policy type will be returned.

401 - Authentication Error

403 - Authorization Error

404 - Resource Not Found

500 - Internal Server Error

DELETE /policy/api/v1/policytypes/{policyTypeId}/versions/{policyTypeVersion}/policies/{policyId}/versions/{policyVersion}

Delete a particular version of a policy

  • Description: Delete a particular version of a policy. It must follow one rule. Rule: the version that has been deployed in PDP group(s) cannot be deleted

  • Produces: [‘application/json’, ‘application/yaml’]

Parameters

Name               Position  Description                     Type
policyTypeId       path      PolicyType ID                   string
policyTypeVersion  path      Version of policy type          string
policyId           path      ID of policy                    string
policyVersion      path      Version of policy               string
X-ONAP-RequestID   header    RequestID for http transaction  string

Responses

200 - successful operation; Newly deleted policy matching specified policy type will be returned.

401 - Authentication Error

403 - Authorization Error

404 - Resource Not Found

409 - Delete Conflict, Rule Violation

500 - Internal Server Error

GET /policy/api/v1/policytypes/{policyTypeId}/versions/{policyTypeVersion}/policies/{policyId}/versions/latest

Retrieve the latest version of a particular policy

  • Description: Returns the latest version of specified policy

  • Produces: [‘application/json’, ‘application/yaml’]

Parameters

Name               Position  Description                     Type
policyTypeId       path      ID of policy type               string
policyTypeVersion  path      Version of policy type          string
policyId           path      ID of policy                    string
X-ONAP-RequestID   header    RequestID for http transaction  string

Responses

200 - successful operation; Latest version of specified policy matching specified policy type will be returned.

401 - Authentication Error

403 - Authorization Error

404 - Resource Not Found

500 - Internal Server Error

GET /policy/api/v1/policies/{policyId}/versions/{policyVersion}

Retrieve specific version of a specified policy

  • Description: Returns a particular version of specified policy

  • Consumes: [‘application/json’, ‘application/yaml’]

  • Produces: [‘application/json’, ‘application/yaml’]

Parameters

Name              Position  Description                                                                                          Type
policyId          path      Name of policy                                                                                       string
policyVersion     path      Version of policy                                                                                    string
X-ONAP-RequestID  header    RequestID for http transaction                                                                       string
mode              query     Fetch mode for policies: BARE for bare policies (default), REFERENCED for fully referenced policies  string

Responses

200 - successful operation

401 - Authentication Error

403 - Authorization Error

404 - Resource Not Found

500 - Internal Server Error

DELETE /policy/api/v1/policies/{policyId}/versions/{policyVersion}

Delete a particular version of a policy

  • Description: Rule: the version that has been deployed in PDP group(s) cannot be deleted

  • Consumes: [‘application/json’, ‘application/yaml’]

  • Produces: [‘application/json’, ‘application/yaml’]

Parameters

Name              Position  Description                     Type
policyId          path      ID of policy                    string
policyVersion     path      Version of policy               string
X-ONAP-RequestID  header    RequestID for http transaction  string

Responses

200 - successful operation

401 - Authentication Error

403 - Authorization Error

404 - Resource Not Found

409 - Delete Conflict, Rule Violation

500 - Internal Server Error

GET /policy/api/v1/policies

Retrieve all versions of available policies

  • Description: Returns all versions of available policies

  • Consumes: [‘application/json’, ‘application/yaml’]

  • Produces: [‘application/json’, ‘application/yaml’]

Parameters

Name              Position  Description                                                                                          Type
X-ONAP-RequestID  header    RequestID for http transaction                                                                       string
mode              query     Fetch mode for policies: BARE for bare policies (default), REFERENCED for fully referenced policies  string

Responses

200 - successful operation

401 - Authentication Error

403 - Authorization Error

404 - Resource Not Found

500 - Internal Server Error

POST /policy/api/v1/policies

Create one or more new policies

  • Description: Create one or more new policies. Client should provide TOSCA body of the new policies

  • Consumes: [‘application/json’, ‘application/yaml’]

  • Produces: [‘application/json’, ‘application/yaml’]

Parameters

Name              Position  Description                     Type
X-ONAP-RequestID  header    RequestID for http transaction  string
body              body      Entity body of policies         ToscaServiceTemplate

Responses

200 - successful operation; Newly created policies will be returned.

400 - Invalid Body

401 - Authentication Error

403 - Authorization Error

404 - Resource Not Found

406 - Not Acceptable Version

500 - Internal Server Error

When making a POST policy API call, the client must not only provide well-formed JSON/YAML, but must also conform to the TOSCA specification. For example, the “type” field for a TOSCA policy should strictly match the policy type name it derives from. Please check out the sample policies in the policy table above.

Also, in the POST payload passed into each policy or policy type creation call (i.e. a POST API invocation), the client needs to explicitly specify the version of the policy or policy type to create. That is, the “version” field is mandatory in the TOSCA service template formatted policy or policy type payload. If the version is missing, the POST call will return “406 - Not Acceptable” and the policy or policy type will not be stored in the database.

To avoid inconsistent versions between the database and policies deployed in the PDPs, the policy API REST service employs enforcement rules that validate the version specified in the POST payload when a new version is to be created or an existing version is to be updated. Policy API will not blindly override the version of the policy or policy type to create or update. Instead, we encourage the client to carefully select a version for the policy or policy type to change; policy API will check the validity of the version and return an informative warning to the client if the specified version is not valid. Specifically, the following rules are implemented to enforce the version:

  1. If the incoming version is not in the database, we simply insert it. For example: if policy version 1.0.0 is stored in the database and now a client wants to create the same policy with updated version 3.0.0, this POST call will succeed and return “200” to the client.

  2. If the incoming version is already in the database and the incoming payload is different from the same version in the database, “406 - Not Acceptable” will be returned. This forces the client to update the version of the policy if the policy is changed.

  3. If a client creates a version of a policy and wishes to update a property on the policy, they must delete that version of the policy and re-create it.

  4. If multiple policies are included in the POST payload, policy API will also check whether a duplicate version exists between any two policies or policy types provided in the payload. For example, a client provides a POST payload which includes two policies with the same name and version but different policy properties. This POST call will fail and return a “406” error to the calling application, along with a message such as “duplicate policy {name}:{version} found in the payload”.

  5. The same version validation is applied to policy types too.

  6. To avoid unnecessary id/version inconsistency between the ones specified in the entity fields and the ones returned in the metadata field, “policy-id” and “policy-version” in the metadata will only be set by policy API. Any incoming explicit specification in the POST payload will be ignored. For example, a POST payload has a policy with name “sample-policy-name1” and version “1.0.0” specified. In this policy, the metadata also includes “policy-id”: “sample-policy-name2” and “policy-version”: “2.0.0”. The 200 response to this POST call will contain the created policy with metadata including “policy-id”: “sample-policy-name1” and “policy-version”: “1.0.0”, as sketched below.
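A minimal sketch of the example in item 6, using an illustrative policy type:

# POST payload: client-supplied metadata is ignored by policy API
topology_template:
  policies:
    - sample-policy-name1:
        type: onap.policies.monitoring.tcagen2   # illustrative policy type
        type_version: 1.0.0
        version: 1.0.0
        metadata:
          policy-id: sample-policy-name2         # ignored; reset by policy API
          policy-version: 2.0.0                  # ignored; reset by policy API
# In the 200 response, the metadata reads:
#   policy-id: sample-policy-name1
#   policy-version: 1.0.0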

Regarding DELETE APIs for TOSCA compliant policies, for safety purposes we only expose an API to delete one particular version of a policy or policy type at a time. If a client needs to delete multiple policies or policy types, they must be deleted one by one.

Sample API Curl Commands

From an API client perspective, using HTTP or HTTPS does not make much difference to the curl command. Here we list some sample curl commands (using HTTP) for POST, GET, and DELETE of the monitoring and operational policies used in the vFirewall use case. The JSON payload for the POST calls can be downloaded from the policy table above.

If you are accessing the API from within the container network, the default address would be https://policy-api:6969/policy/api/v1/.

Create vFirewall Monitoring Policy::

curl --user 'healthcheck:zb!XztG34' -X POST "http://{ip}:{port}/policy/api/v1/policytypes/onap.policies.monitoring.tcagen2/versions/1.0.0/policies" -H "Accept: application/json" -H "Content-Type: application/json" -d @vFirewall.policy.monitoring.input.tosca.json

Get vFirewall Monitoring Policy::

curl --user 'healthcheck:zb!XztG34' -X GET "http://{ip}:{port}/policy/api/v1/policytypes/onap.policies.monitoring.tcagen2/versions/1.0.0/policies/onap.vfirewall.tca/versions/1.0.0" -H "Accept: application/json" -H "Content-Type: application/json"

Delete vFirewall Monitoring Policy::

curl --user 'healthcheck:zb!XztG34' -X DELETE "http://{ip}:{port}/policy/api/v1/policytypes/onap.policies.monitoring.tcagen2/versions/1.0.0/policies/onap.vfirewall.tca/versions/1.0.0" -H "Accept: application/json" -H "Content-Type: application/json"

Create vFirewall Operational Policy::

curl --user 'healthcheck:zb!XztG34' -X POST "http://{ip}:{port}/policy/api/v1/policytypes/onap.policies.controlloop.operational.common.Drools/versions/1.0.0/policies" -H "Accept: application/json" -H "Content-Type: application/json" -d @vFirewall.policy.operational.input.tosca.json

Get vFirewall Operational Policy::

curl --user 'healthcheck:zb!XztG34' -X GET "http://{ip}:{port}/policy/api/v1/policytypes/onap.policies.controlloop.operational.common.Drools/versions/1.0.0/policies/operational.modifyconfig/versions/1.0.0" -H "Accept: application/json" -H "Content-Type: application/json"

Delete vFirewall Operational Policy::

curl --user 'healthcheck:zb!XztG34' -X DELETE "http://{ip}:{port}/policy/api/v1/policytypes/onap.policies.controlloop.operational.common.Drools/versions/1.0.0/policies/operational.modifyconfig/versions/1.0.0" -H "Accept: application/json" -H "Content-Type: application/json"

Get all available policies::

curl --user 'healthcheck:zb!XztG34' -X GET "http://{ip}:{port}/policy/api/v1/policies" -H "Accept: application/json" -H "Content-Type: application/json"

Get version 1.0.0 of vFirewall Monitoring Policy::

curl --user 'healthcheck:zb!XztG34' -X GET "http://{ip}:{port}/policy/api/v1/policies/onap.vfirewall.tca/versions/1.0.0" -H "Accept: application/json" -H "Content-Type: application/json"

Delete version 1.0.0 of vFirewall Monitoring Policy::

curl --user 'healthcheck:zb!XztG34' -X DELETE "http://{ip}:{port}/policy/api/v1/policies/onap.vfirewall.tca/versions/1.0.0" -H "Accept: application/json" -H "Content-Type: application/json"

Policy Administration Point (PAP) Architecture

The Internal Policy Framework PAP-PDP API

This page describes the API between the PAP and PDPs. The APIs in this section are implemented using DMaaP API messaging. The APIs in this section are used for internal communication in the Policy Framework. The APIs are NOT supported for use by components outside the Policy Framework and are subject to revision and change at any time.

There are three messages on the API:

  1. PDP_STATUS: PDP→PAP, used by PDPs to report to the PAP

  2. PDP_UPDATE: PAP→PDP, used by the PAP to update the policies running on PDPs, triggers a PDP_STATUS message with the result of the PDP_UPDATE operation

  3. PDP_STATE_CHANGE: PAP→PDP, used by the PAP to change the state of PDPs, triggers a PDP_STATUS message with the result of the PDP_STATE_CHANGE operation

The fields in the table below are valid on API calls:

Field                         PDP STATUS  PDP UPDATE  PDP STATE CHANGE  Comment
(message_name)                M           M           M                 pdp_status, pdp_update, pdp_state_change, or pdp_health_check
name                          M           M           M                 The name of the PDP; for state changes and health checks, the PDP group and subgroup can be used to specify the scope of the operation
pdpType                       M           N/A         N/A               The type of the PDP, currently xacml, drools, or apex
state                         M           N/A         M                 The administrative state of the PDP group: PASSIVE, SAFE, TEST, ACTIVE, or TERMINATED
healthy                       M           N/A         N/A               The result of the latest health check on the PDP: HEALTHY/NOT_HEALTHY/TEST_IN_PROGRESS
description                   O           O           N/A               The description of the PDP
pdpGroup                      M           M           C                 The PDP group to which the PDP belongs; the PDP group and subgroup can be used to specify the scope of the operation
pdpSubgroup                   O           M           C                 The PDP subgroup to which the PDP belongs; the PDP group and subgroup can be used to specify the scope of the operation
source                        N/A         M           M                 The source of the message
policies                      M           N/A         N/A               The list of policies running on the PDP
policiesToBeDeployed          N/A         M           N/A               The list of policies to be deployed on the PDP
policiesToBeUndeployed        N/A         M           N/A               The list of policies to be undeployed from the PDP
->(name)                      O           M           N/A               The name of a TOSCA policy running on the PDP
->policy_type                 O           M           N/A               The TOSCA policy type of the policy
->policy_type_version         O           M           N/A               The version of the TOSCA policy type of the policy
->properties                  O           M           N/A               The properties of the policy for the XACML, Drools, or APEX PDP
properties                    O           N/A         N/A               Other properties specific to the PDP
statistics                    O           N/A         N/A               Statistics on policy execution in the PDP
->policyDeployCount           M           N/A         N/A               The number of policies deployed into the PDP
->policyDeploySuccessCount    M           N/A         N/A               The number of policies successfully deployed into the PDP
->policyDeployFailCount       M           N/A         N/A               The number of policies deployed into the PDP where the deployment failed
->policyUndeployCount         M           N/A         N/A               The number of policies undeployed from the PDP
->policyUndeploySuccessCount  M           N/A         N/A               The number of policies successfully undeployed from the PDP
->policyUndeployFailCount     M           N/A         N/A               The number of policies undeployed from the PDP where the undeployment failed
->policyExecutedCount         M           N/A         N/A               The number of policy executions on the PDP
->policyExecutedSuccessCount  M           N/A         N/A               The number of policy executions on the PDP that completed successfully
->policyExecutedFailCount     M           N/A         N/A               The number of policy executions on the PDP that failed
response                      O           N/A         N/A               The response to the last operation that the PAP executed on the PDP
->responseTo                  M           N/A         N/A               The PAP to PDP message to which this is a response
->responseStatus              M           N/A         N/A               SUCCESS or FAIL
->responseMessage             O           N/A         N/A               Message giving further information on the successful or failed operation

YAML is used for illustrative purposes in the examples in this section. JSON (application/json) is used as the content type in the implementation of this API.

1 PAP API for PDPs

The purpose of this API is for PDPs to provide heartbeat, status, health, and statistical information to Policy Administration. There is a single PDP_STATUS message on this API. PDPs send this message to the PAP using the POLICY_PDP_PAP DMaaP topic. The PAP listens on this topic for messages.

When a PDP starts, it commences periodic sending of PDP_STATUS messages on DMaaP. The PAP receives these messages and acts in whatever manner is appropriate. PDP_UPDATE and PDP_STATE_CHANGE operations trigger a PDP_STATUS message as a response.

The PDP_STATUS message is used for PDP heartbeat monitoring. A PDP sends a PDP_STATUS message with a state of TERMINATED when it terminates normally. If a PDP_STATUS message is not received from a PDP periodically, or in response to a pdp_update or pdp_state_change message within a configurable time, then the PAP assumes the PDP has failed.

A PDP may be preconfigured with its PDP group, PDP subgroup, and policies. If the PDP group, subgroup, or any policy sent to the PAP in a PDP_STATUS message is unknown to the PAP, the PAP locks the PDP in state PASSIVE.
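The sketch below shows what a normal-termination PDP_STATUS might look like, assuming the same envelope fields as the full examples that follow; the requestId shown is a placeholder.

pdp_status:
  pdpType: apex
  state: TERMINATED                      # PDP announces a normal shutdown
  healthy: HEALTHY
  policies: []
  messageName: PDP_STATUS
  requestId: 00000000-0000-0000-0000-000000000000   # placeholder
  timestampMs: 1633534472002
  name: apex-d0610cdc-381e-4aae-8e99-3f520c2a50db
  pdpGroup: defaultGroup
  pdpSubgroup: apex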

PDP_STATUS message from an XACML PDP running control loop policies
pdp_status:
  pdpType: xacml
  state: ACTIVE
  healthy: HEALTHY
  description: XACML PDP running control loop policies
  policies:
    - name: SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP
      version: 1.0.0
    - name: onap.policies.controlloop.guard.frequencylimiter.EastRegion
      version: 1.0.0
    - name: onap.policies.controlloop.guard.blacklist.eastRegion
      version: 1.0.0
    - name: onap.policies.controlloop.guard.minmax.eastRegion
      version: 1.0.0
  messageName: PDP_STATUS
  requestId: 5551bd1b-4020-4fc5-95b7-b89c80a337b1
  timestampMs: 1633534472002
  name: xacml-23d33c2a-8715-43a8-ade5-5923fc0f185c
  pdpGroup: defaultGroup
  pdpSubgroup: xacml
  statistics:
    policyDeployCount: 0
    policyDeploySuccessCount: 0
    policyDeployFailCount: 0
    policyExecutedCount: 123
    policyExecutedSuccessCount: 122
    policyExecutedFailCount: 1
PDP_STATUS message from a Drools PDP running control loop policies
pdp_status:
  pdpType: drools
  state: ACTIVE
  healthy: HEALTHY
  description: Drools PDP running control loop policies
  policies:
    - name: onap.controllloop.operational.drools.vcpe.EastRegion
      version: 1.0.0
    - name: onap.controllloop.operational.drools.vfw.EastRegion
      version: 1.0.0
  instance: drools_2
  deployment_instance_info:
    node_address: drools_2_pod
    # Other deployment instance info
  statistics:
    policyDeployCount: 3
    policyDeploySuccessCount: 3
    policyDeployFailCount: 0
    policyExecutedCount: 123
    policyExecutedSuccessCount: 122
    policyExecutedFailCount: 1
    policyUndeployCount: 0
    policyUndeploySuccessCount: 0
    policyUndeployFailCount: 0
  response:
    responseTo: 52117e25-f416-45c7-a955-83ed929d557f
    responseStatus: SUCCESS
  messageName: PDP_STATUS
  requestId: 52117e25-f416-45c7-a955-83ed929d557f
  timestampMs: 1633355052181
  name: drools-8819a672-57fd-4e74-ad89-aed1a64e1837
  pdpGroup: defaultGroup
  pdpSubgroup: drools
PDP_STATUS message from an APEX PDP running control loop policies
pdp_status:
  pdpType: apex
  state: ACTIVE
  healthy: HEALTHY
  description: Pdp status response message for PdpUpdate
  policies:
    - name: onap.controllloop.operational.apex.bbs.EastRegion
      version: 1.0.0
  statistics:
    policyExecutedCount: 0
    policyExecutedSuccessCount: 0
    policyExecutedFailCount: 0
    policyDeployCount: 1
    policyDeploySuccessCount: 1
    policyDeployFailCount: 0
    policyUndeployCount: 0
    policyUndeploySuccessCount: 0
    policyUndeployFailCount: 0
  response:
    responseTo: 679fad9b-abbf-4b9b-971c-96a8372ec8af
    responseStatus: SUCCESS
    responseMessage: >-
      Apex engine started. Deployed policies are:
      onap.policies.apex.sample.Salecheck:1.0.0
  messageName: PDP_STATUS
  requestId: 932c17b0-7ef9-44ec-be58-f17e104e7d5d
  timestampMs: 1633435952217
  name: apex-d0610cdc-381e-4aae-8e99-3f520c2a50db
  pdpGroup: defaultGroup
  pdpSubgroup: apex
PDP_STATUS message from an XACML PDP running monitoring policies
pdp_status:
  pdpType: xacml
  state: ACTIVE
  healthy: HEALTHY
  description: XACML PDP running monitoring policies
  policies:
    - name: SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP
      version: 1.0.0
    - name: onap.scaleout.tca
      version: 1.0.0
  messageName: PDP_STATUS
  requestId: 5551bd1b-4020-4fc5-95b7-b89c80a337b1
  timestampMs: 1633534472002
  name: xacml-23d33c2a-8715-43a8-ade5-5923fc0f185c
  pdpGroup: onap.pdpgroup.Monitoring
  pdpSubgroup: xacml
  statistics:
    policyDeployCount: 0
    policyDeploySuccessCount: 0
    policyDeployFailCount: 0
    policyExecutedCount: 123
    policyExecutedSuccessCount: 122
    policyExecutedFailCount: 1
2 PDP API for PAPs

The purpose of this API is for the PAP to load and update policies on PDPs and to change the state of PDPs. The PAP sends PDP_UPDATE and PDP_STATE_CHANGE messages to PDPs using the POLICY_PAP_PDP DMaaP topic. PDPs listen on this topic for messages.

The PAP can set the scope of a PDP_STATE_CHANGE message:

  • PDP Group: If a PDP group is specified in a message, then the PDPs in that PDP group respond to the message and all other PDPs ignore it.

  • PDP Group and subgroup: If a PDP group and subgroup are specified in a message, then only the PDPs of that subgroup in the PDP group respond to the message and all other PDPs ignore it.

  • Single PDP: If the name of a PDP is specified in a message, then only that PDP responds to the message and all other PDPs ignore it.

2.1 PDP Update

The PDP_UPDATE operation allows the PAP to modify the PDP with information such as the policies to be deployed or undeployed (policiesToBeDeployed/policiesToBeUndeployed), the interval at which to send heartbeats, the subgroup, and so on.

The following examples illustrate how the operation is used.

PDP_UPDATE message to upgrade XACML PDP control loop policies to version 1.0.1
pdp_update:
  source: pap-6e46095a-3e12-4838-912b-a8608fc93b51
  pdpHeartbeatIntervalMs: 120000
  policiesToBeDeployed:
    - type: onap.policies.Naming
      type_version: 1.0.0
      properties:
        # Omitted for brevity
      name: onap.policies.controlloop.guard.frequencylimiter.EastRegion
      version: 1.0.1
      metadata:
        policy-id: onap.policies.controlloop.guard.frequencylimiter.EastRegion
        policy-version: 1.0.1
  messageName: PDP_UPDATE
  requestId: cbfb9781-da6c-462f-9601-8cf8ca959d2b
  timestampMs: 1633466294898
  name: xacml-23d33c2a-8715-43a8-ade5-5923fc0f185c
  description: XACML PDP running control loop policies, Upgraded
  pdpGroup: defaultGroup
  pdpSubgroup: xacml
PDP_UPDATE message to a Drools PDP to add an extra control loop policy
pdp_update:
  source: pap-0674bd0c-0862-4b72-abc7-74246fd11a79
  pdpHeartbeatIntervalMs: 120000
  policiesToBeDeployed:
    - type: onap.controllloop.operational.drools.vFW
      type_version: 1.0.0
      properties:
        # Omitted for brevity
      name: onap.controllloop.operational.drools.vfw.WestRegion
      version: 1.0.0
      metadata:
        policy-id: onap.controllloop.operational.drools.vfw.WestRegion
        policy-version: 1.0.0
  messageName: PDP_UPDATE
  requestId: e91c4515-86db-4663-b68e-e5179d0b000e
  timestampMs: 1633355039004
  name: drools-8819a672-57fd-4e74-ad89-aed1a64e1837
  description: Drools PDP running control loop policies, extra policy added
  pdpGroup: defaultGroup
  pdpSubgroup: drools
PDP_UPDATE message to an APEX PDP to remove a control loop policy
pdp_update:
  source: pap-56c8531d-5376-4e53-a820-6973c62bfb9a
  pdpHeartbeatIntervalMs: 120000
  policiesToBeDeployed:
    - type: onap.policies.native.Apex
      type_version: 1.0.0
      properties:
        # Omitted for brevity
      name: onap.controllloop.operational.apex.bbs.WestRegion
      version: 1.0.0
      metadata:
        policy-id: onap.controllloop.operational.apex.bbs.WestRegion
        policy-version: 1.0.0
  messageName: PDP_UPDATE
  requestId: 3534e54f-4432-4c68-81c8-a6af07e59fb2
  timestampMs: 1632325037040
  name: apex-45c6b266-a5fa-4534-b22c-33c2f9a45d02
  pdpGroup: defaultGroup
  pdpSubgroup: apex
2.2 PDP State Change

The PDP_STATE_CHANGE operation allows the PAP to order state changes on PDPs in PDP groups and subgroups. The following examples illustrate how the operation is used.

Change the state of Drools PDP to ACTIVE
pdp_state_change:
  source: pap-6e46095a-3e12-4838-912b-a8608fc93b51
  state: ACTIVE
  messageName: PDP_STATE_CHANGE
  requestId: 7d422be6-5baa-4316-9649-09e18301b5a8
  timestampMs: 1633466294899
  name: drools-23d33c2a-8715-43a8-ade5-5923fc0f185c
  pdpGroup: defaultGroup
  pdpSubgroup: drools
Change the state of all XACML PDPs to ACTIVE
pdp_state_change:
  source: pap-6e46095a-3e12-4838-912b-a8608fc93b51
  state: ACTIVE
  messageName: PDP_STATE_CHANGE
  requestId: 7d422be6-5baa-4316-9649-09e18301b5a8
  timestampMs: 1633466294899
  name: xacml-23d33c2a-8715-43a8-ade5-5923fc0f185c
  pdpGroup: defaultGroup
  pdpSubgroup: xacml
Change the state of APEX PDP to passive
pdp_state_change:
  source: pap-e6272159-e1a3-4777-860a-19c47a14cc00
  state: PASSIVE
  messageName: PDP_STATE_CHANGE
  requestId: 60d9a724-ebf3-4434-9da4-caac9c515a2c
  timestampMs: 1633528747518
  name: apex-a3c58a9e-af72-436c-b46f-0c6f31032ca5
  pdpGroup: defaultGroup
  pdpSubgroup: apex

The Policy Administration Point (PAP) keeps track of PDPs, supporting the deployment of PDP groups and the deployment of policies across those PDP groups. Policies are created using the Policy API, but are deployed via the PAP.

The PAP is stateless in a RESTful sense, using the database (persistent storage) to track PDPs and the deployment of policies to those PDPs. In short, policy management on PDPs is the responsibility of the PAP; management of policies in any other manner is not permitted.

Because the PDP is the main unit of scalability in the Policy Framework, the framework is designed to allow PDPs in a PDP group to arbitrarily appear and disappear and for policy consistency across all PDPs in a PDP group to be easily maintained. The PAP is responsible for controlling the state across the PDPs in a PDP group. The PAP interacts with the policy database and transfers policies to PDPs.

The unit of execution and scaling in the Policy Framework is a PolicyImpl entity. A PolicyImpl entity runs on a PDP. As is explained above, a PolicyImpl entity is a PolicyTypeImpl implementation parameterized with a TOSCA Policy.

_images/PolicyImplPDPSubGroup.svg

In order to achieve horizontal scalability, we group the PDPs running instances of a given PolicyImpl entity logically together into a PDPSubGroup. The number of PDPs in a PDPSubGroup can then be scaled up and down using Kubernetes. In other words, all PDPs in a subgroup run the same PolicyImpl, that is the same policy template implementation (in XACML, Drools, or APEX) with the same parameters.

The figure above shows the layout of PDPGroup and PDPSubGroup entities. The figure shows examples of PDP groups for Control Loop and Monitoring policies on the right.

The health of PDPs is monitored by the PAP in order to alert operations teams managing policies. The PAP manages the life cycle of policies running on PDPs.

PolicyImpl entities can be deployed to PDP Subgroups using the deployment methods described below.

Cold

Description: The PolicyImpl (PolicyTypeImpl and TOSCA Policy) is predeployed on the PDP. The PDP is fully configured and ready to execute when started. PDPs register with the PAP when they start, providing the pdpGroup they have been preconfigured with.

Advantages: No run time configuration is required and run time administration is simple.

Disadvantages: Very restrictive; no run time configuration of PDPs is possible.

Warm

Description: The PolicyTypeImpl entity is predeployed on the PDP. A TOSCA Policy may be loaded at startup. The PDP may be configured or reconfigured with a new or updated TOSCA Policy at run time. PDPs register with the PAP when they start, providing the pdpGroup they have been predeployed with, if any. The PAP may update the TOSCA Policy on a PDP at any time after registration.

Advantages: The configuration, parameters, and PDP group of PDPs may be changed at run time by loading or updating a TOSCA Policy into the PDP. TOSCA Policy entity life cycle management is supported, allowing features such as PolicyImpl Safe Mode and PolicyImpl retirement.

Disadvantages: Administration and management are required. The configuration and life cycle of the TOSCA policies can change at run time and must be administered and managed.

Hot

Description: The PolicyImpl (PolicyTypeImpl and TOSCA Policy) is deployed at run time. The PolicyImpl may be loaded at startup. The PDP may be configured or reconfigured with a new or updated PolicyTypeImpl and/or TOSCA Policy at run time. PDPs register with the PAP when they start, providing the pdpGroup they have been preconfigured with, if any. The PAP may update the TOSCA Policy and PolicyTypeImpl on a PDP at any time after registration.

Advantages: The policy logic, rules, configuration, parameters, and PDP group of PDPs may be changed at run time by loading or updating a TOSCA Policy and PolicyTypeImpl into the PDP. Life cycle management of TOSCA Policy entities and PolicyTypeImpl entities is supported, allowing features such as PolicyImpl Safe Mode and PolicyImpl retirement.

Disadvantages: Administration and management are more complex. The PolicyImpl itself and its configuration and life cycle, as well as the life cycle of the TOSCA policies, can change at run time and must be administered and managed.

1 APIs

The APIs in the subchapters below are supported by the PAP.

1.1 REST API

The purpose of this API is to support CRUD of PDP groups and subgroups and to support the deployment and life cycles of policies on PDP subgroups and PDPs. This API is provided by the PolicyAdministration component (PAP) of the Policy Framework; see the ONAP Policy Framework Architecture page.

PDP groups and subgroups may be predefined in the system. Predefined groups and subgroups may be modified or deleted over this API. The policies running on predefined groups or subgroups, as well as the instance counts and properties, may also be modified.

A PDP may be preconfigured with its PDP group, PDP subgroup, and policies. The PDP sends this information to the PAP when it starts. If the PDP group, subgroup, or any policy is unknown to the PAP, the PAP locks the PDP in state PASSIVE.

PAP supports the operations listed in the following table, via its REST API:

Operation                 Description
Health check              Queries the health of the PAP
Consolidated healthcheck  Queries the health of all policy components
Statistics                Queries various statistics
PDP state change          Changes the state of all PDPs in a PDP Group
PDP Group create/update   Creates/updates PDP Groups
PDP Group delete          Deletes a PDP Group
PDP Group query           Queries all PDP Groups
Deployment update         Deploys/undeploys one or more policies in specified PdpGroups
Deploy policy             Deploys one or more policies to the PDPs
Undeploy policy           Undeploys a policy from the PDPs
Policy Status             Queries the status of all policies
Policy deployment status  Queries the status of all deployed policies
PDP statistics            Queries the statistics of PDPs
Policy Audit              Queries the audit records of policies

1.2 DMaaP API

PAP interacts with the PDPs via the DMaaP Message Router. The messages listed in the following table are transmitted via DMaaP:

Message           Direction  Description
PDP status        Incoming   Registers a PDP with the PAP; also sent as a periodic heartbeat and in response to requests from the PAP
PDP update        Outgoing   Assigns a PDP to a PDP Group and Subgroup; also deploys or undeploys policies from the PDP
PDP state change  Outgoing   Changes the state of a PDP or all PDPs within a PDP Group or Subgroup

In addition, PAP generates notifications via the DMaaP Message Router when policies are successfully or unsuccessfully deployed (or undeployed) from all relevant PDPs.

Here is a sample notification:

{
    "deployed-policies": [
        {
            "policy-type": "onap.policies.monitoring.tcagen2",
            "policy-type-version": "1.0.0",
            "policy-id": "onap.scaleout.tca",
            "policy-version": "2.0.0",
            "success-count": 3,
            "failure-count": 0
        }
    ],
    "undeployed-policies": [
        {
            "policy-type": "onap.policies.monitoring.tcagen2",
            "policy-type-version": "1.0.0",
            "policy-id": "onap.firewall.tca",
            "policy-version": "6.0.0",
            "success-count": 3,
            "failure-count": 0
        }
    ]
}

2 PAP REST API Swagger

It is worth noting that we use basic authorization for access with user name and password set to healthcheck and zb!XztG34, respectively.

For every call, the client is encouraged to insert a UUID-type requestID as a parameter. It is helpful for tracking each HTTP transaction and facilitates debugging. More importantly, it complies with Logging requirements v1.2. If the client does not provide the requestID in a call, one will be randomly generated and attached to the response header, x-onap-requestid.

In accordance with ONAP API Common Versioning Strategy Guidelines, several custom headers are added in the response to each call:

Header            Example value                         Description
x-latestversion   1.0.0                                 latest version of the API
x-minorversion    0                                     MINOR version of the API
x-patchversion    0                                     PATCH version of the API
x-onap-requestid  e1763e61-9eef-4911-b952-1be1edd9812b  described above; used for logging purposes

Download Health Check PAP API Swagger

HealthCheck

GET /policy/pap/v1/healthcheck

Perform healthcheck

  • Description: Returns healthy status of the Policy Administration component

  • Produces: [‘application/json’]

Parameters: None

Responses

200 - successful operation

401 - Authentication Error

403 - Authorization Error

500 - Internal Server Error

This operation performs a health check on the PAP.

Here is a sample response:

{
    "code": 200,
    "healthy": true,
    "message": "alive",
    "name": "Policy PAP",
    "url": "self"
}

Download Consolidated Health Check PAP API Swagger

Consolidated Healthcheck

GET /policy/pap/v1/components/healthcheck

Returns health status of all policy components, including PAP, API, Distribution, and PDPs

  • Description: Queries health status of all policy components, returning all policy components health status

  • Produces: [‘application/json’, ‘application/yaml’]

Parameters

Name              Position  Description                     Type
X-ONAP-RequestID  header    RequestID for http transaction  string

Responses

200 - successful operation

401 - Authentication Error

403 - Authorization Error

500 - Internal Server Error

This operation performs a health check of all policy components. The response contains the health check result of each component. The consolidated health check is reported as healthy only if all the components are healthy, otherwise the “healthy” flag is marked as false.

Here is a sample response:

{
  "pdps": {
    "xacml": [
      {
        "instanceId": "dev-policy-xacml-pdp-5b6697c845-9j8lb",
        "pdpState": "ACTIVE",
        "healthy": "HEALTHY"
      }
    ],
    "drools": [
      {
        "instanceId": "dev-drools-0",
        "pdpState": "ACTIVE",
        "healthy": "HEALTHY"
      }
    ],
    "apex": [
      {
        "instanceId": "dev-policy-apex-pdp-0",
        "pdpState": "ACTIVE",
        "healthy": "HEALTHY",
        "message": "Pdp Heartbeat"
      }
    ]
  },
  "healthy": true,
  "api": {
    "name": "Policy API",
    "url": "https://dev-policy-api-7fb479754f-7nr5s:6969/policy/api/v1/healthcheck",
    "healthy": true,
    "code": 200,
    "message": "alive"
  },
  "distribution": {
    "name": "Policy SSD",
    "url": "https://dev-policy-distribution-84854cd6c7-zn8vh:6969/healthcheck",
    "healthy": true,
    "code": 200,
    "message": "alive"
  },
  "pap": {
    "name": "Policy PAP",
    "url": "https://dev-pap-79fd8f78d4-hwx7j:6969/policy/pap/v1/healthcheck",
    "healthy": true,
    "code": 200,
    "message": "alive"
  }
}

Download Statistics PAP API Swagger

Statistics

GET /policy/pap/v1/statistics

Fetch current statistics

  • Description: Returns current statistics of the Policy Administration component

  • Produces: [‘application/json’]

Parameters

(none)

Responses

200 - successful operation

401 - Authentication Error

403 - Authorization Error

500 - Internal Server Error

This operation allows statistics for PDP groups, PDP subgroups, and individual PDPs to be retrieved.

Note

While this API is supported, most of the statistics are not currently updated; that work has been deferred to a later release.

Here is a sample response:

{
    "code": 200,
    "policyDeployFailureCount": 0,
    "policyDeploySuccessCount": 0,
    "policyDownloadFailureCount": 0,
    "policyDownloadSuccessCount": 0,
    "totalPdpCount": 0,
    "totalPdpGroupCount": 0,
    "totalPolicyDeployCount": 0,
    "totalPolicyDownloadCount": 0
}

Download State Change PAP Swagger

PdpGroup State Change

PUT /policy/pap/v1/pdps/groups/{name}

Change state of a PDP Group

  • Description: Changes state of PDP Group, returning optional error details

  • Produces: [‘application/json’]

Parameters

Name               Position   Description                      Type
X-ONAP-RequestID   header     RequestID for http transaction   string
name               path       PDP Group Name                   string
state              query      PDP Group State                  string

Responses

200 - successful operation

401 - Authentication Error

403 - Authorization Error

500 - Internal Server Error

The state of PDP groups is managed by this operation. PDP groups can be in states PASSIVE, TEST, SAFE, or ACTIVE. For a full description of PDP group states, see the ONAP Policy Framework Architecture page.
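
For example, activating the group SampleGroup (the group used in the create example further below) might look like the following sketch, with host and credentials as configured in your installation:

curl -k --user 'healthcheck:zb!XztG34' -X PUT \
     'https://{pap-host}:6969/policy/pap/v1/pdps/groups/SampleGroup?state=ACTIVE'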

Download Group Batch PAP API Swagger

PdpGroup Create/Update

POST /policy/pap/v1/pdps/groups/batch

Create or update PDP Groups

  • Description: Create or update one or more PDP Groups, returning optional error details

  • Produces: [‘application/json’, ‘application/yaml’]

Parameters

Name               Position   Description                       Type
X-ONAP-RequestID   header     RequestID for http transaction    string
body               body       List of PDP Group Configuration

Responses

200 - successful operation

401 - Authentication Error

403 - Authorization Error

500 - Internal Server Error

This operation allows PDP groups and subgroups to be created and updated. Multiple PDP groups can be created or updated in a single POST operation by specifying more than one PDP group in the POST body. A group can be created with all of its details, including the supported policy types for each subgroup. The operation cannot be used to deploy policies, however; that is done using one of the deployment requests, so the “policies” property of this request is ignored. When an existing PDP group is updated, the supported policy types cannot be changed either, so both the “policies” and “supportedPolicyTypes” properties are ignored if provided during a PDP group update.

The “desiredInstanceCount” specifies the minimum number of PDPs of the given type that should be registered with PAP. Currently, this is just used for health check purposes; if the number of PDPs registered with PAP drops below the given value, then PAP will return an “unhealthy” indicator if a “Consolidated Health Check” is performed.

Note

If a subgroup is to be deleted from a PDP Group, then the policies must be removed from the subgroup first.

Note

Policies cannot be added/updated during PDP Group create/update operations. So, if provided, they are ignored. Supported policy types are defined during PDP Group creation. They cannot be updated once they are created. So, supportedPolicyTypes are expected during PDP Group create, but ignored if provided during PDP Group update.

Here is a sample request:

{
    "groups": [
        {
            "name": "SampleGroup",
            "pdpGroupState": "ACTIVE",
            "properties": {},
            "pdpSubgroups": [
                {
                    "pdpType": "apex",
                    "desiredInstanceCount": 2,
                    "properties": {},
                    "supportedPolicyTypes": [
                        {
                            "name": "onap.policies.controlloop.operational.common.Apex",
                            "version": "1.0.0"
                        }
                    ],
                    "policies": []
                },
                {
                    "pdpType": "xacml",
                    "desiredInstanceCount": 1,
                    "properties": {},
                    "supportedPolicyTypes": [
                        {
                            "name": "onap.policies.monitoring.tcagen2",
                            "version": "1.0.0"
                        }
                    ],
                    "policies": []
                }
            ]
        }
    ]
}
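
The request above could be submitted along the following lines (a sketch; it assumes the JSON body has been saved locally as groups.json, a hypothetical file name):

curl -k --user 'healthcheck:zb!XztG34' -X POST \
     -H 'Content-Type: application/json' --data @groups.json \
     'https://{pap-host}:6969/policy/pap/v1/pdps/groups/batch'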

Download Group Delete PAP API Swagger

PdpGroup Delete

DELETE /policy/pap/v1/pdps/groups/{name}

Delete PDP Group

  • Description: Deletes a PDP Group, returning optional error details

  • Produces: [‘application/json’]

Parameters

Name               Position   Description                      Type
X-ONAP-RequestID   header     RequestID for http transaction   string
name               path       PDP Group Name                   string

Responses

200 - successful operation

401 - Authentication Error

403 - Authorization Error

500 - Internal Server Error

The API also allows PDP groups to be deleted. DELETE operations are only permitted on PDP groups in PASSIVE state.
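
Since only PASSIVE groups can be deleted, a typical sequence is to deactivate the group first and then delete it; a sketch using the illustrative group name SampleGroup:

curl -k --user 'healthcheck:zb!XztG34' -X PUT \
     'https://{pap-host}:6969/policy/pap/v1/pdps/groups/SampleGroup?state=PASSIVE'

curl -k --user 'healthcheck:zb!XztG34' -X DELETE \
     'https://{pap-host}:6969/policy/pap/v1/pdps/groups/SampleGroup'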

Download Group Query PAP API Swagger

PdpGroup Query

GET /policy/pap/v1/pdps

Query details of all PDP groups

  • Description: Queries details of all PDP groups, returning all group details

  • Produces: [‘application/json’]

Parameters

Name               Position   Description                      Type
X-ONAP-RequestID   header     RequestID for http transaction   string

Responses

200 - successful operation

401 - Authentication Error

403 - Authorization Error

500 - Internal Server Error

This operation allows the PDP groups and subgroups to be listed as well as the policies that are deployed on each PDP group and subgroup.

Here is a sample response:

{
    "groups": [
        {
            "description": "This group should be used for managing all control loop related policies and pdps",
            "name": "controlloop",
            "pdpGroupState": "ACTIVE",
            "pdpSubgroups": [
                {
                    "currentInstanceCount": 0,
                    "desiredInstanceCount": 1,
                    "pdpInstances": [],
                    "pdpType": "apex",
                    "policies": [],
                    "properties": {},
                    "supportedPolicyTypes": [
                        {
                            "name": "onap.policies.controlloop.operational.common.Apex",
                            "version": "1.0.0"
                        }
                    ]
                },
                {
                    "currentInstanceCount": 0,
                    "desiredInstanceCount": 1,
                    "pdpInstances": [],
                    "pdpType": "drools",
                    "policies": [],
                    "properties": {},
                    "supportedPolicyTypes": [
                        {
                            "name": "onap.policies.controlloop.operational.common.Drools",
                            "version": "1.0.0"
                        }
                    ]
                },
                {
                    "currentInstanceCount": 0,
                    "desiredInstanceCount": 1,
                    "pdpInstances": [],
                    "pdpType": "xacml",
                    "policies": [],
                    "properties": {},
                    "supportedPolicyTypes": [
                        {
                            "name": "onap.policies.controlloop.Guard",
                            "version": "1.0.0"
                        }
                    ]
                }
            ],
            "properties": {}
        },
        {
            "description": "This group should be used for managing all monitoring related policies and pdps",
            "name": "monitoring",
            "pdpGroupState": "ACTIVE",
            "pdpSubgroups": [
                {
                    "currentInstanceCount": 0,
                    "desiredInstanceCount": 1,
                    "pdpInstances": [],
                    "pdpType": "xacml",
                    "policies": [],
                    "properties": {},
                    "supportedPolicyTypes": [
                        {
                            "name": "onap.policies.Monitoring",
                            "version": "1.0.0"
                        }
                    ]
                }
            ],
            "properties": {}
        },
        {
            "description": "The default group that registers all supported policy types and pdps.",
            "name": "defaultGroup",
            "pdpGroupState": "ACTIVE",
            "pdpSubgroups": [
                {
                    "currentInstanceCount": 0,
                    "desiredInstanceCount": 1,
                    "pdpInstances": [],
                    "pdpType": "apex",
                    "policies": [],
                    "properties": {},
                    "supportedPolicyTypes": [
                        {
                            "name": "onap.policies.controlloop.operational.common.Apex",
                            "version": "1.0.0"
                        },
                        {
                            "name": "onap.policies.native.Apex",
                            "version": "1.0.0"
                        }
                    ]
                },
                {
                    "currentInstanceCount": 0,
                    "desiredInstanceCount": 1,
                    "pdpInstances": [],
                    "pdpType": "drools",
                    "policies": [],
                    "properties": {},
                    "supportedPolicyTypes": [
                        {
                            "name": "onap.policies.controlloop.operational.common.Drools",
                            "version": "1.0.0"
                        },
                        {
                            "name": "onap.policies.native.drools.Controller",
                            "version": "1.0.0"
                        },
                        {
                            "name": "onap.policies.native.drools.Artifact",
                            "version": "1.0.0"
                        }
                    ]
                },
                {
                    "currentInstanceCount": 0,
                    "desiredInstanceCount": 1,
                    "pdpInstances": [],
                    "pdpType": "xacml",
                    "policies": [],
                    "properties": {},
                    "supportedPolicyTypes": [
                        {
                            "name": "onap.policies.controlloop.guard.FrequencyLimiter",
                            "version": "1.0.0"
                        },
                        {
                            "name": "onap.policies.controlloop.guard.MinMax",
                            "version": "1.0.0"
                        },
                        {
                            "name": "onap.policies.controlloop.guard.Blacklist",
                            "version": "1.0.0"
                        },
                        {
                            "name": "onap.policies.controlloop.guard.coordination.FirstBlocksSecond",
                            "version": "1.0.0"
                        },
                        {
                            "name": "onap.Monitoring",
                            "version": "1.0.0"
                        },
                        {
                            "name": "onap.policies.monitoring.*",
                            "version": "1.0.0"
                        },
                        {
                            "name": "onap.policies.optimization.AffinityPolicy",
                            "version": "1.0.0"
                        },
                        {
                            "name": "onap.policies.optimization.DistancePolicy",
                            "version": "1.0.0"
                        },
                        {
                            "name": "onap.policies.optimization.HpaPolicy",
                            "version": "1.0.0"
                        },
                        {
                            "name": "onap.policies.optimization.OptimizationPolicy",
                            "version": "1.0.0"
                        },
                        {
                            "name": "onap.policies.optimization.PciPolicy",
                            "version": "1.0.0"
                        },
                        {
                            "name": "onap.policies.optimization.QueryPolicy",
                            "version": "1.0.0"
                        },
                        {
                            "name": "onap.policies.optimization.SubscriberPolicy",
                            "version": "1.0.0"
                        },
                        {
                            "name": "onap.policies.optimization.Vim_fit",
                            "version": "1.0.0"
                        },
                        {
                            "name": "onap.policies.optimization.VnfPolicy",
                            "version": "1.0.0"
                        }
                    ]
                }
            ],
            "properties": {}
        }
    ]
}

Download Deployments Batch PAP API Swagger

Deployments Update

POST /policy/pap/v1/pdps/deployments/batch

Updates policy deployments within specific PDP groups

  • Description: Updates policy deployments within specific PDP groups, returning optional error details

  • Produces: [‘application/json’, ‘application/yaml’]

Parameters

Name               Position   Description                      Type
X-ONAP-RequestID   header     RequestID for http transaction   string
body               body       List of PDP Group Deployments

Responses

202 - operation accepted

401 - Authentication Error

403 - Authorization Error

500 - Internal Server Error

This operation allows policies to be deployed on specific PDP groups. Each subgroup includes an “action” property, which is used to indicate that the policies are being added (POST) to the subgroup, deleted (DELETE) from the subgroup, or that the subgroup’s entire set of policies is being replaced (PATCH) by a new set of policies. As such, a subgroup may appear more than once in a single request, one time to delete some policies and another time to add new policies to the same subgroup.

Here is a sample request:

{
    "groups": [
        {
            "name": "SampleGroup",
            "deploymentSubgroups": [
                {
                    "pdpType": "apex",
                    "action": "POST",
                    "policies": [
                        {
                            "name": "onap.policies.native.apex.bbs.EastRegion",
                            "version": "1.0.0"
                        }
                    ]
                }
            ]
        }
    ]
}

Here is a sample response:

{
    "message": "Use the policy status url to fetch the latest status. Kindly note that when a policy is successfully undeployed, it will no longer appear in policy status response.",
    "uri": "/policy/pap/v1/policies/status"
}

Download Deploy PAP API Swagger

Deploy Policy

POST /policy/pap/v1/pdps/policies

Deploy or update PDP Policies

  • Description: Deploys or updates PDP Policies, returning optional error details

  • Produces: [‘application/json’]

Parameters

Name               Position   Description                               Type
X-ONAP-RequestID   header     RequestID for http transaction            string
body               body       PDP Policies; only the name is required

Responses

202 - operation accepted

401 - Authentication Error

403 - Authorization Error

500 - Internal Server Error

This operation allows policies to be deployed across all relevant PDP groups. PAP will deploy the specified policies to all relevant subgroups. Only the policies supported by a given subgroup will be deployed to that subgroup.

Note

The policy version is optional. If left unspecified, then the latest version of the policy is deployed. On the other hand, if it is specified, it may be an integer, or it may be a fully qualified version (e.g., “3.0.2”). In addition, a subgroup to which a policy is being deployed must have at least one PDP instance, otherwise the request will be rejected.

Here is a sample request:

{
  "policies": [
    {
      "policy-id": "onap.scaleout.tca",
      "policy-version": 1
    },
    {
      "policy-id": "ControlLoop-vDNS-6f37f56d-a87d-4b85-b6a9-cc953cf779b3"
    },
    {
      "policy-id": "guard.frequency.ControlLoop-vDNS-6f37f56d-a87d-4b85-b6a9-cc953cf779b3"
    },
    {
      "policy-id": "guard.minmax.ControlLoop-vDNS-6f37f56d-a87d-4b85-b6a9-cc953cf779b3"
    }
  ]
}

Here is a sample response:

{
    "message": "Use the policy status url to fetch the latest status. Kindly note that when a policy is successfully undeployed, it will no longer appear in policy status response.",
    "uri": "/policy/pap/v1/policies/status"
}
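
The deployment request above could be submitted along the following lines (a sketch; it assumes the JSON body has been saved locally as deploy.json, a hypothetical file name):

curl -k --user 'healthcheck:zb!XztG34' -X POST \
     -H 'Content-Type: application/json' --data @deploy.json \
     'https://{pap-host}:6969/policy/pap/v1/pdps/policies'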

Download Undeploy PAP API Swagger

Undeploy Policy

DELETE /policy/pap/v1/pdps/policies/{name}

Undeploy a PDP Policy from PDPs

  • Description: Undeploys the latest version of a policy from the PDPs, returning optional error details

  • Produces: [‘application/json’]

Parameters

Name               Position   Description                      Type
X-ONAP-RequestID   header     RequestID for http transaction   string
name               path       PDP Policy Name                  string

Responses

202 - operation accepted

401 - Authentication Error

403 - Authorization Error

500 - Internal Server Error

DELETE /policy/pap/v1/pdps/policies/{name}/versions/{version}

Undeploy version of a PDP Policy from PDPs

  • Description: Undeploys a specific version of a policy from the PDPs, returning optional error details

  • Produces: [‘application/json’]

Parameters

Name               Position   Description                      Type
X-ONAP-RequestID   header     RequestID for http transaction   string
name               path       PDP Policy Name                  string
version            path       PDP Policy Version               string

Responses

202 - operation accepted

401 - Authentication Error

403 - Authorization Error

500 - Internal Server Error

This operation allows policies to be undeployed from PDP groups.

Note

If the policy version is specified, then it may be an integer, or it may be a fully qualified version (e.g., “3.0.2”). On the other hand, if left unspecified, then the latest deployed version will be undeployed.

Note

Due to current limitations, a fully qualified policy version must always be specified.
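
For example, undeploying version 1.0.0 of the onap.scaleout.tca policy (the policy name is taken from the deploy example above; the version is illustrative) might look like this sketch:

curl -k --user 'healthcheck:zb!XztG34' -X DELETE \
     'https://{pap-host}:6969/policy/pap/v1/pdps/policies/onap.scaleout.tca/versions/1.0.0'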

Here is a sample response:

{
    "message": "Use the policy status url to fetch the latest status. Kindly note that when a policy is successfully undeployed, it will no longer appear in policy status response.",
    "uri": "/policy/pap/v1/policies/status"
}

Download Policy Status PAP API Swagger

Policy Status

GET /policy/pap/v1/policies/status

Queries status of policies in all PdpGroups

  • Description: Queries status of policies in all PdpGroups, returning status of policies in all the PDPs belonging to all PdpGroups

  • Produces: [‘application/json’, ‘application/yaml’]

Parameters

Name               Position   Description                      Type
X-ONAP-RequestID   header     RequestID for http transaction   string

Responses

200 - successful operation

401 - Authentication Error

403 - Authorization Error

500 - Internal Server Error

GET /policy/pap/v1/policies/status/{pdpGroupName}

Queries status of policies in a specific PdpGroup

  • Description: Queries status of policies in a specific PdpGroup, returning status of policies in all the PDPs belonging to the PdpGroup

  • Produces: [‘application/json’, ‘application/yaml’]

Parameters

Name               Position   Description                      Type
pdpGroupName       path       Name of the PdpGroup             string
X-ONAP-RequestID   header     RequestID for http transaction   string

Responses

200 - successful operation

401 - Authentication Error

403 - Authorization Error

404 - Resource not found

500 - Internal Server Error

GET /policy/pap/v1/policies/status/{pdpGroupName}/{policyName}

Queries status of all versions of a specific policy in a specific PdpGroup

  • Description: Queries status of all versions of a specific policy in a specific PdpGroup, returning status of all versions of the policy in the PDPs belonging to the PdpGroup

  • Produces: [‘application/json’, ‘application/yaml’]

Parameters

Name               Position   Description                      Type
pdpGroupName       path       Name of the PdpGroup             string
policyName         path       Name of the Policy               string
X-ONAP-RequestID   header     RequestID for http transaction   string

Responses

200 - successful operation

401 - Authentication Error

403 - Authorization Error

404 - Resource not found

500 - Internal Server Error

GET /policy/pap/v1/policies/status/{pdpGroupName}/{policyName}/{policyVersion}

Queries status of a specific version of a specific policy in a specific PdpGroup

  • Description: Queries status of a specific version of a specific policy in a specific PdpGroup, returning status of the policy in the PDPs belonging to the PdpGroup

  • Produces: [‘application/json’, ‘application/yaml’]

Parameters

Name               Position   Description                      Type
pdpGroupName       path       Name of the PdpGroup             string
policyName         path       Name of the Policy               string
policyVersion      path       Version of the Policy            string
X-ONAP-RequestID   header     RequestID for http transaction   string

Responses

200 - successful operation

401 - Authentication Error

403 - Authorization Error

404 - Resource not found

500 - Internal Server Error

This operation allows the status of all policies that are deployed or undeployed to be listed together. The result can be filtered by PDP group name and by policy name and version.

Note

When a policy is successfully undeployed, it will no longer appear in the policy status response.

Here is a sample response:

[
    {
        "pdpGroup": "defaultGroup",
        "pdpType": "apex",
        "pdpId": "policy-apex-pdp-0",
        "policy": {
            "name": "onap.policies.apex.Controlloop",
            "version": "1.0.0"
        },
        "policyType": {
            "name": "onap.policies.native.Apex",
            "version": "1.0.0"
        },
        "deploy": true,
        "state": "SUCCESS"
    },
    {
        "pdpGroup": "defaultGroup",
        "pdpType": "drools",
        "pdpId": "policy-drools-pdp-0",
        "policy": {
            "name": "OPERATIONAL_vFW_CDS_Service_v2_0_Drools_1_0_0_6SN",
            "version": "1.0.0"
        },
        "policyType": {
            "name": "onap.policies.controlloop.operational.common.Drools",
            "version": "1.0.0"
        },
        "deploy": true,
        "state": "SUCCESS"
    }
]

Download Deployed Policy PAP API Swagger

Policy Deployment Status

GET /policy/pap/v1/policies/deployed

Queries status of all deployed policies

  • Description: Queries status of all deployed policies, returning success and failure counts of the PDPs

  • Produces: [‘application/json’, ‘application/yaml’]

Parameters

Name               Position   Description                      Type
X-ONAP-RequestID   header     RequestID for http transaction   string

Responses

200 - successful operation

401 - Authentication Error

403 - Authorization Error

500 - Internal Server Error

GET /policy/pap/v1/policies/deployed/{name}

Queries status of specific deployed policies

  • Description: Queries status of specific deployed policies, returning success and failure counts of the PDPs

  • Produces: [‘application/json’, ‘application/yaml’]

Parameters

Name               Position   Description                      Type
name               path       Policy Id                        string
X-ONAP-RequestID   header     RequestID for http transaction   string

Responses

200 - successful operation

401 - Authentication Error

403 - Authorization Error

500 - Internal Server Error

GET /policy/pap/v1/policies/deployed/{name}/{version}

Queries status of a specific deployed policy

  • Description: Queries status of a specific deployed policy, returning success and failure counts of the PDPs

  • Produces: [‘application/json’, ‘application/yaml’]

Parameters

Name               Position   Description                      Type
name               path       Policy Id                        string
version            path       Policy Version                   string
X-ONAP-RequestID   header     RequestID for http transaction   string

Responses

200 - successful operation

401 - Authentication Error

403 - Authorization Error

500 - Internal Server Error

This operation allows the deployed policies to be listed together with their respective deployment status. The result can be filtered by policy name and version.

Here is a sample response:

[
  {
    "policy-type": "onap.policies.monitoring.tcagen2",
    "policy-type-version": "1.0.0",
    "policy-id": "MICROSERVICE_vFW_CDS_Service_v2_0_app_1_0_0_I95",
    "policy-version": "1.0.0",
    "success-count": 1,
    "failure-count": 0,
    "incomplete-count": 0
  },
  {
    "policy-type": "onap.policies.monitoring.tcagen2",
    "policy-type-version": "1.0.0",
    "policy-id": "MICROSERVICE_vFW_CDS_Service_v2_0_app_1_0_0_WNX",
    "policy-version": "1.0.0",
    "success-count": 1,
    "failure-count": 0,
    "incomplete-count": 0
  },
  {
    "policy-type": "onap.policies.controlloop.operational.common.Drools",
    "policy-type-version": "1.0.0",
    "policy-id": "OPERATIONAL_vFW_CDS_Service_v2_0_Drools_1_0_0_6SN",
    "policy-version": "1.0.0",
    "success-count": 1,
    "failure-count": 0,
    "incomplete-count": 0
  }
]

Download PDP Statistics PAP API Swagger

PDP Statistics

GET /policy/pap/v1/pdps/statistics

Fetch statistics for all PDP Groups and subgroups in the system

  • Description: Returns for all PDP Groups and subgroups statistics of the Policy Administration component

  • Produces: [‘application/json’, ‘application/yaml’]

Parameters

Name               Position   Description                      Type
X-ONAP-RequestID   header     RequestID for http transaction   string
recordCount        query      Record count between 1-100       integer
startTime          query      Start time in epoch timestamp    integer
endTime            query      End time in epoch timestamp      integer

Responses

200 - successful operation

401 - Authentication Error

403 - Authorization Error

500 - Internal Server Error

GET /policy/pap/v1/pdps/statistics/{group}

Fetch current statistics for given PDP Group

  • Description: Returns statistics for given PDP Group of the Policy Administration component

  • Produces: [‘application/json’, ‘application/yaml’]

Parameters

Name               Position   Description                      Type
X-ONAP-RequestID   header     RequestID for http transaction   string
group              path       PDP Group Name                   string
recordCount        query      Record count between 1-100       integer
startTime          query      Start time in epoch timestamp    integer
endTime            query      End time in epoch timestamp      integer

Responses

200 - successful operation

401 - Authentication Error

403 - Authorization Error

500 - Internal Server Error

GET /policy/pap/v1/pdps/statistics/{group}/{type}

Fetch statistics for the specified subgroup

  • Description: Returns statistics for the specified subgroup of the Policy Administration component

  • Produces: [‘application/json’, ‘application/yaml’]

Parameters

Name               Position   Description                      Type
X-ONAP-RequestID   header     RequestID for http transaction   string
group              path       PDP Group Name                   string
type               path       PDP SubGroup type                string
recordCount        query      Record count between 1-100       integer
startTime          query      Start time in epoch timestamp    integer
endTime            query      End time in epoch timestamp      integer

Responses

200 - successful operation

401 - Authentication Error

403 - Authorization Error

500 - Internal Server Error

GET /policy/pap/v1/pdps/statistics/{group}/{type}/{pdp}

Fetch statistics for the specified pdp

  • Description: Returns statistics for the specified pdp of the Policy Administration component

  • Produces: [‘application/json’, ‘application/yaml’]

Parameters

Name               Position   Description                      Type
X-ONAP-RequestID   header     RequestID for http transaction   string
group              path       PDP Group Name                   string
type               path       PDP SubGroup type                string
pdp                path       PDP Instance name                string
recordCount        query      Record count between 1-100       integer
startTime          query      Start time in epoch timestamp    integer
endTime            query      End time in epoch timestamp      integer

Responses

200 - successful operation

401 - Authentication Error

403 - Authorization Error

500 - Internal Server Error

This operation allows the PDP statistics to be retrieved for all registered PDPs. The result can be filtered by PDP group, PDP subgroup, and PDP instance, with record count, start time, and end time available as query parameters.
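
For example, fetching the three most recent statistics records for the apex subgroup of defaultGroup (names taken from the sample response below) might look like this sketch:

curl -k --user 'healthcheck:zb!XztG34' \
     'https://{pap-host}:6969/policy/pap/v1/pdps/statistics/defaultGroup/apex?recordCount=3'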

Here is a sample response:

{
  "defaultGroup": {
    "apex": [
      {
        "pdpInstanceId": "dev-policy-apex-pdp-0",
        "timeStamp": "Apr 29, 2020, 6:15:29 PM",
        "pdpGroupName": "defaultGroup",
        "pdpSubGroupName": "apex",
        "policyDeployCount": 0,
        "policyDeploySuccessCount": 0,
        "policyDeployFailCount": 0,
        "policyExecutedCount": 0,
        "policyExecutedSuccessCount": 0,
        "policyExecutedFailCount": 0,
        "engineStats": []
      },
      {
        "pdpInstanceId": "dev-policy-apex-pdp-0",
        "timeStamp": "Apr 29, 2020, 6:17:29 PM",
        "pdpGroupName": "defaultGroup",
        "pdpSubGroupName": "apex",
        "policyDeployCount": 0,
        "policyDeploySuccessCount": 0,
        "policyDeployFailCount": 0,
        "policyExecutedCount": 0,
        "policyExecutedSuccessCount": 0,
        "policyExecutedFailCount": 0,
        "engineStats": []
      },
      {
        "pdpInstanceId": "dev-policy-apex-pdp-0",
        "timeStamp": "Apr 29, 2020, 6:19:29 PM",
        "pdpGroupName": "defaultGroup",
        "pdpSubGroupName": "apex",
        "policyDeployCount": 0,
        "policyDeploySuccessCount": 0,
        "policyDeployFailCount": 0,
        "policyExecutedCount": 0,
        "policyExecutedSuccessCount": 0,
        "policyExecutedFailCount": 0,
        "engineStats": []
      }
    ]
  }
}

Download Policy Audit PAP API Swagger

Policy Audit

GET /policy/pap/v1/policies/audit

Queries audit information for all the policies

  • Description: Queries audit information for all the policies, returning audit information for all the policies in the database

  • Produces: [‘application/json’, ‘application/yaml’]

Parameters

Name               Position   Description                      Type
X-ONAP-RequestID   header     RequestID for http transaction   string
recordCount        query      Record count between 1-100       integer
startTime          query      Start time in epoch timestamp    integer
endTime            query      End time in epoch timestamp      integer

Responses

200 - successful operation

401 - Authentication Error

403 - Authorization Error

500 - Internal Server Error

GET /policy/pap/v1/policies/audit/{pdpGroupName}

Queries audit information for all the policies in a PdpGroup

  • Description: Queries audit information for all the policies in a PdpGroup, returning audit information for all the policies belonging to the PdpGroup

  • Produces: [‘application/json’, ‘application/yaml’]

Parameters

Name               Position   Description                      Type
X-ONAP-RequestID   header     RequestID for http transaction   string
recordCount        query      Record count between 1-100       integer
startTime          query      Start time in epoch timestamp    integer
endTime            query      End time in epoch timestamp      integer
pdpGroupName       path       PDP Group Name                   string

Responses

200 - successful operation

401 - Authentication Error

403 - Authorization Error

500 - Internal Server Error

GET /policy/pap/v1/policies/audit/{policyName}/{policyVersion}

Queries audit information for a specific version of a policy

  • Description: Queries audit information for a specific version of a policy, returning audit information for the policy

  • Produces: [‘application/json’, ‘application/yaml’]

Parameters

Name               Position   Description                      Type
X-ONAP-RequestID   header     RequestID for http transaction   string
recordCount        query      Record count between 1-100       integer
startTime          query      Start time in epoch timestamp    integer
endTime            query      End time in epoch timestamp      integer
policyName         path       Policy Name                      string
policyVersion      path       Policy Version                   string

Responses

200 - successful operation

401 - Authentication Error

403 - Authorization Error

500 - Internal Server Error

GET /policy/pap/v1/policies/audit/{pdpGroupName}/{policyName}/{policyVersion}

Queries audit information for a specific version of a policy in a PdpGroup

  • Description: Queries audit information for a specific version of a policy in a PdpGroup, returning audit information for the policy belonging to the PdpGroup

  • Produces: [‘application/json’, ‘application/yaml’]

Parameters

Name               Position   Description                      Type
X-ONAP-RequestID   header     RequestID for http transaction   string
recordCount        query      Record count between 1-100       integer
startTime          query      Start time in epoch timestamp    integer
endTime            query      End time in epoch timestamp      integer
pdpGroupName       path       PDP Group Name                   string
policyName         path       Policy Name                      string
policyVersion      path       Policy Version                   string

Responses

200 - successful operation

401 - Authentication Error

403 - Authorization Error

500 - Internal Server Error

This operation allows the audit records of policies to be listed together. The result can be filtered by PDP group name and by policy name and version, with record count, start time, and end time available as query parameters.
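
For example, fetching the two most recent audit records for defaultGroup (the group name is taken from the sample response below) might look like this sketch:

curl -k --user 'healthcheck:zb!XztG34' \
     'https://{pap-host}:6969/policy/pap/v1/policies/audit/defaultGroup?recordCount=2'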

Here is a sample response:

[
    {
        "auditId": 123,
        "pdpGroup": "defaultGroup",
        "pdpType": "apex",
        "policy": {
            "name": "onap.policies.apex.Controlloop",
            "version": "1.0.0"
        },
        "action": "DEPLOYMENT",
        "timestamp": "2021-07-27T13:25:15Z",
        "user": "test"
    },
    {
        "auditId": 456,
        "pdpGroup": "defaultGroup",
        "pdpType": "drools",
        "policy": {
            "name": "operational.modifyconfig",
            "version": "1.0.0"
        },
        "action": "UNDEPLOYMENT",
        "timestamp": "2021-07-27T13:15:15Z",
        "user": "test"
    }
]

3 Configuration

3.1 Disable collection of PDP Statistics

This configuration tells PAP not to save PDP statistics in the database.

In config.json, add or change the property savePdpStatisticsInDb to false.

Note

By default, if the property is not present, it is considered false and PDP statistics are not saved in the database.
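
A minimal sketch of the relevant fragment of config.json, with all other properties omitted:

{
    "savePdpStatisticsInDb": false
}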

4 Future Features

4.1 Disable policies in PDP

This operation will allow individual policies running in a PDP engine to be disabled. It is mainly beneficial in scenarios where network operators or administrators want to disable a particular policy in a PDP engine for a period of time, due to a failure in the system or for scheduled maintenance.

End of Document

Decision API

The Decision API is used by ONAP components that enforce policies and need a decision on which policy to enforce in a specific situation. The Decision API closely mimics the XACML request standard in that it supports a subject, an action, and a resource.

When the PAP activates an xacml-pdp, the decision API becomes available. Conversely, when the PAP deactivates an xacml-pdp, the decision API is disabled. The decision API is enabled/disabled by the PDP-STATE-CHANGE messages from PAP. If a request is made to the decision API while it is deactivated, a “404 - Not Found” error will be returned.

Field           Required   XACML equivalent   Description
ONAPName        True       subject            The name of the ONAP project making the call
ONAPComponent   True       subject            The name of the ONAP sub component making the call
ONAPInstance    False      subject            An optional instance ID for that sub component
action          True       action             The action being performed
resource        True       resource           An object specific to the action that contains properties describing the resource
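
Putting the fields together, a decision request body might look like the following sketch. The values are illustrative, and the shape of the resource object depends on the action being performed:

{
    "ONAPName": "DCAE",
    "ONAPComponent": "PolicyHandler",
    "ONAPInstance": "622431a4-9dea-4eae-b443-3b2164639c64",
    "action": "configure",
    "resource": {
        "policy-id": "onap.scaleout.tca"
    }
}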

It is worth noting that we use basic authorization for API access, with the username and password set to healthcheck and zb!XztG34 respectively. Also, the new APIs support both HTTP and HTTPS.

For every API call, the client is encouraged to insert a UUID-type requestID as a parameter. It is helpful for tracking each HTTP transaction and facilitates debugging. Most importantly, it complies with Logging requirements v1.2. If the client does not provide a requestID in the API call, one will be randomly generated and attached to the response header x-onap-requestid.

In accordance with ONAP API Common Versioning Strategy Guidelines, in the response of each API call, several custom headers are added:

x-latestversion: 1.0.0
x-minorversion: 0
x-patchversion: 0
x-onap-requestid: e1763e61-9eef-4911-b952-1be1edd9812b

x-latestversion is used only to communicate an API’s latest version.

x-minorversion is used to request or communicate a MINOR version back from the client to the server, and from the server back to the client.

x-patchversion is used only to communicate a PATCH version in a response for troubleshooting purposes, and will be provided to the client on request.

x-onap-requestid is used to track REST transactions for logging purpose, as described above.

Download the Decision API Swagger

HealthCheck

GET /policy/pdpx/v1/healthcheck

Perform a system healthcheck

  • Description: Provides healthy status of the Policy Xacml PDP component

  • Consumes: [‘application/json’, ‘application/yaml’]

  • Produces: [‘application/json’, ‘application/yaml’]

Parameters

Name               Position   Description                      Type
X-ONAP-RequestID   header     RequestID for http transaction   string

Responses

200 - successful operation

401 - Authentication Error

403 - Authorization Error

500 - Internal Server Error

Decision

POST /policy/pdpx/v1/xacml

Fetch the decision using specified decision parameters

  • Description: Returns the policy decision from Policy Xacml PDP

  • Consumes: [‘application/xacml+json’, ‘application/xacml+xml’]

  • Produces: [‘application/xacml+json’, ‘application/xacml+xml’]

Parameters

Name               Position   Description                      Type
body               body
X-ONAP-RequestID   header     RequestID for http transaction   string

Responses

200 - successful operation

400 - Bad Request

401 - Authentication Error

403 - Authorization Error

500 - Internal Server Error

POST /policy/pdpx/v1/decision

Fetch the decision using specified decision parameters

  • Description: Returns the policy decision from Policy Xacml PDP

  • Consumes: [‘application/json’, ‘application/yaml’]

  • Produces: [‘application/json’, ‘application/yaml’]

Parameters

Name               Position   Description                      Type
body               body
X-ONAP-RequestID   header     RequestID for http transaction   string

Responses

200 - successful operation

400 - Bad Request

401 - Authentication Error

403 - Authorization Error

500 - Internal Server Error

Statistics

GET /policy/pdpx/v1/statistics

Fetch current statistics

  • Description: Provides current statistics of the Policy Xacml PDP component

  • Consumes: [‘application/json’, ‘application/yaml’]

  • Produces: [‘application/json’, ‘application/yaml’]

Parameters

Name               Position   Description                      Type
X-ONAP-RequestID   header     RequestID for http transaction   string

Responses

200 - successful operation

401 - Authentication Error

403 - Authorization Error

500 - Internal Server Error

End of Document

Postman Environment for API Testing

The following environment file from Postman can be used for testing the APIs. All you need to do is fill in the IP and port information for the installation that you have created.

link

Postman Collection for API Testing

Postman collection for Policy Framework Lifecycle API

Postman collection for Policy Framework Administration API

Postman collection for Policy Framework Decision API

API Swagger Generation

The standard for API definition in the RESTful API world is the OpenAPI Specification (OAS). The OAS, which is based on the original “Swagger Specification”, is widely used in API development.

Execute the curl command below to generate the swagger document, filling in the authorization details, IP, and port information:

curl -k --user {user_id}:{password} https://{ip}:{port}/swagger.json

Policy Component Installation

Policy OOM Installation

Policy OOM Charts

The policy K8S charts are located in the OOM repository.

Please refer to the OOM documentation on how to install and deploy ONAP.

Policy Pods

To get a listing of the Policy Pods, run the following command:

kubectl get pods -n onap | grep dev-policy

dev-policy-59684c7b9c-5gd6r                        2/2     Running            0          8m41s
dev-policy-apex-pdp-0                              1/1     Running            0          8m41s
dev-policy-api-56f55f59c5-nl5cg                    1/1     Running            0          8m41s
dev-policy-distribution-54cc59b8bd-jkg5d           1/1     Running            0          8m41s
dev-policy-mariadb-0                               1/1     Running            0          8m41s
dev-policy-xacml-pdp-765c7d58b5-l6pr7              1/1     Running            0          8m41s

Note

To get a listing of the Policy services, run this command: kubectl get svc -n onap | grep policy

Accessing Policy Containers

Accessing the policy docker containers is the same as for any kubernetes container. Here is an example:

kubectl -n onap exec -it dev-policy-policy-xacml-pdp-584844b8cf-9zptx bash

Installing or Upgrading Policy

The assumption is you have cloned the charts from the OOM repository into a local directory.

Step 1: Go into your local copy of the OOM charts

From your local copy, edit any of the values.yaml files in the policy tree to make desired changes.

The policy schema will be installed automatically as part of the database configuration using db-migrator. By default the policy schema is upgraded to the latest version. For more information on how to change the db-migrator setup please see: Using Policy DB Migrator.

Step 2: Build the charts

make policy
make SKIP_LINT=TRUE onap

Note

SKIP_LINT is only to reduce the “make” time

Step 3: Undeploy Policy

After undeploying policy, loop on monitoring the policy pods until they go away.

helm undeploy dev-policy
kubectl get pods -n onap | grep dev-policy

Step 4: Re-deploy the Policy pods

After deploying policy, loop on monitoring the policy pods until they come up.

helm deploy dev-policy local/onap --namespace onap
kubectl get pods -n onap | grep dev-policy

Note

If you want to purge the existing data and start with a clean install, please follow these steps after undeploying:

Step 1: Delete the NFS persisted data for Policy

rm -fr /dockerdata-nfs/dev/policy

Step 2: Make sure there is no orphan policy database persistent volume or claim.

First, find if there is an orphan database PV or PVC with the following commands:

kubectl get pvc -n onap | grep policy
kubectl get pv -n onap | grep policy

If there are any orphan resources, delete them with:

kubectl delete pvc <orphan-policy-mariadb-resource>
kubectl delete pv <orphan-policy-mariadb-resource>

Restarting a faulty component

Each policy component can be restarted independently by issuing the following command:

kubectl delete pod <policy-pod> -n onap

Exposing ports

For security reasons, the ports for the policy containers are configured as ClusterIP and thus not exposed. If you find you need those ports in a development environment, then the following will expose them.

kubectl -n onap expose service policy-api --port=7171 --target-port=6969 --name=api-public --type=NodePort

Overriding certificate stores

Policy components package default key and trust stores that support https-based communication with other AAF-enabled ONAP components. Each store can be overridden at installation time.

To override a default keystore, the new certificate store (policy-keystore) file should be placed at the appropriate helm chart locations below:

  • oom/kubernetes/policy/charts/drools/resources/secrets/policy-keystore: drools PDP keystore override.

  • oom/kubernetes/policy/charts/policy-apex-pdp/resources/config/policy-keystore: apex PDP keystore override.

  • oom/kubernetes/policy/charts/policy-api/resources/config/policy-keystore: api keystore override.

  • oom/kubernetes/policy/charts/policy-distribution/resources/config/policy-keystore: distribution keystore override.

  • oom/kubernetes/policy/charts/policy-pap/resources/config/policy-keystore: pap keystore override.

  • oom/kubernetes/policy/charts/policy-xacml-pdp/resources/config/policy-keystore: xacml PDP keystore override.

In the event that the truststore (policy-truststore) needs to be overridden as well, place it at the appropriate location below:

  • oom/kubernetes/policy/charts/drools/resources/configmaps/policy-truststore: drools PDP truststore override.

  • oom/kubernetes/policy/charts/policy-apex-pdp/resources/config/policy-truststore: apex PDP truststore override.

  • oom/kubernetes/policy/charts/policy-api/resources/config/policy-truststore: api truststore override.

  • oom/kubernetes/policy/charts/policy-distribution/resources/config/policy-truststore: distribution truststore override.

  • oom/kubernetes/policy/charts/policy-pap/resources/config/policy-truststore: pap truststore override.

  • oom/kubernetes/policy/charts/policy-xacml-pdp/resources/config/policy-truststore: xacml PDP truststore override.

When the keystore passwords are changed, the corresponding component configuration should also change:

  • oom/kubernetes/policy/charts/drools/values.yaml

  • oom/kubernetes/policy-apex-pdp/resources/config/config.json

  • oom/kubernetes/policy-distribution/resources/config/config.json

This procedure is applicable to an installation that requires either AAF or non-AAF derived certificates. The reader is referred to the AAF documentation when new AAF-compliant keystores are desired.

After these changes, follow the procedures in the Installing or Upgrading Policy section to make usage of the new stores effective.

Additional PDP-D Customizations

Credentials and other configuration parameters can be set as values when deploying the policy (drools) subchart. Please refer to PDP-D Default Values for the current default values. It is strongly recommended that sensitive information is secured appropriately before using in production.

Additional customization can be applied to the PDP-D. Custom configuration goes under the “resources” directory of the drools subchart (oom/kubernetes/policy/charts/drools/resources). This requires rebuilding the policy subchart (see section Installing or Upgrading Policy).

Configuration is done by adding or modifying configmaps and/or secrets. Configmaps are placed under “drools/resources/configmaps”, and secrets under “drools/resources/secrets”.

Custom configuration supports these types of files:

  • *.conf files to support additional environment configuration.

  • features*.zip to add additional custom features.

  • *.pre.sh scripts to be executed before starting the PDP-D process.

  • *.post.sh scripts to be executed after starting the PDP-D process.

  • policy-keystore to override the PDP-D policy-keystore.

  • policy-truststore to override the PDP-D policy-truststore.

  • aaf-cadi.keyfile to override the PDP-D AAF key.

  • *.properties to override or add properties files.

  • *.xml to override or add xml configuration files.

  • *.json to override json configuration files.

  • *settings.xml to override maven repositories configuration.

Examples

To disable AAF, simply override the “aaf.enabled” value when deploying the helm chart (see the OOM installation instructions mentioned above).

To override the PDP-D keystore or truststore, add suitable replacement(s) under “drools/resources/secrets”. Modify the drools chart values.yaml with the new credentials, and follow the procedures described at Installing or Upgrading Policy to redeploy the chart.

To disable https for the DMaaP configuration topic, add a copy of engine.properties with “dmaap.source.topics.PDPD-CONFIGURATION.https” set to “false”, or alternatively create a “.pre.sh” script (see above) that edits this file before the PDP-D is started.
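
A hypothetical pre-start script along these lines could perform that edit (a sketch; it assumes engine.properties lives under $POLICY_HOME/config, as the properties files in the noop example below do):

#!/bin/bash
# Hypothetical https-off.pre.sh: switch the PDPD-CONFIGURATION topic to http
sed -i 's/^dmaap.source.topics.PDPD-CONFIGURATION.https=true$/dmaap.source.topics.PDPD-CONFIGURATION.https=false/' \
    $POLICY_HOME/config/engine.properties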

To use noop topics for standalone testing, add a “noop.pre.sh” script under oom/kubernetes/policy/charts/drools/resources/configmaps/:

#!/bin/bash
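# Replace dmaap-based topic configuration with noop topics in all PDP-D property files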
sed -i "s/^dmaap/noop/g" $POLICY_HOME/config/*.properties


Policy Docker Installation

Building the ONAP Policy Framework Docker Images

The instructions here are based on the instructions in the file ~/git/onap/policy/docker/README.md.

Step 1: Build the Policy API Docker image

cd ~/git/onap/policy/api/packages
mvn clean install -P docker

Step 2: Build the Policy PAP Docker image

cd ~/git/onap/policy/pap/packages
mvn clean install -P docker

Step 3: Build the Drools PDP docker image.

This image is a standalone vanilla Drools engine, which does not contain any pre-built drools rules or applications.

cd ~/git/onap/policy/drools-pdp/
mvn clean install -P docker

Step 4: Build the Drools Application Control Loop image.

This image has the drools use case application and the supporting software built together with the Drools PDP engine. It is recommended to use this image if you are first working with ONAP Policy and wish to test or learn how the use cases work.

cd ~/git/onap/policy/drools-applications
mvn clean install -P docker

Step 5: Build the Apex PDP docker image:

cd ~/git/onap/policy/apex-pdp
mvn clean install -P docker

Step 6: Build the XACML PDP docker image:

cd ~/git/onap/policy/xacml-pdp/packages
mvn clean install -P docker

Step 7: Build the policy engine docker image (If working with the legacy Policy Architecture/API):

cd ~/git/onap/policy/engine/
mvn clean install -P docker

Step 8: Build the Policy SDC Distribution docker image:

cd ~/git/onap/policy/distribution/packages
mvn clean install -P docker

Starting the ONAP Policy Framework Docker Images

In order to run the containers, you can use docker-compose. This uses the docker-compose.yml file to bring up the ONAP Policy Framework. This file is located in the policy/docker repository.

Step 1: Set the environment variable MTU to be a suitable MTU size for the application.

export MTU=9126

Step 2: Determine if you want the legacy Policy Engine to have policies pre-loaded or not. By default, all the configuration and operational policies will be pre-loaded by the docker compose script. If you do not wish for that to happen, then export this variable:

Note

This applies ONLY to the legacy Engine and not the Policy Lifecycle policies

export PRELOAD_POLICIES=false

Step 3: Run the system using docker-compose. Note that on some systems you may have to run the docker-compose command as root or using sudo, and that it can take a number of minutes to complete on a laptop or desktop computer.

docker-compose up -d

You now have a full standalone ONAP Policy framework up and running!

Policy Platform Development

Policy Platform Development Tools

This article explains how to build the ONAP Policy Framework for development purposes and how to run stability/performance tests for a variety of components. To start, the developer should consult the latest ONAP Wiki to familiarize themselves with developer best practices and how-tos to setup their environment, see https://wiki.onap.org/display/DW/Developer+Best+Practices.

This article assumes that:

  • You are using a *nix operating system such as linux or macOS.

  • You are using a directory called git off your home directory (~/git) for your git repositories

  • Your local maven repository is in the location ~/.m2/repository

  • You have copied the settings.xml from oparent to ~/.m2/ directory

  • You have added settings to access the ONAP Nexus to your M2 configuration, see Maven Settings Example (bottom of the linked page)

The procedure documented in this article has been verified to work on a MacBook laptop running macOS Mojave Version 10.14.6 and an Ubuntu 18.04 VM.

Cloning All The Policy Repositories

Run a script such as the script below to clone the required modules from the ONAP git repository. This script clones all the ONAP Policy Framework repositories.

The ONAP Policy Framework has dependencies on the ONAP Parent oparent module, the ONAP ECOMP SDK ecompsdkos module, and the A&AI Schema module.

Typical ONAP Policy Framework Clone Script
#!/usr/bin/env bash

## script name for output
MOD_SCRIPT_NAME=`basename $0`

## the ONAP clone directory, defaults to "onap"
clone_dir="onap"

## the ONAP repos to clone
onap_repos="\
policy/parent \
policy/common \
policy/models \
policy/docker \
policy/api \
policy/pap \
policy/apex-pdp \
policy/drools-pdp \
policy/drools-applications \
policy/xacml-pdp \
policy/distribution \
policy/gui \
policy/clamp "

##
## Help screen and exit condition (i.e. too few arguments)
##
Help()
{
    echo ""
    echo "$MOD_SCRIPT_NAME - clones all required ONAP git repositories"
    echo ""
    echo "       Usage:  $MOD_SCRIPT_NAME [-options]"
    echo ""
    echo "       Options"
    echo "         -d          - the ONAP clone directory, defaults to 'onap'"
    echo "         -h          - this help screen"
    echo ""
    exit 255;
}

##
## read command line
##
while [ $# -gt 0 ]
do
    case $1 in
        #-d ONAP clone directory
        -d)
            shift
            if [ -z "$1" ]; then
                echo "$MOD_SCRIPT_NAME: no clone directory"
                exit 1
            fi
            clone_dir=$1
            shift
        ;;

        #-h prints help and exits
        -h)
            Help;exit 0;;

        *)    echo "$MOD_SCRIPT_NAME: undefined CLI option - $1"; exit 255;;
    esac
done

if [ -f "$clone_dir" ]; then
    echo "$MOD_SCRIPT_NAME: requested clone directory '$clone_dir' exists as file"
    exit 2
fi
if [ -d "$clone_dir" ]; then
    echo "$MOD_SCRIPT_NAME: requested clone directory '$clone_dir' exists as directory"
    exit 2
fi

mkdir "$clone_dir"
if [ $? != 0 ]
then
    echo cannot clone ONAP repositories, could not create directory '"'$clone_dir'"'
    exit 3
fi

for repo in $onap_repos
do
    ## each repo is cloned under its parent directory, e.g. policy/api -> $clone_dir/policy/api
    repoDir=`dirname "$repo"`
    repoName=`basename "$repo"`

    if [ ! -z "$repoDir" ]
    then
        mkdir -p "$clone_dir/$repoDir"
        if [ $? != 0 ]
        then
            echo cannot clone ONAP repositories, could not create directory '"'$clone_dir/$repoDir'"'
            exit 4
        fi
    fi

    git clone https://gerrit.onap.org/r/${repo} $clone_dir/$repo
done

echo ONAP has been cloned into '"'$clone_dir'"'

Execution of the script above results in the following directory hierarchy in your ~/git directory:

  • ~/git/onap

  • ~/git/onap/policy

  • ~/git/onap/policy/parent

  • ~/git/onap/policy/common

  • ~/git/onap/policy/models

  • ~/git/onap/policy/api

  • ~/git/onap/policy/pap

  • ~/git/onap/policy/gui

  • ~/git/onap/policy/docker

  • ~/git/onap/policy/drools-applications

  • ~/git/onap/policy/drools-pdp

  • ~/git/onap/policy/clamp

  • ~/git/onap/policy/apex-pdp

  • ~/git/onap/policy/xacml-pdp

  • ~/git/onap/policy/distribution

Building ONAP Policy Framework Components

Step 1: Optionally, for a completely clean build, remove the ONAP built modules from your local repository.

rm -fr ~/.m2/repository/org/onap

Step 2: A pom such as the one below can be used to build the ONAP Policy Framework modules. Create the pom.xml file in the directory ~/git/onap/policy.

Typical pom.xml to build the ONAP Policy Framework
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>org.onap</groupId>
    <artifactId>onap-policy</artifactId>
    <version>1.0.0-SNAPSHOT</version>
    <packaging>pom</packaging>
    <name>${project.artifactId}</name>
    <inceptionYear>2017</inceptionYear>
    <organization>
        <name>ONAP</name>
    </organization>

    <modules>
        <module>parent</module>
        <module>common</module>
        <module>models</module>
        <module>api</module>
        <module>pap</module>
        <module>apex-pdp</module>
        <module>xacml-pdp</module>
        <module>drools-pdp</module>
        <module>drools-applications</module>
        <module>distribution</module>
        <module>gui</module>
        <module>clamp</module>
    </modules>
</project>

Policy Architecture/API Transition

In Dublin, a new Policy Architecture was introduced. The legacy architecture runs in parallel with the new architecture and will be deprecated after the Frankfurt release. If the developer is only interested in working with the new architecture components, the engine sub-module can be omitted.

Step 3: You can now build the Policy framework.

Java artifacts only:

cd ~/git/onap/policy
mvn clean install

With docker images:

cd ~/git/onap/policy
mvn clean install -P docker

Developing and Debugging each Policy Component

Running a MariaDb Instance

The Policy Framework requires a running MariaDb instance. The easiest way to provide one is to run a docker image locally.
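For instance, a minimal sketch of running MariaDB directly with Docker; the container name, root password and database name below are illustrative placeholders, not project defaults:

# Run a local MariaDB container; all values are example placeholders
docker run --name policy-mariadb \
 -p 3306:3306 \
 -e MYSQL_ROOT_PASSWORD=my-secret-pw \
 -e MYSQL_DATABASE=policyadmin \
 -d mariadb:10.5.8

The project also ships ready-made setups, as the following examples show.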

One example of how to do this is to use the scripts used by the policy/api S3P tests.

Simulator Setup Script Example

cd ~/git/onap/policy/api/testsuites/stability/src/main/resources/simulatorsetup
./setup_components.sh

Another example of how to run MariaDb is to use the docker compose file used by the Policy API CSITs:

Example Compose Script to run MariaDB

Running the API component standalone

Assuming you have successfully built the codebase using the instructions above, the only requirement for the API component to run is a running MariaDb database instance. The easiest way to do this is to run the docker image; please see the mariadb documentation for the latest information on doing so. Once mariadb is up and running, a configuration file must be provided to the api so that it knows how to connect to the database. You can locate the default configuration file in the packaging of the api component:

Default API Configuration

You will want to change the “host”, “port” and “databaseUrl” fields to match your local environment settings.

Running the API component using Docker Compose

An example of running the api using a docker compose script is located in the Policy Integration CSIT test repository.

Policy CSIT API Docker Compose

Running the Smoke Tests

The following links contain instructions on how to run the smoke tests. These may be helpful to developers to become familiar with the Policy Framework components and test any local changes.

CLAMP GUI Smoke Tests
1. Introduction

The CLAMP GUI for Control Loops is designed to provide a user the ability to interact with the Control Loop Runtime to perform several actions.

  • Commission new Tosca Service Templates.

  • Edit Common Properties.

  • Decommission existing Tosca Service Templates.

  • Create new instances of Control Loops.

  • Change the state of the Control Loops.

  • Delete Control Loops.

This document will serve as a guide to do smoke tests on the different components that are involved when working with the GUI and outline how they operate. It will also show a developer how to set up their environment for carrying out smoke tests on the GUI.

2. Setup Guide

This section will show the developer how to set up their environment and begin testing the GUI, with some instructions on how to carry out the tests. There are a number of prerequisites. Note that this guide is written by a Linux user - although the majority of the steps shown will be exactly the same on Windows or other systems. The IDE used in the examples here is IntelliJ, but most or all of what is described should be the same across IDEs.

2.1 Prerequisites
2.2 Assumptions
  • You are accessing the policy repositories through gerrit

  • You are using “git review”.

The following repositories are required for development in this project. These repositories should be present on your machine and you should run “mvn clean install” on all of them so that the packages are present in your .m2 repository.

  • policy/parent

  • policy/common

  • policy/models

  • policy/clamp

  • policy/docker

  • policy/gui

  • policy/api

In this setup guide, we will be setting up all the components technically required for a convenient working dev environment. We will not be setting up all of the participants - we will set up only the policy participant as an example.

2.3 Setting up the components
2.3.3 MariaDB Setup

We will be using Docker to run our mariadb instance. It will have a total of three databases running in it.

  • controlloop: the runtime-controlloop db

  • cldsdb4: the clamp backend db

  • policyadmin: the policy-api db

The easiest way to do this is to perform a small alteration on an SQL script provided by the clamp backend in the file “runtime/extra/sql/bulkload/create-db.sql”

CREATE DATABASE `cldsdb4`;
USE `cldsdb4`;
DROP USER 'clds';
CREATE USER 'clds';
GRANT ALL on cldsdb4.* to 'clds' identified by 'sidnnd83K' with GRANT OPTION;
CREATE DATABASE `controlloop`;
USE `controlloop`;
DROP USER 'policy';
CREATE USER 'policy';
GRANT ALL on controlloop.* to 'policy' identified by 'P01icY' with GRANT OPTION;
CREATE DATABASE `policyadmin`;
USE `policyadmin`;
DROP USER 'policy_user';
CREATE USER 'policy_user';
GRANT ALL on policyadmin.* to 'policy_user' identified by 'policy_user' with GRANT OPTION;
FLUSH PRIVILEGES;

Once this has been done, we can run the bash script provided here: “runtime/extra/bin-for-dev/start-db.sh”

./start-db.sh

This will set up all three databases. It will also ensure that the tables in cldsdb4 are created. The database will be exposed locally on port 3306 and will be backed by an anonymous docker volume.
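Since the database is exposed on port 3306, you can sanity-check the setup from the host with any MySQL-compatible client (assuming one is installed locally; the credentials are those created by the SQL script above):

# List the databases visible to the 'policy' user created above
mysql -h 127.0.0.1 -P 3306 -u policy -pP01icY -e "SHOW DATABASES;"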

2.3.4 DMAAP Simulator

For convenience, a dmaap simulator has been provided in the policy/models repository. To start the simulator, you can do the following:

  1. Navigate to /models-sim/policy-models-simulators in the policy/models repository.

  2. Add a configuration file to src/test/resources with the following contents:

{
   "dmaapProvider":{
      "name":"DMaaP simulator",
      "topicSweepSec":900
   },
   "restServers":[
      {
         "name":"DMaaP simulator",
         "providerClass":"org.onap.policy.models.sim.dmaap.rest.DmaapSimRestControllerV1",
         "host":"localhost",
         "port":3904,
         "https":false
      }
   ]
}
  3. You can then start dmaap with:

mvn exec:java  -Dexec.mainClass=org.onap.policy.models.simulators.Main -Dexec.args="src/test/resources/YOUR_CONF_FILE.json"

At this stage the dmaap simulator should be running on your local machine on port 3904.
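As a quick sanity check, you can publish and consume a test message, assuming the simulator exposes the standard DMaaP Message Router /events API (the topic, consumer group and consumer id below are arbitrary examples):

# Publish a test message; the topic is created on first use
curl -s -X POST -H "Content-Type: application/json" \
  -d '{"test":"message"}' http://localhost:3904/events/SMOKE-TEST

# Consume it back with an arbitrary consumer group and id
curl -s http://localhost:3904/events/SMOKE-TEST/g1/c1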

2.3.5 Policy API

In the policy-api repo, you should find the file “src/main/resources/etc/defaultConfig.json”. This file must be altered slightly - as below, with the restServerParameters and databaseProviderParameters shown. Note how the database parameters match up with what you set up in Mariadb:

{
    "restServerParameters": {
        "host": "0.0.0.0",
        "port": 6970,
        "userName": "healthcheck",
        "password": "zb!XztG34",
        "prometheus": true,
        "https": false,
        "aaf": false
    },
    "databaseProviderParameters": {
        "name": "PolicyProviderParameterGroup",
        "implementation": "org.onap.policy.models.provider.impl.DatabasePolicyModelsProviderImpl",
        "databaseDriver": "org.mariadb.jdbc.Driver",
        "databaseUrl": "jdbc:mariadb://mariadb:3306/policyadmin",
        "databaseUser": "policy_user",
        "databasePassword": "policy_user",
        "persistenceUnit": "PolicyMariaDb"
    }
}

Next, navigate to the “/main” directory. You can then run the following command to start the policy api:

mvn exec:java -Dexec.mainClass=org.onap.policy.api.main.startstop.Main -Dexec.args=" -c ../packages/policy-api-tarball/src/main/resources/etc/defaultConfig.json"
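Once the API is up, a healthcheck call using the credentials from defaultConfig.json above should return a healthy response (the basePath shown is the standard policy-api healthcheck path):

# Healthcheck against the REST port configured above (6970, http)
curl -s -u 'healthcheck:zb!XztG34' http://localhost:6970/policy/api/v1/healthcheck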
2.3.6 Clamp Backend

The Clamp Backend can potentially make calls to policy pap, policy api, cds, sdc and others. For controlloop development purposes, we only need to connect with the controlloop runtime api. For convenience, an emulator has been provided to respond to requests from Clamp to all those services that we do not care about. This emulator can be run via the bash script “runtime/extra/bin-for-dev/start-emulator.sh”:

./start-emulator.sh

Once the emulator is running, we can then run the clamp backend. Before doing this, we need to make sure that all of the calls from the clamp backend are directed towards the correct places, which we can do by editing the application-noaaf.properties file: “src/main/resources/application-noaaf.properties”. For development purposes, and because we are running the components in a non-https way, this file currently does not need to be altered. The clamp backend can then be run with the script “runtime/extra/bin-for-dev/start-backend.sh”:

./start-backend.sh

Once the clamp backend is running, we can start the controlloop runtime.

2.3.7 Controlloop Runtime

To start the controlloop runtime we need to go to the “runtime-controlloop” directory in the clamp repo. There is a config file that is used by default for the controlloop runtime: “src/main/resources/application.yaml”. For development in your local environment it shouldn’t need any adjustment, and we can just run the controlloop runtime with:

mvn spring-boot:run
2.3.8 Controlloop GUI

At this point, all of the components required to test the controlloop gui are running. We can start to make changes and have them reflected in the UI for immediate feedback. But first, we must run the GUI.

Firstly, go to the GUI repo and navigate to “gui-clamp/ui-react”. To set up for development, we must install the dependencies of the GUI. We can do this using the npm package manager. In the directory, simply run:

npm install

This will trigger installation of the required packages. The application is configured to proxy all relevant calls to the clamp backend. The application can be started with a simple:

npm start

This uses Node’s internal development web server to serve the GUI. Once started, you can navigate to the server at “https://localhost:3000” and log in with “admin/password”.

That completes the development setup of the environment.

3. Running Tests

In this section, we will run through the functionalities mentioned at the start of this document in section 1. Each functionality will be tested and we will confirm that it was carried out successfully. There is a tosca service template that can be used for this test:

Tosca Service Template

3.1 Commissioning

We can carry out commissioning using the GUI. To do so, from the main page, we can select “Upload Tosca to Commissioning” as shown in the image below:

_images/CommissioningUpload.png

Clicking this will take us to a screen where we can upload a file. Select a file to upload and click on the upload button.

_images/CommissioningModal.png

After clicking upload, you should get a message on the modal to tell you that the upload was successful. You can then look in the logs of the policy-participant to see that the message has been received from the runtime:

_images/CommissioningMessageOnParticipant.png

This confirms that commissioning is complete.

3.2 Edit Common Properties

At this stage we can edit the common properties. These properties will be common to all instances of the control loop definitions we uploaded with the tosca service template. Once an instance is created, we will not be able to alter these common properties again. We can simply click on “Edit Common Properties” in the dropdown menu and we will be taken to the modal shown below.

_images/CommonPropertiesModal.png

The arrows to the left of the modal can be used to expand and contract the elements. If we expand one of the elements, we can see that the provider is one of the properties that we can edit. Edit this property to be “Ericsson Software Technologies”. Press “Save” and then press “Commission”. You should get a success message. Once you do, you can look at the full tosca service template to confirm the change in provider has been recorded. Click on “Manage Commissioned Tosca Template”, then click on “Pull Tosca Service Template”. You should receive the full template on the screen and should find your change, as shown below.

_images/ViewEditedCommonProperties.png
3.3 Create New Instances of Control Loops

Once the template is commissioned, we can start to create instances. In the dropdown, click on “Instantiation Management”. In the modal, you will see an empty table, as shown.

_images/ManageInstancesModal.png

Then we will click on “Create Instance”. That takes us to a page where we can edit the properties of the instance - not the common properties, but the instance properties. The last element has Provider set as an instance property. In the same way as we did for the common properties, change the provider to “Some Other Company” and then click save. You should get a success message if all went OK. You can then go back to the instantiation management table, where you should now see an instance.

_images/InstanceUninitialised.png

Since the instance is uninitialised, the policies and policy types have not been deployed to the policy api. We can confirm this by looking at the policy-api database. See the image below.

_images/PolicyTypeNotPresent.png
3.4 Change the State of the Instance

Now we will change the instance state to PASSIVE. This should trigger the deployment of the policy types onto the policy-api. To trigger the change of state, click on the “change” button on the instance in the instance management table. This will bring up another modal to allow you to change the state.

_images/ChangeState.png

Pick PASSIVE and then click save. If we once again navigate to the Instance Management table, we can see that our actual state has become passive.

_images/PassiveState.png

This should also mean that our policies and policy types should be written to the policy-api database. We can query that DB again. In the images below, we can see that the policies and the policy types have been written successfully.

_images/PolicyTypeSuccess.png

and

_images/PolicySuccess.png

Following the same procedure as changing the state to PASSIVE, we can then change it to UNINITIALISED. This deletes the policies and policy types through the policy api and changes the overall state of the loop. We can then delete it from the Manage Instances table by clicking on Delete.

3.5 Decommissioning

Finally, we can decommission the template. On the dropdown menu, click “Manage Commissioned Tosca Template” and then pull it. Clicking the “Delete Tosca Service Template” button will fully decommission the template. You will receive a success message if the deletion was successful.

_images/ViewEditedCommonProperties.png

This concludes the required smoke tests.

Policy DB Migrator Smoke Tests
Prerequisites

Check number of files in each release

ls 0800/upgrade/*.sql | wc -l     # expect 96
ls 0900/upgrade/*.sql | wc -l     # expect 13
ls 0800/downgrade/*.sql | wc -l   # expect 96
ls 0900/downgrade/*.sql | wc -l   # expect 13
Upgrade scripts
/opt/app/policy/bin/prepare_upgrade.sh policyadmin
/opt/app/policy/bin/db-migrator -s policyadmin -o upgrade

Note

You can also run db-migrator upgrade with the -t and -f options

Downgrade scripts
/opt/app/policy/bin/prepare_downgrade.sh policyadmin
/opt/app/policy/bin/db-migrator -s policyadmin -o downgrade -f 0900 -t 0800
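After a migration run, you can confirm the outcome directly in the migration schema; a sketch, assuming you can reach the database with a MySQL client (replace <db-host> and <user> with values for your deployment):

# Inspect db-migrator bookkeeping: current schema version and per-script results
mysql -h <db-host> -u <user> -p -e \
  "SELECT * FROM migration.schema_versions; SELECT * FROM migration.policyadmin_schema_changelog;"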
Db migrator initialization script

Update /oom/kubernetes/policy/resources/config/db_migrator_policy_init.sh with the appropriate upgrade/downgrade calls.

The policy version you are deploying should either be an upgrade or downgrade from the current db migrator schema version.

Every time you modify db_migrator_policy_init.sh you will have to undeploy, make and redeploy before updates are applied.

1. Fresh Install

Number of files run: 109
Tables in policyadmin: 75
Records Added: 109
schema_version: 0900

2. Downgrade to Honolulu (0800)

Modify db_migrator_policy_init.sh - remove any lines referencing upgrade and add the 2 lines under “Downgrade scripts”

Make/Redeploy to run downgrade.

Number of files run: 13
Tables in policyadmin: 73
Records Added: 13
schema_version: 0800

3. Upgrade to Istanbul (0900)

Modify db_migrator_policy_init.sh - remove any lines referencing downgrade and add the 2 lines under “Upgrade scripts”.

Make/Redeploy to run upgrade.

Number of files run: 13
Tables in policyadmin: 75
Records Added: 13
schema_version: 0900

4. Upgrade to Istanbul (0900) without any information in the migration schema

Ensure you are on release 0800. (This may require running a downgrade before starting the test)

Drop db-migrator tables in migration schema:

DROP TABLE schema_versions;
DROP TABLE policyadmin_schema_changelog;

Modify db_migrator_policy_init.sh - remove any lines referencing downgrade and add the 2 lines under “Upgrade scripts”.

Make/Redeploy to run upgrade.

Number of files run: 13
Tables in policyadmin: 75
Records Added: 13
schema_version: 0900

5. Upgrade to Istanbul (0900) after failed downgrade

Ensure you are on release 0900.

Rename pdpstatistics table in policyadmin schema:

RENAME TABLE pdpstatistics TO backup_pdpstatistics;

Modify db_migrator_policy_init.sh - remove any lines referencing upgrade and add the 2 lines under “Downgrade scripts”

Make/Redeploy to run downgrade

This should result in an error (last row in policyadmin_schema_changelog will have a success value of 0)

Rename the backup_pdpstatistics table in the policyadmin schema:

RENAME TABLE backup_pdpstatistics TO pdpstatistics;

Modify db_migrator_policy_init.sh - Remove any lines referencing downgrade and add the 2 lines under “Upgrade scripts”

Make/Redeploy to run upgrade

Number of files run: 11
Tables in policyadmin: 75
Records Added: 11
schema_version: 0900

6. Downgrade to Honolulu (0800) after failed downgrade

Ensure you are on release 0900.

Add timeStamp column to jpapdpstatistics_enginestats:

ALTER TABLE jpapdpstatistics_enginestats ADD COLUMN timeStamp datetime DEFAULT NULL NULL AFTER UPTIME;

Modify db_migrator_policy_init.sh - remove any lines referencing upgrade and add the 2 lines under “Downgrade scripts”

Make/Redeploy to run downgrade

This should result in an error (last row in policyadmin_schema_changelog will have a success value of 0)

Remove timeStamp column from jpapdpstatistics_enginestats:

ALTER TABLE jpapdpstatistics_enginestats DROP COLUMN timeStamp;

The config job will retry 5 times. If you make your fix before this limit is reached you won’t need to redeploy.

Redeploy to run downgrade

Number of files run: 14
Tables in policyadmin: 73
Records Added: 14
schema_version: 0800

7. Downgrade to Honolulu (0800) after failed upgrade

Ensure you are on release 0800.

Modify db_migrator_policy_init.sh - remove any lines referencing downgrade and add the 2 lines under “Upgrade scripts”

Update pdpstatistics:

ALTER TABLE pdpstatistics ADD COLUMN POLICYUNDEPLOYCOUNT BIGINT DEFAULT NULL NULL AFTER POLICYEXECUTEDSUCCESSCOUNT;

Make/Redeploy to run upgrade

This should result in an error (last row in policyadmin_schema_changelog will have a success value of 0)

Once the retry count has been reached, update pdpstatistics:

ALTER TABLE pdpstatistics DROP COLUMN POLICYUNDEPLOYCOUNT;

Modify db_migrator_policy_init.sh - Remove any lines referencing upgrade and add the 2 lines under “Downgrade scripts”

Make/Redeploy to run downgrade

Number of files run: 7
Tables in policyadmin: 73
Records Added: 7
schema_version: 0800

8. Upgrade to Istanbul (0900) after failed upgrade

Ensure you are on release 0800.

Modify db_migrator_policy_init.sh - remove any lines referencing downgrade and add the 2 lines under “Upgrade scripts”

Update PDP table:

ALTER TABLE pdp ADD COLUMN LASTUPDATE datetime NOT NULL DEFAULT CURRENT_TIMESTAMP AFTER HEALTHY;

Make/Redeploy to run upgrade

This should result in an error (last row in policyadmin_schema_changelog will have a success value of 0)

Update PDP table:

ALTER TABLE pdp DROP COLUMN LASTUPDATE;

The config job will retry 5 times. If you make your fix before this limit is reached you won’t need to redeploy.

Redeploy to run upgrade

Number of files run: 14
Tables in policyadmin: 75
Records Added: 14
schema_version: 0900

9. Downgrade to Honolulu (0800) with data in pdpstatistics and jpapdpstatistics_enginestats

Ensure you are on release 0900.

Check pdpstatistics and jpapdpstatistics_enginestats are populated with data.

SELECT count(*) FROM pdpstatistics;
SELECT count(*) FROM jpapdpstatistics_enginestats;

Modify db_migrator_policy_init.sh - remove any lines referencing upgrade and add the 2 lines under “Downgrade scripts”

Make/Redeploy to run downgrade

Check the tables to ensure the number of records is the same.

SELECT count(*) FROM pdpstatistics;
SELECT count(*) FROM jpapdpstatistics_enginestats;

Check pdpstatistics to ensure the primary key has changed:

SELECT column_name, constraint_name FROM information_schema.key_column_usage WHERE table_name='pdpstatistics';

Check jpapdpstatistics_enginestats to ensure id column has been dropped and timestamp column added.

SELECT table_name, column_name, data_type FROM information_schema.columns WHERE table_name = 'jpapdpstatistics_enginestats';

Check the pdp table to ensure the LASTUPDATE column has been dropped.

SELECT table_name, column_name, data_type FROM information_schema.columns WHERE table_name = 'pdp';

Number of files run: 13
Tables in policyadmin: 73
Records Added: 13
schema_version: 0800

10. Upgrade to Istanbul (0900) with data in pdpstatistics and jpapdpstatistics_enginestats

Ensure you are on release 0800.

Check pdpstatistics and jpapdpstatistics_enginestats are populated with data.

SELECT count(*) FROM pdpstatistics;
SELECT count(*) FROM jpapdpstatistics_enginestats;

Modify db_migrator_policy_init.sh - remove any lines referencing downgrade and add the 2 lines under “Upgrade scripts”

Make/Redeploy to run upgrade

Check the tables to ensure the number of records is the same.

SELECT count(*) FROM pdpstatistics;
SELECT count(*) FROM jpapdpstatistics_enginestats;

Check pdpstatistics to ensure the primary key has changed:

SELECT column_name, constraint_name FROM information_schema.key_column_usage WHERE table_name='pdpstatistics';

Check jpapdpstatistics_enginestats to ensure timestamp column has been dropped and id column added.

SELECT table_name, column_name, data_type FROM information_schema.columns WHERE table_name = 'jpapdpstatistics_enginestats';

Check the pdp table to ensure the LASTUPDATE column has been added and the value has defaulted to the CURRENT_TIMESTAMP.

SELECT table_name, column_name, data_type, column_default FROM information_schema.columns WHERE table_name = 'pdp';

Number of files run: 13
Tables in policyadmin: 75
Records Added: 13
schema_version: 0900

Note

The number of records added may vary depending on the number of retries.

End of Document

CLAMP participants (kubernetes, http) Smoke Tests
1. Introduction

The CLAMP participants (kubernetes and http) are used to interact with the helm client in a kubernetes environment to deploy microservices via helm charts, as well as to configure the microservices over REST endpoints. Both of these participants are often used together in the Control loop workflow.

This document will serve as a guide to do smoke tests on the different components that are involved when working with the participants and outline how they operate. It will also show a developer how to set up their environment for carrying out smoke tests on these participants.

2. Setup Guide

This article assumes that:

  • You are using an operating system such as linux, macOS or windows.

  • You are using a directory called git off your home directory (~/git) for your git repositories

  • Your local maven repository is in the location ~/.m2/repository

  • You have copied the settings.xml from oparent to ~/.m2/ directory

  • You have added settings to access the ONAP Nexus to your M2 configuration, see Maven Settings Example (bottom of the linked page)

The procedure documented in this article has been verified using an Ubuntu 20.04 LTS VM.

2.1 Prerequisites
2.2 Assumptions
  • You are accessing the policy repositories through gerrit.

The following repositories are required for development in this project. These repositories should be present on your machine and you should run “mvn clean install” on all of them so that the packages are present in your .m2 repository.

  • policy/parent

  • policy/common

  • policy/models

  • policy/clamp

  • policy/docker

In this setup guide, we will be setting up all the components technically required for a working dev environment.

2.3 Setting up the components
2.3.1 MariaDB Setup

We will be using Docker to run our mariadb instance. It will have the runtime-controlloop database running in it.

  • controlloop: the runtime-controlloop db

The easiest way to do this is to perform a small alteration on an SQL script provided by the clamp backend in the file “runtime/extra/sql/bulkload/create-db.sql”

CREATE DATABASE `controlloop`;
USE `controlloop`;
DROP USER 'policy';
CREATE USER 'policy';
GRANT ALL on controlloop.* to 'policy' identified by 'P01icY' with GRANT OPTION;

Once this has been done, we can run the bash script provided here: “runtime/extra/bin-for-dev/start-db.sh”

./start-db.sh

This will set up the Control Loop runtime database. The database will be exposed locally on port 3306 and will be backed by an anonymous docker volume.

2.3.2 DMAAP Simulator

For convenience, a dmaap simulator has been provided in the policy/models repository. To start the simulator, you can do the following:

  1. Navigate to /models-sim/policy-models-simulators in the policy/models repository.

  2. Add a configuration file to src/test/resources with the following contents:

{
   "dmaapProvider":{
      "name":"DMaaP simulator",
      "topicSweepSec":900
   },
   "restServers":[
      {
         "name":"DMaaP simulator",
         "providerClass":"org.onap.policy.models.sim.dmaap.rest.DmaapSimRestControllerV1",
         "host":"localhost",
         "port":3904,
         "https":false
      }
   ]
}
  3. You can then start dmaap with:

mvn exec:java  -Dexec.mainClass=org.onap.policy.models.simulators.Main -Dexec.args="src/test/resources/YOUR_CONF_FILE.json"

At this stage the dmaap simulator should be running on your local machine on port 3904.

2.3.3 Controlloop Runtime

To start the controlloop runtime service, we need to execute the following maven command from the “runtime-controlloop” directory in the clamp repo. Control Loop runtime uses the config file “src/main/resources/application.yaml” by default.

mvn spring-boot:run
2.3.4 Helm chart repository

Kubernetes participant consumes helm charts from the local chart database as well as from a helm repository. For the smoke testing, we are going to add nginx-stable helm repository to the helm client. The following command can be used to add nginx repository to the helm client.

helm repo add nginx-stable https://helm.nginx.com/stable
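You can verify that the repository was added and that its charts are visible to the helm client:

# Refresh the local repo index and confirm the nginx charts are available
helm repo update
helm search repo nginx-stable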
2.3.5 Kubernetes and http participants

The participants can be started from the clamp repository by executing the following maven command from the appropriate directories. The participants will start and register with the Control Loop runtime.

Navigate to the directory “participant/participant-impl/participant-impl-kubernetes/” and start kubernetes participant.

mvn spring-boot:run

Navigate to the directory “participant/participant-impl/participant-impl-http/” and start http participant.

mvn spring-boot:run
3. Running Tests

In this section, we will run through the sequence of steps in the Control Loop workflow. The workflow can be triggered via the Postman client.

3.1 Commissioning

Commission Control loop TOSCA definitions to Runtime.

The Control Loop definitions are commissioned to the CL runtime, which populates the CL runtime database. The following sample TOSCA template is commissioned to the runtime endpoint; it contains definitions for the kubernetes participant that deploys the nginx ingress microservice helm chart, and a http POST request for the http participant.

Tosca Service Template

Commissioning Endpoint:

POST: https://<CL Runtime IP>:<Port>/onap/controlloop/v2/commission

A successful commissioning gives a 200 response in the postman client.
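The same request can be issued from the command line; a sketch, assuming the runtime listens on localhost:6969 (as in the swagger URLs elsewhere in this document), with placeholder credentials and file name:

# Commission the TOSCA service template; -k accepts a self-signed certificate
curl -sk -u '<user>:<password>' -X POST \
  -H "Content-Type: application/yaml" \
  --data-binary @tosca-service-template.yaml \
  https://localhost:6969/onap/controlloop/v2/commission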

3.2 Create New Instances of Control Loops

Once the template is commissioned, we can instantiate Control Loop instances. This will create the instances with default state “UNINITIALISED”.

Instantiation Endpoint:

POST: https://<CL Runtime IP>:<Port>/onap/controlloop/v2/instantiation

Request body:

Instantiation json

3.3 Change the State of the Instance

When the Control loop is updated with state “PASSIVE”, the Kubernetes participant fetches the node template for all control loop elements and deploys the helm chart of each CL element into the cluster. The following sample json input is passed in the request body.

Control Loop Update Endpoint:

PUT: https://<CL Runtime IP>:<Port>/onap/controlloop/v2/instantiation/command

Request body:
{
  "orderedState": "PASSIVE",
  "controlLoopIdentifierList": [
    {
      "name": "K8SInstance0",
      "version": "1.0.1"
    }
  ]
}
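As a sketch, the same state change can be driven with curl under the assumptions above (runtime on localhost:6969, placeholder credentials), with the request body shown above saved to a file:

# Order the control loop to PASSIVE; body.json holds the request body shown above
curl -sk -u '<user>:<password>' -X PUT \
  -H "Content-Type: application/json" \
  --data @body.json \
  https://localhost:6969/onap/controlloop/v2/instantiation/command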

After the state has changed to “PASSIVE”, the nginx-ingress pod is deployed in the kubernetes cluster, and the http participant should have posted the dummy data to the URL configured in the tosca template.

The following command can be used to verify the pods deployed successfully by kubernetes participant.

helm ls -n onap | grep nginx
kubectl get po -n onap | grep nginx

The overall state of the control loop should be “PASSIVE”, indicating that both participants have successfully completed their operations. This can be verified via the following REST endpoint.

Verify control loop state:

GET: https://<CL Runtime IP>:<Port>/onap/controlloop/v2/instantiation
3.4 Control Loop can be “UNINITIALISED” after deployment

By changing the state to “UNINITIALISED”, all the helm deployments under the corresponding control loop will be uninstalled from the cluster.

Control Loop Update Endpoint:

PUT: https://<CL Runtime IP>:<Port>/onap/controlloop/v2/instantiation/command

Request body:
{
  "orderedState": "UNINITIALISED",
  "controlLoopIdentifierList": [
    {
      "name": "K8SInstance0",
      "version": "1.0.1"
    }
  ]
}

The nginx pod should be deleted from the k8s cluster.

This concludes the required smoke tests for http and kubernetes participants.

CLAMP control loop runtime Smoke Tests

This article explains how to build the CLAMP control loop runtime for development purposes and how to run smoke tests for control loop runtime. To start, the developer should consult the latest ONAP Wiki to familiarize themselves with developer best practices and how-tos to setup their environment, see https://wiki.onap.org/display/DW/Developer+Best+Practices.

This article assumes that:

  • You are using a *nix operating system such as linux or macOS.

  • You are using a directory called git off your home directory (~/git) for your git repositories

  • Your local maven repository is in the location ~/.m2/repository

  • You have copied the settings.xml from oparent to ~/.m2/ directory

  • You have added settings to access the ONAP Nexus to your M2 configuration, see Maven Settings Example (bottom of the linked page)

The procedure documented in this article has been verified using an Ubuntu 20.04 LTS VM.

Cloning CLAMP control loop runtime and all dependencies

Run a script such as the one below to clone the required modules from the ONAP git repository. This script clones CLAMP control loop runtime and all its dependencies.

ONAP Policy Framework has dependencies on the ONAP Parent oparent module, the ONAP ECOMP SDK ecompsdkos module, and the A&AI Schema module.

Typical ONAP Policy Framework Clone Script
#!/usr/bin/env bash

## script name for output
MOD_SCRIPT_NAME=`basename $0`

## the ONAP clone directory, defaults to "onap"
clone_dir="onap"

## the ONAP repos to clone
onap_repos="\
policy/parent \
policy/common \
policy/models \
policy/clamp \
policy/docker "

##
## Help screen and exit condition (i.e. too few arguments)
##
Help()
{
    echo ""
    echo "$MOD_SCRIPT_NAME - clones all required ONAP git repositories"
    echo ""
    echo "       Usage:  $MOD_SCRIPT_NAME [-options]"
    echo ""
    echo "       Options"
    echo "         -d          - the ONAP clone directory, defaults to '.'"
    echo "         -h          - this help screen"
    echo ""
    exit 255;
}

##
## read command line
##
while [ $# -gt 0 ]
do
    case $1 in
        #-d ONAP clone directory
        -d)
            shift
            if [ -z "$1" ]; then
                echo "$MOD_SCRIPT_NAME: no clone directory"
                exit 1
            fi
            clone_dir=$1
            shift
        ;;

        #-h prints help and exits
        -h)
            Help;exit 0;;

        *)    echo "$MOD_SCRIPT_NAME: undefined CLI option - $1"; exit 255;;
    esac
done

if [ -f "$clone_dir" ]; then
    echo "$MOD_SCRIPT_NAME: requested clone directory '$clone_dir' exists as file"
    exit 2
fi
if [ -d "$clone_dir" ]; then
    echo "$MOD_SCRIPT_NAME: requested clone directory '$clone_dir' exists as directory"
    exit 2
fi

mkdir $clone_dir
if [ $? != 0 ]
then
    echo cannot clone ONAP repositories, could not create directory '"'$clone_dir'"'
    exit 3
fi

for repo in $onap_repos
do
    repoDir=`dirname "$repo"`
    repoName=`basename "$repo"`

    if [ ! -z "$repoDir" ]
    then
        mkdir -p "$clone_dir/$repoDir"
        if [ $? != 0 ]
        then
            echo cannot clone ONAP repositories, could not create directory '"'$clone_dir/$repoDir'"'
            exit 4
        fi
    fi

    git clone https://gerrit.onap.org/r/${repo} $clone_dir/$repo
done

echo ONAP has been cloned into '"'$clone_dir'"'

Execution of the script above results in the following directory hierarchy in your ~/git directory:

  • ~/git/onap

  • ~/git/onap/policy

  • ~/git/onap/policy/parent

  • ~/git/onap/policy/common

  • ~/git/onap/policy/models

  • ~/git/onap/policy/clamp

  • ~/git/onap/policy/docker

Building CLAMP control loop runtime and all dependency

Step 1: Optionally, for a completely clean build, remove the ONAP built modules from your local repository.

rm -fr ~/.m2/repository/org/onap

Step 2: A pom such as the one below can be used to build the ONAP Policy Framework modules. Create the pom.xml file in the directory ~/git/onap/policy.

Typical pom.xml to build the ONAP Policy Framework
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>org.onap</groupId>
    <artifactId>onap-policy</artifactId>
    <version>1.0.0-SNAPSHOT</version>
    <packaging>pom</packaging>
    <name>${project.artifactId}</name>
    <inceptionYear>2017</inceptionYear>
    <organization>
        <name>ONAP</name>
    </organization>

    <modules>
        <module>parent</module>
        <module>common</module>
        <module>models</module>
        <module>clamp</module>
    </modules>
</project>

Step 3: You can now build the Policy framework.

Java artifacts only:

cd ~/git/onap/policy
mvn -pl '!org.onap.policy.clamp:policy-clamp-runtime' install

With docker images:

cd ~/git/onap/policy/clamp/packages/
mvn clean install -P docker
Running MariaDb and DMaaP Simulator
Running a MariaDb Instance

Assuming you have successfully built the codebase using the instructions above, there are two requirements for the Clamp controlloop runtime component to run. One of them is a running MariaDb database instance; the easiest way to provide this is to run the docker image locally.

An SQL file such as the one below can be used for the database initialization. Create the mariadb.sql file in the directory ~/git.

create database controlloop;
CREATE USER 'policy'@'%' IDENTIFIED BY 'P01icY';
GRANT ALL PRIVILEGES ON controlloop.* TO 'policy'@'%';

Execution of the command below results in the creation and start of the mariadb-smoke-test container.

cd ~/git
docker run --name mariadb-smoke-test  \
 -p 3306:3306 \
 -e MYSQL_ROOT_PASSWORD=my-secret-pw  \
 --mount type=bind,source="$HOME"/git/mariadb.sql,target=/docker-entrypoint-initdb.d/data.sql \
 mariadb:10.5.8
Running the DMaaP Simulator during Development

The second requirement for the Clamp controlloop runtime component to run is to run the DMaaP simulator. You can run it from the command line using Maven.

Change the local configuration file src/test/resources/simParameters.json as below:

{
  "dmaapProvider": {
    "name": "DMaaP simulator",
    "topicSweepSec": 900
  },
  "restServers": [
    {
      "name": "DMaaP simulator",
      "providerClass": "org.onap.policy.models.sim.dmaap.rest.DmaapSimRestControllerV1",
      "host": "localhost",
      "port": 3904,
      "https": false
    }
  ]
}

Run the following commands:

cd ~/git/onap/policy/models/models-sim/policy-models-simulators
mvn exec:java  -Dexec.mainClass=org.onap.policy.models.simulators.Main -Dexec.args="src/test/resources/simParameters.json"
Developing and Debugging CLAMP control loop runtime
Running on the Command Line using Maven

Once the mariadb and DMaaP simulator are up and running, run the following commands:

cd ~/git/onap/policy/clamp/runtime-controlloop
mvn spring-boot:run
Running on the Command Line
cd ~/git/onap/policy/clamp/runtime-controlloop
java -jar target/policy-clamp-runtime-controlloop-6.1.3-SNAPSHOT.jar
Running in Eclipse
  1. Check out the policy models repository

  2. Go to the policy-clamp-runtime-controlloop module in the clamp repo

  3. Specify a run configuration using the class org.onap.policy.clamp.controlloop.runtime.Application as the main class

  4. Run the configuration

Swagger UI of Control loop runtime is available at http://localhost:6969/onap/controlloop/swagger-ui/, and swagger JSON at http://localhost:6969/onap/controlloop/v2/api-docs/

Running one or more participant simulators

In the CSIT clamp test data in the policy/docker repository you can find a test case with policy-participant. In order to use that test you can use the participant-simulator. Copy the file src/main/resources/config/application.yaml and paste it into src/test/resources/, after that change participantId and participantType as shown below:

participantId:
  name: org.onap.policy.controlloop.PolicyControlLoopParticipant
  version: 2.3.1
participantType:
  name: org.onap.PM_Policy
  version: 1.0.0

Run the following commands:

cd ~/git/onap/policy/clamp/participant/participant-impl/participant-impl-simulator
 java -jar target/policy-clamp-participant-impl-simulator-6.1.3-SNAPSHOT.jar --spring.config.location=src/test/resources/application.yaml
Creating self-signed certificate

There is an additional requirement for the Clamp control loop runtime docker image to run: an SSL self-signed certificate must be created.

Run the following commands:

cd ~/git/onap/policy/docker/csit/
./gen_truststore.sh
./gen_keystore.sh

Execution of the commands above results in additional files in the directory ~/git/onap/policy/docker/csit/config (a way to inspect the generated keystore is shown after the list):

  • ~/git/onap/policy/docker/csit/config/cakey.pem

  • ~/git/onap/policy/docker/csit/config/careq.pem

  • ~/git/onap/policy/docker/csit/config/caroot.cer

  • ~/git/onap/policy/docker/csit/config/ks.cer

  • ~/git/onap/policy/docker/csit/config/ks.csr

  • ~/git/onap/policy/docker/csit/config/ks.jks
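You can inspect the generated keystore to confirm the certificate was created; keytool will prompt for the keystore password set by the generation scripts:

# List the entries in the generated keystore
keytool -list -keystore ~/git/onap/policy/docker/csit/config/ks.jks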

Running the CLAMP control loop runtime docker image

Run the following command:

docker run --name runtime-smoke-test \
 -p 6969:6969 \
 -e mariadb.host=host.docker.internal \
 -e topicServer=host.docker.internal \
 --mount type=bind,source="$HOME"/git/onap/policy/docker/csit/config/ks.jks,target=/opt/app/policy/clamp/etc/ssl/policy-keystore  \
 --mount type=bind,source="$HOME"/git/onap/policy/clamp/runtime-controlloop/src/main/resources/application.yaml,target=/opt/app/policy/clamp/etc/ClRuntimeParameters.yaml  \
 onap/policy-clamp-cl-runtime

Swagger UI of Control loop runtime is available at https://localhost:6969/onap/controlloop/swagger-ui/, and swagger JSON at https://localhost:6969/onap/controlloop/v2/api-docs/
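A quick reachability check against the swagger JSON endpoint, using -k to accept the self-signed certificate:

# Fetch the swagger JSON from the running container
curl -sk https://localhost:6969/onap/controlloop/v2/api-docs/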

CLAMP Participant Protocol Smoke Tests
1. Introduction

The CLAMP Control Loop Participant protocol is an asynchronous protocol that is used by the CLAMP runtime to coordinate life cycle management of Control Loop instances. This document will serve as a guide to do smoke tests on the different usecases that are involved when working with the Participant protocol and outline how they operate. It will also show a developer how to set up their environment for carrying out smoke tests on the participants.

2. Setup Guide

This section will show the developer how to set up their environment to start testing participants, with some instructions on how to carry out the tests. There are a number of prerequisites. Note that this guide is written by a Linux user - although the majority of the steps shown will be exactly the same on Windows or other systems.

2.1 Prerequisites
2.2 Setting up the components
  • Controlloop runtime component docker image is started and running.

  • Participant docker images policy-clamp-cl-pf-ppnt, policy-clamp-cl-http-ppnt, policy-clamp-cl-k8s-ppnt are started and running.

  • Dmaap simulator for communication between components.

  • mariadb docker container for policy and controlloop database.

  • policy-api for communication between policy participant and policy-framework

In this setup guide, we will be setting up all the components technically required for a convenient working dev environment. We will not be setting up all of the participants - we will set up only the policy participant as an example.

2.2.1 MariaDB Setup

We will be using Docker to run our mariadb instance. It will have a total of two databases running in it.

  • controlloop: the runtime-controlloop db

  • policyadmin: the policy-api db

3. Running Tests of protocol dialogues


In this section, we will run through the functionalities mentioned at the start of this document in section 1. Each functionality will be tested and we will confirm that it was carried out successfully. There is a tosca service template that can be used for this test: Tosca Service Template

3.1 Participant Registration

Action: Bring up the participant

Test result:

  • Observe PARTICIPANT_REGISTER going from participant to runtime

  • Observe PARTICIPANT_REGISTER_ACK going from runtime to participant

  • Observe PARTICIPANT_UPDATE going from runtime to participant

3.2 Participant Deregistration

Action: Bring down the participant

Test result:

  • Observe PARTICIPANT_DEREGISTER going from participant to runtime

  • Observe PARTICIPANT_DEREGISTER_ACK going from runtime to participant

3.3 Participant Priming

When a control loop is primed, the portion of the Control Loop Type Definition and Common Property values for the participants of each participant type mentioned in the Control Loop Definition are sent to the participants.

Action: Invoke a REST API to prime controlloop type definitions and set values of common properties

Test result:

  • Observe PARTICIPANT_UPDATE going from runtime to participant with controlloop type definitions and common property values for participant types

  • Observe that the controlloop type definitions and common property values for participant types are stored on ParticipantHandler

  • Observe PARTICIPANT_UPDATE_ACK going from participant to runtime

3.4 Participant DePriming

When a control loop is de-primed, the portion of the Control Loop Type Definition and Common Property values for the participants of each participant type mentioned in the Control Loop Definition are deleted on participants.

Action: Invoke a REST API to deprime controlloop type definitions

Test result:

  • If controlloop instances exist in the runtime database, the REST API returns an error response saying “Cannot decommission controlloop type definition”

  • If no controlloop instances exist in the runtime database, observe PARTICIPANT_UPDATE going from runtime to participant with definitions as null

  • Observe that the controlloop type definitions and common property values for participant types are removed on ParticipantHandler

  • Observe PARTICIPANT_UPDATE_ACK going from participant to runtime

3.5 Control Loop Update

Control Loop Update handles creation, change, and deletion of control loops on participants.

Action: Trigger controlloop instantiation from GUI

Test result:

  • Observe CONTROL_LOOP_UPDATE going from runtime to participant

  • Observe that the controlloop type instances and respective property values for participant types are stored on ControlLoopHandler

  • Observe that the controlloop state is UNINITIALISED

  • Observe CONTROL_LOOP_UPDATE_ACK going from participant to runtime

3.6 Control Loop state change to PASSIVE

Control Loop Update handles creation, change, and deletion of control loops on participants.

Action: Change state of the controlloop to PASSIVE

Test result:

  • Observe CONTROL_LOOP_STATE_CHANGE going from runtime to participant

  • Observe that the ControlLoopElements state is PASSIVE

  • Observe that the controlloop state is PASSIVE

  • Observe CONTROL_LOOP_STATE_CHANGE_ACK going from participant to runtime

3.7 Control Loop state change to RUNNING

Control Loop Update handles creation, change, and deletion of control loops on participants.

Action: Change state of the controlloop to RUNNING

Test result:

  • Observe CONTROL_LOOP_STATE_CHANGE going from runtime to participant

  • Observe that the ControlLoopElements state is RUNNING

  • Observe that the controlloop state is RUNNING

  • Observe CONTROL_LOOP_STATE_CHANGE_ACK going from participant to runtime

3.8 Control Loop state change to PASSIVE

Control Loop Update handles creation, change, and deletion of control loops on participants.

Action: Change state of the controlloop to PASSIVE

Test result:

  • Observe CONTROL_LOOP_STATE_CHANGE going from runtime to participant

  • Observe that the ControlLoopElements state is PASSIVE

  • Observe that the controlloop state is PASSIVE

  • Observe CONTROL_LOOP_STATE_CHANGE_ACK going from participant to runtime

3.9 Control Loop state change to UNINITIALISED

Control Loop Update handles creation, change, and deletion of control loops on participants.

Action: Change state of the controlloop to UNINITIALISED

Test result:

  • Observe CONTROL_LOOP_STATE_CHANGE going from runtime to participant

  • Observe that the ControlLoopElements state is UNINITIALISED

  • Observe that the controlloop state is UNINITIALISED

  • Observe that the ControlLoopElements undeploy the instances from respective frameworks

  • Observe that the control loop instances are removed from participants

  • Observe CONTROL_LOOP_STATE_CHANGE_ACK going from participant to runtime

3.10 Control Loop monitoring and reporting

This dialogue is used as a heartbeat mechanism for participants, to monitor the status of Control Loop Elements, and to gather statistics on control loops. The ParticipantStatus message is sent periodically by each participant. The reporting interval for sending the message is configurable.

Action: Bring up participant

Test result:

  • Observe that PARTICIPANT_STATUS message is sent from participants to runtime in a regular interval

  • Trigger a PARTICIPANT_STATUS_REQ from runtime and observe a PARTICIPANT_STATUS message with tosca definitions of control loop type definitions sent from all the participants to runtime

This concludes the required smoke tests.

CLAMP Policy Participant Smoke Tests
1. Introduction

The Smoke testing of the policy participant is executed in a local CLAMP/Policy environment. The CLAMP-Controlloop interfaces interact with the Policy Framework to perform actions based on the state of the policy participant. The goal of the Smoke tests is to ensure that the CLAMP Policy Participant and the Policy Framework work together as expected.

2. Setup Guide

This section will show the developer how to set up their environment and begin testing via the GUI, with some instructions on how to carry out the tests. There are a number of prerequisites. Note that this guide is written by a Linux user - although the majority of the steps shown will be exactly the same on Windows or other systems.

2.1 Prerequisites
2.2 Assumptions
  • You are accessing the policy repositories through gerrit

  • You are using “git review”.

The following repositories are required for development in this project. These repositories should be present on your machine and you should run “mvn clean install” on all of them so that the packages are present in your .m2 repository.

  • policy/parent

  • policy/common

  • policy/models

  • policy/clamp

  • policy/docker

  • policy/gui

  • policy/api

In this setup guide, we will be setting up all the components technically required for a convenient working dev environment.

2.3 Setting up the components
2.3.1 MariaDB Setup

We will be using Docker to run our mariadb instance. It will have a total of two databases running in it.

  • controlloop: the runtime-controlloop db

  • policyadmin: the policy-api db

The easiest way to do this is to perform a small alteration on an SQL script provided by the clamp backend in the file “runtime/extra/sql/bulkload/create-db.sql”

CREATE DATABASE `controlloop`;
USE `controlloop`;
DROP USER 'policy';
CREATE USER 'policy';
GRANT ALL on controlloop.* to 'policy' identified by 'P01icY' with GRANT OPTION;
CREATE DATABASE `policyadmin`;
USE `policyadmin`;
DROP USER 'policy_user';
CREATE USER 'policy_user';
GRANT ALL on policyadmin.* to 'policy_user' identified by 'policy_user' with GRANT OPTION;
FLUSH PRIVILEGES;

Once this has been done, we can run the bash script provided here: “runtime/extra/bin-for-dev/start-db.sh”

./start-db.sh

This will set up the two databases needed. The database will be exposed locally on port 3306 and will be backed by an anonymous docker volume.

2.3.2 DMAAP Simulator

For convenience, a dmaap simulator has been provided in the policy/models repository. To start the simulator, you can do the following:

  1. Navigate to /models-sim/policy-models-simulators in the policy/models repository.

  2. Add a configuration file to src/test/resources with the following contents:

{
   "dmaapProvider":{
      "name":"DMaaP simulator",
      "topicSweepSec":900
   },
   "restServers":[
      {
         "name":"DMaaP simulator",
         "providerClass":"org.onap.policy.models.sim.dmaap.rest.DmaapSimRestControllerV1",
         "host":"localhost",
         "port":3904,
         "https":false
      }
   ]
}
  3. You can then start dmaap with:

mvn exec:java  -Dexec.mainClass=org.onap.policy.models.simulators.Main -Dexec.args="src/test/resources/YOUR_CONF_FILE.json"

At this stage the dmaap simulator should be running on your local machine on port 3904.

2.3.3 Policy API

In the policy-api repo, you should find the file “src/main/resources/etc/defaultConfig.json”. This file must be altered slightly - as below, with the restServerParameters and databaseProviderParameters shown. Note how the database parameters match up with what you set up in Mariadb:

{
    "restServerParameters": {
        "host": "0.0.0.0",
        "port": 6970,
        "userName": "healthcheck",
        "password": "zb!XztG34",
        "prometheus": true,
        "https": false,
        "aaf": false
    },
    "databaseProviderParameters": {
        "name": "PolicyProviderParameterGroup",
        "implementation": "org.onap.policy.models.provider.impl.DatabasePolicyModelsProviderImpl",
        "databaseDriver": "org.mariadb.jdbc.Driver",
        "databaseUrl": "jdbc:mariadb://mariadb:3306/policyadmin",
        "databaseUser": "policy_user",
        "databasePassword": "policy_user",
        "persistenceUnit": "PolicyMariaDb"
    }
}

Next, navigate to the “/main” directory. You can then run the following command to start the policy api:

mvn exec:java -Dexec.mainClass=org.onap.policy.api.main.startstop.Main -Dexec.args=" -c ../packages/policy-api-tarball/src/main/resources/etc/defaultConfig.json"
2.3.4 Policy PAP

In the policy-pap repo, you should find the file ‘main/src/test/resources/parameters/PapConfigParameters.json’. This file may need to be altered slightly as below:

{
    "name": "PapGroup",
    "restServerParameters": {
        "host": "0.0.0.0",
        "port": 6968,
        "userName": "healthcheck",
        "password": "zb!XztG34",
        "https": false
    },
    "pdpParameters": {
        "heartBeatMs": 60000,
        "updateParameters": {
            "maxRetryCount": 1,
            "maxWaitMs": 30000
        },
        "stateChangeParameters": {
            "maxRetryCount": 1,
            "maxWaitMs": 30000
        }
    },
    "databaseProviderParameters": {
        "name": "PolicyProviderParameterGroup",
        "implementation": "org.onap.policy.models.provider.impl.DatabasePolicyModelsProviderImpl",
        "databaseDriver": "org.mariadb.jdbc.Driver",
        "databaseUrl": "jdbc:mariadb://localhost:3306/policyadmin",
        "databaseUser": "policy_user",
        "databasePassword": "policy_user",
        "persistenceUnit": "PolicyMariaDb"
    },
    "topicParameterGroup": {
        "topicSources" : [{
            "topic" : "POLICY-PDP-PAP",
            "servers" : [ "localhost:3904" ],
            "topicCommInfrastructure" : "dmaap"
        }],
        "topicSinks" : [{
            "topic" : "POLICY-PDP-PAP",
            "servers" : [ "localhost:3904" ],
            "topicCommInfrastructure" : "dmaap"
        },{
            "topic" : "POLICY-NOTIFICATION",
            "servers" : [ "localhost:3904" ],
            "topicCommInfrastructure" : "dmaap"
        }]
    },
    "healthCheckRestClientParameters":[{
        "clientName": "api",
        "hostname": "policy-api",
        "port": 6968,
        "userName": "healthcheck",
        "password": "zb!XztG34",
        "useHttps": false,
        "basePath": "policy/api/v1/healthcheck"
    },
    {
        "clientName": "distribution",
        "hostname": "policy-distribution",
        "port": 6970,
        "userName": "healthcheck",
        "password": "zb!XztG34",
        "useHttps": false,
        "basePath": "healthcheck"
    }]
}

Next, navigate to the “/main” directory. You can then run the following command to start the policy pap:

mvn -q -e clean compile exec:java -Dexec.mainClass="org.onap.policy.pap.main.startstop.Main" -Dexec.args="-c src/test/resources/parameters/PapConfigParameters.json"
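Once PAP is up, a healthcheck using the credentials and port from PapConfigParameters.json above should respond; the basePath is assumed to be the standard policy/pap/v1/healthcheck:

# Healthcheck against the PAP REST port configured above (6968, http)
curl -s -u 'healthcheck:zb!XztG34' http://localhost:6968/policy/pap/v1/healthcheck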
2.3.5 Controlloop Runtime

To start the controlloop runtime we need to go the “runtime-controlloop” directory in the clamp repo. There is a config file that is used, by default, for the controlloop runtime. That config file is here: “src/main/resources/application.yaml”. For development in your local environment, it shouldn’t need any adjustment and we can just run the controlloop runtime with:

mvn spring-boot:run
2.3.6 Controlloop Policy Participant

To start the policy participant we need to go to the “participant-impl/participant-impl-policy” directory in the clamp repo. There is a config file under “src/main/resources/config/application.yaml”. For development in your local environment, we will need to adjust this file slightly:

server:
    port: 8082

participant:
  pdpGroup: defaultGroup
  pdpType: apex
  policyApiParameters:
    clientName: api
    hostname: localhost
    port: 6970
    userName: healthcheck
    password: zb!XztG34
    https: true
    allowSelfSignedCerts: true
  policyPapParameters:
    clientName: pap
    hostname: localhost
    port: 6968
    userName: healthcheck
    password: zb!XztG34
    https: true
    allowSelfSignedCerts: true
  intermediaryParameters:
    reportingTimeIntervalMs: 120000
    description: Participant Description
    participantId:
      name: org.onap.PM_Policy
      version: 1.0.0
    participantType:
      name: org.onap.policy.controlloop.PolicyControlLoopParticipant
      version: 2.3.1
    clampControlLoopTopics:
      topicSources:
        -
          topic: POLICY-CLRUNTIME-PARTICIPANT
          servers:
            - ${topicServer:localhost}
          topicCommInfrastructure: dmaap
          fetchTimeout: 15000
      topicSinks:
        -
          topic: POLICY-CLRUNTIME-PARTICIPANT
          servers:
            - ${topicServer:localhost}
          topicCommInfrastructure: dmaap

Navigate to the participant-impl/participant-impl-policy/main directory. We can then run the policy-participant with the following command:

mvn spring-boot:run -Dspring-boot.run.arguments="--server.port=8082 --topicServer=localhost"
3. Testing Procedure
3.1 Testing Outline

To perform the Smoke testing of the policy-participant we will be verifying the behaviours of the participant when the control loop changes state. The scenarios are:

  • UNINITIALISED to PASSIVE: participant creates policies and policyTypes specified in the ToscaServiceTemplate using policy-api

  • PASSIVE to RUNNING: participant deploys created policies specified in the ToscaServiceTemplate

  • RUNNING to PASSIVE: participant undeploys policies which have been deployed

  • PASSIVE to UNINITIALISED: participant deletes policies and policyTypes which have been created

3.2 Testing Steps
Creation of Controlloop:

A Control Loop is created by commissioning a Tosca template with Control loop definitions and instantiating the Control Loop with the state “UNINITIALISED”. Using Postman, commission a TOSCA template and instantiate the Control Loop using the following templates:

Tosca Service Template

Instantiate Controlloop

To verify this, we check that the Controlloop has been created and is in state UNINITIALISED.

_images/pol-part-controlloop-creation-ver.png
Creation of policies and policyTypes:

The Controlloop STATE is changed from UNINITIALISED to PASSIVE using Postman:

{
    "orderedState": "PASSIVE",
    "controlLoopIdentifierList": [
        {
            "name": "PMSHInstance0",
            "version": "1.0.1"
        }
    ]
}
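
If you prefer the command line, the same state change can be issued with curl. This is a minimal sketch, assuming the controlloop runtime is listening on localhost:6969; the user name and the instantiation command endpoint path are assumptions to be checked against your runtime application.yaml:

# endpoint path and credentials are assumptions - check the runtime application.yaml
curl -u 'runtimeUser:zb!XztG34' -X PUT \
     -H 'Content-Type: application/json' \
     -d @state-change.json \
     'http://localhost:6969/onap/controlloop/v2/instantiation/command'

Here state-change.json contains the JSON body shown above.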

This state change will trigger the creation of policies and policyTypes using the policy-api. To verify this we will check, using policy-api endpoints, that the “Sirisha” policyType, which is specified in the service template, has been created.
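
The same check can be sketched with curl against the policy-api policy type endpoint. The policy type ID and version are placeholders to be filled in from the Tosca Service Template, and the host/port assume the local policy-api instance configured earlier in this document:

# <POLICY_TYPE_ID> and <VERSION> are placeholders from the Tosca Service Template
curl -k -u 'healthcheck:zb!XztG34' \
     'https://localhost:6970/policy/api/v1/policytypes/<POLICY_TYPE_ID>/versions/<VERSION>'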

_images/pol-part-controlloop-sirisha-ver.png

We can also check that the pm-control policy has been created.

_images/pol-part-controlloop-pmcontrol-ver.png
Deployment of policies:

The Controlloop STATE is changed from PASSIVE to RUNNING using Postman:

{
    "orderedState": "RUNNING",
    "controlLoopIdentifierList": [
        {
            "name": "PMSHInstance0",
            "version": "1.0.1"
        }
    ]
}

This state change will trigger the deployment of the policies specified in the ToscaServiceTemplate. To verify this, we will check that the apex pmcontrol policy has been deployed to the defaultGroup. We check this using pap:
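
The PdpGroup query used for this verification corresponds to the PAP endpoint below; a minimal curl sketch, assuming PAP is running locally on port 6968 over HTTP as configured in PapConfigParameters.json above:

# lists all PdpGroups, their PDPs and the policies deployed on them
curl -u 'healthcheck:zb!XztG34' 'http://localhost:6968/policy/pap/v1/pdps'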

_images/pol-part-controlloop-pmcontrol-deploy-ver.png
Undeployment of policies:

The Controlloop STATE is changed from RUNNING to PASSIVE using Postman:

{
    "orderedState": "PASSIVE",
    "controlLoopIdentifierList": [
        {
            "name": "PMSHInstance0",
            "version": "1.0.1"
        }
    ]
}

This state change will trigger the undeployment of the pmcontrol policy which was deployed previously. To verify this we do a PdpGroup Query as before and check that the pmcontrol policy has been undeployed and removed from the defaultGroup:

_images/pol-part-controlloop-pmcontrol-undep-ver.png
Deletion of policies and policyTypes:

The Controlloop STATE is changed from PASSIVE to UNINITIALISED using Postman:

{
    "orderedState": "UNINITIALISED",
    "controlLoopIdentifierList": [
        {
            "name": "PMSHInstance0",
            "version": "1.0.1"
        }
    ]
}

This state change will trigger the deletion of the previously created policies and policyTypes. To verify this, as before, we can check that the Sirisha policyType is not found this time and likewise for the pmcontrol policy:

_images/pol-part-controlloop-sirisha-nf.png _images/pol-part-controlloop-pmcontrol-nf.png
Policy API Smoke Test

The policy-api smoke testing is executed against a default ONAP installation as per OOM charts. This test verifies the execution of all the REST APIs exposed by the component to make sure the contract works as expected.

General Setup

The kubernetes installation will allocate all ONAP components across multiple worker node VMs. The normal worker VM hosting ONAP components has the following spec:

  • 16GB RAM

  • 8 VCPU

  • 160GB Ephemeral Disk

The ONAP components used during the smoke tests are:

  • Policy API to perform CRUD of policies.

  • Policy DB to store the policies.

Testing procedure

The test set is focused on the following use cases:

  • Execute all the REST APIs exposed by the policy-api component.

Execute policy-api testing

Download and execute the steps in the Postman collection for verifying the policy-api component. The steps need to be performed sequentially, one after another, and no input is required from the user.

Policy Framework Lifecycle API

Make sure to execute the delete steps in order to clean the setup after testing.

Policy PAP Smoke Test

The policy-pap smoke testing is executed against a default ONAP installation as per OOM charts. This test verifies the execution of all the REST APIs exposed by the component to make sure the contract works as expected.

General Setup

The kubernetes installation will allocate all ONAP components across multiple worker node VMs. The normal worker VM hosting ONAP components has the following spec:

  • 16GB RAM

  • 8 VCPU

  • 160GB Ephemeral Disk

The ONAP components used during the smoke tests are:

  • Policy API to perform CRUD of policies.

  • Policy DB to store the policies.

  • DMaaP for the communication between components.

  • Policy PAP to perform runtime administration (deploy/undeploy/status/statistics/etc).

  • Policy Apex-PDP to deploy & undeploy policies. And send heartbeats to PAP.

  • Policy Drools-PDP to deploy & undeploy policies. And send heartbeats to PAP.

  • Policy Xacml-PDP to deploy & undeploy policies. And send heartbeats to PAP.

Testing procedure

The test set is focused on the following use cases:

  • Execute all the REST APIs exposed by the policy-pap component.

Create policies using policy-api

In order to test policy-pap, we need to use the policy-api component to create the policies.

Download and execute the steps in the Postman collection for creating policies. The steps need to be performed sequentially, one after another, and no input is required from the user.

Policy Framework Lifecycle API

Make sure to skip the delete policy steps.

Execute policy-pap testing

Download and execute the steps in the Postman collection for verifying the policy-pap component. The steps need to be performed sequentially, one after another, and no input is required from the user.

Policy Framework Administration API

Make sure to execute the delete steps in order to clean the setup after testing.

Delete policies using policy-api

Use the previously downloaded policy-api Postman collection to delete the policies created for testing.

Apex-PDP Smoke Test

The apex-pdp smoke testing is executed against a default ONAP installation as per OOM charts. This test verifies the functionalities supported by apex-pdp to make sure they are working as expected.

General Setup

The kubernetes installation will allocate all ONAP components across multiple worker node VMs. The normal worker VM hosting ONAP components has the following spec:

  • 16GB RAM

  • 8 VCPU

  • 160GB Ephemeral Disk

The ONAP components used during the smoke tests are:

  • AAI for creating dummy VNF & PNF for testing purpose.

  • CDS for publishing the blueprints & triggering the actions.

  • DMaaP for the communication between components.

  • Policy API to perform CRUD of policies.

  • Policy PAP to perform runtime administration (deploy/undeploy/status/statistics/etc).

  • Policy Apex-PDP to execute policies for both VNF & PNF scenarios.

Testing procedure

The test set is focused on the following use cases:

  • End to end testing of a sample VNF based policy using Apex-PDP.

  • End to end testing of a sample PNF based policy using Apex-PDP.

Creation of VNF & PNF in AAI

In order for PDP engines to fetch the resource details from AAI during runtime execution, we need to create dummy VNF & PNF entities in AAI. In a real control loop flow, the entities in AAI will either be created during the orchestration phase or provisioned in AAI separately.

Download and execute the steps in the Postman collection for creating the entities along with their dependencies. The steps need to be performed sequentially, one after another, and no input is required from the user.

Create VNF & PNF in AAI

Make sure to skip the delete VNF & PNF steps.

Publish Blueprints in CDS

In order for PDP engines to trigger an action in CDS during runtime execution, we need to publish relevant blueprints in CDS.

Download the zip files containing the blueprint for VNF & PNF specific actions.

VNF Test CBA PNF Test CBA

Download and execute the steps in the Postman collection for publishing the blueprints in CDS. In the enrich & publish CBA step, provide the previously downloaded zip files one by one. The execute steps are provided to verify that the blueprints are working as expected.

Publish Blueprints in CDS

Make sure to skip the delete CBA step.

Apex-PDP VNF & PNF testing

The Postman collection provided below gives an end-to-end testing experience of the apex-pdp engine, covering both VNF & PNF scenarios. List of steps covered in the Postman collection:

  • Create & Verify VNF & PNF policies as per policy type supported by apex-pdp.

  • Deploy both VNF & PNF policies to apex-pdp engine.

  • Query PdpGroup at multiple stages to verify current set of policies deployed.

  • Fetch policy status at multiple stages to verify policy deployment & undeployment status.

  • Fetch policy audit information at multiple stages to verify policy deployment & undeployment operations.

  • Fetch PDP Statistics at multiple stages to verify deployment, undeployment & execution counts.

  • Send onset events to DMaaP for triggering policies to test both success & failure scenarios.

  • Read policy notifications from DMaaP to verify policy execution.

  • Undeploy both VNF & PNF policies from apex-pdp engine.

  • Delete both VNF & PNF policies at the end.

Download and execute the steps in the Postman collection. The steps need to be performed sequentially, one after another, and no input is required from the user.

Apex-PDP VNF & PNF Testing

Make sure to wait for 2 minutes (the default heartbeat interval) to verify PDP Statistics.

Delete Blueprints in CDS

Use the previously downloaded CDS Postman collection to delete the blueprints published in CDS for testing.

Delete VNF & PNF in AAI

Use the previously downloaded AAI Postman collection to delete the VNF & PNF entities created in AAI for testing.

Running the Stability/Performance Tests

The following links contain instructions on how to run the S3P Stability and Performance tests. These may be helpful to developers to become familiar with the Policy Framework components and test any local changes.

Policy API S3P Tests
72 Hours Stability Test of Policy API
Introduction

The 72 hour stability test of policy API has the goal of verifying the stability of the running policy design API REST service by ingesting a steady flow of transactions in a multi-threaded fashion to simulate multiple clients’ behaviors. All the transaction flows are initiated from a test client server running JMeter for the duration of 72 hours.

Setup Details

The stability test was performed on a default ONAP OOM installation in the Intel Wind River Lab environment. JMeter was installed on a separate VM to inject the traffic defined in the API stability script with the following command:

nohup ./apache-jmeter-5.4.1/bin/jmeter.sh -n -t policy_api_stability.jmx -l stabilityTestResultsPolicyApi.jtl

The test was run in the background via “nohup”, to prevent it from being interrupted.

Test Plan

The 72+ hours stability test will be running the following steps sequentially in multi-threaded loops. Thread number is set to 5 to simulate 5 API clients’ behaviors (they can be calling the same policy CRUD API simultaneously). Each thread creates a different version of the policy types and policies to not interfere with one another while operating simultaneously. The point version of each entity is set to the running thread number.

Setup Thread (will be running only once)

  • Get policy-api Healthcheck

  • Get API Counter Statistics

  • Get Preloaded Policy Types

API Test Flow (5 threads running the same steps in the same loop)

  • Get Policy Metrics

  • Create a new Monitoring Policy Type with Version 6.0.#

  • Create a new Monitoring Policy Type with Version 7.0.#

  • Create a new Optimization Policy Type with Version 6.0.#

  • Create a new Guard Policy Type with Version 6.0.#

  • Create a new Native APEX Policy Type with Version 6.0.#

  • Create a new Native Drools Policy Type with Version 6.0.#

  • Create a new Native XACML Policy Type with Version 6.0.#

  • Get All Policy Types

  • Get All Versions of the new Monitoring Policy Type

  • Get Version 6.0.# of the new Monitoring Policy Type

  • Get Version 6.0.# of the new Optimization Policy Type

  • Get Version 6.0.# of the new Guard Policy Type

  • Get Version 6.0.# of the new Native APEX Policy Type

  • Get Version 6.0.# of the new Native Drools Policy Type

  • Get Version 6.0.# of the new Native XACML Policy Type

  • Get the Latest Version of the New Monitoring Policy Type

  • Create Monitoring Policy Ver 6.0.# w/Monitoring Policy Type Ver 6.0.#

  • Create Monitoring Policy Ver 7.0.# w/Monitoring Policy Type Ver 7.0.#

  • Create Optimization Policy Ver 6.0.# w/Optimization Policy Type Ver 6.0.#

  • Create Guard Policy Ver 6.0.# w/Guard Policy Type Ver 6.0.#

  • Create Native APEX Policy Ver 6.0.# w/Native APEX Policy Type Ver 6.0.#

  • Create Native Drools Policy Ver 6.0.# w/Native Drools Policy Type Ver 6.0.#

  • Create Native XACML Policy Ver 6.0.# w/Native XACML Policy Type Ver 6.0.#

  • Get Version 6.0.# of the new Monitoring Policy

  • Get Version 6.0.# of the new Optimization Policy

  • Get Version 6.0.# of the new Guard Policy

  • Get Version 6.0.# of the new Native APEX Policy

  • Get Version 6.0.# of the new Native Drools Policy

  • Get Version 6.0.# of the new Native XACML Policy

  • Get the Latest Version of the new Monitoring Policy

  • Delete Version 6.0.# of the new Monitoring Policy

  • Delete Version 7.0.# of the new Monitoring Policy

  • Delete Version 6.0.# of the new Optimization Policy

  • Delete Version 6.0.# of the new Guard Policy

  • Delete Version 6.0.# of the new Native APEX Policy

  • Delete Version 6.0.# of the new Native Drools Policy

  • Delete Version 6.0.# of the new Native XACML Policy

  • Delete Monitoring Policy Type with Version 6.0.#

  • Delete Monitoring Policy Type with Version 7.0.#

  • Delete Optimization Policy Type with Version 6.0.#

  • Delete Guard Policy Type with Version 6.0.#

  • Delete Native APEX Policy Type with Version 6.0.#

  • Delete Native Drools Policy Type with Version 6.0.#

  • Delete Native XACML Policy Type with Version 6.0.#

TearDown Thread (will only be running after API Test Flow is completed)

  • Get policy-api Healthcheck

  • Get Preloaded Policy Types

Test Results

Summary

No errors were found during the 72 hours of the Policy API stability run. The load was performed against a non-tweaked ONAP OOM installation.

Test Statistics

Total # of requests:          242277
Success %:                    100%
TPS:                          0.935
Avg. time taken per request:  5340 ms
Min. time taken per request:  1 ms
Max. time taken per request:  736976 ms

_images/api-s3p-jm-1_I.png

JMeter Results

The following graphs show the response time distributions. The “Get Policy Types” API calls are the most expensive, averaging a response time of more than 7 seconds.

_images/api-response-time-distribution_I.png _images/api-response-time-overtime_I.png

Memory and CPU usage

The memory and CPU usage can be monitored by running the “top” command in the policy-api pod. A snapshot is taken before and after test execution to monitor the changes in resource utilization.
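
For example, in a kubernetes deployment the snapshot can be taken non-interactively from outside the pod; the pod name below is an assumption to be replaced with the actual policy-api pod name:

# pod name and namespace are assumptions - adjust to your deployment
kubectl exec -n onap <policy-api-pod-name> -- top -b -n 1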

Memory and CPU usage before test execution:

_images/api_top_before_72h.jpg

Memory and CPU usage after test execution:

_images/api_top_after_72h.jpg
Performance Test of Policy API
Introduction

Performance test of policy-api has the goal of testing the min/avg/max processing time and rest call throughput for all the requests when the number of requests is large enough to saturate the resource and find the bottleneck.

Setup Details

The performance test was performed on a default ONAP OOM installation in the Intel Wind River Lab environment. JMeter was installed on a separate VM to inject the traffic defined in the API performance script with the following command:

nohup ./apache-jmeter-5.4.1/bin/jmeter.sh -n -t policy_api_performance.jmx -l performanceTestResultsPolicyApi.jtl

The test was run in the background via “nohup”, to prevent it from being interrupted.

Test Plan

The performance test plan is the same as the stability test plan above. The only differences are that the number of threads is increased to 20 (simulating 20 users’ behaviors at the same time) and the test time is reduced to 2.5 hours.

Run Test

Running/triggering the performance test is the same as for the stability test: launch JMeter pointing to the corresponding .jmx test plan. The API_HOST and API_PORT are already set up in the .jmx file.

Test Statistics

Total # of requests:          2822
Success %:                    100%
TPS:                          0.31
Avg. time taken per request:  63794 ms
Min. time taken per request:  2 ms
Max. time taken per request:  1183376 ms

_images/api-s3p-jm-2_I.png
Test Results

The following graphs show the response time distributions.

_images/api-response-time-distribution_performance_I.png _images/api-response-time-overtime_performance_I.png
Policy PAP component

Both the Performance and the Stability tests were executed by performing requests against Policy components installed as part of a full ONAP OOM deployment in Nordix lab.

Setup Details
  • Policy-PAP along with all policy components deployed as part of a full ONAP OOM deployment.

  • A second instance of APEX-PDP is spun up in the setup. Update the configuration file (OnapPfConfig.json) such that the PDP can register to the new group created by PAP in the tests.

  • Both tests were run via JMeter, which was installed on a separate VM.

Stability Test of PAP
Test Plan

The 72 hours stability test ran the following steps sequentially in a single threaded loop.

  • Create Policy defaultDomain - creates an operational policy using policy/api component

  • Create Policy sampleDomain - creates an operational policy using policy/api component

  • Check Health - checks the health status of pap

  • Check Statistics - checks the statistics of pap

  • Change state to ACTIVE - changes the state of defaultGroup PdpGroup to ACTIVE

  • Check PdpGroup Query - makes a PdpGroup query request and verifies that PdpGroup is in the ACTIVE state.

  • Deploy defaultDomain Policy - deploys the policy defaultDomain in the existing PdpGroup

  • Check status of defaultGroup - checks the status of defaultGroup PdpGroup with the defaultDomain policy 1.0.0.

  • Check PdpGroup Audit defaultGroup - checks the audit information for the defaultGroup PdpGroup.

  • Check PdpGroup Audit Policy (defaultGroup) - checks the audit information for the defaultGroup PdpGroup with the defaultDomain policy 1.0.0.

  • Create/Update PDP Group - creates a new PDPGroup named sampleGroup.

  • Check PdpGroup Query - makes a PdpGroup query request and verifies that 2 PdpGroups are in the ACTIVE state and defaultGroup has a policy deployed on it.

  • Deployment Update sampleDomain - deploys the policy sampleDomain in sampleGroup PdpGroup using pap api (a sample request is sketched after this list)

  • Check status of sampleGroup - checks the status of the sampleGroup PdpGroup.

  • Check status of PdpGroups - checks the status of both PdpGroups.

  • Check PdpGroup Query - makes a PdpGroup query request and verifies that the defaultGroup has a policy defaultDomain deployed on it and sampleGroup has policy sampleDomain deployed on it.

  • Check Audit - checks the audit information for all PdpGroups.

  • Check Consolidated Health - checks the consolidated health status of all policy components.

  • Check Deployed Policies - checks for all the deployed policies using pap api.

  • Undeploy Policy sampleDomain - undeploys the policy sampleDomain from sampleGroup PdpGroup using pap api

  • Undeploy Default Policy - undeploys the policy defaultDomain from PdpGroup

  • Change state to PASSIVE(sampleGroup) - changes the state of sampleGroup PdpGroup to PASSIVE

  • Delete PdpGroup sampleGroup - deletes the sampleGroup PdpGroup using pap api

  • Change State to PASSIVE(defaultGroup) - changes the state of defaultGroup PdpGroup to PASSIVE

  • Check PdpGroup Query - makes a PdpGroup query request and verifies that PdpGroup is in the PASSIVE state.

  • Delete Policy defaultDomain - deletes the operational policy defaultDomain using policy/api component

  • Delete Policy sampleDomain - deletes the operational policy sampleDomain using policy/api component
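
As referenced in the “Deployment Update sampleDomain” step above, deployment goes through the PAP deployment API. A minimal sketch of such a request, assuming the PAP_HOST/PAP_PORT variables from the test plan and treating the policy-id below as illustrative:

# deploy the sampleDomain policy via the PAP simple deployment API
curl -u 'healthcheck:zb!XztG34' -X POST \
     -H 'Content-Type: application/json' \
     -d '{"policies": [{"policy-id": "sampleDomain", "policy-version": "1.0.0"}]}' \
     "http://${PAP_HOST}:${PAP_PORT}/policy/pap/v1/pdps/policies"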

The following steps can be used to configure the parameters of the test plan.

  • HTTP Authorization Manager - used to store user/password authentication details.

  • HTTP Header Manager - used to store headers which will be used for making HTTP requests.

  • User Defined Variables - used to store the following user defined parameters:

PAP_HOST  - IP Address or host name of PAP component
PAP_PORT  - Port number of PAP for making REST API calls
API_HOST  - IP Address or host name of API component
API_PORT  - Port number of API for making REST API calls

The test was run in the background via “nohup”, to prevent it from being interrupted:

nohup ./jMeter/apache-jmeter-5.3/bin/jmeter.sh -n -t stability.jmx -l testresults.jtl
Test Results

Summary

Stability test plan was triggered for 72 hours.

Note

As part of the OOM deployment, another APEX-PDP pod is spun up with the pdpGroup name specified as ‘sampleGroup’. After creating the new group called ‘sampleGroup’ as part of the test, a time delay of 2 minutes is added, so that the pdp is registered to the newly created group. This has resulted in a spike in the Average time taken per request. But, this is required to make proper assertions, and also for the consolidated health check.

Test Statistics

Total # of requests:             34053
Success %:                       99.14 %
Error %:                         0.86 %
Average time taken per request:  1051 ms

Note

There were some failures during the 72 hour stability tests. These failures were caused by the apex-pdp pods restarting intermittently due to limited resources in our testing environment. The second APEX instance was configured as a replica of the apex-pdp pod and therefore, when it restarted, it registered to the “defaultGroup”, as the configuration was taken from the original apex-pdp pod. This meant a manual change was needed whenever the pods restarted, to make apex-pdp-“2” register with the “sampleGroup”. When both pods were running as expected, no errors relating to the pap functionality were observed. These errors are strictly caused by the environment setup and not by pap.

JMeter Screenshot

_images/pap-s3p-stability-result-jmeter.png

Memory and CPU usage

The memory and CPU usage can be monitored by running the “top” command on the PAP pod. A snapshot is taken before and after test execution to monitor the changes in resource utilization.

Memory and CPU usage before test execution:

_images/pap-s3p-mem-bt.png

Memory and CPU usage after test execution:

_images/pap-s3p-mem-at.png
Performance Test of PAP
Introduction

Performance test of PAP has the goal of testing the min/avg/max processing time and rest call throughput for all the requests with multiple requests at the same time.

Setup Details

The performance test is performed on a setup similar to the stability test. The JMeter VM will be sending a large number of REST requests to the PAP component and collecting the statistics.

Test Plan

Performance test plan is the same as the stability test plan above except for the few differences listed below.

  • Increase the number of threads up to 5 (simulating 5 users’ behaviours at the same time).

  • Reduce the test time to 2 hours.

  • Counters are used by the ‘Create/Update PDP Group’ test case to create a different group in each iteration.

  • Removed the delay to wait for the new PDP to be registered. Also removed the corresponding assertions where the Pdp instance registration to the newly created group is validated.

Run Test

Running/triggering the performance test is the same as for the stability test: launch JMeter pointing to the corresponding .jmx test plan. The API_HOST, API_PORT, PAP_HOST and PAP_PORT variables are already set up in the .jmx file.

nohup ./jMeter/apache-jmeter-5.3/bin/jmeter.sh -n -t performance.jmx -l perftestresults.jtl

Once the test execution is completed, execute the below script to get the statistics:

$ cd /home/ubuntu/pap/testsuites/performance/src/main/resources/testplans
$ ./results.sh /home/ubuntu/pap_perf/resultTree.log
Test Results

Test results are shown below.

Test Statistics

Total # of requests:             24092
Success %:                       100 %
Error %:                         0.00 %
Average time taken per request:  2467 ms

JMeter Screenshot

_images/pap-s3p-performance-result-jmeter.png
Policy APEX PDP component

Both the Stability and the Performance tests were executed in a full ONAP OOM deployment in Nordix lab.

Setup Details
  • APEX-PDP along with all policy components deployed as part of a full ONAP OOM deployment.

  • Policy-models-simulator is deployed to use CDS and DMaaP simulators during policy execution.
    Simulator configurations used are available in apex-pdp repository:

    testsuites/apex-pdp-stability/src/main/resources/simulatorConfig/

  • Two APEX policies are executed in the APEX-PDP engine, and are triggered by multiple threads during the tests.

  • Both tests were run via JMeter.

    Stability test script is available in apex-pdp repository:

    testsuites/apex-pdp-stability/src/main/resources/apexPdpStabilityTestPlan.jmx

    Performance test script is available in apex-pdp repository:

    testsuites/performance/performance-benchmark-test/src/main/resources/apexPdpPerformanceTestPlan.jmx

Note

Policy executions are validated in a stricter fashion during the tests. There are test cases where up to 80 events are expected on the DMaaP topic. The DMaaP simulator is used to keep the setup simple and to avoid any message-pickup timing issues.

Stability Test of APEX-PDP
Test Plan

The 72 hours stability test ran the following steps.

Setup Phase

Policies are created and deployed to APEX-PDP during this phase. Only one thread is in action and this step is done only once.

  • Create Policy onap.policies.apex.Simplecontrolloop - creates the first APEX policy using policy/api component.

    This is a sample policy used for PNF testing.

  • Create Policy onap.policies.apex.Example - creates the second APEX policy using policy/api component.

    This is a sample policy used for VNF testing.

  • Deploy Policies - Deploy both the policies created to APEX-PDP using policy/pap component

Main Phase

Once the policies are created and deployed to APEX-PDP by the setup thread, five threads execute the below tests for 72 hours.

  • Healthcheck - checks the health status of APEX-PDP

  • Prometheus Metrics - checks that APEX-PDP is exposing prometheus metrics

  • Test Simplecontrolloop policy success case - Send a trigger event to the unauthenticated.DCAE_CL_OUTPUT DMaaP topic (a sample publish request is sketched after this list).

    If the policy execution is successful, 3 different notification events are sent to APEX-CL-MGT topic by each one of the 5 threads. So, it is checked if 15 notification messages are received in total on APEX-CL-MGT topic with the relevant messages.

  • Test Simplecontrolloop policy failure case - Send a trigger event with invalid pnfName to unauthenticated.DCAE_CL_OUTPUT DMaaP topic.

    The policy execution is expected to fail due to AAI failure response. 2 notification events are expected on APEX-CL-MGT topic by a thread in this case. It is checked if 10 notification messages are received in total on APEX-CL-MGT topic with the relevant messages.

  • Test Example policy success case - Send a trigger event to unauthenticated.DCAE_POLICY_EXAMPLE_OUTPUT DMaaP topic.

    If the policy execution is successful, 4 different notification events are sent to APEX-CL-MGT topic by each one of the 5 threads. So, it is checked if 20 notification messages are received in total on APEX-CL-MGT topic with the relevant messages.

  • Test Example policy failure case - Send a trigger event with invalid vnfName to unauthenticated.DCAE_POLICY_EXAMPLE_OUTPUT DMaaP topic.

    The policy execution is expected to fail due to AAI failure response. 2 notification events are expected on APEX-CL-MGT topic by a thread in this case. So, it is checked if 10 notification messages are received in total on APEX-CL-MGT topic with the relevant messages.

  • Clean up DMaaP notification topic - DMaaP notification topic which is APEX-CL-MGT is cleaned up after each test to make sure that one failure doesn’t lead to cascading errors.
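
The trigger events in the steps above are plain JSON messages published to the DMaaP Message Router events API. A minimal sketch of a publish and a poll, assuming the DMaaP (or simulator) endpoint on localhost:3904 and a file onset.json holding the trigger event payload:

# onset.json is a placeholder for the trigger event payload used by the test plan
curl -X POST -H 'Content-Type: application/json' \
     -d @onset.json \
     'http://localhost:3904/events/unauthenticated.DCAE_CL_OUTPUT'

# poll the notification topic (consumer group/id are arbitrary strings)
curl 'http://localhost:3904/events/APEX-CL-MGT/g1/c1?timeout=5000'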

Teardown Phase

Policies are undeployed from APEX-PDP and deleted during this phase. Only one thread is in action and this step is done only once after the Main phase is complete.

  • Undeploy Policies - Undeploy both the policies from APEX-PDP using policy/pap component

  • Delete Policy onap.policies.apex.Simplecontrolloop - delete the first APEX policy using policy/api component.

  • Delete Policy onap.policies.apex.Example - delete the second APEX policy also using policy/api component.

The following steps can be used to configure the parameters of the test plan.

  • HTTP Authorization Manager - used to store user/password authentication details.

  • HTTP Header Manager - used to store headers which will be used for making HTTP requests.

  • User Defined Variables - used to store the following user defined parameters:

HOSTNAME            - IP Address or host name to access the components
PAP_PORT            - Port number of PAP for making REST API calls such as deploy/undeploy of policy
API_PORT            - Port number of API for making REST API calls such as create/delete of policy
APEX_PORT           - Port number of APEX for making REST API calls such as healthcheck/metrics
wait                - Wait time if required after a request (in milliseconds)
threads             - Number of threads to run test cases in parallel
threadsTimeOutInMs  - Synchronization timer for threads running in parallel (in milliseconds)

Run Test

The test was run in the background via “nohup”, to prevent it from being interrupted:

nohup ./apache-jmeter-5.4.1/bin/jmeter.sh -n -t apexPdpStabilityTestPlan.jmx -l stabilityTestResults.jtl
Test Results

Summary

Stability test plan was triggered for 72 hours. There were no failures during the 72 hours test.

Test Statistics

Total # of requests:             428661
Success %:                       100 %
Error %:                         0.00 %
Average time taken per request:  162 ms


JMeter Screenshot

_images/apex_stability_jmeter_results.jpg

Memory and CPU usage

The memory and CPU usage can be monitored by running the “top” command in the APEX-PDP pod. A snapshot is taken before and after test execution to monitor the changes in resource utilization. Prometheus metrics are also collected before and after the test execution.

Memory and CPU usage before test execution:

_images/apex_top_before_72h.jpg

Prometheus metrics before 72h test

Memory and CPU usage after test execution:

_images/apex_top_after_72h.jpg

Prometheus metrics after 72h test

Performance Test of APEX-PDP
Introduction

Performance test of APEX-PDP is done similarly to the stability test, but in a more extreme manner, using a higher thread count.

Setup Details

The performance test is performed on a setup similar to the stability test.

Test Plan

Performance test plan is the same as the stability test plan above except for the few differences listed below.

  • Increase the number of threads used in the Main Phase from 5 to 20.

  • Reduce the test time to 2 hours.

Run Test
nohup ./apache-jmeter-5.4.1/bin/jmeter.sh -n -t apexPdpPerformanceTestPlan.jmx -l perftestresults.jtl
Test Results

Test results are shown below.

Test Statistics

Total # of requests:             46946
Success %:                       100 %
Error %:                         0.00 %
Average time taken per request:  198 ms

JMeter Screenshot

_images/apex_perf_jmeter_results.jpg
Summary

Multiple policies were executed in a multi threaded fashion for both stability and performance tests. Both tests ran smoothly without any issues.

Policy Drools PDP component

Both the Performance and the Stability tests were executed against an ONAP installation in the policy-k8s tenant in the Wind River lab, from an independent VM running the JMeter tool to inject the load.

General Setup

The installation runs the following components in a single VM:

  • AAF

  • AAI

  • DMAAP

  • POLICY

The VM has the following hardware spec:

  • 126GB RAM

  • 12 VCPUs

  • 155GB Ephemeral Disk

Jmeter is run from a different VM with the following configuration:

  • 16GB RAM

  • 8 VCPUs

  • 155GB Ephemeral Disk

The drools-pdp container uses the JVM memory settings from a default OOM installation.

Other ONAP components exercised during the stability tests were:

  • Policy XACML PDP to process guard queries for each transaction.

  • DMaaP to carry PDP-D and jmeter initiated traffic to complete transactions.

  • Policy API to create (and delete at the end of the tests) policies for each scenario under test.

  • Policy PAP to deploy (and undeploy at the end of the tests) policies for each scenario under test.

The following components are simulated during the tests.

  • SO actor for the vDNS use case.

  • APPC responses for the vCPE and vFW use cases.

  • AAI to answer queries for the use cases under test.

The SO and AAI actors were simulated within the PDP-D JVM by enabling the feature-controlloop-utils before running the tests.

PDP-D Setup

The kubernetes charts were modified prior to the installation to add the following script that enables the controlloop-utils feature:

oom/kubernetes/policy/charts/drools/resources/configmaps/features.pre.sh:

#!/bin/sh
sh -c "features enable controlloop-utils"
Stability Test of Policy PDP-D
PDP-D performance

The tests focused on the following use cases:

  • vCPE

  • vDNS

  • vFirewall

For 72 hours the following 5 scenarios ran in parallel:

  • vCPE success scenario

  • vCPE failure scenario (failure returned by simulated APPC recipient through DMaaP).

  • vDNS success scenario.

  • vDNS failure scenario (failure by introducing in the DCAE ONSET a non-existent vserver-name reference).

  • vFirewall success scenario.

Five threads ran in parallel, one for each scenario, back to back with no pauses. The transactions were initiated by each jmeter thread group. Each thread initiated a transaction, monitored the transaction, and as soon as the transaction ending was detected, it initiated the next one.

JMeter was run in a docker container with the following command:

docker run --interactive --tty --name jmeter --rm --volume $PWD:/jmeter -e VERBOSE_GC="" egaillardon/jmeter-plugins --nongui --testfile s3p.jmx --loglevel WARN

The results were accessed by using the telemetry API to gather statistics:
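
A sketch of pulling engine statistics from the PDP-D telemetry API, assuming the default telemetry port 9696 and credentials held in environment variables; the exact path for per-control-loop statistics may differ by release, so treat this as illustrative:

# credentials and path are assumptions - consult the PDP-D telemetry documentation
curl -k -u "${TELEMETRY_USER}:${TELEMETRY_PASSWORD}" \
     "https://${PDPD_IP}:9696/policy/pdp/engine"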

vCPE Success scenario

ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e:

# Times are in milliseconds

Control Loop Name: ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e
Number of Transactions Executed: 114007
Number of Successful Transactions: 112727
Number of Failure Transactions: 1280
Average Execution Time: 434.9942021103967 ms.
vCPE Failure scenario

ControlLoop-vCPE-Fail:

# Times are in milliseconds

Control Loop Name: ControlLoop-vCPE-Fail
Number of Transactions Executed: 114367
Number of Successful Transactions: 114367 (failure transactions are expected)
Number of Failure Transactions: 0         (success transactions are not expected)
Average Execution Time: 433.61750330077734 ms.
vDNS Success scenario

ControlLoop-vDNS-6f37f56d-a87d-4b85-b6a9-cc953cf779b3:

# Times are in milliseconds

Control Loop Name: ControlLoop-vDNS-6f37f56d-a87d-4b85-b6a9-cc953cf779b3
Number of Transactions Executed: 237512
Number of Successful Transactions: 229532
Number of Failure Transactions: 7980
Average Execution Time: 268.028794334602 ms.
vDNS Failure scenario

ControlLoop-vDNS-Fail:

# Times are in milliseconds

Control Loop Name: ControlLoop-vDNS-Fail
Number of Transactions Executed: 1957987
Number of Successful Transactions: 1957987 (failure transactions are expected)
Number of Failure Transactions: 0         (success transactions are not expected)
Average Execution Time: 39.369322166081794
vFirewall Success scenario

ControlLoop-vFirewall-d0a1dfc6-94f5-4fd4-a5b5-4630b438850a:

# Times are in milliseconds

Control Loop Name: ControlLoop-vFirewall-d0a1dfc6-94f5-4fd4-a5b5-4630b438850a
Number of Transactions Executed: 120308
Number of Successful Transactions: 118895
Number of Failure Transactions: 1413
Average Execution Time: 394.8609236293513 ms.
Commentary

There has been a degradation of performance observed in this release when compared with the previous one. Approximately 1% of transactions were not completed as expected for some use cases. Average Execution Times are extended as well. The unexpected results seem to point in the direction of the interactions of the distributed locking feature with the database. These areas as well as the conditions for the test need to be investigated further.

# Common pattern in the audit.log for unexpected transaction completions

a8d637fc-a2d5-49f9-868b-5b39f7befe25||ControlLoop-vFirewall-d0a1dfc6-94f5-4fd4-a5b5-4630b438850a|
policy:usecases:[org.onap.policy.drools-applications.controlloop.common:controller-usecases:1.9.0:usecases]|
2021-10-12T19:48:02.052+00:00|2021-10-12T19:48:02.052+00:00|0|
null:operational.modifyconfig.EVENT.MANAGER.FINAL:1.0.0|dev-policy-drools-pdp-0|
ERROR|400|Target Lock was lost|||VNF.generic-vnf.vnf-name||dev-policy-drools-pdp-0||
dev-policy-drools-pdp-0|microservice.stringmatcher|
{vserver.prov-status=ACTIVE, vserver.is-closed-loop-disabled=false,
generic-vnf.vnf-name=fw0002vm002fw002, vserver.vserver-name=OzVServer}||||
INFO|Session org.onap.policy.drools-applications.controlloop.common:controller-usecases:1.9.0:usecases|

# The "Target Lock was lost" is a common message error in the unexpected results.

Performance Test of Policy XACML PDP

The Performance test was executed by performing requests against the Policy RESTful APIs residing on the XACML PDP installed on a Cloud based Virtual Machine.

VM Configuration:

  • 16GB RAM

  • 4 VCPU

  • 40GB Disk

ONAP was deployed using a K8s configuration on the same VM. Running JMeter and ONAP OOM on the same VM may adversely impact the performance of the XACML-PDP being tested.

Summary

The Performance test was executed, and the result analyzed, via:

jmeter -Jduration=1200 -Jusers=10 \
    -Jxacml_ip=$ip -Jpap_ip=$ip -Japi_ip=$ip \
    -Jxacml_port=30111 -Jpap_port=30197 -Japi_port=30664 \
    -n -t perf.jmx -l testresults.jtl

Note: the ports listed above correspond to port 6969 of the respective components.

The performance test, perf.jmx, runs the following, all in parallel:

  • Healthcheck, 10 simultaneous threads

  • Statistics, 10 simultaneous threads

  • Decisions, 10 simultaneous threads, each running the following in sequence:

    • Monitoring Decision

    • Monitoring Decision, abbreviated

    • Naming Decision

    • Optimization Decision

    • Default Guard Decision (always “Permit”)

    • Frequency Limiter Guard Decision

    • Min/Max Guard Decision

When the script starts up, it uses policy-api to create, and policy-pap to deploy, the policies that are needed by the test. It assumes that the “naming” policy has already been created and deployed. Once the test completes, it undeploys and deletes the policies that it previously created.
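
For reference, the policy creation performed by the script corresponds to policy-api requests of the following shape. This is a sketch, assuming a TOSCA service template body in policy.json and the NodePort mapping used in the jmeter command above:

# policy.json is a placeholder for the TOSCA service template defining the policy
curl -k -u 'healthcheck:zb!XztG34' -X POST \
     -H 'Content-Type: application/json' \
     -d @policy.json \
     "https://${api_ip}:30664/policy/api/v1/policies"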

Results

The test was run for 20 minutes at a time, for different numbers of users (i.e., threads), with the following results:

Number of Users    Throughput (requests/second)    Average Latency (ms)
10                 309.919                         5.83457
20                 2527.73                         22.2634
40                 3184.78                         35.1173
80                 3677.35                         60.2893

Stability Test of Policy XACML PDP

The stability test was executed by performing requests against the Policy RESTful APIs residing on the XACML PDP installed in the citycloud lab. This was running on a kubernetes pod having the following configuration:

  • 16GB RAM

  • 4 VCPU

  • 40GB Disk

The test was run via JMeter, which was installed on the same VM. Running JMeter and ONAP OOM on the same VM may adversely impact the performance of the XACML-PDP being tested. Due to the minimal nature of this setup, the K8S cluster became overloaded on a couple of occasions during the test. This resulted in a small number of errors and a greater maximum transaction time than normal.

Summary

The stability test was performed on a default ONAP OOM installation in the city Cloud Lab environment. JMeter was installed on the same VM to inject the traffic defined in the XACML PDP stability script with the following command:

jmeter.sh -Jduration=259200 -Jusers=2 -Jxacml_ip=$ip -Jpap_ip=$ip -Japi_ip=$ip \
    -Jxacml_port=30111 -Jpap_port=30197 -Japi_port=30664 --nongui --testfile stability.jmx

Note: the ports listed above correspond to port 6969 of the respective components.

The default log level of the root and org.eclipse.jetty.server.RequestLog loggers in the logback.xml of the XACML PDP (oom/kubernetes/policy/components/policy-xacml-pdp/resources/config/logback.xml) was set to WARN, since the OOM installation did not have log rotation of the container logs enabled on the kubernetes worker nodes.

The stability test, stability.jmx, runs the following, all in parallel:

  • Healthcheck, 2 simultaneous threads

  • Statistics, 2 simultaneous threads

  • Decisions, 2 simultaneous threads, each running the following tasks in sequence:
    • Monitoring Decision

    • Monitoring Decision, abbreviated

    • Naming Decision

    • Optimization Decision

    • Default Guard Decision (always “Permit”)

    • Frequency Limiter Guard Decision

    • Min/Max Guard Decision

When the script starts up, it uses policy-api to create, and policy-pap to deploy the policies that are needed by the test. It assumes that the “naming” policy has already been created and deployed. Once the test completes, it undeploys and deletes the policies that it previously created.

Results

The stability summary results were reported by JMeter with the following summary line:

summary = 222450112 in 72:00:39 =  858.1/s Avg:     5 Min:     1 Max: 946942 Err:    17 (0.00%)

The XACML PDP offered good performance with JMeter for the traffic mix described above, sustaining an average of 858 requests per second. A small number of errors were encountered, and no significant CPU spikes were noted. The average transaction time was 5 ms, with a maximum of 946942 ms.

Policy Distribution component
72h Stability and 4h Performance Tests of Distribution
VM Details

The stability and performance tests are performed on VMs running in the OpenStack cloud environment in the ONAP integration lab.

Policy VM details

  • OS: Ubuntu 18.04 LTS (GNU/Linux 4.15.0-151-generic x86_64)

  • CPU: 4 core

  • RAM: 15 GB

  • HardDisk: 39 GB

  • Docker version 20.10.7, build 20.10.7-0ubuntu1~18.04.2

  • Java: openjdk 11.0.11 2021-04-20

Common Setup

Update the Ubuntu package lists

sudo apt update

Install Java

sudo apt install -y openjdk-11-jdk

Ensure that the Java version that is executing is OpenJDK version 11

$ java --version
openjdk 11.0.11 2021-04-20
OpenJDK Runtime Environment (build 11.0.11+9-Ubuntu-0ubuntu2.18.04)
OpenJDK 64-Bit Server VM (build 11.0.11+9-Ubuntu-0ubuntu2.18.04, mixed mode)

Install Docker and Docker Compose

# Add docker repository
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg

echo "deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

sudo apt update

# Install docker
sudo apt-get install docker-ce docker-ce-cli containerd.io

Change the permissions of the Docker socket file

sudo chmod 666 /var/run/docker.sock

Check the status of the Docker service and ensure it is running correctly

systemctl status --no-pager docker
docker.service - Docker Application Container Engine
   Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
   Active: active (running) since Wed 2020-10-14 13:59:40 UTC; 1 weeks 0 days ago
   # ... (truncated for brevity)

docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES

Install and verify docker-compose

# Install compose
sudo curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose

# Check if install was successful
docker-compose --version

Clone the policy-distribution repo to access the test scripts

git clone https://gerrit.onap.org/r/policy/distribution
Start services for MariaDB, Policy API, PAP and Distribution

Navigate to the main folder for scripts to setup services:

cd ~/distribution/testsuites/stability/src/main/resources/setup

Modify the versions.sh script to match all the versions being tested.

vi ~/distribution/testsuites/stability/src/main/resources/setup/versions.sh

Ensure the correct docker image versions are specified - e.g. for Istanbul-M4

  • export POLICY_DIST_VERSION=2.6.1-SNAPSHOT
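
For example, versions.sh may contain exports like the following; the variable names other than POLICY_DIST_VERSION are illustrative assumptions (the image versions mirror the docker ps output below), so match them against the actual script:

# illustrative contents of versions.sh - variable names other than
# POLICY_DIST_VERSION are assumptions
export POLICY_DIST_VERSION=2.6.1-SNAPSHOT
export POLICY_API_VERSION=2.5.1-SNAPSHOT
export POLICY_PAP_VERSION=2.5.1-SNAPSHOT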

Run the start.sh script to start the components. After installation, the script will execute “docker ps” and show the running containers.

./start.sh

Creating network "setup_default" with the default driver
Creating policy-distribution ... done
Creating mariadb             ... done
Creating simulator           ... done
Creating policy-db-migrator  ... done
Creating policy-api          ... done
Creating policy-pap          ... done

CONTAINER ID   IMAGE                                                               COMMAND                  CREATED         STATUS                  PORTS                NAMES
f91be98ad1f4   nexus3.onap.org:10001/onap/policy-pap:2.5.1-SNAPSHOT                "/opt/app/policy/pap…"   1 second ago    Up Less than a second   6969/tcp             policy-pap
d92cdbe971d4   nexus3.onap.org:10001/onap/policy-api:2.5.1-SNAPSHOT                "/opt/app/policy/api…"   1 second ago    Up Less than a second   6969/tcp             policy-api
9a019f5d641e   nexus3.onap.org:10001/onap/policy-db-migrator:2.3.1-SNAPSHOT        "/opt/app/policy/bin…"   2 seconds ago   Up 1 second             6824/tcp             policy-db-migrator
108ba238edeb   nexus3.onap.org:10001/mariadb:10.5.8                                "docker-entrypoint.s…"   3 seconds ago   Up 1 second             3306/tcp             mariadb
bec9b223e79f   nexus3.onap.org:10001/onap/policy-models-simulator:2.5.1-SNAPSHOT   "simulators.sh"          3 seconds ago   Up 1 second             3905/tcp             simulator
74aa5abeeb08   nexus3.onap.org:10001/onap/policy-distribution:2.6.1-SNAPSHOT       "/opt/app/policy/bin…"   3 seconds ago   Up 1 second             6969/tcp, 9090/tcp   policy-distribution

Note

The containers in this docker-compose setup are running with HTTP configuration. For HTTPS, ports and configurations will need to be changed, and certificates and keys must be generated for security.

Install JMeter

Download and install JMeter

# Install required packages
sudo apt install -y wget unzip

# Install JMeter
mkdir -p jmeter
cd jmeter
wget https://dlcdn.apache.org//jmeter/binaries/apache-jmeter-5.4.1.zip
unzip -q apache-jmeter-5.4.1.zip
rm apache-jmeter-5.4.1.zip
Install & configure visualVM

VisualVM needs to be installed in the virtual machine running Distribution. It will be used to monitor CPU, Memory and GC for Distribution while the stability tests are running.

sudo apt install -y visualvm

Run these commands to create the Java security policy file for VisualVM and set its permissions. The file must exist before its permissions can be changed, and the redirection is done via tee so that it runs with root privileges:

# Create Java security policy file for VisualVM
sudo tee /usr/lib/jvm/java-11-openjdk-amd64/bin/visualvm.policy > /dev/null << EOF
grant codebase "jrt:/jdk.jstatd" {
   permission java.security.AllPermission;
};
grant codebase "jrt:/jdk.internal.jvmstat" {
   permission java.security.AllPermission;
};
EOF

# Set globally accessible permissions on the policy file
sudo chmod 777 /usr/lib/jvm/java-11-openjdk-amd64/bin/visualvm.policy

Run the following command to start jstatd using port 1111

/usr/lib/jvm/java-11-openjdk-amd64/bin/jstatd -p 1111 -J-Djava.security.policy=/usr/lib/jvm/java-11-openjdk-amd64/bin/visualvm.policy &

Run visualVM to connect to POLICY_DISTRIBUTION_IP:9090

# Get the Policy Distribution container IP
echo $(docker inspect -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' policy-distribution)

# Start visual vm
visualvm &

This will load up the visualVM GUI

Connect to Distribution JMX Port.

  1. On the visualvm toolbar, click on “Add JMX Connection”

  2. Enter the Distribution container IP and Port 9090. This is the JMX port exposed by the distribution container

  3. Double click on the newly added nodes under “Remotes” to start monitoring CPU, Memory & GC.

Example Screenshot of visualVM

_images/distribution-visualvm-snapshot.png
Stability Test of Policy Distribution
Introduction

The 72 hour Stability Test for policy distribution has the goal of introducing a steady flow of transactions initiated from a test client server running JMeter. The policy distribution service is configured with a special FileSystemReception plugin to monitor a local directory for newly added CSAR files, which it then processes. The input CSAR will be added/removed by the test client (JMeter) and the result will be pulled from the backend (PAP and Policy API) by the test client (JMeter).

The test will be performed in an environment where JMeter continuously adds/removes a test CSAR in the directory that policy distribution is monitoring, and then gets the processed results from PAP and the Policy API to verify the successful deployment of the policy. The policy will then be undeployed and the test will loop continuously until 72 hours have elapsed.

Test Plan Sequence

The 72h stability test will run the following steps sequentially in a single threaded loop.

  • Delete Old CSAR - Checks if CSAR already exists in the watched directory, if so it deletes it

  • Add CSAR - Adds CSAR to the directory that distribution is watching

  • Get Healthcheck - Ensures Healthcheck is returning 200 OK (sample requests for this and the statistics step are sketched after this list)

  • Get Statistics - Ensures Statistics is returning 200 OK

  • Assert PDP Group Query - Checks that PDPGroupQuery contains the deployed policy

  • Assert PoliciesDeployed - Checks that the policy is deployed

  • Undeploy/Delete Policy - Undeploys and deletes the Policy for the next loop

  • Assert PDP Group Query for Deleted Policy - Ensures the policy has been removed and does not exist
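
A quick manual version of the healthcheck and statistics steps above. This is a sketch assuming the distribution endpoint is reachable on port 6969 over HTTP (as in the docker-compose setup) with the healthcheck credentials used elsewhere in this document:

# distribution healthcheck and statistics endpoints
curl -u 'healthcheck:zb!XztG34' 'http://localhost:6969/healthcheck'
curl -u 'healthcheck:zb!XztG34' 'http://localhost:6969/statistics'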

The following steps can be used to configure the parameters of the test plan.

  • HTTP Authorization Manager - used to store user/password authentication details.

  • HTTP Header Manager - used to store headers which will be used for making HTTP requests.

  • User Defined Variables - used to store the following user defined parameters:

PAP_HOST  - IP Address or host name of PAP component
PAP_PORT  - Port number of PAP for making REST API calls
API_HOST  - IP Address or host name of API component
API_PORT  - Port number of API for making REST API calls
DURATION  - Duration of Test

Screenshot of Distribution stability test plan

_images/distribution-jmeter-testcases.png
Running the Test Plan

Check if the /tmp/policydistribution/distributionmount folder exists, as it should have been created during the start.sh script execution. If not, run the following commands to create the folder and set its permissions so that the test plan can insert the CSAR into it.

Note

Make sure that only the CSAR file is placed in the watched folder and that logs are generated in a separate logs folder, as distribution may treat any zip-type file in the watched folder as a policy file. A logback.xml configuration file is available under the setup/distribution folder.

sudo mkdir -p /tmp/policydistribution/distributionmount
sudo chmod -R a+trwx /tmp

Navigate to the stability test folder.

cd ~/distribution/testsuites/stability/src/main/resources/testplans/

Execute the run_test.sh

./run_test.sh
Test Results

Summary

  • Stability test plan was triggered for 72 hours.

  • No errors were reported

Test Statistics

_images/stability-statistics.png _images/stability-threshold.png

VisualVM Screenshots

_images/stability-monitor.png _images/stability-threads.png
Performance Test of Policy Distribution
Introduction

The 4h Performance Test of Policy Distribution has the goal of testing the min/avg/max processing time and rest call throughput for all the requests when the number of requests is large enough to saturate the resource and find the bottleneck.

It also tests that distribution can handle multiple policy CSARs and that these are deployed within 30 seconds consistently.

Setup Details

The performance test is based on the same setup as the distribution stability tests.

Test Plan Sequence

Performance test plan is different from the stability test plan.

  • Instead of handling one policy CSAR at a time, multiple CSARs are deployed to the watched folder at the exact same time.

  • We expect all policies from these CSARs to be deployed within 30 seconds.

  • There are also multithreaded tests running towards the healthcheck and statistics endpoints of the distribution service.

Running the Test Plan

Check that the /tmp folder permissions allow the test plan to insert the CSAR into the /tmp/policydistribution/distributionmount folder, and clean up from any previous run. If necessary, bring the containers down with the down.sh script from the setup folder mentioned above.

sudo mkdir -p /tmp/policydistribution/distributionmount
sudo chmod -R a+trwx /tmp

Navigate to the testplan folder and execute the test script:

cd ~/distribution/testsuites/performance/src/main/resources/testplans/
./run_test.sh
Test Results

Summary

  • Performance test plan was triggered for 4 hours.

  • No errors were reported

Test Statistics

_images/performance-statistics.png _images/performance-threshold.png

VisualVM Screenshots

_images/performance-monitor.png _images/performance-threads.png
Policy Clamp Controlloop

Both the Performance and the Stability tests were executed by performing requests against controlloop components installed as docker images in a local environment.

Setup Details
  • Controlloop runtime component docker image is started and running.

  • Participant docker images policy-clamp-cl-pf-ppnt, policy-clamp-cl-http-ppnt, policy-clamp-cl-k8s-ppnt are started and running.

  • DMaaP simulator for communication between components.

  • mariadb docker container for policy and controlloop database.

  • policy-api for communication between policy participant and policy-framework

  • Both tests were run via JMeter, which was installed on a separate VM.

Stability Test of Controlloop components
Test Plan

The 72 hours stability test ran the following steps sequentially in a single threaded loop.

  • Create Policy defaultDomain - creates an operational policy using policy/api component

  • Delete Policy sampleDomain - deletes the operational policy sampleDomain using policy/api component

  • Commission Controlloop definition - commissions the controlloop definition in runtime

  • Instantiate controlloop - Instantiate the controlloop towards participants

  • Check controlloop state - check the current state of controlloop

  • Change State to PASSIVE - change the state of the controlloop to PASSIVE

  • Check controlloop state - check the current state of controlloop

  • Change State to UNINITIALISED - change the state of the controlloop to UNINITIALISED

  • Check controlloop state - check the current state of controlloop

  • Delete instantiated controlloop - delete the instantiated controlloop from all participants

  • Delete ControlLoop Definition - delete the controlloop definition on runtime

The following elements can be used to configure the parameters of the test plan.

  • HTTP Authorization Manager - used to store user/password authentication details.

  • HTTP Header Manager - used to store headers which will be used for making HTTP requests.

  • User Defined Variables - used to store the following user-defined parameters.

============================  ========================================================================
Name                          Description
============================  ========================================================================
RUNTIME_HOST                  IP Address or host name of controlloop runtime component
RUNTIME_PORT                  Port number of controlloop runtime components for making REST API calls
POLICY_PARTICIPANT_HOST       IP Address or host name of policy participant
POLICY_PARTICIPANT_HOST_PORT  Port number of policy participant
============================  ========================================================================

The test was run in the background via “nohup”, to prevent it from being interrupted:

nohup ./jMeter/apache-jmeter-5.2.1/bin/jmeter -n -t stability.jmx -l testresults.jtl
Test Results

Summary

Stability test plan was triggered for 72 hours.

Note

The assertions of state changes are not completely taken care of, as the stability is ran with controlloop componenets alone, and not including complete policy framework deployment, which makes it difficult for actual state changes from PASSIVE to RUNNING etc to happen.

Test Statistics

===================  =========  =======  ==============================
Total # of requests  Success %  Error %  Average time taken per request
===================  =========  =======  ==============================
99992                100.00 %   0.00 %   192 ms
===================  =========  =======  ==============================

Controlloop component Setup

============  =========================================================  ===========================================  =========================
CONTAINER ID  IMAGE                                                      PORTS                                        NAMES
============  =========================================================  ===========================================  =========================
a9cb0cd103cf  onap/policy-clamp-cl-runtime:latest                        6969/tcp                                     policy-clamp-cl-runtime
886e572b8438  onap/policy-clamp-cl-pf-ppnt:latest                        6973/tcp                                     policy-clamp-cl-pf-ppnt
035707b1b95f  nexus3.onap.org:10001/onap/policy-api:latest               6969/tcp                                     policy-api
d34204f95ff3  onap/policy-clamp-cl-http-ppnt:latest                      6971/tcp                                     policy-clamp-cl-http-ppnt
4470e608c9a8  onap/policy-clamp-cl-k8s-ppnt:latest                       6972/tcp, 8083/tcp                           policy-clamp-cl-k8s-ppnt
62229d46b79c  nexus3.onap.org:10001/onap/policy-models-simulator:latest  3905/tcp, 6666/tcp, 6668-6670/tcp, 6680/tcp  simulator
efaf0ca5e1f0  nexus3.onap.org:10001/mariadb:10.5.8                       3306/tcp                                     mariadb
============  =========================================================  ===========================================  =========================

Note

There were no failures during the 72-hour test.

JMeter Screenshot

_images/controlloop_stability_jmeter.png

JMeter Screenshot

_images/controlloop_stability_table.png

Memory and CPU usage

The memory and CPU usage can be monitored by running the “docker stats” command. A snapshot is taken before and after test execution to monitor the changes in resource utilization.

Memory and CPU usage before test execution:

_images/Stability_before_stats.png

Memory and CPU usage after test execution:

_images/Stability_after_stats.png
Performance Test of Controlloop components
Introduction

The performance test of the Controlloop components has the goal of testing the min/avg/max processing time and REST call throughput for all the requests when multiple requests are sent at the same time.

Setup Details

The performance test is performed on a similar setup to the stability test. The JMeter VM sends a large number of REST requests to the runtime component and collects the statistics.

Test Plan

The performance test plan is the same as the stability test plan above, except for the differences listed below.

  • Increase the number of threads up to 5 (simulating 5 users’ behaviours at the same time).

  • Reduce the test time to 2 hours.

Run Test

Running/Triggering the performance test is the same as for the stability test; that is, launch JMeter pointing to the corresponding .jmx test plan. The RUNTIME_HOST, RUNTIME_PORT, POLICY_PARTICIPANT_HOST, and POLICY_PARTICIPANT_HOST_PORT values are already set up in the .jmx file:

nohup ./jMeter/apache-jmeter-5.2.1/bin/jmeter -n -t performance.jmx -l testresults.jtl

Once the test execution is completed, execute the below script to get the statistics:

$ cd ./clamp/testsuites/performance/src/main/resources/testplans
$ ./results.sh resultTree.log
Test Results

Test results are shown below.

Test Statistics

===================  =========  =======  ==============================
Total # of requests  Success %  Error %  Average time taken per request
===================  =========  =======  ==============================
13809                100 %      0.00 %   206 ms
===================  =========  =======  ==============================

Controlloop component Setup

============  =========================================================  ===========================================  =========================
CONTAINER ID  IMAGE                                                      PORTS                                        NAMES
============  =========================================================  ===========================================  =========================
a9cb0cd103cf  onap/policy-clamp-cl-runtime:latest                        6969/tcp                                     policy-clamp-cl-runtime
886e572b8438  onap/policy-clamp-cl-pf-ppnt:latest                        6973/tcp                                     policy-clamp-cl-pf-ppnt
035707b1b95f  nexus3.onap.org:10001/onap/policy-api:latest               6969/tcp                                     policy-api
d34204f95ff3  onap/policy-clamp-cl-http-ppnt:latest                      6971/tcp                                     policy-clamp-cl-http-ppnt
4470e608c9a8  onap/policy-clamp-cl-k8s-ppnt:latest                       6972/tcp, 8083/tcp                           policy-clamp-cl-k8s-ppnt
62229d46b79c  nexus3.onap.org:10001/onap/policy-models-simulator:latest  3905/tcp, 6666/tcp, 6668-6670/tcp, 6680/tcp  simulator
efaf0ca5e1f0  nexus3.onap.org:10001/mariadb:10.5.8                       3306/tcp                                     mariadb
============  =========================================================  ===========================================  =========================

JMeter Screenshot

_images/cl-s3p-performance-result-jmeter.png

Running the Pairwise Tests

The following links contain instructions on how to run the pairwise tests. These may be helpful for developers to check that the Policy Framework works in a full ONAP deployment.

CLAMP <-> Policy Core

The pairwise testing is executed against a default ONAP installation in OOM. The CLAMP control loop interacts with the Policy Framework to create and deploy policies. This test verifies that the interaction between policy and controlloop works as expected.

General Setup

The kubernetes installation allocated all policy components across multiple worker node VMs. The worker VM hosting the policy components has the following spec:

  • 16GB RAM

  • 8 VCPU

  • 160GB Ephemeral Disk

The ONAP components used during the pairwise tests are:

  • CLAMP control loop runtime, policy participant, kubernetes participant.

  • DMaaP for the communication between Control loop runtime and participants.

  • Policy API to create (and delete at the end of the tests) policies for each scenario under test.

  • Policy PAP to deploy (and undeploy at the end of the tests) policies for each scenario under test.

  • Policy Gui for instantiation and commissioning of control loops.

Testing procedure

The test set focused on the following use cases:

  • Creation/Deletion of policies

  • Deployment/Undeployment of policies

Creation of the Control Loop:

A Control Loop is created by commissioning a Tosca template with Control loop definitions and instantiating the Control Loop with the state “UNINITIALISED”.

  • Upload a TOSCA template from the POLICY GUI. The definitions include a policy participant and a control loop element that creates and deploys the required policies. Sample Tosca template

    _images/cl-commission.png

    Verification: The template is commissioned successfully without errors.

  • Instantiate the commissioned Control loop from the Policy Gui under ‘Instantiation Management’.

    _images/create-instance.png

    Update instance properties of the Control Loop Elements if required.

    _images/update-instance.png

    Verification: The control loop is created with default state “UNINITIALISED” without errors.

    _images/cl-instantiation.png
Creation of policies:

The Control Loop state is changed from “UNINITIALISED” to “PASSIVE” from the Policy Gui. Verify via the POLICY API endpoint that the policy types defined in the TOSCA template have been created.

_images/cl-passive.png

Verification:

  • The policy types defined in the TOSCA template are created by the policy participant and listed in the Policy API. Policy API endpoint: <https://<POLICY-API-IP>/policy/api/v1/policytypes>

  • The overall state of the Control Loop is changed to “PASSIVE” in the Policy Gui.

_images/cl-create.png
Deployment of policies:

The Control Loop state is changed from “PASSIVE” to “RUNNING” from the Policy Gui.

_images/cl-running.png

Verification:

  • The policy participant deploys the policies of Tosca Control loop elements in Policy PAP for all the pdp groups. Policy PAP endpoint: <https://<POLICY-PAP-IP>/policy/pap/v1/pdps>

  • The overall state of the Control Loop is changed to “RUNNING” in the Policy Gui.

_images/cl-running-state.png
Deletion of Policies:

The Control Loop state is changed from “RUNNING” to “PASSIVE” from the Policy Gui.

Verification:

  • The policy participant deletes the created policy types, which can be verified on the Policy API. The policy types created as part of the control loop should no longer be listed on the Policy API. Policy API endpoint: <https://<POLICY-API-IP>/policy/api/v1/policytypes>

  • The overall state of the Control Loop is changed to “PASSIVE” in the Policy Gui.

_images/cl-create.png
Undeployment of policies:

The Control Loop state is changed from “PASSIVE” to “UNINITIALISED” from the Policy Gui.

Verification:

  • The policy participant undeploys the policies of the control loop element from the pdp groups. The policies deployed as part of the control loop should not be listed on the Policy PAP. Policy PAP endpoint: <https://<POLICY-PAP-IP>/policy/pap/v1/pdps>

  • The overall state of the Control Loop is changed to “UNINITIALISED” in the Policy Gui.

_images/cl-uninitialised-state.png
CLAMP <-> Dcae

The pairwise testing is executed against a default ONAP installation in OOM. The CLAMP control loop interacts with DCAE to deploy dcaegen2 services like PMSH. This test verifies that the interaction between DCAE and the controlloop works as expected.

General Setup

The kubernetes installation allocated all policy components across multiple worker node VMs. The worker VM hosting the policy components has the following spec:

  • 16GB RAM

  • 8 VCPU

  • 160GB Ephemeral Disk

The ONAP components used during the pairwise tests are:

  • CLAMP control loop runtime, policy participant, kubernetes participant.

  • DCAE for running dcaegen2-service via kubernetes participant.

  • ChartMuseum service from platform, initialised with DCAE helm charts.

  • DMaaP for the communication between Control loop runtime and participants.

  • Policy Gui for instantiation and commissioning of control loops.

ChartMuseum Setup

The ChartMuseum helm chart from the platform is deployed in the same cluster. The chart server is then initialized with the helm charts of the dcaegen2-services by running the below script from the OOM repo. The script accepts as an argument the directory path where the helm charts are located.

#!/bin/sh
./oom/kubernetes/contrib/tools/registry-initialize.sh -d /oom/kubernetes/dcaegen2-services/charts/
Testing procedure

The test set focused on the following use cases:

  • Deployment and Configuration of DCAE microservice PMSH

  • Undeployment of PMSH

Creation of the Control Loop:

A Control Loop is created by commissioning a Tosca template with Control loop definitions and instantiating the Control Loop with the state “UNINITIALISED”.

  • Upload a TOSCA template from the POLICY GUI. The definitions include a kubernetes participant and control loop elements that deploy and configure a microservice in the kubernetes cluster. The control loop element for the kubernetes participant includes the helm chart information for the DCAE microservice, and the element for the Http Participant includes the configuration entity for the microservice. Sample Tosca template

    _images/cl-commission.png

    Verification: The template is commissioned successfully without errors.

  • Instantiate the commissioned Control loop definitions from the Policy Gui under ‘Instantiation Management’.

    _images/create-instance.png

    Update instance properties of the Control Loop Elements if required.

    _images/update-instance.png

    Verification: The control loop is created with default state “UNINITIALISED” without errors.

    _images/cl-instantiation.png
Deployment and Configuration of DCAE microservice (PMSH):

The Control Loop state is changed from “UNINITIALISED” to “PASSIVE” from the Policy Gui. The kubernetes participant deploys the PMSH helm chart from the DCAE chartMuseum server.

_images/cl-passive.png

Verification:

  • The DCAE service PMSH is deployed into the kubernetes cluster and the PMSH pods are in the RUNNING state. helm ls -n <namespace> lists the helm deployment of the dcaegen2 service PMSH; kubectl get pod -n <namespace> shows the PMSH pods deployed, up and running.

  • The subscription configuration for the PMSH microservice from the TOSCA definitions is updated in the Consul server. The configuration can be verified on the Consul server UI: http://<CONSUL-SERVER_IP>/ui/#/dc1/kv/

  • The overall state of the Control Loop is changed to “PASSIVE” in the Policy Gui.

_images/cl-create.png
Undeployment of DCAE microservice (PMSH):

The Control Loop state is changed from “PASSIVE” to “UNINITIALISED” from the Policy Gui.

_images/cl-uninitialise.png

Verification:

  • The kubernetes participant uninstalls the DCAE PMSH helm chart from the kubernetes cluster. The pods are removed from the cluster.

  • The overall state of the Control Loop is changed to “UNINITIALISED” in the Policy Gui.

_images/cl-uninitialised-state.png
Policy <-> CDS

The pairwise testing is executed against a default ONAP installation as per the OOM charts. The Apex-PDP or Drools-PDP engine interacts with CDS to execute a control loop action. This test verifies the interaction between Policy and CDS to make sure the contract works as expected.

General Setup

The kubernetes installation will allocate all ONAP components across multiple worker node VMs. The normal worker VM hosting ONAP components has the following spec:

  • 16GB RAM

  • 8 VCPU

  • 160GB Ephemeral Disk

The ONAP components used during the pairwise tests are:

  • AAI for creating dummy VNF & PNF entities for testing purposes.

  • CDS for publishing the blueprints & triggering the actions.

  • DMaaP for the communication between components.

  • Policy API to perform CRUD of policies.

  • Policy PAP to perform runtime administration (deploy/undeploy/status/statistics/etc).

  • Policy Apex-PDP to execute policies for both VNF & PNF scenarios.

  • Policy Drools-PDP to execute policies for both VNF & PNF scenarios.

  • Policy Xacml-PDP to execute decisions for guard requests.

Testing procedure

The test set is focused on the following use cases:

  • End to end testing of a sample VNF based policy using Apex-PDP & Drools-PDP.

  • End to end testing of a sample PNF based policy using Apex-PDP & Drools-PDP.

Creation of VNF & PNF in AAI

In order for the PDP engines to fetch resource details from AAI during runtime execution, we need to create dummy VNF & PNF entities in AAI. In a real control loop flow, the entities in AAI will either be created during the orchestration phase or provisioned in AAI separately.

Download and execute the steps in the postman collection for creating the entities along with their dependencies. The steps need to be performed sequentially, one after another; no input is required from the user.

Create VNF & PNF in AAI

Make sure to skip the delete VNF & PNF steps.

Publish Blueprints in CDS

In order for PDP engines to trigger an action in CDS during runtime execution, we need to publish relevant blueprints in CDS.

Download the zip files containing the blueprints for the VNF & PNF specific actions.

VNF Test CBA PNF Test CBA

Download and execute the steps in the postman collection for publishing the blueprints in CDS. In the enrich & publish CBA step, provide the previously downloaded zip files one by one. The execute steps are provided to verify that the blueprints work as expected.

Publish Blueprints in CDS

Make sure to skip the delete CBA step.

Apex-PDP VNF & PNF testing

The postman collection provided below gives an end-to-end testing experience of the apex-pdp engine, covering both VNF & PNF scenarios. The steps covered in the postman collection are:

  • Create & Verify VNF & PNF policies as per policy type supported by apex-pdp.

  • Deploy both VNF & PNF policies to apex-pdp engine.

  • Query PdpGroup at multiple stages to verify current set of policies deployed.

  • Fetch policy status at multiple stages to verify policy deployment & undeployment status.

  • Fetch policy audit information at multiple stages to verify policy deployment & undeployment operations.

  • Fetch PDP Statistics at multiple stages to verify deployment, undeployment & execution counts.

  • Send onset events to DMaaP for triggering policies to test both success & failure scenarios.

  • Read policy notifications from DMaaP to verify policy execution.

  • Undeploy both VNF & PNF policies from apex-pdp engine.

  • Delete both VNF & PNF policies at the end.

Download and execute the steps in the postman collection. The steps need to be performed sequentially, one after another; no input is required from the user.

Apex-PDP VNF & PNF Testing

Make sure to wait for 2 minutes (the default heartbeat interval) to verify PDP Statistics.

Drools-PDP VNF & PNF testing

The postman collection provided below gives an end-to-end testing experience of the drools-pdp engine, covering both VNF & PNF scenarios. The steps covered in the postman collection are:

  • Create & Verify VNF & PNF policies as per policy type supported by drools-pdp.

  • Deploy both VNF & PNF policies to drools-pdp engine.

  • Query PdpGroup at multiple stages to verify current set of policies deployed.

  • Fetch policy status at multiple stages to verify policy deployment & undeployment status.

  • Fetch policy audit information at multiple stages to verify policy deployment & undeployment operations.

  • Fetch PDP Statistics at multiple stages to verify deployment, undeployment & execution counts.

  • Send onset events to DMaaP for triggering policies to test both success & failure scenarios.

  • Read policy notifications from DMaaP to verify policy execution.

  • Undeploy both VNF & PNF policies from drools-pdp engine.

  • Delete both VNF & PNF policies at the end.

Download and execute the steps in the postman collection. The steps need to be performed sequentially, one after another; no input is required from the user.

Drools-PDP VNF & PNF Testing

Make sure to wait for 2 minutes (the default heartbeat interval) to verify PDP Statistics.

Delete Blueprints in CDS

Use the previously downloaded CDS postman collection to delete the blueprints published in CDS for testing.

Delete VNF & PNF in AAI

Use the previously downloaded AAI postman collection to delete the VNF & PNF entities created in AAI for testing.

Generating Swagger Documentation

The Policy Parent Integration POM contains a generateSwaggerDocs profile. This profile can be activated on any module that has a Swagger endpoint. When active, this profile creates a tarball in Nexus with the name <project-artifactId>-swagger-docs.tar.gz. The tarball contains the following files:

swagger/swagger.html
swagger/swagger.json
swagger/swagger.pdf

The profile is activated when:

  1. The following property is defined at the top of the pom.xml file for a module

    <!--  This property triggers generation of the Swagger documents -->
    <swagger.generation.phase>post-integration-test</swagger.generation.phase>
    

    See the CLAMP runtime POM for an example of the usage of this property.

  2. Unit tests are being executed in the build, in other words, when the skipTests flag is false.

You must create a unit test in your module that generates the following file:

src/test/resources/swagger/swagger.json

Typically, you do this by starting your REST endpoint in a unit test and issuing a REST call to get the Swagger API documentation. The code below is an example of such a test case.

import static org.assertj.core.api.Assertions.assertThat;

import java.io.File;
import java.nio.charset.Charset;
import org.apache.commons.io.FileUtils;
import org.junit.Test;
import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;

@Test
public void testSwaggerJson() throws Exception {
    ResponseEntity<String> httpsEntity = getRestTemplate()
            .getForEntity("https://localhost:" + this.httpsPort + "/restservices/clds/api-doc", String.class);
    assertThat(httpsEntity.getStatusCode()).isEqualTo(HttpStatus.OK);
    assertThat(httpsEntity.getBody()).contains("swagger");
    FileUtils.writeStringToFile(new File("target/swagger/swagger.json"), httpsEntity.getBody(),
            Charset.defaultCharset());
}

See this unit test case for the full example.

Running the DMaaP Simulator during Development

It is sometimes convenient to run the DMaaP simulator during development. You can run it from the command line using Maven or from within your IDE.

Running on the Command Line
  1. Check out the policy models repository

  2. Go to the models-sim/policy-models-simulators subdirectory in the policy-models repo

  3. Run the following Maven command:

    mvn exec:java  -Dexec.mainClass=org.onap.policy.models.simulators.Main -Dexec.args="src/test/resources/simParameters.json"
    
Running in Eclipse
  1. Check out the policy models repository

  2. Go to the models-sim/policy-models-simulators module in the policy-models repo

  3. Specify a run configuration using the class org.onap.policy.models.simulators.Main as the main class

  4. Specify an argument of src/test/resources/simParameters.json to the run configuration

  5. Run the configuration

Specifying a local configuration file

You may specify a local configuration file instead of src/test/resources/simParameters.json on the command line or as an argument in the run configuration in Eclipse:

{
  "dmaapProvider": {
    "name": "DMaaP simulator",
    "topicSweepSec": 900
  },
  "restServers": [
    {
      "name": "DMaaP simulator",
      "providerClass": "org.onap.policy.models.sim.dmaap.rest.DmaapSimRestControllerV1",
      "host": "localhost",
      "port": 3904,
      "https": false
    }
  ]
}

Guidelines for PDP-PAP interaction

A PDP (Policy Decision Point) is where the policy execution happens. Administrative actions, such as managing the PDPs and deploying or undeploying policies to them, are handled by PAP (Policy Administration Point). A PDP must follow certain behavior to be registered and functional in the Policy Framework. All communication between PAP and PDPs happens over DMaaP on the POLICY-PDP-PAP topic. The below diagram shows how a PDP interacts with PAP.

_images/PDP_PAP.svg

1. Start PDP

A PDP should be configured to start with the below information in its startup configuration file.

  • the pdpGroup to which the PDP should belong.

  • the DMaaP topic ‘POLICY-PDP-PAP’, which should be the source and sink for communicating with PAP.

2. PDP sends PDP_STATUS (registration message)

As soon as a PDP is up, it sends a registration message to the POLICY-PDP-PAP topic. Some of the information included in the message is:

  • pdpType the type of the PDP (apex/drools/xacml etc.).

  • pdpGroup to which the PDP should belong.

  • state the initial state of the PDP which is PASSIVE.

  • healthy whether the PDP is “HEALTHY” or not.

  • name a name that is unique to the PDP instance.

Sample PDP_STATUS Registration message (from APEX-PDP)
{
  "pdpType": "apex",
  "state": "PASSIVE",
  "healthy": "HEALTHY",
  "description": "Pdp Heartbeat",
  "statistics": {
    ..... Omitted for brevity
  },
  "messageName": "PDP_STATUS",
  "requestId": "54926ad0-440f-4b40-9237-40ca754ad00d",
  "timestampMs": 1632325024286,
  "name": "apex-45c6b266-a5fa-4534-b22c-33c2f9a45d02",
  "pdpGroup": "defaultGroup"
}

3. PAP sends PDP_UPDATE message

On receiving the registration message from a PDP, PAP checks and assigns it to a subgroup under the group. If there are policies that were already deployed under the subgroup (e.g., previously deployed before the PDP restarted), then the policiesToBeDeployed are also sent along with the subgroup it is assigned to. PAP also sends the pdpHeartbeatIntervalMs, which is the time interval in which PDPs should send heartbeats to PAP.

Sample PDP_UPDATE message (for APEX-PDP)
{
  "source": "pap-56c8531d-5376-4e53-a820-6973c62bfb9a",
  "pdpHeartbeatIntervalMs": 120000,
  "policiesToBeDeployed": [],
  "messageName": "PDP_UPDATE",
  "requestId": "3534e54f-4432-4c68-81c8-a6af07e59fb2",
  "timestampMs": 1632325037040,
  "name": "apex-45c6b266-a5fa-4534-b22c-33c2f9a45d02",
  "pdpGroup": "defaultGroup",
  "pdpSubgroup": "apex"
}

4. PDP sends PDP_STATUS response to PDP_UPDATE

On receiving the PDP_UPDATE message from the DMaaP topic, the PDP first checks whether the message is intended for it. If so, it updates itself with the information in the PDP_UPDATE message from PAP, such as pdpSubgroup, pdpHeartbeatIntervalMs and policiesToBeDeployed (if any). After handling the PDP_UPDATE message, the PDP sends a response message back to PAP with the current status of the PDP along with the result of the PDP_UPDATE operation.

Sample PDP_STATUS response message (from APEX-PDP)
{
  "pdpType": "apex",
  "state": "PASSIVE",
  "healthy": "HEALTHY",
  "description": "Pdp status response message for PdpUpdate",
  "policies": [],
  "statistics": {
    ..... Omitted for brevity
  },
  "response": {
    "responseTo": "3534e54f-4432-4c68-81c8-a6af07e59fb2",
    "responseStatus": "SUCCESS",
    "responseMessage": "Pdp update successful."
  },
  "messageName": "PDP_STATUS",
  "requestId": "e3c72783-4e91-4cb5-8140-e4ac0630706d",
  "timestampMs": 1632325038075,
  "name": "apex-45c6b266-a5fa-4534-b22c-33c2f9a45d02",
  "pdpGroup": "defaultGroup",
  "pdpSubgroup": "apex"
}

5. PAP sends PDP_STATE_CHANGE message

PAP sends the PDP_STATE_CHANGE message to PDPs to change the state from PASSIVE to ACTIVE or from ACTIVE to PASSIVE. When a PDP is in PASSIVE state, policy execution does not happen. All PDPs start up in PASSIVE state, and they can be changed to ACTIVE/PASSIVE using PAP. After registration is complete, PAP makes a PDP ACTIVE by default.

Sample PDP_STATE_CHANGE message
{
  "source": "pap-56c8531d-5376-4e53-a820-6973c62bfb9a",
  "state": "ACTIVE",
  "messageName": "PDP_STATE_CHANGE",
  "requestId": "90eada6d-bb98-4750-a4e1-b439cb5e041d",
  "timestampMs": 1632325037040,
  "name": "apex-45c6b266-a5fa-4534-b22c-33c2f9a45d02",
  "pdpGroup": "defaultGroup",
  "pdpSubgroup": "apex"
}

6. PDP sends PDP_STATUS response to PDP_STATE_CHANGE

The PDP updates its state as per the PDP_STATE_CHANGE received from PAP. When a PDP is changed to ACTIVE, any policies that are already pushed to the PDP start executing and processing events as per the policies deployed. If no policies are running in a PDP, then it waits in ACTIVE state, ready to execute any policies as and when they are pushed to it from PAP. After handling the PDP_STATE_CHANGE message, the PDP sends a response message back to PAP with the current status of the PDP along with the result of the PDP_STATE_CHANGE operation.

Sample PDP_STATUS response message (from APEX-PDP)
{
  "pdpType": "apex",
  "state": "ACTIVE",
  "healthy": "HEALTHY",
  "description": "Pdp status response message for PdpStateChange",
  "policies": [],
  "statistics": {
    ..... Omitted for brevity
  },
  "response": {
    "responseTo": "90eada6d-bb98-4750-a4e1-b439cb5e041d",
    "responseStatus": "SUCCESS",
    "responseMessage": "State changed to active. No policies are running."
  },
  "messageName": "PDP_STATUS",
  "requestId": "8a88806c-4d3e-4c80-8048-dc85d4bb75dd",
  "timestampMs": 1632325043068,
  "name": "apex-45c6b266-a5fa-4534-b22c-33c2f9a45d02",
  "pdpGroup": "defaultGroup",
  "pdpSubgroup": "apex"
}

7. PDP sends PDP_STATUS Heartbeat messages

A PDP has to send heartbeat messages to PAP periodically with its current status information. PAP receives this information and makes sure its records are up to date. In case of any mismatch with the data in the database, PAP sends out a PDP_UPDATE message to update the PDP. PAP considers a PDP expired if three consecutive heartbeats are missed, and removes the PDP instance details from the database (a sketch of this rule follows the sample message below).

Sample PDP_STATUS heartbeat message (from APEX-PDP)
{
  "pdpType": "apex",
  "state": "ACTIVE",
  "healthy": "HEALTHY",
  "description": "Pdp Heartbeat",
  "policies": [],
  "statistics": {
    ..... Omitted for brevity
  },
  "messageName": "PDP_STATUS",
  "requestId": "e3c72783-4e91-4cb5-8140-e4ac0630706d",
  "timestampMs": 1632325038075,
  "name": "apex-45c6b266-a5fa-4534-b22c-33c2f9a45d02",
  "pdpGroup": "defaultGroup",
  "pdpSubgroup": "apex"
}
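The expiry rule above can be illustrated in code. The sketch below shows the arithmetic only; the class and method names are hypothetical and do not reflect PAP's actual implementation.

/** Hypothetical sketch of the heartbeat-expiry rule described above; not PAP's actual code. */
public class HeartbeatTracker {
    private static final int ALLOWED_MISSED_HEARTBEATS = 3;

    private final long heartbeatIntervalMs; // pdpHeartbeatIntervalMs sent in PDP_UPDATE
    private long lastHeartbeatMs;           // timestampMs of the last PDP_STATUS received

    public HeartbeatTracker(long heartbeatIntervalMs, long nowMs) {
        this.heartbeatIntervalMs = heartbeatIntervalMs;
        this.lastHeartbeatMs = nowMs;
    }

    /** Records a PDP_STATUS heartbeat. */
    public void onHeartbeat(long timestampMs) {
        lastHeartbeatMs = timestampMs;
    }

    /** True once three consecutive heartbeat intervals pass without a PDP_STATUS. */
    public boolean isExpired(long nowMs) {
        return nowMs - lastHeartbeatMs > ALLOWED_MISSED_HEARTBEATS * heartbeatIntervalMs;
    }
}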

8. Deploy/Undeploy Policy using PAP

Policies can be deployed or undeployed using the PAP APIs. PAP fetches the policies to be deployed from the database and sends the whole policy list under the policiesToBeDeployed field. In case of undeployment, PAP sends the list of policies with their name and version under policiesToBeUndeployed in the PDP_UPDATE message.

9. PAP sends PDP_UPDATE message with policiesToBeDeployed/Undeployed

PAP sends a PDP_UPDATE message with information about policies to be deployed and undeployed. If there are some policies that are already deployed, then only the new ones are sent under the policiesToBeDeployed field.

Sample PDP_UPDATE message (from PAP)
{
  "source": "pap-56c8531d-5376-4e53-a820-6973c62bfb9a",
  "pdpHeartbeatIntervalMs": 120000,
  "policiesToBeDeployed": [
    {
      "type": "onap.policies.native.Apex",
      "type_version": "1.0.0",
      "properties": {
      ..... Omitted for brevity
      },
      "name": "onap.policies.apex.Simplecontrolloop",
      "version": "1.0.0",
      "metadata": {
        "policy-id": "onap.policies.apex.Simplecontrolloop",
        "policy-version": "1.0.0"
      }
    }
  ],
  "policiesToBeUndeployed":[],
  "messageName": "PDP_UPDATE",
  "requestId": "3534e54f-4432-4c68-81c8-a6af07e59fb2",
  "timestampMs": 1632325037040,
  "name": "apex-45c6b266-a5fa-4534-b22c-33c2f9a45d02",
  "pdpGroup": "defaultGroup",
  "pdpSubgroup": "apex"
}

10. PDP sends PDP_STATUS response to PDP_UPDATE

All policies to be deployed/undeployed are updated in the PDP engine. Policies that are part of policiesToBeDeployed are updated to the engine, and all policies under policiesToBeUndeployed are removed from the PDP engine. Once the processing of PDP_UPDATE message is complete, PDP sends back a PDP_STATUS message with the updated status, the current list of policies that are in the engine, and the result of the PDP_UPDATE operation.

Sample PDP_STATUS response message (from APEX-PDP)
{
  "pdpType": "apex",
  "state": "ACTIVE",
  "healthy": "HEALTHY",
  "description": "Pdp status response message for PdpUpdate",
  "policies": [
    {
      "name": "onap.policies.apex.Simplecontrolloop",
      "version": "1.0.0"
    }
  ],
  "statistics": {
    "pdpInstanceId": "apex-45c6b266-a5fa-4534-b22c-33c2f9a45d02",
    "timeStamp": "2021-09-22T15:37:18.075436Z",
    "pdpGroupName": "defaultGroup",
    "pdpSubGroupName": "apex",
    "policyExecutedCount": 0,
    "policyExecutedSuccessCount": 0,
    "policyExecutedFailCount": 0,
    "policyDeployCount": 1,
    "policyDeploySuccessCount": 1,
    "policyDeployFailCount": 0,
    "policyUndeployCount": 0,
    "policyUndeploySuccessCount": 0,
    "policyUndeployFailCount": 0
  },
  "response": {
    "responseTo": "4534e54f-4432-4c68-81c8-a6af07e59fb2",
    "responseStatus": "SUCCESS",
    "responseMessage": "Apex engine started. Deployed policies are: onap.policies.apex.Simplecontrolloop:1.0.0"
  },
  "messageName": "PDP_STATUS",
  "requestId": "e3c72783-4e91-4cb5-8140-e4ac0630706d",
  "timestampMs": 1632325038075,
  "name": "apex-45c6b266-a5fa-4534-b22c-33c2f9a45d02",
  "pdpGroup": "defaultGroup",
  "pdpSubgroup": "apex"
}

More details about the messages used for PDP-PAP internal communication and their structure can be found in The Internal Policy Framework PAP-PDP API.

Policy Platform Actor Development Guidelines

Actor Design Overview

Intro

An actor/operation is any ONAP component that an operational policy can use to control a VNF/VM/etc. during execution of a control loop, when a Control Loop Event is triggered.

_images/topview.png

An Actor Service object contains one or more Actor objects, which are found and created using ServiceLoader. Each Actor object, in turn, creates one or more Operator objects. All of these components, the Actor Service, the Actor, and the Operator are typically singletons that are created once, at start-up (or on the first request). The Actor Service includes several methods, configure(), start(), and stop(), which are cascaded to the Actors and then to the Operators.

Operation objects, on the other hand, are not singletons; a new Operation object is created for each operation that an application wishes to perform. For instance, if an application wishes to use the “SO” Actor to add two new modules, then two separate Operation objects would be created, one for each module.

Actors are configured by invoking the Actor Service configure() method, passing it a set of properties. The configure() method extracts the properties that are relevant to each Actor and passes them to the Actor’s configure() method. Similarly, the Actor’s configure() method extracts the properties that are relevant to each Operator and passes them to the Operator’s configure() method. Note: Actors typically extract “default” properties from their respective property sets and include those when invoking each Operator’s configure() method.

Once the Actor Service has been configured, it can be started via start(). It will then continue to run until no longer needed, at which point stop() can be invoked.

Note: it is possible to create separate instances of an Actor Service, each with its own set of properties. In that case, each Actor Service will get its own instances of Actors and Operators.
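The lifecycle described above can be summarized in code. This is a sketch that assumes the ActorService class from the policy-models actor framework; the property map contents are omitted and would normally be decoded from the application's configuration.

import java.util.Map;

import org.onap.policy.controlloop.actorserviceprovider.ActorService;

/** Sketch of the configure/start/stop cascade described above. */
public class ActorServiceLifecycle {
    public static void main(String[] args) {
        // Actors (and, through them, Operators) are discovered via ServiceLoader.
        ActorService actorService = new ActorService();

        // Each top-level key names an actor; its value is that actor's property set.
        // Contents omitted here; normally decoded from the application configuration.
        Map<String, Object> properties = Map.of();

        actorService.configure(properties); // cascaded to Actors, then to Operators
        actorService.start();               // runs until no longer needed
        actorService.stop();
    }
}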

Components

This section describes things to consider when creating a new Actor/Operator.

Actor
  • The constructor should use addOperator() to add operators (a minimal sketch follows this list)

  • By convention, the name of the actor is specified by a static field, “NAME”

  • An actor is registered via the Java ServiceLoader by including its jar on the classpath and adding its class name to this file, typically contained within the jar:

    onap.policy.controlloop.actorServiceProvider.spi

  • Actor loading is ordered, so that those having a lower (i.e., earlier) sequence number are loaded first. If a later actor has the same name as one that has already been loaded, a warning is generated and the later actor is discarded. This makes it possible for an organization to override an actor implementation

  • An implementation for a specific Actor will typically be derived from HttpActor or BidirectionalTopicActor, depending on whether it is HTTP/REST-based or DMaaP-topic-based. These super classes provide most of the functionality needed to configure the operators, extracting operator-specific properties and adding default, actor-level properties
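A minimal sketch of these conventions is shown below. The names MyActor and MyHttpOperation are hypothetical, and the exact HttpActor constructor and operator-registration signatures in policy-models may differ from what is shown here.

import org.onap.policy.controlloop.actorserviceprovider.Operator;
import org.onap.policy.controlloop.actorserviceprovider.impl.HttpActor;
import org.onap.policy.controlloop.actorserviceprovider.parameters.HttpActorParams;

/**
 * Hypothetical REST-based actor following the conventions above (sketch only).
 * Registered via ServiceLoader by listing this class's fully qualified name in
 * the file onap.policy.controlloop.actorServiceProvider.spi within the jar.
 */
public class MyActor extends HttpActor<HttpActorParams> {

    // By convention, the actor name is held in a static NAME field.
    public static final String NAME = "MyActor";

    public MyActor() {
        super(NAME, HttpActorParams.class);

        // The constructor registers this actor's operators; construction of the
        // HttpOperator for "MyHttpOperation" is omitted from this sketch.
        addOperator(makeMyHttpOperator());
    }

    private Operator makeMyHttpOperator() {
        throw new UnsupportedOperationException("sketch only");
    }
}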

Operator
  • Typically, developers don’t have to implement any Operator classes; they just use HttpOperator or BidirectionalTopicOperator

Operation
  • Most operations require guard checks to be performed first. Thus, at a minimum, they should override startPreprocessorAsync() and have it invoke startGuardAsync() (see the sketch after this list)

  • In addition, if the operation depends on data being previously gathered and placed into the context, then it should override startPreprocessorAsync() and have it invoke obtain(). Note: obtain() and the guard can be performed in parallel by using the allOf() method. If the guard happens to depend on the same data, then it will block until the data is available, and then continue; the invoker need not deal with the dependency

  • Subclasses will typically derive from HttpOperation or BidirectionalTopicOperation, though if neither of those suffice, then they can extend OperationPartial, or even just implement a raw Operation. OperationPartial is the super class of HttpOperation and BidirectionalTopicOperation and provides most of the methods used by the Operation subclasses, including a number of utility methods (e.g., cancellable allOf)

  • Operation subclasses should be written so as to avoid any blocking I/O. If this proves too difficult, then the implementation should override doOperation() instead of startOperationAsync()

  • Operations return a “future” when start() is invoked. Typically, if the “future” is canceled, then any outstanding operation should be canceled. For instance, HTTP connections should be closed without waiting for a response

  • If an operation sets the outcome to “FAILURE”, it will be automatically retried; other failure types are not retried
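For example, an Operation that only needs a guard check might override startPreprocessorAsync() as in the fragment below. This is a sketch of a method inside an Operation subclass; CompletableFuture comes from java.util.concurrent, OperationOutcome from the actor framework, and the signature follows OperationPartial as described above, so it may differ between framework versions.

// Sketch: run the guard check before the operation proper, per the convention
// above. If the operation also needs data gathered via obtain(), the two can
// be combined with allOf() so they run in parallel.
@Override
protected CompletableFuture<OperationOutcome> startPreprocessorAsync() {
    return startGuardAsync();
}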

ControlLoopParams
  • Identifies the operation to be performed

  • Includes timeout and retry information, though the actors typically provide default values if they are not specified in the parameters

  • Includes the event “context”

  • Includes “Policy” fields (e.g., “actor” and “operation”)

Context (aka, Event Context)
  • Includes:

    • the original onset event

    • enrichment data associated with the event

    • results of A&AI queries

XxxParams and XxxConfig
  • XxxParams objects are POJOs into which the property Maps are decoded when configuring actors or operators

  • XxxConfig objects contain a single Operator’s (or Actor’s) configuration information, based on what was in the XxxParams. For instance, the HttpConfig contains a reference to the HttpClient that is used to perform HTTP operations, while the associated HttpParams just contains the name of the HttpClient. XxxConfig objects are shared by all operations created by a single Operator. As a result, it should not contain any data associated with an individual operation; such data should be stored within the Operation object, itself

Junit tests
  • Operation Tests may choose to subclass from BasicHttpOperation, which provides some supporting utilities and mock objects

  • Should include a test to verify that the Actor, and possibly each Operator, can be retrieved via an Actor Service (see the sketch after this list)

  • Tests with an actual REST server are performed within HttpOperationTest, so need not be repeated in subclasses. Instead, they can catch the callback to the get(), post(), etc., methods and pass the rawResponse to it there. That being said, a number of actors spin up a simulator to verify end-to-end request/response processing
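A sketch of such a retrieval test is shown below; the test and actor names are hypothetical, and the getActor() accessor is assumed from the actor framework.

import static org.junit.Assert.assertNotNull;

import org.junit.Test;
import org.onap.policy.controlloop.actorserviceprovider.ActorService;

/** Sketch: verify the actor is registered and retrievable (hypothetical names). */
public class MyActorTest {

    @Test
    public void testActorIsRegistered() {
        // ServiceLoader should have discovered MyActor when the Actor Service was built.
        assertNotNull(new ActorService().getActor(MyActor.NAME));
    }
}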

Clients (e.g., drools-applications)
  • When using callbacks, a client may want to use the isFor() method to verify that the outcome is for the desired operation, as callbacks are invoked with the outcome of all operations performed, including any preprocessor steps

Flow of operation
  • PDP:

    • Populates a ControlLoopParams using ControlLoopParams.builder() (a sketch follows this list)

    • Invokes start() on the ControlLoopParams

  • ControlLoopParams:

    • Finds the actor/operator

    • Uses it to invoke buildOperation()

    • Invokes start() on the Operation

  • Operation:

    • start() invokes startPreprocessorAsync() and then startOperationAsync()

    • Exceptions that occur while constructing the operation pipeline propagate back to the client that invoked start()

    • Exceptions that occur while executing the operation pipeline are caught and turned into an OperationOutcome whose result is FAILURE_EXCEPTION. In addition, the “start” callback (i.e., specified via the ControlLoopParams) will be invoked, if it hasn’t been invoked yet, and then the “complete” callback will be invoked

    • By default, startPreprocessorAsync() does nothing, thus most subclasses will override it to:

      • Do any A&AI query that is needed (beyond enrichment, which is already available in the Context)

      • Use Context obtain() to request the data asynchronously

      • Invoke startGuardAsync()

    • By default, startGuardAsync() will simply perform a guard check, passing it the “standard” payload

    • Subclasses may override makeGuardPayload() to add extra fields to the payload (e.g., some SO operations add the VF count)

    • If any preprocessing step fails, then the “start” and “complete” callbacks will be invoked to indicate a failure of the operation as a whole. Otherwise, the flow will continue on to startOperationAsync(), after the “start” callback is invoked

    • startOperationAsync() will perform whatever needs to be done to start the operation

    • Once it completes, the “complete” callback will be invoked with the outcome of the operation. startOperationAsync() should not invoke the callback, as that is handled automatically by OperationPartial, which is the superclass of most Operations
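Putting the flow together, a PDP-side invocation looks roughly like the sketch below. The builder fields shown are assumptions based on the description above; the A&AI examples later in this document use the name ControlLoopOperationParams for the same structure, and CompletableFuture comes from java.util.concurrent.

// Sketch of the PDP-side flow described above; builder fields are illustrative.
ControlLoopOperationParams params = ControlLoopOperationParams.builder()
        .actorService(actorService)       // the configured Actor Service
        .actor("AAI")                     // "Policy" fields: the actor ...
        .operation("CustomQuery")         // ... and the operation
        .context(eventContext)            // the event "context"
        .startCallback(outcome -> System.out.println("started: " + outcome))
        .completeCallback(outcome -> System.out.println("complete: " + outcome))
        .build();

// start() finds the actor/operator, builds the Operation, and starts it.
CompletableFuture<OperationOutcome> future = params.start();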

A&AI Actor

Overview of A&AI Actor

ONAP Policy Framework enables various actors, several of which require additional data to be gathered from A&AI via a REST call. Previously, the request was built, and the REST call made, by the application. However, A&AI queries have now been implemented using the new Actor framework.

Each operation supported by the actor is associated with its own java class, which is responsible for populating the request structure and invoking the REST service. The class hierarchy is shown below.

_images/classHierarchy.png

Currently, the following operations are supported:

  • Tenant

  • Pnf

  • CustomQuery

One thing that sets the A&AI Actor implementation apart from the other Actor implementations is that it is typically used to gather data for input to the other actors. Consequently, when an A&AI operation completes, it places its response into the properties field of the context, which is passed via the ControlLoopOperationParams. The names of the keys within the properties field are typically of the form “AAI.<operation>.<targetEntity>”, where “operation” is the name of the operation and “targetEntity” is the targetEntity passed via the ControlLoopOperationParams. For example, the response for the Tenant query for a target entity named “ozVserver” would be stored as a property named “AAI.Tenant.ozVserver”.

On the other hand, as there is only one “custom query” for a given ONSET, the Custom Query operation deviates from this, in that it always stores its response using the key, “AAI.AaiCqResponse”.

Request

Most of the A&AI operations use “GET” requests and thus do not populate a request structure. However, for those that do, the request structure is described in the table below.

Note: the Custom Query Operation requires tenant data, thus it performs a Tenant operation before sending its request. The tenant data is gathered for the vserver whose name is found in the “vserver.vserver-name” field of the enrichment data provided by DCAE with the ONSET event.

==========  ======  ====================================================================================
Field Name  Type    Description
==========  ======  ====================================================================================
start       string  Custom Query: extracted from the result-data[0].resource-link field of the Tenant
                    query response.
==========  ======  ====================================================================================

Examples

Suppose the ControlLoopOperationParams were populated as follows, with the tenant query having already been performed:

{
    "actor": "AAI",
    "operation": "CustomQuery",
    "context": {
        "enrichment": {
            "vserver.vserver-name": "Ete_vFWCLvFWSNK_7ba1fbde_0"
        },
        "properties": {
            "AAI.Tenant.Ete_vFWCLvFWSNK_7ba1fbde_0": {
                "result-data": [
                    {
                        "resource-type": "vserver",
                        "resource-link": "/aai/v15/cloud-infrastructure/cloud-regions/cloud-region/CloudOwner/RegionOne/tenants/tenant/3f2aaef74ecb4b19b35e26d0849fe9a2/vservers/vserver/6c3b3714-e36c-45af-9f16-7d3a73d99497"
                    }
                ]
            }
        }
    }
}

An example of a Custom Query request constructed by the actor using the above parameters, sent to the A&AI REST server:

{
  "start": "/aai/v15/cloud-infrastructure/cloud-regions/cloud-region/CloudOwner/RegionOne/tenants/tenant/3f2aaef74ecb4b19b35e26d0849fe9a2/vservers/vserver/6c3b3714-e36c-45af-9f16-7d3a73d99497",
  "query": "query/closed-loop"
}

An example response received from the A&AI REST service:

{
    "results": [
        {
            "vserver": {
                "vserver-id": "f953c499-4b1e-426b-8c6d-e9e9f1fc730f",
                "vserver-name": "Ete_vFWCLvFWSNK_7ba1fbde_0",
                "vserver-name2": "Ete_vFWCLvFWSNK_7ba1fbde_0",
                "prov-status": "ACTIVE",
                "vserver-selflink": "http://10.12.25.2:8774/v2.1/41d6d38489bd40b09ea8a6b6b852dcbd/servers/f953c499-4b1e-426b-8c6d-e9e9f1fc730f",
                "in-maint": false,
                "is-closed-loop-disabled": false,
    ...
}
Configuration of the A&AI Actor

The following table specifies the fields that should be provided to configure the A&AI actor.

==========  ==================  =========================================================================
Field name  Type                Description
==========  ==================  =========================================================================
clientName  string              Name of the HTTP client to use to send the request to the A&AI REST
                                server.
timeoutSec  integer (optional)  Maximum time, in seconds, to wait for a response to be received from the
                                REST server. Defaults to 90s.
path        string              URI appended to the URL. This field only applies to individual
                                operations; it does not apply at the actor level. Note: the path should
                                not include a leading or trailing slash.
==========  ==================  =========================================================================

The individual operations are configured using these same field names. However, all of them, except the path, are optional, as they inherit their values from the corresponding actor-level fields.

APPC Legacy Actor

Overview of APPC Legacy Actor

ONAP Policy Framework enables APPC Legacy as one of the supported actors. APPC Legacy uses a single DMaaP topic for both requests and responses. As a result, the actor implementation must cope with the fact that requests may appear on the same stream from which it is reading responses; thus it must use the message content to distinguish responses from requests. This particular implementation uses the Status field to identify responses.

In addition, APPC may generate more than one response for a particular request, the first response simply indicating that the request was accepted, while the second response indicates completion of the request. For each request, a unique sub-request ID is generated. This is used to match the received responses with the published requests.

Each operation supported by the actor is associated with its own java class, which is responsible for populating the request structure appropriately. The operation-specific classes are all derived from the AppcOperation class, which is, itself, derived from BidirectionalTopicOperation.

Request
CommonHeader

The “CommonHeader” field in the request is built by policy.

=========================  ======  ======================================================================
“CommonHeader” field name  type    Description
=========================  ======  ======================================================================
SubRequestID               string  Generated by Policy. Is a UUID and used internally by policy to match
                                   the response with the request.
RequestID                  string  Inserted by Policy. Maps to the UUID sent by DCAE i.e. the ID used
                                   throughout the closed loop lifecycle to identify a request.
=========================  ======  ======================================================================

Action

The “Action” field uniquely identifies the operation to perform. Currently, only “ModifyConfig” is supported.

Payload

====================  ======  ==========================================================================
“Payload” field name  type    Description
====================  ======  ==========================================================================
generic-vnf.vnf-id    string  The ID of the VNF selected from the A&AI Custom Query response using the
                              Target resource ID specified in the ControlLoopOperationParams.
====================  ======  ==========================================================================

Additional fields are populated based on the payload specified within the ControlLoopOperationParams. Each value found within the payload is treated as a JSON string and is decoded into a POJO, which is then inserted into the request payload using the same key.

Examples

Suppose the ControlLoopOperationParams were populated as follows:

{
    "actor": "APPC",
    "operation": "ModifyConfig",
    "target": {
        "resourceID": "2246ebc9-9b9f-42d0-a5e4-0248324fb884"
    },
    "payload": {
        "my-key-A": "{\"input\":\"hello\"}",
        "my-key-B": "{\"output\":\"world\"}"
    },
    "context": {
        "event": {
            "requestId": "c7c6a4aa-bb61-4a15-b831-ba1472dd4a65"
        },
        "cqdata": {
            "generic-vnf": [
                {
                    "vnfId": "my-vnf",
                    "vf-modules": [
                        {
                            "model-invariant-id": "2246ebc9-9b9f-42d0-a5e4-0248324fb884"
                        }
                    ]
                }
            ]
        }
    }
}

An example of a request constructed by the actor using the above parameters, published to the APPC topic:

{
  "CommonHeader": {
    "TimeStamp": 1589400050910,
    "APIver": "1.01",
    "RequestID": "c7c6a4aa-bb61-4a15-b831-ba1472dd4a65",
    "SubRequestID": "ee3f2dc0-a2e0-4ae8-98c3-478c784b8eb5",
    "RequestTrack": [],
    "Flags": []
  },
  "Action": "ModifyConfig",
  "Payload": {
    "my-key-B": {
      "output": "world"
    },
    "my-key-A": {
      "input": "hello"
    },
    "generic-vnf.vnf-id": "my-vnf"
  }
}

An example initial response received from APPC on the same topic:

{
  "CommonHeader": {
    "TimeStamp": 1589400050923,
    "APIver": "1.01",
    "RequestID": "c7c6a4aa-bb61-4a15-b831-ba1472dd4a65",
    "SubRequestID": "ee3f2dc0-a2e0-4ae8-98c3-478c784b8eb5",
    "RequestTrack": [],
    "Flags": []
  },
  "Status": {
    "Code": 100,
    "Value": "ACCEPTED"
  }
}

An example final response received from APPC on the same topic:

{
  "CommonHeader": {
    "TimeStamp": 1589400050934,
    "APIver": "1.01",
    "RequestID": "c7c6a4aa-bb61-4a15-b831-ba1472dd4a65",
    "SubRequestID": "ee3f2dc0-a2e0-4ae8-98c3-478c784b8eb5",
    "RequestTrack": [],
    "Flags": []
  },
  "Status": {
    "Code": 400,
    "Value": "SUCCESS"
  }
}
Configuration of the APPC Legacy Actor

The following table specifies the fields that should be provided to configure the APPC Legacy actor.

===========  ==================  ===============================================================
Field name   type                Description
===========  ==================  ===============================================================
sinkTopic    string              Name of the topic to which the request should be published.
sourceTopic  string              Name of the topic from which the response should be read.
timeoutSec   integer (optional)  Maximum time, in seconds, to wait for a response to be
                                 received on the topic.
===========  ==================  ===============================================================

The individual operations are configured using these same field names. However, all of them are optional, as they inherit their values from the corresponding actor-level fields.

APPC LCM Actor

Overview of APPC LCM Actor

ONAP Policy Framework enables APPC as one of the supported actors. The APPC LCM Actor contains operations supporting both the LCM interface and the legacy interface. As such, this actor supersedes the APPC Legacy actor. Its sequence number is lower than the APPC Legacy actor’s sequence number, which ensures that it is loaded first.

APPC Legacy uses a single DMaaP topic for both requests and responses. The class(es) supporting this interface are described in APPC Legacy Actor. The APPC LCM Actor only supports the APPC Legacy operation, ModifyConfig.

The APPC LCM interface, on the other hand, uses two DMaaP topics, one to which requests are published, and another from which responses are received. Similar to the legacy interface, APPC LCM may generate more than one response for a particular request, the first response simply indicating that the request was accepted, while the second response indicates completion of the request.

For each request, a unique sub-request ID is generated. This is used to match the received responses with the published requests. (APPC LCM also has a “correlation-id” field, which could potentially be used to match the response to the request, but apparently APPC LCM has not implemented that capability yet.)

All APPC LCM operations are currently supported by a single java class, AppcLcmOperation, which is responsible for populating the request structure appropriately. This class is derived from BidirectionalTopicOperation.

The remainder of this discussion describes the operations that are specific to APPC LCM.

Request
CommonHeader

The “common-header” field in the request is built by policy.

==========================  ======  ====================================================================
“common-header” field name  type    Description
==========================  ======  ====================================================================
sub-request-id              string  Generated by Policy. Is a UUID and used internally by policy to
                                    match the response with the request.
request-id                  string  Inserted by Policy. Maps to the UUID sent by DCAE i.e. the ID used
                                    throughout the closed loop lifecycle to identify a request.
originator-id               string  Copy of the request-id.
==========================  ======  ====================================================================

Action

The “action” field uniquely identifies the operation to perform. Currently, the following operations are supported:

  • Restart

  • Rebuild

  • Migrate

  • ConfigModify

The valid operations are listed in AppcLcmConstants. These are the values that must be specified in the policy. However, before being placed into the “action” field, they are converted to camel case, stripping any hyphens and translating the first character to upper case, if it isn't already.
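A sketch of that transformation is shown below; it is illustrative only, not the framework's actual code.

// Sketch: "config-modify" -> "ConfigModify"; "Restart" passes through unchanged.
static String toActionName(String operation) {
    StringBuilder result = new StringBuilder();
    for (String part : operation.split("-")) {
        if (!part.isEmpty()) {
            result.append(Character.toUpperCase(part.charAt(0)))
                  .append(part.substring(1));
        }
    }
    return result.toString();
}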

Action Identifiers

Currently, the “action-identifiers” field contains only the VNF ID, which should be the targetEntity specified within the ControlLoopOperationParams.

Payload

The “payload” field is populated based on the payload specified within the ControlLoopOperationParams. Unlike the APPC Legacy operations, which inject POJOs into the “payload” field, the APPC LCM operations simply encode the entire parameter payload into a JSON string, and then place the encoded string into the “payload” field of the request.
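For instance, a parameter payload like the one in the example below would be serialized to a single JSON string before being placed into the request. This sketch uses Gson; the actual encoder used by the framework is an assumption here.

import java.util.Map;

import com.google.gson.Gson;

/** Sketch: the whole parameter payload becomes one JSON-encoded string. */
public class PayloadEncodingSketch {
    public static void main(String[] args) {
        Map<String, String> payload = Map.of("my-key-A", "hello", "my-key-B", "world");

        // The encoded string is what goes into the request's "payload" field.
        String encoded = new Gson().toJson(payload);
        System.out.println(encoded); // e.g. {"my-key-A":"hello","my-key-B":"world"}
    }
}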

Examples

Suppose the ControlLoopOperationParams were populated as follows:

{
    "actor": "APPC",
    "operation": "Restart",
    "targetEntity": "my-target",
    "payload": {
        "my-key-A": "hello",
        "my-key-B": "world"
    },
    "context": {
        "event": {
            "requestId": "664be3d2-6c12-4f4b-a3e7-c349acced200"
        }
    }
}

An example of a request constructed by the actor using the above parameters, published to the APPC LCM request topic:

{
  "version": "2.0",
  "rpc-name": "restart",
  "correlation-id": "664be3d2-6c12-4f4b-a3e7-c349acced200-8c4c1914-00ed-4be0-ae3b-49dd22e8f461",
  "type": "request",
  "body": {
    "input": {
      "common-header": {
        "timestamp": "2020-05-14T19:19:32.619890900Z",
        "api-ver": "2.00",
        "originator-id": "664be3d2-6c12-4f4b-a3e7-c349acced200",
        "request-id": "664be3d2-6c12-4f4b-a3e7-c349acced200",
        "sub-request-id": "8c4c1914-00ed-4be0-ae3b-49dd22e8f461",
        "flags": {}
      },
      "action": "Restart",
      "action-identifiers": {
        "vnf-id": "my-target"
      },
      "payload": "{\"my-key-A\":\"hello\", \"my-key-B\":\"world\"}"
    }
  }
}

An example initial response received from the APPC LCM response topic:

{
  "body": {
    "output": {
      "common-header": {
        "timestamp": "2020-05-14T19:19:32.619897000Z",
        "api-ver": "5.00",
        "originator-id": "664be3d2-6c12-4f4b-a3e7-c349acced200",
        "request-id": "664be3d2-6c12-4f4b-a3e7-c349acced200",
        "sub-request-id": "8c4c1914-00ed-4be0-ae3b-49dd22e8f461",
        "flags": {}
      },
      "status": {
        "code": 100,
        "message": "Restart accepted"
      }
    }
  },
  "version": "2.0",
  "rpc-name": "restart",
  "correlation-id": "664be3d2-6c12-4f4b-a3e7-c349acced200-1",
  "type": "response"
}

An example final response received from the APPC LCM on the same response topic:

{
  "body": {
    "output": {
      "common-header": {
        "timestamp": "2020-05-14T19:19:32.619898000Z",
        "api-ver": "5.00",
        "originator-id": "664be3d2-6c12-4f4b-a3e7-c349acced200",
        "request-id": "664be3d2-6c12-4f4b-a3e7-c349acced200",
        "sub-request-id": "8c4c1914-00ed-4be0-ae3b-49dd22e8f461",
        "flags": {}
      },
      "status": {
        "code": 400,
        "message": "Restart Successful"
      }
    }
  },
  "version": "2.0",
  "rpc-name": "restart",
  "correlation-id": "664be3d2-6c12-4f4b-a3e7-c349acced200-1",
  "type": "response"
}
Configuration of the APPC LCM Actor

The following table specifies the fields that should be provided to configure the APPC LCM actor.

===========  ==================  ===============================================================
Field name   type                Description
===========  ==================  ===============================================================
sinkTopic    string              Name of the topic to which the request should be published.
sourceTopic  string              Name of the topic from which the response should be read.
                                 This must not be the same as the sinkTopic.
timeoutSec   integer (optional)  Maximum time, in seconds, to wait for a response to be
                                 received on the topic.
===========  ==================  ===============================================================

The individual operations are configured using these same field names. However, all of them are optional, as they inherit their values from the corresponding actor-level fields. That being said, the APPC Legacy operation(s) use a different topic than the APPC LCM operations. As a result, the sink and source topics should be specified for each APPC Legacy operation supported by this actor.

CDS actor support in Policy

1. Overview of CDS Actor support in Policy

ONAP Policy Framework now enables Controller Design Studio (CDS) as one of the supported actors. This allows users to configure an operational policy that uses CDS as an actor to remedy a situation.

Behind the scenes, when an incoming event is received and validated against the rules, Policy uses gRPC to trigger the CBA (Controller Blueprint Archive, a CDS artifact) configured in the operational policy, providing CDS with all the input parameters required to execute the chosen CBA.

2. Objective

The goal of this user guide is to clarify the contract between Policy and CDS so that a CBA developer can respect this input contract towards CDS when implementing a CBA.

3. Contract between Policy and CDS

Policy, upon receiving an incoming event from DCAE, fires the rules and decides which actor to trigger. If the CDS actor is chosen, Policy triggers the CBA execution using gRPC.

The parameters required for the execution of a CBA are handled internally by Policy. It makes use of the incoming event, the configured operational policy, and an A&AI lookup to build the CDS request payload.

3.1 CDS Blueprint Execution Payload format as invoked by Policy

Below are the details of the contract established between Policy and CDS to enable the automation of a remediation action within the scope of a closed loop usecase in ONAP.

The format of the input payload for CDS follows the guidelines below; a CBA developer must therefore take them into account when implementing the CBA logic. For the sake of simplicity, a JSON payload is shown instead of a gRPC payload, and each attribute of the child nodes is documented.

3.1.1 CommonHeader

The “commonHeader” field in the CBA execute payload is built by Policy. Its fields are:

  • subRequestId (string): Generated by Policy. A UUID, used internally by Policy.

  • requestId (string): Inserted by Policy. Maps to the UUID sent by DCAE, i.e. the ID used throughout the closed loop lifecycle to identify a request.

  • originatorId (string): Generated by Policy and fixed to “POLICY”.

3.1.2 ActionIdentifiers

The “actionIdentifiers” field uniquely identifies the CBA and the workflow to execute. Its fields are:

  • mode (string): Inserted by Policy; presently fixed to “sync”.

  • blueprintName (string): Inserted by Policy. Maps to the attribute that holds the blueprint-name in the operational policy configuration.

  • blueprintVersion (string): Inserted by Policy. Maps to the attribute that holds the blueprint-version in the operational policy configuration.

  • actionName (string): Inserted by Policy. Maps to the attribute that holds the action-name in the operational policy configuration.

3.1.3 Payload

The “payload” JSON node is generated by Policy for the action-name specified in the “actionIdentifiers” field, which is itself supplied through the operational policy configuration as indicated above.

3.1.3.1 Action request object

The “$actionName-request” object is generated by Policy for the action-name specified in the “actionIdentifiers” field.

The “$actionName-request” object contains:

  • a field called “resolution-key” which CDS uses to store the resolved parameters into the CDS context

  • a child node object called “$actionName-properties” which holds a map of all the parameters that serve as inputs to the CBA. It presently holds the below information:

    • all the AAI enriched attributes

    • additional parameters embedded in the Control Loop Event format which is sent by DCAE (analytics application).

    • any static information supplied through operational policy configuration which is not specific to an event but applies across all the events.

The data description for the action request object fields is as below:

  • Resolution-key

    • resolution-key (string): Generated by Policy. A UUID, generated each time a CBA execute request is invoked.

  • Action properties object

    • [$aai_node_type.$aai_attribute] (map): Inserted by Policy after performing A&AI enrichment. A map that contains the A&AI parameters for the target, keyed using the notation $aai_node_type.$aai_attribute. E.g., for a PNF the map looks like the one below.

{
  "pnf.equip-vendor":"Vendor-A",
  "pnf.ipaddress-v4-oam":"10.10.10.10",
  "pnf.in-maint":false,
  "pnf.pnf-ipv4-address":"3.3.3.3",
  "pnf.resource-version":"1570746989505",
  "pnf.nf-role":"ToR DC101",
  "pnf.equip-type":"Router",
  "pnf.equip-model":"model-123456",
  "pnf.frame-id":"3",
  "pnf.pnf-name":"demo-pnf"
}

    • data (JSON object or string): Inserted by Policy. Maps to the static payload supplied through the operational policy configuration, and holds any static information which applies across all events, as described above. If the value of the data field is a valid JSON string, it is converted to a JSON object; otherwise it is retained as a string.

    • [$additionalEventParams] (map): Inserted by Policy. Maps to the map of additional event parameters embedded in the Control Loop Event message from DCAE.
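The string-versus-JSON handling of the “data” field can be sketched as below. This is an illustration of the rule just described, not Policy's actual code; Gson is used purely as an example JSON library.

import com.google.gson.JsonParser;
import com.google.gson.JsonSyntaxException;

public class DataFieldSketch {

    /** Returns a JSON object when the value parses as one, otherwise the raw string. */
    public static Object toDataField(String configuredValue) {
        try {
            return JsonParser.parseString(configuredValue).getAsJsonObject();
        } catch (JsonSyntaxException | IllegalStateException e) {
            return configuredValue; // not a JSON object: retained as a plain string
        }
    }

    public static void main(String[] args) {
        System.out.println(toDataField("{\"active-streams\":\"7\"}")); // JSON object
        System.out.println(toDataField("peer-as=64577"));              // plain string
    }
}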

3.1.4 Summing it up: CBA execute payload generation as done by Policy

Putting all the above information together, below is the REST equivalent of the CDS blueprint execute gRPC request generated by Policy.

REST equivalent of the gRPC request from Policy to CDS to execute a CBA.

curl -X POST \
  'http://{{ip}}:{{port}}/api/v1/execution-service/process' \
  -u 'ccsdkapps:ccsdkapps' \
  -H 'Content-Type: application/json' \
  -H 'cache-control: no-cache' \
  -d '{
    "commonHeader":{
        "subRequestId":"{generated_by_policy}",
        "requestId":"{req_id_from_DCAE}",
        "originatorId":"POLICY"
    },
    "actionIdentifiers":{
        "mode":"sync",
        "blueprintName":"{blueprint_name_from_operational_policy_config}",
        "blueprintVersion":"{blueprint_version_from_operational_policy_config}",
        "actionName":"{blueprint_action_name_from_operational_policy_config}"
    },
    "payload":{
        "$actionName-request":{
            "resolution-key":"{generated_by_policy}",
            "$actionName-properties":{
                "$aai_node_type.$aai_attribute_1":"",
                "$aai_node_type.$aai_attribute_2":"",
                .........
                "data":"{static_payload_data_from_operational_policy_config}",
                "$additionalEventParam_1":"",
                "$additionalEventParam_2":"",
                .........
            }
        }
    }
}'
3.1.5 Examples

Sample CBA execute request generated by Policy for a PNF target type, when the “data” field is a plain string:

curl -X POST \
  'http://{{ip}}:{{port}}/api/v1/execution-service/process' \
  -u 'ccsdkapps:ccsdkapps' \
  -H 'Content-Type: application/json' \
  -H 'cache-control: no-cache' \
  -d '{
    "commonHeader":{
        "subRequestId":"14384b21-8224-4055-bb9b-0469397db801",
        "requestId":"d57709fb-bbec-491d-a2a6-8a25c8097ee8",
        "originatorId":"POLICY"
    },
    "actionIdentifiers":{
        "mode":"sync",
        "blueprintName":"PNF-demo",
        "blueprintVersion":"1.0.0",
        "actionName":"reconfigure-pnf"
    },
    "payload":{
        "reconfigure-pnf-request":{
            "resolution-key":"8338b828-51ad-4e7c-ac8b-08d6978892e2",
            "reconfigure-pnf-properties":{
                "pnf.equip-vendor":"Vendor-A",
                "pnf.ipaddress-v4-oam":"10.10.10.10",
                "pnf.in-maint":false,
                "pnf.pnf-ipv4-address":"3.3.3.3",
                "pnf.resource-version":"1570746989505",
                "pnf.nf-role":"ToR DC101",
                "pnf.equip-type":"Router",
                "pnf.equip-model":"model-123456",
                "pnf.frame-id":"3",
                "pnf.pnf-name":"demo-pnf",
                "data": "peer-as=64577",
                "peer-group":"demo-peer-group",
                "neighbor-address":"4.4.4.4"
            }
        }
    }
}'

Sample CBA execute request generated by Policy for a VNF target type, when the “data” field is a valid JSON string:

curl -X POST \
  'http://{{ip}}:{{port}}/api/v1/execution-service/process' \
  -u 'ccsdkapps:ccsdkapps' \
  -H 'Content-Type: application/json' \
  -H 'cache-control: no-cache' \
  -d '{
    "commonHeader":{
        "subRequestId":"14384b21-8224-4055-bb9b-0469397db801",
        "requestId":"d57709fb-bbec-491d-a2a6-8a25c8097ee8",
        "originatorId":"POLICY"
    },
    "actionIdentifiers":{
        "mode":"sync",
        "blueprintName":"vFW-CDS",
        "blueprintVersion":"1.0.0",
        "actionName":"config-deploy"
    },
    "payload":{
        "config-deploy-request":{
            "resolution-key":"6128eb53-0eac-4c79-855c-ff56a7b81141",
            "config-deploy-properties":{
                "service-instance.service-instance-id":"40004db6-c51f-45b0-abab-ea4156bae422",
                "generic-vnf.vnf-id":"8d09e3bd-ae1d-4765-b26e-4a45f568a092",
                "data":{
                    "active-streams":"7"
                }
            }
        }
    }
}'
4. Operational Policy configuration to use CDS as an actor
4.1 TOSCA compliant Control Loop Operational Policy to support CDS actor

A common base TOSCA policy type for defining an operational policy is documented below:

The APEX PDP specific operational policy is derived from the common operational TOSCA policy type, as defined in the link below:

  • https://gerrit.onap.org/r/gitweb?p=policy/models.git;a=blob;f=models-examples/src/main/resources/policytypes/onap.policies.controlloop.operational.common.Apex.yaml;h=54b69c2d8a78ab7fd8d41d3f7c05632c4d7e433d;hb=HEAD

The Drools PDP specific operational policy is also derived from the common operational TOSCA policy type and is defined in the link below:

  • https://gerrit.onap.org/r/gitweb?p=policy/models.git;a=blob;f=models-examples/src/main/resources/policytypes/onap.policies.controlloop.operational.common.Drools.yaml;h=69d73db5827cb6743172f9e0b1930eca8ba4ec0c;hb=HEAD

For integration testing, the CLAMP UI can be used to configure the operational policy.

E.g., a sample operational policy definition for the vFW usecase, using CDS as an actor:

tosca_definitions_version: tosca_simple_yaml_1_1_0
topology_template:
    policies:
    -   operational.modifyconfig.cds:
            type: onap.policies.controlloop.operational.common.Drools
            type_version: 1.0.0
            version: 1.0.0
            metadata:
                policy-id: operational.modifyconfig.cds
            properties:
                id: ControlLoop-vFirewall-d0a1dfc6-94f5-4fd4-a5b5-4630b438850a
                timeout: 1200
                abatement: false
                trigger: unique-policy-id-1-modifyConfig
                operations:
                -   id: unique-policy-id-1-modifyConfig
                    description: Modify the packet generator
                    operation:
                        actor: CDS
                        operation: ModifyConfig
                        target:
                            targetType: VNF
                            entityId:
                                resourceID: bbb3cefd-01c8-413c-9bdd-2b92f9ca3d38
                        payload:
                            artifact_name: vfw-cds
                            artifact_version: 1.0.0
                            mode: async
                            data: '{"active-streams":"7"}'
                    timeout: 300
                    retries: 0
                    success: final_success
                    failure: final_failure
                    failure_timeout: final_failure_timeout
                    failure_retries: final_failure_retries
                    failure_exception: final_failure_exception
                    failure_guard: final_failure_guard
                controllerName: usecases
4.2 API to configure the Control Loop Operational policy
4.2.1 Policy creation

The Policy API endpoint is used to create a policy, i.e. an instance of the TOSCA-compliant operational policy type. E.g., for the vFW usecase the policy type is “onap.policies.controlloop.operational.common.Drools”.

In the REST endpoint below, the hostname points to the K8S service “policy-api” and internal port 6969.

curl -X POST 'https://{$POLICY_API_URL}:{$POLICY_API_SERVICE_PORT}/policy/api/v1/policytypes/onap.policies.controlloop.operational.common.Drools/versions/1.0.0/policies/operational.modifyconfig/versions/1.0.0' \
-H 'Accept: application/json' \
-H 'Content-Type: application/json' \
-u 'healthcheck:zb!XztG34' \
-d '{$vfw-tosca-policy}'

Note: in order to create an operational policy for the APEX PDP, use the policy type “onap.policies.controlloop.operational.common.Apex”.

4.2.2 Policy deployment to PDP

The Policy PAP endpoint is used to deploy the policy to the appropriate PDP instance. In the REST endpoint URI below, the hostname points to the K8S service “policy-pap” and internal port 6969.

curl -X POST 'https://{$POLICY_PAP_URL}:{$POLICY_PAP_SERVICE_PORT}/policy/pap/v1/pdps/deployments/batch' \
-H 'Content-Type: application/json' \
-H 'Accept: application/json' \
-u 'healthcheck:zb!XztG34' \
-d '{
    "groups": [
        {
            "name": "defaultGroup",
            "deploymentSubgroups": [
                {
                    "pdpType": "drools",
                    "action": "POST",
                    "policies": [{
                            "name": "operational.modifyconfig.cds",
                            "version": "1.0.0"
                        }]
                }
            ]
        }
    ]
}'

To view the configured policies, use the REST APIs below.

curl -X GET 'https://{$POLICY_API_URL}:{$POLICY_API_SERVICE_PORT}/policy/api/v1/policytypes/onap.policies.controlloop.operational.common.Drools/versions/1.0.0/policies/operational.modifyconfig/versions/1.0.0' \
-H 'Accept: application/json' \
-u 'healthcheck:zb!XztG34'
curl --location --request GET 'https://{$POLICY_PAP_URL}:{$POLICY_PAP_SERVICE_PORT}/policy/pap/v1/pdps' \
-H 'Accept: application/json' \
-u 'healthcheck:zb!XztG34'

GUARD Actor

Overview of GUARD Actor

Within the ONAP Policy Framework, a guard is typically an implicit check performed at the start of each operation, made via a REST call to the XACML-PDP. Previously, the application built the request and made the REST call itself; guard checks have now been implemented using the new Actor framework.

Currently, there is a single operation, Decision, which is implemented by the java class GuardOperation. This class is derived from HttpOperation.

Request

A number of the request fields are populated from values specified in the actor/operation’s configuration parameters (e.g., “onapName”). Additional fields are specified below.

Request ID

The “requestId” field is set to a UUID.

Resource

The “resource” field is populated with a Map containing a single item, “guard”. The value of the item is set to the contents of the payload specified within the ControlLoopOperationParams.

Examples

Suppose the ControlLoopOperationParams were populated as follows:

{
    "actor": "GUARD",
    "operation": "Decision",
    "payload": {
      "actor": "SO",
      "operation": "VF Module Create",
      "target": "OzVServer",
      "requestId": "c7c6a4aa-bb61-4a15-b831-ba1472dd4a65",
      "clname": "ControlLoop-vDNS-6f37f56d-a87d-4b85-b6a9-cc953cf779b3",
      "vfCount": 2
    }
}

An example of a request constructed by the actor using the above parameters, sent to the GUARD REST server:

{
  "ONAPName": "Policy",
  "ONAPComponent": "Drools PDP",
  "ONAPInstance": "Usecases",
  "requestId": "90ee99d2-f2d8-4d90-b162-605203c30180",
  "action": "guard",
  "resource": {
    "guard": {
      "actor": "SO",
      "operation": "VF Module Create",
      "target": "OzVServer",
      "requestId": "c7c6a4aa-bb61-4a15-b831-ba1472dd4a65",
      "clname": "ControlLoop-vDNS-6f37f56d-a87d-4b85-b6a9-cc953cf779b3",
      "vfCount": 2
    }
  }
}

An example response received from the GUARD REST service:

{
    "status": "Permit",
    "advice": {},
    "obligations": {},
    "policies": {}
}
Configuration of the GUARD Actor

The following fields should be provided to configure the GUARD actor:

  • clientName (string): Name of the HTTP client to use to send the request to the GUARD REST server.

  • timeoutSec (integer, optional): Maximum time, in seconds, to wait for a response to be received from the REST server. Defaults to 90s.

  • path (string): URI appended to the URL. This field only applies to individual operations; it does not apply at the actor level. Note: the path should not include a leading or trailing slash.

  • onapName (string): ONAP Name (e.g., “Policy”).

  • onapComponent (string): ONAP Component (e.g., “Drools PDP”).

  • onapInstance (string): ONAP Instance (e.g., “Usecases”).

  • action (string, optional): Used to populate the “action” request field. Defaults to “guard”.

  • disabled (boolean, optional): True to disable guard checks, false otherwise. Defaults to false.

The individual operations are configured using these same field names. However, all of them, except the path, are optional, as they inherit their values from the corresponding actor-level fields.

SDNC Actor

Overview of SDNC Actor

ONAP Policy Framework enables SDNC as one of the supported actors. SDNC uses a REST-based interface and supports the following operations: BandwidthOnDemand and Reroute.

Each operation supported by the actor is associated with its own java class, which is responsible for populating the request structure appropriately. The operation-specific classes are all derived from the SdncOperation class, which is, itself, derived from HttpOperation. Each operation class implements its own makeRequest() method to construct a request appropriate to the operation.
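The per-operation pattern can be sketched as below. The classes here are simplified stand-ins for the real SdncOperation hierarchy, and the request body is abbreviated to the fields shown in the example later in this section.

public abstract class SdncOperationSketch {

    /** Each concrete operation builds its own request body. */
    protected abstract String makeRequest();

    public static class RerouteSketch extends SdncOperationSketch {
        private final String serviceInstanceId;
        private final String networkId;

        public RerouteSketch(String serviceInstanceId, String networkId) {
            this.serviceInstanceId = serviceInstanceId;
            this.networkId = networkId;
        }

        @Override
        protected String makeRequest() {
            // mirrors, in abbreviated form, the Reroute request shown below
            return "{\"input\":{"
                    + "\"request-information\":{\"request-action\":\"ReoptimizeSOTNInstance\"},"
                    + "\"service-information\":{\"service-instance-id\":\"" + serviceInstanceId + "\"},"
                    + "\"network-information\":{\"network-id\":\"" + networkId + "\"}}}";
        }
    }
}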

Request

A number of nested structures are populated within the request. The list below describes the contents of some of the fields that appear within these structures.

top level:

  • requestId (string): Inserted by Policy. Maps to the UUID sent by DCAE, i.e. the ID used throughout the closed loop lifecycle to identify a request.

sdnc-request-header:

  • svc-action (string): Set by Policy, based on the operation.

  • svc-request-id (string): Generated by Policy. A UUID.

request-information:

  • request-action (string): Set by Policy, based on the operation.

network-information (applicable to Reroute):

  • network-id (string): Set by Policy, using the “network-information.network-id” property found within the enrichment data provided by DCAE with the ONSET event.

vnf-information (applicable to BandwidthOnDemand):

  • vnf-id (string): Set by Policy, using the “vnfId” property found within the enrichment data provided by DCAE with the ONSET event.

vf-module-input-parameters (applicable to BandwidthOnDemand):

  • param[0] (string): Set by Policy, using the “bandwidth” property found within the enrichment data provided by DCAE with the ONSET event.

  • param[1] (string): Set by Policy, using the “bandwidth-change-time” property found within the enrichment data provided by DCAE with the ONSET event.

vf-module-information (applicable to BandwidthOnDemand):

  • vf-module-id (string): Set by Policy to “”.

Examples

Suppose the ControlLoopOperationParams were populated as follows:

{
    "actor": "SDNC",
    "operation": "Reroute",
    "context": {
        "enrichment": {
            "service-instance.service-instance-id": "c7c6a4aa-bb61-4a15-b831-ba1472dd4a65",
            "network-information.network-id": "2246ebc9-9b9f-42d0-a5e4-0248324fb884"
        }
    }
}

An example of a request constructed by the actor using the above parameters, sent to the SDNC REST server:

{
    "input": {
        "sdnc-request-header": {
            "svc-request-id": "2612653e-d946-423b-96d9-a8d5e8e39618",
            "svc-action": "reoptimize"
        },
        "request-information": {
            "request-action": "ReoptimizeSOTNInstance"
        },
        "service-information": {
            "service-instance-id": "c7c6a4aa-bb61-4a15-b831-ba1472dd4a65"
        },
        "network-information": {
            "network-id": "2246ebc9-9b9f-42d0-a5e4-0248324fb884"
        }
    }
}

An example response received from the SDNC REST service:

{
  "output": {
    "svc-request-id": "2612653e-d946-423b-96d9-a8d5e8e39618",
    "response-code": "200",
    "ack-final-indicator": "Y"
  }
}
Configuration of the SDNC Actor

The following fields should be provided to configure the SDNC actor:

  • clientName (string): Name of the HTTP client to use to send the request to the SDNC REST server.

  • timeoutSec (integer, optional): Maximum time, in seconds, to wait for a response to be received from the REST server. Defaults to 90s.

  • path (string): URI appended to the URL. This field only applies to individual operations; it does not apply at the actor level. Note: the path should not include a leading or trailing slash.

The individual operations are configured using these same field names. However, all of them, except the path, are optional, as they inherit their values from the corresponding actor-level fields.

SDNR Actor

Overview of SDNR Actor

ONAP Policy Framework enables SDNR as one of the supported actors. SDNR uses two DMaaP topics: one to which requests are published, and another from which responses are received. SDNR may generate more than one response for a particular request: the first simply indicates that the request was accepted, while the second indicates completion of the request. For each request, a unique sub-request ID is generated; this is used to match the received responses with the published requests.
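A rough sketch of that matching logic is shown below. The class is illustrative only, not the actor's real implementation; it shows how a generated sub-request ID can be used to pair responses, including the interim “accepted” response (status code 100 in the examples below), with the original request.

import java.util.Map;
import java.util.UUID;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;

public class SubRequestCorrelatorSketch {

    private final Map<String, CompletableFuture<String>> pending = new ConcurrentHashMap<>();

    /** Called before publishing: registers a future and returns the new sub-request ID. */
    public String register(CompletableFuture<String> future) {
        String subRequestId = UUID.randomUUID().toString();
        pending.put(subRequestId, future);
        return subRequestId;
    }

    /** Called for each message read from the response topic. */
    public void onResponse(String subRequestId, int code, String message) {
        CompletableFuture<String> future = pending.get(subRequestId);
        if (future == null) {
            return; // response belongs to some other request
        }
        if (code == 100) {
            return; // interim "accepted" response; keep waiting for the final one
        }
        pending.remove(subRequestId);
        future.complete(message); // final response: complete the operation
    }
}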

When an SDNR request completes, whether successfully or unsuccessfully, the actor populates the controlLoopResponse within the OperationOutcome. The application will typically publish this to a notification topic so that downstream systems can take appropriate action.

All SDNR operations are currently supported by a single java class, SdnrOperation, which is responsible for populating the request structure appropriately. This class is derived from BidirectionalTopicOperation.

Request
CommonHeader

The “CommonHeader” field in the request is built by Policy. Its fields are:

  • SubRequestID (string): Generated by Policy. A UUID, used internally by Policy to match the response with the request.

  • RequestID (string): Inserted by Policy. Maps to the UUID sent by DCAE, i.e. the ID used throughout the closed loop lifecycle to identify a request.

Action

The “action” field uniquely identifies the operation to perform. Operation names are not validated. Instead, they are passed to SDNR, untouched.

RPC Name

The “rpc-name” field is the same as the “action” field, with everything mapped to lower case.

Payload

The “payload” field is populated with the payload text that is provided within the ONSET event; no additional transformation is applied.

Examples

Suppose the ControlLoopOperationParams were populated as follows:

{
    "actor": "SDNR",
    "operation": "ModifyConfig",
    "context": {
        "event": {
            "requestId": "664be3d2-6c12-4f4b-a3e7-c349acced200",
            "payload": "some text"
        }
    }
}

An example of a request constructed by the actor using the above parameters, published to the SDNR request topic:

{
  "body": {
    "input": {
      "CommonHeader": {
        "TimeStamp": "2020-05-18T14:43:58.550499700Z",
        "APIVer": "1.0",
        "RequestID": "664be3d2-6c12-4f4b-a3e7-c349acced200",
        "SubRequestID": "848bfd15-b189-43a1-bdea-80982b41fa24",
        "RequestTrack": {},
        "Flags": {}
      },
      "Action": "ModifyConfig",
      "Payload": "some text"
    }
  },
  "version": "1.0",
  "rpc-name": "modifyconfig",
  "correlation-id": "664be3d2-6c12-4f4b-a3e7-c349acced200-848bfd15-b189-43a1-bdea-80982b41fa24",
  "type": "request"
}

An example initial response received from the SDNR response topic:

{
    "body": {
        "output": {
            "CommonHeader": {
                "TimeStamp": "2020-05-18T14:44:10.000Z",
                "APIver": "1.0",
                "RequestID": "664be3d2-6c12-4f4b-a3e7-c349acced200",
                "SubRequestID": "848bfd15-b189-43a1-bdea-80982b41fa24",
                "RequestTrack": [],
                "Flags": []
            },
            "Status": {
                "Code": 100,
                "Value": "ACCEPTED"
            }
        }
    },
    "version": "1.0",
    "rpc-name": "modifyconfig",
    "correlation-id": "664be3d2-6c12-4f4b-a3e7-c349acced200-848bfd15-b189-43a1-bdea-80982b41fa24",
    "type": "response"
}

An example final response received from the SDNR on the same response topic:

{
    "body": {
        "output": {
            "CommonHeader": {
                "TimeStamp": "2020-05-18T14:44:20.000Z",
                "APIver": "1.0",
                "RequestID": "664be3d2-6c12-4f4b-a3e7-c349acced200",
                "SubRequestID": "848bfd15-b189-43a1-bdea-80982b41fa24",
                "RequestTrack": [],
                "Flags": []
            },
            "Status": {
                "Code": 200,
                "Value": "SUCCESS"
            },
            "Payload": "{ \"Configurations\":[ { \"Status\": { \"Code\": 200, \"Value\": \"SUCCESS\" }, \"data\":{\"FAPService\":{\"alias\":\"Chn0330\",\"X0005b9Lte\":{\"phyCellIdInUse\":6,\"pnfName\":\"ncserver23\"},\"CellConfig\":{\"LTE\":{\"RAN\":{\"Common\":{\"CellIdentity\":\"Chn0330\"}}}}}} } ] }"
        }
    },
    "version": "1.0",
    "rpc-name": "modifyconfig",
    "correlation-id": "664be3d2-6c12-4f4b-a3e7-c349acced200-848bfd15-b189-43a1-bdea-80982b41fa24",
    "type": "response"
}
Configuration of the SDNR Actor

The following fields should be provided to configure the SDNR actor:

  • sinkTopic (string): Name of the topic to which the request should be published.

  • sourceTopic (string): Name of the topic from which the response should be read. This must not be the same as the sinkTopic.

  • timeoutSec (integer, optional): Maximum time, in seconds, to wait for a response to be received on the topic.

The individual operations are configured using these same field names. However, all of them are optional, as they inherit their values from the corresponding actor-level fields.

SO Actor

Overview of SO Actor

ONAP Policy Framework enables SO as one of the supported actors. SO uses a REST-based interface. However, as requests may not complete right away, a REST-based polling interface is used to check the status of the request. The requestId is extracted from the initial response and is appended to the pathGet configuration parameter to generate the URL used to poll for completion.
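A rough sketch of that polling flow is shown below, using the JDK HTTP client purely for illustration; the actor's real implementation lives in the operation classes described next. The baseUrl parameter and the string-based completion check are assumptions made to keep the sketch short.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.concurrent.TimeUnit;

public class SoPollingSketch {

    /** Polls SO until the request completes or maxGets polls have been issued. */
    public static String pollUntilComplete(String baseUrl, String pathGet, String requestId,
            int maxGets, int waitSecGet) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // the requestId from the initial response is appended to pathGet
        URI pollUri = URI.create(baseUrl + "/" + pathGet + requestId);

        for (int attempt = 0; attempt < maxGets; attempt++) {
            TimeUnit.SECONDS.sleep(waitSecGet);
            HttpResponse<String> response = client.send(
                    HttpRequest.newBuilder(pollUri).GET().build(),
                    HttpResponse.BodyHandlers.ofString());

            // a real implementation decodes the JSON body and inspects
            // requestStatus.requestState; string matching keeps the sketch short
            if (response.body().contains("\"requestState\": \"COMPLETE\"")) {
                return response.body();
            }
        }
        throw new IllegalStateException("SO request did not complete after " + maxGets + " polls");
    }
}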

Each operation supported by the actor is associated with its own java class, which is responsible for populating the request structure appropriately and sending the request. Note: the request may be issued via POST, DELETE, etc., depending on the operation. The operation-specific classes are all derived from the SoOperation class, which is, itself, derived from HttpOperation. The following operations are currently supported:

  • VF Module Create

  • VF Module Delete

Request

A number of nested structures are populated within the request. Several of them are populated with data extracted from the A&AI Custom Query response that is retrieved using the Target resource ID specified in the ControlLoopOperationParams. The list below describes the contents of some of the fields that appear within these structures.

top level:

  • operationType (string): Inserted by Policy. Name of the operation.

requestDetails:

  • requestParameters: Applicable to VF Module Create. Set by Policy from the requestParameters specified in the payload of the ControlLoopOperationParams. The value is treated as a JSON string and decoded into an SoRequestParameters object that is placed into this field.

  • configurationParameters: Applicable to VF Module Create. Set by Policy from the configurationParameters specified in the payload of the ControlLoopOperationParams. The value is treated as a JSON string and decoded into a List of Maps that is placed into this field.

modelInfo:

  Set by Policy. Copied from the target specified in the ControlLoopOperationParams.

cloudConfiguration:

  • tenantId (string): The ID of the “default” Tenant selected from the A&AI Custom Query response.

  • lcpCloudRegionId (string): The ID of the “default” Cloud Region selected from the A&AI Custom Query response.

relatedInstanceList[0]:

  Applicable to VF Module Create. The “default” Service Instance selected from the A&AI Custom Query response.

relatedInstanceList[1]:

  Applicable to VF Module Create. The VNF selected from the A&AI Custom Query response.

Examples

Suppose the ControlLoopOperationParams were populated as follows:

{
    "actor": "SO",
    "operation": "Reroute",
    "target": {
        "modelInvariantId": "2246ebc9-9b9f-42d0-a5e4-0248324fb884",
        "modelName": "VlbCdsSb00..vdns..module-3",
        "modelVersion": "1",
        "modelCustomizationUuid": "3a74410a-6c74-4a32-94b2-71488be6da1a",
        "modelVersionId": "1f94cedb-f656-4ddb-9f55-60ba1fc7d4b1",
        "modelCustomizationId": "3a74410a-6c74-4a32-94b2-71488be6da1a",
        "modelUuid": "1f94cedb-f656-4ddb-9f55-60ba1fc7d4b1",
        "modelInvariantUuid": "2246ebc9-9b9f-42d0-a5e4-0248324fb884"
    },
    "context": {
        "cqdata": {
            "tenant": {
                "id": "41d6d38489bd40b09ea8a6b6b852dcbd"
            },
            "cloud-region": {
                "id": "RegionOne"
            },
            "service-instance": {
                "id": "c14e61b5-1ee6-4925-b4a9-b9c8dbfe3f34",
                "modelInvariantId": "6418bb39-61e1-45fc-a36b-3f211bb846c7",
                "modelName": "vLB_CDS_SB00_02",
                "modelVersion": "1.0",
                "modelVersionId": "d01d9dec-afb6-4a53-bd9e-2eb10ca07a51",
                "modelUuid": "d01d9dec-afb6-4a53-bd9e-2eb10ca07a51",
                "modelInvariantUuid": "6418bb39-61e1-45fc-a36b-3f211bb846c7"
            },
            "generic-vnf": [
                {
                    "vnfId": "6636c4d5-f608-4376-b6d8-7977e98cb16d",
                    "vf-modules": [
                        {
                            "modelInvariantId": "827356a9-cb60-4976-9713-c30b4f850b41",
                            "modelName": "vLB_CDS_SB00",
                            "modelVersion": "1.0",
                            "modelCustomizationUuid": "6478f94b-0b20-4b44-afc0-94e48070586a",
                            "modelVersionId": "ca3c4797-0cdd-4797-8bec-9a3ce78ac4da",
                            "modelCustomizationId": "6478f94b-0b20-4b44-afc0-94e48070586a",
                            "modelUuid": "ca3c4797-0cdd-4797-8bec-9a3ce78ac4da",
                            "modelInvariantUuid": "827356a9-cb60-4976-9713-c30b4f850b41"
                        }
                    ]
                }
            ]
        }
    },
    "payload": {
        "requestParameters": "{\"usePreload\": false}",
        "configurationParameters": "[{\"ip-addr\": \"$.vf-module-topology.vf-module-parameters.param[16].value\", \"oam-ip-addr\": \"$.vf-module-topology.vf-module-parameters.param[30].value\"}]"
    }
}

An example of a request constructed by the actor using the above parameters, sent to the SO REST server:

{
  "requestDetails": {
    "modelInfo": {
        "modelInvariantId": "2246ebc9-9b9f-42d0-a5e4-0248324fb884",
        "modelType": "vfModule",
        "modelName": "VlbCdsSb00..vdns..module-3",
        "modelVersion": "1",
        "modelCustomizationUuid": "3a74410a-6c74-4a32-94b2-71488be6da1a",
        "modelVersionId": "1f94cedb-f656-4ddb-9f55-60ba1fc7d4b1",
        "modelCustomizationId": "3a74410a-6c74-4a32-94b2-71488be6da1a",
        "modelUuid": "1f94cedb-f656-4ddb-9f55-60ba1fc7d4b1",
        "modelInvariantUuid": "2246ebc9-9b9f-42d0-a5e4-0248324fb884"
    },
    "cloudConfiguration": {
        "tenantId": "41d6d38489bd40b09ea8a6b6b852dcbd",
        "lcpCloudRegionId": "RegionOne"
    },
    "requestInfo": {
      "instanceName": "vfModuleName",
      "source": "POLICY",
      "suppressRollback": false,
      "requestorId": "policy"
    },
    "relatedInstanceList": [
      {
        "relatedInstance": {
            "instanceId": "c14e61b5-1ee6-4925-b4a9-b9c8dbfe3f34",
            "modelInfo": {
                "modelInvariantId": "6418bb39-61e1-45fc-a36b-3f211bb846c7",
                "modelType": "service",
                "modelName": "vLB_CDS_SB00_02",
                "modelVersion": "1.0",
                "modelVersionId": "d01d9dec-afb6-4a53-bd9e-2eb10ca07a51",
                "modelUuid": "d01d9dec-afb6-4a53-bd9e-2eb10ca07a51",
                "modelInvariantUuid": "6418bb39-61e1-45fc-a36b-3f211bb846c7"
            }
        }
      },
      {
        "relatedInstance": {
            "instanceId": "6636c4d5-f608-4376-b6d8-7977e98cb16d",
            "modelInfo": {
                "modelInvariantId": "827356a9-cb60-4976-9713-c30b4f850b41",
                "modelType": "vnf",
                "modelName": "vLB_CDS_SB00",
                "modelVersion": "1.0",
                "modelCustomizationUuid": "6478f94b-0b20-4b44-afc0-94e48070586a",
                "modelVersionId": "ca3c4797-0cdd-4797-8bec-9a3ce78ac4da",
                "modelCustomizationId": "6478f94b-0b20-4b44-afc0-94e48070586a",
                "modelUuid": "ca3c4797-0cdd-4797-8bec-9a3ce78ac4da",
                "modelInvariantUuid": "827356a9-cb60-4976-9713-c30b4f850b41"
            }
        }
      }
    ],
    "requestParameters": {
        "usePreload": false
    },
    "configurationParameters": [
        {
            "ip-addr": "$.vf-module-topology.vf-module-parameters.param[16].value",
            "oam-ip-addr": "$.vf-module-topology.vf-module-parameters.param[30].value"
        }
    ]
  }
}

An example response received to the initial request, from the SO REST service:

{
    "requestReferences": {
        "requestId": "70f28791-c271-4cae-b090-0c2a359e26d9",
        "instanceId": "68804843-18e0-41a3-8838-a6d90a035e1a",
        "requestSelfLink": "http://so.onap:8080/orchestrationRequests/v7/b789e4e6-0b92-42c3-a723-1879af9c799d"
    }
}

An example URL used for the “get” (i.e., poll) request subsequently sent to SO:

GET https://so.onap:6969/orchestrationRequests/v5/70f28791-c271-4cae-b090-0c2a359e26d9

An example response received to the poll request, when SO has not completed the request:

{
    "request": {
        "requestId": "70f28791-c271-4cae-b090-0c2a359e26d9",
        "startTime": "Fri, 15 May 2020 12:12:50 GMT",
        "requestScope": "vfModule",
        "requestType": "scaleOut",
        "requestDetails": {
            "modelInfo": {
                "modelInvariantId": "2246ebc9-9b9f-42d0-a5e4-0248324fb884",
                "modelType": "vfModule",
                "modelName": "VlbCdsSb00..vdns..module-3",
                "modelVersion": "1",
                "modelCustomizationUuid": "3a74410a-6c74-4a32-94b2-71488be6da1a",
                "modelVersionId": "1f94cedb-f656-4ddb-9f55-60ba1fc7d4b1",
                "modelCustomizationId": "3a74410a-6c74-4a32-94b2-71488be6da1a",
                "modelUuid": "1f94cedb-f656-4ddb-9f55-60ba1fc7d4b1",
                "modelInvariantUuid": "2246ebc9-9b9f-42d0-a5e4-0248324fb884"
            },
            "requestInfo": {
                "source": "POLICY",
                "instanceName": "vfModuleName",
                "suppressRollback": false,
                "requestorId": "policy"
            },
            "relatedInstanceList": [
                {
                    "relatedInstance": {
                        "instanceId": "c14e61b5-1ee6-4925-b4a9-b9c8dbfe3f34",
                        "modelInfo": {
                            "modelInvariantId": "6418bb39-61e1-45fc-a36b-3f211bb846c7",
                            "modelType": "service",
                            "modelName": "vLB_CDS_SB00_02",
                            "modelVersion": "1.0",
                            "modelVersionId": "d01d9dec-afb6-4a53-bd9e-2eb10ca07a51",
                            "modelUuid": "d01d9dec-afb6-4a53-bd9e-2eb10ca07a51",
                            "modelInvariantUuid": "6418bb39-61e1-45fc-a36b-3f211bb846c7"
                        }
                    }
                },
                {
                    "relatedInstance": {
                        "instanceId": "6636c4d5-f608-4376-b6d8-7977e98cb16d",
                        "modelInfo": {
                            "modelInvariantId": "827356a9-cb60-4976-9713-c30b4f850b41",
                            "modelType": "vnf",
                            "modelName": "vLB_CDS_SB00",
                            "modelVersion": "1.0",
                            "modelCustomizationUuid": "6478f94b-0b20-4b44-afc0-94e48070586a",
                            "modelVersionId": "ca3c4797-0cdd-4797-8bec-9a3ce78ac4da",
                            "modelCustomizationId": "6478f94b-0b20-4b44-afc0-94e48070586a",
                            "modelUuid": "ca3c4797-0cdd-4797-8bec-9a3ce78ac4da",
                            "modelInvariantUuid": "827356a9-cb60-4976-9713-c30b4f850b41"
                        }
                    }
                }
            ],
            "cloudConfiguration": {
                "tenantId": "41d6d38489bd40b09ea8a6b6b852dcbd",
                "tenantName": "Integration-SB-00",
                "cloudOwner": "CloudOwner",
                "lcpCloudRegionId": "RegionOne"
            },
            "requestParameters": {
                "usePreload": false
            },
            "configurationParameters": [
                {
                    "ip-addr": "$.vf-module-topology.vf-module-parameters.param[16].value",
                    "oam-ip-addr": "$.vf-module-topology.vf-module-parameters.param[30].value"
                }
            ]
        },
        "instanceReferences": {
            "serviceInstanceId": "c14e61b5-1ee6-4925-b4a9-b9c8dbfe3f34",
            "vnfInstanceId": "6636c4d5-f608-4376-b6d8-7977e98cb16d",
            "vfModuleInstanceId": "68804843-18e0-41a3-8838-a6d90a035e1a",
            "vfModuleInstanceName": "vfModuleName"
        },
        "requestStatus": {
            "requestState": "IN_PROGRESS",
            "statusMessage": "FLOW STATUS: Execution of ActivateVfModuleBB has completed successfully, next invoking ConfigurationScaleOutBB (Execution Path progress: BBs completed = 4; BBs remaining = 2). TASK INFORMATION: Last task executed: Call SDNC RESOURCE STATUS: The vf module was found to already exist, thus no new vf module was created in the cloud via this request",
            "percentProgress": 68,
            "timestamp": "Fri, 15 May 2020 12:13:41 GMT"
        }
    }
}

An example response received to the poll request, when SO has completed the request:

{
    "request": {
        "requestId": "70f28791-c271-4cae-b090-0c2a359e26d9",
        "startTime": "Fri, 15 May 2020 12:12:50 GMT",
        "finishTime": "Fri, 15 May 2020 12:14:21 GMT",
        "requestScope": "vfModule",
        "requestType": "scaleOut",
        "requestDetails": {
            "modelInfo": {
                "modelInvariantId": "2246ebc9-9b9f-42d0-a5e4-0248324fb884",
                "modelType": "vfModule",
                "modelName": "VlbCdsSb00..vdns..module-3",
                "modelVersion": "1",
                "modelCustomizationUuid": "3a74410a-6c74-4a32-94b2-71488be6da1a",
                "modelVersionId": "1f94cedb-f656-4ddb-9f55-60ba1fc7d4b1",
                "modelCustomizationId": "3a74410a-6c74-4a32-94b2-71488be6da1a",
                "modelUuid": "1f94cedb-f656-4ddb-9f55-60ba1fc7d4b1",
                "modelInvariantUuid": "2246ebc9-9b9f-42d0-a5e4-0248324fb884"
            },
            "requestInfo": {
                "source": "POLICY",
                "instanceName": "vfModuleName",
                "suppressRollback": false,
                "requestorId": "policy"
            },
            "relatedInstanceList": [
                {
                    "relatedInstance": {
                        "instanceId": "c14e61b5-1ee6-4925-b4a9-b9c8dbfe3f34",
                        "modelInfo": {
                            "modelInvariantId": "6418bb39-61e1-45fc-a36b-3f211bb846c7",
                            "modelType": "service",
                            "modelName": "vLB_CDS_SB00_02",
                            "modelVersion": "1.0",
                            "modelVersionId": "d01d9dec-afb6-4a53-bd9e-2eb10ca07a51",
                            "modelUuid": "d01d9dec-afb6-4a53-bd9e-2eb10ca07a51",
                            "modelInvariantUuid": "6418bb39-61e1-45fc-a36b-3f211bb846c7"
                        }
                    }
                },
                {
                    "relatedInstance": {
                        "instanceId": "6636c4d5-f608-4376-b6d8-7977e98cb16d",
                        "modelInfo": {
                            "modelInvariantId": "827356a9-cb60-4976-9713-c30b4f850b41",
                            "modelType": "vnf",
                            "modelName": "vLB_CDS_SB00",
                            "modelVersion": "1.0",
                            "modelCustomizationUuid": "6478f94b-0b20-4b44-afc0-94e48070586a",
                            "modelVersionId": "ca3c4797-0cdd-4797-8bec-9a3ce78ac4da",
                            "modelCustomizationId": "6478f94b-0b20-4b44-afc0-94e48070586a",
                            "modelUuid": "ca3c4797-0cdd-4797-8bec-9a3ce78ac4da",
                            "modelInvariantUuid": "827356a9-cb60-4976-9713-c30b4f850b41"
                        }
                    }
                }
            ],
            "cloudConfiguration": {
                "tenantId": "41d6d38489bd40b09ea8a6b6b852dcbd",
                "tenantName": "Integration-SB-00",
                "cloudOwner": "CloudOwner",
                "lcpCloudRegionId": "RegionOne"
            },
            "requestParameters": {
                "usePreload": false
            },
            "configurationParameters": [
                {
                    "ip-addr": "$.vf-module-topology.vf-module-parameters.param[16].value",
                    "oam-ip-addr": "$.vf-module-topology.vf-module-parameters.param[30].value"
                }
            ]
        },
        "instanceReferences": {
            "serviceInstanceId": "c14e61b5-1ee6-4925-b4a9-b9c8dbfe3f34",
            "vnfInstanceId": "6636c4d5-f608-4376-b6d8-7977e98cb16d",
            "vfModuleInstanceId": "68804843-18e0-41a3-8838-a6d90a035e1a",
            "vfModuleInstanceName": "vfModuleName"
        },
        "requestStatus": {
            "requestState": "COMPLETE",
            "statusMessage": "STATUS: ALaCarte-VfModule-scaleOut request was executed correctly. FLOW STATUS: Successfully completed all Building Blocks RESOURCE STATUS: The vf module was found to already exist, thus no new vf module was created in the cloud via this request",
            "percentProgress": 100,
            "timestamp": "Fri, 15 May 2020 12:14:21 GMT"
        }
    }
}
Configuration of the SO Actor

The following fields should be provided to configure the SO actor:

  • clientName (string): Name of the HTTP client to use to send the request to the SO REST server.

  • timeoutSec (integer, optional): Maximum time, in seconds, to wait for a response to be received from the REST server. Defaults to 90s.

  • path (string): URI appended to the URL. This field only applies to individual operations; it does not apply at the actor level. Note: the path should not include a leading or trailing slash.

  • maxGets (integer, optional): Maximum number of get/poll requests to make to determine the final outcome of the request. Defaults to 20.

  • waitSecGet (integer, optional): Time, in seconds, to wait between issuing “get” requests. Defaults to 20s.

  • pathGet (string, optional): Path to use when polling (i.e., issuing “get” requests). Note: this should include a trailing slash, but no leading slash.

The individual operations are configured using these same field names. However, all of them, except the path, are optional, as they inherit their values from the corresponding actor-level fields.

VFC Actor

Overview of VFC Actor

ONAP Policy Framework enables VFC as one of the supported actors.

Note

There has not been any support given to the Policy Framework project for the VFC Actor in several releases. Thus, the code and information provided is to the best of the knowledge of the team. If there are any questions or problems, please consult the VFC Project to help provide guidance.

VFC uses a REST-based interface. However, as requests may not complete right away, a REST-based polling interface is used to check the status of the request. The jobId is extracted from each response and is appended to the pathGet configuration parameter to generate the URL used to poll for completion.

Each operation supported by the actor is associated with its own java class, which is responsible for populating the request structure appropriately and sending the request. The operation-specific classes are all derived from the VfcOperation class, which is, itself, derived from HttpOperation. The following operations are currently supported:

  • Restart

Request

A number of nested structures are populated within the request. Several of them are populated from items found within the A&AI “enrichment” data provided by DCAE with the ONSET event. The list below describes the contents of some of the fields that appear within these structures.

top level:

  • requestId (string): Inserted by Policy. Maps to the UUID sent by DCAE, i.e. the ID used throughout the closed loop lifecycle to identify a request.

  • nsInstanceId (string): Set by Policy, using the “service-instance.service-instance-id” property found within the enrichment data.

healVnfData:

  • cause (string): Set by Policy to the name of the operation.

  • vnfInstanceId (string): Set by Policy, using the “generic-vnf.vnf-id” property found within the enrichment data.

additionalParams:

  • action: Set by Policy to the name of the operation.

actionvminfo:

  • vmid (string): Set by Policy, using the “vserver.vserver-id” property found within the enrichment data.

  • vmname (string): Set by Policy, using the “vserver.vserver-name” property found within the enrichment data.

Examples

Suppose the ControlLoopOperationParams were populated as follows:

{
    TBD
}

An example of a request constructed by the actor using the above parameters, sent to the VFC REST server:

{
    TBD
}

An example response received to the initial request, from the VFC REST service:

{
    TBD
}

An example URL used for the “get” (i.e., poll) request subsequently sent to VFC:

TBD

An example response received to the poll request, when VFC has not completed the request:

{
    TBD
}

An example response received to the poll request, when VFC has completed the request:

{
    TBD
}
Configuration of the VFC Actor

The following fields should be provided to configure the VFC actor:

  • clientName (string): Name of the HTTP client to use to send the request to the VFC REST server.

  • timeoutSec (integer, optional): Maximum time, in seconds, to wait for a response to be received from the REST server. Defaults to 90s.

The individual operations are configured using these same field names. However, all of them are optional, as they inherit their values from the corresponding actor-level fields. The following additional fields are specified at the individual operation level:

  • path (string): URI appended to the URL. Note: this should not include a leading or trailing slash.

  • maxGets (integer, optional): Maximum number of get/poll requests to make to determine the final outcome of the request. Defaults to 0 (i.e., no polling).

  • waitSecGet (integer): Time, in seconds, to wait between issuing “get” requests. Defaults to 20s.

  • pathGet (string): Path to use when polling (i.e., issuing “get” requests). Note: this should include a trailing slash, but no leading slash.

Property-configuration mechanisms

This article explains how to implement the handling and validation of common parameters in the Policy Framework components.

Without the Spring Boot framework

The application should have a ParameterHandler class that maps values from JSON to a POJO: it loads the configuration file and converts it, performing all type conversions.

The code below shows an example of a ParameterHandler:

public class PapParameterHandler {

    private static final Logger LOGGER = LoggerFactory.getLogger(PapParameterHandler.class);

    private static final Coder CODER = new StandardCoder();

    public PapParameterGroup getParameters(final PapCommandLineArguments arguments) throws PolicyPapException {
        PapParameterGroup papParameterGroup = null;

        try {
            var file = new File(arguments.getFullConfigurationFilePath());
            papParameterGroup = CODER.decode(file, PapParameterGroup.class);
        } catch (final CoderException e) {
            final String errorMessage = "error reading parameters from \"" + arguments.getConfigurationFilePath()
                    + "\"\n" + "(" + e.getClass().getSimpleName() + ")";
            throw new PolicyPapException(errorMessage, e);
        }

        if (papParameterGroup == null) {
            final String errorMessage = "no parameters found in \"" + arguments.getConfigurationFilePath() + "\"";
            LOGGER.error(errorMessage);
            throw new PolicyPapException(errorMessage);
        }

        final ValidationResult validationResult = papParameterGroup.validate();
        if (!validationResult.isValid()) {
            String returnMessage =
                    "validation error(s) on parameters from \"" + arguments.getConfigurationFilePath() + "\"\n";
            returnMessage += validationResult.getResult();

            LOGGER.error(returnMessage);
            throw new PolicyPapException(returnMessage);
        }

        return papParameterGroup;
    }
}

The POJO has to implement the org.onap.policy.common.parameters.ParameterGroup interface or, alternatively, extend org.onap.policy.common.parameters.ParameterGroupImpl. The latter already implements a validate() method that performs error checking using the org.onap.policy.common.parameters.annotations validation annotations.

The code below shows an example POJO:

@NotNull
@NotBlank
@Getter
public class PapParameterGroup extends ParameterGroupImpl {
    @Valid
    private RestServerParameters restServerParameters;
    @Valid
    private PdpParameters pdpParameters;
    @Valid
    private PolicyModelsProviderParameters databaseProviderParameters;
    private boolean savePdpStatisticsInDb;
    @Valid
    private TopicParameterGroup topicParameterGroup;

    private List<@NotNull @Valid RestClientParameters> healthCheckRestClientParameters;

    public PapParameterGroup(final String name) {
        super(name);
    }
}

The code below is an example of unit-test validation of the PapParameterGroup POJO:

private static final Coder coder = new StandardCoder();

@Test
void testPapParameterGroup_NullName() throws Exception {
    String json = commonTestData.getPapParameterGroupAsString(1).replace("\"PapGroup\"", "null");
    final PapParameterGroup papParameters = coder.decode(json, PapParameterGroup.class);
    final ValidationResult validationResult = papParameters.validate();
    assertFalse(validationResult.isValid());
    assertEquals(null, papParameters.getName());
    assertThat(validationResult.getResult()).contains("is null");
}

Using the Spring Boot framework

Spring automatically loads the property file and makes it available through the org.springframework.core.env.Environment Spring component.

Environment

A component can use the Environment component directly. This is generally not a good approach, because it provides no type conversion or error checking, but it can be useful when the name of the property to be accessed changes dynamically.

@Component
@RequiredArgsConstructor
public class Example {

    // the field must be final for @RequiredArgsConstructor to generate
    // the constructor through which Spring injects it
    private final Environment env;
    ....

    public void method(String pathPropertyName) {
        .....
        String path = env.getProperty(pathPropertyName);
        .....
    }
}
Annotation-based Spring configuration

All annotation-based Spring configurations support the Spring Expression Language (SpEL), a powerful expression language that supports querying and manipulating an object graph at runtime. Documentation on SpEL can be found here: https://docs.spring.io/spring-framework/docs/3.0.x/reference/expressions.html.

A component can use org.springframework.beans.factory.annotation.Value, which reads from properties, performs a type conversion, and injects the value into the field. There is no error checking, but it can assign a default value if the property is not defined.

@Value("${security.enable-csrf:true}")
private boolean csrfEnabled = true;

The code below shows how to inject the value of a property into a @Scheduled configuration.

@Scheduled(
        fixedRateString = "${runtime.participantParameters.heartBeatMs}",
        initialDelayString = "${runtime.participantParameters.heartBeatMs}")
public void schedule() {
}
ConfigurationProperties

@ConfigurationProperties can be used to map values from .properties (.yml is also supported) files to a POJO. It performs all type conversions, and error checking using the javax.validation.constraints validation annotations.

@Validated
@Getter
@Setter
@ConfigurationProperties(prefix = "runtime")
public class ClRuntimeParameterGroup {
    @Min(100)
    private long heartBeatMs;

    @Valid
    @Positive
    private long reportingTimeIntervalMs;

    @Valid
    @NotNull
    private ParticipantUpdateParameters updateParameters;

    @NotBlank
    private String description;
}

If a POJO like the one shown before needs to include a class that implements the ParameterGroup interface, the org.onap.policy.common.parameters.validation.ParameterGroupConstraint annotation has to be added. That annotation is configured to use ParameterGroupValidator, which handles the conversion of an org.onap.policy.common.parameters.BeanValidationResult to a Spring validation result.

The code below shows how to add a TopicParameterGroup parameter to ClRuntimeParameterGroup:

@NotNull
@ParameterGroupConstraint
private TopicParameterGroup topicParameterGroup;

A bean configured with @ConfigurationProperties is automatically a Spring component and can be injected into other Spring components. The code below shows an example:

@Component
@RequiredArgsConstructor
public class Example {

    // final, so that @RequiredArgsConstructor generates the injecting constructor
    private final ClRuntimeParameterGroup parameters;
    ....

    public void method() {
        .....
        long heartBeatMs = parameters.getHeartBeatMs();
        .....
    }
}

The code below is an example of unit-test validation of the ClRuntimeParameterGroup POJO:

private ValidatorFactory validatorFactory = Validation.buildDefaultValidatorFactory();

@Test
void testParameters_NullTopicParameterGroup() {
    final ClRuntimeParameterGroup parameters = CommonTestData.geParameterGroup();
    parameters.setTopicParameterGroup(null);
    assertThat(validatorFactory.getValidator().validate(parameters)).isNotEmpty();
}

Policy Drools PDP Engine

The Drools PDP, aka PDP-D, is the PDP in the Policy Framework that uses the Drools BRMS to enforce policies.

The PDP-D functionality has been partitioned into two functional areas:

  • PDP-D Engine.

  • PDP-D Applications.

PDP-D Engine

The PDP-D Engine is the infrastructure that policy applications use. It provides networking services, resource grouping, and diagnostics.

The PDP-D Engine supports the following Tosca Native Policy Types:

  • onap.policies.native.Drools

  • onap.policies.native.drools.Controller

These types are used to dynamically add and configure new application controllers.

The PDP-D Engine hosts applications by means of controllers. Controllers may support other Tosca Policy Types. The types supported by the Control Loop applications are:

  • onap.policies.controlloop.operational.common.Drools

PDP-D Applications

A PDP-D application, i.e. a controller, contains references to the resources that the application needs. These include networked endpoint references and maven coordinates.

Control Loop applications are used in ONAP to enforce operational policies.

The following guides offer more information on these two functional areas.

PDP-D Engine

Overview

The PDP-D Core Engine provides an infrastructure and services for drools-based applications in the context of Policies and ONAP.

A PDP-D supports applications by means of controllers. A controller is a named grouping of resources. These typically include references to communication endpoints, maven artifact coordinates, and coders for message mapping.

Controllers use communication endpoints to interact with remote networked entities, typically using messaging (DMaaP or UEB) or HTTP.

PDP-D Engine capabilities can be extended via features. Integration with other Policy Framework components (API, PAP, and PDP-X) is through one of them (feature-lifecycle).

The PDP-D Engine infrastructure provides mechanisms for data migration, diagnostics, and application management.

Software

Source Code repositories

The PDP-D software is mainly located in the policy/drools repository with the communication endpoints software residing in the policy/common repository and Tosca policy models in the policy/models repository.

Docker Image

Check the drools-pdp released versions page for the latest versions. At the time of this writing 1.8.2 is the latest version.

docker pull onap/policy-drools:1.8.2

A container instantiated from this image will run under the non-privileged policy account.

The PDP-D root directory is located at /opt/app/policy ($POLICY_HOME), with the exception of $HOME/.m2, which contains the local maven repository. The PDP-D configuration resides in the following directories:

  • /opt/app/policy/config: ($POLICY_HOME/config or $POLICY_CONFIG) contains engine, controllers, and endpoint configuration.

  • /home/policy/.m2: ($HOME/.m2) maven repository configuration.

  • /opt/app/policy/etc/: ($POLICY_HOME/etc) miscellaneous configuration such as certificate stores.

The following command can be used to explore the directory layout.

docker run --rm -it nexus3.onap.org:10001/onap/policy-drools:1.8.2 bash

Communication Endpoints

PDP-D supports the following networked infrastructures, also referred to as communication infrastructures in the source code.

  • DMaaP

  • UEB

  • NOOP

  • Http Servers

  • Http Clients

The source code is located at the policy-endpoints module in the policy/common repository.

These network resources are named and typically have a global scope, so they are visible to the PDP-D engine (for administration purposes), application controllers, and features.

DMaaP, UEB, and NOOP are message-based communication infrastructures, hence the terminology of source and sinks, to denote their directionality into or out of the controller, respectively.

An endpoint can be either managed or unmanaged. The default for an endpoint is to be managed, meaning that it is globally accessible by name and managed by the PDP-D engine. Unmanaged topics are used when neither global visibility nor centralized PDP-D management is desired. The software that uses unmanaged topics is responsible for their lifecycle management.

DMaaP Endpoints

These are messaging endpoints that use DMaaP as the communication infrastructure.

Typically, a managed endpoint configuration is stored in the <topic-name>-topic.properties files.

For example, the DCAE_TOPIC-topic.properties is defined as

dmaap.source.topics=DCAE_TOPIC

dmaap.source.topics.DCAE_TOPIC.effectiveTopic=${env:DCAE_TOPIC}
dmaap.source.topics.DCAE_TOPIC.servers=${env:DMAAP_SERVERS}
dmaap.source.topics.DCAE_TOPIC.consumerGroup=${env:DCAE_CONSUMER_GROUP}
dmaap.source.topics.DCAE_TOPIC.https=true

In this example, the generic name of the source endpoint is DCAE_TOPIC. This is known as the canonical name. The actual topic used in communication exchanges in a physical lab is contained in the $DCAE_TOPIC environment variable. This environment variable is usually set up by devops on a per installation basis to meet the needs of each lab spec.

In the previous example, DCAE_TOPIC is a source-only topic.

Sink topics are specified similarly, but indicate that they are sink endpoints from the perspective of the controller. For example, the APPC-CL topic is configured as

dmaap.source.topics=APPC-CL
dmaap.sink.topics=APPC-CL

dmaap.source.topics.APPC-CL.servers=${env:DMAAP_SERVERS}
dmaap.source.topics.APPC-CL.https=true

dmaap.sink.topics.APPC-CL.servers=${env:DMAAP_SERVERS}
dmaap.sink.topics.APPC-CL.https=true

Although not shown in these examples, additional configuration options are available such as user name, password, security keys, consumer group and consumer instance.

UEB Endpoints

UEB endpoints are messaging endpoints similar to the DMaaP ones.

For example, the DCAE_TOPIC-topic.properties can be converted to a UEB one by replacing the dmaap prefix with ueb:

ueb.source.topics=DCAE_TOPIC

ueb.source.topics.DCAE_TOPIC.effectiveTopic=${env:DCAE_TOPIC}
ueb.source.topics.DCAE_TOPIC.servers=${env:DMAAP_SERVERS}
ueb.source.topics.DCAE_TOPIC.consumerGroup=${env:DCAE_CONSUMER_GROUP}
ueb.source.topics.DCAE_TOPIC.https=true
NOOP Endpoints

NOOP (no-operation) endpoints are messaging endpoints that don’t have any network attachments. They are used for testing convenience. To convert the DCAE_TOPIC-topic.properties to a NOOP endpoint, simply replace the dmaap prefix with noop:

noop.source.topics=DCAE_TOPIC
noop.source.topics.DCAE_TOPIC.effectiveTopic=${env:DCAE_TOPIC}
HTTP Clients

HTTP Clients are typically configured in files following the <name>-http-client.properties naming convention. One such example is the AAI HTTP Client:

http.client.services=AAI

http.client.services.AAI.managed=true
http.client.services.AAI.https=true
http.client.services.AAI.host=${envd:AAI_HOST}
http.client.services.AAI.port=${envd:AAI_PORT}
http.client.services.AAI.userName=${envd:AAI_USERNAME}
http.client.services.AAI.password=${envd:AAI_PASSWORD}
http.client.services.AAI.contextUriPath=${envd:AAI_CONTEXT_URI}
HTTP Servers

HTTP Servers are stored in files that follow a similar naming convention <name>-http-server.properties. The following is an example of a server named CONFIG, getting most of its configuration from environment variables.

http.server.services=CONFIG

http.server.services.CONFIG.host=${envd:TELEMETRY_HOST}
http.server.services.CONFIG.port=7777
http.server.services.CONFIG.userName=${envd:TELEMETRY_USER}
http.server.services.CONFIG.password=${envd:TELEMETRY_PASSWORD}
http.server.services.CONFIG.restPackages=org.onap.policy.drools.server.restful
http.server.services.CONFIG.managed=false
http.server.services.CONFIG.swagger=true
http.server.services.CONFIG.https=true
http.server.services.CONFIG.aaf=${envd:AAF:false}

Endpoints configuration resides in the $POLICY_HOME/config (or $POLICY_CONFIG) directory in a container.

Controllers

Controllers are the means for the PDP-D to run applications. Controllers are defined in <name>-controller.properties files.

For example, see the usecases controller configuration.

This configuration file has two sections: a) application maven coordinates, and b) endpoint references and coders.

Maven Coordinates

The coordinates section (rules) points to the controller-usecases kjar artifact. It is the brain of the control loop application.

controller.name=usecases

rules.groupId=${project.groupId}
rules.artifactId=controller-usecases
rules.version=${project.version}
.....

This kjar contains the usecases DRL file (there may be more than one DRL file included).

...
rule "NEW.TOSCA.POLICY"
    when
        $policy : ToscaPolicy()
    then

    ...

    ControlLoopParams params = ControlLoopUtils.toControlLoopParams($policy);
    if (params != null) {
        insert(params);
    }
end
...

The DRL, in conjunction with the dependent java libraries in the kjar pom, realizes the application’s function. For instance, it realizes the vFirewall, vCPE, and vDNS use cases in ONAP.

..
<dependency>
    <groupId>org.onap.policy.models.policy-models-interactions.model-actors</groupId>
    <artifactId>actor.appclcm</artifactId>
    <version>${policy.models.version}</version>
    <scope>provided</scope>
</dependency>
...
Endpoints References and Coders

The usecases-controller.properties configuration also contains a mix of source (incoming controller traffic) and sink (outgoing controller traffic) configuration. It also contains specific filtering and mapping rules for incoming and outgoing DMaaP messages, known as coders.

...
dmaap.source.topics=DCAE_TOPIC,APPC-CL,APPC-LCM-WRITE,SDNR-CL-RSP
dmaap.sink.topics=APPC-CL,APPC-LCM-READ,POLICY-CL-MGT,SDNR-CL,DCAE_CL_RSP


dmaap.source.topics.APPC-LCM-WRITE.events=org.onap.policy.appclcm.AppcLcmDmaapWrapper
dmaap.source.topics.APPC-LCM-WRITE.events.org.onap.policy.appclcm.AppcLcmDmaapWrapper.filter=[?($.type == 'response')]
dmaap.source.topics.APPC-LCM-WRITE.events.custom.gson=org.onap.policy.appclcm.util.Serialization,gson

dmaap.sink.topics.APPC-CL.events=org.onap.policy.appc.Request
dmaap.sink.topics.APPC-CL.events.custom.gson=org.onap.policy.appc.util.Serialization,gsonPretty
...

In this example, the coders specify that only incoming messages over the DMaaP endpoint reference APPC-LCM-WRITE that have a field named type under the root JSON object with value response are allowed into the controller application. In that case, the incoming message is converted into an object (fact) of type org.onap.policy.appclcm.AppcLcmDmaapWrapper. The coder has attached a custom serialization implementation provided by the application, the class org.onap.policy.appclcm.util.Serialization. Note that the coder filter is expressed in JSONPath notation.
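
As an illustration, an incoming message shaped as follows (all fields other than type are assumed for the example) would pass the [?($.type == 'response')] filter and be inserted into the controller as an AppcLcmDmaapWrapper fact:

{
    "version": "2.0",
    "type": "response",
    "body": {}
}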

Note that not all the communication endpoint references need to be explicitly referenced within the controller configuration file. For example, Http clients do not. The reasons are historical, as the PDP-D was initially intended to only communicate through messaging-based protocols such as UEB or DMaaP in asynchronous unidirectional mode. The introduction of Http with synchronous bi-directional communication with remote endpoints made it more convenient for the application to manage each network exchange.

Controllers configuration resides in the $POLICY_HOME/config (or $POLICY_CONFIG) directory in a container.

Other Configuration Files

There are other types of configuration files that controllers can use, for example .environment files that provide a means to share data across applications. The controlloop.properties.environment is one such example.

Tosca Policies

PDP-D supports Tosca Policies through the feature-lifecycle. The PDP-D receives its policy set from the PAP. A policy conforms to its Policy Type specification. Policy Types and policies are created via the API component, and policy deployments are orchestrated by the PAP.

All communication between PAP and PDP-D is over the DMaaP POLICY-PDP-PAP topic.

Native Policy Types

The PDP-D Engine supports two (native) Tosca policy types by means of the lifecycle feature:

  • onap.policies.native.drools.Controller

  • onap.policies.native.drools.Artifact

These types can be used to dynamically deploy or undeploy application controllers, assign policy types, and upgrade or downgrade their attached maven artifact versions.

An example native controller policy is shown below.

{
    "tosca_definitions_version": "tosca_simple_yaml_1_0_0",
    "topology_template": {
        "policies": [
            {
                "example.controller": {
                    "type": "onap.policies.native.drools.Controller",
                    "type_version": "1.0.0",
                    "version": "1.0.0",
                    "name": "example.controller",
                    "metadata": {
                        "policy-id": "example.controller"
                    },
                    "properties": {
                        "controllerName": "lifecycle",
                        "sourceTopics": [
                            {
                                "topicName": "DCAE_TOPIC",
                                "events": [
                                    {
                                        "eventClass": "java.util.HashMap",
                                        "eventFilter": "[?($.closedLoopEventStatus == 'ONSET')]"
                                    },
                                    {
                                        "eventClass": "java.util.HashMap",
                                        "eventFilter": "[?($.closedLoopEventStatus == 'ABATED')]"
                                    }
                                ]
                            }
                        ],
                        "sinkTopics": [
                            {
                                "topicName": "APPC-CL",
                                "events": [
                                    {
                                        "eventClass": "java.util.HashMap",
                                        "eventFilter": "[?($.CommonHeader && $.Status)]"
                                    }
                                ]
                            }
                        ],
                        "customConfig": {
                            "field1" : "value1"
                        }
                    }
                }
            }
        ]
    }
}

The actual application coordinates are provided with a policy of type onap.policies.native.drools.Artifact, as shown in the example native artifact policy below.

{
    "tosca_definitions_version": "tosca_simple_yaml_1_0_0",
    "topology_template": {
        "policies": [
            {
                "example.artifact": {
                    "type": "onap.policies.native.drools.Artifact",
                    "type_version": "1.0.0",
                    "version": "1.0.0",
                    "name": "example.artifact",
                    "metadata": {
                        "policy-id": "example.artifact"
                    },
                    "properties": {
                        "rulesArtifact": {
                            "groupId": "org.onap.policy.drools.test",
                            "artifactId": "lifecycle",
                            "version": "1.0.0"
                        },
                        "controller": {
                            "name": "lifecycle"
                        }
                    }
                }
            }
        ]
    }
}
Operational Policy Types

The PDP-D also recognizes Tosca Operational Policies, although it needs an application controller that understands them to execute them. These are:

  • onap.policies.controlloop.operational.common.Drools

A minimum of one application controller that supports these capabilities must be installed in order to honor the operational policy types. One such controller is the usecases controller residing in the policy/drools-applications repository.

Controller Policy Type Support

Note that a controller may support other policy types. A controller may declare them explicitly in a native onap.policies.native.drools.Controller policy.

"customConfig": {
    "controller.policy.types" : "policy.type.A"
}

The controller application can also declare its supported policy types in the kjar. For example, the usecases controller packages this information in the kmodule.xml. One advantage of this approach is that the PDP-D only commits to executing policies against these policy types if a supporting controller is up and running.

<kmodule xmlns="http://jboss.org/kie/6.0.0/kmodule">
    <kbase name="onap.policies.controlloop.operational.common.Drools" default="false" equalsBehavior="equality"/>
    <kbase name="onap.policies.controlloop.Operational" equalsBehavior="equality"
           packages="org.onap.policy.controlloop" includes="onap.policies.controlloop.operational.common.Drools">
        <ksession name="usecases"/>
    </kbase>
</kmodule>

Software Architecture

PDP-D is divided into two layers:

Core Layer

The core layer directly interfaces with the drools libraries with two main abstractions:

Policy Container and Sessions

The PolicyContainer abstracts the drools KieContainer, while a PolicySession abstracts a drools KieSession. PDP-D uses stateful sessions in active mode (fireUntilHalt) (please visit the drools website for additional documentation).
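
For background, the snippet below is a minimal sketch of running a drools stateful session in active mode using the public KIE API; the classpath-based container and session name are illustrative and do not reflect the actual PDP-D source.

import org.kie.api.KieServices;
import org.kie.api.runtime.KieContainer;
import org.kie.api.runtime.KieSession;

public class ActiveSessionSketch {
    public static void main(String[] args) {
        KieServices services = KieServices.Factory.get();
        // a classpath-based container; the PDP-D builds containers from maven coordinates
        KieContainer container = services.getKieClasspathContainer();
        KieSession session = container.newKieSession("usecases");   // session name is illustrative
        // fireUntilHalt() blocks, so the session runs on its own thread
        new Thread(session::fireUntilHalt, "policy-session").start();
    }
}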

Management Layer

The management layer manages the PDP-D and builds on top of the core capabilities.

PolicyEngine

The PDP-D PolicyEngine is the top abstraction and abstracts away the PDP-D and all the resources it holds. The reader looking at the source code can start looking at this component in a top-down fashion. Note that the PolicyEngine abstraction should not be confused with the software in the policy/engine repository; there is no relationship whatsoever other than in the naming.

The PolicyEngine represents the PDP-D, holds all PDP-D resources, and orchestrates activities among those.

The PolicyEngine manages applications via the PolicyController abstractions in the base code. The relationship between the PolicyEngine and the PolicyController is one-to-many.

The PolicyEngine holds other global resources such as a thread pool, policies validator, telemetry server, and unmanaged topics for administration purposes.

The PolicyEngine has interception points that allow *features* to observe and alter the default PolicyEngine behavior.

The PolicyEngine implements the *Startable* and *Lockable* interfaces. These operations have a cascading effect on the resources the PolicyEngine holds, as it is the top level entity, thus affecting controllers and endpoints. These capabilities are intended to be used for extensions, for example active/standby multi-node capabilities. This programmability is exposed via the telemetry API, and feature hooks.
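
The snippet below sketches the shape of these two interfaces; the method sets shown are representative of the start/stop and lock/unlock semantics described above, not the exact source definitions.

public interface Startable {
    boolean start();
    boolean stop();
    boolean isAlive();
}

public interface Lockable {
    boolean lock();
    boolean unlock();
    boolean isLocked();
}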

Configuration

PolicyEngine related configuration is located in the engine.properties, and engine-system.properties.

The engine configuration files reside in the $POLICY_CONFIG directory.

PolicyController

A PolicyController represents an application. Each PolicyController has an instance of a DroolsController. The PolicyController provides the means to group application specific resources into a single unit. Such resources include the application’s maven coordinates, endpoint references, and coders.

A PolicyController uses a DroolsController to interface with the core layer (PolicyContainer and PolicySession).

The relationship between the PolicyController and the DroolsController is one-to-one. The DroolsController currently supports two implementations, the MavenDroolsController and the NullDroolsController. The DroolsController’s polymorphic behavior depends on whether a maven artifact is attached to the controller or not.

Configuration

The controllers configuration resides in the $POLICY_CONFIG directory.

Programmability

PDP-D is programmable through:

  • Features and Event Listeners.

  • Maven-Drools applications.

Using Features and Listeners

Features hook into the interception points provided by the PDP-D main entities.

Endpoint Listeners, see here and here, can be used in conjunction with features for additional capabilities.

Using Maven-Drools applications

Maven-based drools applications can run any arbitrary functionality structured with rules and java logic.

Telemetry Extensions

It is recommended that features (extensions) offer a diagnostics REST API to integrate with the telemetry API. This is done by placing JAX-RS files under the package org.onap.policy.drools.server.restful. The root context path for all the telemetry services is /policy/pdp/engine.
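
A minimal sketch of such an extension is shown below; the sub-path and response body are illustrative, and the effective URL depends on the telemetry server configuration. Because the class lives in the org.onap.policy.drools.server.restful package, it is picked up by the telemetry server.

package org.onap.policy.drools.server.restful;

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.core.Response;

@Path("engine/tools/myfeature")   // sub-path is illustrative
public class RestMyFeatureTelemetry {

    @GET
    public Response status() {
        // report this feature's diagnostics
        return Response.ok("my-feature is alive").build();
    }
}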

Features

Features are an extension mechanism for PDP-D functionality. Features can be toggled on and off. A feature is composed of:

  • Java libraries.

  • Scripts and configuration files.

Java Extensions

Additional functionality can be provided in the form of java libraries that hook into the PolicyEngine, PolicyController, DroolsController, and PolicySession interception points to observe or alter the PDP-D logic.

See the Feature APIs available in the management and core layers.

The convention used for naming these extension modules is api-<name> for interfaces, and feature-<name> for the actual java extensions.

Configuration Items

A feature may also include installation items such as scripts, SQL, maven artifacts, and configuration files.

The reader can refer to the policy/drools-pdp and policy/drools-applications (https://git.onap.org/policy/drools-applications) repositories for miscellaneous feature implementations.

Layout

A feature is packaged in a feature-<name>.zip and has this internal layout:

# #######################################################################################
# Features Directory Layout:
#
# $POLICY_HOME/
#   L─ features/
#        L─ <feature-name>*/
#            L─ [config]/
#            |   L─ <config-file>+
#            L─ [bin]/
#            |   L─ <bin-file>+
#            L─ lib/
#            |   L─ [dependencies]/
#            |   |   L─ <dependent-jar>+
#            │   L─ feature/
#            │       L─ <feature-jar>
#            L─ [db]/
#            │   L─ <db-name>/+
#            │       L─ sql/
#            │           L─ <sql-scripts>*
#            L─ [artifacts]/
#                L─ <artifact>+
#            L─ [install]
#                L─ [enable]
#                L─ [disable]
#                L─ [other-directories-or-files]
#
# notes:  [] = optional , * = 0 or more , + = 1 or more
#   <feature-name> directory without "feature-" prefix.
#   [config]       feature configuration directory that contains all configuration
#                  needed for this feature
#   [config]/<config-file>  preferably named with "feature-<feature-name>" prefix to
#                  precisely match it against the exact features, source code, and
#                  associated wiki page for configuration details.
#   [bin]       feature bin directory that contains helper scripts for this feature
#   [bin]/<executable-file>  preferably named with "feature-<feature-name>" prefix.
#   lib            jar libraries needed by this feature
#   lib/[dependencies]  3rd party jar dependencies not provided by base installation
#                  of pdp-d that are necessary for <feature-name> to operate
#                  correctly.
#   lib/feature    the single feature jar that implements the feature.
#   [db]           database directory, if the feature contains sql.
#   [db]/<db-name> database to which underlying sql scripts should be applied.
#                  ideally, <db-name> = <feature-name> so it is easy to associate
#                  the db data with a feature itself.   In addition, since a feature is
#                  a somewhat independent isolated unit of functionality, the <db-name>
#                  database ideally isolates all its data.
#   [db]/<db-name>/sql  directory with all the sql scripts.
#   [db]/<db-name>/sql/<sql-scripts>  for this feature, sql
#                  upgrade scripts should be suffixed with ".upgrade.sql"
#                  and downgrade scripts should be suffixed with ".downgrade.sql"
#   [artifacts]    maven artifacts to be deployed in a maven repository.
#   [artifacts]/<artifact>  maven artifact with identifiable maven coordinates embedded
#                  in the artifact.
#   [install]      custom installation directory where custom enable or disable scripts
#                  and other free form data is included to be used by the enable
#                  and disable scripts.
#   [install]/[enable] enable script executed when the enable operation is invoked in
#                  the feature.
#   [install]/[disable] disable script executed when the disable operation is invoked in
#                  the feature.
#   [install]/[other-directories-or-files] other executables, or data that can be used
#                  by the feature for any of its operations.   The content is determined
#                  by the feature designer.
# ########################################################################################

The features command-line tool is used for administration purposes:

Usage:  features status
            Get enabled/disabled status on all features
        features enable <feature> ...
            Enable the specified feature
        features disable <feature> ...
            Disable the specified feature
        features install [ <feature> | <file-name> ] ...
            Install the specified feature
        features uninstall <feature> ...
            Uninstall the specified feature
Features available in the Docker image

The only enabled feature in the onap/policy-drools image is:

  • lifecycle: enables the lifecycle capability to integrate with the Policy Framework components.

The following features are included in the image but are disabled:

  • distributed locking: distributed resource locking.

  • healthcheck: basic PDP-D Engine healthcheck.

Healthcheck

The Healthcheck feature provides reports used to verify the health of PolicyEngine.manager in addition to the construction, operation, and deconstruction of HTTP server/client objects.

When enabled, the feature takes as input a properties file named feature-healthcheck.properties. This file should contain the configuration properties necessary for the construction of HTTP client and server objects.

Upon initialization, the feature first constructs HTTP server and client objects using the properties from its properties file. A healthCheck operation is then triggered. The logic of the healthCheck verifies that PolicyEngine.manager is alive, and iteratively tests each HTTP server object by sending HTTP GET requests using its respective client object. If a server returns a “200 OK” message, it is marked as “healthy” in its individual report. Any other return code results in an “unhealthy” report.

After the testing of the server objects has completed, the feature returns a single consolidated report.

Lifecycle

The “lifecycle” feature enables a PDP-D to work with the architectural framework introduced in the Dublin release.

The lifecycle feature maintains three states: TERMINATED, PASSIVE, and ACTIVE. The PAP interacts with the lifecycle feature to put a PDP-D in PASSIVE or ACTIVE states. The PASSIVE state allows for Tosca Operational policies to be deployed. Policy execution is enabled when the PDP-D transitions to the ACTIVE state.

This feature can coexist side by side with the legacy mode of operation that pre-dates the Dublin release.

Distributed Locking

The Distributed Locking Feature provides locking of resources across a pool of PDP-D hosts. The list of locks is maintained in a database, where each record includes a resource identifier, an owner identifier, and an expiration time. Typically, a drools application will unlock the resource when its operation completes. However, if it fails to do so, the resource will be automatically released when the lock expires, thus preventing a resource from becoming permanently locked.
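
The record semantics just described can be sketched as follows; this illustrates the expiration behavior only and is not the feature's actual schema or API.

import java.time.Instant;

// one row in the lock table: resource, owner, and when the lock auto-releases
public record LockRecord(String resourceId, String ownerKey, Instant expiration) {

    // the lock only holds while its expiration is in the future;
    // after that, the resource is considered free again
    public boolean isActive(Instant now) {
        return now.isBefore(expiration);
    }
}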

Other features

The following features have been contributed to the policy/drools-pdp repository but are either unnecessary or have not been thoroughly tested:

Feature: Active/Standby Management

When the Feature Session Persistence is enabled, there can only be one active/providing service Drools PDP due to the behavior of Drools persistence. The Active/Standby Management Feature controls the selection of the Drools PDP that is providing service. It utilizes its own database and the State Management Feature database in the election algorithm. All Drools PDP nodes periodically run the election algorithm and, since they all use the same data, all nodes come to the same conclusion with the “elected” node assuming an active/providingservice state. Thus, the algorithm is distributed and has no single point of failure - assuming the database is configured for high availability.

When the algorithm selects a Drools PDP to be active/providing service the controllers and topic endpoints are unlocked and allowed to process transactions. When a Drools PDP transitions to a hotstandby or coldstandby state, the controllers and topic endpoints are locked, preventing the Drools PDP from handling transactions.

Enabling and Disabling the Active/Standby Management Feature

The Active/Standby Management Feature is enabled from the command line when logged in as policy after configuring the feature properties file (see Description Details section). From the command line:

  • > features status - Lists the status of features

  • > features enable active-standby-management - Enables the Active-Standby Management Feature

  • > features disable active-standby-management - Disables the Active-Standby Management Feature

The Drools PDP must be stopped prior to enabling/disabling features and then restarted after the features have been enabled/disabled.

Enabling Active/Standby Management Feature
 policy@hyperion-4:/opt/app/policy$ policy stop
 [drools-pdp-controllers]
  L []: Stopping Policy Management... Policy Management (pid=354) is stopping... Policy Management has stopped.
 policy@hyperion-4:/opt/app/policy$ features enable active-standby-management
 name                      version         status
 ----                      -------         ------
 controlloop-utils         1.1.0-SNAPSHOT  disabled
 healthcheck               1.1.0-SNAPSHOT  disabled
 test-transaction          1.1.0-SNAPSHOT  disabled
 eelf                      1.1.0-SNAPSHOT  disabled
 state-management          1.1.0-SNAPSHOT  disabled
 active-standby-management 1.1.0-SNAPSHOT  enabled
 session-persistence       1.1.0-SNAPSHOT  disabled
Description Details
Election Algorithm

The election algorithm selects the active/providingservice Drools PDP. The algorithm on each node reads the standbystatus from the StateManagementEntity table for all other nodes to determine if they are providingservice or in a hotstandby state and able to assume an active status. It uses the DroolsPdpEntity table to verify that other node election algorithms are currently functioning and when the other nodes were last designated as the active Drools PDP.

In general terms, the election algorithm periodically gathers the standbystatus and designation status for all the Drools PDPs. If the node which is currently designated as providingservice is “current” in updating its status, no action is required. If the designated node is either not current or has a standbystatus other than providingservice, it is time to choose another designated DroolsPDP. The algorithm will build a list of all DroolsPDPs that are current and have a standbystatus of hotstandby. It will then give preference to DroolsPDPs within the same site, choosing the DroolsPDP with the lowest lexicographic value of the droolsPdpId (resourceName). If the chosen DroolsPDP is itself, it will promote its standbystatus from hotstandby to providingservice. If the chosen DroolsPDP is other than itself, it will do nothing.

When the DroolsPDP promotes its standbystatus from hotstandby to providing service, a state change notification will occur and the Standby State Change Handler will take appropriate action.

Standby State Change Handler

The Standby State Change Handler (PMStandbyStateChangeHandler class) extends the IntegrityMonitor StateChangeNotifier class, which implements the Observer interface. When the DroolsPDP is constructed, an instance of the handler is constructed and registered with StateManagement. Whenever StateManagement implements a state transition, it calls the handleStateChange() method of the handler. If the StandbyStatus transitions to hot or cold standby, the handler makes a call into the lower level management layer to lock the application controllers and topic endpoints, preventing it from handling transactions. If the StandbyStatus transitions to providingservice, the handler makes a call into the lower level management layer to unlock the application controllers and topic endpoints, allowing it to handle transactions.

Database

The Active/Standby Feature creates a database named activestandbymanagement with a single table, droolspdpentity. The election handler uses that table to determine which DroolsPDP was/is designated as the active DroolsPDP and which DroolsPDP election handlers are healthy enough to periodically update their status.

The droolspdpentity table has the following columns:
  • pdpId - The unique identifier for the DroolsPDP. It is the same as the resourceName

  • designated - Has a value of 1 if the DroolsPDP is designated as active/providingservice. It has a value of 0 otherwise

  • priority - Indicates the priority level of the DroolsPDP for the election handler. In general, this is ignored and all DroolsPDPs have the same priority.

  • updatedDate - This is the timestamp for the most recent update of the record.

  • designatedDate - This is the timestamp that indicates when the designated column was most recently set to a value of 1

  • site - This is the name of the site

Properties

The properties are found in the feature-active-standby-management.properties file. In general, the properties are adequately described in the properties file. Parameters which must be replaced prior to usage are indicated thus: ${{parameter to be replaced}}

feature-active-standby-management.properties
 # DB properties
 javax.persistence.jdbc.driver=org.mariadb.jdbc.Driver
 javax.persistence.jdbc.url=jdbc:mariadb://${{SQL_HOST}}:3306/activestandbymanagement
 javax.persistence.jdbc.user=${{SQL_USER}}
 javax.persistence.jdbc.password=${{SQL_PASSWORD}}

 # Must be unique across the system
 resource.name=pdp1
 # Name of the site in which this node is hosted
 site_name=site1

 # Needed by DroolsPdpsElectionHandler
 # pdp.checkInterval is the interval in ms between updates of the updatedDate
 pdp.checkInterval=1500
 # pdp.updateInterval is the interval in ms between executions of the election handler
 pdp.updateInterval=1000
 #pdp.timeout=3000
 # Need long timeout, because testTransaction is only run every 10 seconds.
 pdp.timeout=15000
 # how long do we wait for the pdp table to populate on initial startup
 pdp.initialWait=20000

End of Document

Feature: Controller Logging

The controller logging feature provides a way to log network topic messages to a separate controller log file for each controller. This allows a clear separation of network traffic between all of the controllers.

To enable the feature, type “features enable controller-logging”. The feature will then display as “enabled”.

_images/ctrlog_enablefeature.png

When the feature’s enable script is executed, it will search the $POLICY_HOME/config directory for any logback files containing the prefix “logback-include-”. These logger configuration files are typically provided with a feature that installs a controlloop (ex: controlloop-amsterdam and controlloop-casablanca features). Once these configuration files are found by the enable script, the logback.xml config file will be updated to include the configurations.

_images/ctrlog_logback.png
Controller Logger Configuration

The contents of a logback-include-*.xml file follow the same configuration syntax as the logback.xml file. It contains the configuration for the logger associated with the given controller.

Note

A controller logger MUST be configured with the same name as the controller (ex: a controller named “casablanca” will have a logger named “casablanca”).

_images/ctrlog_config.png
Viewing the Controller Logs

Once a logger for the controller is configured, start the drools-pdp and navigate to the $POLICY_LOGS directory. A new controller specific network log will be added that contains all the network topic traffic of the controller.

_images/ctrlog_view.png

The original network log remains and will append traffic information from all topics regardless of which controller it is for. To abbreviate and customize messages for the network log, refer to the Feature MDC Filters documentation.

End of Document

Feature: EELF (Event and Error Logging Framework)

The EELF feature provides backwards compatibility with R0 logging functionality. It supports the use of EELF/Common Framework style logging at the same time as traditional logging.

See also

Additional information for EELF logging can be found at EELF wiki.

To utilize the eelf logging capabilities, first stop policy engine and then enable the feature using the “features” command.

Enabling EELF Feature
 policy@hyperion-4:/opt/app/policy$ policy stop
 [drools-pdp-controllers]
  L []: Stopping Policy Management... Policy Management (pid=354) is stopping... Policy Management has stopped.
 policy@hyperion-4:/opt/app/policy$ features enable eelf
 name                      version         status
 ----                      -------         ------
 controlloop-utils         1.1.0-SNAPSHOT  disabled
 healthcheck               1.1.0-SNAPSHOT  disabled
 test-transaction          1.1.0-SNAPSHOT  disabled
 eelf                      1.1.0-SNAPSHOT  enabled
 state-management          1.1.0-SNAPSHOT  disabled
 active-standby-management 1.1.0-SNAPSHOT  disabled
 session-persistence       1.1.0-SNAPSHOT  disabled

The output of the enable command will indicate whether or not the feature was enabled successfully.

Policy engine can then be started as usual.

End of Document

Feature: MDC Filters

The MDC Filter Feature provides configurable properties for network topics to extract fields from JSON strings and place them in a mapped diagnostic context (MDC).

Before enabling the feature, the network log contains the entire content of each message received on a topic. Below is a sample message from the network log. Note that the topic used for this tutorial is DCAE-CL.

[2019-03-22T16:36:42.942+00:00|DMAAP-source-DCAE-CL][IN|DMAAP|DCAE-CL]
{"closedLoopControlName":"ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e","closedLoopAlarmStart":1463679805324,"closedLoopEventClient":"DCAE_INSTANCE_ID.dcae-tca","closedLoopEventStatus":"ONSET","requestID":"664be3d2-6c12-4f4b-a3e7-c349acced200","target_type":"VNF","target":"generic-vnf.vnf-id","AAI":{"vserver.is-closed-loop-disabled":"false","vserver.prov-status":"ACTIVE","generic-vnf.vnf-id":"vCPE_Infrastructure_vGMUX_demo_app"},"from":"DCAE","version":"1.0.2"}

The network log can become voluminous if messages received from various topics carry large payloads for various controllers. With the MDC Filter Feature, users can define keys in JSON messages whose values are extracted and structured according to a desired format. This is done by configuring the feature’s properties.

Configuring the MDC Filter Feature

To configure the feature, the feature must be enabled using the following command:

features enable mdc-filters
_images/mdc_enablefeature.png

Once the feature is enabled, there will be a new properties file in $POLICY_HOME/config called feature-mdc-filters.properties.

_images/mdc_properties.png

The properties file contains filters to extract key data from messages on the network topics that are saved in an MDC, which can be referenced in logback.xml. The configuration format is as follows:

<protocol>.<type>.topics.<topic-name>.mdcFilters=<filters>

Where:
   <protocol> = ueb, dmaap, noop
   <type> = source, sink
   <topic-name> = Name of DMaaP or UEB topic
   <filters> = Comma separated list of key/json-path(s)

The filters consist of an MDC key used by logback.xml (see below) and the JSON path(s) to the desired data. The path always begins with ‘$’, which signifies the root of the JSON document. The underlying library, JsonPath, uses a query syntax for searching through a JSON file. The query syntax and some examples can be found at https://github.com/json-path/JsonPath. An example filter for the DCAE-CL is provided below:

dmaap.source.topics.DCAE-CL.mdcFilters=requestID=$.requestID

This filter is specifying that the dmaap source topic DCAE-CL will search each message received for requestID by following the path starting at the root ($) and searching for the field requestID. If the field is found, it is placed in the MDC with the key “requestID” as signified by the left hand side of the filter before the “=”.
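
Downstream, the stored value can be read back through the standard SLF4J MDC API; a minimal sketch:

import org.slf4j.MDC;

// returns the value extracted by the filter above, or null if the field was absent
String requestId = MDC.get("requestID");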

Configuring Multiple Filters and Paths

Multiple fields can be found for a given JSON document by a comma separated list of <mdcKey,jsonPath> pairs. For the previous example, another filter is added by adding a comma and specifying the filter as follows:

dmaap.source.topics.DCAE-CL.mdcFilters=requestID=$.requestID,closedLoopName=$.closedLoopControlName

The feature will now search for both requestID and closedLoopControlName in a JSON message using the specified “$.” path notations and put them in the MDC using the keys “requestID” and “closedLoopName” respectively. To further refine the filter, if a topic receives different message structures (ex: a response message structure vs an error message structure) the “|” notation allows multiple paths to a key to be defined. The feature will search through each specified path until a match is found. An example can be found below:

dmaap.source.topics.DCAE-CL.mdcFilters=requestID=$.requestID,closedLoopName=$.closedLoopControlName|$.AAI.closedLoopControlName

Now when the filter is searching for closedLoopControlName it will check the first path “$.closedLoopControlName”, if it is not present then it will try the second path “$.AAI.closedLoopControlName”. If the user is unsure of the path to a field, JsonPath supports a deep scan by using the “..” notation. This will search the entire JSON document for the field without specifying the path.
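
For example, a deep-scan variant of the requestID filter (illustrative) would search the entire document for the field:

dmaap.source.topics.DCAE-CL.mdcFilters=requestID=$..requestID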

Accessing the MDC Values in logback.xml

Once the feature properties have been defined, logback.xml contains an “abstractNetworkPattern” property that will hold the desired message structure defined by the user. The user has the flexibility to define the message structure however they choose, but for this tutorial the following pattern is used:

<property name="abstractNetworkPattern" value="[%d{yyyy-MM-dd'T'HH:mm:ss.SSS+00:00, UTC}] [%X{networkEventType:-NULL}|%X{networkProtocol:-NULL}|%X{networkTopic:-NULL}|%X{requestID:-NULL}|%X{closedLoopName:-NULL}]%n" />

The “value” portion consists of two headers in bracket notation: the first header defines the timestamp, while the second header references the keys from the MDC filters defined in the feature properties. The standard logback syntax is used; more information on the syntax can be found in the logback documentation. Note that some of the fields here were not defined in the feature properties file. The feature automatically puts the network infrastructure information in the keys that are prepended with “network”. The currently supported network infrastructure information is listed below.

Field               Values
-----               ------
networkEventType    IN, OUT
networkProtocol     DMAAP, UEB, NOOP
networkTopic        The name of the topic that received the message

To reference the keys from the feature properties the syntax “%X{KEY_DEFINED_IN_PROPERTIES}” provides access to the value. An optional addition is to append “:-”, which specifies a default value to display in the log if the field was not found in the message received. For this tutorial, a default of “NULL” is displayed for any of the fields that were not found while filtering. The “|” has no special meaning and is just used as a field separator for readability; the user can decorate the log format to their desired visual appeal.

Network Log Structure After Feature Enabled

Once the feature and logback.xml is configured to the user’s desired settings, start the PDP-D by running “policy start”. Based on the configurations from the previous sections of this tutorial, the following log message is written to network log when a message is received on the DCAE-CL topic:

[2019-03-22T16:38:23.884+00:00] [IN|DMAAP|DCAE-CL|664be3d2-6c12-4f4b-a3e7-c349acced200|ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e]

The message has now been filtered to display the network infrastructure information and the data extracted from the JSON message based on the feature properties. To view the full message received from a topic, a complementary feature was developed to display the entire message on a per-controller basis while preserving the compact network log. Refer to the Feature Controller Logging documentation for details.

End of Document

Feature: Pooling

The Pooling feature provides the ability to load-balance work across a “pool” of active-active Drools-PDP hosts. This particular implementation uses a DMaaP topic for communication between the hosts within the pool.

The pool is adjusted automatically, with no manual intervention when:
  • a new host is brought online

  • a host goes offline, whether gracefully or due to a failure in the host or in the network

Assumptions and Limitations
  • Session persistence is not required

  • Data may be lost when processing is moved from one host to another

  • The entire pool may shut down if the inter-host DMaaP topic becomes inaccessible

_images/poolingDesign.png
Key Points
  • Requests are received on a common DMaaP topic
    • DMaaP distributes the requests randomly to the hosts

    • The request topic should have at least as many partitions as there are hosts

  • Uses a single, internal DMaaP topic for all inter-host communication

  • Allocates buckets to each host
    • Requests are assigned to buckets based on their respective “request IDs”

  • No session persistence

  • No objects copied between hosts

  • Requires feature(s): distributed-locking

  • Precludes feature(s): session-persistence, active-standby, state-management

Example Scenario
  1. Incoming DMaaP message is received on a topic — all hosts are listening, but only one random host receives the message

  2. Decode message to determine “request ID” key (message-specific operation)

  3. Hash request ID to determine the bucket number (see the sketch after this list)

  4. Look up host associated with hash bucket (most likely remote)

  5. Publish “forward” message to internal DMaaP topic, including remote host, bucket number, DMaaP topic information, and message body

  6. Remote host verifies ownership of bucket, and routes the DMaaP message to its own rule engine for processing
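
The bucket computation in step 3 can be sketched as follows; the bucket count and the use of String.hashCode() are assumptions made for illustration, not the feature's actual scheme.

public final class BucketSelector {
    private static final int NUM_BUCKETS = 1024;   // assumed pool-wide constant

    // map a request ID to a bucket number in [0, NUM_BUCKETS)
    public static int bucketOf(String requestId) {
        // floorMod keeps the result non-negative even for negative hash codes
        return Math.floorMod(requestId.hashCode(), NUM_BUCKETS);
    }
}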

The figure below shows several different hosts in a pool. Each host has a copy of the bucket assignments, which specifies which buckets are assigned to which hosts. Incoming requests are mapped to a bucket, and a bucket is mapped to a host, to which the request is routed. The host table includes an entry for each active host in the pool, to which one or more buckets are mapped.

_images/poolingPdps.png
Bucket Reassignment
  • When a host goes up or down, buckets are rebalanced

  • Attempts to maintain an even distribution

  • Leaves buckets with their current owner, where possible

  • Takes a few buckets from each host to assign to new hosts

For example, in the diagram below, the left side shows how 32 buckets might be assigned among four different hosts. When the first host fails, the buckets from host 1 would be reassigned among the remaining hosts, similar to what is shown on the right side of the diagram. Any requests that were being processed by host 1 will be lost and must be restarted. However, the buckets that had already been assigned to the remaining hosts are unchanged, thus requests associated with those buckets are not impacted by the loss of host 1.

_images/poolingBuckets.png
Usage

For pooling to be enabled, the distributed-locking feature must also be enabled.

Enable Feature Pooling
 policy stop

 features enable distributed-locking
 features enable pooling-dmaap

The configuration is located at:

  • $POLICY_HOME/config/feature-pooling-dmaap.properties

Start the PDP-D using pooling
 policy start
Disable the pooling feature
 policy stop
 features disable pooling-dmaap
 policy start

End of Document

Feature: Session Persistence

The session persistence feature allows drools kie sessions to be persisted in a database, surviving PDP-D restarts.

Enable session persistence
 policy stop
 features enable session-persistence

The configuration is located at:

  • $POLICY_HOME/config/feature-session-persistence.properties

Each controller that wants to be started with persistence should contain the following line in its <controller-name>-controller.properties

  • persistence.type=auto

Start the PDP-D using session-persistence
 db-migrator -o upgrade -s ALL
 policy start

Facts will survive PDP-D restarts using native drools capabilities, at the cost of a performance overhead.

Disable the session-persistence feature
 policy stop
 features disable session-persistence
 sed -i "/persistence.type=auto/d" <controller-name>-controller.properties
 db-migrator -o erase -s sessionpersistence   # delete all its database data (optional)
 policy start

End of Document

Feature: State Management

The State Management Feature provides:

  • Node-level health monitoring

  • Monitoring the health of dependency nodes - nodes on which a particular node is dependent

  • Ability to lock/unlock a node and suspend or resume all application processing

  • Ability to suspend application processing on a node that is disabled or in a standby state

  • Interworking/Coordination of state values

  • Support for ITU X.731 states and state transitions for:
    • Administrative State

    • Operational State

    • Availability Status

    • Standby Status

Enabling and Disabling Feature State Management

The State Management Feature is enabled from the command line when logged in as policy after configuring the feature properties file (see Description Details section). From the command line:

  • > features status - Lists the status of features

  • > features enable state-management - Enables the State Management Feature

  • > features disable state-management - Disables the State Management Feature

The Drools PDP must be stopped prior to enabling/disabling features and then restarted after the features have been enabled/disabled.

Enabling State Management Feature
 policy@hyperion-4:/opt/app/policy$ policy stop
 [drools-pdp-controllers]
  L []: Stopping Policy Management... Policy Management (pid=354) is stopping... Policy Management has stopped.
 policy@hyperion-4:/opt/app/policy$ features enable state-management
 name                      version         status
 ----                      -------         ------
 controlloop-utils         1.1.0-SNAPSHOT  disabled
 healthcheck               1.1.0-SNAPSHOT  disabled
 test-transaction          1.1.0-SNAPSHOT  disabled
 eelf                      1.1.0-SNAPSHOT  disabled
 state-management          1.1.0-SNAPSHOT  enabled
 active-standby-management 1.1.0-SNAPSHOT  disabled
 session-persistence       1.1.0-SNAPSHOT  disabled
Description Details
State Model
The state model follows the ITU X.731 standard for state management. The supported state values are:
Administrative State:
  • Locked - All application transaction processing is prohibited

  • Unlocked - Application transaction processing is allowed

Administrative State Transitions:
  • The transition from Unlocked to Locked state is triggered with a Lock operation

  • The transition from the Locked to Unlocked state is triggered with an Unlock operation

Operational State:
  • Enabled - The node is healthy and able to process application transactions

  • Disabled - The node is not healthy and not able to process application transactions

Operational State Transitions:
  • The transition from Enabled to Disabled is triggered with a disableFailed or disableDependency operation

  • The transition from Disabled to Enabled is triggered with an enableNotFailed and enableNoDependency operation

Availability Status:
  • Null - The Operational State is Enabled

  • Failed - The Operational State is Disabled because the node is no longer healthy

  • Dependency - The Operational State is Disabled because all members of a dependency group are disabled

  • Dependency.Failed - The Operational State is Disabled because the node is no longer healthy and all members of a dependency group are disabled

Availability Status Transitions:
  • The transition from Null to Failed is triggered with a disableFailed operation

  • The transition from Null to Dependency is triggered with a disableDependency operation

  • The transition from Failed to Dependency.Failed is triggered with a disableDependency operation

  • The transition from Dependency to Dependency.Failed is triggered with a disableFailed operation

  • The transition from Dependency.Failed to Failed is triggered with an enableNoDependency operation

  • The transition from Dependency.Failed to Dependency is triggered with an enableNotFailed operation

  • The transition from Failed to Null is triggered with an enableNotFailed operation

  • The transition from Dependency to Null is triggered with an enableNoDependency operation

Standby Status:
  • Null - The node does not support active-standby behavior

  • ProvidingService - The node is actively providing application transaction service

  • HotStandby - The node is capable of providing application transaction service, but is currently waiting to be promoted

  • ColdStandby - The node is not capable of providing application service because of a failure

Standby Status Transitions:
  • The transition from Null to HotStandby is triggered by a demote operation when the Operational State is Enabled

  • The transition from Null to ColdStandby is triggered by a demote operation when the Operational State is Disabled

  • The transition from ColdStandby to HotStandby is triggered by a transition of the Operational State from Disabled to Enabled

  • The transition from HotStandby to ColdStandby is triggered by a transition of the Operational State from Enabled to Disabled

  • The transition from ProvidingService to ColdStandby is triggered by a transition of the Operational State from Enabled to Disabled

  • The transition from HotStandby to ProvidingService is triggered by a Promote operation

  • The transition from ProvidingService to HotStandby is triggered by a Demote operation

Database

The State Management feature creates a StateManagement database having three tables:

StateManagementEntity - This table has the following columns:
  • id - Automatically created unique identifier

  • resourceName - The unique identifier for a node

  • adminState - The Administrative State

  • opState - The Operational State

  • availStatus - The Availability Status

  • standbyStatus - The Standby Status

  • created_Date - The timestamp the resource entry was created

  • modifiedDate - The timestamp the resource entry was last modified

ForwardProgressEntity - This table has the following columns:
  • forwardProgressId - Automatically created unique identifier

  • resourceName - The unique identifier for a node

  • fpc_count - A forward progress counter which is periodically incremented if the node is healthy

  • created_date - The timestamp the resource entry was created

  • last_updated - The timestamp the resource entry was last updated

ResourceRegistrationEntity - This table has the following columns:
  • ResourceRegistrationId - Automatically created unique identifier

  • resourceName - The unique identifier for a node

  • resourceUrl - The JMX URL used to check the health of a node

  • site - The name of the site in which the resource resides

  • nodeType - The type of the node (i.e., pdp_xacml, pdp_drools, pap, pap_admin, logparser, brms_gateway, astra_gateway, elk_server, pypdp)

  • created_date - The timestamp the resource entry was created

  • last_updated - The timestamp the resource entry was last updated

Node Health Monitoring

Application Monitoring

Application monitoring can be implemented using the startTransaction() and endTransaction() methods. Whenever a transaction is started, the startTransaction() method is called. If the node is locked, disabled or in a hot/cold standby state, the method will throw an exception. Otherwise, it resets the timer which triggers the default testTransaction() method.

When a transaction completes, calling endTransaction() increments the forward progress counter in the ForwardProgressEntity DB table. As long as this counter is updating, the integrity monitor will assume the node is healthy/sane.

If the startTransaction() method is not called within a provisioned period of time, a timer will expire which calls the testTransaction() method. The default implementation of this method simply increments the forward progress counter. The testTransaction() method may be overwritten to perform a more meaningful test of system sanity, if desired.
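
A sketch of the transaction hooks in use is shown below; the im instance, the processRequest() helper, and the exception handling are illustrative, and only the startTransaction()/endTransaction() names come from the description above.

try {
    im.startTransaction();   // throws if the node is locked, disabled, or in standby
    processRequest();        // the application transaction (hypothetical)
    im.endTransaction();     // increments the forward progress counter
} catch (Exception e) {
    // the node is not providing service: reject or defer the request
}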

If the forward progress counter stops incrementing, the integrity monitoring routine will assume the node application has lost sanity and will trigger a state change (disableFailed) to cause the operational state to become disabled and the availability status attribute to become failed. Once the forward progress counter again begins incrementing, the operational state will return to enabled.

Application Monitoring with AllSeemsWell

The IntegrityMonitor class provides a facility for applications to directly control updates of the forwardprogressentity table. As previously described, startTransaction() and endTransaction() are provided to monitor the forward progress of transactions. This, however, does not monitor things such as internal threads that may be blocked or die. An example is the feature-state-management DroolsPdpElectionHandler.run() method.

The run() method is monitored by a timer task, checkWaitTimer(). If the run() method is stalled for an extended period of time, the checkWaitTimer() method will call StateManagementFeature.allSeemsWell(<className>, <AllSeemsWell State>, <String message>) with the AllSeemsWell state of Boolean.FALSE.

The IntegrityMonitor instance owned by StateManagementFeature will then store an entry in the allSeemsWellMap and block updates of the forwardprogressentity table. This, in turn, will cause the Drools PDP operational state to be set to “disabled” and the availability status to be set to “failed”.

Once the blocking condition is cleared, the checkWaitTimer() method will again call the allSeemsWell() method, this time with an AllSeemsWell state of Boolean.TRUE. This will cause the IntegrityMonitor to remove the entry for that className from the allSeemsWellMap and allow updating of the forwardprogressentity table, so long as there are no other entries in the map.
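
A hedged sketch of how a watchdog task might report through this facility follows, assuming the StateManagementFeature class from the feature-state-management module exposes the allSeemsWell(String, Boolean, String) method described above; the WorkerWatchdog class, its stall detection, and the message text are illustrative.

 import org.onap.policy.drools.statemanagement.StateManagementFeature;

 public class WorkerWatchdog {

     private final StateManagementFeature stateManagement;

     public WorkerWatchdog(StateManagementFeature stateManagement) {
         this.stateManagement = stateManagement;
     }

     public void report(boolean stalled) throws Exception {
         if (stalled) {
             // Blocks forwardprogressentity updates; the node becomes
             // disabled/failed
             stateManagement.allSeemsWell(getClass().getName(),
                     Boolean.FALSE, "worker thread stalled");
         } else {
             // Removes the entry from the allSeemsWellMap; forward
             // progress updates resume if no other entries remain
             stateManagement.allSeemsWell(getClass().getName(),
                     Boolean.TRUE, "worker thread healthy");
         }
     }
 }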

Dependency Monitoring

When a Drools PDP (or other node using the IntegrityMonitor policy/common module) is dependent upon other nodes to perform its function, those other nodes can be defined as dependencies in the properties file. In order for the dependency algorithm to function, the other nodes must also be running the IntegrityMonitor. The Drools PDP periodically checks the state of its dependencies. If all nodes of a dependency node type have failed, the Drools PDP will declare that it can no longer function and change the operational state to disabled and the availability status to dependency.

In addition to the dependency checks on other policy node types, there is a subsystemTest() method that is periodically called by the IntegrityMonitor. In the Drools PDP, subsystemTest() has been overridden to execute an audit of the Database and of the Maven Repository. If the audit is unable to verify the function of either the DB or the Maven Repository, the Drools PDP will declare that it can no longer function and change the operational state to disabled and the availability status to dependency.

When a failed dependency returns to normal operation, the IntegrityMonitor will change the operational state to enabled and availability status to null.
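
A hedged sketch of such an override follows; the AuditingIntegrityMonitor class and its placeholder audits are illustrative, and the exact IntegrityMonitor constructor and exception signatures may vary by release.

 import java.util.Properties;

 import org.onap.policy.common.im.IntegrityMonitor;
 import org.onap.policy.common.im.IntegrityMonitorException;

 public class AuditingIntegrityMonitor extends IntegrityMonitor {

     public AuditingIntegrityMonitor(String resourceName, Properties properties)
             throws IntegrityMonitorException {
         super(resourceName, properties);
     }

     @Override
     public void subsystemTest() throws IntegrityMonitorException {
         // On failure the IntegrityMonitor marks the node disabled with
         // availability status "dependency"
         if (!databaseHealthy() || !repositoryHealthy()) {
             throw new IntegrityMonitorException("subsystem test failed");
         }
     }

     private boolean databaseHealthy() {
         return true; // placeholder for a real DB audit
     }

     private boolean repositoryHealthy() {
         return true; // placeholder for a real repository audit
     }
 }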

External Health Monitoring Interface

The Drools PDP has an HTTP test interface which, when called, will return 200 if all seems well and 500 otherwise. The test interface URL is defined in the properties file.

Site Manager

The Site Manager is not deployed with the Drools PDP, but it is available in the policy/common repository in the site-manager directory. The Site Manager provides a lock/unlock interface for nodes and a way to display node information and status.

The following is from the README file included with the Site Manager.

Site Manager README extract
 Before using 'siteManager', the file 'siteManager.properties' needs to be
 edited to configure the parameters used to access the database:

     javax.persistence.jdbc.driver - typically 'org.mariadb.jdbc.Driver'

     javax.persistence.jdbc.url - URL referring to the database,
         which typically has the form: 'jdbc:mariadb://<host>:<port>/<db>'
         ('<db>' is probably 'xacml' in this case)

     javax.persistence.jdbc.user - the user id for accessing the database

     javax.persistence.jdbc.password - password for accessing the database

 Once the properties file has been updated, the 'siteManager' script can be
 invoked as follows:

     siteManager show [ -s <site> | -r <resourceName> ] :
         display node information (Site, NodeType, ResourceName, AdminState,
                                   OpState, AvailStatus, StandbyStatus)

     siteManager setAdminState { -s <site> | -r <resourceName> } <new-state> :
         update admin state on selected nodes

     siteManager lock { -s <site> | -r <resourceName> } :
         lock selected nodes

     siteManager unlock { -s <site> | -r <resourceName> } :
         unlock selected nodes

Note that the ‘siteManager’ script assumes that the script, ‘site-manager-${project.version}.jar’ file and ‘siteManager.properties’ file are all in the same directory. If the files are separated, the ‘siteManager’ script will need to be modified so it can locate the jar and properties files.

Properties

The feature-state-management.properties file controls the function of the State Management Feature. In general, the properties have adequate descriptions in the file. Parameters which must be replaced prior to usage are indicated thus: ${{parameter to be replaced}}.

feature-state-management.properties
 # DB properties
 javax.persistence.jdbc.driver=org.mariadb.jdbc.Driver
 javax.persistence.jdbc.url=jdbc:mariadb://${{SQL_HOST}}:3306/statemanagement
 javax.persistence.jdbc.user=${{SQL_USER}}
 javax.persistence.jdbc.password=${{SQL_PASSWORD}}

 # DroolsPDPIntegrityMonitor Properties
 # Test interface host and port defaults may be overwritten here
 http.server.services.TEST.host=0.0.0.0
 http.server.services.TEST.port=9981
 #These properties will default to the following if no other values are provided:
 # http.server.services.TEST.restClasses=org.onap.policy.drools.statemanagement.IntegrityMonitorRestManager
 # http.server.services.TEST.managed=false
 # http.server.services.TEST.swagger=true

 #IntegrityMonitor Properties

 # Must be unique across the system
 resource.name=pdp1
 # Name of the site in which this node is hosted
 site_name=site1
 # Forward Progress Monitor update interval seconds
 fp_monitor_interval=30
 # Failed counter threshold before failover
 failed_counter_threshold=3
 # Interval between test transactions when no traffic seconds
 test_trans_interval=10
 # Interval between writes of the FPC to the DB seconds
 write_fpc_interval=5
 # Node type Note: Make sure you don't leave any trailing spaces, or you'll get an 'invalid node type' error!
 node_type=pdp_drools
 # Dependency groups are groups of resources upon which a node operational state is dependent upon.
 # Each group is a comma-separated list of resource names and groups are separated by a semicolon.  For example:
 # dependency_groups=site_1.astra_1,site_1.astra_2;site_1.brms_1,site_1.brms_2;site_1.logparser_1;site_1.pypdp_1
 dependency_groups=
 # When set to true, dependent health checks are performed by using JMX to invoke test() on the dependent.
 # The default false is to use state checks for health.
 test_via_jmx=true
 # This is the max number of seconds beyond which a non incrementing FPC is considered a failure
 max_fpc_update_interval=120
 # Run the state audit every 60 seconds (60000 ms).  The state audit finds stale DB entries in the
 # forwardprogressentity table and marks the node as disabled/failed in the statemanagemententity
 # table. NOTE! It will only run on nodes that have a standbystatus = providingservice.
 # A value of <= 0 will turn off the state audit.
 state_audit_interval_ms=60000
 # The refresh state audit is run every (default) 10 minutes (600000 ms) to clean up any state corruption in the
 # DB statemanagemententity table. It only refreshes the DB state entry for the local node.  That is, it does not
 # refresh the state of any other nodes.  A value <= 0 will turn the audit off. Any other value will override
 # the default of 600000 ms.
 refresh_state_audit_interval_ms=600000

 # Repository audit properties
 # Assume it's the releaseRepository that needs to be audited,
 # because that's the one BRMGW will publish to.
 repository.audit.id=${{releaseRepositoryID}}
 repository.audit.url=${{releaseRepositoryUrl}}
 repository.audit.username=${{repositoryUsername}}
 repository.audit.password=${{repositoryPassword}}
 repository2.audit.id=${{releaseRepository2ID}}
 repository2.audit.url=${{releaseRepository2Url}}
 repository2.audit.username=${{repositoryUsername2}}
 repository2.audit.password=${{repositoryPassword2}}

 # Repository Audit Properties
 # Flag to control the execution of the subsystemTest for the Nexus Maven repository
 repository.audit.is.active=false
 repository.audit.ignore.errors=true
 repository.audit.interval_sec=86400
 repository.audit.failure.threshold=3

 # DB Audit Properties
 # Flag to control the execution of the subsystemTest for the Database
 db.audit.is.active=false

End of Document

Feature: Test Transaction

The Test Transaction feature provides a mechanism by which the health of drools policy controllers can be tested.

When enabled, the feature functions by injecting an event object (identified by a UUID) into the drools session of each policy controller that is active in the system. Only an object with this UUID can trigger the Test Transaction-specific drools logic to execute.

The injection of the event triggers the “TT” rule (see TestTransactionTemplate.drl below) to fire. The “TT” rule simply increments a ForwardProgress counter object, thereby confirming that the drools session for this particular controller is active and firing its rules accordingly. This cycle repeats at 20 second intervals.

If it is ever the case that a drools controller does not have the “TT” rule present in its .drl, or that the forward progress counter is not incremented, the Test Transaction thread for that particular drools session (i.e. controller) is terminated and a message is logged to error.log.

Before the feature is enabled, the following drools rules need to be appended to the rules templates of any use-case that is to be monitored by the feature.

TestTransactionTemplate.drl
 /*
  * ============LICENSE_START=======================================================
  * feature-test-transaction
  * ================================================================================
  * Copyright (C) 2017 AT&T Intellectual Property. All rights reserved.
  * ================================================================================
  * Licensed under the Apache License, Version 2.0 (the "License");
  * you may not use this file except in compliance with the License.
  * You may obtain a copy of the License at
  *
  *      http://www.apache.org/licenses/LICENSE-2.0
  *
  * Unless required by applicable law or agreed to in writing, software
  * distributed under the License is distributed on an "AS IS" BASIS,
  * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  * See the License for the specific language governing permissions and
  * limitations under the License.
  * ============LICENSE_END=========================================================
  */

 package org.onap.policy.drools.rules;

 import java.util.EventObject;

 declare ForwardProgress
     counter : Long
 end

 rule "TT.SETUP"
 when
 then
     ForwardProgress fp = new ForwardProgress();
     fp.setCounter(0L);
     insert(fp);
 end

 rule "TT"
 when
     $fp : ForwardProgress()
     $tt : EventObject(source == "43868e59-d1f3-43c2-bd6f-86f89a61eea5")
 then
     $fp.setCounter($fp.getCounter() + 1);
     retract($tt);
 end

 query "TT.FPC"
     ForwardProgress(counter >= 0, $ttc : counter)
 end

Once the proper artifacts are built and deployed with the addition of the TestTransactionTemplate rules, the feature can then be enabled by entering the following commands:

PDPD Features Command
 policy@hyperion-4:/opt/app/policy$ policy stop
 [drools-pdp-controllers]
  L []: Stopping Policy Management... Policy Management (pid=354) is stopping... Policy Management has stopped.
 policy@hyperion-4:/opt/app/policy$ features enable test-transaction
 name                      version         status
 ----                      -------         ------
 controlloop-utils         1.1.0-SNAPSHOT  disabled
 healthcheck               1.1.0-SNAPSHOT  disabled
 test-transaction          1.1.0-SNAPSHOT  enabled
 eelf                      1.1.0-SNAPSHOT  disabled
 state-management          1.1.0-SNAPSHOT  disabled
 active-standby-management 1.1.0-SNAPSHOT  disabled
 session-persistence       1.1.0-SNAPSHOT  disabled

The output of the enable command will indicate whether or not the feature was enabled successfully.

The policy engine can then be started as usual.

End of Document

Feature: no locking

The no-locking feature allows applications to use a Lock Manager that always succeeds; it never denies a request to acquire a resource lock.

To utilize the no-locking feature, first stop the policy engine, disable the other locking features, and then enable the no-locking feature using the “features” command.

In an official OOM installation, place a script with a .pre.sh suffix:

features.pre.sh
 #!/bin/sh

 sh -c "features disable distributed-locking"
 sh -c "features enable no-locking"

under the directory:

oom/kubernetes/policy/components/policy-drools-pdp/resources/configmaps

and rebuild the policy charts.

At container initialization, the distributed-locking feature will be disabled, and the no-locking feature will be enabled.

End of Document

Data Migration

PDP-D data is migrated across releases with the db-migrator.

The migration occurs when data from a different release is detected. db-migrator will look under $POLICY_HOME/etc/db/migration for databases and SQL scripts to migrate.

$POLICY_HOME/etc/db/migration/<schema-name>/sql/<sql-file>

where <sql-file> is of the form:

<VERSION>-<pdp|feature-name>[-description](.upgrade|.downgrade).sql
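
For example, 1811-distributedlocking.upgrade.sql names the script that upgrades the distributed-locking schema to version 1811.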

The db-migrator tool syntax is

syntax: db-migrator
     -s <schema-name>
     [-b <migration-dir>]
     [-f <from-version>]
     [-t <target-version>]
     -o <operations>

     where <operations>=upgrade|downgrade|auto|version|erase|report

Configuration Options:
     -s|--schema|--database:  schema to operate on ('ALL' to apply on all)
     -b|--basedir: overrides base DB migration directory
     -f|--from: overrides current release version for operations
     -t|--target: overrides target release to upgrade/downgrade

Operations:
     upgrade: upgrade operation
     downgrade: performs a downgrade operation
     auto: autonomous operation, determines upgrade or downgrade
     version: returns the current version; in conjunction with '-f', sets the current version
     erase: erase all data related to <schema> (use with care)
     report: detailed migration report on a schema
     ok: is the migration status valid

See the feature-distributed-locking sql directory for an example of upgrade/downgrade scripts.

The following command will provide a report on the upgrade or downgrade activities:

db-migrator -s ALL -o report

For example, in the official guilin delivery:

policy@dev-drools-0:/tmp/policy-install$ db-migrator -s ALL -o report
+---------+---------+
| name    | version |
+---------+---------+
| pooling | 1811    |
+---------+---------+
+-------------------------------------+-----------+---------+---------------------+
| script                              | operation | success | atTime              |
+-------------------------------------+-----------+---------+---------------------+
| 1804-distributedlocking.upgrade.sql | upgrade   | 1       | 2020-05-22 19:33:09 |
| 1811-distributedlocking.upgrade.sql | upgrade   | 1       | 2020-05-22 19:33:09 |
+-------------------------------------+-----------+---------+---------------------+

In order to use the db-migrator tool, the system must be configured with a database.

SQL_HOST=mariadb

Maven Repositories

The drools libraries in the PDP-D use maven to fetch rules artifacts and software dependencies.

The default settings.xml file specifies the repositories to search. This configuration can be overridden with a custom copy that would sit in a mounted configuration directory. See an example of the OOM override settings.xml.

The default ONAP installation of the control loop child image onap/policy-pdpd-cl:1.6.4 is OFFLINE. In this configuration, the rules artifact and its dependencies are retrieved from the local maven repository. This requires that the maven dependencies be preloaded in the local repository.

An offline configuration requires two items:

  • OFFLINE environment variable set to true.

  • override settings.xml customization, see settings.xml.

The default mode in the onap/policy-drools:1.6.3 image is ONLINE instead.

In ONLINE mode, the controller initialization can take a significant amount of time.

The Policy ONAP installation includes a nexus repository component that can be used to host any arbitrary artifacts that a PDP-D application may require. The following environment variables configure its location:

SNAPSHOT_REPOSITORY_ID=policy-nexus-snapshots
SNAPSHOT_REPOSITORY_URL=http://nexus:8080/nexus/content/repositories/snapshots/
RELEASE_REPOSITORY_ID=policy-nexus-releases
RELEASE_REPOSITORY_URL=http://nexus:8080/nexus/content/repositories/releases/
REPOSITORY_OFFLINE=false

The deploy-artifact tool is used to deploy artifacts to the local or remote maven repositories. It also allows for dependencies to be installed locally. The features tool invokes it when artifacts are to be deployed as part of a feature. The tool can be useful for developers to test a new application in a container.

syntax: deploy-artifact
     [-f|-l|-d]
     -s <custom-settings>
     -a <artifact>

Options:
     -f|--file-repo: deploy in the file repository
     -l|--local-repo: install in the local repository
     -d|--dependencies: install dependencies in the local repository
     -s|--settings: custom settings.xml
     -a|--artifact: file artifact (jar or pom) to deploy and/or install
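
For example, a developer could install a locally built rules artifact and its dependencies into the local maven repository with deploy-artifact -l -d -a <artifact>.jar, where the artifact file name is supplied by the developer.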

AAF

Policy can talk to AAF for authorization requests. To enable AAF set the following environment variables:

AAF=true
AAF_NAMESPACE=org.onap.policy
AAF_HOST=aaf-locate.onap

By default AAF is disabled.

Policy Tool

The policy tool can be used to stop, start, and provide status on the PDP-D.

syntax: policy [--debug] status|start|stop

The status option provides generic status of the system.

[drools-pdp-controllers]
 L []: Policy Management (pid 408) is running
    0 cron jobs installed.

[features]
name                   version         status
----                   -------         ------
healthcheck            1.6.3           enabled
distributed-locking    1.6.3           enabled
lifecycle              1.6.3           enabled
controlloop-management 1.6.4           enabled
controlloop-utils      1.6.4           enabled
controlloop-trans      1.6.4           enabled
controlloop-usecases   1.6.4           enabled

[migration]
pooling: OK @ 1811

The status output contains three sections:

  • PDP-D running status

  • features applied

  • Data migration status on a per-database basis.

The start and stop commands are useful for developers testing functionality on a docker container instance.

Telemetry Shell

PDP-D offers an ample set of REST APIs to debug, introspect, and change state on a running PDP-D. This is known as the telemetry API. The telemetry shell wraps these APIs for shell-like access using http-prompt.

policy@dev-drools-0:~$ telemetry
Version: 1.0.0
https://localhost:9696/policy/pdp/engine> get controllers
HTTP/1.1 200 OK
Content-Length: 13
Content-Type: application/json
Date: Thu, 04 Jun 2020 01:07:38 GMT
Server: Jetty(9.4.24.v20191120)

[
    "usecases"
]

https://localhost:9696/policy/pdp/engine> exit
Goodbye!
policy@dev-drools-0:~$

Other tools

Refer to the $POLICY_HOME/bin/ directory for additional tooling.

PDP-D Docker Container Configuration

Both the PDP-D onap/policy-drools and onap/policy-pdpd-cl images can be used without other components.

There are two types of configuration data provided to the container:

  1. environment variables.

  2. configuration files and shell scripts.

Environment variables

As shown in the controller and endpoint sections, PDP-D configuration can rely on environment variables. In a container environment, these variables are set up by the user in the host environment.

Configuration Files and Shell Scripts

PDP-D is very flexible in its configuration.

The following file types are recognized when mounted under /tmp/policy-install/config.

These are the configuration items that can reside externally and override the default configuration:

  • settings.xml if working with external nexus repositories.

  • standalone-settings.xml if an external policy nexus repository is not available.

  • *.conf files containing environment variables. This is an alternative to using environment variables, as these files are sourced in before the PDP-D starts.

  • features*.zip to load any arbitrary feature not present in the image.

  • *.pre.sh scripts that will be executed before the PDP-D starts.

  • *.post.sh scripts that will be executed after the PDP-D starts.

  • policy-keystore to override the default PDP-D java keystore.

  • policy-truststore to override the default PDP-D java truststore.

  • aaf-cadi.keyfile to override the default AAF CADI Key generated by AAF.

  • *.properties to override or add any properties file for the PDP-D; this includes controller, endpoint, engine, or system configurations.

  • logback*.xml to override the default logging configuration.

  • *.xml to override other .xml configuration that may be used, for example, by an application.

  • *.json JSON configuration files that may be used by an application.

Running PDP-D with a single container

Environment File

First create an environment file (in this example env.conf) to configure the PDP-D.

# SYSTEM software configuration

POLICY_HOME=/opt/app/policy
POLICY_LOGS=/var/log/onap/policy/pdpd
KEYSTORE_PASSWD=Pol1cy_0nap
TRUSTSTORE_PASSWD=Pol1cy_0nap

# Telemetry credentials

TELEMETRY_PORT=9696
TELEMETRY_HOST=0.0.0.0
TELEMETRY_USER=demo@people.osaaf.org
TELEMETRY_PASSWORD=demo123456!

# nexus repository

SNAPSHOT_REPOSITORY_ID=
SNAPSHOT_REPOSITORY_URL=
RELEASE_REPOSITORY_ID=
RELEASE_REPOSITORY_URL=
REPOSITORY_USERNAME=
REPOSITORY_PASSWORD=
REPOSITORY_OFFLINE=true

# Relational (SQL) DB access

SQL_HOST=
SQL_USER=
SQL_PASSWORD=

# AAF

AAF=false
AAF_NAMESPACE=org.onap.policy
AAF_HOST=aaf.api.simpledemo.onap.org

# PDP-D DMaaP configuration channel

PDPD_CONFIGURATION_TOPIC=PDPD-CONFIGURATION
PDPD_CONFIGURATION_API_KEY=
PDPD_CONFIGURATION_API_SECRET=
PDPD_CONFIGURATION_CONSUMER_GROUP=
PDPD_CONFIGURATION_CONSUMER_INSTANCE=
PDPD_CONFIGURATION_PARTITION_KEY=

# PAP-PDP configuration channel

POLICY_PDP_PAP_TOPIC=POLICY-PDP-PAP
POLICY_PDP_PAP_API_KEY=
POLICY_PDP_PAP_API_SECRET=

# DMaaP

DMAAP_SERVERS=localhost

Note that SQL_HOST and the REPOSITORY variables are empty, so the PDP-D will not attempt to integrate with those components.

Configuration

To avoid noise in the logs related to the dmaap configuration, a startup script (noop.pre.sh) is added to the host directory to be mounted; it converts the dmaap endpoints to noop.

noop.pre.sh
#!/bin/bash -x

sed -i "s/^dmaap/noop/g" $POLICY_HOME/config/*.properties
active.post.sh

To put the controller directly in active mode at initialization, place an active.post.sh script under the mounted host directory:

#!/bin/bash -x

bash -c "http --verify=no -a ${TELEMETRY_USER}:${TELEMETRY_PASSWORD} PUT https://localhost:9696/policy/pdp/engine/lifecycle/state/ACTIVE"
Bring up the PDP-D
docker run --rm -p 9696:9696 -v ${PWD}/config:/tmp/policy-install/config --env-file ${PWD}/env/env.conf -it --name PDPD -h pdpd nexus3.onap.org:10001/onap/policy-drools:1.6.3

To run the container in detached mode, add the -d flag.

Note that in this command, we are opening the 9696 telemetry API port to the outside world, the config directory (where the noop.pre.sh customization script resides) is mounted as /tmp/policy-install/config, and the customization environment variables (env/env.conf) are passed into the container.

To open a shell into the PDP-D:

docker exec -it PDPD bash

Once in the container, run tools such as telemetry, db-migrator, and policy to inspect the system state.

Alternatively, to run the telemetry shell and other tools from the host:

docker exec -it PDPD bash -c "/opt/app/policy/bin/telemetry"
docker exec -it PDPD bash -c "/opt/app/policy/bin/policy status"
docker exec -it PDPD bash -c "/opt/app/policy/bin/db-migrator -s ALL -o report"
Controlled instantiation of the PDP-D

Sometimes a developer may want to start and stop the PDP-D manually:

# start a bash

docker run --rm -p 9696:9696 -v ${PWD}/config:/tmp/policy-install/config --env-file ${PWD}/env/env.conf -it --name PDPD -h pdpd nexus3.onap.org:10001/onap/policy-drools:1.6.3 bash

# use this command to start policy applying host customizations from /tmp/policy-install/config

pdpd-entrypoint.sh vmboot

# or use this command to start policy without host customization

policy start

# at any time use the following command to stop the PDP-D

policy stop

# and this command to start the PDP-D back again

policy start

Running PDP-D with nexus and mariadb

docker-compose can be used to test the PDP-D with other components. This is an example configuration that brings up nexus, mariadb and the PDP-D (docker-compose-pdp.yml)

docker-compose-pdp.yml
version: '3'
services:
   mariadb:
      image: mariadb:10.2.25
      container_name: mariadb
      hostname: mariadb
      command: ['--lower-case-table-names=1', '--wait_timeout=28800']
      env_file:
         - ${PWD}/db/db.conf
      volumes:
         - ${PWD}/db:/docker-entrypoint-initdb.d
      ports:
         - "3306:3306"
   nexus:
      image: sonatype/nexus:2.14.8-01
      container_name: nexus
      hostname: nexus
      ports:
         - "8081:8081"
   drools:
      image: nexus3.onap.org:10001/onap/policy-drools:1.6.3
      container_name: drools
      depends_on:
         - mariadb
         - nexus
      hostname: drools
      ports:
         - "9696:9696"
      volumes:
         - ${PWD}/config:/tmp/policy-install/config
      env_file:
         - ${PWD}/env/env.conf

with ${PWD}/db/db.conf:

db.conf
MYSQL_ROOT_PASSWORD=secret
MYSQL_USER=policy_user
MYSQL_PASSWORD=policy_user

and ${PWD}/db/db.sh:

db.sh
for db in support onap_sdk log migration operationshistory10 pooling policyadmin operationshistory
do
    mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "CREATE DATABASE IF NOT EXISTS ${db};"
    mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "GRANT ALL PRIVILEGES ON \`${db}\`.* TO '${MYSQL_USER}'@'%' ;"
done

mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "FLUSH PRIVILEGES;"
env.conf

The environment file env/env.conf for PDP-D can be set up with appropriate variables to point to the nexus instance and the mariadb database:

# SYSTEM software configuration

POLICY_HOME=/opt/app/policy
POLICY_LOGS=/var/log/onap/policy/pdpd
KEYSTORE_PASSWD=Pol1cy_0nap
TRUSTSTORE_PASSWD=Pol1cy_0nap

# Telemetry credentials

TELEMETRY_PORT=9696
TELEMETRY_HOST=0.0.0.0
TELEMETRY_USER=demo@people.osaaf.org
TELEMETRY_PASSWORD=demo123456!

# nexus repository

SNAPSHOT_REPOSITORY_ID=policy-nexus-snapshots
SNAPSHOT_REPOSITORY_URL=http://nexus:8081/nexus/content/repositories/snapshots/
RELEASE_REPOSITORY_ID=policy-nexus-releases
RELEASE_REPOSITORY_URL=http://nexus:8081/nexus/content/repositories/releases/
REPOSITORY_USERNAME=admin
REPOSITORY_PASSWORD=admin123
REPOSITORY_OFFLINE=false

MVN_SNAPSHOT_REPO_URL=https://nexus.onap.org/content/repositories/snapshots/
MVN_RELEASE_REPO_URL=https://nexus.onap.org/content/repositories/releases/

# Relational (SQL) DB access

SQL_HOST=mariadb
SQL_USER=policy_user
SQL_PASSWORD=policy_user

# AAF

AAF=false
AAF_NAMESPACE=org.onap.policy
AAF_HOST=aaf.api.simpledemo.onap.org

# PDP-D DMaaP configuration channel

PDPD_CONFIGURATION_TOPIC=PDPD-CONFIGURATION
PDPD_CONFIGURATION_API_KEY=
PDPD_CONFIGURATION_API_SECRET=
PDPD_CONFIGURATION_CONSUMER_GROUP=
PDPD_CONFIGURATION_CONSUMER_INSTANCE=
PDPD_CONFIGURATION_PARTITION_KEY=

# PAP-PDP configuration channel

POLICY_PDP_PAP_TOPIC=POLICY-PDP-PAP
POLICY_PDP_PAP_API_KEY=
POLICY_PDP_PAP_API_SECRET=

# DMaaP

DMAAP_SERVERS=localhost
prepare.pre.sh

A pre-start script config/prepare.pre.sh can be added to the custom config directory to prepare the PDP-D to activate the distributed-locking feature (using the database) and to use noop topics instead of dmaap topics:

#!/bin/bash

bash -c "/opt/app/policy/bin/features enable distributed-locking"
sed -i "s/^dmaap/noop/g" $POLICY_HOME/config/*.properties
active.post.sh

A post-start script config/active.post.sh can place PDP-D in active mode at initialization:


#!/bin/bash

bash -c "http --verify=no -a ${TELEMETRY_USER}:${TELEMETRY_PASSWORD} PUT https://localhost:9696/policy/pdp/engine/lifecycle/state/ACTIVE"

Bring up the PDP-D, nexus, and mariadb

To bring up the containers:

docker-compose -f docker-compose-pdp.yml up -d

To take it down:

docker-compose -f docker-compose-pdp.yml down -v
Other examples

The reader can also look at the policy/docker repository. More specifically, these directories have examples of other PDP-D configurations.

Configuring the PDP-D in an OOM Kubernetes installation

The PDP-D OOM chart can be customized at the following locations:

  • values.yaml: custom values for your installation.

  • configmaps: place in this directory any configuration extensions or overrides to customize the PDP-D that does not contain sensitive information.

  • secrets: place in this directory any configuration extensions or overrides to customize the PDP-D that does contain sensitive information.

The same customization techniques described in the docker sections for the PDP-D fully apply here; place the corresponding files or scripts in these two directories.

Additional information

For additional information, please see the Drools PDP Development and Testing (In Depth) page.

PDP-D Applications

Overview

PDP-D applications use the PDP-D Engine middleware to provide domain-specific services. See PDP-D Engine for the description of the PDP-D infrastructure.

At this time, Control Loops are the only type of application supported.

Control Loop applications must support the following Policy Type:

  • onap.policies.controlloop.operational.common.Drools (Tosca Compliant Operational Policies)

Software

Source Code repositories

The PDP-D Applications software resides on the policy/drools-applications repository. The actor libraries introduced in the frankfurt release reside in the policy/models repository.

At this time, the control loop application is the only application supported in ONAP. All the application projects reside under the controlloop directory.

Docker Image

See the drools-applications released versions for the latest images:

docker pull onap/policy-pdpd-cl:1.8.2

At the time of this writing, 1.8.2 is the latest version.

The onap/policy-pdpd-cl image extends the onap/policy-drools image with the usecases controller that realizes the control loop application.

Usecases Controller

The usecases controller is the control loop application in ONAP.

There are three parts in this controller:

The kmodule.xml specifies only one session, and declares in the kbase section the two operational policy types that it supports.

The Usecases controller relies on the new Actor framework to interact with remote components as part of a control loop transaction. The reader is referred to the Policy Platform Actor Development Guidelines in the documentation for further information.

Operational Policy Types

The usecases controller supports the following policy type:

  • onap.policies.controlloop.operational.common.Drools.

The onap.policies.controlloop.operational.common.Drools policy type is the Tosca compliant operational policy type introduced in the frankfurt release.

An example of a Tosca Compliant Operational Policy can be found here.

Policy Chaining

The usecases controller supports chaining of multiple operations inside a Tosca Operational Policy. The next operation can be chained based on the result/output from an operation. The possibilities available for chaining are:

  • success: chain after the result of operation is success

  • failure: chain after the result of operation is failure due to issues with controller/actor

  • failure_timeout: chain after the result of operation is failure due to timeout

  • failure_retries: chain after the result of operation is failure after all retries

  • failure_exception: chain after the result of operation is failure due to exception

  • failure_guard: chain after the result of operation is failure due to guard not allowing the operation

An example of policy chaining for VNF can be found here.

An example of policy chaining for PNF can be found here.
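
The operations entries in the policy.vdns.json, policy.vcpe.json, and policy.vfw.json examples later in this document show these chaining fields (success, failure, failure_timeout, failure_retries, failure_exception, and failure_guard) in use.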

Features

Since the PDP-D Control Loop Application image was created from the PDP-D Engine one (onap/policy-drools), it inherits all features and functionality.

The enabled features in the onap/policy-pdpd-cl image are:

  • distributed locking: distributed resource locking.

  • healthcheck: healthcheck.

  • lifecycle: enables the lifecycle APIs.

  • controlloop-trans: control loop transaction tracking.

  • controlloop-management: generic controller capabilities.

  • controlloop-usecases: new controller introduced in the guilin release to realize the ONAP use cases.

The following features are installed but disabled:

  • controlloop-tdjam: experimental java-only controller to be deprecated post guilin.

  • controlloop-utils: actor simulators.

Control Loops Transaction (controlloop-trans)

It tracks Control Loop Transactions and Operations. These are recorded in the $POLICY_LOGS/audit.log and $POLICY_LOGS/metrics.log, and accessible through the telemetry APIs.

Control Loops Management (controlloop-management)

It installs common control loop application resources, and provides telemetry API extensions. Actor configurations are packaged in this feature.

Usecases Controller (controlloop-usecases)

It is the guilin release implementation of the ONAP use cases. It relies on the new Actor model framework to carry out a policy’s execution.

TDJAM Controller (controlloop-tdjam)

This is an experimental, java-only controller that will be deprecated after the guilin release.

Utilities (controlloop-utils)

Enables actor simulators for testing purposes.

Offline Mode

The default ONAP installation in onap/policy-pdpd-cl:1.8.2 is OFFLINE. In this configuration, the rules artifact and the dependencies are all in the local maven repository. This requires that the maven dependencies are preloaded in the local repository.

An offline configuration requires two configuration items:

  • OFFLINE environment variable set to true (see values.yaml).

  • an override of the default settings.xml (see settings.xml).

Running the PDP-D Control Loop Application in a single container

Environment File

First create an environment file (in this example env.conf) to configure the PDP-D.

# SYSTEM software configuration

POLICY_HOME=/opt/app/policy
POLICY_LOGS=/var/log/onap/policy/pdpd
KEYSTORE_PASSWD=Pol1cy_0nap
TRUSTSTORE_PASSWD=Pol1cy_0nap

# Telemetry credentials

TELEMETRY_PORT=9696
TELEMETRY_HOST=0.0.0.0
TELEMETRY_USER=demo@people.osaaf.org
TELEMETRY_PASSWORD=demo123456!

# nexus repository

SNAPSHOT_REPOSITORY_ID=
SNAPSHOT_REPOSITORY_URL=
RELEASE_REPOSITORY_ID=
RELEASE_REPOSITORY_URL=
REPOSITORY_USERNAME=
REPOSITORY_PASSWORD=
REPOSITORY_OFFLINE=true

MVN_SNAPSHOT_REPO_URL=
MVN_RELEASE_REPO_URL=

# Relational (SQL) DB access

SQL_HOST=
SQL_USER=
SQL_PASSWORD=

# AAF

AAF=false
AAF_NAMESPACE=org.onap.policy
AAF_HOST=aaf.api.simpledemo.onap.org

# PDP-D DMaaP configuration channel

PDPD_CONFIGURATION_TOPIC=PDPD-CONFIGURATION
PDPD_CONFIGURATION_API_KEY=
PDPD_CONFIGURATION_API_SECRET=
PDPD_CONFIGURATION_CONSUMER_GROUP=
PDPD_CONFIGURATION_CONSUMER_INSTANCE=
PDPD_CONFIGURATION_PARTITION_KEY=

# PAP-PDP configuration channel

POLICY_PDP_PAP_TOPIC=POLICY-PDP-PAP
POLICY_PDP_PAP_GROUP=defaultGroup

# Symmetric Key for encoded sensitive data

SYMM_KEY=

# Healthcheck Feature

HEALTHCHECK_USER=demo@people.osaaf.org
HEALTHCHECK_PASSWORD=demo123456!

# Pooling Feature

POOLING_TOPIC=POOLING

# PAP

PAP_HOST=
PAP_USERNAME=
PAP_PASSWORD=

# PAP legacy

PAP_LEGACY_USERNAME=
PAP_LEGACY_PASSWORD=

# PDP-X

PDP_HOST=localhost
PDP_PORT=6669
PDP_CONTEXT_URI=pdp/api/getDecision
PDP_USERNAME=policy
PDP_PASSWORD=password
GUARD_DISABLED=true

# DCAE DMaaP

DCAE_TOPIC=unauthenticated.DCAE_CL_OUTPUT
DCAE_SERVERS=localhost
DCAE_CONSUMER_GROUP=dcae.policy.shared

# Open DMaaP

DMAAP_SERVERS=localhost

# AAI

AAI_HOST=localhost
AAI_PORT=6666
AAI_CONTEXT_URI=
AAI_USERNAME=policy
AAI_PASSWORD=policy

# SO

SO_HOST=localhost
SO_PORT=6667
SO_CONTEXT_URI=
SO_URL=https://localhost:6667/
SO_USERNAME=policy
SO_PASSWORD=policy

# VFC

VFC_HOST=localhost
VFC_PORT=6668
VFC_CONTEXT_URI=api/nslcm/v1/
VFC_USERNAME=policy
VFC_PASSWORD=policy

# SDNC

SDNC_HOST=localhost
SDNC_PORT=6670
SDNC_CONTEXT_URI=restconf/operations/
Configuration
noop.pre.sh

To avoid noise in the logs related to the dmaap configuration, a startup script (noop.pre.sh) is added to the host directory to be mounted; it converts the dmaap endpoints to noop.

#!/bin/bash -x

sed -i "s/^dmaap/noop/g" $POLICY_HOME/config/*.properties
features.pre.sh

We can enable the controlloop-utils and disable the distributed-locking feature to avoid using the database.

#!/bin/bash -x

bash -c "/opt/app/policy/bin/features disable distributed-locking"
bash -c "/opt/app/policy/bin/features enable controlloop-utils"
active.post.sh

The active.post.sh script makes the PDP-D active.

#!/bin/bash -x

bash -c "http --verify=no -a ${TELEMETRY_USER}:${TELEMETRY_PASSWORD} PUT https://localhost:9696/policy/pdp/engine/lifecycle/state/ACTIVE"
Actor Properties

In the guilin release, some actor configurations need to be overridden to support http for compatibility with the controlloop-utils feature.

AAI-http-client.properties
http.client.services=AAI

http.client.services.AAI.managed=true
http.client.services.AAI.https=false
http.client.services.AAI.host=${envd:AAI_HOST}
http.client.services.AAI.port=${envd:AAI_PORT}
http.client.services.AAI.userName=${envd:AAI_USERNAME}
http.client.services.AAI.password=${envd:AAI_PASSWORD}
http.client.services.AAI.contextUriPath=${envd:AAI_CONTEXT_URI}
SDNC-http-client.properties
http.client.services=SDNC

http.client.services.SDNC.managed=true
http.client.services.SDNC.https=false
http.client.services.SDNC.host=${envd:SDNC_HOST}
http.client.services.SDNC.port=${envd:SDNC_PORT}
http.client.services.SDNC.userName=${envd:SDNC_USERNAME}
http.client.services.SDNC.password=${envd:SDNC_PASSWORD}
http.client.services.SDNC.contextUriPath=${envd:SDNC_CONTEXT_URI}
VFC-http-client.properties
http.client.services=VFC

http.client.services.VFC.managed=true
http.client.services.VFC.https=false
http.client.services.VFC.host=${envd:VFC_HOST}
http.client.services.VFC.port=${envd:VFC_PORT}
http.client.services.VFC.userName=${envd:VFC_USERNAME}
http.client.services.VFC.password=${envd:VFC_PASSWORD}
http.client.services.VFC.contextUriPath=${envd:VFC_CONTEXT_URI:api/nslcm/v1/}
settings.xml

The standalone-settings.xml file is the default maven settings override in the container.

<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0 http://maven.apache.org/xsd/settings-1.0.0.xsd">

    <offline>true</offline>

    <profiles>
        <profile>
            <id>policy-local</id>
            <repositories>
                <repository>
                    <id>file-repository</id>
                    <url>file:${user.home}/.m2/file-repository</url>
                    <releases>
                        <enabled>true</enabled>
                        <updatePolicy>always</updatePolicy>
                    </releases>
                    <snapshots>
                        <enabled>true</enabled>
                        <updatePolicy>always</updatePolicy>
                    </snapshots>
                </repository>
            </repositories>
        </profile>
    </profiles>

    <activeProfiles>
        <activeProfile>policy-local</activeProfile>
    </activeProfiles>

</settings>
Bring up the PDP-D Control Loop Application
docker run --rm -p 9696:9696 -v ${PWD}/config:/tmp/policy-install/config --env-file ${PWD}/env/env.conf -it --name PDPD -h pdpd nexus3.onap.org:10001/onap/policy-pdpd-cl:1.6.4

To run the container in detached mode, add the -d flag.

Note that we are opening the 9696 telemetry API port to the outside world, mounting the config host directory, and setting environment variables.

To open a shell into the PDP-D:

docker exec -it PDPD bash

Once the container is running, tools such as telemetry, policy, and db-migrator can be run from the host to inspect the system state:

docker exec -it PDPD bash -c "/opt/app/policy/bin/telemetry"
docker exec -it PDPD bash -c "/opt/app/policy/bin/policy status"
docker exec -it PDPD bash -c "/opt/app/policy/bin/db-migrator -s ALL -o report"
Controlled instantiation of the PDP-D Control Loop Application

Sometimes a developer may want to start and stop the PDP-D manually:

# start a bash

docker run --rm -p 9696:9696 -v ${PWD}/config:/tmp/policy-install/config --env-file ${PWD}/env/env.conf -it --name PDPD -h pdpd nexus3.onap.org:10001/onap/policy-pdpd-cl:1.6.4 bash

# use this command to start policy applying host customizations from /tmp/policy-install/config

pdpd-cl-entrypoint.sh vmboot

# or use this command to start policy without host customization

policy start

# at any time use the following command to stop the PDP-D

policy stop

# and this command to start the PDP-D back again

policy start

Scale-out use case testing

The first step is to create the operational.scaleout policy.

policy.vdns.json
{
  "type": "onap.policies.controlloop.operational.common.Drools",
  "type_version": "1.0.0",
  "name": "operational.scaleout",
  "version": "1.0.0",
  "metadata": {
    "policy-id": "operational.scaleout"
  },
  "properties": {
    "id": "ControlLoop-vDNS-6f37f56d-a87d-4b85-b6a9-cc953cf779b3",
    "timeout": 60,
    "abatement": false,
    "trigger": "unique-policy-id-1-scale-up",
    "operations": [
      {
        "id": "unique-policy-id-1-scale-up",
        "description": "Create a new VF Module",
        "operation": {
          "actor": "SO",
          "operation": "VF Module Create",
          "target": {
            "targetType": "VFMODULE",
            "entityIds": {
              "modelInvariantId": "e6130d03-56f1-4b0a-9a1d-e1b2ebc30e0e",
              "modelVersionId": "94b18b1d-cc91-4f43-911a-e6348665f292",
              "modelName": "VfwclVfwsnkBbefb8ce2bde..base_vfw..module-0",
              "modelVersion": 1,
              "modelCustomizationId": "47958575-138f-452a-8c8d-d89b595f8164"
            }
          },
          "payload": {
            "requestParameters": "{\"usePreload\":true,\"userParams\":[]}",
            "configurationParameters": "[{\"ip-addr\":\"$.vf-module-topology.vf-module-parameters.param[9]\",\"oam-ip-addr\":\"$.vf-module-topology.vf-module-parameters.param[16]\",\"enabled\":\"$.vf-module-topology.vf-module-parameters.param[23]\"}]"
          }
        },
        "timeout": 20,
        "retries": 0,
        "success": "final_success",
        "failure": "final_failure",
        "failure_timeout": "final_failure_timeout",
        "failure_retries": "final_failure_retries",
        "failure_exception": "final_failure_exception",
        "failure_guard": "final_failure_guard"
      }
    ]
  }
}

To provision the scale-out policy, issue the following command:

http --verify=no -a "${TELEMETRY_USER}:${TELEMETRY_PASSWORD}" https://localhost:9696/policy/pdp/engine/lifecycle/policies @usecases/policy.vdns.json

Verify that the policy shows with the telemetry tools:

docker exec -it PDPD bash -c "/opt/app/policy/bin/telemetry"
> get /policy/pdp/engine/lifecycle/policies
> get /policy/pdp/engine/controllers/usecases/drools/facts/usecases/controlloops
dcae.vdns.onset.json
{
  "closedLoopControlName": "ControlLoop-vDNS-6f37f56d-a87d-4b85-b6a9-cc953cf779b3",
  "closedLoopAlarmStart": 1463679805324,
  "closedLoopEventClient": "microservice.stringmatcher",
  "closedLoopEventStatus": "ONSET",
  "requestID": "c7c6a4aa-bb61-4a15-b831-ba1472dd4a65",
  "target_type": "VNF",
  "target": "vserver.vserver-name",
  "AAI": {
    "vserver.is-closed-loop-disabled": "false",
    "vserver.prov-status": "ACTIVE",
    "vserver.vserver-name": "OzVServer"
  },
  "from": "DCAE",
  "version": "1.0.2"
}

To initiate a control loop transaction, simulate a DCAE ONSET to Policy:

http --verify=no -a "${TELEMETRY_USER}:${TELEMETRY_PASSWORD}" PUT https://localhost:9696/policy/pdp/engine/topics/sources/noop/DCAE_TOPIC/events @dcae.vdns.onset.json Content-Type:'text/plain'

This will trigger the scale out control loop transaction that will interact with the SO simulator to complete the transaction.

Verify in $POLICY_LOGS/network.log that a FINAL: SUCCESS notification is sent over the POLICY-CL-MGT channel. An entry in the $POLICY_LOGS/audit.log should indicate successful completion as well.

vCPE use case testing

The first step is to create the operational.restart policy.

policy.vcpe.json
{
  "type": "onap.policies.controlloop.operational.common.Drools",
  "type_version": "1.0.0",
  "name": "operational.restart",
  "version": "1.0.0",
  "metadata": {
    "policy-id": "operational.restart"
  },
  "properties": {
    "id": "ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e",
    "timeout": 300,
    "abatement": false,
    "trigger": "unique-policy-id-1-restart",
    "operations": [
      {
        "id": "unique-policy-id-1-restart",
        "description": "Restart the VM",
        "operation": {
          "actor": "APPC",
          "operation": "Restart",
          "target": {
            "targetType": "VNF"
          }
        },
        "timeout": 240,
        "retries": 0,
        "success": "final_success",
        "failure": "final_failure",
        "failure_timeout": "final_failure_timeout",
        "failure_retries": "final_failure_retries",
        "failure_exception": "final_failure_exception",
        "failure_guard": "final_failure_guard"
      }
    ]
  }
}

To provision the operational.restart policy, issue the following command:

http --verify=no -a "${TELEMETRY_USER}:${TELEMETRY_PASSWORD}" https://localhost:9696/policy/pdp/engine/lifecycle/policies @usecases/policy.vcpe.json

Verify that the policy shows with the telemetry tools:

docker exec -it PDPD bash -c "/opt/app/policy/bin/telemetry"
> get /policy/pdp/engine/lifecycle/policies
> get /policy/pdp/engine/controllers/usecases/drools/facts/usecases/controlloops
dcae.vcpe.onset.json
{
  "closedLoopControlName": "ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e",
  "closedLoopAlarmStart": 1463679805324,
  "closedLoopEventClient": "DCAE_INSTANCE_ID.dcae-tca",
  "closedLoopEventStatus": "ONSET",
  "requestID": "664be3d2-6c12-4f4b-a3e7-c349acced200",
  "target_type": "VNF",
  "target": "generic-vnf.vnf-id",
  "AAI": {
    "vserver.is-closed-loop-disabled": "false",
    "vserver.prov-status": "ACTIVE",
    "generic-vnf.vnf-id": "vCPE_Infrastructure_vGMUX_demo_app"
  },
  "from": "DCAE",
  "version": "1.0.2"
}

To initiate a control loop transaction, simulate a DCAE ONSET to Policy:

http --verify=no -a "${TELEMETRY_USER}:${TELEMETRY_PASSWORD}" PUT https://localhost:9696/policy/pdp/engine/topics/sources/noop/DCAE_TOPIC/events @dcae.vcpe.onset.json Content-Type:'text/plain'

This will spawn a vCPE control loop transaction in the PDP-D. Policy will send a restart message over the APPC-LCM-READ channel to APPC and wait for a response.

Verify that you see this message in the network.log by looking for APPC-LCM-READ messages.

Note the sub-request-id value from the restart message in the APPC-LCM-READ channel.

Replace REPLACEME in the appc.vcpe.success.json with this sub-request-id.

appc.vcpe.success.json
{
  "body": {
    "output": {
      "common-header": {
        "timestamp": "2017-08-25T21:06:23.037Z",
        "api-ver": "5.00",
        "originator-id": "664be3d2-6c12-4f4b-a3e7-c349acced200",
        "request-id": "664be3d2-6c12-4f4b-a3e7-c349acced200",
        "sub-request-id": "REPLACEME",
        "flags": {}
      },
      "status": {
        "code": 400,
        "message": "Restart Successful"
      }
    }
  },
  "version": "2.0",
  "rpc-name": "restart",
  "correlation-id": "664be3d2-6c12-4f4b-a3e7-c349acced200-1",
  "type": "response"
}

Send a simulated APPC response back to the PDP-D over the APPC-LCM-WRITE channel.

http --verify=no -a "${TELEMETRY_USER}:${TELEMETRY_PASSWORD}" PUT https://localhost:9696/policy/pdp/engine/topics/sources/noop/APPC-LCM-WRITE/events @appc.vcpe.success.json  Content-Type:'text/plain'

Verify in $POLICY_LOGS/network.log that a FINAL: SUCCESS notification is sent over the POLICY-CL-MGT channel, and an entry is added to the $POLICY_LOGS/audit.log indicating successful completion.

vFirewall use case testing

The first step is to create the operational.modifyconfig policy.

policy.vfw.json
{
  "type": "onap.policies.controlloop.operational.common.Drools",
  "type_version": "1.0.0",
  "name": "operational.modifyconfig",
  "version": "1.0.0",
  "metadata": {
    "policy-id": "operational.modifyconfig"
  },
  "properties": {
    "id": "ControlLoop-vFirewall-d0a1dfc6-94f5-4fd4-a5b5-4630b438850a",
    "timeout": 300,
    "abatement": false,
    "trigger": "unique-policy-id-1-modifyConfig",
    "operations": [
      {
        "id": "unique-policy-id-1-modifyConfig",
        "description": "Modify the packet generator",
        "operation": {
          "actor": "APPC",
          "operation": "ModifyConfig",
          "target": {
            "targetType": "VNF",
            "entityIds": {
              "resourceID": "bbb3cefd-01c8-413c-9bdd-2b92f9ca3d38"
            }
          },
          "payload": {
            "streams": "{\"active-streams\": 5 }"
          }
        },
        "timeout": 240,
        "retries": 0,
        "success": "final_success",
        "failure": "final_failure",
        "failure_timeout": "final_failure_timeout",
        "failure_retries": "final_failure_retries",
        "failure_exception": "final_failure_exception",
        "failure_guard": "final_failure_guard"
      }
    ]
  }
}

To provision the operational.modifyconfig policy, issue the following command:

http --verify=no -a "${TELEMETRY_USER}:${TELEMETRY_PASSWORD}" https://localhost:9696/policy/pdp/engine/lifecycle/policies @usecases/policy.vfw.json

Verify that the policy shows with the telemetry tools:

docker exec -it PDPD bash -c "/opt/app/policy/bin/telemetry"
> get /policy/pdp/engine/lifecycle/policies
> get /policy/pdp/engine/controllers/usecases/drools/facts/usecases/controlloops
dcae.vfw.onset.json
{
  "closedLoopControlName": "ControlLoop-vFirewall-d0a1dfc6-94f5-4fd4-a5b5-4630b438850a",
  "closedLoopAlarmStart": 1463679805324,
  "closedLoopEventClient": "microservice.stringmatcher",
  "closedLoopEventStatus": "ONSET",
  "requestID": "c7c6a4aa-bb61-4a15-b831-ba1472dd4a65",
  "target_type": "VNF",
  "target": "generic-vnf.vnf-name",
  "AAI": {
    "vserver.is-closed-loop-disabled": "false",
    "vserver.prov-status": "ACTIVE",
    "generic-vnf.vnf-name": "fw0002vm002fw002",
    "vserver.vserver-name": "OzVServer"
  },
  "from": "DCAE",
  "version": "1.0.2"
}

To initiate a control loop transaction, simulate a DCAE ONSET to Policy:

http --verify=no -a "${TELEMETRY_USER}:${TELEMETRY_PASSWORD}" PUT https://localhost:9696/policy/pdp/engine/topics/sources/noop/DCAE_TOPIC/events @dcae.vfw.onset.json Content-Type:'text/plain'

This will spawn a vFW control loop transaction in the PDP-D. Policy will send a ModifyConfig message over the APPC-CL channel to APPC and wait for a response. This can be seen by searching the network.log for APPC-CL.

Note the SubRequestId field in the ModifyConfig message in the APPC-CL topic in the network.log.

Send a simulated APPC response back to the PDP-D over the APPC-CL channel. To do this, replace the REPLACEME text in the appc.vcpe.success.json with this SubRequestId.

appc.vcpe.success.json
{
  "CommonHeader": {
    "TimeStamp": 1506051879001,
    "APIver": "1.01",
    "RequestID": "c7c6a4aa-bb61-4a15-b831-ba1472dd4a65",
    "SubRequestID": "REPLACEME",
    "RequestTrack": [],
    "Flags": []
  },
  "Status": {
    "Code": 400,
    "Value": "SUCCESS"
  },
  "Payload": {
    "generic-vnf.vnf-id": "f17face5-69cb-4c88-9e0b-7426db7edddd"
  }
}
http --verify=no -a "${TELEMETRY_USER}:${TELEMETRY_PASSWORD}" PUT https://localhost:9696/policy/pdp/engine/topics/sources/noop/APPC-CL/events @appc.vcpe.success.json Content-Type:'text/plain'

Verify in $POLICY_LOGS/network.log that a FINAL: SUCCESS notification is sent over the POLICY-CL-MGT channel, and an entry is added to the $POLICY_LOGS/audit.log indicating successful completion.

Running PDP-D Control Loop Application with other components

The reader can also look at the policy/docker repository. More specifically, these directories have examples of other PDP-D Control Loop configurations.

Additional information

For additional information, please see the Drools PDP Development and Testing (In Depth) page.

Policy XACML PDP Engine

The ONAP XACML Policy PDP Engine uses an open source implementation of the OASIS XACML 3.0 Standard to support fine-grained policy decisions in ONAP. The XACML 3.0 Standard is a language for both policies and requests/responses for access control decisions. The ONAP XACML PDP translates TOSCA Compliant Policies into the XACML policy language, loads the policies into the XACML engine, and exposes a Decision API which uses the XACML request/response language to render decisions for ONAP components.

ONAP XACML PDP Supported Policy Types

The following Policy Types are supported by the XACML PDP Engine (PDP-X):

Supported Base Policy Types

Application    Base Policy Type                          Action     Description
-----------    ----------------                          ------     -----------
Monitoring     onap.policies.Monitoring                  configure  Control Loop DCAE Monitoring Policies
Guard          onap.policies.controlloop.guard.Common    guard      Control Loop Guard and Coordination Policies
Optimization   onap.policies.Optimization                optimize   Optimization policy types used by OOF
Naming         onap.policies.Naming                      naming     Naming policy types used by SDNC
Native         onap.policies.native.Xacml                native     Native XACML Policies
Match          onap.policies.Match                       native     Matchable Policy Types for the ONAP community to use

Each Policy Type is implemented as an application that extends the XacmlApplicationServiceProvider, and provides a ToscaPolicyTranslator that translates the TOSCA representation of the policy into a XACML OASIS 3.0 standard policy.

By cloning the policy/xacml-pdp repository, a developer can run the JUnit tests for the applications to get a better understanding of how applications are built using translators and of the XACML Policies that are generated for each Policy Type. Each application supports one or more Policy Types and an associated “action” used by the Decision API when making these calls.

See the Policy Platform Development Tools for more information on cloning and developing the policy repositories.

XACML-PDP applications are located in the ‘applications’ sub-module of the policy/xacml-pdp repo.

XACML PDP TOSCA Translators

The following common translators are available in ONAP for use by developers. Each is used or extended by the standard PDP-X applications in ONAP.

StdCombinedPolicyResultsTranslator Translator

A simple translator that wraps the TOSCA policy into a XACML policy and performs matching of the policy based on policy-id and/or policy-type. The use of this translator is discouraged, as it behaves like a database call and does not take advantage of the fine-grained decision making features described by the XACML OASIS 3.0 standard. It is used to support backward compatibility of legacy “configure” policies.

Implementation of Combined Results Translator.

The Monitoring and Naming applications use this translator.

StdMatchableTranslator Translator

A more robust translator that searches the metadata of TOSCA properties for a matchable field set to true. The translator then uses those “matchable” properties to translate a policy into a XACML OASIS 3.0 policy, which allows for fine-grained decision making such that ONAP applications can retrieve the appropriate policies to be enforced at runtime.

Each of the properties designated as “matchable” is treated relative to the others as an “AND” during a Decision request call. In addition, each value of a “matchable” property that is an array is treated as an “OR”. The more properties specified in a decision request, the more fine-grained the returned policy will be. In addition, “policy-type” can be used in a decision request to further filter the decision results to a specific type of policy.
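
For example, assuming a policy with two matchable properties, a decision request that supplies values for both returns only policies where both properties match (AND); if one of those properties holds an array such as ["value1", "value2"], the policy matches when either entry matches (OR). The property values here are illustrative.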

Implementation of Matchable Translator.

The Optimization application uses this translator.

GuardTranslator and CoordinationGuardTranslator

These two translators are used by the Guard application and are very specific to those Policy Types. They are good examples of how to build your own translator for a very specific implementation of a policy type. This can be the case when none of the Std* translators is appropriate to use directly or to override for your application.

Implementation of Guard Translator

Implementation of Coordination Translator

Native XACML OASIS 3.0 XML Policy Translator

This translator pulls a URL encoded XML XACML policy from a TOSCA Policy and loads it into a XACML Engine. This allows native XACML policies to be used to support complex use cases in which a translation from TOSCA to XACML is too difficult.

Implementation of Native Policy Translator

Monitoring Policy Types

These Policy Types are used by Control Loop DCAE microservice components to support monitoring of VNF/PNF entities in an implementation of Control Loops. The DCAE platform calls the Decision API to request the contents of these policies. The implementation creates an overarching XACML policy that contains the TOSCA policy as a payload, which is returned to the DCAE platform.

The following policy types derive from onap.policies.Monitoring:

Derived Policy Type                                                         Action     Description
--------------------------------------------------------------------------  ---------  ------------------------------------
onap.policies.monitoring.tcagen2                                            configure  TCA DCAE microservice gen2 component
onap.policies.monitoring.dcaegen2.collectors.datafile.datafile-app-server   configure  REST Collector
onap.policies.monitoring.docker.sonhandler.app                              configure  SON Handler microservice component

Note

The DCAE project deprecated the TCA DCAE microservice in favor of its gen2 microservice. Thus, the policy type onap.policies.monitoring.cdap.tca.hi.lo.app was removed from the Policy Framework.

This is an example Decision API payload made to retrieve a decision for all deployed Monitoring Policies of a specific Monitoring policy type.

{
  "ONAPName": "DCAE",
  "ONAPComponent": "PolicyHandler",
  "ONAPInstance": "622431a4-9dea-4eae-b443-3b2164639c64",
  "action": "configure",
  "resource": {
      "policy-type": "onap.policies.monitoring.tcagen2"
  }
}

This is an example Decision API payload made to retrieve a decision for a Monitoring Policy by id. This is not recommended, as users may change the id of a policy; it remains available for backward compatibility.

{
  "ONAPName": "DCAE",
  "ONAPComponent": "PolicyHandler",
  "ONAPInstance": "622431a4-9dea-4eae-b443-3b2164639c64",
  "action": "configure",
  "resource": {
      "policy-id": "onap.scaleout.tca"
  }
}

Guard and Control Loop Coordination Policy Types

These Policy Types are used by the Control Loop Drools Engine to guard control loop operations and to coordinate Control Loops during runtime control loop execution.

Policy Type                                                      Action  Description
---------------------------------------------------------------  ------  --------------------------------------------------------------
onap.policies.controlloop.guard.common.FrequencyLimiter         guard   Limits frequency of actions over a specified time period
onap.policies.controlloop.guard.common.Blacklist                guard   Blacklists a regexp of VNF IDs
onap.policies.controlloop.guard.common.MinMax                   guard   For scaling, enforces a min/max number of VNFs
onap.policies.controlloop.guard.common.Filter                   guard   Used for filtering entities in A&AI from Control Loop actions
onap.policies.controlloop.guard.coordination.FirstBlocksSecond  guard   Gives priority to one control loop vs another

This is an example Decision API payload made to retrieve a decision for a Guard Policy Type.

{
  "ONAPName": "Policy",
  "ONAPComponent": "drools-pdp",
  "ONAPInstance": "usecase-template",
  "requestId": "unique-request-id-1",
  "action": "guard",
  "resource": {
      "guard": {
          "actor": "SO",
          "operation": "VF Module Create",
          "clname": "ControlLoop-vDNS-6f37f56d-a87d-4b85-b6a9-cc953cf779b3",
          "target": "vLoadBalancer-00",
          "vfCount": "1"
      }
  }
}

The returned decision simply contains “Permit” or “Deny” in the response, telling the calling application whether it is allowed to perform the operation.

{"status":"Permit"}
Guard Common Base Policy Type

Each guard Policy Type derives from the onap.policies.controlloop.guard.Common base policy type, so they share a set of common properties.

Common Properties for all Guards

Property   Examples                                      Required  Type                          Description
---------  --------------------------------------------  --------  ----------------------------  ----------------------------------------------------------------------------------------------
actor      APPC, SO                                      Required  String                        Identifies the actor involved in the Control Loop operation.
operation  Restart, VF Module Create                     Required  String                        Identifies the Control Loop operation the actor must perform.
timeRange  start_time: T00:00:00Z end_time: T08:00:00Z   Optional  tosca.datatypes.TimeInterval  The time range in which the guard is in effect; per the TOSCA specification, ISO 8601 format.
id         control-loop-id                               Optional  String                        A specific Control Loop id for which the guard is in effect.

Common Guard Policy Type

Frequency Limiter Guard Policy Type

The Frequency Limiter Guard is used to specify limits as to how many operations can occur over a given time period.

Frequency Guard Properties

Property    Examples                                       Required  Type     Description
----------  ---------------------------------------------  --------  -------  ----------------------------------------------
timeWindow  10, 60                                         Required  integer  The time window to count the actions against.
timeUnits   second, minute, hour, day, week, month, year   Required  String   The units of time the window is counting.
limit       5                                              Required  integer  The limit value to be checked against.

tosca_definitions_version: tosca_simple_yaml_1_1_0
topology_template:
  policies:
    -
      guard.frequency.scaleout:
        type: onap.policies.controlloop.guard.common.FrequencyLimiter
        type_version: 1.0.0
        version: 1.0.0
        name: guard.frequency.scaleout
        description: Here we limit the number of Restarts for my-controlloop to 3 in a ten minute period.
        metadata:
          policy-id : guard.frequency.scaleout
        properties:
          actor: APPC
          operation: Restart
          id: my-controlloop
          timeWindow: 10
          timeUnits: minute
          limit: 3

Frequency Limiter Guard Policy Type

Min/Max Guard Policy Type

The Min/Max Guard is used to specify a minimum or maximum number of instantiated entities in A&AI, typically a VFModule for scaling operations. Specify a min value, a max value, or both; at least one must be specified.

Min/Max Guard Properties

Property  Examples                              Required  Type     Description
--------  ------------------------------------  --------  -------  -------------------------------------------------
target    e6130d03-56f1-4b0a-9a1d-e1b2ebc30e0e  Required  String   The target entity that has scaling restricted.
min       1                                     Optional  integer  Minimum value. Optional only if max is specified.
max       5                                     Optional  integer  Maximum value. Optional only if min is specified.

tosca_definitions_version: tosca_simple_yaml_1_1_0
topology_template:
    policies:
    -   guard.minmax.scaleout:
            type: onap.policies.controlloop.guard.common.MinMax
            type_version: 1.0.0
            version: 1.0.0
            name: guard.minmax.scaleout
            metadata:
                policy-id: guard.minmax.scaleout
            properties:
                actor: SO
                operation: VF Module Create
                id: my-controlloop
                target: the-vfmodule-id
                min: 1
                max: 2

Min/Max Guard Policy Type

Blacklist Guard Policy Type

The Blacklist Guard is used to specify a list of A&AI entities that are blacklisted from having an operation performed on them. The recommendation is to use the vnf-id of the A&AI entity.

Blacklist Guard Properties

Property   Examples                              Required  Type            Description
---------  ------------------------------------  --------  --------------  ---------------------------------------------------------------
blacklist  e6130d03-56f1-4b0a-9a1d-e1b2ebc30e0e  Required  list of string  List of target entities that are blacklisted from an operation.

tosca_definitions_version: tosca_simple_yaml_1_1_0
topology_template:
    policies:
    -   guard.blacklist.scaleout:
            type: onap.policies.controlloop.guard.common.Blacklist
            type_version: 1.0.0
            version: 1.0.0
            name: guard.blacklist.scaleout
            metadata:
                policy-id: guard.blacklist.scaleout
            properties:
                actor: APPC
                operation: Restart
                id: my-controlloop
                blacklist:
                - vnf-id-1
                - vnf-id-2

Blacklist Guard Policy Type

Filter Guard Policy Type

The Filter Guard is a more robust guard for blacklisting and whitelisting A&AI entities when performing control loop operations. The intent of this guard is to filter in or out a block of entities, while allowing the ability to filter in or out specific entities. This allows a DevOps team to control the introduction of a Control Loop for a region or for specific VNFs, as well as to block specific VNFs that are negatively affected when poor network conditions arise. Care and testing should be taken to understand the ramifications of combining multiple filters, as well as of their use in conjunction with other Guard Policy Types.

Filter Guard Properties

Property   Examples             Required  Type                                 Description
---------  -------------------  --------  -----------------------------------  --------------------------------------------------------------------------------------------------------------------------
algorithm  blacklist-overrides  Required  String                               Valid values are blacklist-overrides and whitelist-overrides; indicates whether blacklisting or whitelisting has precedence.
filters    see table below      Required  list of onap.datatypes.guard.filter  List of datatypes that describe the filter.

Filter Guard onap.datatypes.guard.filter Properties

Property   Examples              Required  Type     Description
---------  --------------------  --------  -------  -----------------------------------------------------------------------------------------------
field      generic-vnf.vnf-name  Required  String   Field to filter on; must be a string value. See the Policy Type below for valid values.
filter     vnf-id-1              Required  String   The filter being applied.
function   string-equal          Required  String   The function applied to the filter. See the Policy Type below for valid values.
blacklist  true                  Required  boolean  Whether the result of the filter function is blacklisted or whitelisted (e.g. Deny or Permit).

tosca_definitions_version: tosca_simple_yaml_1_1_0
topology_template:
   policies:
   -  filter.block.region.allow.one.vnf:
         description: Block this region from Control Loop actions, but allow a specific vnf.
         type: onap.policies.controlloop.guard.common.Filter
         type_version: 1.0.0
         version: 1.0.0
         properties:
            actor: SO
            operation: VF Module Create
            algorithm: whitelist-overrides
            filters:
            -  field: cloud-region.cloud-region-id
               filter: RegionOne
               function: string-equal
               blacklist: true
            -  field: generic-vnf.vnf-id
               filter: e6130d03-56f1-4b0a-9a1d-e1b2ebc30e0e
               function: string-equal
               blacklist: false
   -  filter.allow.region.block.one.vnf:
         description: allow this region to do Control Loop actions, but block a specific vnf.
         type: onap.policies.controlloop.guard.common.Filter
         type_version: 1.0.0
         version: 1.0.0
         properties:
            actor: SO
            operation: VF Module Create
            algorithm: blacklist-overrides
            filters:
            -  field: cloud-region.cloud-region-id
               filter: RegionTwo
               function: string-equal
               blacklist: false
            -  field: generic-vnf.vnf-id
               filter: f17face5-69cb-4c88-9e0b-7426db7edddd
               function: string-equal
               blacklist: true

Filter Guard Policy Type

Optimization Policy Types

These Policy Types are designed to be used by the OOF project to support several domains, including VNF placement in ONAP. The OOF platform calls the Decision API to request these policies based on the values specified in the onap.policies.Optimization properties. Each of these properties is treated relative to the others as an “AND”. In addition, each value within a property is itself treated as an “OR”.

Policy Type                                              Action
-------------------------------------------------------  --------
onap.policies.Optimization                               optimize
onap.policies.optimization.Service                       optimize
onap.policies.optimization.Resource                      optimize
onap.policies.optimization.resource.AffinityPolicy       optimize
onap.policies.optimization.resource.DistancePolicy       optimize
onap.policies.optimization.resource.HpaPolicy            optimize
onap.policies.optimization.resource.OptimizationPolicy   optimize
onap.policies.optimization.resource.PciPolicy            optimize
onap.policies.optimization.service.QueryPolicy           optimize
onap.policies.optimization.service.SubscriberPolicy      optimize
onap.policies.optimization.resource.Vim_fit              optimize
onap.policies.optimization.resource.VnfPolicy            optimize

The optimization application extends the StdMatchableTranslator in that the application applies a “closest match” algorithm internally after a XACML decision. This filters the results of the decision to return the one or more policies that match the incoming decision request as closely as possible. In addition, there is special consideration for the Subscriber Policy Type: if a decision request contains subscriber context attributes, the application internally applies an initial decision to retrieve the scope of the subscriber. The resulting scope attributes are then added into a final internal decision call.

This is an example Decision API payload made to retrieve a decision for an Optimization Policy Type.

{
  "ONAPName": "OOF",
  "ONAPComponent": "OOF-component",
  "ONAPInstance": "OOF-component-instance",
  "action": "optimize",
  "resource": {
      "scope": [],
      "services": ["vCPE"],
      "resources": ["vGMuxInfra", "vG"],
      "geography": ["US", "INTERNATIONAL"]
  }
}

Native XACML Policy Type

This Policy Type is used by any client or ONAP component that needs native XACML evaluation. A native XACML policy or policy set encoded in XML can be created from this policy type and loaded into the XACML PDP engine by invoking the PAP policy deployment API. Native XACML requests encoded in either JSON or XML can then be sent to the XACML PDP engine for evaluation by invoking the native decision API, and native XACML responses are returned after the requests are evaluated against the matching XACML policies. These native XACML policies, policy sets, requests, and responses all follow the OASIS XACML 3.0 Standard.

Policy Type                 Action  Description
--------------------------  ------  -----------------------------
onap.policies.native.Xacml  native  Any client or ONAP component

According to the XACML 3.0 specification, two content-types are supported and used to present the native requests/responses. They are formally defined as “application/xacml+json” and “application/xacml+xml”.

This is an example Native Decision API payload made to retrieve a decision for whether Julius Hibbert can read http://medico.com/record/patient/BartSimpson.

{
    "Request": {
        "ReturnPolicyIdList": false,
        "CombinedDecision": false,
        "AccessSubject": [
            {
                "Attribute": [
                    {
                        "IncludeInResult": false,
                        "AttributeId": "subject-id",
                        "Value": "Julius Hibbert"
                    }
                ]
            }
        ],
        "Resource": [
            {
                "Attribute": [
                    {
                        "IncludeInResult": false,
                        "AttributeId": "resource-id",
                        "Value": "http://medico.com/record/patient/BartSimpson",
                        "DataType": "anyURI"
                    }
                ]
            }
        ],
        "Action": [
            {
                "Attribute": [
                    {
                        "IncludeInResult": false,
                        "AttributeId": "action-id",
                        "Value": "read"
                    }
                ]
            }
        ],
        "Environment": []
    }
}

Match Policy Type

This Policy Type can be used to design your own Policy Type that utilizes the StdMatchableTranslator, without building a custom application. Design your Policy Type by inheriting from the Match policy type (eg. onap.policies.match.<YourPolicyType>) and adding a matchable metadata field set to true for each property on which you would like to request a Decision. A user then only needs to use the Policy Lifecycle API to add the Policy Type, create policies from it, and deploy those policies to the XACML PDP; Decisions can then be requested without customizing the ONAP installation.

Here is an example Policy Type:

tosca_definitions_version: tosca_simple_yaml_1_1_0
policy_types:
   onap.policies.match.Test:
      derived_from: onap.policies.Match
      version: 1.0.0
      name: onap.policies.match.Test
      description: Test Matching Policy Type to test matchable policies
      properties:
         matchable:
            type: string
            metadata:
               matchable: true
            required: true
         nonmatchable:
            type: string
            required: true

Here are example Policies:

tosca_definitions_version: tosca_simple_yaml_1_1_0
topology_template:
    policies:
    -   test_match_1:
            type: onap.policies.match.Test
            version: 1.0.0
            type_version: 1.0.0
            name: test_match_1
            properties:
               matchable: foo
               nonmatchable: value1
    -   test_match_2:
            type: onap.policies.match.Test
            version: 1.0.0
            type_version: 1.0.0
            name: test_match_2
            properties:
               matchable: bar
               nonmatchable: value2

This is an example Decision API request that can be made:

{
  "ONAPName": "my-ONAP",
  "ONAPComponent": "my-component",
  "ONAPInstance": "my-instance",
  "requestId": "unique-request-1",
  "action": "match",
  "resource": {
      "matchable": "foo"
  }
}

Which would render the following decision response:

{
  "policies": {
    "test_match_1": {
      "type": "onap.policies.match.Test",
      "type_version": "1.0.0",
      "properties": {
        "matchable": "foo",
        "nonmatchable": "value1"
      },
      "name": "test_match_1",
      "version": "1.0.0",
      "metadata": {
        "policy-id": "test_match_1",
        "policy-version": "1.0.0"
      }
    }
  }
}

Overriding or Extending the ONAP XACML PDP Supported Policy Types

It is possible to extend or replace one or more of the existing ONAP application implementations with your own. Since the XACML application loader uses the java.util.ServiceLoader class to search the classpath to find and load applications, it may be necessary to exclude the ONAP packaged applications via the configuration file in order for your custom application to be loaded. This is done by adding an exclusions property to the configuration file with a list of the Java class names you wish to exclude.

A configuration file example is located here
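As a minimal sketch, assuming the exclusions list is carried in an applicationParameters block of the XACML PDP configuration file (the application class names shown are illustrative):

{
  "applicationParameters": {
    "exclusions": [
      "org.onap.policy.xacml.pdp.application.match.MatchPdpApplication",
      "org.onap.policy.xacml.pdp.application.guard.GuardPdpApplication"
    ]
  }
}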

A coding example is available in the JUnit test for the Application Manager called testXacmlPdpApplicationManagerSimple. This example demonstrates how to exclude the Match and Guard applications while verifying that a custom TestGuardOverrideApplication class is loaded and associated with the guard action, thus replacing and extending the guard application.

Note that this feature is exclusive to the XACML PDP and is secondary to the PAP’s ability to group PDPs and declare which Policy Types are supported by a PDP group. For example, even if a PDP group excludes a Policy Type for a XACML PDP, this simply prevents policies from being deployed to that group using the PAP Deployment API. If there are no exclusions in the configuration file, then any application found on the classpath will be loaded. If needed, one can use both the PDP group supported-Policy-Type feature and the exclusions configuration to completely restrict which Policy Types and which applications are loaded at runtime.

For more information on PDP groups and setting supported Policy Types, please refer to the PAP Documentation

Supporting Your Own Policy Types and Translators

In order to support your own custom Policy Type in the XACML PDP Engine, you need to build a Java service application that extends the XacmlApplicationServiceProvider interface and implements a ToscaPolicyTranslator. Your application should register itself as a Java service and be exposed on the classpath so that it can be loaded into the ONAP XACML PDP Engine. Ensure you define and create the TOSCA Policy Type according to the Policy Design and Development documentation. You should then be able to load your custom Policy Type using the Policy Lifecycle API and start creating policies from it.

XacmlApplicationServiceProvider

Interface for XacmlApplicationServiceProvider

See each of the ONAP Policy Type application implementations which re-use the StdXacmlApplicationServiceProvider class. This implementation can be used as a basis for your own custom applications.

Standard Application Service Provider implementation

ToscaPolicyTranslator

Your custom XacmlApplicationServiceProvider must provide an implementation of a ToscaPolicyTranslator.

Interface for ToscaPolicyTranslator

See each of the ONAP Policy Type application implementations, each of which has its own ToscaPolicyTranslator. Most use or extend the StdBaseTranslator, which contains methods that applications can use to support XACML obligations and advice, as well as to return attributes to the calling client applications via the DecisionResponse.

Standard Tosca Policy Translator implementation.

XACML Application and Enforcement Tutorials

The following tutorials can be helpful to get started on building your own decision application as well as building enforcement into your application. They also show how to build and extend both the XacmlApplicationServiceProvider and ToscaPolicyTranslator classes.

Policy XACML - Custom Application Tutorial

This tutorial shows how to build a XACML application for a Policy Type. Please be sure to clone the policy repositories before going through the tutorial. See Policy Platform Development Tools for details.

Design a Policy Type

Follow TOSCA Policy Primer for more information. For the tutorial, we will use this example Policy Type in which an ONAP PEP client would like to enforce an action authorize for a user to execute a permission on an entity. See here for latest Tutorial Policy Type.

Example Tutorial Policy Type
tosca_definitions_version: tosca_simple_yaml_1_1_0
policy_types:
    onap.policies.Authorization:
        derived_from: tosca.policies.Root
        version: 1.0.0
        description: Example tutorial policy type for doing user authorization
        properties:
            user:
                type: string
                required: true
                description: The unique user name
            permissions:
                type: list
                required: true
                description: A list of resource permissions
                entry_schema:
                    type: onap.datatypes.Tutorial
data_types:
    onap.datatypes.Tutorial:
        derived_from: tosca.datatypes.Root
        version: 1.0.0
        properties:
            entity:
                type: string
                required: true
                description: The resource
            permission:
                type: string
                required: true
                description: The permission level
                constraints:
                    - valid_values: [read, write, delete]

We would expect then to be able to create the following policies to allow the demo user to Read/Write an entity called foo, while the audit user can only read the entity called foo. Neither user has Delete permission. See here for latest Tutorial Policies.

Example Policies Derived From Tutorial Policy Type
tosca_definitions_version: tosca_simple_yaml_1_1_0
topology_template:
    policies:
        -
            onap.policy.tutorial.demo:
                type: onap.policies.Authorization
                type_version: 1.0.0
                version: 1.0.0
                metadata:
                    policy-id: onap.policy.tutorial.demo
                    policy-version: 1
                properties:
                    user: demo
                    permissions:
                        -
                            entity: foo
                            permission: read
                        -
                            entity: foo
                            permission: write
        -
            onap.policy.tutorial.audit:
                type: onap.policies.Authorization
                version: 1.0.0
                type_version: 1.0.0
                metadata:
                    policy-id: onap.policy.tutorial.audit
                    policy-version: 1
                properties:
                    user: audit
                    permissions:
                        -
                            entity: foo
                            permission: read
Design Decision Request and expected Decision Response

For the PEP (Policy Enforcement Point) client applications that call the Decision API, you need to design how the Decision API Request resource fields will be sent via the PEP.

Example Decision Request
{
  "ONAPName": "TutorialPEP",
  "ONAPComponent": "TutorialPEPComponent",
  "ONAPInstance": "TutorialPEPInstance",
  "requestId": "unique-request-id-tutorial",
  "action": "authorize",
  "resource": {
    "user": "demo",
    "entity": "foo",
    "permission" : "write"
  }
}

For simplicity, this tutorial expects only a Permit or Deny in the Decision Response. However, one could customize the Decision Response object and send back whatever information is desired.

Example Decision Response
{
    "status":"Permit"
}
Create A Maven Project

Use whatever tool or environment you prefer to create your application project. This tutorial assumes you use Maven to build it.

Add Dependencies Into Application pom.xml

Here we import the XACML PDP Application common dependency which has the interfaces we need to implement. In addition, we are importing a testing dependency that has common code for producing a JUnit test.

pom.xml dependencies
  <dependency>
    <groupId>org.onap.policy.xacml-pdp.applications</groupId>
    <artifactId>common</artifactId>
    <version>2.3.3</version>
  </dependency>
  <dependency>
    <groupId>org.onap.policy.xacml-pdp</groupId>
    <artifactId>xacml-test</artifactId>
    <version>2.3.3</version>
    <scope>test</scope>
  </dependency>
Create META-INF to expose Java Service

The ONAP XACML PDP Engine will not be able to find the tutorial application unless a Java service provider file located in src/main/resources/META-INF/services declares the class that implements the service.

The name of the file must be org.onap.policy.pdp.xacml.application.common.XacmlApplicationServiceProvider, and its contents must be the single line org.onap.policy.tutorial.tutorial.TutorialApplication.

META-INF/services/org.onap.policy.pdp.xacml.application.common.XacmlApplicationServiceProvider
  org.onap.policy.tutorial.tutorial.TutorialApplication
Create A Java Class That Extends StdXacmlApplicationServiceProvider

You could implement XacmlApplicationServiceProvider directly if you wish, but for simplicity, extending StdXacmlApplicationServiceProvider gives you much of the implementation up front. All that remains to be implemented is providing a custom translator.

Custom Tutorial Application Service Provider
package org.onap.policy.tutorial.tutorial;

import org.onap.policy.pdp.xacml.application.common.ToscaPolicyTranslator;
import org.onap.policy.pdp.xacml.application.common.std.StdXacmlApplicationServiceProvider;

public class TutorialApplication extends StdXacmlApplicationServiceProvider {

      @Override
      protected ToscaPolicyTranslator getTranslator(String type) {
              // TODO Auto-generated method stub
              return null;
      }

}
Override Methods for Tutorial

Override these methods to differentiate Tutorial from other applications so that the XACML PDP Engine can determine how to route policy types and policies to the application.

Custom Tutorial Application Service Provider
package org.onap.policy.tutorial.tutorial;

import java.util.Arrays;
import java.util.List;

import org.onap.policy.models.tosca.authorative.concepts.ToscaPolicyTypeIdentifier;
import org.onap.policy.pdp.xacml.application.common.ToscaPolicyTranslator;
import org.onap.policy.pdp.xacml.application.common.std.StdXacmlApplicationServiceProvider;

public class TutorialApplication extends StdXacmlApplicationServiceProvider {

  private final ToscaPolicyTypeIdentifier supportedPolicyType = new ToscaPolicyTypeIdentifier();

  @Override
  public String applicationName() {
      return "tutorial";
  }

  @Override
  public List<String> actionDecisionsSupported() {
      return Arrays.asList("authorize");
  }

  @Override
  public synchronized List<ToscaPolicyTypeIdentifier> supportedPolicyTypes() {
      return Arrays.asList(supportedPolicyType);
  }

  @Override
  public boolean canSupportPolicyType(ToscaPolicyTypeIdentifier policyTypeId) {
      return supportedPolicyType.equals(policyTypeId);
  }

  @Override
      protected ToscaPolicyTranslator getTranslator(String type) {
      // TODO Auto-generated method stub
      return null;
  }

}
Create A Translation Class that extends the ToscaPolicyTranslator Class

Please be sure to review the existing translators in the policy/xacml-pdp repo to see if they could be re-used for your policy type. For the tutorial, we will create our own translator.

The custom translator is not only responsible for translating Policies derived from the Tutorial Policy Type, but also for translating Decision API Requests/Responses to/from the appropriate XACML requests/response objects the XACML engine understands.

Custom Tutorial Translator Class
package org.onap.policy.tutorial.tutorial;

import org.onap.policy.models.decisions.concepts.DecisionRequest;
import org.onap.policy.models.decisions.concepts.DecisionResponse;
import org.onap.policy.models.tosca.authorative.concepts.ToscaPolicy;
import org.onap.policy.pdp.xacml.application.common.ToscaPolicyConversionException;
import org.onap.policy.pdp.xacml.application.common.ToscaPolicyTranslator;

import com.att.research.xacml.api.Request;
import com.att.research.xacml.api.Response;

import oasis.names.tc.xacml._3_0.core.schema.wd_17.PolicyType;

public class TutorialTranslator implements ToscaPolicyTranslator {

  public PolicyType convertPolicy(ToscaPolicy toscaPolicy) throws ToscaPolicyConversionException {
      // TODO Auto-generated method stub
      return null;
  }

  public Request convertRequest(DecisionRequest request) {
      // TODO Auto-generated method stub
      return null;
  }

  public DecisionResponse convertResponse(Response xacmlResponse) {
      // TODO Auto-generated method stub
      return null;
  }

}
Implement the TutorialTranslator Methods

This is the part where knowledge of the XACML OASIS 3.0 specification is required. Please refer to that specification for the many ways to design a XACML policy.

For the tutorial, we will build code that translates the TOSCA Policy into one XACML policy that matches on the user and action. It will then have one or more rules for each entity and permission combination. The default combining algorithm for the XACML rules is “Deny Unless Permit”.

See the tutorial example for details on how the translator is implemented. Note that in the Tutorial Translator, it also shows how a developer could extend the translator to return or act upon obligations, advice and attributes.
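For orientation only, the XACML policy generated for the demo user might be shaped roughly like the following sketch; the rule ids and the match details (shown as comments) are illustrative, and the tutorial translator defines the actual output:

<Policy PolicyId="onap.policy.tutorial.demo" Version="1"
        RuleCombiningAlgId="urn:oasis:names:tc:xacml:3.0:rule-combining-algorithm:deny-unless-permit">
    <Target>
        <!-- Matches the user "demo" and the action "authorize" -->
    </Target>
    <Rule RuleId="demo:rule:1" Effect="Permit">
        <Target>
            <!-- Matches entity "foo" and permission "read" -->
        </Target>
    </Rule>
    <Rule RuleId="demo:rule:2" Effect="Permit">
        <Target>
            <!-- Matches entity "foo" and permission "write" -->
        </Target>
    </Rule>
</Policy>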

Note

There are many ways to build the policy based on the attributes. How to do so is a matter of experience and fine tuning using the many options for combining algorithms, target and/or condition matching and the rich set of functions available.

Use the TutorialTranslator in the TutorialApplication

Be sure to go back to the TutorialApplication and create an instance of the translator to return to the StdXacmlApplicationServiceProvider. The StdXacmlApplicationServiceProvider uses the translator to convert a policy when a new policy is deployed to the ONAP XACML PDP Engine. See the Tutorial Application Example.

Final TutorialApplication Class
package org.onap.policy.tutorial.tutorial;

import java.util.Arrays;
import java.util.List;
import org.onap.policy.models.tosca.authorative.concepts.ToscaPolicyTypeIdentifier;
import org.onap.policy.pdp.xacml.application.common.ToscaPolicyTranslator;
import org.onap.policy.pdp.xacml.application.common.std.StdXacmlApplicationServiceProvider;

public class TutorialApplication extends StdXacmlApplicationServiceProvider {

    private final ToscaPolicyTypeIdentifier supportedPolicyType =
            new ToscaPolicyTypeIdentifier("onap.policies.Authorization", "1.0.0");
    private final TutorialTranslator translator = new TutorialTranslator();

    @Override
    public String applicationName() {
        return "tutorial";
    }

    @Override
    public List<String> actionDecisionsSupported() {
        return Arrays.asList("authorize");
    }

    @Override
    public synchronized List<ToscaPolicyTypeIdentifier> supportedPolicyTypes() {
        return Arrays.asList(supportedPolicyType);
    }

    @Override
    public boolean canSupportPolicyType(ToscaPolicyTypeIdentifier policyTypeId) {
        return supportedPolicyType.equals(policyTypeId);
    }

    @Override
    protected ToscaPolicyTranslator getTranslator(String type) {
        return translator;
    }

}
Create a XACML Request from ONAP Decision Request

The easiest way to do this is to use the annotations feature from XACML PDP library to create an example XACML request. Then create an instance and simply populate it from an incoming ONAP Decision Request.

TutorialRequest Class
   import com.att.research.xacml.std.annotations.XACMLAction;
   import com.att.research.xacml.std.annotations.XACMLRequest;
   import com.att.research.xacml.std.annotations.XACMLResource;
   import com.att.research.xacml.std.annotations.XACMLSubject;
   import java.util.Map;
   import java.util.Map.Entry;
   import lombok.Getter;
   import lombok.Setter;
   import lombok.ToString;
   import org.onap.policy.models.decisions.concepts.DecisionRequest;

   @Getter
   @Setter
   @ToString
   @XACMLRequest(ReturnPolicyIdList = true)
   public class TutorialRequest {
       @XACMLSubject(includeInResults = true)
       private String onapName;

       @XACMLSubject(attributeId = "urn:org:onap:onap-component", includeInResults = true)
       private String onapComponent;

       @XACMLSubject(attributeId = "urn:org:onap:onap-instance", includeInResults = true)
       private String onapInstance;

       @XACMLAction()
       private String action;

       @XACMLResource(attributeId = "urn:org:onap:tutorial-user", includeInResults = true)
       private String user;

       @XACMLResource(attributeId = "urn:org:onap:tutorial-entity", includeInResults = true)
       private String entity;

       @XACMLResource(attributeId = "urn:org:onap:tutorial-permission", includeInResults = true)
       private String permission;

       /**
        * createRequest.
        *
        * @param decisionRequest Incoming
        * @return TutorialRequest object
        */
       public static TutorialRequest createRequest(DecisionRequest decisionRequest) {
           //
           // Create our object
           //
           TutorialRequest request = new TutorialRequest();
           //
           // Add the subject attributes
           //
           request.onapName = decisionRequest.getOnapName();
           request.onapComponent = decisionRequest.getOnapComponent();
           request.onapInstance = decisionRequest.getOnapInstance();
           //
           // Add the action attribute
           //
           request.action = decisionRequest.getAction();
           //
           // Add the resource attributes
           //
           Map<String, Object> resources = decisionRequest.getResource();
           for (Entry<String, Object> entrySet : resources.entrySet()) {
               if ("user".equals(entrySet.getKey())) {
                   request.user = entrySet.getValue().toString();
               }
               if ("entity".equals(entrySet.getKey())) {
                   request.entity = entrySet.getValue().toString();
               }
               if ("permission".equals(entrySet.getKey())) {
                   request.permission = entrySet.getValue().toString();
               }
           }

           return request;
       }
   }

See the Tutorial Request
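A sketch of how the translator’s convertRequest method might use the XACML library’s RequestParser to turn the populated annotation object into a XACML request (the exception handling shown is illustrative):

import com.att.research.xacml.api.DataTypeException;
import com.att.research.xacml.api.Request;
import com.att.research.xacml.std.annotations.RequestParser;
import org.onap.policy.models.decisions.concepts.DecisionRequest;

// In TutorialTranslator: convert the Decision API request into a XACML request.
public Request convertRequest(DecisionRequest decisionRequest) {
    try {
        // RequestParser reads the XACML annotations on TutorialRequest
        // and builds the corresponding XACML attributes.
        return RequestParser.parseRequest(TutorialRequest.createRequest(decisionRequest));
    } catch (IllegalAccessException | DataTypeException e) {
        throw new IllegalStateException("Unable to convert request", e);
    }
}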

Create a JUnit and use the TestUtils.java class in xacml-test dependency

Be sure to create a JUnit test that exercises your translator and application code. You can utilize the TestUtils.java class from the policy/xacml-pdp repo’s xacml-test submodule for utility methods that help build the JUnit test.
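A minimal JUnit sketch, assuming JUnit 4 on the test classpath (the real tutorial test additionally uses TestUtils to load the example policies into the application):

package org.onap.policy.tutorial.tutorial;

import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertTrue;

import org.junit.Test;

public class TutorialApplicationTest {

    @Test
    public void testApplicationBasics() {
        TutorialApplication application = new TutorialApplication();
        // The application should identify itself and support the "authorize" action.
        assertEquals("tutorial", application.applicationName());
        assertTrue(application.actionDecisionsSupported().contains("authorize"));
    }
}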

Build the code and run the JUnit test. It is easiest to run it from a terminal command line using Maven commands.

Running Maven Commands
> mvn clean install
Building Docker Image

To build a docker image that incorporates your application with the XACML PDP Engine, the XACML PDP Engine must be able to find your Java service in the classpath. This is easy to do: create a jar file for your application and copy it into the same directory used to start up the XACML PDP.

Here is a Dockerfile as an example:

Dockerfile
FROM onap/policy-xacml-pdp

ADD maven/${project.build.finalName}.jar /opt/app/policy/pdpx/lib/${project.build.finalName}.jar

RUN mkdir -p /opt/app/policy/pdpx/apps/tutorial

COPY --chown=policy:policy xacml.properties /opt/app/policy/pdpx/apps/tutorial
Download Tutorial Application Example

If you clone the XACML-PDP repo, the tutorial is included for local testing without building your own.

Tutorial code located in xacml-pdp repo

There is an example Docker compose script that you can use to run the Policy Framework components locally and test the tutorial out.

Docker compose script

In addition, there is a POSTMAN collection available for setting up and running tests against a running instance of ONAP Policy Components (api, pap, dmaap-simulator, tutorial-xacml-pdp).

POSTMAN collection for testing

Policy XACML - Policy Enforcement Tutorial

This tutorial shows how to build Policy Enforcement into your application. Please be sure to clone the policy repositories before going through the tutorial. See Policy Platform Development Tools for details.

This tutorial can be found in the XACML PDP repository. See the tutorial

Policy Type being Enforced

For this tutorial, we will be enforcing a Policy Type that inherits from the onap.policies.Monitoring Policy Type. This Policy Type is used by DCAE analytics for configuration purposes. Any inherited Policy Type is automatically supported by the XACML PDP for Decisions.

See the latest example Policy Type

Example Policy Type
  tosca_definitions_version: tosca_simple_yaml_1_1_0
  policy_types:
     onap.policies.Monitoring:
        derived_from: tosca.policies.Root
        version: 1.0.0
        name: onap.policies.Monitoring
        description: a base policy type for all policies that govern monitoring provisioning
     onap.policies.monitoring.MyAnalytic:
        derived_from: onap.policies.Monitoring
        type_version: 1.0.0
        version: 1.0.0
        description: Example analytic
        properties:
           myProperty:
              type: string
              required: true
Example Policy

See the latest example policy

Example Policy
  tosca_definitions_version: tosca_simple_yaml_1_1_0
  topology_template:
     policies:
       -
         policy1:
             type: onap.policies.monitoring.MyAnalytic
             type_version: 1.0.0
             version: 1.0.0
             name: policy1
             metadata:
               policy-id: policy1
               policy-version: 1.0.0
             properties:
               myProperty: value1
Example Decision Requests and Responses

For onap.policies.Monitoring Policy Types, the action used is configure. For configure actions, you can specify a resource by policy-id or policy-type. We recommend using policy-type, as a policy with a given policy-id may not necessarily be deployed. In addition, your application should request all available policies of the policy type(s) it is enforcing.

Example Decision Request
  {
    "ONAPName": "myName",
    "ONAPComponent": "myComponent",
    "ONAPInstance": "myInstanceId",
    "requestId": "1",
    "action": "configure",
    "resource": {
        "policy-type": "onap.policies.monitoring.MyAnalytic"
    }
  }

The configure action will return a payload containing your full policy:
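For example, based on the decision response format shown earlier for the Match application, the response for policy1 would look similar to this (illustrative):

{
  "policies": {
    "policy1": {
      "type": "onap.policies.monitoring.MyAnalytic",
      "type_version": "1.0.0",
      "properties": {
        "myProperty": "value1"
      },
      "name": "policy1",
      "version": "1.0.0",
      "metadata": {
        "policy-id": "policy1",
        "policy-version": "1.0.0"
      }
    }
  }
}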

Making Decision Call in your Application

Your application should be able to make a RESTful API call to the XACML PDP Decision API endpoint. If you already have code that does this, utilize it to make a call similar to the following curl command.
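A sketch of such a call, assuming a local XACML PDP on its default port 6969, the default healthcheck credentials, and the decision request saved in decision-request.json:

curl -k --user 'healthcheck:zb!XztG34' \
     -X POST "https://localhost:6969/policy/pdpx/v1/decision" \
     -H "Accept: application/json" -H "Content-Type: application/json" \
     -d @decision-request.json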

If your application does not have REST http client code, you can use some common code available in the policy/common repository for doing HTTP calls.

Also, if your application wants to use common code to serialize/deserialize Decision Requests and Responses, then you can include the following dependency:
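A sketch of that dependency, assuming the DecisionRequest/DecisionResponse classes from the policy/models repo (use the version matching your release):

<dependency>
  <groupId>org.onap.policy.models</groupId>
  <artifactId>policy-models-decisions</artifactId>
  <version>${policy.models.version}</version>
</dependency>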

Responding to Policy Update Notifications

Your application should also be able to respond to Policy Update Notifications published on the DMaaP topic POLICY-NOTIFICATION. If a user pushes an updated policy, your application should be able to dynamically start enforcing it without a restart.

If your application does not have DMaaP client code, you can use code available in policy/common to receive DMaaP events.

To parse the JSON sent over the topic, your application can use the following dependency:
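A sketch of that dependency, assuming the notification classes (such as PolicyNotification) come from the policy-models-pap module of the policy/models repo:

<dependency>
  <groupId>org.onap.policy.models</groupId>
  <artifactId>policy-models-pap</artifactId>
  <version>${policy.models.version}</version>
</dependency>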

Policy APEX PDP Engine

A short Introduction to APEX

Introduction to APEX

APEX stands for Adaptive Policy EXecution. It is a lightweight engine for the execution of policies. APEX allows you to specify logic as a policy, logic that you can adapt on the fly as your system executes. The APEX policies you design can be really simple, with a single snippet of logic, or can be very complex, with many states and tasks. APEX policies can even be designed to self-adapt at execution time; the choice is yours!

Simple APEX Overview

Figure 1. Simple APEX Overview

The Adaptive Policy Engine in APEX runs your policies. These policies are triggered by incoming events. The logic of the policies executes and produces a response event. The Incoming Context on the incoming event and the Outgoing Context on the outgoing event are simply the fields and attributes of the event. You design the policies that APEX executes and the trigger and action events that your policies accept and produce. Events are fed in and sent out as JSON or XML events over Kafka, a Websocket, a file or named pipe, or even standard input. If you run APEX as a library in your application, you can even feed and receive events over a Java API.

APEX States and Context

Figure 2. APEX States and Context

You design your policy as a chain of states, with each state being fed by the state before. The simplest policy can have just one state. We provide specific support for the four-state MEDA (Match Establish Decide Act) policy state model and the three-state ECA (Event Condition Action) policy state model. APEX is fully distributed. You can decide how many APEX engine instances to run for your application and on which real or virtual hosts to run them.

In APEX, you also have control of the Context used by your policies. Context is simply the state information and data used by your policies. You define what context your policies use and what the scope of that context is. Policy Context is private to a particular policy and is accessible only to whatever APEX engines are running that particular policy. Global Context is available to all policies. External Context is read-only context such as weather or topology information that is provided by other systems. APEX keeps context coordinated across all the instances running a particular policy. If a policy running in an APEX engine changes the value of a piece of context, that value is available to all other APEX engines that use that piece of context. APEX takes care of distribution, locking, writing of context to persistent storage, and monitoring of context.

The APEX Eco-System

Figure 3. The APEX Eco-System

The APEX engine (AP-EN) is available as a Java library for inclusion in your application, as a microservice running in a Docker container, or as a stand-alone service available for integration into your system. APEX also includes a policy editor (AP-AUTH) that allows you to design your policies and a web-based policy management console you use to deploy policies and to keep track of the state of policies and context in policies. Context handling (AP-CTX) is integrated into the APEX engine and policy deployment (AP-DEP) is provided as a servlet running under a web framework such as Apache Tomcat.

APEX Configuration

An APEX engine can be configured to use various combinations of event input handlers, event output handlers, event protocols, context handlers, and logic executors. The system is built using a plugin architecture. Each configuration option is realized by a plugin, which can be loaded and configured when the engine is started. New plugins can be added to the system at any time, though to benefit from a new plugin, an engine will need to be restarted.
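A minimal sketch of such a configuration, wiring a file-based event input and output with the JSON event protocol; the parameter names follow the common APEX configuration layout, and the values are illustrative:

{
  "engineServiceParameters": {
    "name": "MyApexEngine",
    "version": "0.0.1",
    "id": 45,
    "instanceCount": 4,
    "deploymentPort": 12561
  },
  "eventInputParameters": {
    "FirstConsumer": {
      "carrierTechnologyParameters": {
        "carrierTechnology": "FILE",
        "parameters": { "fileName": "/tmp/EventsIn.json" }
      },
      "eventProtocolParameters": { "eventProtocol": "JSON" }
    }
  },
  "eventOutputParameters": {
    "FirstProducer": {
      "carrierTechnologyParameters": {
        "carrierTechnology": "FILE",
        "parameters": { "fileName": "/tmp/EventsOut.json" }
      },
      "eventProtocolParameters": { "eventProtocol": "JSON" }
    }
  }
}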

APEX Configuration Matrix

Figure 4. APEX Configuration Matrix

The APEX distribution already comes with a number of plugins. The figure above shows the provided plugins. Any combination of input, output, event protocol, context handlers, and executors is possible.

APEX Policy Matrix

APEX offers a lot of flexibility for defining, deploying, and executing policies. Based on a theoretic model, it supports virtually any policy model and supports translation of legacy policies into the APEX execution format. However, the most important aspect for using APEX is to decide what policy is needed, what underlying policy concepts should be used, and how the decision logic should be realized. Once these aspects are decided, APEX can be used to execute the policies. If the policy evolves, say from a simple decision table to a fully adaptable policy, only the policy definition requires change. APEX supports all of that.

The figure below shows a (non-exhaustive) matrix, which will help to decide what policy is required to solve your problem. Read the matrix from left to right choosing one cell in each column.

APEX Policy Matrix

Figure 5. APEX Policy Matrix

The policy can support one of a number of stimuli with an associated purpose/model of the policy, for instance:

  • Configuration, i.e. what should happen. An example is an event that states an intended network configuration and the policy should provide the detailed actions for it. The policy can be realized for instance as an obligation policy, a promise or an intent.

  • Report, i.e. something did happen. An example is an event about an error or fault and the policy needs to repair that problem. The policy would usually be an obligation, utility function, or goal policy.

  • Monitoring, i.e. something does happen. An example is a notification about certain network conditions, to which the policy might (or might not) react. The policy will mitigate the monitored events or permit (deny) related actions as an obligation or authorization.

  • Analysis, i.e. why did something happen. An example is an analytic component sending insights about a situation that requires a policy to act on it. The policy can solve the problem, escalate it, or delegate it as a refrain or delegation policy.

  • Prediction, i.e. what will happen next. An example is events that a policy uses to predict a future network condition. The policy can prevent or enforce the prediction as an adaptive policy, a utility function, or a goal.

  • Feedback, i.e. why did something happen or not happen. Similar to analysis, but here the feedback will be in the input event and the policy needs to do something with that information. Feedback can be related to history or experience, for instance a previous policy execution. The policy needs to be context-aware or be a meta-policy.

Once the purpose of the policy is decided, the next step is to look into what context information the policy will require to do its job. This can range from very simple to a lot of different information, for instance:

  • No context, nothing but a trigger event, e.g. a string or a number, is required

  • Event context, the incoming event provides all information (more than a string or number) for the policy

  • Policy context (read only), the policy has access to additional information related to its class but cannot change/alter them

  • Policy context (read and write), the policy has access to additional information related to its class and can alter this information (for instance to record historic information)

  • Global context (read only), the policy has access to additional information of any kind but cannot change/alter them

  • Global context (read and write), the policy has access to additional information of any kind and can alter this information (for instance to record historic information)

The next step is to decide how the policy should do its job, i.e. what flavor it has, how many states are needed, and how many tasks. There are many possible combinations, for instance:

  • Simple / God: a simple policy with 1 state and 1 task, which does everything for the decision-making. This is the ideal policy for simple situations, e.g. deciding on configuration parameters or simple access control.

  • Simple sequence: a simple policy with a number of states each having a single task. This is a very good policy for simple decision-making with different steps. For instance, a classic action policy (ECA) would have 3 states (E, C, and A) with some logic (1 task) in each state.

  • Simple selective: a policy with 1 state but more than one task. Here, the appropriate task (and its logic) will be selected at execution time. This policy is very good for dealing with similar (or the same) situations in different contexts. For instance, the tasks can be related to available external software, or to current workload on the compute node, or to the time of day.

  • Selective: any number of states having any number of tasks (usually more than 1 task). This is a combination of the two policies above, for instance an ECA policy with more than one task in E, C, and A.

  • Classic directed: a policy with more than one state, each having one task, but a non-sequential execution. This means that the sequence of the states is not pre-defined in the policy (as would be for all cases above) but calculated at runtime. This can be good to realize decision trees based on contextual information.

  • Super Adaptive: using the full potential of the APEX policy model, states and tasks and state execution are fully flexible and calculated at runtime (per policy execution). This policy is very close to a general programming system (with only a few limitations), but can solve very hard problems.

The final step is to select a response that the policy creates. Possible responses have been discussed in the literature for a very long time. A few examples are:

  • Obligation (deontic for what should happen)

  • Authorization (e.g. for rule-based or other access control or security systems)

  • Intent (instead of providing detailed actions the response is an intent statement and a further system processes that)

  • Delegation (hand the problem over to someone else, possibly with some information or instructions)

  • Fail / Error (the policy has encountered a problem, and reports it)

  • Feedback (why did the policy make a certain decision)

Flexible Deployment

APEX can be deployed in various ways. The following figure shows a few of these deployment options. Engine and (policy) executors are named UPe (universal policy engine, APEX engine) and UPx (universal policy executor, the APEX internal state machine executor).

APEX Deployment Options

Figure 6. APEX Deployment Options

  1. For an interface or class

    • Either UPx or UPe as association

  2. For an application

    • UPx as object for single policies

    • UPe as object for multiple policies

  3. For a component (as service)

    • UPe as service for requests

    • UPec as service for requests

  4. As a service (PolaS)

    • One or more UPe with service i/f

    • One or more UPec with service i/f

  5. In a control loop

    • UPe as decision making part

    • UPec as decision making part

  6. On cloud compute nodes

    • Nodes with only UPe or UPec

    • Nodes with any combination of UPe, UPec

  7. A cloud example

    • Left: 2 UPec managing several UPe on different cloud nodes

    • Right: 2 large UPec with different UPe/UPec deployments

Flexible Clustering

APEX can be clustered in various ways. The following figure shows a few of these clustering options. Cluster, engine and (policy) executors are named UPec (universal policy cluster), UPe (universal policy engine, APEX engine) and UPx (universal policy executor, the APEX internal state machine executor).

APEX Clustering Options

Figure 7. APEX Clustering Options

  1. Single source/target, single UPx

    • Simple forward

  2. Multiple sources/targets, single UPx

    • Simple forward

  3. Single source/target, multiple UPx

    • Multithreading (MT) in UPe

  4. Multiple sources/targets, multiple UPx instances

    • Simple forward & MT in UPe

  5. Multiple non-MT UPe in UPec

    • Simple event routing

  6. Multiple MT UPe in UPec

    • Simple event routing

  7. Mixed UPe in UPec

    • Simple event routing

  8. Multiple non-MT UPec in UPec

    • Intelligent event routing

  9. Multiple mixed UPec in UPec

    • Intelligent event routing

  10. Mix of UPec in multiple UPec

    • External intelligent event routing

    • Optimized with UPec internal routing

Resources

APEX User Manual

Installation of Apex

Requirements

APEX is 100% written in Java and runs on any platform that supports a JVM, e.g. Windows, Unix, Cygwin. Some APEX applications (such as the monitoring application) come as web archives; they require a WAR-capable web server to be installed.

Installation Requirements
  • Downloaded distribution: JAVA runtime environment (JRE, Java 11 or later, APEX is tested with the OpenJDK Java)

  • Building from source: JAVA development kit (JDK, Java 11 or later, APEX is tested with the OpenJDK Java)

  • A web archive capable webserver, for instance for the monitoring application

  • Sufficient rights to install APEX on the system

  • Installation tools depending on the installation method used:

    • ZIP to extract from a ZIP distribution

      • Windows for instance 7Zip

    • TAR and GZ to extract from the TAR.GZ distribution

      • Windows for instance 7Zip

    • DPKG to install from the DEB distribution

      • Install: sudo apt-get install dpkg

Feature Requirements

APEX supports a number of features that require extra software being installed.

  • Apache Kafka to connect APEX to a Kafka message bus

  • Hazelcast to use distributed hash maps for context

  • Infinispan for distributed context and persistence

  • Docker to run APEX inside a Docker container

Build (Install from Source) Requirements

Installation from source requires a few development tools:

  • GIT to retrieve the source code

  • Java SDK, Java version 11 or later

  • Apache Maven 3 (the APEX build environment)

Get the APEX Source Code

The first APEX source code was hosted on Github in January 2018. By the end of 2018, APEX was added as a project in the ONAP Policy Framework, released later in the ONAP Casablanca release.

The APEX source code is hosted in ONAP as project APEX. The current stable version is in the master branch. Simply clone the master branch from ONAP using HTTPS.

git clone https://gerrit.onap.org/r/policy/apex-pdp
Build APEX

The examples in this document assume that the APEX source repositories are cloned to:

  • Unix, Cygwin: /usr/local/src/apex-pdp

  • Windows: C:\dev\apex-pdp

  • Cygwin: /cygdrive/c/dev/apex-pdp

Important

A Build requires ONAP Nexus. APEX has dependencies on ONAP parent projects, so you might need to adjust your Maven M2 settings. The most current settings can be found in the ONAP oparent repo: Settings.

Important

A Build needs Space. Building APEX requires approximately 2-3 GB of hard disc space: 1 GB for the actual build with full distribution and 1-2 GB for the downloaded dependencies.

Important

A Build requires Internet (for the first build). During the build, a large number of Maven dependencies will be downloaded and stored in the configured local Maven repository. The first standard build (and any first specific build) requires Internet access to download those dependencies.

Use Maven for a standard build without any tests.

Unix, Cygwin

Windows

# cd /usr/local/src/apex-pdp
# mvn clean install -Pdocker -DskipTests
>c:
>cd \dev\apex-pdp
>mvn clean install -Pdocker -DskipTests

The build takes 2-3 minutes on a standard development laptop. It should run through without errors, but with a lot of messages from the build process.

When Maven is finished with the build, the final screen should look similar to this (omitting some success lines):

[INFO] tools .............................................. SUCCESS [  0.248 s]
[INFO] tools-common ....................................... SUCCESS [  0.784 s]
[INFO] simple-wsclient .................................... SUCCESS [  3.303 s]
[INFO] model-generator .................................... SUCCESS [  0.644 s]
[INFO] packages ........................................... SUCCESS [  0.336 s]
[INFO] apex-pdp-package-full .............................. SUCCESS [01:10 min]
[INFO] Policy APEX PDP - Docker build 2.0.0-SNAPSHOT ...... SUCCESS [ 10.307 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 03:43 min
[INFO] Finished at: 2018-09-03T11:56:01+01:00
[INFO] ------------------------------------------------------------------------

The build will have created all artifacts required for an APEX installation. The following example shows the target directory and what it should look like.

Unix, Cygwin

-rwxrwx---+ 1 esvevan Domain Users       772 Sep  3 11:55 apex-pdp-package-full_2.0.0~SNAPSHOT_all.changes*
-rwxrwx---+ 1 esvevan Domain Users 146328082 Sep  3 11:55 apex-pdp-package-full-2.0.0-SNAPSHOT.deb*
-rwxrwx---+ 1 esvevan Domain Users     15633 Sep  3 11:54 apex-pdp-package-full-2.0.0-SNAPSHOT.jar*
-rwxrwx---+ 1 esvevan Domain Users 146296819 Sep  3 11:55 apex-pdp-package-full-2.0.0-SNAPSHOT-tarball.tar.gz*
drwxrwx---+ 1 esvevan Domain Users         0 Sep  3 11:54 archive-tmp/
-rwxrwx---+ 1 esvevan Domain Users        89 Sep  3 11:54 checkstyle-cachefile*
-rwxrwx---+ 1 esvevan Domain Users     10621 Sep  3 11:54 checkstyle-checker.xml*
-rwxrwx---+ 1 esvevan Domain Users       584 Sep  3 11:54 checkstyle-header.txt*
-rwxrwx---+ 1 esvevan Domain Users        86 Sep  3 11:54 checkstyle-result.xml*
drwxrwx---+ 1 esvevan Domain Users         0 Sep  3 11:54 classes/
drwxrwx---+ 1 esvevan Domain Users         0 Sep  3 11:54 dependency-maven-plugin-markers/
drwxrwx---+ 1 esvevan Domain Users         0 Sep  3 11:54 etc/
drwxrwx---+ 1 esvevan Domain Users         0 Sep  3 11:54 examples/
drwxrwx---+ 1 esvevan Domain Users         0 Sep  3 11:55 install_hierarchy/
drwxrwx---+ 1 esvevan Domain Users         0 Sep  3 11:54 maven-archiver/

Windows

03/09/2018  11:55    <DIR>          .
03/09/2018  11:55    <DIR>          ..
03/09/2018  11:55       146,296,819 apex-pdp-package-full-2.0.0-SNAPSHOT-tarball.tar.gz
03/09/2018  11:55       146,328,082 apex-pdp-package-full-2.0.0-SNAPSHOT.deb
03/09/2018  11:54            15,633 apex-pdp-package-full-2.0.0-SNAPSHOT.jar
03/09/2018  11:55               772 apex-pdp-package-full_2.0.0~SNAPSHOT_all.changes
03/09/2018  11:54    <DIR>          archive-tmp
03/09/2018  11:54                89 checkstyle-cachefile
03/09/2018  11:54            10,621 checkstyle-checker.xml
03/09/2018  11:54               584 checkstyle-header.txt
03/09/2018  11:54                86 checkstyle-result.xml
03/09/2018  11:54    <DIR>          classes
03/09/2018  11:54    <DIR>          dependency-maven-plugin-markers
03/09/2018  11:54    <DIR>          etc
03/09/2018  11:54    <DIR>          examples
03/09/2018  11:55    <DIR>          install_hierarchy
03/09/2018  11:54    <DIR>          maven-archiver
8 File(s)    292,652,686 bytes
9 Dir(s)  14,138,720,256 bytes free
Install APEX

APEX can be installed in different ways:

  • Unix: automatically using dpkg from .deb archive

  • Windows, Unix, Cygwin: manually from a .tar.gz archive

  • Windows, Unix, Cygwin: build from source using Maven, then install manually

Install with DPKG

You can get the APEX debian package from the ONAP Nexus Repository.

The install distributions of APEX automatically install the system. The installation directory is /opt/app/policy/apex-pdp. Log files are located in /var/log/onap/policy/apex-pdp. The latest APEX version will be available as /opt/app/policy/apex-pdp/apex-pdp.

For the installation, a new user apexuser and a new group apexuser will be created. This user owns the installation directories and the log file location. The user is also used by the standard APEX start scripts to run APEX with this user’s permissions.

DPKG Installation

# sudo dpkg -i apex-pdp-package-full-2.0.0-SNAPSHOT.deb
Selecting previously unselected package apex-uservice.
(Reading database ... 288458 files and directories currently installed.)
Preparing to unpack apex-pdp-package-full-2.0.0-SNAPSHOT.deb ...
********************preinst*******************
arguments install
**********************************************
creating group apexuser . . .
creating user apexuser . . .
Unpacking apex-uservice (2.0.0-SNAPSHOT) ...
Setting up apex-uservice (2.0.0-SNAPSHOT) ...
********************postinst****************
arguments configure
***********************************************

Once the installation is finished, APEX is fully installed and ready to run.

Install Manually from Archive (Unix, Cygwin)

You can download a tar.gz archive from the ONAP Nexus Repository.

Create a directory where APEX should be installed. Extract the tar archive. The following example shows how to install APEX in /opt/apex and create a link to /opt/apex/apex for the most recent installation.

# cd /opt
# mkdir apex
# cd apex
# mkdir apex-full-2.0.0-SNAPSHOT
# tar xvfz ~/Downloads/apex-pdp-package-full-2.0.0-SNAPSHOT.tar.gz -C apex-full-2.0.0-SNAPSHOT
# ln -s apex-full-2.0.0-SNAPSHOT apex
Install Manually from Archive (Windows, 7Zip, GUI)

You can download a tar.gz archive from the ONAP Nexus Repository.

Copy the tar.gz file into the install folder (in this example C:\apex). Assuming you are using 7Zip, right-click on the file and extract the tar archive. Note: the screenshots might show an older version than you have.

Now, right-click on the newly created TAR file and extract the actual APEX distribution. Inside the new APEX folder you will see the main directories: bin, etc, examples, lib, and war.

Once extracted, please rename the created folder to apex-full-2.0.0-SNAPSHOT. This will keep the directory name in line with the rest of this documentation.

Install Manually from Archive (Windows, 7Zip, CMD)

You can download a tar.gz archive from the ONAP Nexus Repository.

Copy the tar.gz file into the install folder (in this example C:\apex). Start cmd, for instance by typing Windows+R and entering cmd in the dialog. Assuming 7Zip is installed in the standard folder, simply run the following commands (for the APEX 2.0.0-SNAPSHOT full distribution):

>c:
>cd \apex
>"\Program Files\7-Zip\7z.exe" x apex-pdp-package-full-2.0.0-SNAPSHOT.tar.gz -so | "\Program Files\7-Zip\7z.exe" x -aoa -si -ttar -o"apex-full-2.0.0-SNAPSHOT"

APEX is now installed in the folder C:\apex\apex-full-2.0.0-SNAPSHOT.

Build from Source
Build and Install Manually (Unix, Windows, Cygwin)

Clone the APEX GIT repositories into a directory. Go to that directory. Use Maven to build APEX (all details on building APEX from source can be found in APEX HowTo: Build). Install from the created artifacts (rpm, deb, tar.gz, or copying manually).

The following example shows how to build the APEX system, without tests (-DskipTests) to save some time. It assumes that the APEX GIT repositories are cloned to:

  • Unix, Cygwin: /usr/local/src/apex

  • Windows: C:\dev\apex

Unix, Cygwin

Windows

# cd /usr/local/src/apex
# mvn clean install -Pdocker -DskipTests
>c:
>cd \dev\apex
>mvn clean install -Pdocker -DskipTests

The build takes about 2 minutes without tests and about 4-5 minutes with tests on a standard development laptop. It should run through without errors, but with a lot of messages from the build process. If built with tests (i.e. without -DskipTests), there will be error messages and stack trace prints from some tests. This is normal, as long as the build finishes successfully.

When Maven is finished with the build, the final screen should look similar to this (omitting some success lines):

[INFO] tools .............................................. SUCCESS [  0.248 s]
[INFO] tools-common ....................................... SUCCESS [  0.784 s]
[INFO] simple-wsclient .................................... SUCCESS [  3.303 s]
[INFO] model-generator .................................... SUCCESS [  0.644 s]
[INFO] packages ........................................... SUCCESS [  0.336 s]
[INFO] apex-pdp-package-full .............................. SUCCESS [01:10 min]
[INFO] Policy APEX PDP - Docker build 2.0.0-SNAPSHOT ...... SUCCESS [ 10.307 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 03:43 min
[INFO] Finished at: 2018-09-03T11:56:01+01:00
[INFO] ------------------------------------------------------------------------

The build will have created all artifacts required for an APEX installation. The following example shows how to change to the target directory and what it should look like.

Unix, Cygwin

# cd packages/apex-pdp-package-full/target
# ls -l
-rwxrwx---+ 1 esvevan Domain Users       772 Sep  3 11:55 apex-pdp-package-full_2.0.0~SNAPSHOT_all.changes*
-rwxrwx---+ 1 esvevan Domain Users 146328082 Sep  3 11:55 apex-pdp-package-full-2.0.0-SNAPSHOT.deb*
-rwxrwx---+ 1 esvevan Domain Users     15633 Sep  3 11:54 apex-pdp-package-full-2.0.0-SNAPSHOT.jar*
-rwxrwx---+ 1 esvevan Domain Users 146296819 Sep  3 11:55 apex-pdp-package-full-2.0.0-SNAPSHOT-tarball.tar.gz*
drwxrwx---+ 1 esvevan Domain Users         0 Sep  3 11:54 archive-tmp/
-rwxrwx---+ 1 esvevan Domain Users        89 Sep  3 11:54 checkstyle-cachefile*
-rwxrwx---+ 1 esvevan Domain Users     10621 Sep  3 11:54 checkstyle-checker.xml*
-rwxrwx---+ 1 esvevan Domain Users       584 Sep  3 11:54 checkstyle-header.txt*
-rwxrwx---+ 1 esvevan Domain Users        86 Sep  3 11:54 checkstyle-result.xml*
drwxrwx---+ 1 esvevan Domain Users         0 Sep  3 11:54 classes/
drwxrwx---+ 1 esvevan Domain Users         0 Sep  3 11:54 dependency-maven-plugin-markers/
drwxrwx---+ 1 esvevan Domain Users         0 Sep  3 11:54 etc/
drwxrwx---+ 1 esvevan Domain Users         0 Sep  3 11:54 examples/
drwxrwx---+ 1 esvevan Domain Users         0 Sep  3 11:55 install_hierarchy/
drwxrwx---+ 1 esvevan Domain Users         0 Sep  3 11:54 maven-archiver/

Windows

>cd packages\apex-pdp-package-full\target
>dir
03/09/2018  11:55    <DIR>          .
03/09/2018  11:55    <DIR>          ..
03/09/2018  11:55       146,296,819 apex-pdp-package-full-2.0.0-SNAPSHOT-tarball.tar.gz
03/09/2018  11:55       146,328,082 apex-pdp-package-full-2.0.0-SNAPSHOT.deb
03/09/2018  11:54            15,633 apex-pdp-package-full-2.0.0-SNAPSHOT.jar
03/09/2018  11:55               772 apex-pdp-package-full_2.0.0~SNAPSHOT_all.changes
03/09/2018  11:54    <DIR>          archive-tmp
03/09/2018  11:54                89 checkstyle-cachefile
03/09/2018  11:54            10,621 checkstyle-checker.xml
03/09/2018  11:54               584 checkstyle-header.txt
03/09/2018  11:54                86 checkstyle-result.xml
03/09/2018  11:54    <DIR>          classes
03/09/2018  11:54    <DIR>          dependency-maven-plugin-markers
03/09/2018  11:54    <DIR>          etc
03/09/2018  11:54    <DIR>          examples
03/09/2018  11:55    <DIR>          install_hierarchy
03/09/2018  11:54    <DIR>          maven-archiver
8 File(s)    292,652,686 bytes
9 Dir(s)  14,138,720,256 bytes free

Now, take the .deb or the .tar.gz file and install APEX. Alternatively, copy the content of the folder install_hierarchy to your APEX directory.
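For the manual copy, a minimal sketch (assuming the build directory layout shown above and an existing APEX installation directory $APEX_HOME):

# cp -pr packages/apex-pdp-package-full/target/install_hierarchy/* $APEX_HOME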

Installation Layout

A full installation of APEX comes with the following layout.

$APEX_HOME
    ├───bin             (1)
    ├───etc             (2)
    │   ├───editor
    │   ├───hazelcast
    │   ├───infinispan
    │   └───META-INF
    ├───examples            (3)
    │   ├───config          (4)
    │   ├───docker          (5)
    │   ├───events          (6)
    │   ├───html            (7)
    │   ├───models          (8)
    │   └───scripts         (9)
    ├───lib             (10)
    │   └───applications        (11)
    └───war             (12)

1

binaries, mainly scripts (bash and bat) to start the APEX engine and applications

2

configuration files, such as logback (logging) and third party library configurations

3

example policy models to get started

4

configurations for the examples (with sub directories for individual examples)

5

Docker files and additional Docker instructions for the examples

6

example events for the examples (with sub directories for individual examples)

7

HTML files for some examples, e.g. the Decisionmaker example

8

the policy models, generated for each example (with sub directories for individual examples)

9

additional scripts for the examples (with sub directories for individual examples)

10

the library folder with all Java JAR files

11

applications, also known as jar with dependencies (or fat jars), individually deployable

12

WAR files for web applications

System Configuration

Once APEX is installed, a few configurations need to be done:

  • Create an APEX user and an APEX group (optional if not installed using RPM or DPKG)

  • Create environment settings for APEX_HOME and APEX_USER, required by the start scripts

  • Change settings of the logging framework (optional)

  • Create directories for logging, required (execution might fail if directories do not exist or cannot be created)

APEX User and Group

On smaller installations and test systems, APEX can run as any user or group.

However, if APEX is installed in production, we strongly recommend you set up a dedicated user for running APEX. This will isolate the execution of APEX to that user. We recommend you use the userid apexuser but you may use any user you choose.

The following example, for UNIX, creates a group called apexuser, an APEX user called apexuser, adds the group to the user, and changes ownership of the APEX installation to the user. Substitute <apex-dir> with the directory where APEX is installed.

1# sudo groupadd apexuser
2# sudo useradd -g apexuser apexuser
3# sudo chown -R apexuser:apexuser <apex-dir>

For other operating systems please consult your manual or system administrator.

Environment Settings: APEX_HOME and APEX_USER

The provided start scripts for APEX require two environment variables being set:

  • APEX_USER with the user under whose name and permissions APEX should be started (Unix only)

  • APEX_HOME with the directory where APEX is installed (Unix, Windows, Cygwin)

The first row in the following table shows how to set these environment variables temporarily (assuming the user is apexuser). The second row shows how to verify the settings. The last row explains how to set those variables permanently.

Unix, Cygwin (bash/tcsh)

Windows

# export APEX_USER=apexuser
# cd /opt/app/policy/apex-pdp
# export APEX_HOME=`pwd`
>set APEX_HOME=C:\apex\apex-full-2.0.0-SNAPSHOT
# env | grep APEX
# APEX_USER=apexuser
# APEX_HOME=/opt/app/policy/apex-pdp
>set APEX_HOME
APEX_HOME=C:\apex\apex-full-2.0.0-SNAPSHOT
Making Environment Settings Permanent (Unix, Cygwin)

For a per-user setting, edit the user's bash or tcsh settings in ~/.bashrc or ~/.tcshrc. For system-wide settings, edit /etc/profile (requires root permissions).
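For example, for a bash user the following lines could be appended to ~/.bashrc (a minimal sketch; adjust the installation path to your system):

export APEX_USER=apexuser
export APEX_HOME=/opt/app/policy/apex-pdp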

Making Environment Settings Permanent (Windows)

On Windows 7 do

  • Click on the Start Menu

  • Right click on Computer

  • Select Properties

On Windows 8/10 do

  • Click on the Start Menu

  • Select System

Then do the following

  • Select Advanced System Settings

  • On the Advanced tab, click the Environment Variables button

  • Edit an existing variable, or create a new System variable: ‘Variable name’=”APEX_HOME”, ‘Variable value’=”C:\apex\apex-full-2.0.0-SNAPSHOT”

For the settings to take effect, an application needs to be restarted (e.g. any open cmd window).
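Alternatively, the variable can be set permanently for the current user from a command prompt using the setx command (a sketch; the setting only becomes visible in newly opened cmd windows):

>setx APEX_HOME "C:\apex\apex-full-2.0.0-SNAPSHOT"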

Edit the APEX Logging Settings

Configure the APEX logging settings to your requirements, for instance:

  • change the directory where logs are written to, or

  • change the log levels

Edit the file $APEX_HOME/etc/logback.xml for any required changes. To change the log directory change the line

<property name="logDir" value="/var/log/onap/policy/apex-pdp/" />

to

<property name="logDir" value="/PATH/TO/LOG/DIRECTORY/" />

On Windows, it is recommended to change the log directory to:

<property name="logDir" value="C:/apex/apex-full-2.0.0-SNAPSHOT/logs" />

Note: Be careful about when to use \ vs. / as the path separator!
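To change the log levels, edit the level attributes in the same file. A minimal sketch, assuming the default logback.xml structure with a console appender named STDOUT:

<!-- raise the root log level, e.g. from info to debug; the appender name is an example -->
<root level="debug">
  <appender-ref ref="STDOUT" />
</root>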

Create Directories for Logging

Make sure that the log directory exists. This is important when APEX was installed manually or when the log directory was changed in the settings (see above).

Unix, Cygwin

Windows

sudo mkdir -p /var/log/onap/policy/apex-pdp
sudo chown -R apexuser:apexuser /var/log/onap/policy/apex-pdp
>mkdir C:\apex\apex-full-2.0.0-SNAPSHOT\logs
Verify the APEX Installation

When APEX is installed and all settings are in place, the installation can be verified.

Verify Installation - run Engine

A simple verification of an APEX installation can be done by starting the APEX engine without specifying a TOSCA policy. On Unix (or Cygwin) start the engine using $APEX_HOME/bin/apexApps.sh engine. On Windows start the engine using %APEX_HOME%\bin\apexApps.bat engine. The engine will fail to fully start. However, if the output looks similar to the following lines, the APEX installation is working.

Starting Apex service with parameters [] . . .
start of Apex service failed.
org.onap.policy.apex.model.basicmodel.concepts.ApexException: Arguments validation failed.
    at org.onap.policy.apex.service.engine.main.ApexMain.populateApexParameters(ApexMain.java:238)
    at org.onap.policy.apex.service.engine.main.ApexMain.<init>(ApexMain.java:86)
    at org.onap.policy.apex.service.engine.main.ApexMain.main(ApexMain.java:351)
Caused by: org.onap.policy.apex.model.basicmodel.concepts.ApexException: Tosca Policy file was not specified as an argument
    at org.onap.policy.apex.service.engine.main.ApexCommandLineArguments.validateReadableFile(ApexCommandLineArguments.java:242)
    at org.onap.policy.apex.service.engine.main.ApexCommandLineArguments.validate(ApexCommandLineArguments.java:172)
    at org.onap.policy.apex.service.engine.main.ApexMain.populateApexParameters(ApexMain.java:235)
    ... 2 common frames omitted
Verify Installation - run an Example

A full APEX installation comes with several examples. Here, we can fully verify the installation by running one of the examples.

We use the example called SampleDomain and configure the engine to use standard in and standard out for events. Run the engine with the provided configuration. Note: Cygwin executes scripts as Unix scripts but runs Java as a Windows application, thus the configuration file must be given as a Windows path.

On Unix/Linux flavoured platforms, give the commands below:

sudo su - apexuser
export APEX_HOME=<path to apex installation>
export APEX_USER=apexuser

Create a TOSCA policy for the SampleDomain example using the ApexCliToscaEditor, as explained in the section “The APEX CLI Tosca Editor”. Assume the TOSCA policy file is named SampleDomain_tosca.json. You can then run APEX using this policy.

# $APEX_HOME/bin/apexApps.sh engine -p $APEX_HOME/examples/SampleDomain_tosca.json (1)
>%APEX_HOME%\bin\apexApps.bat engine -p %APEX_HOME%\examples\SampleDomain_tosca.json (2)

1

UNIX

2

Windows

The engine should start successfully. Assuming the logging levels are set to info in the built system, the output should look similar to this (last few lines):

Starting Apex service with parameters [-p, /home/ubuntu/apex/SampleDomain_tosca.json] . . .
2018-09-05 15:16:42,800 Apex [main] INFO o.o.p.a.s.e.r.impl.EngineServiceImpl - Created apex engine MyApexEngine-0:0.0.1 .
2018-09-05 15:16:42,804 Apex [main] INFO o.o.p.a.s.e.r.impl.EngineServiceImpl - Created apex engine MyApexEngine-1:0.0.1 .
2018-09-05 15:16:42,804 Apex [main] INFO o.o.p.a.s.e.r.impl.EngineServiceImpl - Created apex engine MyApexEngine-2:0.0.1 .
2018-09-05 15:16:42,805 Apex [main] INFO o.o.p.a.s.e.r.impl.EngineServiceImpl - Created apex engine MyApexEngine-3:0.0.1 .
2018-09-05 15:16:42,805 Apex [main] INFO o.o.p.a.s.e.r.impl.EngineServiceImpl - APEX service created.
2018-09-05 15:16:43,962 Apex [main] INFO o.o.p.a.s.e.e.EngDepMessagingService - engine<-->deployment messaging starting . . .
2018-09-05 15:16:43,963 Apex [main] INFO o.o.p.a.s.e.e.EngDepMessagingService - engine<-->deployment messaging started
2018-09-05 15:16:44,987 Apex [main] INFO o.o.p.a.s.e.r.impl.EngineServiceImpl - Registering apex model on engine MyApexEngine-0:0.0.1
2018-09-05 15:16:45,112 Apex [main] INFO o.o.p.a.s.e.r.impl.EngineServiceImpl - Registering apex model on engine MyApexEngine-1:0.0.1
2018-09-05 15:16:45,113 Apex [main] INFO o.o.p.a.s.e.r.impl.EngineServiceImpl - Registering apex model on engine MyApexEngine-2:0.0.1
2018-09-05 15:16:45,113 Apex [main] INFO o.o.p.a.s.e.r.impl.EngineServiceImpl - Registering apex model on engine MyApexEngine-3:0.0.1
2018-09-05 15:16:45,120 Apex [main] INFO o.o.p.a.s.e.r.impl.EngineServiceImpl - Added the action listener to the engine
Started Apex service

The last two lines are important: they state that APEX has added the final action listener to the engine and that the engine has started.

The engine is configured to read events from standard input and write produced events to standard output. The policy model is a very simple policy.

The following table shows an input event in the left column and an output event in the right column. Paste the input event into the console where APEX is running, and the output event should appear in the console. Pasting the input event multiple times will produce output events with different values.

Input Event

{
  "nameSpace": "org.onap.policy.apex.sample.events",
  "name": "Event0000",
  "version": "0.0.1",
  "source": "test",
  "target": "apex",
  "TestSlogan": "Test slogan for External Event0",
  "TestMatchCase": 0,
  "TestTimestamp": 1469781869269,
  "TestTemperature": 9080.866
}

Example Output Event

{
  "name": "Event0004",
  "version": "0.0.1",
  "nameSpace": "org.onap.policy.apex.sample.events",
  "source": "Act",
  "target": "Outside",
  "TestActCaseSelected": 2,
  "TestActStateTime": 1536157104627,
  "TestDecideCaseSelected": 0,
  "TestDecideStateTime": 1536157104625,
  "TestEstablishCaseSelected": 0,
  "TestEstablishStateTime": 1536157104623,
  "TestMatchCase": 0,
  "TestMatchCaseSelected": 1,
  "TestMatchStateTime": 1536157104620,
  "TestSlogan": "Test slogan for External Event0",
  "TestTemperature": 9080.866,
  "TestTimestamp": 1469781869269
}

Terminate APEX by simply using CTRL+C in the console.

Verify a Full Installation - REST Client

APEX has a REST application for deploying, monitoring, and viewing policy models. The application can also be used to create new policy models close to the engine native policy language. Start the REST client as follows.

# $APEX_HOME/bin/apexApps.sh full-client
>%APEX_HOME%\bin\apexApps.bat full-client

The script will start a simple web server (Grizzly) and deploy a WAR web archive in it. Once the client is started, it will be available on localhost:18989. The last few lines of the messages should be:

Apex Editor REST endpoint (ApexServicesRestMain: Config=[ApexServicesRestParameters: URI=http://localhost:18989/apexservices/, TTL=-1sec], State=READY) starting at http://localhost:18989/apexservices/ . . .
Jul 02, 2020 2:57:39 PM org.glassfish.grizzly.http.server.NetworkListener start
INFO: Started listener bound to [localhost:18989]
Jul 02, 2020 2:57:39 PM org.glassfish.grizzly.http.server.HttpServer start
INFO: [HttpServer] Started.
Apex Editor REST endpoint (ApexServicesRestMain: Config=[ApexServicesRestParameters: URI=http://localhost:18989/apexservices/, TTL=-1sec], State=RUNNING) started at http://localhost:18989/apexservices/

Now open a browser (Firefox, Chrome, Opera, Internet Explorer) and use the URL http://localhost:18989/. This will connect the browser to the started REST client. Click on the “Policy Editor” button and the Policy Editor start screen should appear.

Now load a policy model by clicking the menu File and then Open. In the opened dialog, go to the directory where APEX is installed, then examples, models, SampleDomain, and there select the file SamplePolicyModelJAVA.json. This will load the policy model used to verify the policy engine (see above).

Now you can use the Policy editor. To finish this verification, simply terminate your browser (or the tab), and then use CTRL+C in the console where you started the Policy editor.

Installing the WAR Application

The three APEX clients are packaged in a WAR file. This is a complete application that can be installed and run in an application server. The application is realized as a servlet. You can find the WAR application in the ONAP Nexus Repository.

Installing and using the WAR application requires a web server that can execute WAR web archives. We recommend using Apache Tomcat, but other web servers can be used as well.

Install Apache Tomcat including the Manager App, see V9.0 Docs for details. Start the Tomcat service, or make sure that Tomcat is running.

There are multiple ways to install the APEX WAR application:

  • copy the .war file into the Tomcat webapps folder

  • use the Tomcat Manager App to deploy via the web interface

  • deploy using a REST call to Tomcat

For details on how to install WAR files please consult the Tomcat Documentation or the Manager App HOW-TO. Once you have installed an APEX WAR application (and waited sufficient time for Tomcat to finalize the installation), open the Manager App in Tomcat. You should see the APEX WAR application installed and running.
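For example, a deployment via a REST call to the Tomcat Manager text interface could look like this (a sketch; the manager credentials, port, and context path are assumptions that must match your Tomcat setup):

curl -u tomcat:secret -T apex-client-full-2.0.0-SNAPSHOT.war \
  "http://localhost:8080/manager/text/deploy?path=/apex-client"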

In case of errors, examine the log files in the Tomcat log directory. In a conventional install, those log files are in the logs directory where Tomcat is installed.

The WAR application file has a name similar to apex-client-full-<VERSION>.war.

Running APEX in Docker

Since APEX is in ONAP, we provide a full virtualization environment for the engine.

Run in ONAP

Running APEX from the ONAP docker repository only requires 2 commands:

  1. Log into the ONAP docker repo

docker login -u docker -p docker nexus3.onap.org:10003
  2. Run the APEX docker image

docker run -it --rm  nexus3.onap.org:10003/onap/policy-apex-pdp:latest
Build a Docker Image

Alternatively, one can use the Dockerfile defined in the Docker package to build an image.

APEX Dockerfile

#
# Docker file to build an image that runs APEX on Java 8 in Ubuntu
#
FROM ubuntu:16.04

RUN apt-get update && \
        apt-get upgrade -y && \
        apt-get install -y software-properties-common && \
        add-apt-repository ppa:openjdk-r/ppa -y && \
        apt-get update && \
        apt-get install -y openjdk-8-jdk

# Create apex user and group
RUN groupadd apexuser
RUN useradd --create-home -g apexuser apexuser

# Add Apex-specific directories and set ownership as the Apex admin user
RUN mkdir -p /opt/app/policy/apex-pdp
RUN mkdir -p /var/log/onap/policy/apex-pdp
RUN chown -R apexuser:apexuser /var/log/onap/policy/apex-pdp

# Unpack the tarball
RUN mkdir /packages
COPY apex-pdp-package-full.tar.gz /packages
RUN tar xvfz /packages/apex-pdp-package-full.tar.gz --directory /opt/app/policy/apex-pdp
RUN rm /packages/apex-pdp-package-full.tar.gz

# Ensure everything has the correct permissions
RUN find /opt/app -type d -exec chmod 755 {} \;
RUN find /opt/app -type f -exec chmod 644 {} \;
RUN chmod a+x /opt/app/policy/apex-pdp/bin/*

# Copy examples to Apex user area
RUN cp -pr /opt/app/policy/apex-pdp/examples /home/apexuser

RUN apt-get clean

RUN chown -R apexuser:apexuser /home/apexuser/*

USER apexuser
ENV PATH /opt/app/policy/apex-pdp/bin:$PATH
WORKDIR /home/apexuser
Running APEX in Standalone mode

The APEX engine can run in standalone mode by taking a TOSCA policy as an argument and executing it. Assume there is a TOSCA policy named ToscaPolicy.json in the APEX_HOME directory. This policy can be executed in standalone mode using any of the methods below.

Run in an APEX installation
# $APEX_HOME/bin/apexApps.sh engine -p $APEX_HOME/ToscaPolicy.json (1)
>%APEX_HOME%\bin\apexApps.bat engine -p %APEX_HOME%\ToscaPolicy.json (2)

1

UNIX

2

Windows

Run in a docker container
# docker run -p 6969:6969 -v $APEX_HOME/ToscaPolicy.json:/tmp/policy/ToscaPolicy.json \
  --name apex -it nexus3.onap.org:10001/onap/policy-apex-pdp:latest \
  -c "/opt/app/policy/apex-pdp/bin/apexEngine.sh -p /tmp/policy/ToscaPolicy.json"

APEX Configurations Explained

Introduction to APEX Configuration

An APEX engine can be configured to use various combinations of event input handlers, event output handlers, event protocols, context handlers, and logic executors. The system is built using a plugin architecture. Each configuration option is realized by a plugin, which can be loaded and configured when the engine is started. New plugins can be added to the system at any time, though to benefit from a new plugin an engine will need to be restarted.

_images/ApexEngineConfig.png

Figure 3. APEX Configuration Matrix

The APEX distribution already comes with a number of plugins. The figure above shows the provided plugins. Any combination of input, output, event protocol, context handlers, and executors is possible.

General Configuration Format

The APEX configuration file is a JSON file containing a few main blocks for different parts of the configuration. Each block then holds the configuration details. The following code shows the main blocks:

{
  "engineServiceParameters":{
    ... (1)
    "engineParameters":{ (2)
      "executorParameters":{...}, (3)
      "contextParameters":{...} (4)
      "taskParameters":[...] (5)
    }
  },
  "eventInputParameters":{ (6)
    "input1":{ (7)
      "carrierTechnologyParameters":{...},
      "eventProtocolParameters":{...}
    },
    "input2":{...}, (8)
      "carrierTechnologyParameters":{...},
      "eventProtocolParameters":{...}
    },
    ... (9)
  },
  "eventOutputParameters":{ (10)
    "output1":{ (11)
      "carrierTechnologyParameters":{...},
      "eventProtocolParameters":{...}
    },
    "output2":{ (12)
      "carrierTechnologyParameters":{...},
      "eventProtocolParameters":{...}
    },
    ... (13)
  }
}

1

main engine configuration

2

engine parameters for plugin configurations (execution environments and context handling)

3

engine specific parameters, mainly for executor plugins

4

context specific parameters, e.g. for context schemas, persistence, etc.

5

list of task parameters that should be made available in task logic (optional).

6

configuration of the input interface

7

an example input called input1 with carrier technology and event protocol

8

an example input called input2 with carrier technology and event protocol

9

any further input configuration

10

configuration of the output interface

11

an example output called output1 with carrier technology and event protocol

12

an example output called output2 with carrier technology and event protocol

13

any further output configuration

Engine Service Parameters

The configuration provides a number of parameters to configure the engine. An example configuration with explanations of all options is shown below.

"engineServiceParameters" : {
  "name"          : "AADMApexEngine", (1)
  "version"        : "0.0.1",  (2)
  "id"             :  45,  (3)
  "instanceCount"  : 4,  (4)
  "deploymentPort" : 12345,  (5)
  "policy_type_impl" : {...}, (6)
  "periodicEventPeriod": 1000, (7)
  "engineParameters":{ (8)
    "executorParameters":{...}, (9)
    "contextParameters":{...}, (10)
    "taskParameters":[...] (11)
  }
}

1

a name for the engine. The engine name is used to create a key in a runtime engine. A name matching the following regular expression can be used here: [A-Za-z0-9\\-_\\.]+

2

a version of the engine; use semantic versioning as explained in Semantic Versioning. This version is used in a runtime engine to create a version of the engine. For that reason, the version must match the following regular expression: [A-Z0-9.]+

3

a numeric identifier for the engine

4

the number of threads (policy instances executed in parallel) the engine should use, use 1 for single threaded engines

5

the port for the deployment Websocket connection to the engine

6

the APEX policy model as a JSON or YAML block to load into the engine on startup when APEX is running a policy that has its logic and parameters specified in TOSCA (optional)

7

an optional timer for periodic policies, in milliseconds (a defined periodic policy will be executed every X milliseconds); not used if not set or set to 0

8

engine parameters for plugin configurations (execution environments and context handling)

9

engine specific parameters, mainly for executor plugins

10

context specific parameters, e.g. for context schemas, persistence, etc.

11

list of task parameters that should be made available in task logic (optional).

The model file is optional; it can also be specified via the command line. In any case, make sure all execution and other required plug-ins for the loaded model are loaded as required.

Input and Output Interfaces

An APEX engine has two main interfaces:

  • An input interface to receive events: also known as ingress interface or consumer, receiving (consuming) events commonly named triggers, and

  • An output interface to publish produced events: also known as egress interface or producer, sending (publishing) events commonly named actions or action events.

The input and output interface is configured in terms of inputs and outputs, respectively. Each input and output is a combination of a carrier technology and an event protocol. Carrier technologies and event protocols are provided by plugins, each with its own specific configuration. Most carrier technologies can be configured for input as well as output. Most event protocols can be used for all carrier technologies. One exception is the JMS object event protocol, which can only be used for the JMS carrier technology. Some further restrictions apply (for instance for carrier technologies using bi- or uni-directional modes).

Input and output interface can be configured separately, in isolation, with any number of carrier technologies. The resulting general configuration options are:

  • Input interface with one or more inputs

    • each input with a carrier technology and an event protocol

    • some inputs with optional synchronous mode

    • some event protocols with additional parameters

  • Output interface with one or more outputs

    • each output with a carrier technology and an event encoding

    • some outputs with optional synchronous mode

    • some event protocols with additional parameters

The configuration for input and output is contained in eventInputParameters and eventOutputParameters, respectively. Inside here, one can configure any number of inputs and outputs. Each of them needs to have a unique identifier (name), the content of the name is free form. The example below shows a configuration for two inputs and two outputs.

"eventInputParameters": { (1)
  "FirstConsumer": { (2)
    "carrierTechnologyParameters" : {...}, (3)
    "eventProtocolParameters":{...}, (4)
    ... (5)
  },
  "SecondConsumer": { (6)
    "carrierTechnologyParameters" : {...}, (7)
    "eventProtocolParameters":{...}, (8)
    ... (9)
  },
},
"eventOutputParameters": { (10)
  "FirstProducer": { (11)
    "carrierTechnologyParameters":{...}, (12)
    "eventProtocolParameters":{...}, (13)
    ... (14)
  },
  "SecondProducer": { (15)
    "carrierTechnologyParameters":{...}, (16)
    "eventProtocolParameters":{...}, (17)
    ... (18)
  }
}

1

input interface configuration, APEX input plugins

2

first input called FirstConsumer

3

carrier technology for plugin

4

event protocol for plugin

5

any other input configuration (e.g. event name filter, see below)

6

second input called SecondConsumer

7

carrier technology for plugin

8

event protocol for plugin

9

any other plugin configuration

10

output interface configuration, APEX output plugins

11

first output called FirstProducer

12

carrier technology for plugin

13

event protocol for plugin

14

any other plugin configuration

15

second output called SecondProducer

16

carrier technology for plugin

17

event protocol for plugin

18

any other output configuration (e.g. event name filter, see below)

Event Name

Any event defined in APEX has to be unique. The “name” of an event is used as the identifier for an ApexEvent. Every event has to be tagged to an eventName. This can be done in different ways. Either the actual event can have a field called “name”, or the event has some other field that can act as the identifier, which can be specified using “nameAlias”. In other cases, where a “name” or “nameAlias” cannot be specified, the incoming event coming over an endpoint can be manually tagged to an “eventName” before consuming it.

The “eventName” can hold a single event name if the event coming over the endpoint always has to be mapped to the specified eventName’s definition. Otherwise, if different events can come over the endpoint, the “eventName” field can consist of multiple event names separated by the “|” symbol. In this case, based on the received event’s structure, it is mapped to one of the event names specified in the “eventName” field.

The following code shows some examples on how to specify the eventName field:

"eventInputParameters": {
  "Input1": {
    "carrierTechnologyParameters" : {...},
    "eventProtocolParameters":{...},
    "eventName" : "VesEvent" (1)
  },
  "Input2": {
    "carrierTechnologyParameters" : {...},
    "eventProtocolParameters":{...},
    "eventName" : "AAISuccessResponseEvent|AAIFailureResponseEvent" (2)
  }
}
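Where the incoming event does not have a usable “name” field, a “nameAlias” can be set on the event protocol instead. The following sketch assumes the JSON event protocol and a hypothetical field eventIdentifier that carries the event name:

"eventInputParameters": {
  "Input3": {
    "carrierTechnologyParameters" : {...},
    "eventProtocolParameters":{
      "eventProtocol" : "JSON",
      "parameters" : {
        "nameAlias" : "eventIdentifier"
      }
    }
  }
}

Here, the value of the field eventIdentifier in each incoming event is used as the event name.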
Event Filters

APEX will always send an event after a policy execution is finished. For a successful execution, the event sent is the output event created by the policy. In case the policy does not create an output event, APEX will create a new event with all input event fields plus an additional field exceptionMessage with an exception message.

There are situations in which this auto-generated error event might not be required or wanted:

  • when a policy failing should not result in an event sent out via an output interface

  • when the auto-generated event goes back into an APEX engine (or the same APEX engine), which can create endless loops

  • the auto-generated event should go to a special output interface or channel

All of these situations are supported by a filter option using a wildcard (regular expression) configuration on APEX I/O interfaces. The parameter is called eventNameFilter and the values are Java regular expressions. The following code shows some examples:

"eventInputParameters": {
  "Input1": {
    "carrierTechnologyParameters" : {...},
    "eventProtocolParameters":{...},
    "eventNameFilter" : "^E[Vv][Ee][Nn][Tt][0-9]004$" (1)
  }
},
"eventOutputParameters": {
  "Output1": {
    "carrierTechnologyParameters":{...},
    "eventProtocolParameters":{...},
    "eventNameFilter" : "^E[Vv][Ee][Nn][Tt][0-9]104$" (2)
  }
}
Executors

Executors are plugins that realize the execution of logic contained in a policy model. Logic can be in a task selector, a task, and a state finalizer. Using plugins for execution environments makes APEX very flexible to support virtually any executable logic expressions.

APEX 2.0.0-SNAPSHOT supports the following executors:

  • Java, for Java implemented logic

    • This executor requires logic implemented using the APEX Java interfaces.

    • Generated JAR files must be in the classpath of the APEX engine at start time.

  • Javascript

  • JRuby

  • Jython

  • MVEL

    • This executor uses the latest version of the MVEL engine, which can be very hard to debug and can produce unwanted side effects during execution

Configure the Javascript Executor

The Javascript executor is added to the configuration as follows:

"engineServiceParameters":{
  "engineParameters":{
    "executorParameters":{
      "JAVASCRIPT":{
        "parameterClassName" :
        "org.onap.policy.apex.plugins.executor.javascript.JavascriptExecutorParameters"
      }
    }
  }
}
Configure the Jython Executor

The Jython executor is added to the configuration as follows:

"engineServiceParameters":{
  "engineParameters":{
    "executorParameters":{
      "JYTHON":{
        "parameterClassName" :
        "org.onap.policy.apex.plugins.executor.jython.JythonExecutorParameters"
      }
    }
  }
}
Configure the JRuby Executor

The JRuby executor is added to the configuration as follows:

"engineServiceParameters":{
  "engineParameters":{
    "executorParameters":{
      "JRUBY":{
        "parameterClassName" :
        "org.onap.policy.apex.plugins.executor.jruby.JrubyExecutorParameters"
      }
    }
  }
}
Configure the Java Executor

The Java executor is added to the configuration as follows:

"engineServiceParameters":{
  "engineParameters":{
    "executorParameters":{
      "JAVA":{
        "parameterClassName" :
        "org.onap.policy.apex.plugins.executor.java.JavaExecutorParameters"
      }
    }
  }
}
Configure the MVEL Executor

The MVEL executor is added to the configuration as follows:

"engineServiceParameters":{
  "engineParameters":{
    "executorParameters":{
      "MVEL":{
        "parameterClassName" :
        "org.onap.policy.apex.plugins.executor.mvel.MVELExecutorParameters"
      }
    }
  }
}
Context Handlers

Context handlers are responsible for all context processing. There are the following main areas:

  • Context schema: use schema handlers other than Java class (supported by default without configuration)

  • Context distribution: distribute context across multiple APEX engines

  • Context locking: mechanisms to lock context elements for read/write

  • Context persistence: mechanisms to persist context

APEX provides plugins for each of the main areas.

Configure AVRO Schema Handler

The AVRO schema handler is added to the configuration as follows:

"engineServiceParameters":{
  "engineParameters":{
    "contextParameters":{
      "parameterClassName" : "org.onap.policy.apex.context.parameters.ContextParameters",
      "schemaParameters":{
        "Avro":{
          "parameterClassName" :
            "org.onap.policy.apex.plugins.context.schema.avro.AvroSchemaHelperParameters"
        }
      }
    }
  }
}

Using the AVRO schema handler has one limitation: AVRO only supports field names that represent valid Java identifiers. This means only letters, digits, and the character _ are supported. Characters commonly used in field names, such as . and -, are not supported by AVRO. For more information see Avro Spec: Names.

To work around this limitation, the APEX Avro plugin will parse a given AVRO definition and replace all occurrences of . and - with a _. This means that:

  • In a policy model, if the AVRO schema defined a field as my-name the policy logic should access it as my_name

  • In a policy model, if the AVRO schema defined a field as my.name the policy logic should access it as my_name

  • There should be no field names that convert to the same internal name

    • For instance the simultaneous use of my_name, my.name, and my-name should be avoided

    • If not avoided, the event processing might create unwanted side effects

  • If field names use any other not-supported character, the AVRO plugin will reject it

    • Since AVRO uses lazy initialization, this rejection might only become visible at runtime
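As an illustration of this name mangling, consider the following hypothetical AVRO schema fragment:

{
  "type": "record",
  "name": "ExampleRecord",
  "fields": [
    { "name": "my-name", "type": "string" }
  ]
}

After the plugin has replaced the unsupported character, policy logic would access this field as my_name.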

Configure Task Parameters

The Task Parameters are added to the configuration as follows:

"engineServiceParameters": {
  "engineParameters": {
    "taskParameters": [
      {
        "key": "ParameterKey1",
        "value": "ParameterValue1"
      },
      {
        "taskId": "Task_Act0",
        "key": "ParameterKey2",
        "value": "ParameterValue2"
      }
    ]
  }
}

TaskParameters can be used to pass parameters from the ApexConfig to the policy logic. In the config, these are optional. The task parameters provided in the config are added to the tasks; if a task already defines a parameter with the same key, it is overridden.

If taskId is provided in ApexConfig for an entry, then that parameter is updated only for that particular task. Otherwise, the task parameter is added to all tasks.

Carrier Technologies

Carrier technologies define how APEX receives (input) and sends (output) events. They can be used in any combination, using asynchronous or synchronous mode. There can also be any number of carrier technologies for the input (consume) and the output (produce) interface.

Supported input technologies are:

  • Standard input, read events from the standard input (console), not suitable for APEX background servers

  • File input, read events from a file

  • Kafka, read events from a Kafka system

  • Websockets, read events from a Websocket

  • JMS,

  • REST (synchronous and asynchronous), additionally as client or server

  • Event Requestor, allows reading of events that have been looped back into APEX

Supported output technologies are:

  • Standard output, write events to the standard output (console), not suitable for APEX background servers

  • File output, write events to a file

  • Kafka, write events to a Kafka system

  • Websockets, write events to a Websocket

  • JMS

  • REST (synchronous and asynchronous), additionally as client or server

  • Event Requestor, allows events to be looped back into APEX

New carrier technologies can be added as plugins to APEX or developed outside APEX and added to an APEX deployment.

Standard IO

Standard IO does not require a specific plugin; it is supported by default.

Standard Input

APEX will take events from its standard input. This carrier is good for testing, but certainly not for a use case where APEX runs as a server. The configuration is as follows:

"carrierTechnologyParameters" : {
  "carrierTechnology" : "FILE", (1)
  "parameters" : {
    "standardIO" : true (2)
  }
}

1

standard input is considered a file

2

file descriptor set to standard input

Standard Output

APEX will send events to its standard output. This carrier is good for testing, but certainly not for a use case where APEX runs as a server. The configuration is as follows:

"carrierTechnologyParameters" : {
  "carrierTechnology" : "FILE", (1)
  "parameters" : {
    "standardIO" : true  (2)
  }
}

1

standard output is considered a file

2

file descriptor set to standard output

File IO

File IO does not require a specific plugin; it is supported by default.

File Input

APEX will take events from a file. The same file should not be used as an output. The configuration is as follows:

"carrierTechnologyParameters" : {
  "carrierTechnology" : "FILE", (1)
  "parameters" : {
    "fileName" : "examples/events/SampleDomain/EventsIn.xmlfile" (2)
  }
}

1

set file input

2

the name of the file to read events from

File Output

APEX will write events to a file. The same file should not be used as an input. The configuration is as follows:

"carrierTechnologyParameters" : {
  "carrierTechnology" : "FILE", (1)
  "parameters" : {
    "fileName"  : "examples/events/SampleDomain/EventsOut.xmlfile" (2)
  }
}

1

set file output

2

the name of the file to write events to

Event Requestor IO

Event Requestor IO does not require a specific plugin; it is supported by default. It should only be used with the APEX event protocol.

Event Requestor Input

APEX will take events from APEX.

"carrierTechnologyParameters" : {
  "carrierTechnology": "EVENT_REQUESTOR" (1)
}

1

set event requestor input

Event Requestor Output

APEX will write events to APEX.

"carrierTechnologyParameters" : {
  "carrierTechnology": "EVENT_REQUESTOR" (1)
}
Peering Event Requestors

When using event requestors, they need to be peered. This means an event requestor output needs to be peered (associated) with an event requestor input. The following example shows the use of an event requestor with the APEX event protocol and the peering of output and input.

"eventInputParameters": {
  "EventRequestorConsumer": {
    "carrierTechnologyParameters": {
      "carrierTechnology": "EVENT_REQUESTOR" (1)
    },
    "eventProtocolParameters": {
      "eventProtocol": "APEX" (2)
    },
    "eventNameFilter": "InputEvent", (3)
    "requestorMode": true, (4)
    "requestorPeer": "EventRequestorProducer", (5)
    "requestorTimeout": 500 (6)
  }
},
"eventOutputParameters": {
  "EventRequestorProducer": {
    "carrierTechnologyParameters": {
      "carrierTechnology": "EVENT_REQUESTOR" (7)
    },
    "eventProtocolParameters": {
      "eventProtocol": "APEX" (8)
    },
    "eventNameFilter": "EventListEvent", (9)
    "requestorMode": true, (10)
    "requestorPeer": "EventRequestorConsumer", (11)
    "requestorTimeout": 500 (12)
  }
}

1

event requestor on a consumer

2

with APEX event protocol

3

optional filter (best to use a filter to prevent unwanted events on the consumer side)

4

activate requestor mode

5

the peer to the output (must match the output carrier)

6

an optional timeout in milliseconds

7

event requestor on a producer

8

with APEX event protocol

9

optional filter (best to use a filter to prevent unwanted events on the consumer side)

10

activate requestor mode

11

the peer to the output (must match the input carrier)

12

an optional timeout in milliseconds

Kafka IO

Kafka IO is supported by the APEX Kafka plugin. The configurations below are examples. APEX will take any configuration inside the parameter object and forward it to Kafka. More information on Kafka-specific configuration parameters can be found in the Kafka documentation.

Kafka Input

APEX will receive events from the Apache Kafka messaging system. The input is uni-directional; an engine will only receive events from the input and not send any events to it.

"carrierTechnologyParameters" : {
  "carrierTechnology" : "KAFKA", (1)
  "parameterClassName" :
    "org.onap.policy.apex.plugins.event.carrier.kafka.KAFKACarrierTechnologyParameters",
  "parameters" : {
    "bootstrapServers"  : "localhost:49092", (2)
    "groupId"           : "apex-group-id", (3)
    "enableAutoCommit"  : true, (4)
    "autoCommitTime"    : 1000, (5)
    "sessionTimeout"    : 30000, (6)
    "consumerPollTime"  : 100, (7)
    "consumerTopicList" : ["apex-in-0", "apex-in-1"], (8)
    "keyDeserializer"   :
        "org.apache.kafka.common.serialization.StringDeserializer", (9)
    "valueDeserializer" :
        "org.apache.kafka.common.serialization.StringDeserializer" (10)
    "kafkaProperties": [  (11)
                         [
                           "security.protocol",
                           "SASL_SSL"
                         ],
                         [
                           "ssl.truststore.type",
                           "JKS"
                         ],
                         [
                           "ssl.truststore.location",
                           "/opt/app/policy/apex-pdp/etc/ssl/test.jks"
                         ],
                         [
                           "ssl.truststore.password",
                           "policy0nap"
                         ],
                         [
                           "sasl.mechanism",
                           "SCRAM-SHA-512"
                         ],
                         [
                           "sasl.jaas.config",
                           "org.apache.kafka.common.security.scram.ScramLoginModule required username=\"policy\" password=\"policy\";"
                         ],
                         [
                           "ssl.endpoint.identification.algorithm",
                           ""
                         ]
                       ]
  }
}

1

set Kafka as carrier technology

2

bootstrap server and port

3

a group identifier

4

flag for auto-commit

5

auto-commit timeout in milliseconds

6

session timeout in milliseconds

7

consumer poll time in milliseconds

8

consumer topic list

9

key for the Kafka de-serializer

10

value for the Kafka de-serializer

11

properties for Kafka connectivity

Kindly note that the above Kafka properties are just a reference; the actual properties required depend on the Kafka server installation.

In cases where the messages produced to a Kafka topic have been serialized using the KafkaAvroSerializer, the following parameters need to be added to kafkaProperties so that the consumer can properly deserialize the messages while consuming.

[
  "value.deserializer",
  "io.confluent.kafka.serializers.KafkaAvroDeserializer"
],
[
  "schema.registry.url",
  "<url of the schema registry configured in Kafka cluster for registering Avro schemas>"
]

For more details on how to set up a schema registry for a Kafka cluster, consult the schema registry documentation.

Kafka Output

APEX will send events to the Apache Kafka messaging system. The output is uni-directional; an engine will send events to the output but not receive any events from it.

"carrierTechnologyParameters" : {
  "carrierTechnology" : "KAFKA", (1)
  "parameterClassName" :
    "org.onap.policy.apex.plugins.event.carrier.kafka.KAFKACarrierTechnologyParameters",
  "parameters" : {
    "bootstrapServers"  : "localhost:49092", (2)
    "acks"              : "all", (3)
    "retries"           : 0, (4)
    "batchSize"         : 16384, (5)
    "lingerTime"        : 1, (6)
    "bufferMemory"      : 33554432, (7)
    "producerTopic"     : "apex-out", (8)
    "keySerializer"     :
        "org.apache.kafka.common.serialization.StringSerializer", (9)
    "valueSerializer"   :
        "org.apache.kafka.common.serialization.StringSerializer" (10)
    "kafkaProperties": [  (11)
                         [
                           "security.protocol",
                           "SASL_SSL"
                         ],
                         [
                           "ssl.truststore.type",
                           "JKS"
                         ],
                         [
                           "ssl.truststore.location",
                           "/opt/app/policy/apex-pdp/etc/ssl/test.jks"
                         ],
                         [
                           "ssl.truststore.password",
                           "policy0nap"
                         ],
                         [
                           "sasl.mechanism",
                           "SCRAM-SHA-512"
                         ],
                         [
                           "sasl.jaas.config",
                           "org.apache.kafka.common.security.scram.ScramLoginModule required username=\"policy\" password=\"policy\";"
                         ],
                         [
                           "ssl.endpoint.identification.algorithm",
                           ""
                         ]
                       ]
  }
}

1

set Kafka as carrier technology

2

bootstrap server and port

3

acknowledgement strategy

4

number of retries

5

batch size

6

time to linger in milliseconds

7

buffer memory in bytes

8

producer topic

9

the serializer class for Kafka record keys

10

the serializer class for Kafka record values

11

properties for Kafka connectivity

Kindly note that the above Kafka properties are just a reference; the actual properties required depend on the Kafka server installation.

JMS IO

APEX supports the Java Message Service (JMS) as input as well as output. JMS IO is supported by the APEX JMS plugin. Input and output support an event encoding as text (JSON string) or object (serialized object). The input configuration is the same for both encodings; the output configuration differs.

JMS Input

APEX will receive events from a JMS messaging system. The input is uni-directional; an engine will only receive events from the input but not send any events to the input.

"carrierTechnologyParameters" : {
  "carrierTechnology" : "JMS", (1)
  "parameterClassName" :
      "org.onap.policy.apex.plugins.event.carrier.jms.JMSCarrierTechnologyParameters",
  "parameters" : { (2)
    "initialContextFactory" :
        "org.jboss.naming.remote.client.InitialContextFactory", (3)
    "connectionFactory" : "ConnectionFactory", (4)
    "providerURL" : "remote://localhost:5445", (5)
    "securityPrincipal" : "guest", (6)
    "securityCredentials" : "IAmAGuest", (7)
    "consumerTopic" : "jms/topic/apexIn" (8)
  }
}

1

set JMS as carrier technology

2

set all JMS specific parameters

3

the context factory, in this case from JBoss (it requires the dependency org.jboss:jboss-remote-naming:2.0.4.Final, or a different version, to be in the directory $APEX_HOME/lib or %APEX_HOME%\lib)

4

a connection factory for the JMS connection

5

URL with host and port of the JMS provider

6

access credentials, user name

7

access credentials, user password

8

the JMS topic to listen to

JMS Output with Text

The APEX engine sends events to a JMS messaging system. The output is uni-directional; an engine will send events to the output but not receive any events from the output.

"carrierTechnologyParameters" : {
  "carrierTechnology" : "JMS", (1)
  "parameterClassName" :
      "org.onap.policy.apex.plugins.event.carrier.jms.JMSCarrierTechnologyParameters",
  "parameters" : { (2)
    "initialContextFactory" :
        "org.jboss.naming.remote.client.InitialContextFactory", (3)
    "connectionFactory" : "ConnectionFactory", (4)
    "providerURL" : "remote://localhost:5445", (5)
    "securityPrincipal" : "guest", (6)
    "securityCredentials" : "IAmAGuest", (7)
    "producerTopic" : "jms/topic/apexOut", (8)
    "objectMessageSending": "false" (9)
  }
}

1

set JMS as carrier technology

2

set all JMS specific parameters

3

the context factory, in this case from JBoss (it requires the dependency org.jboss:jboss-remote-naming:2.0.4.Final, or a different version, to be in the directory $APEX_HOME/lib or %APEX_HOME%\lib)

4

a connection factory for the JMS connection

5

URL with host and port of the JMS provider

6

access credentials, user name

7

access credentials, user password

8

the JMS topic to write to

9

setting object messaging to false means the output sends JSON text

JMS Output with Object

To configure APEX for JMS objects on the output interface use the same configuration as above (for output). Simply change the objectMessageSending parameter to true.
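A minimal sketch of the resulting parameters block, identical to the text output example above apart from the changed flag:

"parameters" : {
  "initialContextFactory" :
      "org.jboss.naming.remote.client.InitialContextFactory",
  "connectionFactory" : "ConnectionFactory",
  "providerURL" : "remote://localhost:5445",
  "securityPrincipal" : "guest",
  "securityCredentials" : "IAmAGuest",
  "producerTopic" : "jms/topic/apexOut",
  "objectMessageSending": "true"
}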

Websocket (WS) IO

APEX supports Websockets as input as well as output. WS IO is supported by the APEX Websocket plugin. This carrier technology only supports uni-directional communication: APEX will not send events to a Websocket input, and any event a peer sends to a Websocket output will result in an error log.

The input can be configured as client (APEX connects to an existing Websocket server) or server (APEX starts a Websocket server). The same applies to the output. Input and output can both use a client or a server configuration, or separate configurations (input as client and output as server, input as server and output as client). Each configuration should use its own dedicated port to avoid any communication loops. The configuration of a Websocket client is the same for input and output. The configuration of a Websocket server is the same for input and output.

Websocket Client

APEX will connect to a given Websocket server. As input, it will receive events from the server but not send any events. As output, it will send events to the server and any event received from the server will result in an error log.

"carrierTechnologyParameters" : {
  "carrierTechnology" : "WEBSOCKET", (1)
  "parameterClassName" :
  "org.onap.policy.apex.plugins.event.carrier.websocket.WEBSOCKETCarrierTechnologyParameters",
  "parameters" : {
    "host" : "localhost", (2)
    "port" : 42451 (3)
  }
}

1

set Websocket as carrier technology

2

the host name on which a Websocket server is running

3

the port of that Websocket server

Websocket Server

APEX will start a Websocket server, which accepts connections from any Websocket client. As input, it will receive events from connected clients but not send any events. As output, it will send events to connected clients, and any event received from a client will result in an error log.

"carrierTechnologyParameters" : {
  "carrierTechnology" : "WEBSOCKET", (1)
  "parameterClassName" :
  "org.onap.policy.apex.plugins.event.carrier.websocket.WEBSOCKETCarrierTechnologyParameters",
  "parameters" : {
    "wsClient" : false, (2)
    "port"     : 42450 (3)
  }
}

1

set Websocket as carrier technology

2

disable client, so that APEX will start a Websocket server

3

the port for the Websocket server APEX will start
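For illustration, the following sketch combines an input configured as Websocket server with an output configured as Websocket client; the names WsIn and WsOut and the client host/port are assumptions, and the two dedicated ports avoid the communication loops mentioned above:

"eventInputParameters": {
  "WsIn": {
    "carrierTechnologyParameters" : {
      "carrierTechnology" : "WEBSOCKET",
      "parameterClassName" :
      "org.onap.policy.apex.plugins.event.carrier.websocket.WEBSOCKETCarrierTechnologyParameters",
      "parameters" : {
        "wsClient" : false,
        "port"     : 42450
      }
    },
    "eventProtocolParameters":{
      "eventProtocol" : "JSON"
    }
  }
},
"eventOutputParameters": {
  "WsOut": {
    "carrierTechnologyParameters" : {
      "carrierTechnology" : "WEBSOCKET",
      "parameterClassName" :
      "org.onap.policy.apex.plugins.event.carrier.websocket.WEBSOCKETCarrierTechnologyParameters",
      "parameters" : {
        "host" : "localhost",
        "port" : 42451
      }
    },
    "eventProtocolParameters":{
      "eventProtocol" : "JSON"
    }
  }
}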

REST Client IO

APEX can act as REST client on the input as well as on the output interface. The media type is application/json, so this plugin only works with the JSON Event protocol.

REST Client Input

APEX will connect to a given URL to receive events, but not send any events. The server is polled, i.e. APEX will do an HTTP GET, take the result, and then do the next GET. Any required timing needs to be handled by the server configured via the URL. For instance, the server could support a wait timeout via the URL as ?timeout=100ms. The httpCodeFilter is used for filtering the status code and can be configured as a regular expression string. The default httpCodeFilter is “[2][0-9][0-9]”, matching successful response codes. A response whose HTTP status code matches the given regular expression is forwarded to the task; otherwise it is logged as a failure.

"carrierTechnologyParameters" : {
  "carrierTechnology" : "RESTCLIENT", (1)
  "parameterClassName" :
    "org.onap.policy.apex.plugins.event.carrier.restclient.RESTClientCarrierTechnologyParameters",
  "parameters" : {
    "url" : "http://example.org:8080/triggers/events", (2)
    "httpMethod": "GET", (3)
    "httpCodeFilter" : "[2][0-9][0-9]", (4)
     "httpHeaders" : [ (5)
        ["Keep-Alive", "300"],
        ["Cache-Control", "no-cache"]
     ]
  }
}

1

set REST client as carrier technology

2

the URL of the HTTP server for events

3

the HTTP method to use (GET/PUT/POST/DELETE), optional, defaults to GET

4

the httpCodeFilter for filtering status codes, optional, defaults to [2][0-9][0-9]

5

HTTP headers to use on the REST request, optional
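As a sketch of a customized filter, a hypothetical variant that also accepts redirect codes (3xx) would simply widen the regular expression:

"httpCodeFilter" : "[23][0-9][0-9]"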

REST Client Output

APEX will connect to a given URL to send events, but not receive any events. The default HTTP operation is POST (no configuration required). To change it to PUT simply add the configuration parameter (as shown in the example below). The URL can be configured statically or tagged, e.g. http://example.{site}.org:8080/{trig}/events; all tags such as site and trig in the URL need to be set in the properties object available to the tasks. In addition, the keys must exactly match the tags defined in the URL. The scope of the properties object is per HTTP call; hence, key/value pairs set in the properties object by a task are only available for that specific HTTP call.

"carrierTechnologyParameters" : {
  "carrierTechnology" : "RESTCLIENT", (1)
  "parameterClassName" :
    "org.onap.policy.apex.plugins.event.carrier.restclient.RESTClientCarrierTechnologyParameters",
  "parameters" : {
    "url" : "http://example.com:8888/actions/events", (2)
    "url" : "http://example.{site}.com:8888/{trig}/events", (2')
    "httpMethod" : "PUT". (3)
    "httpHeaders" : [ (4)
       ["Keep-Alive", "300"],
       ["Cache-Control", "no-cache"]
    ]
  }
}

1

set REST client as carrier technology

2

the static URL of the HTTP server for events

2’

the tagged URL of the HTTP server for events

3

the HTTP method to use (GET/PUT/POST/DELETE), optional, defaults to POST

4

HTTP headers to use on the REST request, optional

REST Server IO

APEX supports a REST server for input and output.

The REST server plugin always uses a synchronous mode. A client does an HTTP GET on the APEX REST server with the input event and receives the generated output event in the server reply. This means that for the REST server there always has to be an input with an associated output; input-only or output-only configurations are not permitted.

The plugin will start a Grizzly server as REST server for a normal APEX engine. If the APEX engine is executed as a servlet, for instance inside Tomcat, then Tomcat will be used as REST server (this case requires configuration on Tomcat as well).

Some configuration restrictions apply for all scenarios:

  • Minimum port: 1024

  • Maximum port: 65535

  • The media type is application/json, so this plugin only works with the JSON Event protocol.

The URL the client calls is created using

  • the configured host and port, e.g. http://localhost:12345

  • the standard path, e.g. /apex/

  • the name of the input/output, e.g. FirstConsumer/

  • the input or output name, e.g. EventIn.

The examples above lead to the URL http://localhost:12345/apex/FirstConsumer/EventIn.

A client can also get status information of the REST server using /Status, e.g. http://localhost:12345/apex/FirstConsumer/Status.

REST Server Stand-alone

We need to configure a REST server input and a REST server output. Input and output are associated with each other via their names.

Timeouts for REST calls need to be set carefully. If they are too short, the call might time out before a policy has finished creating an event.

The following example configures an input named MyConsumer and associates an output named MyProducer with it.

"eventInputParameters": {
  "MyConsumer": {
    "carrierTechnologyParameters" : {
      "carrierTechnology" : "RESTSERVER", (1)
      "parameterClassName" :
        "org.onap.policy.apex.plugins.event.carrier.restserver.RESTServerCarrierTechnologyParameters",
      "parameters" : {
        "standalone" : true, (2)
        "host" : "localhost", (3)
        "port" : 12345 (4)
      }
    },
    "eventProtocolParameters":{
      "eventProtocol" : "JSON" (5)
    },
    "synchronousMode"    : true, (6)
    "synchronousPeer"    : "MyProducer", (7)
    "synchronousTimeout" : 500 (8)
  }
}

1

set REST server as carrier technology

2

set the server as stand-alone

3

set the server host

4

set the server listen port

5

use JSON event protocol

6

activate synchronous mode

7

associate an output MyProducer

8

set a timeout of 500 milliseconds

The following example configures the output named MyProducer and associates the input MyConsumer with it. Note that for the output there are no more parameters (such as host or port), since they are already configured in the associated input.

"eventOutputParameters": {
  "MyProducer": {
    "carrierTechnologyParameters":{
      "carrierTechnology" : "RESTSERVER",
      "parameterClassName" :
        "org.onap.policy.apex.plugins.event.carrier.restserver.RESTServerCarrierTechnologyParameters"
    },
    "eventProtocolParameters":{
      "eventProtocol" : "JSON"
    },
    "synchronousMode"    : true,
    "synchronousPeer"    : "MyConsumer",
    "synchronousTimeout" : 500
  }
}
REST Server Stand-alone, multi input

Any number of input/output pairs for REST servers can be configured. For instance, we can configure an input FirstConsumer with an output FirstProducer and an input SecondConsumer with an output SecondProducer. What is important is that inputs and outputs are always configured in pairs.

REST Server Stand-alone in Servlet

If APEX is executed as a servlet, e.g. inside Tomcat, the configuration becomes easier since the plugin can now use Tomcat as the REST server. In this scenario, there are no parameters (port, host, etc.), and the key standalone must not be used (or must be set to false).

For the Tomcat configuration, we need to add the REST server plugin, e.g.

<servlet>
  ...
  <init-param>
    ...
    <param-value>org.onap.policy.apex.plugins.event.carrier.restserver</param-value>
  </init-param>
  ...
</servlet>
REST Requestor IO

APEX can act as REST requestor on the input as well as on the output interface. The media type is application/json, so this plugin only works with the JSON Event protocol. This plugin allows APEX to send REST requests and to receive the reply of that request without tying up APEX resources while the request is being processed. The REST Requestor pairs a REST requestor producer and consumer together to handle the REST request and response. The REST request is created from an APEX output event and the REST response is input into APEX as a new input event.

REST Requestor Output (REST Request Producer)

APEX sends a REST request when events are output by APEX; the REST request configuration is specified on the REST Request Consumer (see below).

"carrierTechnologyParameters": {
  "carrierTechnology": "RESTREQUESTOR", (1)
  "parameterClassName": "org.onap.policy.apex.plugins.event.carrier.restrequestor.RESTRequestorCarrierTechnologyParameters"
},

1

set REST requestor as carrier technology

The settings below are required on the producer to define the event that triggers the REST request and to specify the peered consumer configuration for the REST request, for example:

"eventNameFilter": "GuardRequestEvent", (1)
"requestorMode": true, (2)
"requestorPeer": "GuardRequestorConsumer", (3)
"requestorTimeout": 500 (4)

1

a filter on the event

2

requestor mode must be set to true

3

the peered consumer for REST requests; that consumer specifies the full configuration for REST requests

4

the request timeout in milliseconds, overridden by the timeout on the consumer if that is set; optional, defaults to 500 milliseconds
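Putting the fragments together, a complete producer entry could look like the following sketch; the entry name GuardRequestorProducer is taken from the requestorPeer value used on the consumer side, and the JSON event protocol follows from the media type stated above:

"eventOutputParameters": {
  "GuardRequestorProducer": {
    "carrierTechnologyParameters": {
      "carrierTechnology": "RESTREQUESTOR",
      "parameterClassName": "org.onap.policy.apex.plugins.event.carrier.restrequestor.RESTRequestorCarrierTechnologyParameters"
    },
    "eventProtocolParameters": {
      "eventProtocol": "JSON"
    },
    "eventNameFilter": "GuardRequestEvent",
    "requestorMode": true,
    "requestorPeer": "GuardRequestorConsumer",
    "requestorTimeout": 500
  }
}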

REST Requestor Input (REST Request Consumer)

APEX will connect to a given URL to issue a REST request and wait for a REST response. The URL can be configured statically or tagged, e.g. http://example.{site}.org:8080/{trig}/events; all tags such as site and trig in the URL need to be set in the properties object available to the tasks. In addition, the keys must exactly match the tags defined in the URL. The scope of the properties object is per HTTP call; hence, key/value pairs set in the properties object by a task are only available for that specific HTTP call. The httpCodeFilter is used for filtering the status code and can be configured as a regular expression string. The default httpCodeFilter is “[2][0-9][0-9]”, matching successful response codes. A response whose HTTP status code matches the given regular expression is forwarded to the task; otherwise it is logged as a failure.

"carrierTechnologyParameters": {
  "carrierTechnology": "RESTREQUESTOR", (1)
  "parameterClassName": "org.onap.policy.apex.plugins.event.carrier.restrequestor.RESTRequestorCarrierTechnologyParameters",
  "parameters": {
    "url": "http://localhost:54321/some/path/to/rest/resource", (2)
    "url": "http://localhost:54321/{site}/path/to/rest/{resValue}", (2')
    "httpMethod": "POST", (3)
    "requestorMode": true, (4)
    "requestorPeer": "GuardRequestorProducer", (5)
    "restRequestTimeout": 2000, (6)
    "httpCodeFilter" : "[2][0-9][0-9]" (7)
    "httpHeaders" : [ (8)
       ["Keep-Alive", "300"],
       ["Cache-Control", "no-cache"]
    ]
  }
},

1

set REST requestor as carrier technology

2

the static URL of the HTTP server for events

2’

the tagged URL of the HTTP server for events

3

the HTTP method to use (GET/PUT/POST/DELETE), optional, defaults to GET

4

requestor mode must be set to true

5

the peered producer for REST requests; that producer specifies the APEX output event that triggers the REST request

6

request timeout in milliseconds, overrides any value set in the REST Requestor Producer; optional, defaults to 500 milliseconds

7

the httpCodeFilter for filtering status codes, optional, defaults to [2][0-9][0-9]

8

HTTP headers to use on the REST request, optional

Further settings may be required on the consumer to define the input event that is produced and forwarded into APEX, for example:

"eventName": "GuardResponseEvent", (1)
"eventNameFilter": "GuardResponseEvent" (2)

1

the event name

2

a filter on the event
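Analogously, a complete consumer entry could look like the following sketch; the entry name GuardRequestorConsumer matches the requestorPeer set on the producer above, and the JSON event protocol is again assumed:

"eventInputParameters": {
  "GuardRequestorConsumer": {
    "carrierTechnologyParameters": {
      "carrierTechnology": "RESTREQUESTOR",
      "parameterClassName": "org.onap.policy.apex.plugins.event.carrier.restrequestor.RESTRequestorCarrierTechnologyParameters",
      "parameters": {
        "url": "http://localhost:54321/some/path/to/rest/resource",
        "httpMethod": "POST",
        "requestorMode": true,
        "requestorPeer": "GuardRequestorProducer",
        "restRequestTimeout": 2000
      }
    },
    "eventProtocolParameters": {
      "eventProtocol": "JSON"
    },
    "eventName": "GuardResponseEvent",
    "eventNameFilter": "GuardResponseEvent"
  }
}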

gRPC IO

APEX can send requests over gRPC on the output side and receive responses back on the input side. This can be used to send requests to CDS over gRPC. The media type is application/json, so this plugin only works with the JSON Event protocol.

gRPC Output

APEX will connect to a given host to send a request over gRPC.

"carrierTechnologyParameters": {
  "carrierTechnology": "GRPC", (1)
  "parameterClassName": "org.onap.policy.apex.plugins.event.carrier.grpc.GrpcCarrierTechnologyParameters",
  "parameters": {
    "host": "cds-blueprints-processor-grpc", (2)
    "port": 9111, (2')
    "username": "ccsdkapps", (3)
    "password": ccsdkapps, (4)
    "timeout" : 10 (5)
  }
},

1

set GRPC as carrier technology

2

the host to which the request is sent

2’

the value for port

3

username required to initiate connection

4

password required to initiate connection

5

the timeout value for completing the request

Further settings are required on the producer to define the event that is requested, for example:

"eventName": "GRPCRequestEvent", (1)
"eventNameFilter": "GRPCRequestEvent", (2)
"requestorMode": true, (3)
"requestorPeer": "GRPCRequestConsumer", (4)
"requestorTimeout": 500 (5)

1

the event name

2

a filter on the event

3

the mode of the requestor

4

a peer for the requestor

5

a general request timeout

gRPC Input

APEX will connect to the host specified on the producer side and receive the response back on the consumer side.

"carrierTechnologyParameters": {
  "carrierTechnology": "GRPC", (1)
  "parameterClassName": "org.onap.policy.apex.plugins.event.carrier.grpc.GrpcCarrierTechnologyParameters"
},

1

set GRPC as carrier technology

Further settings are required on the consumer to define the event that is requested, for example:

"eventNameFilter": "GRPCResponseEvent", (1)
"requestorMode": true, (2)
"requestorPeer": "GRPCRequestProducer", (3)
"requestorTimeout": 500 (4)

1

a filter on the event

2

the mode of the requestor

3

a peer for the requestor

4

a general request timeout
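For illustration, the gRPC fragments above can be assembled into a paired output/input configuration; the entry names are taken from the requestorPeer values, and the JSON event protocol follows from the media type stated above:

"eventOutputParameters": {
  "GRPCRequestProducer": {
    "carrierTechnologyParameters": {
      "carrierTechnology": "GRPC",
      "parameterClassName": "org.onap.policy.apex.plugins.event.carrier.grpc.GrpcCarrierTechnologyParameters",
      "parameters": {
        "host": "cds-blueprints-processor-grpc",
        "port": 9111,
        "username": "ccsdkapps",
        "password": "ccsdkapps",
        "timeout": 10
      }
    },
    "eventProtocolParameters": {
      "eventProtocol": "JSON"
    },
    "eventName": "GRPCRequestEvent",
    "eventNameFilter": "GRPCRequestEvent",
    "requestorMode": true,
    "requestorPeer": "GRPCRequestConsumer",
    "requestorTimeout": 500
  }
},
"eventInputParameters": {
  "GRPCRequestConsumer": {
    "carrierTechnologyParameters": {
      "carrierTechnology": "GRPC",
      "parameterClassName": "org.onap.policy.apex.plugins.event.carrier.grpc.GrpcCarrierTechnologyParameters"
    },
    "eventProtocolParameters": {
      "eventProtocol": "JSON"
    },
    "eventNameFilter": "GRPCResponseEvent",
    "requestorMode": true,
    "requestorPeer": "GRPCRequestProducer",
    "requestorTimeout": 500
  }
}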

Event Protocols, Format and Encoding

Event protocols define what event formats APEX can receive (input) and should send (output). They can be used in any combination for input and output, unless further restricted by a carrier technology plugin (for instance for JMS output). There can only be one event protocol per event plugin.

Supported input event protocols are:

  • JSON, the event as a JSON string

  • APEX, an APEX event

  • JMS object, the event as a JMS object,

  • JMS text, the event as a JMS text,

  • XML, the event as an XML string,

  • YAML, the event as YAML text

Supported output event protocols are:

  • JSON, the event as a JSON string

  • APEX, an APEX event

  • JMS object, the event as a JMS object,

  • JMS text, the event as a JMS text,

  • XML, the event as an XML string,

  • YAML, the event as YAML text

New event protocols can be added as plugins to APEX or developed outside APEX and added to an APEX deployment.

JSON Event

The event protocol for JSON encoding does not require a specific plugin; it is supported by default. Furthermore, there is no difference in the configuration for the input and output interface.

For an input, APEX requires a well-formed JSON string. Well-formed here means according to the event definitions of a policy. Any JSON string that is not defined as a trigger event (consume) will not be consumed and errors will be thrown. For output JSON events, APEX will always produce valid JSON strings according to the definition in the policy model.

The following JSON shows the configuration.

"eventProtocolParameters":{
  "eventProtocol" : "JSON"
}

For JSON events, there are a few more optional parameters, which allow a mapping for standard event fields to be defined. An APEX event must have the fields name, version, source, and target defined. Sometimes it is not possible to configure a trigger or actioning system to use those fields. However, they might be present in an event generated outside APEX (or used outside APEX), just with different names. To configure APEX to map between the different field names, simply add the following parameters to a JSON event:

"eventProtocolParameters":{
  "eventProtocol" : "JSON",
  "nameAlias"     : "policyName", (1)
  "versionAlias"  : "policyVersion", (2)
  "sourceAlias"   : "from", (3)
  "targetAlias"   : "to", (4)
  "nameSpaceAlias": "my.name.space" (5)
}

1

mapping for the name field, here from a field called policyName

2

mapping for the version field, here from a field called policyVersion

3

mapping for the source field, here from a field called from (only for an input event)

4

mapping for the target field, here from a field called to (only for an output event)

5

mapping for the nameSpace field, here from a field called my.name.space
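For illustration, with the alias configuration above an incoming event such as the following sketch (the field values and the payload field are assumptions) would be mapped onto the standard APEX fields name, version, source, target, and nameSpace:

{
  "policyName"    : "SamplePolicy",
  "policyVersion" : "0.0.1",
  "from"          : "Outside",
  "to"            : "Match",
  "my.name.space" : "org.onap.policy.apex.sample.events",
  "TestSlogan"    : "an example payload field"
}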

APEX Event

The event protocol for APEX events does not require a specific plugin; it is supported by default. Furthermore, there is no difference in the configuration for the input and output interface.

For input and output APEX uses APEX events.

The following JSON shows the configuration.

"eventProtocolParameters":{
  "eventProtocol" : "APEX"
}
JMS Event

The event protocol for JMS is provided by the APEX JMS plugin. The plugin supports encoding as JSON text or as object. There is no difference in the configuration for the input and output interface.

JMS Text

If used as input, APEX will take a JMS message and extract a JSON string, then proceed as if a JSON event was received. If used as output, APEX will take the event produced by a policy, create a JSON string, and then wrap it into a JMS message.

The configuration for JMS text is as follows:

"eventProtocolParameters":{
  "eventProtocol" : "JMSTEXT",
  "parameterClassName" :
    "org.onap.policy.apex.plugins.event.protocol.jms.JMSTextEventProtocolParameters"
}
JMS Object

If used as input, APEX will take a JMS message, extract a Java Bean from the ObjectMessage message, construct an APEX event, and put the bean on the APEX event as a parameter. If used as output, APEX will take the event produced by a policy, create a Java Bean, and send it as a JMS message.

The configuration for JMS object is as follows:

"eventProtocolParameters":{
  "eventProtocol" : "JMSOBJECT",
  "parameterClassName" :
    "org.onap.policy.apex.plugins.event.protocol.jms.JMSObjectEventProtocolParameters"
}
YAML Event

The event protocol for YAML is provided by the APEX YAML plugin. There is no difference in the configuration for the input and output interface.

If used as input, APEX will consume events as YAML and map them to policy trigger events. YAML that is not well-formed and trigger events that are not understood will be rejected. If used as output, APEX produces YAML-encoded events from the events a policy produces. Those events will always be well-formed according to the definition in the policy model.

The following code shows the configuration.

"eventProtocolParameters":{
  "eventProtocol" : "XML",
  "parameterClassName" :
      "org.onap.policy.apex.plugins.event.protocol.yaml.YamlEventProtocolParameters"
}
XML Event

The event protocol for XML is provided by the APEX XML plugin. There is no difference in the configuration for the input and output interface.

If used as input, APEX will consume events as XML and map them to policy trigger events. XML that is not well-formed and trigger events that are not understood will be rejected. If used as output, APEX produces XML-encoded events from the events a policy produces. Those events will always be well-formed according to the definition in the policy model.

The following code shows the configuration.

"eventProtocolParameters":{
  "eventProtocol" : "XML",
  "parameterClassName" :
      "org.onap.policy.apex.plugins.event.protocol.xml.XMLEventProtocolParameters"
}
A configuration example

The following example loads all available plug-ins.

Events are consumed from a Websocket, APEX as client. Consumed event format is JSON.

Events are produced to Kafka. Produced event format is XML.

{
  "engineServiceParameters" : {
    "name"          : "MyApexEngine",
    "version"        : "0.0.1",
    "id"             :  45,
    "instanceCount"  : 4,
    "deploymentPort" : 12345,
    "engineParameters"    : {
      "executorParameters" : {
        "JAVASCRIPT" : {
          "parameterClassName" :
              "org.onap.policy.apex.plugins.executor.javascript.JavascriptExecutorParameters"
        },
        "JYTHON" : {
          "parameterClassName" :
              "org.onap.policy.apex.plugins.executor.jython.JythonExecutorParameters"
        },
        "JRUBY" : {
          "parameterClassName" :
              "org.onap.policy.apex.plugins.executor.jruby.JrubyExecutorParameters"
        },
        "JAVA" : {
          "parameterClassName" :
              "org.onap.policy.apex.plugins.executor.java.JavaExecutorParameters"
        },
        "MVEL" : {
          "parameterClassName" :
              "org.onap.policy.apex.plugins.executor.mvel.MVELExecutorParameters"
        }
      },
      "contextParameters" : {
        "parameterClassName" :
            "org.onap.policy.apex.context.parameters.ContextParameters",
        "schemaParameters" : {
          "Avro":{
             "parameterClassName" :
                 "org.onap.policy.apex.plugins.context.schema.avro.AvroSchemaHelperParameters"
          }
        }
      }
    }
  },
  "producerCarrierTechnologyParameters" : {
    "carrierTechnology" : "KAFKA",
    "parameterClassName" :
        "org.onap.policy.apex.plugins.event.carrier.kafka.KAFKACarrierTechnologyParameters",
    "parameters" : {
      "bootstrapServers"  : "localhost:49092",
      "acks"              : "all",
      "retries"           : 0,
      "batchSize"         : 16384,
      "lingerTime"        : 1,
      "bufferMemory"      : 33554432,
      "producerTopic"     : "apex-out",
      "keySerializer"     : "org.apache.kafka.common.serialization.StringSerializer",
      "valueSerializer"   : "org.apache.kafka.common.serialization.StringSerializer"
    }
  },
  "producerEventProtocolParameters" : {
    "eventProtocol" : "XML",
         "parameterClassName" :
             "org.onap.policy.apex.plugins.event.protocol.xml.XMLEventProtocolParameters"
  },
  "consumerCarrierTechnologyParameters" : {
    "carrierTechnology" : "WEBSOCKET",
    "parameterClassName" :
        "org.onap.policy.apex.plugins.event.carrier.websocket.WEBSOCKETCarrierTechnologyParameters",
    "parameters" : {
      "host" : "localhost",
      "port" : 88888
    }
  },
  "consumerEventProtocolParameters" : {
    "eventProtocol" : "JSON"
  }
}

Engine and Applications of the APEX System

Introduction to APEX Engine and Applications

The core of APEX is the APEX Engine, also known as the APEX Policy Engine or the APEX PDP (since it is in fact a Policy Decision Point). Besides this engine, an APEX system comes with a few applications intended to help with policy authoring, deployment, and execution.

The engine itself and most applications are started from the command line with command line arguments. This is called a Command Line Interface (CLI). Some applications require an installation on a webserver, as for instance the REST Editor. Those applications can be accessed via a web browser.

You can also use the available APEX APIs and applications to develop other applications as required. This includes policy languages (and associated parsers and compilers / interpreters), GUIs to access APEX or to define policies, clients to connect to APEX, etc.

For this documentation, we assume an installation of APEX as a full system based on a current ONAP release.

CLI on Unix, Windows, and Cygwin

A note on APEX CLI applications: all applications and the engine itself have been deployed and tested on different operating systems: Red Hat, Ubuntu, Debian, Mac OSX, Windows, Cygwin. Each operating system comes with its own way of configuring and executing Java. The main items here are:

  • For UNIX systems (RHL, Ubuntu, Debian, Mac OSX), the provided bash scripts work as expected with absolute paths (e.g. /opt/app/policy/apex-pdp/apex-pdp-2.0.0-SNAPSHOT/examples), indirect and linked paths (e.g. ../apex/apex), and path substitutions using environment settings (e.g. $APEX_HOME/bin/)

  • For Windows systems, the provided batch files (.bat) work as expected with absolute paths (e.g. C:\apex\apex-2.0.0-SNAPSHOT\examples), and path substitutions using environment settings (e.g. %APEX_HOME%\bin\)

  • For Cygwin systems we assume a standard Cygwin installation with standard tools (mainly bash) using a Windows Java installation. This means that the bash scripts can be used as in UNIX; however, any argument pointing to files and directories needs to use either a DOS path (e.g. C:\apex\apex-2.0.0-SNAPSHOT\examples\config...) or the command cygpath with a mixed option. The reason for that is: Cygwin executes Java using UNIX paths but then runs Java as a DOS/WINDOWS process, which requires DOS paths for file access.

The APEX Engine

The APEX engine can be started in different ways, depending on your requirements. All scripts are located in the APEX bin directory.

On UNIX and Cygwin systems use:

  • apexEngine.sh - this script will

    • Test if $APEX_USER is set and if the user exists, terminate with an error otherwise

    • Test if $APEX_HOME is set. If not set, it will use the default setting /opt/app/policy/apex-pdp/apex-pdp. The set directory is then tested for existence; the script will terminate if it does not exist.

    • When all tests are passed successfully, the script will call apexApps.sh with arguments to start the APEX engine.

  • apexApps.sh engine - this is the general APEX application launcher, which will

    • Start the engine with the argument engine

    • Test if $APEX_HOME is set and points to an existing directory. If not set or the directory does not exist, the script terminates.

    • Not test for any settings of $APEX_USER.

On Windows systems use apexEngine.bat and apexApps.bat engine respectively. Note: none of the windows batch files will test for %APEX_USER%.

Summary of alternatives to start the APEX Engine:

Unix, Cygwin

Windows

# $APEX_HOME/bin/apexEngine.sh [args]
# $APEX_HOME/bin/apexApps.sh engine [args]
> %APEX_HOME%\bin\apexEngine.bat [args]
> %APEX_HOME%\bin\apexApps.bat engine [args]

The APEX engine comes with a few CLI arguments; the main one sets the TOSCA policy file for execution. The TOSCA policy file is always required. The option -h prints a help screen.

usage: org.onap.policy.apex.service.engine.main.ApexMain [options...]
options
-p,--tosca-policy-file <TOSCA_POLICY_FILE>     the full path to the ToscaPolicy file to use.
-h,--help                                      outputs the usage of this command
-v,--version                                   outputs the version of Apex
The APEX CLI Editor

The CLI Editor allows policies to be defined from the command line. The application uses a simple language and supports all elements of an APEX policy. It can be used in two different ways:

  • non-interactive, specifying a file with the commands to create a policy

  • interactive, using the editor's CLI to create a policy

When a policy is fully specified, the editor will generate the APEX core policy specification in JSON. This core specification is called the policy model in the APEX engine and can be used directly with the APEX engine.

On UNIX and Cygwin systems use:

  • apexCLIEditor.sh - simply starts the CLI editor, arguments to the script determine the mode of the editor

  • apexApps.sh cli-editor - simply starts the CLI editor, arguments to the script determine the mode of the editor

On Windows systems use:

  • apexCLIEditor.bat - simply starts the CLI editor, arguments to the script determine the mode of the editor

  • apexApps.bat cli-editor - simply starts the CLI editor, arguments to the script determine the mode of the editor

Summary of alternatives to start the APEX CLI Editor:

Unix, Cygwin

Windows

# $APEX_HOME/bin/apexCLIEditor.sh [args]
# $APEX_HOME/bin/apexApps.sh cli-editor [args]
> %APEX_HOME%\bin\apexCLIEditor.bat [args]
> %APEX_HOME%\bin\apexApps.bat cli-editor [args]

The option -h provides a help screen with all command line arguments.

usage: org.onap.policy.apex.auth.clieditor.ApexCLIEditorMain [options...]
options
 -a,--model-props-file <MODEL_PROPS_FILE>       name of the apex model properties file to use
 -c,--command-file <COMMAND_FILE>               name of a file containing editor commands to run into the editor
 -h,--help                                      outputs the usage of this command
 -i,--input-model-file <INPUT_MODEL_FILE>       name of a file that contains an input model for the editor
 -if,--ignore-failures <IGNORE_FAILURES_FLAG>   true or false, ignore failures of commands in command files and continue
                                                executing the command file
 -l,--log-file <LOG_FILE>                       name of a file that will contain command logs from the editor, will log
                                                to standard output if not specified or suppressed with "-nl" flag
 -m,--metadata-file <CMD_METADATA_FILE>         name of the command metadata file to use
 -nl,--no-log                                   if specified, no logging or output of commands to standard output or log
                                                file is carried out
 -nm,--no-model-output                          if specified, no output of a model to standard output or model output
                                                file is carried out, the user can use the "save" command in a script to
                                                save a model
 -o,--output-model-file <OUTPUT_MODEL_FILE>     name of a file that will contain the output model for the editor, will
                                                output model to standard output if not specified or suppressed with
                                                "-nm" flag
 -wd,--working-directory <WORKING_DIRECTORY>    the working directory that is the root for the CLI editor and is the
                                                root from which to look for included macro files
The APEX CLI Tosca Editor

As per the new Policy LifeCycle API, policies are expected to be defined as a ToscaServiceTemplate. The CLI Tosca Editor is an extended version of the APEX CLI Editor which can generate policies in the ToscaServiceTemplate format.

The APEX config file (.json), command file (.apex), and TOSCA template skeleton (.json) file paths need to be passed as input arguments to the CLI Tosca Editor. A policy in ToscaServiceTemplate format is generated as the output. This can be used as the input to the Policy API for creating policies.

On UNIX and Cygwin systems use:

  • apexCLIToscaEditor.sh - starts the CLI Tosca editor, all the arguments supported by the basic CLI Editor are supported in addition to the mandatory arguments needed to generate ToscaServiceTemplate.

  • apexApps.sh cli-tosca-editor - starts the CLI Tosca editor, all the arguments supported by the basic CLI Editor are supported in addition to the mandatory arguments needed to generate ToscaServiceTemplate.

On Windows systems use:

  • apexCLIToscaEditor.bat - starts the CLI Tosca editor, all the arguments supported by the basic CLI Editor are supported in addition to the mandatory arguments needed to generate ToscaServiceTemplate.

  • apexApps.bat cli-tosca-editor - starts the CLI Tosca editor, all the arguments supported by the basic CLI Editor are supported in addition to the mandatory arguments needed to generate ToscaServiceTemplate.

Summary of alternatives to start the APEX CLI Tosca Editor:

Unix, Cygwin

Windows

# $APEX_HOME/bin/apexCLIToscaEditor.sh [args]
# $APEX_HOME/bin/apexApps.sh cli-tosca-editor [args]
> %APEX_HOME%\bin\apexCLIToscaEditor.bat [args]
> %APEX_HOME%\bin\apexApps.bat cli-tosca-editor [args]

The option -h provides a help screen with all command line arguments.

usage: org.onap.policy.apex.auth.clieditor.tosca.ApexCliToscaEditorMain [options...]
options
 -a,--model-props-file <MODEL_PROPS_FILE>         name of the apex model properties file to use
 -ac,--apex-config-file <APEX_CONFIG_FILE>        name of the file containing apex configuration details
 -c,--command-file <COMMAND_FILE>                 name of a file containing editor commands to run into the editor
 -h,--help                                        outputs the usage of this command
 -i,--input-model-file <INPUT_MODEL_FILE>         name of a file that contains an input model for the editor
 -if,--ignore-failures <IGNORE_FAILURES_FLAG>     true or false, ignore failures of commands in command files and
                                                  continue executing the command file
 -l,--log-file <LOG_FILE>                         name of a file that will contain command logs from the editor, will
                                                  log to standard output if not specified or suppressed with "-nl" flag
 -m,--metadata-file <CMD_METADATA_FILE>           name of the command metadata file to use
 -nl,--no-log                                     if specified, no logging or output of commands to standard output or
                                                  log file is carried out
 -ot,--output-tosca-file <OUTPUT_TOSCA_FILE>      name of a file that will contain the output ToscaServiceTemplate
 -t,--tosca-template-file <TOSCA_TEMPLATE_FILE>   name of the input file containing tosca template which needs to be
                                                  updated with policy
 -wd,--working-directory <WORKING_DIRECTORY>      the working directory that is the root for the CLI editor and is the
                                                  root from which to look for included macro files

An example command to run the APEX CLI Tosca editor on a Windows machine is given below.

%APEX_HOME%\bin\apexCLIToscaEditor.bat -c %APEX_HOME%\examples\PolicyModel.apex -ot %APEX_HOME%\examples\test.json -l %APEX_HOME%\examples\test.log -ac %APEX_HOME%\examples\RESTServerStandaloneJsonEvent.json -t %APEX_HOME%\examples\ToscaTemplate.json
The APEX Client

The APEX Client combines the Policy Editor, the Monitoring Client, and the Deployment Client into a single application. The standard way to use the APEX Full Client is via an installation of the war file on a webserver. However, the Full Client can also be started via the command line. This will start a Grizzly webserver with the war deployed. Access to the Full Client is then via the provided URL.

On UNIX and Cygwin systems use:

  • apexApps.sh full-client - simply starts the webserver with the Full Client

On Windows systems use:

  • apexApps.bat full-client - simply starts the webserver with the Full Client

The option -h provides a help screen with all command line arguments.

usage: org.onap.policy.apex.client.full.rest.ApexServicesRestMain [options...]
-h,--help                        outputs the usage of this command
-p,--port <PORT>                 port to use for the Apex Services REST calls
-t,--time-to-live <TIME_TO_LIVE> the amount of time in seconds that the server will run for before terminating

If the Full Client is started without any arguments the final messages will look similar to this:

Apex Editor REST endpoint (ApexServicesRestMain: Config=[ApexServicesRestParameters: URI=http://localhost:18989/apexservices/, TTL=-1sec], State=READY) starting at http://localhost:18989/apexservices/ . . .
Sep 05, 2018 11:28:28 PM org.glassfish.grizzly.http.server.NetworkListener start
INFO: Started listener bound to [localhost:18989]
Sep 05, 2018 11:28:28 PM org.glassfish.grizzly.http.server.HttpServer start
INFO: [HttpServer] Started.
Apex Editor REST endpoint (ApexServicesRestMain: Config=[ApexServicesRestParameters: URI=http://localhost:18989/apexservices/, TTL=-1sec], State=RUNNING) started at http://localhost:18989/apexservices/

The last line states the URL on which the Monitoring Client can be accessed. The example above stated http://localhost:18989/apexservices. In a web browser use the URL http://localhost:18989.

The APEX Application Launcher

The standard applications (Engine and CLI Editor) come with dedicated start scripts. For all other APEX applications, we provide an application launcher.

On UNIX and Cygwin systems use:

  • apexApps.sh - simply starts the application launcher

On Windows systems use:

  • apexApps.bat - simply starts the application launcher

Summary of alternatives to start the APEX application launcher:

Unix, Cygwin

Windows

# $APEX_HOME/bin/apexApps.sh [args]
> %APEX_HOME%\bin\apexApps.bat [args]

The option -h provides a help screen with all launcher command line arguments.

apexApps.sh - runs APEX applications

       Usage:  apexApps.sh [options] | [<application> [<application options>]]

       Options
         -d <app>    - describes an application
         -l          - lists all applications supported by this script
         -h          - this help screen

Using -l lists all known applications the launcher can start.

apexApps.sh: supported applications:
 --> ws-echo engine eng-monitoring full-client eng-deployment tpl-event-json model-2-cli rest-editor cli-editor ws-console

Using the -d <name> option describes the named application, for instance for the ws-console:

apexApps.sh: application 'ws-console'
 --> a simple console sending events to APEX, connect to APEX consumer port

Launching an application is done by calling the script with only the application name and any CLI arguments for the application. For instance, starting the ws-echo application with port 8888:

apexApps.sh ws-echo -p 8888
Application: Create Event Templates

Status: Experimental

This application takes a policy model (JSON or XML encoded) and generates templates for events in JSON format. This can help when a policy defines rather complex trigger or action events or complex events between states. The application can produce events for the types: stimuli (policy trigger events), internal (events between policy states), and response (action events).

Unix, Cygwin

Windows

# $APEX_HOME/bin/apexApps.sh tpl-event-json [args]
> %APEX_HOME%\bin\apexApps.bat tpl-event-json [args]

The option -h provides a help screen.

gen-model2event v{release-version} - generates JSON templates for events generated from a policy model
usage: gen-model2event
 -h,--help                 prints this help and usage screen
 -m,--model <MODEL-FILE>   set the input policy model file
 -t,--type <TYPE>          set the event type for generation, one of:
                           stimuli (trigger events), response (action
                           events), internal (events between states)
 -v,--version              prints the application version

The created templates are not valid events; instead, they use markup for values that need to be changed to actual values. For instance, running the tool with the Sample Domain policy model as:

apexApps.sh tpl-event-json -m $APEX_HOME/examples/models/SampleDomain/SamplePolicyModelJAVA.json -t stimuli

will produce the following status messages:

gen-model2event: starting Event generator
 --> model file: examples/models/SampleDomain/SamplePolicyModelJAVA.json
 --> type: stimuli

and then run the generator application producing two event templates. The first template is called Event0000.

{
        "name" : "Event0000",
        "nameSpace" : "org.onap.policy.apex.sample.events",
        "version" : "0.0.1",
        "source" : "Outside",
        "target" : "Match",
        "TestTemperature" : ###double: 0.0###,
        "TestTimestamp" : ###long: 0###,
        "TestMatchCase" : ###integer: 0###,
        "TestSlogan" : "###string###"
}

The values for the keys are marked with # and the expected type of the value. To create an actual stimuli event, all these markers need to be changed to actual values, for instance:

{
        "name" : "Event0000",
        "nameSpace" : "org.onap.policy.apex.sample.events",
        "version" : "0.0.1",
        "source" : "Outside",
        "target" : "Match",
        "TestTemperature" : 25,
        "TestTimestamp" : 123456789123456789,
        "TestMatchCase" : 1,
        "TestSlogan" : "Testing the Match Case with Temperature 25"
}
Application: Convert a Policy Model to CLI Editor Commands

Status: Experimental

This application takes a policy model (JSON or XML encoded) and generates commands for the APEX CLI Editor. This effectively reverses a policy specification realized with the CLI Editor.

Unix, Cygwin

Windows

# $APEX_HOME/bin/apexApps.sh model-2-cli [args]
> %APEX_HOME%\bin\apexApps.bat model-2-cli [args]

The option -h provides a help screen.

usage: gen-model2cli
 -h,--help                 prints this help and usage screen
 -m,--model <MODEL-FILE>   set the input policy model file
 -sv,--skip-validation     switch off validation of the input file
 -v,--version              prints the application version

For instance, running the tool with the Sample Domain policy model as:

apexApps.sh model-2-cli -m $APEX_HOME/examples/models/SampleDomain/SamplePolicyModelJAVA.json

will produce the following status messages:

gen-model2cli: starting CLI generator
 --> model file: examples/models/SampleDomain/SamplePolicyModelJAVA.json

and then run the generator application producing all CLI Editor commands and printing them to standard out.

Application: Websocket Clients (Echo and Console)

Status: Production

The application launcher also provides a Websocket echo client and a Websocket console client. The echo client connects to APEX and prints all events it receives from APEX. The console client connects to APEX, reads input from the command line, and sends this input as events to APEX.

Unix, Cygwin

Windows

# $APEX_HOME/bin/apexApps.sh ws-echo [args]
# $APEX_HOME/bin/apexApps.sh ws-console [args]
> %APEX_HOME%\bin\apexApps.bat ws-echo [args]
> %APEX_HOME%\bin\apexApps.bat ws-console [args]

The arguments are the same for both applications:

  • -p defines the Websocket port to connect to (defaults to 8887)

  • -s defines the host on which a Websocket server is running (defaults to localhost)

A discussion on how to use these two applications to build an APEX system is detailed in HowTo-Websockets.

APEX Logging

Introduction to APEX Logging

All APEX components make extensive use of logging using the logging façade SLF4J with the backend Logback. Both are used off-the-shelf, so the standard documentation and configuration apply to APEX logging. For details on how to work with logback please see the logback manual.

The logback configuration file for APEX applications is $APEX_HOME/etc/logback.xml (Windows: %APEX_HOME%\etc\logback.xml). The logging backend is set to no debug, i.e. logs from the logging framework itself are hidden at runtime.

The configurable log levels work as expected:

  • error (or ERROR) is used for serious errors in the APEX runtime engine

  • warn (or WARN) is used for warnings, which in general can be ignored but might indicate some deeper problems

  • info (or INFO) is used to provide generally interesting messages for startup and policy execution

  • debug (or DEBUG) provides more details on startup and policy execution

  • trace (or TRACE) gives full details on every aspect of the APEX engine from start to end

The loggers can also be configured as expected. The standard configuration (after installing APEX) uses log level info on all APEX classes (components).

The applications and scripts in $APEX_HOME/bin (Windows: %APEX_HOME%\bin) are configured to use the logback configuration $APEX_HOME/etc/logback.xml (Windows: %APEX_HOME%\etc\logback.xml). There are multiple ways to use different logback configurations, for instance:

  • Maintain multiple configurations in etc, for instance a logback-debug.xml for deep debugging and a logback-production.xml for APEX in production mode, then copy the required configuration file to the used logback.xml prior to starting APEX

  • Edit the scripts in bin to use a different logback configuration file (only recommended if you are familiar with editing bash scripts or Windows batch files)

Standard Logging Configuration

The standard logging configuration defines a context APEX, which is used in the standard output pattern. The location for log files is defined in the property logDir and set to /var/log/onap/policy/apex-pdp. The standard status listener is set to NOP and the overall logback configuration is set to no debug.

<configuration debug="false">
  <statusListener class="ch.qos.logback.core.status.NopStatusListener" />

  <contextName>Apex</contextName>
  <property name="logDir" value="/var/log/onap/policy/apex-pdp/" />

  ...appenders
  ...loggers
</configuration>

The first appender defined is called STDOUT for logs to standard out.

<appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
  <encoder>
    <Pattern>%d %contextName [%t] %level %logger{36} - %msg%n</Pattern>
  </encoder>
</appender>

The root level logger then is set to the level info using the standard out appender.

<root level="info">
  <appender-ref ref="STDOUT" />
</root>

The second appender is called FILE. It writes logs to a file apex.log.

<appender name="FILE" class="ch.qos.logback.core.FileAppender">
  <file>${logDir}/apex.log</file>
  <encoder>
    <pattern>%d %-5relative [procId=${processId}] [%thread] %-5level %logger{26} - %msg %n %ex{full}</pattern>
  </encoder>
</appender>

The third appender is called CTXT_FILE. It writes logs to a file apex_ctxt.log.

<appender name="CTXT_FILE" class="ch.qos.logback.core.FileAppender">
  <file>${logDir}/apex_ctxt.log</file>
  <encoder>
    <pattern>%d %-5relative [procId=${processId}] [%thread] %-5level %logger{26} - %msg %n %ex{full}</pattern>
  </encoder>
</appender>

The last definitions are for specific loggers. The first logger captures all standard APEX classes. It is configured for log level info and uses the standard output and file appenders. The second logger captures APEX context classes responsible for context monitoring. It is configured for log level trace and uses the context file appender.

<logger name="org.onap.policy.apex" level="info" additivity="false">
  <appender-ref ref="STDOUT" />
  <appender-ref ref="FILE" />
</logger>

<logger name="org.onap.policy.apex.core.context.monitoring" level="TRACE" additivity="false">
  <appender-ref ref="CTXT_FILE" />
</logger>
Adding Logback Status and Debug

To activate logback status messages, change the status listener from NOP to, for instance, the console status listener.

<statusListener class="ch.qos.logback.core.status.OnConsoleStatusListener" />

To activate all logback debugging, for instance to debug a new logback configuration, activate the debug attribute in the configuration.

<configuration debug="true">
...
</configuration>
Logging External Components

Logback can also be configured to log any other external components APEX is using, provided they use the common logging framework.

For instance, the context component of APEX is using Infinispan and one can add a logger for this external component. The following example adds a logger for Infinispan using the standard output appender.

<logger name="org.infinispan" level="INFO" additivity="false">
  <appender-ref ref="STDOUT" />
</logger>

Another example is Apache Zookeeper. The following example adds a logger for Zookeeper using the standard output appender.

<logger name="org.apache.zookeeper.ClientCnxn" level="INFO" additivity="false">
  <appender-ref ref="STDOUT" />
</logger>
Configuring loggers for Policy Logic

The logging for the logic inside a policy (task logic, task selection logic, state finalizer logic) can be configured separately from standard logging. The logger for policy logic is org.onap.policy.apex.executionlogging. The following example defines

  • a new appender for standard out using a very simple pattern (simply the actual message)

  • a logger for policy logic to standard out using the new appender and the already described file appender.

<appender name="POLICY_APPENDER_STDOUT" class="ch.qos.logback.core.ConsoleAppender">
  <encoder>
    <pattern>policy: %msg\n</pattern>
  </encoder>
</appender>

<logger name="org.onap.policy.apex.executionlogging" level="info" additivity="false">
  <appender-ref ref="POLICY_APPENDER_STDOUT" />
  <appender-ref ref="FILE" />
</logger>

It is also possible to use specific logging for parts of policy logic. The following example defines a logger for task logic.

<logger name="org.onap.policy.apex.executionlogging.TaskExecutionLogging" level="TRACE" additivity="false">
  <appender-ref ref="POLICY_APPENDER_STDOUT" />
</logger>
Rolling File Appenders

Rolling file appenders are a good option for more complex logging in a production or complex testing APEX installation. The standard logback configuration can be used for these use cases. This section gives two examples, one for the standard logging and one for context logging.

First, the standard logging: the following example defines a rolling file appender that rolls over on a daily basis or whenever the file size reaches 100 MB.

<appender name="FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
  <file>${logDir}/apex.log</file>
  <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
    <!-- rollover daily -->
    <!-- <fileNamePattern>xstream-%d{yyyy-MM-dd}.%i.txt</fileNamePattern> -->
    <fileNamePattern>${logDir}/apex_%d{yyyy-MM-dd}.%i.log.gz
    </fileNamePattern>
    <maxHistory>4</maxHistory>
    <timeBasedFileNamingAndTriggeringPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedFNATP">
      <!-- or whenever the file size reaches 100MB -->
      <maxFileSize>100MB</maxFileSize>
    </timeBasedFileNamingAndTriggeringPolicy>
  </rollingPolicy>
  <encoder>
    <pattern>
      %d %-5relative [procId=${processId}] [%thread] %-5level %logger{26} - %msg %ex{full} %n
    </pattern>
  </encoder>
</appender>

A very similar configuration can be used for a rolling file appender logging APEX context.

<appender name="CTXT-FILE"
      class="ch.qos.logback.core.rolling.RollingFileAppender">
  <file>${logDir}/apex_ctxt.log</file>
  <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
    <fileNamePattern>${logDir}/apex_ctxt_%d{yyyy-MM-dd}.%i.log.gz
    </fileNamePattern>
    <maxHistory>4</maxHistory>
    <timeBasedFileNamingAndTriggeringPolicy
        class="ch.qos.logback.core.rolling.SizeAndTimeBasedFNATP">
      <maxFileSize>100MB</maxFileSize>
    </timeBasedFileNamingAndTriggeringPolicy>
  </rollingPolicy>
  <encoder>
    <pattern>
      %d %-5relative [procId=${processId}] [%thread] %-5level %logger{26} - %msg %ex{full} %n
    </pattern>
  </encoder>
</appender>
Example Configuration for Logging Logic

The following example shows a configuration that logs policy logic to standard out and a file (info). All other APEX components log to a file (debug). This configuration can be used in a pre-production phase with the APEX engine still running in a separate terminal to monitor policy execution. This logback configuration is in the APEX installation as etc/logback-logic.xml.

<configuration debug="false">
    <statusListener class="ch.qos.logback.core.status.NopStatusListener" />

    <contextName>Apex</contextName>
    <property name="logDir" value="/var/log/onap/policy/apex-pdp/" />

    <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <Pattern>%d %contextName [%t] %level %logger{36} - %msg%n</Pattern>
        </encoder>
    </appender>

    <appender name="FILE" class="ch.qos.logback.core.FileAppender">
        <file>${logDir}/apex.log</file>
        <encoder>
            <pattern>
                %d %-5relative [procId=${processId}] [%thread] %-5level%logger{26} - %msg %n %ex{full}
            </pattern>
        </encoder>
    </appender>

    <appender name="POLICY_APPENDER_STDOUT" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>policy: %msg\n</pattern>
        </encoder>
    </appender>

    <root level="error">
        <appender-ref ref="STDOUT" />
    </root>

    <logger name="org.onap.policy.apex" level="debug" additivity="false">
        <appender-ref ref="FILE" />
    </logger>

    <logger name="org.onap.policy.apex.executionlogging" level="info" additivity="false">
        <appender-ref ref="POLICY_APPENDER_STDOUT" />
        <appender-ref ref="FILE" />
    </logger>
</configuration>
Example Configuration for a Production Server

The following example shows a configuration that logs all APEX components, including policy logic, to a file (debug). This configuration can be used in a production phase with the APEX engine being executed as a service on a system without console output. This logback configuration is in the APEX installation as logback-server.xml.

<configuration debug="false">
    <statusListener class="ch.qos.logback.core.status.NopStatusListener" />

    <contextName>Apex</contextName>
    <property name="logDir" value="/var/log/onap/policy/apex-pdp/" />

    <appender name="FILE" class="ch.qos.logback.core.FileAppender">
        <file>${logDir}/apex.log</file>
        <encoder>
            <pattern>
                %d %-5relative [procId=${processId}] [%thread] %-5level%logger{26} - %msg %n %ex{full}
            </pattern>
        </encoder>
    </appender>

    <root level="debug">
        <appender-ref ref="FILE" />
    </root>

    <logger name="org.onap.policy.apex.executionlogging" level="debug" additivity="false">
        <appender-ref ref="FILE" />
    </logger>
</configuration>

Unsupported Features

This section documents some legacy and unsupported features in apex-pdp. The documentation here has not been updated for recent versions of apex-pdp. For example, the apex-pdp models specified in this example should now be in TOSCA format.

Building a System with Websocket Backend
Websockets

Websocket is a protocol that runs sockets over HTTP. Since it is in essence a socket, the connection is realized between a server (waiting for connections) and a client (connecting to a server). The server/client separation matters only for connection establishment; once connected, both sides can send and receive on the same socket (as any standard socket would allow).

Standard Websocket implementations are simple: there is no publish/subscribe and no special event handling. Most servers simply send all incoming messages to all connections. There is a PubSub definition on top of Websocket called WAMP. APEX does not support WAMP at the moment.

Websocket in Java

In Java, JSR 356 defines the standard Websocket API. This JSR is part of the Java EE 7 standard. For Java SE, several open-source implementations exist. Since Websockets are a simple, stable standard, most implementations are stable and ready to use. Many products support Websockets, such as Spring, JBoss, and Netty; there are also Kafka extensions for Websockets.

Websocket Example Code for Websocket clients (FOSS)

There are many implementations and examples for Websocket clients available on GitHub. If one is using Java EE 7, then one can also use the native Websocket implementation; good examples of simple Java SE clients and explanations of the native Java EE Websocket API are available online.

BCP: Websocket Configuration

It is probably best to configure APEX to act as the Websocket server for both the input (ingress, consume) and output (egress, produce) interfaces. This means that APEX starts Websocket servers on named ports and waits for clients to connect. Advantage: once APEX is running, all connectivity infrastructure is running as well. Consequence: if APEX is not running, everyone else is in the dark, too.

The best protocol to use is JSON strings: each event on any interface is then a string with a JSON encoding. A JSON string is a little slower than a byte encoding, but we doubt that this will be noticeable. A further advantage of JSON strings over Websockets with APEX starting the servers: it is very easy to connect web browsers to such a system. Simply connect the web browser to the APEX sockets and send/read JSON strings.

Once APEX is started, you simply connect Websocket clients to it and send/receive events. When APEX is terminated, the Websocket servers go down and the clients are disconnected. APEX does not (yet) support automatic client reconnection nor WAMP, so clients might need to be restarted or reconnected manually after an APEX restart.
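
As an illustration, the following minimal browser-side JavaScript sketch connects to an APEX consumer Websocket server and sends a single JSON-encoded event. The port matches the example configuration below; the event name and fields are purely illustrative.

// Sketch: connect a browser to an APEX consumer Websocket server and
// send one JSON-encoded event. Port 42450 matches the example
// configuration below; the event content here is hypothetical.
var ws = new WebSocket("ws://localhost:42450");
ws.onopen = function() {
  ws.send(JSON.stringify({
    "name"    : "SomeTriggerEvent",
    "version" : "0.0.1",
    "source"  : "browser",
    "target"  : "apex"
  }));
};
ws.onmessage = function(msg) {
  // events produced by APEX arrive as JSON strings
  console.log("received: " + msg.data);
};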

Demo with VPN Policy Model

We assume that you have an APEX installation using the full package, i.e. APEX with all examples, of version 0.5.6 or higher. We will use the VPN policy from the APEX examples here.

Now, have the following ready to start the demo:

  • 3 terminals on the host where APEX is running (we need 1 for APEX and 1 for each client)

  • the events in the file $APEX_HOME/examples/events/VPN/SetupEvents.json open in an editor (we need to send those events to APEX)

  • the events in the file $APEX_HOME/examples/events/VPN/Link09Events.json open in an editor (we need to send those events to APEX)

A Websocket Configuration for the VPN Domain

Create a new APEX configuration using the VPN policy model and configuring APEX as discussed above for Websockets. Copy the following configuration into $APEX_HOME/examples/config/VPN/Ws2WsServerAvroContextJsonEvent.json (for Windows use %APEX_HOME%\examples\config\VPN\Ws2WsServerAvroContextJsonEvent.json):

{
  "engineServiceParameters" : {
    "name"          : "VPNApexEngine",
    "version"        : "0.0.1",
    "id"             :  45,
    "instanceCount"  : 1,
    "deploymentPort" : 12345,
    "policyModelFileName" : "examples/models/VPN/VPNPolicyModelAvro.json",
    "engineParameters"    : {
      "executorParameters" : {
        "MVEL" : {
          "parameterClassName" : "org.onap.policy.apex.plugins.executor.mvel.MVELExecutorParameters"
        }
      },
      "contextParameters" : {
        "parameterClassName" : "org.onap.policy.apex.context.parameters.ContextParameters",
        "schemaParameters":{
          "Avro":{
            "parameterClassName" : "org.onap.policy.apex.plugins.context.schema.avro.AvroSchemaHelperParameters"
          }
        }
      }
    }
  },
  "producerCarrierTechnologyParameters" : {
    "carrierTechnology" : "WEBSOCKET",
    "parameterClassName" : "org.onap.policy.apex.plugins.event.carrier.websocket.WEBSOCKETCarrierTechnologyParameters",
    "parameters" : {
      "wsClient" : false,
      "port"     : 42452
    }
  },
  "producerEventProtocolParameters" : {
    "eventProtocol" : "JSON"
  },
  "consumerCarrierTechnologyParameters" : {
    "carrierTechnology" : "WEBSOCKET",
    "parameterClassName" : "org.onap.policy.apex.plugins.event.carrier.websocket.WEBSOCKETCarrierTechnologyParameters",
    "parameters" : {
      "wsClient" : false,
      "port"     : 42450
    }
  },
  "consumerEventProtocolParameters" : {
    "eventProtocol" : "JSON"
  }
}

Start APEX Engine

In a new terminal, start APEX with the new configuration for Websocket-Server ingress/egress:

#: $APEX_HOME/bin/apexApps.sh engine -c $APEX_HOME/examples/config/VPN/Ws2WsServerAvroContextJsonEvent.json
> %APEX_HOME%\bin\apexApps.bat engine -c %APEX_HOME%\examples\config\VPN\Ws2WsServerAvroContextJsonEvent.json

Wait for APEX to start; it takes a while to create all Websocket servers (about 8 seconds on a standard laptop without cached binaries). Depending on your logging configuration, you will see no, some, or many log messages. If APEX starts correctly, the last few messages you should see are:

2017-07-28 13:17:20,834 Apex [main] INFO c.e.a.s.engine.runtime.EngineService - engine model VPNPolicyModelAvro:0.0.1 added to the engine-AxArtifactKey:(name=VPNApexEngine-0,version=0.0.1)
2017-07-28 13:17:21,057 Apex [Apex-apex-engine-service-0:0] INFO c.e.a.s.engine.runtime.EngineService - Engine AxArtifactKey:(name=VPNApexEngine-0,version=0.0.1) processing ...
2017-07-28 13:17:21,296 Apex [main] INFO c.e.a.s.e.r.impl.EngineServiceImpl - Added the action listener to the engine
Started Apex service

APEX is running in the new terminal and will produce output when the policy is triggered/executed.

Run the Websocket Echo Client

The echo client is included in an APEX full installation. To run the client, open a new shell (Unix, Cygwin) or command prompt (cmd on Windows). Then use the APEX application launcher to start the client.

Important

APEX engine needs to run first. The example assumes that an APEX engine configured with Websocket as the producer carrier technology and JSON as the event protocol is started first.

Unix, Cygwin

Windows

# $APEX_HOME/bin/apexApps.sh ws-echo [args]
> %APEX_HOME%\bin\apexApps.bat ws-echo [args]

Use the following command line arguments to set the server and port of the Websocket server. The port should be the same as configured in the APEX engine. The server host should be the host on which the APEX engine is running.

  • -p defines the Websocket port to connect to (defaults to 8887)

  • -s defines the host on which a Websocket server is running (defaults to localhost)

Let’s assume that there is an APEX engine running, configured as a Websocket server for the producer side on port 42452, with JSON as the produce event protocol. If we start the echo client on the same host, we can omit the -s option. We start the echo client as:

# $APEX_HOME/bin/apexApps.sh ws-echo -p 42452 (1)
> %APEX_HOME%\bin\apexApps.bat ws-echo -p 42452 (2)

(1) Start client on Unix or Cygwin
(2) Start client on Windows

Once started successfully, the client will produce the following messages (assuming we used -p 42452 and an APEX engine is running on localhost with the same port):

ws-simple-echo: starting simple event echo
 --> server: localhost
 --> port: 42452

Once started, the application will simply print out all received events to standard out.
Each received event will be prefixed by '---' and suffixed by '===='


ws-simple-echo: opened connection to APEX (Web Socket Protocol Handshake)

Run the Websocket Console Client

The console client is included in an APEX full installation. To run the client, open a new shell (Unix, Cygwin) or command prompt (cmd on Windows). Then use the APEX application launcher to start the client.

Important

APEX engine needs to run first. The example assumes that an APEX engine configured with Websocket as the consumer carrier technology and JSON as the event protocol is started first.

Unix, Cygwin

Windows

# $APEX_HOME/bin/apexApps.sh ws-console [args]
> %APEX_HOME%\bin\apexApps.bat ws-console [args]

Use the following command line arguments to set the server and port of the Websocket server. The port should be the same as configured in the APEX engine. The server host should be the host on which the APEX engine is running.

  • -p defines the Websocket port to connect to (defaults to 8887)

  • -s defines the host on which a Websocket server is running (defaults to localhost)

Let’s assume that there is an APEX engine running, configured as a Websocket server for the consumer side on port 42450, with JSON as the consume event protocol. If we start the console client on the same host, we can omit the -s option. We start the console client as:

# $APEX_HOME/bin/apexApps.sh ws-console -p 42450 (1)
> %APEX_HOME%\bin\apexApps.bat ws-console -p 42450 (2)

(1) Start client on Unix or Cygwin
(2) Start client on Windows

Once started successfully, the client will produce the following messages (assuming we used -p 42450 and an APEX engine is running on localhost with the same port):

ws-simple-console: starting simple event console
 --> server: localhost
 --> port: 42450

 - terminate the application typing 'exit<enter>' or using 'CTRL+C'
 - events are created by a non-blank starting line and terminated by a blank line


ws-simple-console: opened connection to APEX (Web Socket Protocol Handshake)

Send Events

Now you have the full system up and running:

  • Terminal 1: APEX ready and loaded

  • Terminal 2: an echo client, printing received messages produced by the VPN policy

  • Terminal 3: a console client, waiting for input on the console (standard in) and sending text to APEX

We started the engine with the VPN policy example. So all the events we are using now are located in files in the following example directory:

#: $APEX_HOME/examples/events/VPN
> %APEX_HOME%\examples\events\VPN

To send events, simply copy the content of the event files into Terminal 3 (the console client). It will read multi-line JSON text and send the events. So copy the content of SetupEvents.json into the client. APEX will trigger a policy and produce some output, and the echo client will also print some events created in the policy. In Terminal 1 (APEX) you’ll see some status messages from the policy, such as:

{Link=L09, LinkUp=true}
L09     true
outFields: {Link=L09, LinkUp=true}
{Link=L10, LinkUp=true}
L09     true
L10     true
outFields: {Link=L10, LinkUp=true}
{CustomerName=C, LinkList=L09 L10, SlaDT=300, YtdDT=300}
*** Customers ***
C       300     300     [L09, L10]
outFields: {CustomerName=C, LinkList=L09 L10, SlaDT=300, YtdDT=300}
{CustomerName=A, LinkList=L09 L10, SlaDT=300, YtdDT=50}
*** Customers ***
A       300     50      [L09, L10]
C       300     300     [L09, L10]
outFields: {CustomerName=A, LinkList=L09 L10, SlaDT=300, YtdDT=50}
{CustomerName=D, LinkList=L09 L10, SlaDT=300, YtdDT=400}
*** Customers ***
A       300     50      [L09, L10]
C       300     300     [L09, L10]
D       300     400     [L09, L10]
outFields: {CustomerName=D, LinkList=L09 L10, SlaDT=300, YtdDT=400}
{CustomerName=B, LinkList=L09 L10, SlaDT=300, YtdDT=299}
*** Customers ***
A       300     50      [L09, L10]
B       300     299     [L09, L10]
C       300     300     [L09, L10]
D       300     400     [L09, L10]
outFields: {CustomerName=B, LinkList=L09 L10, SlaDT=300, YtdDT=299}

In Terminal 2 (echo-client) you see the received events, the last two should look like:

ws-simple-echo: received
---------------------------------
{
  "name": "VPNCustomerCtxtActEvent",
  "version": "0.0.1",
  "nameSpace": "org.onap.policy.apex.domains.vpn.events",
  "source": "Source",
  "target": "Target",
  "CustomerName": "C",
  "LinkList": "L09 L10",
  "SlaDT": 300,
  "YtdDT": 300
}
=================================

ws-simple-echo: received
---------------------------------
{
  "name": "VPNCustomerCtxtActEvent",
  "version": "0.0.1",
  "nameSpace": "org.onap.policy.apex.domains.vpn.events",
  "source": "Source",
  "target": "Target",
  "CustomerName": "D",
  "LinkList": "L09 L10",
  "SlaDT": 300,
  "YtdDT": 400
}
=================================

Congratulations, you have triggered a policy in APEX using Websockets; the policy ran through, created events, and those events were picked up by the echo client.

Now you can send the Link 09 and Link 10 events, they will trigger the actual VPN policy and some calculations are made. Let’s take the Link 09 events from Link09Events.json, copy them all into Terminal 3 (the console). APEX will run the policy (with some status output), and the echo client will receive and print events.

To terminate the applications, simply press CTRL+C in Terminal 1 (APEX). This will also terminate the echo-client in Terminal 2. Then type exit<enter> in Terminal 3 (or CTRL+C) to terminate the console-client.

APEX Policy Guide

APEX Policy Matrix

APEX offers a lot of flexibility for defining, deploying, and executing policies. Based on a theoretical model, it supports virtually any policy model and supports translation of legacy policies into the APEX execution format. However, the most important aspect of using APEX is to decide what policy is needed, what underlying policy concepts should be used, and how the decision logic should be realized. Once these aspects are decided, APEX can be used to execute the policies. If the policy evolves, say from a simple decision table to a fully adaptable policy, only the policy definition requires change. APEX supports all of that.

The figure below shows a (non-exhaustive) matrix, which will help to decide what policy is required to solve your problem. Read the matrix from left to right choosing one cell in each column.

APEX Policy Matrix

Figure 1. APEX Policy Matrix

The policy can support one of a number of stimuli with an associated purpose/model of the policy, for instance:

  • Configuration, i.e. what should happen. An example is an event that states an intended network configuration and the policy should provide the detailed actions for it. The policy can be realized for instance as an obligation policy, a promise or an intent.

  • Report, i.e. something did happen. An example is an event about an error or fault and the policy needs to repair that problem. The policy would usually be an obligation, utility function, or goal policy.

  • Monitoring, i.e. something does happen. An example is a notification about certain network conditions, to which the policy might (or might not) react. The policy will mitigate the monitored events or permit (deny) related actions as an obligation or authorization.

  • Analysis, i.e. why did something happen. An example is an analytics component sending insights about a situation that requires a policy to act on them. The policy can solve the problem, escalate it, or delegate it as a refrain or delegation policy.

  • Prediction, i.e. what will happen next. An example is events that a policy uses to predict a future network condition. The policy can prevent or enforce the prediction as an adaptive policy, a utility function, or a goal.

  • Feedback, i.e. why did something happen or not happen. Similar to analysis, but here the feedback is in the input event and the policy needs to do something with that information. Feedback can be related to history or experience, for instance a previous policy execution. The policy needs to be context-aware or be a meta-policy.

Once the purpose of the policy is decided, the next step is to look into what context information the policy will require to do its job. The required context can range from very little to a lot of different information, for instance:

  • No context, nothing but a trigger event, e.g. a string or a number, is required

  • Event context, the incoming event provides all information (more than a string or number) for the policy

  • Policy context (read only), the policy has access to additional information related to its class but cannot change/alter them

  • Policy context (read and write), the policy has access to additional information related to its class and can alter this information (for instance to record historic information)

  • Global context (read only), the policy has access to additional information of any kind but cannot change/alter them

  • Global context (read and write), the policy has access to additional information of any kind and can alter this information (for instance to record historic information)

The next step is to decide how the policy should do its job, i.e. what flavor it has, how many states are needed, and how many tasks. There are many possible combinations, for instance:

  • Simple / God: a simple policy with 1 state and 1 task, which does everything for the decision-making. This is the ideal policy for simple situations, e.g. deciding on configuration parameters or simple access control.

  • Simple sequence: a simple policy with a number of states each having a single task. This is a very good policy for simple decision-making with different steps. For instance, a classic action policy (ECA) would have 3 states (E, C, and A) with some logic (1 task) in each state.

  • Simple selective: a policy with 1 state but more than one task. Here, the appropriate task (and its logic) will be selected at execution time. This policy is very good for dealing with similar (or the same) situations in different contexts. For instance, the tasks can be related to available external software, to the current workload on the compute node, or to the time of day.

  • Selective: any number of states having any number of tasks (usually more than 1 task). This is a combination of the two policies above, for instance an ECA policy with more than one task in E, C, and A.

  • Classic directed: a policy with more than one state, each having one task, but a non-sequential execution. This means that the sequence of the states is not pre-defined in the policy (as would be for all cases above) but calculated at runtime. This can be good to realize decision trees based on contextual information.

  • Super Adaptive: using the full potential of the APEX policy model, states and tasks and state execution are fully flexible and calculated at runtime (per policy execution). This policy is very close to a general programming system (with only a few limitations), but can solve very hard problems.

The final step is to select a response that the policy creates. Possible responses have been discussed in the literature for a very long time. A few examples are:

  • Obligation (deontic for what should happen)

  • Authorization (e.g. for rule-based or other access control or security systems)

  • Intent (instead of providing detailed actions the response is an intent statement and a further system processes that)

  • Delegation (hand the problem over to someone else, possibly with some information or instructions)

  • Fail / Error (the policy has encountered a problem, and reports it)

  • Feedback (why did the policy make a certain decision)

APEX Policy Model

The APEX policy model is shown in UML notation in the figure below. A policy model can be stored in JSON or XML format in a file or can be held in a database. The APEX editor creates and modifies APEX policy models. APEX deployment deploys policy models, and a policy model is loaded into APEX engines so that the engines can run the policies in the policy model.

The figure shows four different views of the policy model:

  • The general model view shows the main parts of a policy: state, state output, event, and task. A task can also have parameters. Data types can be defined on a per-model basis using either standard atomic types (such as character, string, numbers) or complex types from a policy domain.

  • The logic model view emphasizes how decision-making logic is injected into a policy. There are essentially three different types of logic: task logic (for decision making in a task), task selection logic (to select a task if more than one is defined in a state), and state finalizer logic (to compute the final output event of a state and select an appropriate next state from the policy model).

  • The context model view shows how context is injected into a policy. States collect all context from their tasks. A task can define what context it requires for the decision making, i.e. what context the task logic will process. Context itself is a collection of items (individual context information) with data types. Context can be templated.

  • The event and field model view shows the events in the policy model. Tasks define what information they consume (input) and produce (output). This information is modeled as fields, essentially a key/type tuple in the model and a key/type/value triple at execution. Events then are collection of fields.

APEX Policy Model for Execution

Figure 2. APEX Policy Model for Execution

Concepts and Keys

Each element of the policy model is called a concept. Each concept is a subclass of the abstract Concept class, as shown in the next figure. Every concept implements the following abstract methods:

Concepts and Keys

Figure 3. Concepts and Keys

  • getKey() - gets the unique key for this concept instance in the system

  • validate() - validates the structure of this concept, its sub-concepts and its relationships

  • clean() - carries out housekeeping on the concept such as trimming strings and removing any hanging references

  • clone() - creates a deep copy of an instance of this concept

  • equals() - checks if two instances of this concept are equal

  • toString() - returns a string representation of the concept

  • hashCode() - returns a hash code for the concept

  • copyTo() - carries out a deep copy of one instance of the concept to another instance, overwriting the target fields.

All concepts must have a key, which uniquely identifies a concept instance. The key of a subclass of a Concept must be either an ArtifactKey or a ReferenceKey. Concepts that have a stand-alone independent existence, such as Policy, Task, and Event, must have an ArtifactKey key. Concepts that are contained in other concepts and do not exist as stand-alone concepts must have a ReferenceKey key. Examples of such concepts are State and EventParameter.

An ArtifactKey has two fields: the Name of the concept it is the key for and the concept’s Version. A concept’s name must be unique in a given PolicyModel. A concept version is represented using the well-known major.minor.patch scheme as used in semantic versioning.

A ReferenceKey has three fields. The UserKeyName and UserKeyVersion fields identify the ArtifactKey of the concept in which the concept keyed by the ReferenceKey is contained. The LocalName field identifies the contained concept instance. The LocalName must be unique in the concepts of a given type contained by a parent.

For example, a policy called SalesPolicy with a Version of 1.12.4 has a state called Decide. The Decide state is linked to the SalesPolicy with a ReferenceKey with fields UserKeyName of SalesPolicy, UserKeyVersion of 1.12.4, and LocalName of Decide. There must not be another state called Decide in the policy SalesPolicy. However, there may well be a state called Decide in some other policy called PurchasingPolicy.
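
As a sketch, the SalesPolicy example can be expressed using the APEX key classes. The constructor signatures used here are assumed from the field descriptions above, so treat this as illustrative rather than as a definitive API reference.

import org.onap.policy.apex.model.basicmodel.concepts.AxArtifactKey;
import org.onap.policy.apex.model.basicmodel.concepts.AxReferenceKey;

public class KeyExample {
    public static void main(String[] args) {
        // ArtifactKey for the stand-alone SalesPolicy concept
        // (assumed constructor: name, version)
        AxArtifactKey policyKey = new AxArtifactKey("SalesPolicy", "1.12.4");

        // ReferenceKey for the Decide state contained in SalesPolicy
        // (assumed constructor: parent artifact key, local name)
        AxReferenceKey stateKey = new AxReferenceKey(policyKey, "Decide");

        System.out.println(stateKey);
    }
}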

Each concept in the model is also a JPA (Java Persistence API) entity. This means that every concept can be individually persisted, or the entire model can be persisted en bloc to any persistence mechanism, using a JPA framework such as Hibernate or EclipseLink.

Concept: PolicyModel

The PolicyModel concept is a container that holds the definition of a set of policies and their associated events, context maps, and tasks. A PolicyModel is implemented as four maps for policies, events, context maps, and tasks. Each map is indexed by the key of the policy, event, context map, or task. Any non-empty policy model must have at least one entry in its policy, event, and task map because all policies must have at least one input and output event and must execute at least one task.

A PolicyModel concept is keyed with an ArtifactKey key. Because a PolicyModel is an AxConcept, calling the validate() method on a policy model validates the concepts, structure, and relationships of the entire policy model.

Concept: DataType

Data types are tightly controlled in APEX in order to provide a very high degree of consistency in policies and to facilitate tracking of changes to context as policies execute. All context is modeled as a DataType concept. Each DataType concept instance is keyed with an ArtifactKey key. The DataType field identifies the Java class of objects that is used to represent concept instances that use this data type. All context has a DataType; incoming and outgoing context is represented by EventField concepts and all other context is represented by ContextItem concepts.

Concept: Event

An Event defines the structure of a message that passes into or out of an APEX engine or that passes between two states in an APEX engine. APEX supports message reception and sending in many formats and all messages are translated into an Event prior to processing by an APEX engine. Event concepts are keyed with an ArtifactKey key. The parameters of an event are held as a map of EventField concept instances with each parameter indexed by the LocalName of its ReferenceKey. An Event has three fields:

  • The NameSpace identifies the domain of application of the event

  • The Source of the event identifies the system that emitted the event

  • The Target of the event identifies the system that the event was sent to

A PolicyModel contains a map of all the events known to a given policy model. Although an empty model may have no events in its event map, any sane policy model must have at least one Event defined.

Concept: EventField

The incoming context and outgoing context of an event are the fields of the event, with each field representing a single piece of incoming or outgoing context. Each field of an Event is represented by an instance of the EventField concept. Each EventField concept instance in an event is keyed with a ReferenceKey key, which references the event. The LocalName field of the ReferenceKey holds the name of the field. A reference to a DataType concept defines the data type that values of this field have at run time.

Concept: ContextMap

The set of context that is available for use by the policies of a PolicyModel is defined as ContextMap concept instances. The PolicyModel holds a map of all the ContextMap definitions. A ContextMap is itself a container for a group of related context items, each of which is represented by a ContextItem concept instance. ContextMap concepts are keyed with an ArtifactKey key. A developer can use the APEX Policy Editor to create context maps for their application domain.

A ContextMap uses a map to hold the context items. The ContextItem concept instances in the map are indexed by the LocalName of their ReferenceKey.

The ContextMapType field of a ContextMap defines the type of a context map. The type can have either of two values:

  • A BAG context map is a context map with fixed content. Each possible context item in the context map is defined at design time and is held in the ContextMap context instance as ContextItem concept definitions and only the values of the context items in the context map can be changed at run time. The context items in a BAG context map have mixed types and distinct ContextItem concept instances of the same type can be defined. A BAG context map is convenient for defining a group of context items that are diverse but are related by domain, such as the characteristics of a device. A fully defined BAG context map has a fully populated ContextItem map but its ContextItemTemplate reference is not defined.

  • A SAMETYPE context map is used to represent a group of ContextItem instances of the same type. Unlike a BAG context map, the ContextItem concept instances of a SAMETYPE context map can be added, modified, and deleted at runtime. All ContextItem concept instances in a SAMETYPE context map must be of the same type, and that context item is defined as a single ContextItemTemplate concept instance at design time. At run time, the ContextItemTemplate definition is used to create new ContextItem concept instances for the context map on demand. A fully defined SAMETYPE context map has an empty ContextItem map and its ContextItemTemplate reference is defined.

The Scope of a ContextMap defines the range of applicability of a context map in APEX. The following scopes of applicability are defined:

  • EPHEMERAL scope means that the context map is owned, used, and modified by a single application but the context map only exists while that application is running

  • APPLICATION scope specifies that the context map is owned, used, and modified by a single application, the context map is persistent

  • GLOBAL scope specifies that the context map is globally owned and is used and modified by any application, the context map is persistent

  • EXTERNAL scope specifies that the context map is owned by an external system and may be used in a read-only manner by any application, the context map is persistent

A much more sophisticated scoping mechanism for context maps is envisaged for Apex in future work. In such a mechanism, the scope of a context map would work somewhat like the way roles work in security authentication systems.

Concept: ContextItem

Each piece of context in a ContextMap is represented by an instance of the ContextItem concept. Each ContextItem concept instance in a context map is keyed with a ReferenceKey key, which references the context map of the context item. The LocalName field of the ReferenceKey holds the name of the context item in the context map. A reference to a DataType concept defines the data type that values of this context item have at run time. The WritableFlag indicates if the context item is read-only or read-write at run time.

Concept: ContextItemTemplate

In a SAMETYPE ContextMap, the ContextItemTemplate definition provides a template for the ContextItem instances that will be created on the context map at run time. Each ContextItem concept instance in the context map is created using the ContextItemTemplate template. It is keyed with a ReferenceKey key, which references the context map of the context item. The LocalName field of the ReferenceKey, supplied by the creator of the context item at run time, holds the name of the context item in the context map. A reference to a DataType concept defines the data type that values of this context item have at run time. The WritableFlag indicates if the context item is read-only or read-write at run time.

Concept: Task

The smallest unit of logic in a policy is a Task. A task encapsulates a single atomic unit of logic, and is designed to be a single indivisible unit of execution. A task may be invoked by a single policy or by many policies. A task has a single trigger event, which is sent to the task when it is invoked. Tasks emit one or more outgoing events, which carry the result of the task execution. Tasks may use or modify context as they execute.

The Task concept definition captures the definition of an APEX task. Task concepts are keyed with an ArtifactKey key. The Trigger of the task is a reference to the Event concept that triggers the task. The OutgoingEvents of a task are a set of references to Event concepts that may be emitted by the task.

All tasks have logic, some code that is programmed to execute the work of the task. The Logic concept of the task holds the definition of that logic.

The Task definition holds a set of ContextItem and ContextItemTemplate context items that the task is allowed to access, as defined by the task developer at design time. The type of access (read-only or read-write) that a task has is determined by the WritableFlag flag on the individual context item definitions. At run time, a task may only access the context items specified in its context item set; the APEX engine makes only the context items in the task’s context item set available to the task.

A task can be configured with startup parameters. The set of parameters that can be configured on a task are defined as a set of TaskParameter concept definitions.

Concept: TaskParameter

Each configuration parameter of a task is represented as a TaskParameter concept keyed with a ReferenceKey key, which references the task. The LocalName field of the ReferenceKey holds the name of the parameter. The DefaultValue field defines the default value that the task parameter is set to. The values of TaskParameter instances can be overridden at deployment time by specifying them in the configuration information passed to APEX engines.

The taskParameters field is specified under engineParameters in the ApexConfig. It can contain one or more task parameters, where each item can contain the parameter key, the value, and the taskId to which it is associated. If the taskId is not specified, then the parameters are added to all tasks. A sketch is shown below.
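
The following sketch shows how such taskParameters might look in the ApexConfig; the parameter keys, values, and task ID are illustrative only.

"engineServiceParameters": {
  "engineParameters": {
    "taskParameters": [
      {
        "key": "ParameterKey1",
        "value": "ParameterValue1"
      },
      {
        "key": "ParameterKey2",
        "value": "ParameterValue2",
        "taskId": "MorningBoozeCheck:0.0.1"
      }
    ]
  }
}

In this sketch, the first parameter carries no taskId and so would be added to all tasks, while the second applies only to the named task.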

Concept: Logic

The Logic concept instance holds the actual programmed task logic for a task defined in a Task concept or the programmed task selection logic for a state defined in a State concept. It is keyed with a ReferenceKey key, which references the task or state that owns the logic. The LocalName field of the Logic concept is the name of the logic.

The LogicCode field of a Logic concept definition is a string that holds the program code that is to be executed at run time. The LogicType field defines the language of the code. The standard values are the logic languages supported by APEX: JAVASCRIPT, JAVA, JYTHON, JRUBY, or MVEL.

The APEX engine uses the LogicType field value to decide which language interpreter to use for a task and then sends the logic defined in the LogicCode field to that interpreter.

Concept: Policy

The Policy concept defines a policy in APEX. The definition is rather straightforward. A policy is made up of a set of states with the flavor of the policy determining the structure of the policy states and the first state defining what state in the policy executes first. Policy concepts are keyed with an ArtifactKey key.

The PolicyFlavour of a Policy concept specifies the structure that will be used for the states in the policy. A number of commonly used policy patterns are supported as APEX policy flavors. The standard policy flavors are:

  • The MEDA flavor supports policies written to the MEDA policy pattern and requires a sequence of four states: namely Match, Establish, Decide, and Act.

  • The OODA flavor supports policies written to the OODA loop pattern and requires a sequence of four states: namely Observe, Orient, Decide, and Act.

  • The ECA flavor supports policies written to the ECA active rule pattern and requires a sequence of three states: namely Event, Condition, and Action.

  • The XACML flavor supports policies written in XACML and requires a single state: namely XACML.

  • The FREEFORM flavor supports policies written in an arbitrary style. A user can define a FREEFORM policy as an arbitrarily long chain of states.

The FirstState field of a Policy definition is the starting point for execution of a policy. Therefore, the trigger event of the state referenced in the FirstState field is also the trigger event for the entire policy.

Concept: State

The State concept represents a phase or a stage in a policy, with a policy being composed of a series of states. Each state has at least one but may have many tasks and, on each run of execution, a state executes one and only one of its tasks. If a state has more than one task, then its task selection logic is used to select which task to execute. Task selection logic is programmable logic provided by the state designer. That logic can use incoming, policy, global, and external context to select which task best accomplishes the purpose of the state in a given situation if more than one task has been specified on a state. A state calls one and only one task when it is executed.

Each state is triggered by an event, which means that all tasks of a state must also be triggered by that same event. The set of output events for a state is the union of all output events from all tasks of that state. In practice at the moment, because a state can only have a single input event, a state that is not the final state of a policy may only output a single event, and all tasks of that state may also only output that single event. In future work, the concept of having a less restrictive trigger pattern will be examined.

A state that is the final state of a policy may output multiple events, and the task associated with the final state outputs those events.

A State concept is keyed with a ReferenceKey key, which references the Policy concept that owns the state. The LocalName field of the ReferenceKey holds the name of the state. As a state is part of a chain of states, the NextState field of a state holds the ReferenceKey key of the state in the policy to execute after this state.

The Trigger field of a state holds the ArtifactKey of the event that triggers this state. The OutgoingEvents field holds the ArtifactKey references of all possible events that may be output from the state. This is a set that is the union of all output events of all tasks of the state.

The Task concepts that hold the definitions of the task for the state are held as a set of ArtifactKey references in the state. The DefaultTask field holds a reference to the default task for the state, a task that is executed if no task selection logic is specified. If the state has only one task, that task is the default task.

The Logic concept referenced by a state holds the task selection logic for a state. The task selection logic uses the incoming context (parameters of the incoming event) and other context to determine the best task to use to execute its goals. The state holds a set of references to ContextItem and ContextItemTemplate definitions for the context used by its task selection logic.

Writing Logic

Writing APEX Task Logic

Task logic specifies the behavior of an Apex Task. This logic can be specified in a number of ways, exploiting Apex’s plug-in architecture to support a range of logic executors. In Apex, scripted Task Logic can be written in any of the supported scripting languages: JavaScript, Jython, JRuby, or MVEL.

These languages were chosen because the scripts can be compiled into Java bytecode at runtime and then efficiently executed natively in the JVM. Task Logic can also be written directly in Java, but it then needs to be compiled and the resulting classes added to the classpath. There are also a number of other Task Logic types (e.g. Fuzzy Logic), but these are not supported as yet. This guide will focus on the scripted Task Logic approaches, with MVEL and JavaScript being our favorite languages. In particular this guide will focus on the Apex aspects of the scripts. However, this guide does not attempt to teach you the scripting languages themselves … that is up to you!

Tip

JVM-based scripting languages For more information on scripting for the Java platform see: https://docs.oracle.com/javase/8/docs/technotes/guides/scripting/prog_guide/index.html

Note

What do Tasks do? The function of an Apex Task is to provide the logic that can be executed for an Apex State as one of the steps in an Apex Policy. Each task receives some incoming fields, executes some logic (e.g. makes a decision based on shared state or context, incoming fields, external context, etc.), perhaps sets some shared state or context, and then emits outgoing fields (in the case of a single outgoing event) or a set of outgoing fields (in the case of multiple outgoing events). The state that uses the task is responsible for extracting the incoming fields from the state input event. The state also has an output mapper associated with the task, and this output mapper is responsible for mapping the outgoing fields from the task into an appropriate output event for the state.

First let’s start with a sample task, drawn from the “My First Apex Policy” example. The task “MorningBoozeCheck” from the “My First Apex Policy” example is available in both MVEL and JavaScript:

Javascript code for the MorningBoozeCheck task

/*
 * ============LICENSE_START=======================================================
 *  Copyright (C) 2016-2018 Ericsson. All rights reserved.
 *  Modifications Copyright (C) 2020 Nordix Foundation.
 * ================================================================================
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *      http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 *
 * SPDX-License-Identifier: Apache-2.0
 * ============LICENSE_END=========================================================
 */

executor.logger.info("Task Execution: '"+executor.subject.id+"'. Input Fields: '"+executor.inFields+"'");

executor.outFields.put("amount"      , executor.inFields.get("amount"));
executor.outFields.put("assistant_ID", executor.inFields.get("assistant_ID"));
executor.outFields.put("notes"       , executor.inFields.get("notes"));
executor.outFields.put("quantity"    , executor.inFields.get("quantity"));
executor.outFields.put("branch_ID"   , executor.inFields.get("branch_ID"));
executor.outFields.put("item_ID"     , executor.inFields.get("item_ID"));
executor.outFields.put("time"        , executor.inFields.get("time"));
executor.outFields.put("sale_ID"     , executor.inFields.get("sale_ID"));

item_id = executor.inFields.get("item_ID");

//All times in this script are in GMT/UTC since the policy and events assume time is in GMT.
var timenow_gmt =  new Date(Number(executor.inFields.get("time")));

var midnight_gmt = new Date(Number(executor.inFields.get("time")));
midnight_gmt.setUTCHours(0,0,0,0);

var eleven30_gmt = new Date(Number(executor.inFields.get("time")));
eleven30_gmt.setUTCHours(11,30,0,0);

var timeformatter = new java.text.SimpleDateFormat("HH:mm:ss z");

var itemisalcohol = false;
if(item_id != null && item_id >=1000 && item_id < 2000)
    itemisalcohol = true;

if( itemisalcohol
    && timenow_gmt.getTime() >= midnight_gmt.getTime()
    && timenow_gmt.getTime() <  eleven30_gmt.getTime()) {

  executor.outFields.put("authorised", false);
  executor.outFields.put("message", "Sale not authorised by policy task " +
    executor.subject.taskName+ " for time " + timeformatter.format(timenow_gmt.getTime()) +
    ". Alcohol can not be sold between " + timeformatter.format(midnight_gmt.getTime()) +
    " and " + timeformatter.format(eleven30_gmt.getTime()));
}
else{
  executor.outFields.put("authorised", true);
  executor.outFields.put("message", "Sale authorised by policy task " +
    executor.subject.taskName + " for time "+timeformatter.format(timenow_gmt.getTime()));
}

/*
This task checks if a sale request is for an item that is an alcoholic drink.
If the local time is between 00:00:00 GMT and 11:30:00 GMT then the sale is not
authorised. Otherwise the sale is authorised.
In this implementation we assume that items with item_ID value between 1000 and
2000 are all alcoholic drinks :-)
*/

true;

MVEL code for the MorningBoozeCheck task

/*
 * ============LICENSE_START=======================================================
 *  Copyright (C) 2016-2018 Ericsson. All rights reserved.
 *  Modifications Copyright (C) 2020 Nordix Foundation.
 * ================================================================================
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *      http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 *
 * SPDX-License-Identifier: Apache-2.0
 * ============LICENSE_END=========================================================
 */
import java.util.Date;
import java.util.Calendar;
import java.util.TimeZone;
import java.text.SimpleDateFormat;

logger.info("Task Execution: '"+subject.id+"'. Input Fields: '"+inFields+"'");

outFields.put("amount"      , inFields.get("amount"));
outFields.put("assistant_ID", inFields.get("assistant_ID"));
outFields.put("notes"       , inFields.get("notes"));
outFields.put("quantity"    , inFields.get("quantity"));
outFields.put("branch_ID"   , inFields.get("branch_ID"));
outFields.put("item_ID"     , inFields.get("item_ID"));
outFields.put("time"        , inFields.get("time"));
outFields.put("sale_ID"     , inFields.get("sale_ID"));

item_id = inFields.get("item_ID");

//The events used later to test this task use GMT timezone!
gmt = TimeZone.getTimeZone("GMT");
timenow = Calendar.getInstance(gmt);
df = new SimpleDateFormat("HH:mm:ss z");
df.setTimeZone(gmt);
timenow.setTimeInMillis(inFields.get("time"));

midnight = timenow.clone();
midnight.set(
    timenow.get(Calendar.YEAR),timenow.get(Calendar.MONTH),
    timenow.get(Calendar.DATE),0,0,0);
eleven30 = timenow.clone();
eleven30.set(
    timenow.get(Calendar.YEAR),timenow.get(Calendar.MONTH),
    timenow.get(Calendar.DATE),11,30,0);

itemisalcohol = false;
if(item_id != null && item_id >=1000 && item_id < 2000)
    itemisalcohol = true;

if( itemisalcohol
    && timenow.after(midnight) && timenow.before(eleven30)){
  outFields.put("authorised", false);
  outFields.put("message", "Sale not authorised by policy task "+subject.taskName+
    " for time "+df.format(timenow.getTime())+
    ". Alcohol can not be sold between "+df.format(midnight.getTime())+
    " and "+df.format(eleven30.getTime()));
  return true;
}
else{
  outFields.put("authorised", true);
  outFields.put("message", "Sale authorised by policy task "+subject.taskName+
    " for time "+df.format(timenow.getTime()));
  return true;
}

/*
This task checks if a sale request is for an item that is an alcoholic drink.
If the local time is between 00:00:00 GMT and 11:30:00 GMT then the sale is not
authorised. Otherwise the sale is authorised.
In this implementation we assume that items with item_ID value between 1000 and
2000 are all alcoholic drinks :-)
*/

The role of the task in this simple example is to copy the values in the incoming fields into the outgoing fields, then examine the values in some incoming fields (item_id and time), then set the values in some other outgoing fields (authorised and message).

Both MVEL and JavaScript, like most JVM-based scripting languages, can use standard Java libraries to perform complex tasks. Towards the top of the scripts you will see how to import Java classes and packages to be used directly in the logic. Another thing to notice is that Task Logic should return a java.lang.Boolean value true if the logic executed correctly. If the logic fails for some reason then false can be returned, but this will cause the policy invoking this task to fail and exit.

Note

How to return a value from task logic Some languages explicitly support returning values from the script (e.g. MVEL and JRuby) using an explicit return statement (e.g. return true); other languages do not (e.g. Jython). For languages that do not support the return statement, a special field called returnValue must be created to hold the result of the task logic operation (i.e. assign a java.lang.Boolean value to the returnValue field before completing the task). Also, in MVEL, if there is no explicit return statement then the value of the last executed statement is returned (e.g. the statement a=(1+2) returns the value 3).

For JavaScript, the last statement of a script must be a statement that evaluates to true or false, indicating whether the script executed correctly or not. In the case where the script always executes to completion successfully, simply add a last line with the statement true;. In cases where success or failure is assessed in the script, create a boolean local variable with a name such as returnValue. In the execution of the script, set returnValue to true or false as appropriate. The last line of the script then should simply be returnValue;, which returns the value of returnValue. A minimal sketch of this pattern follows.
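
The following minimal JavaScript sketch illustrates the returnValue pattern; the failure condition checked here is purely illustrative.

// Sketch of the returnValue pattern for JavaScript task logic.
// The check on item_ID is an illustrative failure condition only.
var returnValue = true;

var itemId = executor.inFields.get("item_ID");
if (itemId == null) {
  executor.logger.warn("no item_ID in incoming fields");
  returnValue = false;
}

// the last statement of the script evaluates to the return value
returnValue;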

Besides these imported classes and normal language features, Apex provides some natively available parameters and functions that can be used directly. At run time these parameters are populated by the Apex execution environment and made natively available to logic scripts each time the logic script is invoked. (These can be accessed using the executor keyword for most languages, or can be accessed directly without the executor keyword in MVEL.)

Table 1. The executor Fields / Methods

Name

Type

Java type

Description

inFields

Fields

java.util.Map <String,Object>

The incoming task fields, implemented as a standard Java (unmodifiable) Map

Example:

executor.logger.debug("Incoming fields: " +executor.inFields.entrySet());
var item_id = executor.inFields["item_ID"];
if (item_id >=1000) { ... }

outFields

Fields

java.util.Map <String,Object>

The outgoing task fields. This is implemented as a standard initially empty Java (modifiable) Map. To create a new schema-compliant instance of a field object see the utility method subject.getOutFieldSchemaHelper() below that takes the fieldName as an argument.

Example:

executor.outFields["authorised"] = false;

outFieldsList

Fields

java.util.Collection<Map<String, Object>>

The collection of outgoing task fields when there are multiple outputs from the final state. To create a new schema-compliant instance of a field, see the utility method subject.getOutFieldSchemaHelper() below that takes eventName and fieldName as arguments. To add the set of output fields to the outFieldsList, the utility method executor.addFieldsToOutput can be used as shown below.

void addFieldsToOutput(Map<String, Object> fields)

A utility method to add fields to outgoing fields. When there are multiple output events emitted from the task associated with a final state, this utility method can be used to add the corresponding fields to the outFieldsList.

Example:

var cdsRequestEventFields = new java.util.HashMap();
var actionIdentifiers = executor.subject.getOutFieldSchemaHelper
("CDSRequestEvent","actionIdentifiers").createNewInstance();
cdsRequestEventFields.put("actionIdentifiers", actionIdentifiers);
executor.addFieldsToOutput(cdsRequestEventFields);

var logEventFields = new java.util.HashMap();
logEventFields.put("status", "FINAL_SUCCESS");
executor.addFieldsToOutput(logEventFields);

logger

Logger

org.slf4j.ext.XLogger

A helpful logger

Example:

executor.logger.info("Executing task: " +executor.subject.id);

TRUE/FALSE

boolean

java.lang.Boolean

Two helpful constants. These are useful for setting the correct return values for the task logic

Example:

var returnValue = executor.isTrue;
var returnValueType = Java.type("java.lang.Boolean");
var returnValue = new returnValueType(true);

subject

Task

TaskFacade

This provides some useful information about the task that contains this task logic. This object has some useful fields and methods:

  • AxTask task to get access to the full task definition of the host task

  • String getTaskName() to get the name of the host task

  • String getId() to get the ID of the host task

  • SchemaHelper getInFieldSchemaHelper( String fieldName ) to get a SchemaHelper helper object to manipulate incoming task fields in a schema-aware manner

  • SchemaHelper getOutFieldSchemaHelper( String fieldName ) to get a SchemaHelper helper object to manipulate outgoing task fields in a schema-aware manner, e.g. to instantiate new schema-compliant field objects to populate the executor.outFields outgoing fields map. This can be used only when there is a single outgoing event from a task.

  • SchemaHelper getOutFieldSchemaHelper( String eventName, String fieldName ) to get a SchemaHelper helper object to manipulate outgoing task fields in a schema-aware manner, e.g. to instantiate new schema-compliant field objects to populate the executor.outFieldsList collection of outgoing fields maps. This must be used in the case of multiple outgoing events from a task, as the intention is to fetch the schema of a field associated with one of the expected events. This method also works in the case of a single outgoing event, but the previous method is sufficient there since the field belongs to the single event anyway.

Example:

executor.logger.info("Task name: " + executor.subject.getTaskName());
executor.logger.info("Task id: " + executor.subject.getId());
executor.outFields["authorised"] = executor.subject
  .getOutFieldSchemaHelper("authorised").createNewInstance("false");

var actionIdentifiers = executor.subject.getOutFieldSchemaHelper
  ("CDSRequestEvent","actionIdentifiers").createNewInstance();
actionIdentifiers.put("blueprintName", "sample-bp");
var cdsRequestEventFields = new java.util.HashMap();
cdsRequestEventFields.put("actionIdentifiers", actionIdentifiers);
executor.addFieldsToOutput(cdsRequestEventFields);

ContextAlbum getContextAlbum(String ctxtAlbumName)

A utility method to retrieve a ContextAlbum for use in the task. This is how you access the context used by the task. The returned ContextAlbum implements the java.util.Map <String,Object> interface to get and set context as appropriate. The returned ContextAlbum also has methods to lock context albums, get information about the schema of the items to be stored in a context album, and get a SchemaHelper to manipulate context album items. How to define and use context in a task is described in the Apex Programmer’s Guide and in the My First Apex Policy guide.

Example:

var bkey = executor.inFields.get("branch_ID");
var cnts = executor.getContextAlbum("BranchCounts");
cnts.lockForWriting(bkey);
cnts.put(bkey, cnts.get(bkey) + 1);
cnts.unlockForWriting(bkey);
Writing APEX Task Selection Logic

The function of Task Selection Logic is to choose which task should be executed for an Apex State as one of the steps in an Apex Policy. Since each state must define a default task, there is no need for Task Selection Logic unless the state uses more than one task. This logic can be specified in a number of ways, exploiting Apex’s plug-in architecture to support a range of logic executors. In Apex, scripted Task Selection Logic can be written in any of these languages:

  • MVEL

  • JavaScript

  • JRuby

  • Jython

These languages were chosen because the scripts can be compiled into Java bytecode at runtime and then executed efficiently and natively in the JVM. Task Selection Logic can also be written directly in Java, but Java logic must be compiled and the resulting classes added to the classpath. There are also a number of other Task Selection Logic types, but these are not supported as yet. This guide will focus on the scripted Task Selection Logic approaches, with MVEL and JavaScript being our favorite languages. In particular this guide will focus on the Apex aspects of the scripts. However, this guide does not attempt to teach you the scripting languages themselves … that is up to you!

Tip

JVM-based scripting languages For more information on scripting for the Java platform see: https://docs.oracle.com/javase/8/docs/technotes/guides/scripting/prog_guide/index.html

Note

What does Task Selection Logic do? When an Apex state references multiple tasks, there must be a way to dynamically decide which task should be chosen and executed. This can depend on many factors, e.g. the incoming event for the state, shared state or context, external context, etc.. This is the function of a state’s Task Selection Logic. Obviously, if there is only one task then Task Selection Logic is not needed. Each state must also select one of its tasks as the default task. If the Task Selection Logic is unable to select an appropriate task, then it should select the default task. Once the task has been selected, the Apex Engine will then execute that task.

First, let’s start with some simple Task Selection Logic drawn from the “My First Apex Policy” example, specified in JavaScript:

Javascript code for the “My First Policy” Task Selection Logic

/*
 * ============LICENSE_START=======================================================
 *  Copyright (C) 2016-2018 Ericsson. All rights reserved.
 *  Modifications Copyright (C) 2020 Nordix Foundation.
 * ================================================================================
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *      http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 *
 * SPDX-License-Identifier: Apache-2.0
 * ============LICENSE_END=========================================================
 */

executor.logger.info("Task Selection Execution: '"+executor.subject.id+
    "'. Input Event: '"+executor.inFields+"'");

var branchid = executor.inFields.get("branch_ID");
var taskorig = executor.subject.getTaskKey("MorningBoozeCheck");
var taskalt = executor.subject.getTaskKey("MorningBoozeCheckAlt1");
var taskdef = executor.subject.getDefaultTaskKey();

if(branchid >=0 && branchid <1000){
  taskorig.copyTo(executor.selectedTask);
}
else if (branchid >=1000 && branchid <2000){
  taskalt.copyTo(executor.selectedTask);
}
else{
  taskdef.copyTo(executor.selectedTask);
}

/*
This task selection logic selects task "MorningBoozeCheck" for branches with
0<=branch_ID<1000 and selects task "MorningBoozeCheckAlt1" for branches with
1000<=branch_ID<2000. Otherwise the default task is selected.
In this case the default task is also "MorningBoozeCheck"
*/

true;

The role of the Task Selection Logic in this simple example is to examine the value of the incoming branch_ID field and then, depending on that value, set the selected task to the appropriate task (MorningBoozeCheck, MorningBoozeCheckAlt1, or the default task).

Another thing to notice is that Task Selection Logic should return the java.lang.Boolean value true if the logic executed correctly. If the logic fails for some reason then false can be returned, but this will cause the policy invoking this task to fail and exit.

Note

How to return a value from Task Selection Logic Some languages explicitly support returning values from the script (e.g. MVEL and JRuby) using an explicit return statement (e.g. return true); other languages do not (e.g. JavaScript and Jython). For languages that do not support the return statement, a special field called returnValue must be created to hold the result of the task logic operation (i.e. assign a java.lang.Boolean value to the returnValue field before completing the task). Also, in MVEL, if there is no explicit return statement then the value of the last executed statement is returned (e.g. the statement a=(1+2) will return the value 3).
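For example, in JavaScript the result can be carried in a variable and then placed as the very last statement of the script (a minimal sketch):

JS returnValue as the last statement

var returnValue = executor.isTrue;
// ... task selection logic here; on failure set returnValue = false ...
returnValue;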

Each of the scripting languages used in Apex can import and use standard Java libraries to perform complex tasks. Besides imported classes and normal language features Apex provides some natively available parameters and functions that can be used directly. At run-time these parameters are populated by the Apex execution environment and made natively available to logic scripts each time the logic script is invoked. (These can be accessed using the executor keyword for most languages, or can be accessed directly without the executor keyword in MVEL):

Table 2. The executor Fields / Methods


Name

Type

Java type

Description

inFields

Fields

java.util.Map <String,Object>

All fields in the state’s incoming event. This is implemented as a standard (unmodifiable) Java Map

Example:

executor.logger.debug("Incoming fields: " + executor.inFields.entrySet());
var item_id = executor.inFields["item_ID"];
if (item_id >=1000) { ... }

outFields

Fields

java.util.Map <String,Object>

The outgoing task fields. This is implemented as a standard, initially empty (modifiable) Java Map. To create a new schema-compliant instance of a field object, see the utility method subject.getOutFieldSchemaHelper() below

Example:

executor.outFields["authorised"] = false;

logger

Logger

org.slf4j.ext.XLogger

A helpful logger

Example:

executor.logger.info("Executing task: "
+executor.subject.id);

TRUE/FALSE

boolean

java.lang.Boolean

2 helpful constants. These are useful to retrieve correct return values for the task logic

Example:

var returnValue = executor.isTrue;
var returnValueType = Java.type("java.lang.Boolean");
var returnValue = new returnValueType(true);

subject

Task

TaskFacade

This provides some useful information about the task that contains this task logic. This object has some useful fields and methods:

  • AxTask task to get access to the full task definition of the host task

  • String getTaskName() to get the name of the host task

  • String getId() to get the ID of the host task

  • SchemaHelper getInFieldSchemaHelper( String fieldName ) to get a SchemaHelper helper object to manipulate incoming task fields in a schema-aware manner

  • SchemaHelper getOutFieldSchemaHelper( String fieldName ) to get a SchemaHelper helper object to manipulate outgoing task fields in a schema-aware manner, e.g. to instantiate new schema-compliant field objects to populate the executor.outFields outgoing fields map

Example:

executor.logger.info("Task name: " + executor.subject.getTaskName());
executor.logger.info("Task id: " + executor.subject.getId());
executor.outFields["authorised"] = executor.subject
  .getOutFieldSchemaHelper("authorised")
  .createNewInstance("false");

parameters

Fields

java.util.Map <String,String>

All parameters in the current task. This is implemented as a standard Java Map.

Example:

executor.parameters.get("ParameterKey1"))

ContextAlbum getContextAlbum(String ctxtAlbumName )

A utility method to retrieve a ContextAlbum for use in the task. This is how you access the context used by the task. The returned ContextAlbum implements the java.util.Map <String,Object> interface to get and set context as appropriate. The returned ContextAlbum also has methods to lock context albums, get information about the schema of the items to be stored in a context album, and get a SchemaHelper to manipulate context album items. How to define and use context in a task is described in the Apex Programmer’s Guide and in the My First Apex Policy guide.

Example:

var bkey = executor.inFields.get("branch_ID");
var cnts = executor.getContextAlbum("BranchCounts");
cnts.lockForWriting(bkey);
cnts.put(bkey, cnts.get(bkey) + 1);
cnts.unlockForWriting(bkey);
Logic Cheat Sheet

Examples given here use JavaScript (if not stated otherwise); other execution environments will be similar.

Finish Logic with Success or Error

To finish the logic, i.e. return to APEX, with success, use the following line close to the end of the logic.

JS Success

true;

To notify a problem, finish with an error.

JS Fail

false;
Logic Logging

Logging can be made easy using a local variable for the logger. Line 1 below does that. Then we start with a trace log with the task (or task logic) identifier followed by the infields.

JS Logging

var logger = executor.logger;
logger.trace("start: " + executor.subject.id);
logger.trace("-- infields: " + executor.inFields);

For larger logging blocks you can use the standard logging API to detect log levels, for instance:

JS Logging Blocks

if(logger.isTraceEnabled()){
  // trace logging block here
}

Note: the logger shown here logs to org.onap.policy.apex.executionlogging. The behavior of the actual logging can be specified in $APEX_HOME/etc/logback.xml.
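For example, a fragment along these lines in logback.xml routes this logger to the console (a minimal sketch; it assumes an appender named STDOUT is defined elsewhere in the file):

<logger name="org.onap.policy.apex.executionlogging" level="info" additivity="false">
  <appender-ref ref="STDOUT" />
</logger>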

If you want to log into the APEX root logger (which is sometimes necessary to report serious logic errors to the top), then import the required class and use this logger.

JS Root Logger

importClass(org.slf4j.LoggerFactory);
var rootLogger = LoggerFactory.getLogger(logger.ROOT_LOGGER_NAME);
rootLogger.error("Serious error in logic detected: " + executor.subject.id);
Accessing TaskParameters

TaskParameters available in a Task can be accessed in the logic. The parameters in each task are made available at the executor level. This example assumes a parameter with key ParameterKey1.

JS TaskParameter value

executor.parameters.get("ParameterKey1"))

Alternatively, the task parameters can also be accessed from the task object.

JS TaskParameter value using task object

executor.subject.task.getTaskParameters().get("ParameterKey1").getTaskParameterValue()
Local Variable for Infields

It is a good idea to use local variables for infields. This avoids long code lines and makes the logic more robust against policy evolution. The following example assumes infields named nodeName and nodeAlias.

JS Infields Local Var

var ifNodeName = executor.inFields["nodeName"];
var ifNodeAlias = executor.inFields["nodeAlias"];
Local Variable for Context Albums

Similar to the infields, it is good practice to use local variables for context albums as well. The following example assumes that a task can access a context album albumTopoNodes. The second line gets a particular node from this context album.

JS Context Album Local Var

var albumTopoNodes = executor.getContextAlbum("albumTopoNodes");
var ctxtNode = albumTopoNodes.get(ifNodeName);
Set Outfields in Logic

The task logic needs to set outfields with the content it generates. The exception is outfields that are a direct copy of an infield of the same name; APEX does that automatically.

JS Set Outfields

executor.outFields["report"] = "node ctxt :: added node " + ifNodeName;
Create an instance of an Outfield using Schemas

If an outfield is not an atomic type (string, integer, etc.) but uses a complex schema (with a Java or Avro backend), APEX can help to create new instances. The executor provides a field called subject, which provides a schema helper with an API for this. The complete API of the schema helper is documented here: API Doc: SchemaHelper.

If the backend is Java, then the Java class implementing the schema needs to be imported.

Single outgoing event

When there is a single outgoing event associated with a task, the fieldName alone is enough to fetch its schema. The following example assumes an outfield situation. The subject method getOutFieldSchemaHelper() is used to create a new instance.

JS Outfield Instance with Schema

var situation = executor.subject.getOutFieldSchemaHelper("situation").createNewInstance();

If the schema backend is Java, the new instance will be as implemented in the Java class. If the schema backend is Avro, the new instance will have all fields from the Avro schema specification, but set to null. So any entry here needs to be done separately. For instance, the situation schema has a field problemID which we set.

JS Outfield Instance with Schema, set

situation.put("problemID", "my-problem");

Multiple outgoing events

When there are multiple outgoing events associated with a task, the fieldName along with the eventName it belongs to are needed to fetch its schema. The following example assumes an outfield actionIdentifiers which belongs to CDSRequestEvent. The subject method getOutFieldSchemaHelper() is used to create a new instance.

JS Outfield Instance with Schema, multiple events

var actionIdentifiers = executor.subject.getOutFieldSchemaHelper("CDSRequestEvent", "actionIdentifiers").createNewInstance();
Create an instance of a Context Album entry using Schemas

Context album instances can be created in a way very similar to the outfields. Here, the schema helper comes from the context album directly. The API of the schema helper is the same as for outfields, see API Doc: SchemaHelper.

If the backend is Java, then the Java class implementing the schema needs to be imported.

The following example creates a new instance for an entry of the context album named albumProblemMap.

JS Context Album Entry Instance with Schema

var albumProblemMap = executor.getContextAlbum("albumProblemMap");
var linkProblem = albumProblemMap.getSchemaHelper().createNewInstance();

This can of course be also done in a single call without the local variable for the context album.

JS Context Album Entry Instance with Schema, one line

var linkProblem = executor.getContextAlbum("albumProblemMap").getSchemaHelper().createNewInstance();

If the schema backend is Java, the new instance will be as implemented in the Java class. If the schema backend is Avro, the new instance will have all fields from the Avro schema specification, but set to null. So any entry here needs to be done separately (see above in outfields for an example).
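For instance, assuming the album entries use a schema with a problemID field as in the outfield example above, an entry can be populated and stored back into the album (the key "link-0" is illustrative):

JS Context Album Entry, set and store

// set a field on the schema-compliant instance, then store it in the album
linkProblem.put("problemID", "my-problem");
albumProblemMap.put("link-0", linkProblem);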

Enumerates

When dealing with enumerates (Avro or Java defined), it is sometimes necessary, in some execution environments, to convert them to a string. For example, assume an Avro enumerate schema as:

Avro Enumerate Schema

{
  "type": "enum", "name": "Status", "symbols" : [
    "UP", "DOWN"
  ]
}

Using a switch over a field initialized with this enumerate in Javascript will fail. Instead, use the toString method, for example:

JS Switch over an Enumerate using toString

var switchTest = executor.inFields["status"];
switch (switchTest.toString()) {
  case "UP": ...; break;
  case "DOWN": ...; break;
  default: ...;
}
MVEL Initialize Outfields First!

In MVEL, we observed a problem when setting outfields without a prior access to them. So in any MVEL task logic, before setting any outfield, simply do a get (with any string) to load the outfields into the MVEL cache.

MVEL Outfield Initialization

outFields.get("initialize outfields");
Using Java in Scripting Logic

Since APEX executes the logic inside a JVM, most scripting languages provide access to all standard Java classes. Simply add an import for the required class and then use it as in actual Java.

The following example imports java.util.ArrayList into a JavaScript logic, and then creates a new list.

JS Import ArrayList

importClass(java.util.ArrayList);
var myList = new ArrayList();
Converting Javascript scripts from Nashorn to Rhino dialects

The Nashorn Javascript engine was removed from Java in the Java 11 release. Java 11 was introduced into the Policy Framework in the Frankfurt release, so from Frankfurt on, APEX Javascript scripts use the Rhino Javascript engine and scripts must be in the Rhino dialect.

There are some minor but important differences between the dialects that users should be aware of so that they can convert their scripts into the Rhino dialect.

Return Values

APEX scripts must always return a value of true indicating that the script executed correctly or false indicating that there was an error in script execution.

Pre Frankfurt

In Nashorn dialect scripts, the user had to create a special variable called returnValue and set the value of that variable to be the return value for the script.

Frankfurt and Later

In Rhino dialect scripts, the return value of the script is the logical result of the last statement. Therefore the last line of the script must evaluate to either true or false.

JS Rhino script last executed line examples

true;

returnValue; // Where returnValue is assigned earlier in the script

someValue == 1; // Where the value of someValue is assigned earlier in the script
return statement

The return statement is not supported from the main script called in the Rhino interpreter.

Pre Frankfurt

In Nashorn dialect scripts, the user could return a value of true or false at any point in their script.

JS Nashorn main script returning true and false

var n;

// some code assigns n a value

if (n < 2) {
  return false;
} else {
  return true;
}

Frankfurt and Later

In Rhino dialect scripts, the return statement cannot be used in the main method, but it can still be used in functions. If you want to have a return statement in your code prior to the last statement, encapsulate your code in a function.

JS Rhino script with return statements in a function

someFunction();

function someFunction() {
  var n;

  // some code assigns n a value

  if (n < 2) {
      return false;
  } else {
      return true;
  }
}
Compatibility Script

For Nashorn, the user had to call a compatibility script at the beginning of their Javascript script. This is not required in Rhino.

Pre Frankfurt

In Nashorn dialect scripts, the compatibility script must be loaded.

Nashorn compatibility script loading

load("nashorn:mozilla_compat.js");

Frankfurt and Later

Not required.

Import of Java classes

For Nashorn, the user had to explicitly import all the Java packages and classes they wished to use in their Javascript script. In Rhino, all Java classes on the classpath are available for use.

Pre Frankfurt

In Nashorn dialect scripts, Java classes must be imported.

Importation of Java packages and classes

importPackage(java.text);
importClass(java.text.SimpleDateFormat);

Frankfurt and Later

Not required.
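In Rhino, a class on the classpath can instead be referenced directly through its package name (a minimal sketch):

JS Rhino direct class use

// no import needed; java.* packages are visible to the script
var formatter = new java.text.SimpleDateFormat("yyyy-MM-dd");
var today = formatter.format(new java.util.Date());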

Using Java Classes and Objects as Variables

Setting a Javascript variable to hold a Java class or a Java object is more straightforward in Rhino than it is in Nashorn. The examples below show how to instantiate a Javascript variable as a Java class and how to use that variable to create an instance of the Java class in another Javascript variable in both dialects.

Pre Frankfurt

Create Javascript variables to hold a Java class and instance

var webClientClass = Java.type("org.onap.policy.apex.examples.bbs.WebClient");
var webClientObject = new webClientClass();

Frankfurt and Later

Create Javascript variables to hold a Java class and instance

var webClientClass = org.onap.policy.apex.examples.bbs.WebClient;
var webClientObject = new webClientClass();
Equal Value and Equal Type operator ===

The Equal Value and Equal Type operator === is not supported in Rhino. Developers must use the Equal To operator == instead. To check types, they may need to explicitly find and check the type of the variables they are using.
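For example, a strict comparison from a Nashorn-era script can be rewritten with == plus an explicit type check where needed (a sketch; a and b are illustrative variables):

JS equality in Rhino

// Nashorn: if (a === b) { ... }
// Rhino: compare values with ==, and check types explicitly if required
if (typeof a == typeof b && a == b) {
  // values (and types) match
}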

Writing Multiple Output Events from a Final State

APEX-PDP now supports sending multiple events from a final state in a Policy. The task associated with the final state can populate the fields of multiple events, which can then be passed on as the output events from the final state of the policy.

Note

inputfields and outputfields are not needed as part of the task definition anymore. The fields of an event are already defined as part of the event definition. The input event (a single trigger event) and the output event or events are associated with a task as part of the policy/state definition, because the event tagging is done there anyway.

Consider a simple example where a policy CDSActionPolicy has a state MakeCDSRequestState, which is also a final state. The state is triggered by an event AAIEvent. A task called HandleCDSActionTask is associated with MakeCDSRequestState. There are two output events expected from MakeCDSRequestState: CDSRequestEvent (a request event sent to CDS) and LogEvent (a log event sent to DMaaP). Writing an APEX policy for this example involves the changes below.

Command File:

Define all the concepts in the Policy. Only relevant parts for the multiple output support are shown.

## Define Events
event create name=AAIEvent version=0.0.1 nameSpace=org.onap.policy.apex.test source=AAI target=APEX
..
event create name=CDSRequestEvent version=0.0.1 nameSpace=org.onap.policy.apex.test source=APEX target=CDS
event parameter create name=CDSRequestEvent parName=actionIdentifiers schemaName=CDSActionIdentifiersType
..
event create name=LogEvent version=0.0.1 nameSpace=org.onap.policy.apex.test source=APEX target=DMaaP
event parameter create name=LogEvent  parName=status schemaName=SimpleStringType
..

## Define Tasks
task create name=HandleCDSActionTask
task contextref create name=HandleCDSActionTask albumName=EventDetailsAlbum
task logic create name=HandleCDSActionTask logicFlavour=JAVASCRIPT logic=LS
#MACROFILE:"src/main/resources/logic/HandleCDSActionTask.js"
LE
..

## Define Policies and States
policy create name=CDSActionPolicy template=Freestyle firstState=MakeCDSRequestState
policy state create name=CDSActionPolicy stateName=MakeCDSRequestState triggerName=AAIEvent defaultTaskName=HandleCDSActionTask
# Specify CDSRequestEvent as output
policy state output create name=CDSActionPolicy stateName=MakeCDSRequestState outputName=CDSActionStateOutput eventName=CDSRequestEvent
# Specify LogEvent as output
policy state output create name=CDSActionPolicy stateName=MakeCDSRequestState outputName=CDSActionStateOutput eventName=LogEvent
policy state taskref create name=CDSActionPolicy stateName=MakeCDSRequestState taskName=HandleCDSActionTask outputType=DIRECT outputName=CDSActionStateOutput

Task Logic File:

Create outfield instances if required, then populate them and add them to the output events:

..
var cdsRequestEventFields = new java.util.HashMap();
var actionIdentifiers = executor.subject.getOutFieldSchemaHelper("CDSRequestEvent","actionIdentifiers").createNewInstance();
actionIdentifiers.put("blueprintName", "sample-bp");
cdsRequestEventFields.put("actionIdentifiers", actionIdentifiers);
executor.addFieldsToOutput(cdsRequestEventFields);

var logEventFields = new java.util.HashMap();
logEventFields.put("status", "FINAL_SUCCESS");
executor.addFieldsToOutput(logEventFields);

With the above changes, the task populates the fields for both of the expected events, and the corresponding state, MakeCDSRequestState, outputs both CDSRequestEvent and LogEvent.

APEX OnapPf Guide

Installation

Build and Install

Refer to the APEX User Manual for details on the build and installation of the APEX component. Information on the requirements and system configuration can also be found there.
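For reference, the standard Maven build used for APEX can be run as follows (the source directories are examples):

Unix, Cygwin

# cd /usr/local/src/apex-pdp
# mvn clean install -DskipTests

Windows

>c:
>cd \dev\apex
>mvn clean install -DskipTests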

Installation Layout

A full installation of APEX comes with the following layout.

$APEX_HOME
    ├───bin                 (1)
    ├───etc                 (2)
    │   ├───editor
    │   ├───hazelcast
    │   ├───infinispan
    │   ├───META-INF
    │   ├───onappf
    │   │   └───config      (3)
    │   └───ssl             (4)
    ├───examples            (5)
    │   ├───config          (6)
    │   ├───docker          (7)
    │   ├───events          (8)
    │   ├───html            (9)
    │   ├───models          (10)
    │   └───scripts         (11)
    ├───lib                 (12)
    │   └───applications    (13)
    └───war                 (14)

(1) binaries, mainly scripts (bash and bat) to start the APEX engine and applications
(2) configuration files, such as logback (logging) and third party library configurations
(3) configuration file for APEXOnapPf, such as OnapPfConfig.json (initial configuration for APEXOnapPf)
(4) ssl related files such as policy-keystore and policy-truststore
(5) example policy models to get started
(6) configurations for the examples (with sub directories for individual examples)
(7) Docker files and additional Docker instructions for the examples
(8) example events for the examples (with sub directories for individual examples)
(9) HTML files for some examples, e.g. the Decisionmaker example
(10) the policy models, generated for each example (with sub directories for individual examples)
(11) additional scripts for the examples (with sub directories for individual examples)
(12) the library folder with all Java JAR files
(13) applications, also known as JARs with dependencies (or fat JARs), individually deployable
(14) WAR files for web applications

Verify the APEXOnapPf Installation

When APEX is installed and all settings are in place, the installation can be verified.

Verify Installation - run APEXOnapPf

A simple verification of an APEX installation can be done by starting the APEXOnapPf without any configuration. On Unix (or Cygwin) start the engine using $APEX_HOME/bin/apexOnapPf.sh. On Windows start the engine using %APEX_HOME%\bin\apexOnapPf.bat. The engine will fail to fully start. However, if the output looks similar to the following lines, the APEX installation is working.

Apex [main] INFO o.o.p.a.s.onappf.ApexStarterMain - In ApexStarter with parameters []
Apex [main] ERROR o.o.p.a.s.onappf.ApexStarterMain - start of services-onappf failed
org.onap.policy.apex.services.onappf.exception.ApexStarterException: apex starter configuration file was not specified as an argument
        at org.onap.policy.apex.services.onappf.ApexStarterCommandLineArguments.validateReadableFile(ApexStarterCommandLineArguments.java:278)
        at org.onap.policy.apex.services.onappf.ApexStarterCommandLineArguments.validate(ApexStarterCommandLineArguments.java:165)
        at org.onap.policy.apex.services.onappf.ApexStarterMain.<init>(ApexStarterMain.java:66)
        at org.onap.policy.apex.services.onappf.ApexStarterMain.main(ApexStarterMain.java:165)

To fully verify the installation, run the ApexOnapPf by providing the configuration files.

OnapPfConfig.json is the file which contains the initial configuration to start up the ApexStarter service. The DMaaP topics to be used for sending or receiving messages are also specified in this file. Provide this file as an argument when running the ApexOnapPf.

# $APEX_HOME/bin/apexOnapPf.sh -c $APEX_HOME/etc/onappf/config/OnapPfConfig.json (1)
# $APEX_HOME/bin/apexOnapPf.sh -c C:/apex/apex-full-2.0.0-SNAPSHOT/etc/onappf/config/OnapPfConfig.json (2)
>%APEX_HOME%\bin\apexOnapPf.bat -c %APEX_HOME%\etc\onappf\config\OnapPfConfig.json (3)

(1) UNIX
(2) Cygwin
(3) Windows

The APEXOnapPf should start successfully. Assuming the logging levels are not changed (the default level is info), the output should look similar to this (last few lines):

In ApexStarter with parameters [-c, C:/apex/etc/onappf/config/OnapPfConfig.json] . . .
Apex [main] INFO o.o.p.c.u.services.ServiceManager - service manager starting set alive
Apex [main] INFO o.o.p.c.u.services.ServiceManager - service manager starting register pdp status context object
Apex [main] INFO o.o.p.c.u.services.ServiceManager - service manager starting topic sinks
Apex [main] INFO o.o.p.c.u.services.ServiceManager - service manager starting Pdp Status publisher
Apex [main] INFO o.o.p.c.u.services.ServiceManager - service manager starting Register pdp update listener
Apex [main] INFO o.o.p.c.u.services.ServiceManager - service manager starting Register pdp state change request dispatcher
Apex [main] INFO o.o.p.c.u.services.ServiceManager - service manager starting Message Dispatcher . . .
Apex [main] INFO o.o.p.c.u.services.ServiceManager - service manager starting Rest Server . . .
Apex [main] INFO o.o.p.c.u.services.ServiceManager - service manager started
Apex [main] INFO o.o.p.a.s.onappf.ApexStarterMain - Started ApexStarter service

The ApexOnapPf service is now running, sending heartbeat messages to DMaaP (which will be received by PAP) and listening for messages from PAP on the specified DMaaP topic. Based on instructions from PAP, the ApexOnapPf will deploy or undeploy policies on the ApexEngine.

Terminate APEX by simply using CTRL+C in the console.

Running APEXOnapPf in Docker

Running APEX from the ONAP Docker repository only requires 2 commands:

  1. Log into the ONAP Docker repo

docker login -u docker -p docker nexus3.onap.org:10003

  2. Run the APEX Docker image

docker run -p 6969:6969 -p 23324:23324 -it --rm  nexus3.onap.org:10001/onap/policy-apex-pdp:2.1-SNAPSHOT-latest /bin/bash -c "/opt/app/policy/apex-pdp/bin/apexOnapPf.sh -c /opt/app/policy/apex-pdp/etc/onappf/config/OnapPfConfig.json"

To run the ApexOnapPf, the startup script apexOnapPf.sh is invoked along with the required configuration file. The ports 6969 (healthcheck) and 23324 (deployment port for the ApexEngine) are exposed.

Build a Docker Image

Alternatively, one can use the Dockerfile defined in the Docker package to build an image.

APEX Dockerfile

#
# Docker file to build an image that runs APEX on Java 11 or better in alpine
#
FROM onap/policy-jre-alpine:2.0.1

LABEL maintainer="Policy Team"

ARG POLICY_LOGS=/var/log/onap/policy/apex-pdp
ENV POLICY_HOME=/opt/app/policy/apex-pdp
ENV POLICY_LOGS=$POLICY_LOGS

RUN apk add --no-cache \
        vim \
        iproute2 \
        iputils \
    && addgroup -S apexuser && adduser -S apexuser -G apexuser \
    && mkdir -p $POLICY_HOME \
    && mkdir -p $POLICY_LOGS \
    && chown -R apexuser:apexuser $POLICY_LOGS \
    && mkdir /packages

COPY /maven/apex-pdp-package-full.tar.gz /packages
RUN tar xvfz /packages/apex-pdp-package-full.tar.gz --directory $POLICY_HOME \
    && rm /packages/apex-pdp-package-full.tar.gz \
    && find /opt/app -type d -perm 755 \
    && find /opt/app -type f -perm 644 \
    && chmod 755 $POLICY_HOME/bin/* \
    && cp -pr $POLICY_HOME/examples /home/apexuser \
    && chown -R apexuser:apexuser /home/apexuser/* $POLICY_HOME \
    && chmod 644 $POLICY_HOME/etc/*

USER apexuser
ENV PATH $POLICY_HOME/bin:$PATH
WORKDIR /home/apexuser

APEXOnapPf Configuration File Explained

The ApexOnapPf is initialized using a configuration file:

  • OnapPfConfig.json

Format of the configuration file (OnapPfConfig.json) explained

The configuration file is a JSON file containing the initial values for configuring the rest server for healthcheck and the PDP itself. The topic infrastructure and the topics to be used for sending or receiving messages are specified in this configuration file. A sample can be found below:

{
    "name":"ApexStarterParameterGroup",
    "restServerParameters": {  (1)
        "host": "0.0.0.0",
        "port": 6969,
        "userName": "...",
        "password": "...",
        "https": true  (2)
    },
    "pdpStatusParameters":{
        "timeIntervalMs": 120000,  (3)
        "pdpType":"apex",  (4)
        "pdpGroup":"defaultGroup",  (5)
        "description":"Pdp Heartbeat",
        "supportedPolicyTypes":[{"name":"onap.policies.controlloop.operational.Apex","version":"1.0.0"}]  (6)
    },
    "topicParameterGroup": {
        "topicSources" : [{  (7)
            "topic" : "POLICY-PDP-PAP",  (8)
            "servers" : [ "message-router" ],  (9)
            "topicCommInfrastructure" : "dmaap"  (10)
        }],
        "topicSinks" : [{  (11)
            "topic" : "POLICY-PDP-PAP",  (12)
            "servers" : [ "message-router" ],  (13)
            "topicCommInfrastructure" : "dmaap"  (14)
        }]
    }
}

(1) parameters for setting up the rest server, such as host, port, userName and password.
(2) https flag; if enabled, https support is enabled in the rest server.
(3) time interval at which PDP-A sends heartbeats to PAP, specified in milliseconds.
(4) the type of the PDP.
(5) the group to which the PDP belongs.
(6) list of policy types supported by the PDP. A trailing “.*” can be used to specify multiple policy types; for example, “onap.policies.controlloop.operational.apex.*” would match any policy type beginning with “onap.policies.controlloop.operational.apex.”. An example is shown below this list.
(7) list of topics from which messages are received.
(8) topic name of the source on which PDP-A listens for messages from PAP.
(9) list of servers for the source topic.
(10) the source topic infrastructure, e.g. dmaap, noop, ueb.
(11) list of topics to which messages are sent.
(12) topic name of the sink to which PDP-A sends messages.
(13) list of servers for the sink topic.
(14) the sink topic infrastructure, e.g. dmaap, noop, ueb.
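As noted in (6), a trailing wildcard entry in supportedPolicyTypes matches a whole family of policy types, for example:

"supportedPolicyTypes":[{"name":"onap.policies.controlloop.operational.apex.*","version":"1.0.0"}]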

Policy Examples

HowTo: My First Policy

Introduction

Consider a scenario where a supermarket chain called HyperM controls how it sells items in a policy-based manner. Each time an item is processed by HyperM’s point-of-sale (PoS) system an event is generated and published about that item of stock being sold. This event can then be used to update stock levels, etc..

HyperM want to extend this approach to allow some checks to be performed before the sale can be completed. This can be achieved by requesting a policy-controlled decision as each item is processed for sale by each PoS system. The decision process is integrated with HyperM’s other IT systems that manage stock control, sourcing and purchasing, personnel systems, etc.

In this document we will show how APEX and APEX Policies can be used to achieve this, starting with a simple policy and building up to a more complicated policy that demonstrates the features of APEX.

Data Models

Sales Input Event

Each time a PoS system processes a sales item an event with the following format is emitted:

Table 1. Sale Input Event

Event:       SALE_INPUT
Fields:      time, sale_ID, amount, item_ID, quantity, assistant_ID, branch_ID, notes, …
Description: Event indicating a sale of an item is occurring

In each SALE_INPUT event the sale_ID field is a unique ID generated by the PoS system. A timestamp for the event is stored in the time field. The amount field refers to the value of the item(s) to be sold (in cents). The item_ID field is a unique identifier for each item type, and can be used to retrieve more information about the item from HyperM’s stock control system. The quantity field refers to the quantity of the item to be sold. The assistant_ID field is a unique identifier for the PoS operator, and can be used to retrieve more information about the operator from the HyperM’s personnel system. Since HyperM has many branches the branch_ID identifies the shop. The notes field contains arbitrary notes about the sale.

Sales Decision Event

After a SALE_INPUT event is emitted by the PoS system, HyperM’s policy-controlled sales checking system emits a Sale Authorization Event indicating whether the sale is authorized or denied. The PoS system can then listen for this event before continuing with the sale.

Table 2. Sale Authorisation Event

Event:       SALE_AUTH
Fields:      sale_ID, time, authorised, amount, item_ID, quantity, assistant_ID, branch_ID, notes, message, …
Description: Event indicating a sale of an item is authorised or denied

In each SALE_AUTH event the sale_ID field is copied from the SALE_INPUT event that triggered the decision request. The SALE_AUTH event is also timestamped using the time field, and a field called authorised is set to true or false depending on whether the sale is authorized or denied. The message field carries an optional message about why a sale was not authorized. The other fields from the SALE_INPUT event are also included for completeness.

Stock Control: Items

HyperM maintains information about each item for sale in a database table called ITEMS.

Table 3. Items Database

Table:       ITEMS
Fields:      item_ID, description, cost_price, barcode, supplier_ID, category, …
Description: Database table describing each item for sale

The database table ITEMS has a row for each item that HyperM sells. Each item is identified by an item_ID value. The description field stores a description of the item. The cost price of the item is given in cost_price. The barcode of the item is encoded in barcode, while the item supplier is identified by supplier_ID. Items may also be classified into categories using the category field. Useful categories might include: soft drinks, alcoholic drinks, cigarettes, knives, confectionery, bakery, fruit&vegetables, meat, etc..

Personnel System: Assistants

Table 4. Assistants Database

Table:       ASSISTANTS
Fields:      assistant_ID, surname, firstname, middlename, age, grade, phone_number, …
Description: Database table describing each HyperM sales assistant

The database table ASSISTANTS has a row for each sales assistant employed by HyperM. Each assistant is identified by an assistant_ID value, with their name given in the firstname, middlename and surname fields. The assistant’s age in years is given in age, while their phone number is contained in the phone_number field. The assistant’s grade is encoded in grade. Useful values for grade might include: trainee, operator, supervisor, etc..

Locations: Branches

Table 5. Branches Database

Table:       BRANCHES
Fields:      branch_ID, branch_Name, category, street, city, country, postcode, …
Description: Database table describing each HyperM branch

HyperM operates a number of branches. Each branch is described in the BRANCHES database table. Each branch is identified by a branch_ID, with a branch name given in branch_Name. The address for the branch is encoded in street, city, country and postcode. The branch category is given in the category field. Useful values for category might include: Small, Large, Super, Hyper, etc..

Policy Step 1

Scenario

For the first version of our policy, let’s start with something simple. Let us assume that there exists some restriction that alcohol products cannot be sold before 11:30am. In this section we will go through the necessary steps to define a policy that can enforce this for HyperM.

  • Alcohol cannot be sold before 11:30am…

New Policy Model

Create a new empty Policy Model MyFirstPolicyModel

Since an organisation like HyperM may have many policies covering many different domains, policies should be grouped into policy sets. In order to edit or deploy a policy, or policy set, the definition of the policy(ies) and all required events, tasks, states, etc., are grouped together into a ‘Policy Model’. An organization might define many Policy Models, each containing a different set of policies.

So the first step is to create a new empty Policy Model called MyFirstPolicyModel. Using the APEX Policy Editor, click on the ‘File’ menus and select ‘New’. Then define our new policy model called MyFirstPolicyModel. Use the ‘Generate UUID’ button to create a new unique ID for the policy model, and fill in a description for the policy model. Press the Submit button to save your changes.

File > New to create a new Policy Model

Create a new Policy Model

Events

Create the input event SALE_INPUT and the output event SALE_AUTH

Using the APEX Policy Editor, click on the ‘Events’ tab. In the ‘Events’ pane, right click and select ‘New’:

Right click to create a new event

Create a new event type called SALE_INPUT. Use the ‘Generate UUID’ button to create a new unique ID for the event type, and fill in a description for the event. Add a namespace, e.g. com.hyperm. We can add hard-coded strings for the Source and Target, e.g. POS and APEX. At this stage we will not add any parameter fields, we will leave this until later. Use the Submit button to create the event.

Fill in the necessary information for the 'SALE_INPUT' event and click 'Submit'

Repeat the same steps for a new event type called SALE_AUTH. Just use APEX as source and POS as target, since this is the output event coming from APEX going to the sales point.

Before we can add parameter fields to an event we must first define APEX Context Item Schemas that can be used by those fields.

To create new item schemas, click on the ‘Context Item Schemas’ tab. In that ‘Context Item Schemas’ pane, right click and select ‘Create new ContextSchema’.

Right click to create a new Item Schema

Create item schemas with the following characteristics, each with its own unique UUID:

Table 1. Item Schemas

Name                Schema Flavour   Schema Definition   Description
timestamp_type      Java             java.lang.Long      A type for time values
sale_ID_type        Java             java.lang.Long      A type for sale_ID values
price_type          Java             java.lang.Long      A type for amount/price values
item_ID_type        Java             java.lang.Long      A type for item_ID values
assistant_ID_type   Java             java.lang.Long      A type for assistant_ID values
quantity_type       Java             java.lang.Integer   A type for quantity values
branch_ID_type      Java             java.lang.Long      A type for branch_ID values
notes_type          Java             java.lang.String    A type for notes values
authorised_type     Java             java.lang.Boolean   A type for authorised values
message_type        Java             java.lang.String    A type for message values

Create a new Item Schema

The item schemas can now be seen on the ‘Context Item Schemas’ tab, and can be updated at any time by right-clicking on the item schemas on the ‘Context Item Schemas’ tab. Now we can go back to the event definitions for SALE_INPUT and SALE_AUTH and add some parameter fields.

Tip

APEX natively supports schema definitions in Java and Avro. Java schema definitions are simply the name of a Java Class. There are some restrictions:

  • the class must be instantiatable, i.e. not a Java interface or abstract class

  • primitive types are not supported, i.e. use java.lang.Integer instead of int, etc.

  • it must be possible to find the class, i.e. the class must be contained in the Java classpath.

Avro schema definitions can be any valid Avro schema. For events using fields defined with Avro schemas, any incoming event containing that field must contain a value that conforms to the Avro schema.
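For illustration, a minimal Avro schema that could back an event field looks like the following (this example uses Java schema definitions throughout, so the record below is hypothetical):

{
  "type": "record",
  "name": "Note",
  "fields": [
    { "name": "text", "type": "string" }
  ]
}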

Click on the ‘Events’ tab, then right click the SALE_INPUT row and select ‘Edit Event SALE_INPUT’. To add a new event parameter, use the ‘Add Event Parameter’ button at the bottom of the screen. For the SALE_INPUT event, add the following event parameters:

Table 2. Event Parameter Fields for the SALE_INPUT Event

Parameter Name   Parameter Type      Optional
time             timestamp_type      no
sale_ID          sale_ID_type        no
amount           price_type          no
item_ID          item_ID_type        no
quantity         quantity_type       no
assistant_ID     assistant_ID_type   no
branch_ID        branch_ID_type      no
notes            notes_type          yes

Remember to click the ‘Submit’ button at the bottom of the event definition pane.

Tip

Parameter fields can be optional in events. If a parameter is not marked as optional then by default it is mandatory, so it must appear in any input event passed to APEX. If an optional field is not set for an output event then its value will be set to null.

Add new event parameters to an event

Select the SALE_AUTH event and add the following event parameters:

Table 3. Event Parameter Fields for the SALE_AUTH Event

Parameter Name   Parameter Type      Optional
sale_ID          sale_ID_type        no
time             timestamp_type      no
authorised       authorised_type     no
message          message_type        yes
amount           price_type          no
item_ID          item_ID_type        no
assistant_ID     assistant_ID_type   no
quantity         quantity_type       no
branch_ID        branch_ID_type      no
notes            notes_type          yes

Remember to click the ‘Submit’ button at the bottom of the event definition pane.

The events for our policy are now defined.

New Policy

Create a new Policy and add the “No Booze before 11:30” check

APEX policies are defined using a state-machine model. Each policy comprises one or more states that can be individually executed. Where there is more than one state the states are chained together to form a Directed Acyclic Graph (DAG) of states. A state is triggered by passing it a single input (or ‘trigger’) event and once executed each state then emits an output event. For each state the logic for the state is embedded in one or more tasks. Each task contains specific task logic that is executed by the APEX execution environment each time the task is invoked. Where there is more than one task in a state then the state also defines some task selection logic to select an appropriate task each time the state is executed.

Therefore, to create a new policy we must first define one or more tasks.

To create a new Task click on the ‘Tasks’ tab. In the ‘Tasks’ pane, right click and select ‘Create new Task’. Create a new Task called MorningBoozeCheck. Use the ‘Generate UUID’ button to create a new unique ID for the task, and fill in a description for the task.

Right click to create a new task

Tasks are configured with a set of input fields and a set of output fields. To add new input/output fields for a task, use the ‘Add Task Input Field’ and ‘Add Task Output Field’ buttons. The lists of input and output fields to add for the MorningBoozeCheck task are given below. The input fields are drawn from the parameters in the state’s input event, and the task’s output fields are used to populate the state’s output event. The task’s input and output fields must be a subset of the event parameters defined for the input and output events of any state that uses that task. (You may have noticed that the input and output fields for the MorningBoozeCheck task have the exact same names and reuse the item schemas that we used for the parameters in the SALE_INPUT and SALE_AUTH events respectively.)

Table 1. Input fields for MorningBoozeCheck task

Parameter Name   Parameter Type
time             timestamp_type
sale_ID          sale_ID_type
amount           price_type
item_ID          item_ID_type
quantity         quantity_type
assistant_ID     assistant_ID_type
branch_ID        branch_ID_type
notes            notes_type

Table 2. Output fields for MorningBoozeCheck task

Parameter Name   Parameter Type
sale_ID          sale_ID_type
time             timestamp_type
authorised       authorised_type
message          message_type
amount           price_type
item_ID          item_ID_type
assistant_ID     assistant_ID_type
quantity         quantity_type
branch_ID        branch_ID_type
notes            notes_type

Add input and output fields for the task

Each task must include some ‘Task Logic’ that implements the behaviour for the task. Task logic can be defined in a number of different ways using a choice of languages. For this task we will author the logic using the Java-like scripting language called MVEL (https://en.wikipedia.org/wiki/MVEL).

For simplicity, use the code for the task logic given here (Task Logic: MorningBoozeCheck.mvel). Paste the script text into the ‘Task Logic’ box, and use “MVEL” as the ‘Task Logic Type / Flavour’.

This logic assumes that all items with item_ID between 1000 and 2000 contain alcohol, which is not very realistic, but we will see a better approach for this later. It also uses the standard Java time utilities to check if the current time is between 00:00:00 GMT and 11:30:00 GMT. For a detailed guide on how to write your own logic in JavaScript (https://en.wikipedia.org/wiki/JavaScript), MVEL (https://en.wikipedia.org/wiki/MVEL) or one of the other supported languages, please refer to the APEX Programmers Guide.

Add task logic to the task

An alternative version of the same logic is available in JavaScript (Task Logic: MorningBoozeCheck.js). Just use “JAVASCRIPT” as the ‘Task Logic Type / Flavour’ instead.
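For illustration, the following is a minimal JavaScript sketch of the check described above. The MorningBoozeCheck.mvel and MorningBoozeCheck.js files shipped with the examples are the authoritative versions; this sketch simply assumes the item_ID range and GMT time window described in the text.

JS MorningBoozeCheck logic (sketch)

var authorised = true;
var message = "Sale authorised by policy task " + executor.subject.getTaskName();

var itemId = executor.inFields.get("item_ID");

// interpret the event timestamp in GMT, as described above
var gmt = java.util.Calendar.getInstance(java.util.TimeZone.getTimeZone("GMT"));
gmt.setTimeInMillis(executor.inFields.get("time"));
var minuteOfDay = gmt.get(java.util.Calendar.HOUR_OF_DAY) * 60
    + gmt.get(java.util.Calendar.MINUTE);

// assume items with 1000 <= item_ID < 2000 contain alcohol
if (itemId >= 1000 && itemId < 2000 && minuteOfDay < 11 * 60 + 30) {
  authorised = false;
  message = "Sale not authorised by policy task " + executor.subject.getTaskName()
      + ". Alcohol can not be sold between 00:00:00 GMT and 11:30:00 GMT";
}

// copy the input fields through and set the decision fields
executor.outFields.put("sale_ID", executor.inFields.get("sale_ID"));
executor.outFields.put("time", executor.inFields.get("time"));
executor.outFields.put("amount", executor.inFields.get("amount"));
executor.outFields.put("item_ID", itemId);
executor.outFields.put("quantity", executor.inFields.get("quantity"));
executor.outFields.put("assistant_ID", executor.inFields.get("assistant_ID"));
executor.outFields.put("branch_ID", executor.inFields.get("branch_ID"));
executor.outFields.put("notes", executor.inFields.get("notes"));
executor.outFields.put("authorised", authorised);
executor.outFields.put("message", message);

true;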

The task definition is now complete, so click the ‘Submit’ button to save the task. The task can now be seen on the ‘Tasks’ tab, and can be updated at any time by right-clicking on the task on the ‘Tasks’ tab. Now that we have created our task, we can create a policy that uses that task.

To create a new Policy click on the ‘Policies’ tab. In the ‘Policies’ pane, right click and select ‘Create new Policy’:

Create a new Policy called MyFirstPolicy. Use the ‘Generate UUID’ button to create a new unique ID for the policy, and fill in a description for the policy. Use ‘FREEFORM’ as the ‘Policy Flavour’.

Each policy must have at least one state. Since this is ‘freeform’ policy we can add as many states as we wish. Let’s start with one state. Add a new state called BoozeAuthDecide to this MyFirstPolicy policy using the ‘Add new State’ button after filling in the name of our new state.

Create a new policy

Each state must use one input event type. For this new state, select the SALE_INPUT event as the input event.

Each policy must define a ‘First State’ and a ‘Policy Trigger Event’. The ‘Policy Trigger Event’ is the input event for the policy as a whole. This event is then passed to the first state in the chain of states in the policy, therefore the ‘Policy Trigger Event’ will be the input event for the first state. Each policy can only have one ‘First State’. For our MyFirstPolicy policy, select BoozeAuthDecide as the ‘First State’. This will automatically select SALE_INPUT as the ‘Policy Trigger Event’ for our policy.

Create a state

In this case we will create a reference to the pre-existing MorningBoozeCheck task that we defined above using the ‘Add New Task’ button. Select the MorningBoozeCheck task, and use the name of the task as the ‘Local Name’ for the task.

In the case where a state references more than one task, a ‘Default Task’ must be selected for the state and some logic (‘Task Selection Logic’) must be specified to select the appropriate task at execution time. Since our new state BoozeAuthDecide only has one task, the default task is selected automatically and no ‘Task Selection Logic’ is required.

Note

In a ‘Policy’ ‘State’ a ‘State Output Mapping’ has 3 roles: 1) select which ‘State’ should be executed next, 2) select the type of the state’s ‘Outgoing Event’, and 3) populate the state’s ‘Outgoing Event’. This is how states are chained together to form a Directed Acyclic Graph (DAG) of states. The final state(s) of a policy are those that do not select any ‘next’ state. Since a ‘State’ can only accept a single type of event, the type of the event emitted by a previous ‘State’ must match the incoming event type of the next ‘State’. This is also how the last state(s) in a policy can emit events of different types. The ‘State Output Mapping’ is also responsible for taking the fields that are output by the task executed in the state and populating the state’s output event before it is emitted.

Each ‘Task’ referenced in a ‘State’ must have a defined ‘Output Mapping’ to take the output of the task, select an ‘Outgoing Event’ type for the state, populate the state’s outgoing event, and then select the next state to be executed (if any).

There are 2 basic types of output mappings:

  1. Direct Output Mappings have a single value for ‘Next State’ and a single value for ‘State Output Event’. The outgoing event for the state is automatically created, any outgoing event parameters that were present in the incoming event are copied into the outgoing event, then any task output fields that have the same name and type as parameters in the outgoing event are automatically copied into the outgoing event.

  2. Logic-Based State Output Mappings / Finalizers have some logic defined that dynamically selects and creates the ‘State Outgoing Event’, manages the population of the outgoing event parameters (perhaps changing or adding to the outputs from the task), and then dynamically selects the next state to be executed (if any).

Each task reference must also have an associated ‘Output State Mapping’, so we need an ‘Output State Mapping’ for the BoozeAuthDecide state to use when the MorningBoozeCheck task is executed. The simplest type of output mapping is a ‘Direct Output Mapping’.

Create a new ‘Direct Output Mapping’ for the state called MorningBoozeCheck_Output_Direct using the ‘Add New Direct State Output Mapping’ button. Select SALE_AUTH as the output event and select None for the next state value. We can then select this output mapping for use when the MorningBoozeCheck task is executed. Since there is only one state, and only one task for that state, this output mapping ensures that the BoozeAuthDecide state is the only state executed and that the state (and the policy) can only emit events of type SALE_AUTH. (You may remember that the output fields for the MorningBoozeCheck task have the exact same names and reuse the item schemas that we used for the parameters in the SALE_AUTH event. The MorningBoozeCheck_Output_Direct direct output mapping can now automatically copy the values from the MorningBoozeCheck task directly into outgoing SALE_AUTH events.)
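For reference, the same mapping could be expressed in the APEX CLI editor along the following lines (syntax as in the command file shown earlier in this document; the names are the ones used in this example):

policy state output create name=MyFirstPolicy stateName=BoozeAuthDecide outputName=MorningBoozeCheck_Output_Direct eventName=SALE_AUTH
policy state taskref create name=MyFirstPolicy stateName=BoozeAuthDecide taskName=MorningBoozeCheck outputType=DIRECT outputName=MorningBoozeCheck_Output_Direct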

Add a Task and Output Mapping

Click the ‘Submit’ button to complete the definition of our MyFirstPolicy policy. The policy MyFirstPolicy can now be seen in the list of policies on the ‘Policies’ tab, and can be updated at any time by right-clicking on the policy on the ‘Policies’ tab.

The MyFirstPolicyModel, including our MyFirstPolicy policy can now be checked for errors. Click on the ‘Model’ menu and select ‘Validate’. The model should validate without any ‘Warning’ or ‘Error’ messages. If you see any ‘Error’ or ‘Warning’ messages, carefully read the message as a hint to find where you might have made a mistake when defining some aspect of your policy model.

Validate the policy model for error using the 'Model' > 'Validate' menu item

Congratulations, you have now completed your first APEX policy. The policy model containing our new policy can now be exported from the editor and saved. Click on the ‘File’ menu and select ‘Download’ to save the policy model in JSON format. The exported policy model is then available in the directory you selected, for instance $APEX_HOME/examples/models/MyFirstPolicy/1/MyFirstPolicyModel_0.0.1.json. The exported policy can now be loaded into the APEX Policy Engine, or can be re-loaded and edited by the APEX Policy Editor.

Download the completed policy model using the 'File' > 'Download' menu item

Test The Policy

Test Policy Step 1

To start a new APEX Engine you can use the following configuration. In a full APEX installation you can find this configuration in $APEX_HOME/examples/config/MyFirstPolicy/1/MyFirstPolicyConfigStdin2StdoutJsonEvent.json. This configuration expects incoming events to be in JSON format and to be passed into the APEX Engine from stdin, and result events will be printed in JSON format to stdout. This configuration loads the policy model stored in the file ‘MyFirstPolicyModel_0.0.1.json’ as exported from the APEX Editor. Note, you may need to edit this file to provide the full path to wherever you stored the exported policy model file.

To test the policy, try pasting the following events into the console as the APEX engine executes:

Each test case below shows the input event (JSON), followed by the output event (JSON) and a comment describing the scenario.

{
  "nameSpace": "com.hyperm",
  "name": "SALE_INPUT",
  "version": "0.0.1",
  "time": 1483351989000,
  "sale_ID": 99999991,
  "amount": 299,
  "item_ID": 5123,
  "quantity": 1,
  "assistant_ID": 23,
  "branch_ID": 1,
  "notes": "Special Offer!!"
}
{
  "name": "SALE_AUTH",
  "version": "0.0.1",
  "nameSpace": "com.hyperm",
  "source": "",
  "target": "",
  "amount": 299,
  "assistant_ID": 23,
  "authorised": true,
  "branch_ID": 1,
  "item_ID": 5123,
  "message": "Sale authorised by policy task MorningBoozeCheck for time 10:13:09 GMT",
  "notes": "Special Offer!!",
  "quantity": 1,
  "sale_ID": 99999991,
  "time": 1483351989000
}

Request to buy a non-alcoholic item (item_ID=5123) at 10:13:09 GMT on Monday, 02 January 2017. Sale is authorized.

{
  "nameSpace": "com.hyperm",
  "name": "SALE_INPUT",
  "version": "0.0.1",
  "time": 1483346466000,
  "sale_ID": 99999992,
  "amount": 1249,
  "item_ID": 1012,
  "quantity": 1,
  "assistant_ID": 12,
  "branch_ID": 2
}
{
  "nameSpace": "com.hyperm",
  "name": "SALE_AUTH",
  "version": "0.0.1",
  "source": "",
  "target": "",
  "amount": 1249,
  "assistant_ID": 12,
  "authorised": false,
  "branch_ID": 2,
  "item_ID": 1012,
  "message": "Sale not authorised by policy task MorningBoozeCheck for time 08:41:06 GMT. Alcohol can not be sold between 00:00:00 GMT and 11:30:00 GMT",
  "notes": null,
  "quantity": 1,
  "sale_ID": 99999992,
  "time": 1483346466000
}

Request to buy an alcohol item (item_ID=1012) at 08:41:06 on Monday, 02 January 2017. Sale is not authorized.

{
  "nameSpace": "com.hyperm",
  "name": "SALE_INPUT",
  "version": "0.0.1",
  "time": 1482265033000,
  "sale_ID": 99999993,
  "amount": 4799,
  "item_ID": 1943,
  "quantity": 2,
  "assistant_ID": 9,
  "branch_ID": 3
}
{
  "name": "SALE_AUTH",
  "version": "0.0.1",
  "nameSpace": "com.hyperm",
  "source": "",
  "target": "",
  "amount": 4799,
  "assistant_ID": 9,
  "authorised": true,
  "branch_ID": 3,
  "item_ID": 1943,
  "message": "Sale authorised by policy task MorningBoozeCheck for time 20:17:13 GMT",
  "notes": null,
  "quantity": 2,
  "sale_ID": 99999993,
  "time": 1482265033000
}

Request to buy alcohol (item_ID=1943) at 20:17:13 on Tuesday, 20 December 2016. Sale is authorized.

CLI Editor File

Policy 1 in CLI Editor

An equivalent version of the MyFirstPolicyModel policy model can again be generated using the APEX CLI editor. A sample APEX CLI script is shown below:

Policy Step 2

Scenario

HyperM have just opened a new branch in a different country, but that country has different rules about when alcohol can be sold! In this section we will go through the necessary steps to extend our policy to enforce this for HyperM.

  • In some branches alcohol cannot be sold before 1pm, and not at all on Sundays.

Although there are a number of ways to accomplish this, the easiest approach for us is to define another task and then select which task is appropriate at runtime, depending on the branch identifier in the incoming event.

Extend Policy Model

Extend the Policy with the new Scenario

To create a new Task click on the ‘Tasks’ tab. In the ‘Tasks’ pane, right click and select ‘Create new Task’:

Create a new Task called MorningBoozeCheckAlt1. Use the ‘Generate UUID’ button to create a new unique ID for the task, and fill in a description for the task. Select the same input and output fields that we used when we defined the MorningBoozeCheck task earlier.

Table 1. Input fields for MorningBoozeCheckAlt1 task

Parameter Name   Parameter Type
time             timestamp_type
sale_ID          sale_ID_type
amount           price_type
item_ID          item_ID_type
quantity         quantity_type
assistant_ID     assistant_ID_type
branch_ID        branch_ID_type
notes            notes_type

Table 2. Output fields for MorningBoozeCheckAlt1 task

Parameter Name   Parameter Type
sale_ID          sale_ID_type
time             timestamp_type
authorised       authorised_type
message          message_type
amount           price_type
item_ID          item_ID_type
assistant_ID     assistant_ID_type
quantity         quantity_type
branch_ID        branch_ID_type
notes            notes_type

This task also requires some ‘Task Logic’ to implement the new behaviour for this task.

For simplicity, use the following code for the task logic (`MorningBoozeCheckAlt1` task logic (`MVEL`)). It again assumes that all items with item_ID between 1000 and 2000 contain alcohol. We again use the standard Java time utilities to check if the current time is between 00:00:00 CET and 13:00:00 CET, or if it is Sunday.

For this task we will again author the logic using the `MVEL <https://en.wikipedia.org/wiki/MVEL>`__ scripting language. Sample task logic code (specified in `MVEL <https://en.wikipedia.org/wiki/MVEL>`__) is given below. For a detailed guide to writing your own logic in `JavaScript <https://en.wikipedia.org/wiki/JavaScript>`__, `MVEL <https://en.wikipedia.org/wiki/MVEL>`__ or one of the other supported languages, please refer to the APEX Programmers Guide.
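
For reference, the behaviour described above can be sketched as follows. This is a rough, non-authoritative illustration in JavaScript (the tutorial itself specifies MVEL); it assumes the standard APEX executor bindings (executor.inFields, executor.outFields) and abbreviates the message strings:

var returnValue = true;

// Interpret the event timestamp in the CET time zone
var cet = java.util.TimeZone.getTimeZone("CET");
var calendar = java.util.Calendar.getInstance(cet);
calendar.setTimeInMillis(executor.inFields.get("time"));

var hour = calendar.get(java.util.Calendar.HOUR_OF_DAY);
var sunday = calendar.get(java.util.Calendar.DAY_OF_WEEK) == java.util.Calendar.SUNDAY;

// Scenario assumption: items with item_ID in [1000, 2000) contain alcohol
var itemId = executor.inFields.get("item_ID");
var alcohol = itemId >= 1000 && itemId < 2000;

// Copy the sale details through to the output event
executor.outFields.put("sale_ID", executor.inFields.get("sale_ID"));
executor.outFields.put("time", executor.inFields.get("time"));
executor.outFields.put("amount", executor.inFields.get("amount"));
executor.outFields.put("item_ID", itemId);
executor.outFields.put("quantity", executor.inFields.get("quantity"));
executor.outFields.put("assistant_ID", executor.inFields.get("assistant_ID"));
executor.outFields.put("branch_ID", executor.inFields.get("branch_ID"));
executor.outFields.put("notes", executor.inFields.get("notes"));

// No alcohol sales between 00:00:00 CET and 13:00:00 CET, or at all on Sundays
// (message text abbreviated relative to the shipped sample)
if (alcohol && (hour < 13 || sunday)) {
    executor.outFields.put("authorised", false);
    executor.outFields.put("message", "Sale not authorised by policy task MorningBoozeCheckAlt1");
} else {
    executor.outFields.put("authorised", true);
    executor.outFields.put("message", "Sale authorised by policy task MorningBoozeCheckAlt1");
}

returnValue;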

Create a new alternative task `MorningBoozeCheckAlt1`

The task definition is now complete, so click the ‘Submit’ button to save the task. Now that we have created our task, we can add this task to the single pre-existing state (BoozeAuthDecide) in our policy.

To edit the BoozeAuthDecide state in our policy click on the ‘Policies’ tab. In the ‘Policies’ pane, right click on our MyFirstPolicy policy and select ‘Edit’. Navigate to the BoozeAuthDecide state in the ‘states’ section at the bottom of the policy definition pane.

Right click to edit a policy

To add our new task MorningBoozeCheckAlt1, scroll down to the BoozeAuthDecide state in the ‘States’ section. In the ‘State Tasks’ section for BoozeAuthDecide use the ‘Add new task’ button. Select our new MorningBoozeCheckAlt1 task, and use the name of the task as the ‘Local Name’ for the task. The MorningBoozeCheckAlt1 task can reuse the same MorningBoozeCheck_Output_Direct ‘Direct State Output Mapping’ that we used for the MorningBoozeCheck task. (Recall that the role of the ‘State Output Mapping’ is to select the output event for the state, and select the next state to be executed. These both remain the same as before.)

Since our state has more than one task we must define some logic to determine which task should be used each time the state is executed. This task selection logic is defined in the state definition. For our BoozeAuthDecide state we want the choice of which task to use to be based on the branch_ID from which the SALE_INPUT event originated. For simplicity's sake, let us assume that branches with branch_ID between 0 and 999 should use the MorningBoozeCheck task, and branches with branch_ID between 1000 and 1999 should use the MorningBoozeCheckAlt1 task.

This time, for variety, we will author the task selection logic using the `JavaScript <https://en.wikipedia.org/wiki/JavaScript>`__ scripting language. Sample task selection logic code is given here (`BoozeAuthDecide` task selection logic (`JavaScript`)). Paste the script text into the ‘Task Selection Logic’ box, and use “JAVASCRIPT” as the ‘Task Selection Logic Type / Flavour’. It is necessary to mark one of the tasks as the ‘Default Task’ so that the task selection logic always has a fallback default option in cases where a particular task cannot be selected. In this case the MorningBoozeCheck task can be the default task.
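
The actual sample script ships with APEX; as a minimal sketch, assuming the standard APEX JavaScript executor bindings for task selection logic, it could look like this:

var returnValue = true;

var branchId = executor.inFields.get("branch_ID");

// Branches 0-999 use the original task, branches 1000-1999 the alternative;
// anything else falls back to the default task (MorningBoozeCheck).
if (branchId >= 0 && branchId < 1000) {
    executor.subject.getTaskKey("MorningBoozeCheck").copyTo(executor.selectedTask);
} else if (branchId >= 1000 && branchId < 2000) {
    executor.subject.getTaskKey("MorningBoozeCheckAlt1").copyTo(executor.selectedTask);
} else {
    executor.subject.getDefaultTaskKey().copyTo(executor.selectedTask);
}

returnValue;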

State definition with 2 Tasks and Task Selection Logic

When complete don’t forget to click the ‘Submit’ button at the bottom of ‘Policies’ pane for our MyFirstPolicy policy after updating the BoozeAuthDecide state.

Congratulations, you have now completed the second step towards your first APEX policy. The policy model containing our new policy can again be validated and exported from the editor and saved as shown in Step 1.

The exported policy model is then available in the directory you selected, as MyFirstPolicyModel_0.0.1.json. The exported policy can now be loaded into the APEX Policy Engine, or can be re-loaded and edited by the APEX Policy Editor.

Test The Policy

Test Policy Step 2

To start a new APEX Engine you can use the following configuration. In a full APEX installation you can find this configuration in $APEX_HOME/examples/config/MyFirstPolicy/2/MyFirstPolicyConfigStdin2StdoutJsonEvent.json. Note, this has changed from the configuration file in Step 1 to enable the JAVASCRIPT executor for our new ‘Task Selection Logic’.

To test the policy, try pasting the following events into the console as the APEX engine executes. Note, all tests from Step 1 will still work perfectly since none of those events originate from a branch with branch_ID between 1000 and 2000. The ‘Task Selection Logic’ will therefore pick the MorningBoozeCheck task as expected, and will therefore give the same results.

Table 1. Inputs and Outputs when testing My First Policy

Input Event (JSON)

Output Event (JSON)

Comment

{
  "nameSpace": "com.hyperm",
  "name": "SALE_INPUT",
  "version": "0.0.1",
  "time": 1483346466000,
  "sale_ID": 99999992,
  "amount": 1249,
  "item_ID": 1012,
  "quantity": 1,
  "assistant_ID": 12,
  "branch_ID": 2
}
{
  "nameSpace": "com.hyperm",
  "name": "SALE_AUTH",
  "version": "0.0.1",
  "source": "",
  "target": "",
  "amount": 1249,
  "assistant_ID": 12,
  "authorised": false,
  "branch_ID": 2,
  "item_ID": 1012,
  "message": "Sale not authorised by policy task MorningBoozeCheck for time 08:41:06 GMT. Alcohol can not be sold between 00:00:00 GMT and 11:30:00 GMT",
  "notes": null,
  "quantity": 1,
  "sale_ID": 99999992,
  "time": 1483346466000
}

Request to buy an alcohol item (item_ID=1012) at 08:41:06 GMT on Monday, 02 January 2017. Sale is not authorized. Uses the MorningBoozeCheck task.

{
  "nameSpace": "com.hyperm",
  "name": "SALE_INPUT",
  "version": "0.0.1",
  "time": 1482398073000,
  "sale_ID": 99999981,
  "amount": 299,
  "item_ID": 1047,
  "quantity": 1,
  "assistant_ID": 1212,
  "branch_ID": 1002
}
{
  "nameSpace" : "com.hyperm",
  "name" : "SALE_AUTH",
  "version" : "0.0.1",
  "source" : "",
  "target" : "",
  "sale_ID" : 99999981,
  "amount" : 299,
  "assistant_ID" : 1212,
  "notes" : null,
  "quantity" : 1,
  "branch_ID" : 1002,
  "item_ID" : 1047,
  "authorised" : false,
  "time" : 1482398073000,
  "message" : "Sale not authorised by policy task MorningBoozeCheckAlt1 for time 10:14:33 CET. Alcohol can not be sold between 00:00:00 CET and 13:00:00 CET or on Sunday"
}

Request to buy alcohol (item_ID=1047) at 10:14:33 on Thursday, 22 December 2016. Sale is not authorized. Uses the MorningBoozeCheckAlt1 task.

{
  "nameSpace": "com.hyperm",
  "name": "SALE_INPUT",
  "version": "0.0.1",
  "time": 1482077977000,
  "sale_ID": 99999982,
  "amount": 2199,
  "item_ID": 1443,
  "quantity": 12,
  "assistant_ID": 94,
  "branch_ID": 1003,
  "notes": "Buy 3, get 1 free!!"
}
{
  "nameSpace" : "com.hyperm",
  "name" : "SALE_AUTH",
  "version" : "0.0.1",
  "source" : "",
  "target" : "",
  "sale_ID" : 99999982,
  "amount" : 2199,
  "assistant_ID" : 94,
  "notes" : "Buy 3, get 1 free!!",
  "quantity" : 12,
  "branch_ID" : 1003,
  "item_ID" : 1443,
  "authorised" : false,
  "time" : 1482077977000,
  "message" : "Sale not authorised by policy task MorningBoozeCheckAlt1 for time 17:19:37 CET. Alcohol can not be sold between 00:00:00 CET and 13:00:00 CET or on Sunday"
}

Request to buy alcohol (item_ID=1443) at 17:19:37 on Sunday, 18 December 2016. Sale is not authorized. Uses the MorningBoozeCheckAlt1 task.

{
  "nameSpace": "com.hyperm",
  "name": "SALE_INPUT",
  "version": "0.0.1",
  "time": 1483351989000,
  "sale_ID": 99999983,
  "amount": 699,
  "item_ID": 5321,
  "quantity": 1,
  "assistant_ID": 2323,
  "branch_ID": 1001,
  "notes": ""
}
{
  "nameSpace" : "com.hyperm",
  "name" : "SALE_AUTH",
  "version" : "0.0.1",
  "source" : "",
  "target" : "",
  "sale_ID" : 99999983,
  "amount" : 699,
  "assistant_ID" : 2323,
  "notes" : "",
  "quantity" : 1,
  "branch_ID" : 1001,
  "item_ID" : 5321,
  "authorised" : true,
  "time" : 1483351989000,
  "message" : "Sale authorised by policy task MorningBoozeCheckAlt1 for time 11:13:09 CET"
}

Request to buy non-alcoholic item (item_ID=5321) at 11:13:09 on Monday, 2 January 2017. Sale is authorized. Uses the MorningBoozeCheckAlt1 task.

CLI Editor File

Policy 2 in CLI Editor

An equivalent version of the MyFirstPolicyModel policy model can again be generated using the APEX CLI editor. A sample APEX CLI script is shown below:

Policy-controlled Video Streaming (pcvs) with APEX

Introduction

This module contains several demos for Policy-controlled Video Streaming (PCVS). Each demo defines a policy using Avro and JavaScript (or other scripting languages for the policy logic). To run the demos, a vanilla Ubuntu server with some extra software packages is required:

  • Mininet as network simulator

  • Floodlight as SDN controller

  • Kafka as messaging system

  • Zookeeper for Kafka configuration

  • APEX for policy control

Install Ubuntu Server and SW

Install Demo

Requirements (approximate disk space):

  • Ubuntu server: 1.4 GB

  • Ubuntu with Xubuntu Desktop, git, Firefox: 2.3 GB

  • Ubuntu with all, system updated: 3 GB

  • With ZK, Kafka, VLC, Mininet, Floodlight, Python: 4.4 GB

  • APEX Build (M2 and built): M2 ~ 2 GB, APEX ~3.5 GB

  • APEX install (not build locally): ~ 300 MB

On an Ubuntu OS (install a stable or LTS server first):

# pre for Ubuntu, tools and X
sudo apt-get  -y install --no-install-recommends software-properties-common
sudo apt-get  -y install --no-install-recommends build-essential
sudo apt-get  -y install --no-install-recommends git
sudo aptitude -y install --no-install-recommends xubuntu-desktop
sudo apt-get  -y install --no-install-recommends firefox


# install Java
sudo add-apt-repository ppa:webupd8team/java
sudo apt-get update
sudo apt-get -y install --no-install-recommends oracle-java8-installer
java -version


# reboot system, run system update, then continue

# if VBox additions are needed, install and reboot
(cd /usr/local/share; sudo wget https://www.virtualbox.org/download/testcase/VBoxGuestAdditions_5.2.7-120528.iso)
sudo mount /usr/local/share/VBoxGuestAdditions_5.2.7-120528.iso /media/cdrom
(cd /media/cdrom; sudo sh ./VBoxLinuxAdditions.run)


# update apt-get DB
sudo apt-get update

# if APEX is build from source, install maven and rpm
sudo apt-get install maven rpm

# install ZooKeeper
sudo apt-get install zookeeperd

# install Kafka
(cd /tmp;wget http://ftp.heanet.ie/mirrors/www.apache.org/dist/kafka/1.0.0/kafka_2.12-1.0.0.tgz --show-progress)
sudo mkdir /opt/Kafka
sudo tar -xvf /tmp/kafka_2.12-1.0.0.tgz -C /opt/Kafka/

# install mininet
cd /usr/local/src
sudo git clone https://github.com/mininet/mininet.git
(cd mininet;util/install.sh -a)

# install floodlight, requires ant
sudo apt-get install ant
cd /usr/local/src
sudo wget --no-check-certificate https://github.com/floodlight/floodlight/archive/master.zip
sudo unzip master.zip
cd floodlight-master
sudo ant
sudo mkdir /var/lib/floodlight
sudo chmod 777 /var/lib/floodlight

# install python pip
sudo apt-get install python-pip

# install kafka-python (need newer version from github)
cd /usr/local/src
sudo git clone https://github.com/dpkp/kafka-python
sudo pip install ./kafka-python

# install vlc
sudo apt-get install vlc

Install APEX either from source or from a distribution package. See the APEX documentation for details. We assume that APEX is installed in /opt/ericsson/apex/apex.

Copy the LinkMonitor file to Kafka-Python

sudo cp /opt/ericsson/apex/apex/examples/scripts/pcvs/vpnsla/LinkMonitor.py /usr/local/src/kafka-python

Change the Logback configuration in APEX to logic logging

(cd /opt/ericsson/apex/apex/etc; sudo cp logback-logic.xml logback.xml)

Get the Demo Video

sudo mkdir /usr/local/src/videos

Standard 480p (recommended)

(cd /usr/local/src/videos; sudo curl -o big_buck_bunny_480p_surround.avi http://download.blender.org/peach/bigbuckbunny_movies/big_buck_bunny_480p_surround-fix.avi)

Full HD video

(cd /usr/local/src/videos; sudo curl -o bbb_sunflower_1080p_60fps_normal.mp4 http://distribution.bbb3d.renderfarming.net/video/mp4/bbb_sunflower_1080p_60fps_normal.mp4)

VPN SLA Demo

This demo uses a network with several central office and core switches, over which two VPNs are run. Customer A has two locations, A1 and A2, with a VPN between them. Customer B has two locations, B1 and B2, with a VPN between them.

VPN SLA Architecture

The architecture above shows the scenario. The components are realized in this demo as follows:

  • CEP / Analytics - a simple Python script taking events from Kafka and sending them to APEX

  • APEX / Policy - the APEX engine running the VPN SLA policy

  • Controller - A vanilla Floodlight controller taking events from the Link Monitor and configuring Mininet

  • Network - A network created using Mininet

The demo requires several software components to be started (detailed below). To show actual video streams, we use VLC. If you do not want to show video streams, but only the policy, skip the VLC section.

All shown scripts are available in a full APEX installation in $APEX_HOME/examples/scripts/pcvs/vpnsla.

Start all Software

Create environment variables in a file, say env.sh. In each new Xterm

  • Source these environment settings, e.g. . ./env.sh

  • Run the commands below as root (sudo per command or sudo -i for interactive mode as shown below)

#!/usr/bin/env bash

export src_dir=/usr/local/src
export APEX_HOME=/opt/ericsson/apex/apex
export APEX_USER=apexuser

In a new Xterm, start Floodlight

sudo -i
. ./env.sh
cd $src_dir/floodlight-master && java -jar target/floodlight.jar

In a new Xterm start Mininet

sudo -i
. ./env.sh
mn -c && python $APEX_HOME/examples/scripts/pcvs/vpnsla/MininetTopology.py

In a new Xterm, start Kafka

sudo -i
. ./env.sh
/opt/Kafka/kafka_2.12-1.0.0/bin/kafka-server-start.sh /opt/Kafka/kafka_2.12-1.0.0/config/server.properties

In a new Xterm, start APEX with the Kafka configuration for this demo

cd $APEX_HOME
./bin/apexApps.sh engine -c examples/config/pcvs/vpnsla/kafka2kafka.json

In a new Xterm, start the Link Monitor. The Link Monitor sleeps for 30 seconds between actions to slow down the demonstration, so its first action occurs 30 seconds after start and a new action follows every 30 seconds.

sudo -i
. ./env.sh
cd $src_dir
xterm -hold -e 'python3 $src_dir/kafka-python/LinkMonitor.py' &

Now all software should be started and the demo is running. The Link Monitor will send link up events, picked up by APEX which triggers the policy. Since there is no problem, the policy will do nothing.

Create 2 Video Streams with VLC

In the Mininet console, type xterm A1 A2 and xterm B1 B2 to open terminals on these nodes.

A2 and B2 are the receiving nodes. In these terminals, run vlc-wrapper. In each opened VLC window do

  • Click Media → Open Network Stream

  • Give the URL as rtp://@:5004

A1 and B1 are the sending nodes (sending the video stream). In these terminals, run vlc-wrapper. In each opened VLC window do

  • Click Media → Stream

  • Add the video (from /usr/local/src/videos)

  • Click Stream

  • Click Next

  • Change the destination to RTP / MPEG Transport Stream and click Add

  • Change the address and type to 10.0.0.2 in A1 and to 10.0.0.4 in B1

  • Turn off Active Transcoding (this is important to minimize CPU load)

  • Click Next

  • Click Stream

The video should be streaming across the network from A1 to A2 and from B1 to B2. If the video streams are slow or interrupted, the CPU load is too high. In these cases either try a better machine or use a different (lower quality) video stream.

Take out L09 and let the Policy do its Magic

Now it is time to take out the link L09. This will be picked up by the Link Monitor, which sends a new event (L09 DOWN) to the policy. The policy then calculates which customer should be impeded (throttled). This continues until SLAs are violated; then a priority calculation kicks in (Customer A is prioritized in this setup).

To initiate this, simply type link s5 s6 down in the Mininet console followed by exit.

If you have the video streams running, you will see one or the other struggling, depending on the policy decision.

Reset the Demo

If you want to reset the demo, simply stop the following processes (in this order):

  • Link Monitor

  • APEX

  • Mininet

  • Floodlight

Then restart them in this order

  • Floodlight

  • Mininet

  • APEX

  • Link Monitor

Monitor the Demo

Floodlight and APEX provide REST interfaces for monitoring.

  • Floodlight: see the Floodlight Docs for details on how to access the monitoring. In a standard installation as used here, pointing a browser to the URL http://localhost:8080/ui/pages/index.html should work on the same host

  • APEX: see the APEX documentation on the Monitoring Client or Full Client for details on how to monitor APEX.

VPN SLA Policy

The VPN SLA policy is designed as a MEDA policy. The first state (M = Match) takes the trigger event (a link up or down) and checks if this is a change to the known topology. The second state (E = Establish) takes all available information (trigger event, local context) and defines what situation we have. The third state (D = Decide) takes the situation and selects which algorithm is best to process it. This state can select between none (nothing to do), solved (a problem is solved now), sla (compare the current customer SLA situation and select one to impede), and priority (impede non-priority customers). The fourth and final state (A = Act) selects the right action for the taken decision and creates the response event sent to the orchestrator.

We have added three more policies to set the local context: one for adding nodes, one for adding edges (links), and one for adding customers. These policies do not realize any action; they are only here to update the local context. This mechanism is the fastest way to update local context, and it is independent of any context plugin.

The policy uses data defined in Avro, so we have a number of Avro schema definitions.

Context Schemas

The context schemas are for the local context. We model edges and nodes for the topology, customers, and problems with all information on detected problems.

Trigger Schemas

The trigger event provides a status as UP or DOWN. To avoid tests for these strings in the logic, we defined an Avro schema for an enumeration (AVRO Schema Link Status). This does not impact the trigger system (it can still send the strings), but makes the task logic simpler.

Context Logic Nodes

The node context logic simply takes the trigger event (for context) and creates a new node in the local context topology (Logic Node Context).

Context Logic Edges

The edge context logic simply takes the trigger event (for context) and creates a new edge in the local context topology (Logic Edge Context).

Context Logic Customer

The customer context logic simply takes the trigger event (for context) and creates a new customer in the local context topology (Logic Customer Context).
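
All three context logics follow the same pattern: read the incoming (context) event and write an entry into a local context album. As a minimal sketch for the node case, assuming the standard APEX executor bindings, where the album and field names here are illustrative rather than the actual ones used in the example:

var returnValue = true;

// Hypothetical album and field names, for illustration only
var nodeAlbum = executor.getContextAlbum("TopologyNodesAlbum");
var nodeName = executor.inFields.get("nodeName");

// Create or overwrite the node entry in the local context topology
nodeAlbum.put(nodeName, executor.inFields.get("node"));

returnValue;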

Logic: Match

This is the logic for the match state. It is kept very simple. Besides taking the trigger event, it also creates a timestamp. This timestamp is later used for SLA and downtime calculations, as well as for some performance information of the policy. Sample Logic Policy Match State
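
Sketched with illustrative (hypothetical) field names, the match logic amounts to little more than:

var returnValue = true;

// Pass the trigger information through and stamp the processing start time
// (field names here are illustrative only)
executor.outFields.put("linkName", executor.inFields.get("linkName"));
executor.outFields.put("linkStatus", executor.inFields.get("linkStatus"));
executor.outFields.put("matchStart", java.lang.System.currentTimeMillis());

returnValue;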

Logic: Policy Establish State

This is the logic for the establish state. It is the most complicated logic, since establishing a situation for a decision is the most important part of any policy. First, the policy describes what we find (the switch block), in terms of 8 normal situations and 1 extreme error case.

If required, it creates local context information for the problem (if it is new) or updates it (if the problem still exists). It also calculates customer SLA downtime and checks for any SLA violations. Finally, it creates a situation object. Sample Logic Policy Establish State

Logic: Policy Decide State

The decide state can select between different algorithms depending on the situation, so it needs a Task Selection Logic (TSL). The TSL selects a task in the current policy execution (i.e. potentially a different one per execution). Sample JS Logic Policy Decide State - TSL

The actual task logics are then none, solved, sla, and priority. Sample task logic is given below:
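
Purely as an illustration of the selection idea (the four task names come from the description above; the situation fields problemExists, problemSolved, and slaViolated are hypothetical names, not the actual schema), the TSL could be sketched as:

var returnValue = true;

// 'situation' stands in for the Establish state's output object
var situation = executor.inFields.get("situation");

if (!situation.get("problemExists")) {
    executor.subject.getTaskKey("none").copyTo(executor.selectedTask);
} else if (situation.get("problemSolved")) {
    executor.subject.getTaskKey("solved").copyTo(executor.selectedTask);
} else if (situation.get("slaViolated")) {
    executor.subject.getTaskKey("priority").copyTo(executor.selectedTask);
} else {
    executor.subject.getTaskKey("sla").copyTo(executor.selectedTask);
}

returnValue;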

Logic: Policy Act State

This is the logic for the act state. It simply selects an action and creates the response event for the orchestrator (the output of the policy). Sample Logic Policy Act State

CLI Spec

Complete Policy Definition

The complete policy definition is realized using the APEX CLI Editor. The script below shows the actual policy specification. All logic and schemas are included (as macro file). Sample APEX VPN SLA Policy Specification

Context Events Nodes

The following events create all nodes of the topology.

Context Events Edges

The following events create all edges of the topology.

Context Events Customers

The following events create all customers of the topology.

Trigger Examples

The following events are examples of trigger events.

Mininet Topology

The topology is realized using Mininet. This script is used to establish the topology and to realize network configurations. Sample Mininet Topology

APEX Examples Decision Maker

Sample APEX Policy in TOSCA format

An example APEX policy in TOSCA format for the vCPE use case can be found here:

My First Policy

A good starting point is the My First Policy example. It describes a sales problem, to which policy can be applied. The example details the policy background, shows how to use the REST Editor to create a policy, and provides details for running the policies. The documentation can be found:

VPN SLA

The domain Policy-controlled Video Streaming (PCVS) contains a policy for controlling video streams with different strategies. It also provides details for installing an actual testbed with off-the-shelf software (Mininet, Floodlight, Kafka, Zookeeper). The policy model here demonstrates virtually all APEX features: local context and policies controlling it, task selection logic and multiple tasks in a single state, Avro schemas for context, Avro schemas for events (trigger and local), and a CLI editor specification of the policy. The documentation can be found:

Decision Maker

The domain Decision Maker shows a very simple policy for decisions. Interesting here is that it creates a Docker image to run the policy and that it uses the APEX REST applications to update the policy on the fly. It also has local context to remember past decisions, and shows how to use that to avoid making the same decision twice in a row. The documentation can be found:

Policy Distribution Component

Introduction to Policy Distribution

The main job of the policy distribution component is to receive incoming notifications, download artifacts, decode policies from the downloaded artifacts, and forward the decoded policies to all configured policy forwarders.


The current implementation of the distribution component comes with a built-in SDC reception handler for receiving incoming distribution notifications from SDC using the SDC client library. Upon receiving a notification, the corresponding CSAR artifacts are downloaded using the SDC client library. The downloaded CSAR is then given to the configured policy decoder for decoding and generating policies. The generated policies are then forwarded to all configured policy forwarders. The related distribution status is sent to SDC at each step (download/deploy/done) during the entire flow.


The distribution component also comes with built-in REST endpoints for fetching the health check status and statistical data of the running distribution system.


The distribution component is designed using a plugin-based architecture. All the handlers, decoders, and forwarders are basically plugins to the running distribution engine. The plugins are configured in the configuration JSON file provided during startup of the distribution engine. Adding a new plugin simply means implementing the related interfaces, adding them to the configuration JSON file, and making the classes available on the classpath when starting the distribution engine. There is no need to edit anything in the distribution core engine. Refer to the distribution user manual for more details about the system and its configuration.

Policy Distribution User Manual

Installation

Requirements

Distribution is 100% written in Java and runs on any platform that supports a JVM, e.g. Windows, Unix, Cygwin.

Installation Requirements
  • Downloaded distribution: JAVA runtime environment (JRE, Java 11, Distribution is tested with the OpenJDK)

  • Building from source: JAVA development kit (JDK, Java 11, Distribution is tested with the OpenJDK)

  • Sufficient rights to install Distribution on the system

  • Installation tools

    • TAR and GZ to extract from that TAR.GZ distribution

      • Windows for instance 7Zip

    • Docker to run Distribution inside a Docker container

Build (Install from Source) Requirements

Installation from source requires a few development tools

  • GIT to retrieve the source code

  • Java SDK, Java version 8 or later

  • Apache Maven 3 (the Distribution build environment)

Get the Distribution Source Code

The Distribution source code is hosted in ONAP as project distribution. The current stable version is in the master branch. Simply clone the master branch from ONAP using HTTPS.

git clone https://gerrit.onap.org/r/policy/distribution
Build Distribution

The examples in this document assume that the distribution source repositories are cloned to:

  • Unix, Cygwin: /usr/local/src/distribution

  • Windows: C:\dev\distribution

  • Cygwin: /cygdrive/c/dev/distribution

Important

A Build requires ONAP Nexus. Distribution has a dependency on ONAP parent projects. You might need to adjust your Maven M2 settings. The most current settings can be found in the ONAP oparent repo: Settings.

Important

A Build needs Space. Building distribution requires approximately 1-2 GB of hard disk space: 100 MB for the actual build with full distribution and around 1 GB for the downloaded dependencies.

Important

A Build requires Internet (for the first build). During the build, a large number of Maven dependencies are downloaded and stored in the configured local Maven repository. The first standard build (and any first specific build) requires Internet access to download those dependencies.

Use Maven for a standard build without any tests.

Unix, Cygwin

Windows

# cd /usr/local/src/distribution
# mvn clean install -DskipTests
>c:
>cd \dev\distribution
>mvn clean install -DskipTests

The build takes 2-3 minutes on a standard development laptop. It should run through without errors, but with a lot of messages from the build process.


When Maven is finished with the build, the final screen should look similar to this (omitting some success lines):

[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO]
[INFO] policy-distribution ................................ SUCCESS [  3.666 s]
[INFO] distribution-model ................................. SUCCESS [ 11.234 s]
[INFO] forwarding ......................................... SUCCESS [  7.611 s]
[INFO] reception .......................................... SUCCESS [  7.072 s]
[INFO] main ............................................... SUCCESS [ 21.017 s]
[INFO] plugins ............................................ SUCCESS [  0.453 s]
[INFO] forwarding-plugins ................................. SUCCESS [01:20 min]
[INFO] reception-plugins .................................. SUCCESS [ 18.545 s]
[INFO] Policy Distribution Packages ....................... SUCCESS [  0.419 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:39 min
[INFO] Finished at: 2018-11-15T13:59:09Z
[INFO] Final Memory: 73M/1207M
[INFO] ------------------------------------------------------------------------

The build will have created all artifacts required for distribution installation. The following example shows how to change to the target directory and how it should look.

Unix, Cygwin

-rw-r--r-- 1 user 1049089    10616 Oct 31 13:35 checkstyle-checker.xml
-rw-r--r-- 1 user 1049089      609 Oct 31 13:35 checkstyle-header.txt
-rw-r--r-- 1 user 1049089      245 Oct 31 13:35 checkstyle-result.xml
-rw-r--r-- 1 user 1049089       89 Oct 31 13:35 checkstyle-cachefile
drwxr-xr-x 1 user 1049089        0 Oct 31 13:35 maven-archiver/
-rw-r--r-- 1 user 1049089     7171 Oct 31 13:35 policy-distribution-tarball-2.0.1-SNAPSHOT.jar
drwxr-xr-x 1 user 1049089        0 Oct 31 13:35 archive-tmp/
-rw-r--r-- 1 user 1049089 66296012 Oct 31 13:35 policy-distribution-tarball-2.0.1-SNAPSHOT-tarball.tar.gz
drwxr-xr-x 1 user 1049089        0 Nov 12 10:56 test-classes/
drwxr-xr-x 1 user 1049089        0 Nov 20 14:31 classes/

Windows

11/12/2018  10:56 AM    <DIR>          .
11/12/2018  10:56 AM    <DIR>          ..
10/31/2018  01:35 PM    <DIR>          archive-tmp
10/31/2018  01:35 PM                89 checkstyle-cachefile
10/31/2018  01:35 PM            10,616 checkstyle-checker.xml
10/31/2018  01:35 PM               609 checkstyle-header.txt
10/31/2018  01:35 PM               245 checkstyle-result.xml
11/20/2018  02:31 PM    <DIR>          classes
10/31/2018  01:35 PM    <DIR>          maven-archiver
10/31/2018  01:35 PM        66,296,012 policy-distribution-tarball-2.0.1-SNAPSHOT-tarball.tar.gz
10/31/2018  01:35 PM             7,171 policy-distribution-tarball-2.0.1-SNAPSHOT.jar
11/12/2018  10:56 AM    <DIR>          test-classes
Install Distribution

Distribution can be installed in different ways:

  • Windows, Unix, Cygwin: manually from a .tar.gz archive

  • Windows, Unix, Cygwin: build from source using Maven, then install manually

Install Manually from Archive (Windows, 7Zip, GUI)

Download a tar.gz archive and copy the file into the install folder (in this example C:\distribution). Assuming you are using 7Zip, right click on the file and extract the tar archive.


Extract the TAR archive

Then right-click on the newly created TAR file and extract the actual distribution.


Extract the distribution

Inside the new distribution folder you see the main directories: bin, etc, and lib


Once extracted, please rename the created folder to distribution-full-2.0.2-SNAPSHOT. This will keep the directory name in line with the rest of this documentation.

Build from Source
Build and Install Manually (Unix, Windows, Cygwin)

Clone the Distribution GIT repositories into a directory. Go to that directory. Use Maven to build Distribution (all details on building Distribution from source can be found in Distribution HowTo: Build).

Now, take the .tar.gz file and install distribution.

Installation Layout

A full installation of distribution comes with the following layout.

  • bin

  • etc

  • lib

Running Distribution in Docker
Run in ONAP

Running distribution from the ONAP docker repository only requires 2 commands:

  1. Log into the ONAP docker repo

docker login -u docker -p docker nexus3.onap.org:10003
  2. Run the distribution docker image

docker run -it --rm  nexus3.onap.org:10003/onap/policy-distribution:latest
Build a Docker Image

Alternatively, one can use the Dockerfile defined in the Docker package to build an image.

Distribution Configurations Explained

Introduction to Distribution Configuration

A distribution engine can be configured to use various combinations of policy reception handlers, policy decoders and policy forwarders. The system is built using a plugin architecture. Each configuration option is realized by a plugin, which can be loaded and configured when the engine is started. New plugins can be added to the system at any time, though to benefit from a new plugin, an engine will need to be restarted.


The distribution already comes with an SDC reception handler, a file reception handler, an HPA optimization policy decoder, a file-in-CSAR policy decoder, and a policy lifecycle API forwarder.

General Configuration Format

The distribution configuration file is a JSON file containing a few main blocks for different parts of the configuration. Each block then holds the configuration details. The following code shows the main blocks:

{
  "restServerParameters":{
    ... (1)
  },
  "receptionHandlerParameters":{ (2)
    "pluginHandlerParameters":{ (3)
      "policyDecoders":{...}, (4)
      "policyForwarders":{...} (5)
    }
  },
  "receptionHandlerConfigurationParameters":{
    ... (6)
  }
  ,
  "policyForwarderConfigurationParameters":{
    ... (7)
  }
  ,
  "policyDecoderConfigurationParameters":{
    ... (8)
  }
}

(1) rest server configuration
(2) reception handler plugin configurations
(3) plugin handler parameters configuration
(4) policy decoder plugin configuration
(5) policy forwarder plugin configuration
(6) reception handler plugin parameters
(7) policy forwarder plugin parameters
(8) policy decoder plugin parameters

A configuration example

The following example loads the HPA use case and general TOSCA policy related plug-ins.

Notifications are consumed from SDC through the SDC client. The consumed artifact format is CSAR.

Generated policies are forwarded to the policy lifecycle APIs for creation and deployment.

{
    "name":"SDCDistributionGroup",
    "restServerParameters":{
        "host":"0.0.0.0",
        "port":6969,
        "userName":"healthcheck",
        "password":"zb!XztG34"
      },
    "receptionHandlerParameters":{
         "SDCReceptionHandler":{
            "receptionHandlerType":"SDC",
            "receptionHandlerClassName":"org.onap.policy.distribution.reception.handling.sdc.SdcReceptionHandler",
                "receptionHandlerConfigurationName":"sdcConfiguration",
            "pluginHandlerParameters":{
                "policyDecoders":{
                    "ToscaPolicyDecoder":{
                        "decoderType":"ToscaPolicyDecoder",
                        "decoderClassName":"org.onap.policy.distribution.reception.decoding.policy.file.PolicyDecoderFileInCsarToPolicy",
                        "decoderConfigurationName": "toscaPolicyDecoderConfiguration"
                    }
                },
                "policyForwarders":{
                    "LifeCycleApiForwarder":{
                        "forwarderType":"LifeCycleAPI",
                        "forwarderClassName":"org.onap.policy.distribution.forwarding.lifecycle.api.LifecycleApiPolicyForwarder",
                        "forwarderConfigurationName": "lifecycleApiConfiguration"
                    }
                }
            }
        }
    },
    "receptionHandlerConfigurationParameters":{
        "sdcConfiguration":{
            "parameterClassName":"org.onap.policy.distribution.reception.handling.sdc.SdcReceptionHandlerConfigurationParameterGroup",
            "parameters":{
                "asdcAddress": "sdc-be.onap:8443",
                "messageBusAddress": [
                "message-router.onap"
                 ],
                "user": "policy",
                "password": "Kp8bJ4SXszM0WXlhak3eHlcse2gAw84vaoGGmJvUy2U",
                "pollingInterval":20,
                "pollingTimeout":30,
                "consumerId": "policy-id",
                "artifactTypes": [
                "TOSCA_CSAR",
                "HEAT"
                ],
                "consumerGroup": "policy-group",
                "environmentName": "AUTO",
                "keyStorePath": "null",
                "keyStorePassword": "null",
                "activeserverTlsAuth": false,
                "isFilterinEmptyResources": true,
                "isUseHttpsWithDmaap": true
            }
        }
    },
    "policyDecoderConfigurationParameters":{
        "toscaPolicyDecoderConfiguration":{
            "parameterClassName":"org.onap.policy.distribution.reception.decoding.policy.file.PolicyDecoderFileInCsarToPolicyParameterGroup",
            "parameters":{
                "policyFileName": "tosca_policy",
                "policyTypeFileName": "tosca_policy_type"
            }
        }
    },
    "policyForwarderConfigurationParameters":{
        "lifecycleApiConfiguration": {
            "parameterClassName": "org.onap.policy.distribution.forwarding.lifecycle.api.LifecycleApiForwarderParameters",
            "parameters": {
                "apiParameters": {
                    "clientName": "policy-api",
                    "hostname": "policy-api",
                    "port": 6969,
                    "useHttps": true,
                    "userName": "healthcheck",
                    "password": "zb!XztG34"
                },
                "papParameters": {
                    "clientName": "policy-pap",
                    "hostname": "policy-pap",
                    "port": 6969,
                    "useHttps": true,
                    "userName": "healthcheck",
                    "password": "zb!XztG34"
                },
                "deployPolicies": true
            }
        }
    }
}

The Distribution Engine

The Distribution engine can be started using the policy-dist.sh script. The script is located in the source code in the distribution/packages/policy-distribution-docker/src/main/docker directory.


On UNIX and Cygwin systems, use the policy-dist.sh script.


On Windows systems, navigate to the distribution installation directory and run the following command: java -cp "etc;lib\*" org.onap.policy.distribution.main.startstop.Main -c <config-file-path>


The Distribution engine comes with CLI arguments for setting configuration. The configuration file is always required. The option -h prints a help screen.

usage: org.onap.policy.distribution.main.startstop.Main [options...]
options
-c,--config-file <CONFIG_FILE>  the full path to the configuration file to use, the configuration file must be a Json file
                                containing the distribution configuration parameters
-h,--help                       outputs the usage of this command
-v,--version                    outputs the version of distribution system

The Distribution REST End-points

The distribution engine comes with built-in REST endpoints for fetching the health check status and statistical data of the running distribution system.

# Example output from: http -a '{user}:{password}' :6969/healthcheck

  HTTP/1.1 200 OK
  Content-Length: XXX
  Content-Type: application/json
  Date: Tue, 17 Apr 2018 10:51:14 GMT
  Server: Jetty(9.3.20.v20170531)
  {
       "code":200,
       "healthy":true,
       "message":"alive",
       "name":"Policy SSD",
       "url":"self"
  }

# Example output from: http -a '{user}:{password}' :6969/statistics

  HTTP/1.1 200 OK
  Content-Length: XXX
  Content-Type: application/json
  Date: Tue, 17 Apr 2018 10:51:14 GMT
  Server: Jetty(9.3.20.v20170531)
  {
       "code":200,
       "distributions":10,
       "distribution_complete_ok":8,
       "distribution_complete_fail":2,
       "downloads":15,
       "downloads_ok"; 10,
       "downloads_error": 5
  }

Policy/CLAMP - Control Loop Automation Management Platform

CLAMP handles control loops in two ways, either using built in ONAP Control Loop support or using Control Loops defined in metadata using TOSCA. The sections below describe both ways of managing control loops.

Policy/CLAMP - Control Loop Automation Management Platform

CLAMP supports the definition, deployment, and life cycle management of control loops.

Policy/Clamp in the ONAP Architecture

The CLAMP platform has been integrated into the Policy Framework project, starting as a POC during the Honolulu release and becoming an official feature with the Istanbul release. CLAMP must therefore now be seen as a function provided by the Policy Framework project.

CLAMP is a function for designing and managing control loops, and a UI to manage Policies. It is used to visualize a control loop, configure it with specific parameters for a particular network service, and then deploy and undeploy it. Once deployed, the user can also update the loop with new parameters during runtime, as well as suspend and restart it.

Starting with Istanbul release, CLAMP GUI can also be used to create/read/update/delete/list policies outside of a control loop, and therefore CLAMP is also the front-end for Policies management in ONAP.

CLAMP interacts with other systems to deploy and execute the control loop. For example, it extracts the control loop blueprint from the CSAR distributed by SDC. CLAMP also calls the internal Policy Framework API to get the Policy Models (Model Driven Control Loop). It requests from DCAE the instantiation of microservices to manage the control loop flow. Furthermore, it creates and updates multiple policies (for DCAE mS configuration and actual Control Operations) in the Policy Engine that define the closed loop flow.

clamp-flow

The ONAP CLAMP function abstracts the details of these systems under the concept of a control loop model. The design of a control loop and its management is represented by a workflow in which all relevant system interactions take place. This is essential for a self-service model of creating and managing control loops, where no low-level user interaction with other components is required.

At a higher level, CLAMP is about supporting and managing the broad operational life cycle of VNFs/VMs and ultimately of the ONAP components themselves. It offers the ability to design, test, deploy, and update control loop automation - both closed and open. Automating these functions represents a significant saving on operational costs compared to traditional methods.

closed-loop

Policy/CLAMP - Design, and Packaging Information

This page describes design and packaging information for release planning and delivery.

Offered APIs

The list of APIs that Policy/CLAMP offers can be found here:

  • swagger json file: link

  • html doc: link

  • pdf doc: link

Consumed APIs

Policy/CLAMP uses the APIs exposed by the following ONAP components:

  • SDC : REST based interface exposed by the SDC, Distribution of service to DCAE

  • DCAE: REST based interface exposed by DCAE, Common Controller Framework, DCAE microservices onboarded (TCA, Stringmatch, Holmes (optional))

  • Policy Core: REST based interface, Policy Core engine target both XACML and Drools PDP, Policy Engine trigger operations to App-C/VF-C/SDN-C

  • CDS: REST based interface, to retrieve list of operations/actions with their corresponding payload at runtime for Operational Policies where the field ‘actor’ is ‘CDS’.

Delivery

The Policy/CLAMP component is composed of a UI layer and a backend layer, each with its own container. Policy/CLAMP also requires a database instance with one DB; it uses MariaDB, the same DB as core Policy.

clamp-policy-archi

Logging & Diagnostic Information

CLAMP uses the logback framework to generate logs. The logback.xml file can be found under the src/main/resources folder.

With the default log settings, all logs are written to the console and to the root.log file under the CLAMP root folder. The root.log file is not appended to, so restarting CLAMP cleans out the old log file.

Installation

A docker-compose example file, docker-compose.yml, can be found under the extra/docker/clamp/ folder.

Once the image has been built and is available locally, you can use the docker-compose up command to deploy a pre-populated database and a CLAMP instance available on https://localhost:3000.

Configuration

Currently, the CLAMP docker images can be deployed with minimal configuration, though you might need to make small adjustments. As CLAMP is Spring based, you can use the SPRING_APPLICATION_JSON environment variable to update its parameters.

There is one datasource for CLAMP. By default, it tries to connect to the localhost server using the credentials available in the example SQL files. If you need to change the default database host and/or credentials, you can do so by using the following JSON as the SPRING_APPLICATION_JSON environment variable:

{
    "spring.datasource.cldsdb.url": "jdbc:mariadb:sequential://clampdb.{{ include "common.namespace" . }}:3306/cldsdb4?autoReconnect=true&connectTimeout=10000&socketTimeout=10000&retriesAllDown=3",
    "clamp.config.files.sdcController": "file:/opt/clamp/sdc-controllers-config.json",
    "clamp.config.dcae.inventory.url": "https://inventory.{{ include "common.namespace" . }}:8080",
    "clamp.config.dcae.dispatcher.url": "https://deployment-handler.{{ include "common.namespace" . }}:8443",
    "clamp.config.dcae.deployment.url": "https://deployment-handler.{{ include "common.namespace" . }}:8443",
    "clamp.config.dcae.deployment.userName": "none",
    "clamp.config.dcae.deployment.password": "none",
    "clamp.config.policy.api.url": "https://policy-api.{{ include "common.namespace" . }}:6969",
    "clamp.config.policy.api.userName": "healthcheck",
    "clamp.config.policy.api.password": "zb!XztG34",
    "clamp.config.policy.pap.url": "https://policy-pap.{{ include "common.namespace" . }}:6969",
    "clamp.config.policy.pap.userName": "healthcheck",
    "clamp.config.policy.pap.password": "zb!XztG34",
    "clamp.config.cadi.aafLocateUrl": "https://aaf-locate.{{ include "common.namespace" . }}:8095",
    "com.att.eelf.logging.path": "/opt/clamp",
    "com.att.eelf.logging.file": "logback.xml"
}
SDC-Controllers config

This file is a JSON file that must be specified in the Spring config; here is an example:

{
 "sdc-connections":{
   "sdc-controller1":{
       "user": "clamp",
       "consumerGroup": "consumerGroup1",
       "consumerId": "consumerId1",
       "environmentName": "AUTO",
       "sdcAddress": "localhost:8443",
       "password": "b7acccda32b98c5bb7acccda32b98c5b05D511BD6D93626E90D18E9D24D9B78CD34C7EE8012F0A189A28763E82271E50A5D4EC10C7D93E06E0A2D27CAE66B981",
       "pollingInterval":30,
       "pollingTimeout":30,
       "activateServerTLSAuth":"false",
       "keyStorePassword":"",
       "keyStorePath":"",
       "messageBusAddresses":["dmaaphost.com"]
   },
   "sdc-controller2":{
       "user": "clamp",
       "consumerGroup": "consumerGroup1",
       "consumerId": "consumerId1",
       "environmentName": "AUTO",
       "sdcAddress": "localhost:8443",
       "password": "b7acccda32b98c5bb7acccda32b98c5b05D511BD6D93626E90D18E9D24D9B78CD34C7EE8012F0A189A28763E82271E50A5D4EC10C7D93E06E0A2D27CAE66B981",
       "pollingInterval":30,
       "pollingTimeout":30,
       "activateServerTLSAuth":"false",
       "keyStorePassword":"",
       "keyStorePath":"",
       "messageBusAddresses":["dmaaphost.com"]
   }
 }
}

Multiple controllers can be configured so that Clamp is able to receive notifications from different SDC servers. Each Clamp instance in a cluster should have a different consumerGroup and consumerId so that each can consume the SDC notifications. The environmentName is normally the DMaaP topic used by SDC. If the sdcAddress is not specified or not available (connection failure), the messageBusAddresses (DMaaP servers) are used.

Administration

A user can access the Policy/CLAMP UI at the following URL: https://localhost:3000 (replace ‘localhost’ with the actual host where Policy/CLAMP has been installed, if it is not your current localhost). For OOM, the URL is https://<host-ip>:30258.

- Without AAF, the credentials are
  Default username : admin
  Default password : password

- With AAF enabled, the certificate p12 must be added to the browser
  ca path: src/main/resources/clds/aaf/org.onap.clamp.p12, password "China in the Spring"
  Or get it from this page : https://wiki.onap.org/display/DW/Control+Loop+Flows+and+Models+for+Casablanca
Human Interfaces

User Interface - serves to configure control loops. The Policy/CLAMP UI is used to configure the Control Loop designed and distributed by SDC. From that UI it is possible to distribute the configuration policies and control the life-cycle of the DCAE Micro Services. The Policy/CLAMP UI is also used to manage Policies outside of a Control Loop.

The following actions are done using the UI:

  • Design a control loop flow by selecting a predefined template from a list (a template is an orchestration chain of Micro-services, so the template defines how the micro-services of the control loop are chained together)

  • Give values to the configuration parameters of each micro-service of the control loop

  • Select the service and VNF(of that service) to which the control loop will be attached

  • Configure the operational policy(the actual operation resulting from the control loop)

  • Send the “TOSCA” blueprint parameters that will be used by DCAE to start the control loop (The blueprint will be sent first to SDC and SDC will publish it to DCAE)

  • Trigger the deployment of the Control loop in DCAE

  • Control (start/stop) the operation of the control loop in DCAE

HealthCheck API - serves to verify CLAMP status (see the offered APIs section): https://<host-ip>:8443/restservices/clds/v1/healthcheck. This one does not require the certificate.

Walk-through can be found here: https://wiki.onap.org/display/DW/CLAMP+videos

User Guide: Control loop in Policy/CLAMP

There are 2 control loop levels in Policy/CLAMP:

  • Control loop template: This is created from the DCAE blueprint (designed in the DCAE designer), and distributed by SDC to CLAMP.

  • Control loop instance: Based on the template, it represents a physical control loop in the platform related to a service and a VNF.

There is no way to design the microservice components of the control loop from scratch in CLAMP, you can only configure it and manage its life-cycle. For more info on how to design the service in SDC, check this: https://wiki.onap.org/display/DW/CLAMP+videos#CLAMPvideos-DesignpartinSDC

There is a specific menu to view the available Control loop templates.

clamp-template-menu

Each microservice policy and operational policy is related to a Policy Model. Clamp either communicates with the Policy Engine periodically to download the available Policy Models automatically, or the user can upload a Policy Model manually. Policy Model related operations can be found under the Policy Models menu.

clamp-policy-model-menu

Under the Loop Instance menu, there is a list of actions to perform on loops.

clamp-loop-menu

Option Create creates the loop from the templates distributed by SDC.

clamp-create-loop

Option Open opens the saved loops. Once the distributed control loop has been chosen, the control loop is shown to the user.

clamp-open-loop

Option Close will close the current opened loop.

Option Modify opens the window to add/remove different Operational Policies to the loop. Tab Add Operational Policies lists all the available operational policies. Click Add button to add the selected operational policies to the loop.

clamp-add-operational-policies

Tab Remove Operational Policies lists all the operational policies added to the loop. Click Remove button to remove the selected operational policies from the loop.

clamp-remove-operational-policies

Once opened, the user can start configuring the empty control loop using the Closed Loop modeller.

clamp-opened-loop

Loop modeler has 3 main parts:

  1. Loop configuration view

    Visualizes the event flow in the Control Loop. This view is auto-generated by Clamp. To generate it, Clamp parses the DCAE_INVENTORY_BLUEPRINT from the CSAR distributed by SDC. It always consists of VES -> <nodes from blueprint> -> OperationalPolicy. Not all nodes are visualized; only those with type dcae.nodes.* blueprint-node

  2. Loop status

    Visualizes status of opened loop.

  3. Loop logs

    Table with log data of opened loop

Control Loop properties

In the Dublin release, this view shows the deployment parameters of the control loop. To open it, select Properties from the Loop Instance menu.

clamp-menu-prop

This opens a box with a JSON object. It contains the deployment parameters extracted from the DCAE_INVENTORY_BLUEPRINT. It is not recommended to edit this JSON. Each of these parameters should be available in the view shown to deploy the analytic application.

clamp-prop-box

Operational policy properties

Operational policies are added by the user using the Modify window. The configuration view is generated using the Policy Type assigned to the selected operational policy.

To configure operational policies, the user has to click the corresponding operational policy boxes. An example popup dialog for an operational policy looks like:

clamp-op-policy-box-policy

Operations and payloads for the CDS actor are fetched from CDS. Clamp receives the CDS blueprint name and version from the sdnc_model_name and sdnc_model_version properties in the CSAR distributed by SDC, and queries CDS to get the list of operations and payloads for the corresponding CDS blueprint.

clamp-cds-operation

Micro-service policy properties

Boxes between VES and Operational Policies are generated from blueprint. They can be one of ONAP predefined analytic microservices or custom analytics. Each of the boxes is clickable. Microservice configuration view is generated using Policy Type assigned to selected microservice. Clamp by default assumes that microservices have policy type onap.policies.monitoring.cdap.tca.hi.lo.app.

After a microservice box is clicked, CLAMP opens a popup dialog. An example popup dialog for a microservice with the default type looks like this:

clamp-config-policy-tca

The Loop Operations menu lists the operations that can be performed on the loop.

clamp-loop-operation-menu

Submitting the Control Loop to the Policy Engine

The SUBMIT operation can be used to send the configuration to the policy engine. If everything is successful, the status of the policy becomes SENT. CLAMP also shows the corresponding logs in the logs view.

clamp-policy-submitted

After the policies are submitted, they should be visible in the Policy PAP component. Please check the Policy GUI.

Deploy/undeploy the Control Loop to DCAE

Once the configuration has been sent to the policy engine, CLAMP can ask DCAE to DEPLOY the microservice.

This opens a window where the parameters of the DCAE microservice can be configured and tuned. The policy_id is automatically generated by CLAMP in the previous steps.

clamp-deploy-params

Once the loop is deployed on DCAE, the DCAE status goes to MICROSERVICE_INSTALLED_SUCCESSFULLY; the microservice can then be undeployed, stopped, or restarted.

CLAMP Metadata Control Loop Automation Management using TOSCA

CLAMP supports the definition, deployment, and life cycle management of control loops using Metadata described in TOSCA.

TOSCA Defined Control Loops: Architecture and Design

The idea of using control loops to automatically (or autonomously) perform network management has been the subject of much research in the Network Management research community, see this paper for some background. However, it is only with the advent of ONAP that we have a platform that supports control loops for network management. Before ONAP, Control Loops were implemented by hard-coding components together and hard-coding logic into components. ONAP has taken a step forward towards automatic implementation of Control Loops by allowing parameterization of Control Loops that work on the premise that the Control Loops use a set of analytic, policy, and control components connected together in set ways.

The goal of the work is to extend and enhance the current ONAP Control Loop support to provide a complete open-source framework for Control Loops. This will enhance the current support to provide TOSCA based Control Loop definition and development, commissioning and run-time management. The participants that comprise a Control Loop and the metadata needed to link the participants together to create a Control Loop are specified in a standardized way using the OASIS TOSCA modelling language. The TOSCA description is then used to commission, instantiate, and manage the Control Loops in the run time system.

_images/01-controlloop-overview.png
1 Terminology

This section describes the terminology used in the system.

1.1 Control Loop Terminology

Control Loop Type: A definition of a Control Loop in the TOSCA language. This definition describes a certain type of a control loop. The life cycle of instances of a Control Loop Type is managed by CLAMP.

Control Loop Instance: An instance of a Control Loop Type. The life cycle of a Control Loop Instance is managed by CLAMP. A Control Loop Instance is a set of executing elements on which Life Cycle Management (LCM) is executed collectively. For example, a set of microservices may be spawned and executed together to deliver a service. This collection of services is a control loop.

Control Loop Element Type: A definition of a Control Loop Element in the TOSCA language. This definition describes a certain type of Control Loop Element for a control loop in a Control Loop Type.

Control Loop Element Instance: A single entity executing on a participant, with its Life Cycle being managed as part of the overall control loop. For example, a single microservice that is executing as one microservice in a service.

CLAMP Control Loop Runtime: The CLAMP server that holds Control Loop Type definitions and manages the life cycle of Control Loop Instances and their Control Loop Elements in cooperation with participants.

1.2 Participant Terminology

Participant Type: Definition of a type of system or framework that can take part in control loops and a definition of the capabilities of that participant type. A participant advertises its type to the CLAMP Control Loop Runtime.

Participant: A system or framework that takes part in control loops by executing Control Loop Elements in cooperation with the CLAMP Control Loop Runtime. A participant chooses to partake in control loops, to manage Control Loop Elements for CLAMP, and to receive, send and act on LCM messages for the CLAMP runtime.

1.3 Terminology for Properties

Common Properties: Properties that apply to all Control Loop Instances of a certain Control Loop Type and are specified when a Control Loop Type is commissioned.

Instance Specific Properties: Properties that must be specified for each Control Loop Instance and are specified when a Control Loop Instance is Initialized.

1.4 Concepts and their relationships

The UML diagram below shows the concepts described in the terminology sections above and how they are interrelated.

_images/02-controlloop-concepts.png

The Control Loop Definition concepts describe the types of things that are in the system. These concepts are defined at design time and are passed to the runtime in a TOSCA document. The concepts in the Control Loop Runtime are created by the runtime part of the system using the definitions created at design time.

2 Capabilities

We consider the capabilities of Control Loops at Design Time and Run Time.

At Design Time, three capabilities are supported:

  1. Control Loop Element Definition Specification. This capability allows users to define Control Loop Element Types and the metadata that can be used on and configured on a Control Loop Element Type. Users also define the Participant Type that will run the Control Loop Element when it is taking part in a control loop. The post condition of an execution of this capability is that metadata for a Control Loop Element Type is defined in the Control Loop Design Time Catalogue.

  2. Control Loop Element Definition Onboarding. This capability allows external users and systems (such as SDC or DCAE-MOD) to define the metadata that can be used on and configured on a Control Loop Element Type and to define the Participant Type that will run the Control Loop Element when it is taking part in a control loop. The post condition of an execution of this capability is that metadata for a Control Loop Element Type is defined in the Control Loop Design Time Catalogue.

  3. Control Loop Type Definition. This capability allows users and other systems to create Control Loop Type definitions by specifying a set of Control Loop Element Definitions from those that are available in the Control Loop Design Time Catalogue. These Control Loop Elements will work together to form Control Loops. In an execution of this capability, a user specifies the metadata for the Control Loop and specifies the set of Control Loop Elements and their Participant Types. The user also selects the correct metadata sets for each participant in the Control Loop Type and defines the overall Control Loop Type metadata. The user also specifies the Common Property Types that apply to all instances of a control loop type and the Instance Specific Property Types that apply to individual instances of a Control Loop Type. The post condition for an execution of this capability is a Control Loop definition in TOSCA stored in the Control Loop Design Time Catalogue.

Note

Once a Control Loop Definition is commissioned to the Control Loop Runtime and has been stored in the Run Time Inventory, it cannot be further edited unless it is decommissioned.

At Run Time, the following participant related capabilities are supported:

  1. System Pre-Configuration. This capability allows participants to register and deregister with CLAMP. Participants explicitly register with CLAMP when they start. Control Loop Priming is performed on each participant once it registers. The post condition for an execution of this capability is that a participant becomes available (registration) or is no longer available (deregistration) for participation in a control loop.

  2. Control Loop Priming on Participants. A participant is primed to support a Control Loop Type. Priming a participant means that the definition of a control loop and the values of Common Property Types that apply to all instances of a control loop type on a participant are sent to a participant. The participant can then take whatever actions it needs to support the control loop type in question. Control Loop Priming takes place at participant registration and at Control Loop Commissioning. The post condition for an execution of this capability is that all participants in this control loop type are commissioned, that is, they are prepared to run instances of their Control Loop Element types.

At Run Time, the following Control Loop Life Cycle management capabilities are supported:

  1. Control Loop Commissioning: This capability allows version controlled Control Loop Type definitions to be taken from the Control Loop Design Time Catalogue and be placed in the Commissioned Control Loop Inventory. It also allows the values of Common Property Types that apply to all instances of a Control Loop Type to be set. Further, the Control Loop Type is primed on all concerned participants. The post condition for an execution of this capability is that the Control Loop Type definition is in the Commissioned Control Loop Inventory and the Control Loop Type is primed on concerned participants.

  2. Control Loop Instance Life Cycle Management: This capability allows a Control Loop Instance to have its life cycle managed.

    1. Control Loop Instance Creation: This capability allows a Control Loop Instance to be created. The Control Loop Type definition is read from the Commissioned Control Loop Inventory and values are assigned to the Instance Specific Property Types defined for instances of the Control Loop Type in the same manner as the existing CLAMP client does. A Control Loop Instance that has been created but has not yet been instantiated on participants is in state UNINITIALIZED. In this state, the Instance Specific Property Type values can be revised and updated as often as the user requires. The post condition for an execution of this capability is that the Control Loop instance is created in the Instantiated Control Loop Inventory but has not been instantiated on Participants.

    2. Control Loop Instance Update on Participants: Once the user is happy with the property values, the Control Loop Instance is updated on participants and the Control Loop Elements for this Control Loop Instance are initialized or updated by participants using the control loop metadata. The post condition for an execution of this capability is that the Control Loop instance is updated on Participants.

    3. Control Loop State Change: The user can now order the participants to change the state of the Control Loop Instance. If the Control Loop is set to state RUNNING, each participant begins accepting and processing control loop events and the Control Loop Instance is set to state RUNNING in the Instantiated Control Loop inventory. The post condition for an execution of this capability is that the Control Loop instance state is changed on participants.

    4. Control Loop Instance Monitoring: This capability allows Control Loop Instances to be monitored. Users can check the status of Participants, Control Loop Instances, and Control Loop Elements. Participants report their overall status and the status of Control Loop Elements they are running periodically to CLAMP. Clamp aggregates these status reports into an aggregated Control Loop Instance status record, which is available for monitoring. The post condition for an execution of this capability is that Control Loop Instances are being monitored.

    5. Control Loop Instance Supervision: This capability allows Control Loop Instances to be supervised. The CLAMP runtime expects participants to report on Control Loop Elements periodically. The CLAMP runtime checks that periodic reports are received and that each Control Loop Element is in the state it should be in. If reports are missed or if a Control Loop Element is in an incorrect state, remedial action is taken and notifications are issued. The post condition for an execution of this capability is that Control Loop Instances are being supervised by the CLAMP runtime.

    6. Control Loop Instance Removal from Participants: A user can order the removal of a Control Loop Instance from participants. The post condition for an execution of this capability is that the Control Loop instance is removed from Participants.

    7. Control Loop Instance Deletion: A user can order the removal of a Control Loop Instance from the CLAMP runtime. Control Loop Instances that are instantiated on participants cannot be removed from the CLAMP runtime. The post condition for an execution of this capability is that the Control Loop instance is removed from Instantiated Control Loop Inventory.

  3. Control Loop Decommissioning: This capability allows version controlled Control Loop Type definitions to be removed from the Commissioned Control Loop Inventory. A Control Loop Definition that has instances in the Instantiated Control Loop Inventory cannot be removed. The post condition for an execution of this capability is that the Control Loop Type definition is removed from the Commissioned Control Loop Inventory.

Note

The system dialogues for run time capabilities are described in detail on the System Level Dialogues page.

2.1 Control Loop Instance States

When a control loop definition has been commissioned, instances of the control loop can be created, updated, and deleted. The system manages the lifecycle of control loops and control loop elements following the state transition diagram below.

_images/03-controlloop-instance-states.png
3 Overall Target Architecture

The diagram below shows an overview of the architecture of TOSCA based Control Loop Management in CLAMP.

_images/04-overview.png

Following the ONAP Reference Architecture, the architecture has a Design Time part and a Runtime part.

The Design Time part of the architecture allows a user to specify metadata for participants. It also allows users to compose control loops. The Design Time Catalogue contains the metadata primitives and control loop definition primitives for composition of control loops. As shown in the figure above, the Design Time component provides a system where Control Loops can be designed and defined in metadata. This means that a Control Loop can have any arbitrary structure and the Control Loop developers can use whatever analytic, policy, or control participants they like to implement their Control Loop. At composition time, the user parameterises the Control Loop and stores it in the design time catalogue. This catalogue contains the primitive metadata for any participants that can be used to compose a Control Loop. A Control Loop SDK is used to compose a Control Loop by aggregating the metadata for the participants chosen to be used in a Control Loop and by constructing the references between the participants. The architecture of the Control Loop Design Time part will be elaborated in future releases.

Composed Control Loops are commissioned on the run time part of the system, where they are stored in the Commissioned Control Loop inventory and are available for instantiation. The Commissioning component provides a CRUD REST interface for Control Loop Types, and implements CRUD of Control Loop Types. Commissioning also implements validation and persistence of incoming Control Loop Types. It also guarantees the integrity of updates and deletions of Control Loop Types, such as performing updates in accordance with semantic versioning rules and ensuring that deletions are not allowed on Control Loop Types that have instances defined.

The Instantiation component manages the Life Cycle Management of Control Loop Instances and their Control Loop Elements. It publishes a REST interface that is used to create Control Loop Instances and set values for Common and Instance Specific properties. This REST interface is public and is used by the CLAMP GUI. It may also be used by any other client via the public REST interface. The REST interface also allows the state of Control Loop Instances to be changed. A user can change the state of Control Loop Instances as described in the state transition diagram shown in Section 2.1 above. The Instantiation component issues update and state change messages via DMaaP to participants so that they can update and manage the state of the Control Loop Elements they are responsible for. The Instantiation component also implements persistence of Control Loop Instances, control loop elements, and their state changes.

The Monitoring component reads updates sent by participants. Participants report on the state of their Control Loop Elements periodically and in response to a message they have received from the Instantiation component. The Monitoring component reads the contents of the participant messages and persists their state updates and statistics records. It also publishes a REST interface that publishes the current state of all Participants, Control Loop Instances and their Control Loop Elements, as well as publishing Participant and Control Loop statistics.

The Supervision component is responsible for checking that Control Loop Instances are correctly instantiated and are in the correct state (UNINITIALIZED/PASSIVE/RUNNING). It also handles timeouts on state changes to Control Loop Instances, retrying and rolling back state changes where a state change has failed.

A Participant is an executing component that partakes in control loops. More explicitly, a Participant is something that implements the Participant Instantiation and Participant Monitoring messaging protocol over DMaaP for Life Cycle management of Control Loop Elements. A Participant runs Control Loop Elements and manages and reports on their life cycle following the instructions it gets from the CLAMP runtime in messages delivered over DMaaP.

In the figure above, five participants are shown. A Configuration Persistence Participant manages Control Loop Elements that interact with the ONAP Configuration Persistence Service to store common data. The DCAE Participant runs Control Loop Elements that manage DCAE microservices. The Kubernetes Participant hosts the Control Loop Elements that are managing the life cycle of microservices in control loops that are in a Kubernetes ecosystem. The Policy Participant handles the Control Loop Elements that interact with the Policy Framework to manage policies for control loops. A Controller Participant such as the CDS Participant runs Control Loop Elements that load metadata and configure controllers so that they can partake in control loops. Any third party Existing System Participant can be developed to run Control Loop Elements that interact with any existing system (such as an operator’s analytic, machine learning, or artificial intelligence system) so that those systems can partake in control loops.

4. Other Considerations
4.1 Management of Control Loop Instance Configurations

In order to keep management of versions of the configuration of control loop instances straightforward and easy to implement, the following version management scheme using semantic versioning is implemented. Each configuration of a Control Loop Instance and configuration of a Control Loop Element has a semantic version with 3 digits indicating the major.minor.patch number of the version.

Note

A configuration means a full set of parameter values for a Control Loop Instance.

_images/05-upgrade-states.png

Change constraints:

  1. A Control Loop or Control Loop Element in state RUNNING can be changed to a higher patch level or rolled back to a lower patch level. This means that hot changes that do not impact the structure of a Control Loop or its elements can be executed.

  2. A Control Loop or Control Loop Element in state PASSIVE can be changed to a higher minor/patch level or rolled back to a lower minor/patch level. This means that structural changes to Control Loop Elements that do not impact the Control Loop as a whole can be executed by taking the control loop to state PASSIVE.

  3. A Control Loop or Control Loop Element in state UNINITIALIZED can be changed to a higher major/minor/patch level or rolled back to a lower major/minor/patch level. This means that where the structure of the entire control loop is changed, the control loop must be uninitialized and reinitialized.

  4. If a Control Loop Element has a minor version change, then its Control Loop Instance must have at least a minor version change.

  5. If a Control Loop Element has a major version change, then its Control Loop Instance must have a major version change.
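The following sketch works through these constraints for a hypothetical Control Loop Element at version 1.2.3 (all version numbers invented):

# Control Loop Element currently at version 1.2.3
# 1.2.3 -> 1.2.4: patch change, allowed while RUNNING (constraint 1)
# 1.2.3 -> 1.3.0: minor change, requires state PASSIVE or UNINITIALIZED (constraint 2);
#                 the owning Control Loop Instance needs at least a minor bump (constraint 4)
# 1.2.3 -> 2.0.0: major change, requires state UNINITIALIZED (constraint 3);
#                 the owning Control Loop Instance needs a major bump (constraint 5)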

4.2 Scalability

The system is designed to be inherently scalable. The CLAMP runtime is stateless, all state is preserved in the Instantiated Control Loop inventory in the database. When the user requests an operation such as an instantiation, activation, passivation, or an uninitialization on a Control Loop Instance, the CLAMP runtime broadcasts the request to participants over DMaaP and saves details of the request to the database. The CLAMP runtime does not directly wait for responses to requests.

When a request is broadcast on DMaaP, the request is asynchronously picked up by participants of the types required for the Control Loop Instance and those participants manage the life cycle of its control loop elements. Periodically, each participant reports back on the status of operations it has picked up for the Control Loop Elements it controls, together with statistics on the Control Loop Elements over DMaaP. On reception of these participant messages, the CLAMP runtime stores this information to its database.

The participant to use on a control loop can be selected from the registered participants in either of two ways:

Runtime-side Selection: The CLAMP runtime selects a suitable participant from the list of participants and sends the participant ID that should be used in the Participant Update message. In this case, the CLAMP runtime decides on which participant will run the Control Loop Element based on a suitable algorithm. Algorithms could be round robin based or load based.

Participant-side Selection: The CLAMP runtime sends a list of Participant IDs that may be used in the Participant Update message. In this case, the candidate participants decide among themselves which participant should host the Control Loop Element.

This approach makes it easy to scale Control Loop life cycle management. As Control Loop Instance counts increase, more than one CLAMP runtime can be deployed and REST/supervision operations on Control Loop Instances can run in parallel. The number of participants can scale because an asynchronous broadcast mechanism is used for runtime-participant communication and there is no direct connection or communication channel between participants and CLAMP runtime servers. Participant state, Control Loop Instance state, and Control Loop Element state is held in the database, so any CLAMP runtime server can handle operations for any participant. Many participants of a particular type can be deployed, and participant instances can load balance Control Loop Element instances for different Control Loop Instances of many types across themselves, using a mechanism such as a Kubernetes cluster.

4.3 Sandboxing and API Gateway Support

At runtime, interactions between ONAP platform services and application microservices are relatively unconstrained, so interactions between Control Loop Elements for a given Control Loop Instance also remain relatively unconstrained. A proposal to support access-controlled access to and between ONAP services will improve this. This can be complemented by intercepting and controlling service accesses between Control Loop Elements for Control Loop Instances for some/all Control Loop types.

API gateways such as Kong have emerged as a useful technology for exposing and controlling service endpoint access for applications and services. When a Control Loop Type is onboarded, or when Control Loop Instances are created in the Participants, CLAMP can configure service endpoints between Control Loop Elements to redirect through an API Gateway.

Authentication and access-control rules can then be dynamically configured at the API gateway to support constrained access between Control Loop Elements and Control Loop Instances.

The diagram below shows the approach for configuring API Gateway access at Control Loop Instance and Control Loop Element level.

_images/06-api-gateway-sandbox.png

At design time, the Control Loop type definition specifies the type of API gateway configuration that should be supported at Control Loop and Control Loop Element levels.

At runtime, CLAMP can configure the API gateway to enable (or deny) interactions between Control Loop Instances and individually for each Control Loop Element. All service-level interactions in/out of a Control Loop Element, except that to/from the API Gateway, can be blocked by networking policies, thus sandboxing a Control Loop Element and an entire Control Loop Instance if desired. Therefore, a Control Loop Element will only have access to the APIs that are configured and enabled for the Control Loop Element/Instance in the API gateway.

For some Control Loop Element Types the Participant can assist with service endpoint reconfiguration, service request/response redirection to/from the API Gateway, or annotation of requests/responses.

Once the Control Loop instance is instantiated on participants, the participants configure the API gateway with the Control Loop Instance level configuration and with the specific configuration for their Control Loop Element.

Monitoring and logging of the use of the API gateway may also be provided. Information and statistics on API gateway use can be read from the API gateway and passed back in monitoring messages to the CLAMP runtime.

Additional isolation and execution-environment sandboxing can be supported depending on the Control Loop Element Type. For example: ONAP policies for given Control Loop Instances/Types can be executed in dedicated PDP engine instances; DCAE or K8S-hosted services can be executed in isolated namespaces or in dedicated workers/clusters; and so on.

5 APIs and Protocols

The APIs and Protocols used by CLAMP for Control Loops are described on the pages below:

  1. System Level Dialogues

  2. The CLAMP Control Loop Participant Protocol

  3. REST APIs for CLAMP Control Loops

6 Design and Implementation

The design and implementation of TOSCA Control Loops in CLAMP is described for each executable entity on the pages below:

  1. The CLAMP Control Loop Runtime Server

  2. CLAMP Control Loop Participants

  3. Managing Control Loops using The CLAMP GUI

End of Document

Defining Control Loops in TOSCA for CLAMP

A Control Loop Type is defined in a TOSCA service template. A TOSCA Service Template has two parts: a definition part in the service template itself, which contains the definitions of the types of concepts that can appear in a Topology Template, and a Topology Template that defines a topology. See the Oasis Open TOSCA web page for more details on TOSCA.

Unsurprisingly, to define a Control Loop Type in TOSCA, a set of predefined Control Loop related concepts that can be used in all control loops exists. These concepts are described in Section 1. Section 2 describes how properties are managed. Properties are the configuration parameters that are provided to Control Loops and the Control Loop Elements they use. Section 3 describes how to define a Control Loop using the predefined Control Loop concepts.

1 Standard TOSCA Service Template Concepts for Control Loops

These concepts are the base concepts available to users who write definitions for control loops in TOSCA. TOSCA control loop definitions are written using these concepts.

1.1 Fundamental TOSCA Concepts for Control Loops

The following TOSCA concepts are the fundamental concepts in a TOSCA Service Template for defining control loops.

_images/fundamental-concepts.png

The TOSCA concepts above may be declared in the TOSCA Service Template of a control loop. If the concepts already exist in the Design Time Catalogue or the Runtime Inventory, they may be omitted from a TOSCA service template that defines a control loop type.

The start_phase is a value indicating the start phase in which this control loop element will be started; the first start phase is zero. Control Loop Elements are started in their start_phase order and stopped in reverse start phase order. Control Loop Elements with the same start phase are started and stopped simultaneously.

The Yaml file that holds the Definition of TOSCA fundamental Control Loop Types is available in Github and is the canonical definition of the Control Loop concepts.

1.2 TOSCA Concepts for Control Loop Elements delivered by ONAP

TOSCA Standard Control Loop Elements

_images/standard-cle.png
1.2.1 Policy Control Loop Element

The Policy Participant runs Policy Control Loop Elements. Each Policy Control Loop Element manages the deployment of the policy specified in the Policy Control Loop Element definition. The Yaml file that holds the Policy Control Loop Element Type definition is available in Github and is the canonical definition of the Policy Control Loop Element type. For a description of the Policy Control Loop Element and Policy Participant, please see The CLAMP Policy Framework Participant page.

1.2.2 HTTP Control Loop Element

The HTTP Participant runs HTTP Control Loop Elements. Each HTTP Control Loop Element manages REST communication towards a REST endpoint using the REST calls a user has specified in the configuration of the HTTP Control Loop Element. The Yaml file that holds the HTTP Control Loop Element Type definition is available in Github and is the canonical definition of the HTTP Control Loop Element type. For a description of the HTTP Control Loop Element and HTTP Participant, please see The CLAMP HTTP Participant page.

1.2.3 Kubernetes Control Loop Element

The Kubernetes Participant runs Kubernetes Control Loop Elements. Each Kubernetes Control Loop Element manages a Kubernetes microservice using Helm. The user defines the Helm chart for the Kubernetes microservice as well as other properties that the microservice requires in order to execute. The Yaml file that holds the Kubernetes Control Loop Element Type definition is available in Github and is the canonical definition of the Kubernetes Control Loop Element type. For a description of the Kubernetes Control Loop Element and Kubernetes Participant, please see The CLAMP Kubernetes Participant page.

2 Common and Instance Specific Properties

Properties are used to define the configuration for Control Loops and Control Loop Elements. At design time, the types, constraints, and descriptions of the properties are specified. The values for properties are specified in the CLAMP GUI at runtime. TOSCA provides support for defining properties, see Section 3.6.10: TOSCA Property Definition in the TOSCA documentation.

2.1 Terminology for Properties

Property: Metadata defined in TOSCA that is associated with a Control Loop, a Control Loop Element, or a Participant.

TOSCA Property Type: The TOSCA definition of the type of a property. A property can have a generic type such as string or integer or can have a user defined TOSCA data type.

TOSCA Property Value: The value of a Property Type. Property values are assigned at run time in CLAMP.

Common Property Type: Property Types that apply to all instances of a Control Loop Type.

Common Property Value: The value of a Common Property Type. It is assigned at run time once for all instances of a Control Loop Type.

Instance Specific Property Type: Property Types that apply to an individual instance of a Control Loop Type.

Instance Specific Property Value: The value of a Property Type that applies to an individual instance of a Control Loop Type. The value is assigned at run time for each control loop instance.

Control Loop Properties can be common or instance specific. See Section 2 of TOSCA Defined Control Loops: Architecture and Design for a detailed description of the usage of common and instance specific properties.

2.2 Common Properties

Common properties apply to all instances of a control loop. Common properties are identified by a special metadata flag in Control Loop and Control Loop Element definitions. For example, the startPhase parameter on any Control Loop Element has the same value for any instance of that control loop element, so it is defined as shown below in the Definition of TOSCA fundamental Control Loop Types yaml file.

startPhase:
  type: integer
  required: false
  constraints:
  - greater-or-equal: 0
  description: A value indicating the start phase in which this control loop element will be started, the
              first start phase is zero. Control Loop Elements are started in their start_phase order and stopped
              in reverse start phase order. Control Loop Elements with the same start phase are started and
              stopped simultaneously
  metadata:
    common: true

The “common: true” value in the metadata of the startPhase property identifies that property as being a common property. This property will be set on the CLAMP GUI during control loop commissioning.

2.3 Instance Specific Properties

Instance Specific properties apply to individual instances of a Control Loop and/or Control Loop Element and must be set individually for each Control Loop and Control Loop Element instance. Properties are instance specific by default, but this can also be stated explicitly with a special metadata flag in Control Loop and Control Loop Element definitions. For example, the chart parameter on a Kubernetes Control Loop Element has a different value for every instance of a Kubernetes Control Loop Element, so it can be defined as shown below in the Kubernetes Control Loop Type definition yaml file.

# Definition that omits the common flag metadata
chart:
  type: org.onap.datatypes.policy.clamp.controlloop.kubernetesControlLoopElement.Chart
  typeVersion: 1.0.0
  description: The helm chart for the microservice
  required: true

# Definition that specifies the common flag metadata
chart:
  type: org.onap.datatypes.policy.clamp.controlloop.kubernetesControlLoopElement.Chart
  typeVersion: 1.0.0
  description: The helm chart for the microservice
  required: true
  metadata:
    common: false

The “common: false” value in the metadata of the chart property identifies that property as being an instance specific property. This property will be set on the CLAMP GUI during control loop instantiation.
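When the chart property value is set in the GUI at instantiation time, it could look something like the sketch below; the field names inside the Chart data type are assumptions made here for illustration, and the Kubernetes Control Loop Type definition yaml file in Github is the authoritative definition.

chart:
  chartId:
    name: gentle-guidance-microservice   # hypothetical Helm chart name
    version: 1.0.0
  namespace: onap                        # hypothetical target namespace
  releaseName: gentle-guidance           # hypothetical Helm release name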

3 Writing a Control Loop Type Definition

The TOSCA definition of a control loop contains a TOSCA Node Template for the control loop itself, which contains TOSCA Node Templates for each Control Loop Element that makes up the Control Loop.

_images/controlloop-node-template.png

To create a control loop, a user creates a TOSCA Topology Template. In the Topology Template, the user creates a TOSCA Node Template for each Control Loop Element that will be in the Control Loop Definition. Finally, the user creates the Node Template that defines the Control Loop itself, and references the Control Loop Element definitions that make up the Control Loop Definition.

3.1 The Gentle Guidance Control Loop

The best way to explain how to create a Control Loop Definition is by example.

_images/gentle-guidance-controlloop.png

The example Gentle Guidance control loop is illustrated in the diagram above. The domain logic for the control loop is implemented in a microservice running in Kubernetes, a policy, and some configuration that is passed to the microservice over a REST endpoint. We want to manage the life cycle of the domain logic for our Gentle Guidance control loop using our TOSCA based Control Loop Life Cycle Management approach. To do this we create three Control Loop Element definitions: one for the Kubernetes microservice, one for the policy, and one for the REST configuration.

3.2 The TOSCA Control Loop Definition

We use a TOSCA Topology Template to specify a Control Loop definition and the definitions of its Control Loop Elements. Optionally, we can specify default parameter values in the TOSCA Topology Template. The actual values of Control Loop common and instance specific parameters are set at run time in the CLAMP GUI.

In the case of the Gentle Guidance control loop, we define a Control Loop Element Node Template for each part of the domain logic we are managing. We then define the Control Loop Node Template for the control loop itself.
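The abbreviated sketch below indicates the overall shape of such a Topology Template; the template and type names are illustrative stand-ins, and the yaml files referenced below are the definitive specifications.

topology_template:
  node_templates:
    org.onap.domain.sample.GentleGuidanceK8sMicroservice:
      # Hypothetical Control Loop Element Node Template for the Kubernetes microservice
      type: org.onap.policy.clamp.controlloop.ControlLoopElement
      version: 1.0.0
      description: Control Loop Element for the Gentle Guidance Kubernetes microservice
    org.onap.domain.sample.GentleGuidanceControlLoop:
      # Hypothetical Control Loop Node Template referencing its Control Loop Elements
      type: org.onap.policy.clamp.controlloop.ControlLoop
      version: 1.0.0
      description: Control Loop for the Gentle Guidance domain
      properties:
        elements:
          - org.onap.domain.sample.GentleGuidanceK8sMicroservice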

Please refer to the No Properties yaml file in Github for the definitive Yaml specification for the TOSCA Topology Template for the Gentle Guidance domain when no parameters are defined.

Please refer to the Default Properties yaml file in Github for the definitive Yaml specification for the TOSCA Topology Template for the Gentle Guidance domain when the default values of parameters are defined.

4 Creating Custom Control Loop Elements

Any organization can include its own component in the framework and have the Policy Framework CLAMP manage the life cycle of the domain logic in that component as part of a Control Loop. To do this, a participant for the component must be developed that allows Control Loop Elements for that component to be run. The participant must comply with the CLAMP Participants framework, and in particular with The CLAMP Control Loop Participant Protocol. The organization must also specify a new Control Loop Element type definition in TOSCA similar to those supplied in ONAP and described in Section 1.2. This Control Loop Element type tells the CLAMP Control Loop Lifecycle management that the Control Loop Element exists and can be included in control loops. It also specifies the properties that can be specified for the Control Loop Element.

An organization can supply the code for the Participant (for example as a Java jar file) and a TOSCA artifact with the Control Loop Element definition and it can be added to the platform. In future releases, support will be provided to include participants and their Control Loop Element definitions as packaged plugins that can be installed on the platform.

End of document

CLAMP TOSCA Control Loop APIs and Protocols

The sections below describe the APIs and Protocols used in TOSCA Control Loops.

System Level Dialogues

The CLAMP Control Loop Runtime Lifecycle Management uses the following system level dialogues. These dialogues enable the CLAMP runtime capabilities described in Section 2 of TOSCA Defined Control Loops: Architecture and Design. Design Time dialogues will be described in future releases of the system.

1 Commissioning Dialogues

Commissioning dialogues are used to commission and decommission Control Loop Type definitions and to set the values of Common Parameters.

Commissioning a Control Loop Type is a three-step process:

  1. The Control Loop Type must be created, that is the Control Loop Type definition must be loaded and stored in the database. This step may be carried out over the REST interface or using SDC distribution.

  2. The Common Properties of the Control Loop type must be assigned values and those values must be stored in the database. This step is optional only if all mandatory common properties have default values. The Common Property values may be set and amended over and over again in multiple sessions until the Control Loop Type is primed.

  3. The Control Loop Type Definition and the Common Property values must be primed, that is sent to the concerned participants. Once a Control Loop Type is primed, its Common Property values can no longer be changed. To change Common Properties on a primed Control Loop Type, all instances of the Control Loop Type must be removed and the Control Loop Type must be de-primed.

1.1 Commissioning a Control Loop Type Definition using the CLAMP GUI

This dialogue corresponds to a “File → Import” menu on the CLAMP GUI. The documentation of future releases of the system will describe how the Design Time functionality interacts with the Runtime commissioning API.

_images/comissioning-clamp-gui.png
1.2 Commissioning a Control Loop Type Definition using SDC
_images/comissioning-sdc.png
1.3 Setting Common Properties for a Control Loop Type Definition

This dialogue sets the values of common properties. The values of the common properties may be set, updated, or deleted at will, as this dialogue saves the properties to the database but does not send the definitions or properties to the participants. However, once a Control Loop Type Definition and its properties are primed (See Section 1.4), the properties cannot be changed until the control loop type definition is de-primed (See Section 1.5).

_images/common-properties-type-definition.png
1.4 Priming a Control Loop Type Definition on Participants

The Priming operation sends Control Loop Type definitions and common property values to participants. Once a Control Loop Type definition is primed, its property values can no longer be changed until it is de-primed.

_images/priming-cl-type-definition.png
1.5 De-Prime a Control Loop Type Definition on Participants

This dialogue allows a Control Loop Type Definition to be de-primed so that it can be deleted or its common parameter values can be altered.

_images/depriming-cl-type-definition.png
1.6 Decommissioning a Control Loop Type Definition in CLAMP
_images/decommission-cl-type-definition.png
1.7 Reading Commissioned Control Loop Type Definitions
_images/read-commision-cl-type-definition.png
2. Instantiation Dialogues

Instantiation dialogues are used to create, set parameters on, instantiate, update, and remove Control Loop instances.

Assume a suitable Control Loop Definition exists in the Commissioned Control Loop Inventory. To get a Control Loop instance running one would, for example, execute dialogues 2.1, 2.3, and 2.4.

2.1 Creating a Control Loop Instance
_images/create-cl-instance.png

Note

This dialogue creates the Control Loop Instance in the Instantiated Control Loop Inventory. The instance is sent to the participants using the process described in the dialogue in Section 2.3.

2.2 Updating Instance Specific Parameters on a Control Loop Instance
_images/update-instance-params-cl.png
2.3 Updating a Control Loop Instance with a Configuration on Participants
_images/update-cl-instance-config-participants.png
2.4 Changing the state of a Control Loop Instance on Participants
_images/change-cl-instance-state-participants.png
2.5 De-instantiating a Control Loop Instance from Participants
_images/deinstantiate-cl-from-participants.png
2.6 Deleting a Control Loop Instance
_images/delete-cl-instance.png
2.7 Reading Control Loop Instances
_images/read-cl-instance.png
3. Monitoring Dialogues

Monitoring dialogues are used to monitor and to read statistics on Control Loop Instances.

3.1 Reporting of Monitoring Information and Statistics by Participants
_images/monitoring-by-participants.png
3.2 Viewing of Monitoring Information
_images/view-monitoring-info.png
3.3 Viewing of Statistics
_images/view-statistics.png
3.4 Statistics Housekeeping
_images/statistics-housekeeping.png
4. Supervision Dialogues

Supervision dialogues are used to check the state of Control Loop Instances and Participants.

4.1 Supervise Participants
_images/supervise-participants.png
4.2 Supervise Control Loops
_images/supervise-controlloops.png

End of Document

The CLAMP Control Loop Participant Protocol

The CLAMP Control Loop Participant protocol is an asynchronous protocol that is used by the CLAMP runtime to coordinate life cycle management of Control Loop instances. The protocol supports the functions described in the sections below.

Protocol Dialogues

The protocol supports the dialogues described below.

Participant Registration and De-Registration

Registration is executed when a participant comes up, followed by an update of the participant with control loop type information and common parameter values for its control loop types.

_images/participant-registering.png

De-registration is executed as a participant goes down.

_images/participant-deregistration.png
Control Loop Priming and De-Priming

When a control loop is primed, the portion of the Control Loop Type Definition and Common Property values for the participants of each participant type mentioned in the Control Loop Definition are sent to the participants.

_images/controlloop-priming.png

When a control loop is de-primed, the portion of the Control Loop Type Definition and Common Property values for the participants of each participant type mentioned in the Control Loop Definition are deleted on participants.

_images/controlloop-depriming.png
Control Loop Update

Control Loop Update handles creation, change, and deletion of control loops on participants. Change of control loops uses a semantic versioning approach and follows the semantics described in Section 4.1, Management of Control Loop Instance Configurations.

_images/controlloop-update.png

The handling of a ControlLoopUpdate message in each participant is as shown below.

_images/controlloop-update-msg.png
Control Loop State Change

This dialogue is used to change the state of Control Loops and their Control Loop Elements. The CLAMP Runtime sends a Control Loop State Change message on the control loop to all participants. Participants that have Control Loop Elements in that Control Loop attempt an update on the state of the control loop elements they have for that control loop, and report the result back.

The startPhase in the Definition of TOSCA fundamental Control Loop Types is particularly important in control loop state changes because the user sometimes wishes to control the order in which the states of the Control Loop Elements in a control loop change. In state changes from UNINITIALIZED to PASSIVE and from PASSIVE to RUNNING, control loop elements are started in increasing order of their startPhase. In state changes from RUNNING to PASSIVE and from PASSIVE to UNINITIALIZED, control loop elements are stopped in decreasing order of their startPhase.

The CLAMP runtime controls the state change process described in the diagram below. The CLAMP runtime sends a Control Loop State Change message on DMaaP to all participants in a particular Start Phase, so in each state change multiple Control Loop State Change messages are sent, one for each Start Phase in the control loop. If more than one Control Loop Element has the same Start Phase, those Control Loop Elements receive the same Control Loop State Change message from DMaaP and start in parallel.

The Participant reads each State Change Message it sees on DMaaP. If the Start Phase on the Control Loop State Change message matches the Start Phase of the Control Loop Element, the participant processes the State Change message. Otherwise the participant ignores the message.

_images/controlloop-state-change.png

The handling of a ControlLoopStateChange message in each participant is as shown below.

_images/controlloop-state-change-msg.png
Control Loop Monitoring and Reporting

This dialogue is used as a heartbeat mechanism for participants, to monitor the status of Control Loop Elements, and to gather statistics on control loops. The ParticipantStatus message is sent periodically by each participant. The reporting interval for sending the message is configurable.

_images/controlloop-monitoring.png
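A configuration sketch for this reporting interval is shown below; the parameter names are hypothetical, and the actual parameter definitions live in the participant code in Github.

# Hypothetical participant configuration fragment
participantParameters:
  heartBeatMs: 120000   # send a ParticipantStatus message every two minutes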
Messages

The CLAMP Control Loop Participant Protocol uses the following messages. The descriptions below give an overview of each message. For the precise definition of the messages, see the CLAMP code at Github. All messages are carried on DMaaP.

Message: ParticipantRegister
Source: Participant; Target: CLAMP Runtime
Purpose: Participant registers with the CLAMP runtime
Important Fields:
  • ParticipantId: The ID of this participant
  • ParticipantType: The type of the participant, maps to the capabilities of the participant in Control Loop Type Definitions

Message: ParticipantRegisterAck
Source: CLAMP Runtime; Target: Participant
Purpose: Acknowledgement of Participant Registration
Important Fields:
  • ParticipantId: The ID of this participant
  • ParticipantType: The type of the participant, maps to the capabilities of the participant in Control Loop Type Definitions
  • Result: Success/Fail
  • Message: Message indicating reason for failure

Message: ParticipantUpdate
Source: CLAMP Runtime; Target: Participant
Purpose: CLAMP Runtime sends Control Loop Element Definitions and Common Parameter Values to Participants
Important Fields:
  • ParticipantDefinitionUpdateMap: Map with Participant ID as its key; each value on the map is a ControlLoopElementDefintionMap
  • ControlLoopElementDefintionMap: List of ControlLoopElementDefinition values for a particular participant, keyed by its Control Loop Element Definition ID
  • ControlLoopElementDefinition: A ControlLoopElementToscaServiceTemplate containing the definition of the Control Loop Element and a CommonPropertiesMap with the values of the common property values for Control Loop Elements of this type
  • ControlLoopElementToscaServiceTemplate: The definition of the Control Loop Element in TOSCA
  • CommonPropertiesMap: A <String, String> map indexed by the property name; each map entry is the serialized value of the property, which can be deserialized into an instance of the type of the property

Message: ParticipantUpdateAck
Source: Participant; Target: CLAMP Runtime
Purpose: Acknowledgement of Participant Update
Important Fields:
  • ParticipantId: The ID of this participant
  • ParticipantType: The type of the participant, maps to the capabilities of the participant in Control Loop Type Definitions
  • Result: Success/Fail
  • Message: Message indicating reason for failure

Message: ParticipantDeregister
Source: Participant; Target: CLAMP Runtime
Purpose: Participant deregisters with the CLAMP runtime
Important Fields:
  • ParticipantId: The ID of this participant
  • ParticipantType: The type of the participant, maps to the capabilities of the participant in Control Loop Type Definitions

Message: ParticipantDeregisterAck
Source: CLAMP Runtime; Target: Participant
Purpose: Acknowledgement of Participant Deregistration
Important Fields:
  • ParticipantId: The ID of this participant
  • ParticipantType: The type of the participant, maps to the capabilities of the participant in Control Loop Type Definitions
  • Result: Success/Fail
  • Message: Message indicating reason for failure

Message: ControlLoopUpdate
Source: CLAMP Runtime; Target: Participant
Purpose: CLAMP Runtime sends Control Loop Element instances and Instance Specific Parameter Values for a Control Loop Instance to Participants
Important Fields:
  • ControlLoopId: The name and version of the Control Loop
  • ParticipantUpdateMap: Map with Participant ID as its key; each value on the map is a ControlLoopElementList
  • ControlLoopElementList: List of ControlLoopElement values for the Control Loop
  • ControlLoopElement: A ControlLoopElement, which contains among other things a PropertiesMap with the values of the property values for this Control Loop Element instance and a ToscaServiceTemplateFragment with extra concept definitions and instances that a participant may need
  • PropertiesMap: A <String, String> map indexed by the property name; each map entry is the serialized value of the property, which can be deserialized into an instance of the type of the property
  • ToscaServiceTemplateFragment: A well-formed TOSCA service template containing extra concept definitions and instances that a participant may need. For example, the Policy Participant may need policy type definitions or policy instances to be provided if they are not already stored in the Policy Framework

Message: ControlLoopUpdateAck
Source: Participant; Target: CLAMP Runtime
Purpose: Acknowledgement of Control Loop Update
Important Fields:
  • ParticipantId: The ID of this participant
  • ParticipantType: The type of the participant, maps to the capabilities of the participant in Control Loop Type Definitions
  • ControlLoopId: The name and version of the Control Loop
  • ControlLoopResult: Holds a Result and Message for the overall operation on the participant and a map of Result and Message fields for each Control Loop Element of the control loop on this participant
  • Result: Success/Fail
  • Message: Message indicating reason for failure

Message: ControlLoopStateChange
Source: CLAMP Runtime; Target: Participant
Purpose: CLAMP Runtime asks Participants to change the state of a Control Loop
Important Fields:
  • ControlLoopId: The name and version of the Control Loop
  • currentState: The current state of the Control Loop
  • orderedState: The state that the Control Loop should transition to
  • startPhase: The start phase to which this ControlLoopStateChange message applies

Message: ControlLoopStateChangeAck
Source: Participant; Target: CLAMP Runtime
Purpose: Acknowledgement of Control Loop State Change
Important Fields:
  • ParticipantId: The ID of this participant
  • ParticipantType: The type of the participant, maps to the capabilities of the participant in Control Loop Type Definitions
  • ControlLoopId: The name and version of the Control Loop
  • startPhase: The start phase to which this ControlLoopStateChangeAck message applies
  • ControlLoopResult: Holds a Result and Message for the overall operation on the participant and a map of Result and Message fields for each Control Loop Element of the control loop on this participant
  • Result: Success/Fail
  • Message: Message indicating reason for failure

Message: ParticipantStatusReq
Source: CLAMP Runtime; Target: Participant
Purpose: Request that the specified participants return a ParticipantStatus message immediately
Important Fields:
  • ParticipantId: The ID of this participant; if not specified, all participants respond

Message: ParticipantStatus
Source: Participant; Target: CLAMP Runtime
Purpose: Periodic or on-demand report for heartbeat, Participant Status, Control Loop Status, and Control Loop Statistics
Important Fields:
  • ParticipantId: The ID of this participant
  • ParticipantType: The type of the participant, maps to the capabilities of the participant in Control Loop Type Definitions
  • ParticipantDefinitionUpdateMap (returned in response to ParticipantStatusReq only): See the ParticipantUpdate message above for the definition of this field
  • ParticipantStatus: The current status of the participant for monitoring
  • ParticipantStatistics: Statistics on the participant such as up time or messages processed; can include participant specific data in a string blob that is opaque to CLAMP
  • ControlLoopInfoMap: A map of ControlLoopInfo types indexed by ControlLoopId, one entry for each control loop running on the participant
  • ControlLoopInfo: The ControlLoopStatus and ControlLoopStatistics for a given control loop
  • ControlLoopStatus: The current status of the control loop for monitoring
  • ControlLoopStatistics: Statistics on the control loop such as up time or messages processed; can include participant specific data in a string blob that is opaque to CLAMP
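For illustration only, a ParticipantRegister message as carried on DMaaP might have the shape sketched below; the field layout here is an assumption, and the message classes in the CLAMP code in Github are authoritative.

# Hypothetical ParticipantRegister message body
messageType: PARTICIPANT_REGISTER
timestamp: "2021-09-28T13:55:00Z"        # illustrative
participantId:
  name: K8sParticipant0                  # illustrative participant name
  version: 1.0.0
participantType:
  name: org.onap.k8s.participant         # illustrative participant type name
  version: 1.0.0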

End of Document

REST APIs for CLAMP Control Loops
Commissioning API

This API is a CRUD API that allows Control Loop Type definitions created in a design environment to be commissioned on the CLAMP runtime. It has endpoints that allow Control Loop Types to be created, read, updated, and deleted.

The body of the create and update endpoints is a TOSCA Service/Topology template that defines the new or changed Control Loop Type. The update and delete endpoints take a reference to the Control Loop Type. The incoming TOSCA is verified and checked for referential integrity. On delete requests, a check is made to ensure that no Control Loop Instances exist for the Control Loop Type to be deleted.

Download Policy Control Loop Commissioning API Swagger

GET /onap/controlloop/v2/commission

Query details of the requested commissioned control loop definitions

  • Description: Queries details of the requested commissioned control loop definitions, returning all control loop details

  • Produces: [‘application/json’, ‘application/yaml’]

Parameters

  • name (query, string): Control Loop definition name

  • version (query, string): Control Loop definition version

  • X-ONAP-RequestID (header, string): RequestID for http transaction

Responses

200 - OK

401 - Authentication Error

403 - Authorization Error

404 - Not Found

500 - Internal Server Error

POST /onap/controlloop/v2/commission

Commissions control loop definitions

  • Description: Commissions control loop definitions, returning the commissioned control loop definition IDs

  • Consumes: [‘application/json’, ‘application/yaml’]

  • Produces: [‘application/json’, ‘application/yaml’]

Parameters

  • body (body): Entity Body of Control Loop

  • X-ONAP-RequestID (header, string): RequestID for http transaction

Responses

200 - OK

201 - Created

401 - Authentication Error

403 - Authorization Error

404 - Not Found

500 - Internal Server Error

DELETE /onap/controlloop/v2/commission

Delete a commissioned control loop

  • Description: Deletes a Commissioned Control Loop, returning optional error details

  • Produces: [‘application/json’, ‘application/yaml’]

Parameters

Name               Position   Description                       Type
name               query      Control Loop definition name      string
version            query      Control Loop definition version   string
X-ONAP-RequestID   header     RequestID for http transaction    string

Responses

200 - OK

204 - No Content

401 - Authentication Error

403 - Authorization Error

500 - Internal Server Error

GET /onap/controlloop/v2/commission/elements

Query details of the requested commissioned control loop element definitions

  • Description: Queries details of the requested commissioned control loop element definitions, returning all control loop elements’ details

  • Produces: [‘application/json’, ‘application/yaml’]

Parameters

Name               Position   Description                       Type
name               query      Control Loop definition name      string
version            query      Control Loop definition version   string
X-ONAP-RequestID   header     RequestID for http transaction    string

Responses

200 - OK

401 - Authentication Error

403 - Authorization Error

404 - Not Found

500 - Internal Server Error

Instantiation API

The instantiation API has two functions:

  1. Creation, Reading, Update, and Deletion of Control Loop Instances.

  2. Instantiation and lifecycle management of Control Loop Instances on participants

The Instantiation API is used by the CLAMP GUI.

Instantiation Control Loop Instance CRUD

This sub API allows for the creation, read, update, and deletion of Control Loop Instances. The endpoints for create and update take a JSON body that describes the Control Loop Instance. The endpoints for read and delete take a Control Loop Instance ID to determine which Control Loop Instance to act on. For the delete endpoint, a check is made to ensure that the Control Loop Instance is not instantiated on participants.

A call to the update endpoint for a Control Loop Instance follows the semantics described here: 4.1 Management of Control Loop Instance Configurations <management-cl-instance-configs>.
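As an illustration, the body of a create (POST) request might look like the sketch below; the field names (controlLoopList, definition, state, orderedState) are assumptions based on the Control Loop concepts above, and the instance and definition names are invented:

# Illustrative instantiation request body; field names are assumptions
controlLoopList:
- name: PMSHInstance0
  version: 1.0.0
  definition:
    name: org.onap.domain.pmsh.PMSH_ControlLoopDefinition
    version: 1.2.3
  description: PMSH control loop instance
  state: UNINITIALISED
  orderedState: UNINITIALISED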

Download Policy Control Loop Instantiation API Swagger

GET /onap/controlloop/v2/instantiation

Query details of the requested control loops

  • Description: Queries details of the requested control loops, returning all control loop details

  • Produces: [‘application/json’, ‘application/yaml’]

Parameters

Name               Position   Description                       Type
name               query      Control Loop definition name      string
version            query      Control Loop definition version   string
X-ONAP-RequestID   header     RequestID for http transaction    string

Responses

200 - OK

401 - Authentication Error

403 - Authorization Error

404 - Not Found

500 - Internal Server Error

POST /onap/controlloop/v2/instantiation

Creates control loop instances

  • Description: Creates control loop instances, returning the IDs of the created control loops

  • Consumes: [‘application/json’, ‘application/yaml’]

  • Produces: [‘application/json’, ‘application/yaml’]

Parameters

Name               Position   Description                      Type
controlLoops       body       Entity Body of Control Loop
X-ONAP-RequestID   header     RequestID for http transaction   string

Responses

200 - OK

201 - Created

401 - Authentication Error

403 - Authorization Error

404 - Not Found

500 - Internal Server Error

PUT /onap/controlloop/v2/instantiation

Updates control loop instances

  • Description: Updates control loop instances, returning the updated control loop instance IDs

  • Consumes: [‘application/json’, ‘application/yaml’]

  • Produces: [‘application/json’, ‘application/yaml’]

Parameters

Name               Position   Description                      Type
controlLoops       body       Entity Body of Control Loop
X-ONAP-RequestID   header     RequestID for http transaction   string

Responses

200 - OK

201 - Created

401 - Authentication Error

403 - Authorization Error

404 - Not Found

500 - Internal Server Error

DELETE /onap/controlloop/v2/instantiation

Delete a control loop

  • Description: Deletes a control loop, returning optional error details

  • Produces: [‘application/json’, ‘application/yaml’]

Parameters

Name               Position   Description                       Type
name               query      Control Loop definition name      string
version            query      Control Loop definition version   string
X-ONAP-RequestID   header     RequestID for http transaction    string

Responses

200 - OK

204 - No Content

401 - Authentication Error

403 - Authorization Error

500 - Internal Server Error

PUT /onap/controlloop/v2/instantiation/command

Issue a command to the requested control loops

  • Description: Issues a command to a control loop, ordering a state change on the control loop (an illustrative command body is sketched after the response codes below)

  • Consumes: [‘application/json’, ‘application/yaml’]

  • Produces: [‘application/json’, ‘application/yaml’]

Parameters

Name               Position   Description                           Type
command            body       Entity Body of control loop command
X-ONAP-RequestID   header     RequestID for http transaction        string

Responses

200 - OK

201 - Created

401 - Authentication Error

403 - Authorization Error

404 - Not Found

500 - Internal Server Error
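An illustrative command body for this endpoint is sketched below. The field names (orderedState, controlLoopIdentifierList) are assumptions about the command entity, and the instance name is the invented example used earlier in this section:

# Illustrative state-change command body; field names are assumptions
orderedState: PASSIVE
controlLoopIdentifierList:
- name: PMSHInstance0
  version: 1.0.0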

Instantiation Control Loop Instance Lifecycle Management

This sub API is used to manage the life cycle of Control Loop Instances. A Control Loop Instance can be in the states described here: 2.1 Control Loop Instance States <controlloop-instance-states>. Managing the life cycle of a Control Loop Instance amounts to steering the Control Loop through its states.

The sub API allows upgrades and downgrades of Control Loop Instances to be pushed to participants following the semantics described here: 4.1 Management of Control Loop Instance Configurations <management-cl-instance-configs>. When the API is used to update the participants on a Control Loop Instance, the new/upgraded/downgraded definition of the Control Loop is pushed to the participants. Note that the API asks the participants in a Control Loop Instance to perform the update; it is the responsibility of the participants to execute the update and report the result using the protocols described here: CLAMP Participants. The progress and result of an update can be monitored using the Monitoring API <monitoring-api>.

The sub API also allows a state change of a Control Loop Instance to be ordered. The required state of the Control Loop Instance is pushed to the participants in a Control Loop Instance using the API. Note that the API asks the participants in a Control Loop Instance to perform the state change; it is the responsibility of the participants to execute the state change and report the result using the protocols described here: CLAMP Participants. The progress and result of a state change can be monitored using the Monitoring API <monitoring-api>.

Warning

The Swagger for the Instantiation Lifecycle Management API will appear here.

Monitoring API

The Monitoring API allows the state and statistics of Participants, Control Loop Instances and their Control Loop Elements to be monitored. This API is used by the CLAMP GUI. The API provides filtering so that specific Participants and Control Loop Instances can be retrieved. In addition, the quantity of statistical information to be returned can be scoped.

Download Policy Control Loop Monitoring API Swagger

GET /onap/controlloop/v2/monitoring/clelement

Query details of the requested cl element stats

  • Description: Queries details of the requested cl element stats, returning all clElement stats

  • Produces: [‘application/json’, ‘application/yaml’]

Parameters

Name               Position   Description                      Type
endTime            query      end time                         string
id                 query      Control Loop element id          string
name               query      Participant name                 string
recordCount        query      Record count                     integer
startTime          query      start time                       string
version            query      Participant version              string
X-ONAP-RequestID   header     RequestID for http transaction   string

Responses

200 - OK

401 - Authentication Error

403 - Authorization Error

404 - Not Found

500 - Internal Server Error

GET /onap/controlloop/v2/monitoring/clelements/controlloop

Query details of the requested cl element stats in a control loop

  • Description: Queries details of the requested cl element stats, returning all clElement stats

  • Produces: [‘application/json’, ‘application/yaml’]

Parameters

Name               Position   Description                      Type
name               query      Control Loop name                string
version            query      Control Loop version             string
X-ONAP-RequestID   header     RequestID for http transaction   string

Responses

200 - OK

401 - Authentication Error

403 - Authorization Error

404 - Not Found

500 - Internal Server Error

GET /onap/controlloop/v2/monitoring/participant

Query details of the requested participant stats

  • Description: Queries details of the requested participant stats, returning all participant stats

  • Produces: [‘application/json’, ‘application/yaml’]

Parameters

Name               Position   Description                        Type
endTime            query      end time                           string
name               query      Control Loop participant name      string
recordCount        query      Record count                       integer
startTime          query      start time                         string
version            query      Control Loop participant version   string
X-ONAP-RequestID   header     RequestID for http transaction     string

Responses

200 - OK

401 - Authentication Error

403 - Authorization Error

404 - Not Found

500 - Internal Server Error

GET /onap/controlloop/v2/monitoring/participants/controlloop

Query details of all the participant stats in a control loop

  • Description: Queries details of the participant stats, returning all participant stats

  • Produces: [‘application/json’, ‘application/yaml’]

Parameters

Name               Position   Description                      Type
name               query      Control Loop name                string
version            query      Control Loop version             string
X-ONAP-RequestID   header     RequestID for http transaction   string

Responses

200 - OK

401 - Authentication Error

403 - Authorization Error

404 - Not Found

500 - Internal Server Error

Pass Through API

This API allows information to be passed to Control Loop Elements in a control loop.

Warning

The requirements on this API are still under discussion.

Warning

The Swagger for the Pass Through API will appear here.

Participant Standalone API

This API allows a Participant to run in standalone mode and to run standalone Control Loop Elements.

The Kubernetes participant can also be deployed as a standalone application. It provides REST endpoints for onboarding helm charts to its local chart storage and for installing and uninstalling helm charts on a kubernetes cluster. It also allows a remote repository to be configured in the kubernetes participant for installing helm charts. A user can onboard a helm chart along with an overrides yaml file; the chart is stored in the local chart directory of the kubernetes participant. Onboarded charts can be installed and uninstalled, and the GET API fetches all the available helm charts from the chart storage.

Download Policy Control Loop Participant Standalone API Swagger

DELETE /onap/k8sparticipant/helm/chart/{name}/{version}

Delete the chart

  • Produces: [‘*/*’]

Parameters

Name      Position   Description   Type
name      path       name          string
version   path       version       string

Responses

200 - OK

204 - Chart Deleted

401 - Unauthorized

403 - Forbidden

GET /onap/k8sparticipant/helm/charts

Return all Charts

  • Produces: [‘application/json’]

Responses

200 - chart List

401 - Unauthorized

403 - Forbidden

404 - Not Found

POST /onap/k8sparticipant/helm/install

Install the chart

  • Consumes: [‘application/json’]

  • Produces: [‘application/json’]

Parameters

Name   Position   Description   Type
info   body       info

Responses

200 - OK

201 - chart Installed

401 - Unauthorized

403 - Forbidden

404 - Not Found

POST /onap/k8sparticipant/helm/onboard/chart

Onboard the Chart

  • Consumes: [‘multipart/form-data’]

  • Produces: [‘application/json’]

Parameters

Name     Position   Description   Type
chart    formData                 file
info     formData                 string
values   body       values

Responses

200 - OK

201 - Chart Onboarded

401 - Unauthorized

403 - Forbidden

404 - Not Found

POST /onap/k8sparticipant/helm/repo

Configure helm repository

  • Consumes: [‘application/json’]

  • Produces: [‘application/json’]

Parameters

Name   Position   Description   Type
repo   body       repo

Responses

200 - OK

201 - Repository added

401 - Unauthorized

403 - Forbidden

404 - Not Found

DELETE /onap/k8sparticipant/helm/uninstall/{name}/{version}

Uninstall the Chart

  • Produces: [‘application/json’]

Parameters

Name      Position   Description   Type
name      path       name          string
version   path       version       string

Responses

200 - OK

201 - chart Uninstalled

204 - No Content

401 - Unauthorized

403 - Forbidden

Participant Simulator API

This API allows a Participant Simulator to be started and run for test purposes.

Download Policy Participant Simulator API Swagger

PUT /onap/participantsim/v2/elements

Updates simulated control loop elements

  • Description: Updates simulated control loop elements, returning the updated control loop definition IDs

  • Consumes: [‘application/json’]

  • Produces: [‘application/json’, ‘application/yaml’]

Parameters

Name               Position   Description                      Type
body               body       Body of a control loop element
X-ONAP-RequestID   header     RequestID for http transaction   string

Responses

200 - OK

201 - Created

401 - Authentication Error

403 - Authorization Error

404 - Not Found

500 - Internal Server Error

GET /onap/participantsim/v2/elements/{name}/{version}

Query details of the requested simulated control loop elements

  • Description: Queries details of the requested simulated control loop elements, returning all control loop element details

  • Produces: [‘application/json’, ‘application/yaml’]

Parameters

Name               Position   Description                      Type
name               path       Control loop element name        string
version            path       Control loop element version     string
X-ONAP-RequestID   header     RequestID for http transaction   string

Responses

200 - OK

401 - Authentication Error

403 - Authorization Error

404 - Not Found

500 - Internal Server Error

PUT /onap/participantsim/v2/participants

Updates simulated participants

  • Description: Updates simulated participants, returning the updated control loop definition IDs

  • Consumes: [‘application/json’]

  • Produces: [‘application/json’, ‘application/yaml’]

Parameters

Name               Position   Description                      Type
body               body       Body of a participant
X-ONAP-RequestID   header     RequestID for http transaction   string

Responses

200 - OK

201 - Created

401 - Authentication Error

403 - Authorization Error

404 - Not Found

500 - Internal Server Error

GET /onap/participantsim/v2/participants/{name}/{version}

Query details of the requested simulated participants

  • Description: Queries details of the requested simulated participants, returning all participant details

  • Produces: [‘application/json’, ‘application/yaml’]

Parameters

Name               Position   Description                      Type
name               path       Participant name                 string
version            path       Participant version              string
X-ONAP-RequestID   header     RequestID for http transaction   string

Responses

200 - OK

401 - Authentication Error

403 - Authorization Error

404 - Not Found

500 - Internal Server Error


CLAMP TOSCA Control Loop Components: Design and Implementation

The sections below describe the components that handle TOSCA Control Loops.

The CLAMP Control Loop Runtime

This article explains how the CLAMP Control Loop Runtime is implemented.

Terminology
  • Broadcast message: a message for all participants (participantId=null and participantType=null)

  • Message to a participant: a message only for a participant (participantId and participantType properly filled)

  • ThreadPoolExecutor: a ThreadPoolExecutor executes the given tasks; in the SupervisionAspect class it is configured to execute tasks in an ordered manner, one by one

  • Spring Scheduling: in the SupervisionAspect class, the @Scheduled annotation invokes the “schedule()” method every “runtime.participantParameters.heartBeatMs” milliseconds with a fixed delay

  • MessageIntercept: the “@MessageIntercept” annotation is used in the SupervisionHandler class to intercept “handleParticipantMessage” method calls using Spring aspect-oriented programming

  • GUI: graphical user interface, Postman or a Front-End Application

Design of Rest Api
Creation of a Control Loop Type
  • GUI calls POST “/commission” endpoint with a Control Loop Type Definition (Tosca Service Template) as body

  • CL-runtime receives the call through its Rest-Api (CommissioningController)

  • It saves the Tosca Service Template to the DB using PolicyModelsProvider

  • if there are participants registered, it triggers the execution to send a broadcast PARTICIPANT_UPDATE message

  • the message is built by ParticipantUpdatePublisher using Tosca Service Template data (to fill the list of ParticipantDefinition)

Deletion of a Control Loop Type
  • GUI calls DELETE “/commission” endpoint

  • CL-runtime receives the call through its Rest-Api (CommissioningController)

  • if there are participants registered, CL-runtime triggers the execution to send a broadcast PARTICIPANT_UPDATE message

  • the message is built by ParticipantUpdatePublisher with an empty list of ParticipantDefinition

  • It deletes the Control Loop Type from DB

Creation of a Control Loop
  • GUI calls POST “/instantiation” endpoint with a Control Loop as body

  • CL-runtime receives the call through its Rest-Api (InstantiationController)

  • It validates the Control Loop

  • It saves the Control Loop to DB

Update of a Control Loop

  • GUI calls PUT “/instantiation” endpoint with a Control Loop as body

  • CL-runtime receives the call through its Rest-Api (InstantiationController)

  • It validates the Control Loop

  • It saves the Control Loop to DB

Deletion of a Control Loop
  • GUI calls DELETE “/instantiation” endpoint

  • CL-runtime receives the call through its Rest-Api (InstantiationController)

  • It checks that the Control Loop is in UNINITIALISED state

  • It deletes the Control Loop from DB

“issues control loop commands to control loops”

case UNINITIALISED to PASSIVE

  • GUI calls “/instantiation/command” endpoint with PASSIVE as orderedState

  • CL-runtime checks that the registered participants match the list of Control Loop Elements

  • It updates control loop and control loop elements to DB (orderedState = PASSIVE)

  • It validates the status order issued

  • It triggers the execution to send a broadcast CONTROL_LOOP_UPDATE message

  • the message is built by ControlLoopUpdatePublisher using Tosca Service Template data and ControlLoop data (with startPhase = 0)

  • It updates control loop and control loop elements to DB (state = UNINITIALISED2PASSIVE)

case PASSIVE to UNINITIALISED

  • GUI calls “/instantiation/command” endpoint with UNINITIALISED as orderedState

  • CL-runtime checks that the registered participants match the list of Control Loop Elements

  • It updates control loop and control loop elements to DB (orderedState = UNINITIALISED)

  • It validates the status order issued

  • It triggers the execution to send a broadcast CONTROL_LOOP_STATE_CHANGE message

  • the message is built by ControlLoopStateChangePublisher with controlLoopId

  • It updates control loop and control loop elements to DB (state = PASSIVE2UNINITIALISED)

case PASSIVE to RUNNING

  • GUI calls “/instantiation/command” endpoint with RUNNING as orderedState

  • CL-runtime checks that the registered participants match the list of Control Loop Elements

  • It updates control loop and control loop elements to DB (orderedState = RUNNING)

  • It validates the status order issued

  • It triggers the execution to send a broadcast CONTROL_LOOP_STATE_CHANGE message

  • the message is built by ControlLoopStateChangePublisher with controlLoopId

  • It updates control loop and control loop elements to DB (state = PASSIVE2RUNNING)

case RUNNING to PASSIVE

  • GUI calls “/instantiation/command” endpoint with PASSIVE as orderedState

  • CL-runtime checks that the registered participants match the list of Control Loop Elements

  • It updates control loop and control loop elements to DB (orderedState = PASSIVE)

  • It validates the status order issued

  • It triggers the execution to send a broadcast CONTROL_LOOP_STATE_CHANGE message

  • the message is built by ControlLoopStateChangePublisher with controlLoopId

  • It updates control loop and control loop elements to DB (state = RUNNING2PASSIVE)

StartPhase

The startPhase is particularly important in control loop updates and control loop state changes, because sometimes the user wishes to control the order in which the states of the Control Loop Elements in a control loop change.

How to define StartPhase

StartPhase is defined as shown below in the Definition of TOSCA fundamental Control Loop Types yaml file.

startPhase:
  type: integer
  required: false
  constraints:
  - greater-or-equal: 0
  description: A value indicating the start phase in which this control loop element will be started, the
               first start phase is zero. Control Loop Elements are started in their start_phase order and stopped
               in reverse start phase order. Control Loop Elements with the same start phase are started and
               stopped simultaneously
  metadata:
    common: true

The “common: true” value in the metadata of the startPhase property identifies that property as being a common property. This property will be set on the CLAMP GUI during control loop commissioning. Example where it could be used:

org.onap.domain.database.Http_PMSHMicroserviceControlLoopElement:
  # Consul http config for PMSH.
  version: 1.2.3
  type: org.onap.policy.clamp.controlloop.HttpControlLoopElement
  type_version: 1.0.1
  description: Control loop element for the http requests of PMSH microservice
  properties:
    provider: ONAP
    participant_id:
      name: HttpParticipant0
      version: 1.0.0
    participantType:
      name: org.onap.k8s.controlloop.HttpControlLoopParticipant
      version: 2.3.4
    uninitializedToPassiveTimeout: 180
    startPhase: 1
How StartPhase works

In state changes from UNINITIALISED → PASSIVE, control loop elements are started in increasing order of their startPhase.

Example with Http_PMSHMicroserviceControlLoopElement with startPhase set to 1 and PMSH_K8SMicroserviceControlLoopElement with startPhase set to 0:

  • CL-runtime sends a broadcast CONTROL_LOOP_UPDATE message to all participants with startPhase = 0

  • the participant receives the CONTROL_LOOP_UPDATE message and moves to PASSIVE state (only the CL elements defined with startPhase = 0)

  • CL-runtime receives CONTROL_LOOP_UPDATE_ACK messages from participants and sets the state (from the CL element of the message) to PASSIVE

  • CL-runtime determines that all CL elements with startPhase = 0 are in the proper state and sends a broadcast CONTROL_LOOP_UPDATE message with startPhase = 1

  • the participant receives the CONTROL_LOOP_UPDATE message and moves to PASSIVE state (only the CL elements defined with startPhase = 1)

  • CL-runtime determines that all CL elements are in the proper state and sets the CL to PASSIVE

In this scenario the CONTROL_LOOP_UPDATE message has been sent twice.
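For reference, the startPhase settings for the two Control Loop Elements in this example would look like the sketch below (all other element properties omitted):

PMSH_K8SMicroserviceControlLoopElement:
  properties:
    startPhase: 0   # started in the first phase, stopped in the last

Http_PMSHMicroserviceControlLoopElement:
  properties:
    startPhase: 1   # started in the second phase, stopped in the first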

Design of managing messages
PARTICIPANT_REGISTER
  • A participant starts and sends a PARTICIPANT_REGISTER message

  • ParticipantRegisterListener collects the message from DMaaP

  • if not present, it saves the participant reference with status UNKNOWN to the DB

  • if a Control Loop Type is present, it triggers the execution to send a PARTICIPANT_UPDATE message to the registered participant (priming message)

  • the message is built by ParticipantUpdatePublisher using Tosca Service Template data (to fill the list of ParticipantDefinition)

  • It triggers the execution to send a PARTICIPANT_REGISTER_ACK message to the registered participant

  • MessageIntercept intercepts that event; if a PARTICIPANT_UPDATE message has been sent, it adds a task to handle PARTICIPANT_REGISTER in SupervisionScanner

  • SupervisionScanner starts the monitoring for participantUpdate

PARTICIPANT_UPDATE_ACK
  • A participant sends a PARTICIPANT_UPDATE_ACK message in response to a PARTICIPANT_UPDATE message

  • ParticipantUpdateAckListener collects the message from DMaaP

  • MessageIntercept intercepts that event and adds a task to handle PARTICIPANT_UPDATE_ACK in SupervisionScanner

  • SupervisionScanner removes the monitoring for participantUpdate

  • It updates the status of the participant in the DB

PARTICIPANT_STATUS
  • A participant sends a scheduled PARTICIPANT_STATUS message

  • ParticipantStatusListener collects the message from DMaaP

  • MessageIntercept intercepts that event and adds a task to handle PARTICIPANT_STATUS in SupervisionScanner

  • SupervisionScanner clears and starts the monitoring for participantStatus

CONTROLLOOP_UPDATE_ACK
  • A participant sends a CONTROLLOOP_UPDATE_ACK message in response to a CONTROLLOOP_UPDATE message, one for each CL element moved to the ordered state indicated by the CONTROLLOOP_UPDATE

  • ControlLoopUpdateAckListener collects the message from DMaaP

  • It checks the status of all control loop elements and checks if the control loop is primed

  • It updates the CL in the DB if it has changed

  • MessageIntercept intercepts that event and adds a task to handle a monitoring execution in SupervisionScanner

CONTROLLOOP_STATECHANGE_ACK

The design of a CONTROLLOOP_STATECHANGE_ACK is similar to the design of a CONTROLLOOP_UPDATE_ACK.

Design of monitoring execution in SupervisionScanner

Monitoring is designed to handle the following operations:

  • to determine the next startPhase in a CONTROLLOOP_UPDATE message

  • to update the CL state: in a scenario where “ControlLoop.state” is in a transitional state (for example UNINITIALISED2PASSIVE), if all CL elements have moved properly to the target state, “ControlLoop.state” is updated to that state and saved to the DB

  • to retry CONTROLLOOP_UPDATE/CONTROL_LOOP_STATE_CHANGE messages: if there is a CL Element not in the proper state, the broadcast message is retried

  • to retry the PARTICIPANT_UPDATE message to a participant in a scenario where CL-runtime does not receive a PARTICIPANT_UPDATE_ACK from it

  • to send a PARTICIPANT_STATUS_REQ to a participant in a scenario where CL-runtime does not receive a PARTICIPANT_STATUS from it

Retry, timeout, and reporting for all Participant message dialogues are implemented in the monitoring execution.

  • Spring Scheduling inserts the task to monitor retry execution into ThreadPoolExecutor

  • ThreadPoolExecutor executes the task

  • a message is retried if CL-runtime does not receive an ACK message within MaxWaitMs milliseconds

Design of Exception handling
GlobalControllerExceptionHandler

If an error occurs during a Rest Api call, CL-runtime responds with an appropriate error status code and a JSON error message. This class is implemented to intercept and handle ControlLoopException, PfModelException and PfModelRuntimeException if they are thrown during Rest Api calls. All of those exception classes implement ErrorResponseInfo, which contains the error message and the response status code, so the Exception is converted into a JSON message.

RuntimeErrorController

If a wrong end-point is called or an Exception is not intercepted by GlobalControllerExceptionHandler, CL-runtime responds with an appropriate error status code and a JSON error message. This class is implemented to redirect the standard Web error page to a JSON error message. Typically that happens when a wrong end-point is called, but it can also happen for unauthorized calls or any other Exception not intercepted by GlobalControllerExceptionHandler.

Handle version and “X-ONAP-RequestID”

The RequestResponseLoggingFilter class handles the version and “X-ONAP-RequestID” during a Rest-Api call; it works as a filter, intercepting the Rest-Api call and adding that information to the header.

Media Type Support

The CL-runtime Rest Api supports the application/json, application/yaml and text/plain Media Types. The configuration is implemented in CoderHttpMesageConverter.

application/json

JSON is the standard format for a Rest Api. org.onap.policy.common.utils.coder.StandardCoder is used for the conversion from JSON to Object and vice versa.

application/yaml

YAML is the standard format for Control Loop Type Definitions. org.onap.policy.common.utils.coder.StandardYamlCoder is used for the conversion from YAML to Object and vice versa.

text/plain

The text format is used by Prometheus. StringHttpMessageConverter is used for the conversion from Object to String.

The Policy GUI for Control Loops
1. Introduction

The Policy GUI for Control Loops is designed to give a user the ability to interact with the Control Loop Runtime to perform several actions. The actual technical design of the Control Loop Runtime is detailed in The CLAMP Control Loop Runtime. All of the endpoints and the purpose for accessing those endpoints is discussed there. In the current release of the GUI, the main purposes are:

  • Commission new Tosca Service Templates.

  • Edit Common Properties.

  • Prime/De-prime Control Loop Definitions.

  • Decommission existing Tosca Service Templates.

  • Create new instances of Control Loops.

  • Change the state of the Control Loops.

  • Delete Control Loops.

These functions can be carried out by accessing the Controlloop Runtime directly, but a typical user should not be required to access the system in that way; that is why the Controlloop GUI is needed. The remainder of this document is split into two main sections. The first shows the overall architecture of ControlLoop with the GUI included, so that the reader can see where it fits into the system, and then outlines the individual components required for a working GUI, how the GUI interacts with these components, and why. The final section contains a diagram showing the flow of typical operations from the GUI all the way down to the participants.

2. GUI-focussed System Architecture

An architectural/functional diagram is provided below. It does not show details of the other components involved in the GUI functionality; most of the detail is provided for the GUI itself.

_images/GUI-Architecture.png

The remainder of this section outlines the different elements that comprise the architecture of the GUI and how the different elements connect to one another.

2.1 Policy CLAMP GUI
2.1.1 CLAMP GUI

The original Clamp project used the GUI to connect to various ONAP services, including policy api, policy pap, dcae, sdc and cds. Connection to all of these services is managed by the Camel Exchange described in section 2.2 Policy Clamp Backend.

Class-based react components are used to render the different pages related to functionality around:

  • Creating loop instances from existing templates that have been distributed by SDC.

  • Deploying/Undeploying policies to the policy framework.

  • Deploying/Undeploying microservices to the policy framework.

  • Deleting Instances.

Although this GUI deploys microservices, it follows a completely different paradigm from the new ControlLoop participant-based deployment of services. Details of the CLAMP GUI are provided in Policy/CLAMP - Control Loop Automation Management Platform.

2.1.2 Controlloop GUI

The current control loop GUI is an extension of the previously created GUI for the Clamp project. The Clamp project used the CLAMP GUI to connect to various onap services, including policy api, policy pap, dcae, sdc and cds. Although the current control loop project builds upon this GUI, it does not rely on these connected services. Instead, the ControlLoop GUI connects to the ControlLoop Runtime only. The ControlLoop Runtime then communicates with the database and all the ControlLoop participants (indirectly) over DMAAP.

The CLAMP GUI was originally housed in the clamp repository but, for the Istanbul release, it has been moved to the policy/gui repo. There are 3 different GUIs within this repository; the clamp-gui (and ControlLoop GUI) code is housed under the “gui-clamp” directory, and the majority of development takes place within the “gui-clamp/ui-react” directory.

The original CLAMP GUI was created using the React framework, which is a light-weight framework that promotes the use of component-based architecture. Previously, a class-based style was adopted to create the Clamp components. It was decided that ControlLoop would opt for the more concise functional style of components. This architecture style allows for the logical separation of functionality into different components and minimizes bloat. As can be seen from the image, there is a “ControlLoop” directory under components where all of the ControlLoop components are housed.

_images/ComponentFileStructure.png

Any code that is directly involved in communication with outside services like Rest Apis is under “ui-react/src/api”. The “fetch” Javascript library is used for these calls. The ControlLoop service communicates with just the ControlLoop Runtime Rest Api, so all of the communication code is within “ui-react/src/api/ControlLoopService.js”.

2.1.2.1 Services

The ControlLoop GUI is designed to be service-centric. This means that the code involved in rendering and manipulating data is housed in a different place to the code responsible for communication with outside services. The ControlLoop related services are those responsible for making calls to the commissioning and instantiation endpoints in the ControlLoop Runtime. Another detail to note is that both the ControlLoop and CLAMP GUI use a proxy to forward requests to the policy clamp backend. Any URLs called by the frontend that contain the path “restservices/clds/v2/” are forwarded to the backend. Services are detailed below:

  • A commissioning call is provided for contacting the commissioning API to commission a tosca service template.

  • A decommissioning call is provided for calling the decommissioning endpoint.

  • A call to retrieve the tosca service template from the runtime is provided. This is useful for carrying out manipulations on the template, such as editing the common properties.

  • A call to get the common or instance properties is provided. This is used to provide the user an opportunity to edit these properties.

  • Calls to allow creation and deletion of an instance are provided

  • Calls to change the state of an instance are provided.

  • Calls to get the current state and ordered state of the instances are provided, effectively enabling monitoring.

These services provide the data and communication functionality to allow the user to perform all of the actions mentioned in 1. Introduction.

2.1.2.2 Components

The components in the architecture image reflect those rendered elements that are presented to the user. Each element is designed to be as user-friendly as possible, providing the user with clean uncluttered information. Note that all of these components relate to and were designed around specific system dialogues that are present in System Level Dialogues.

  • For commissioning, the user is provided with a simple file upload. This is something the user will have seen many times before and is self-explanatory.

  • For the editing of common properties, a JSON editor is used to present whatever common properties are present in the service template to the user in as simple a way as possible. The user can then edit, save and recommission.

  • A link is provided to manage the tosca service template, where the user can view the file that has been uploaded in JSON format and optionally delete it.

  • Several functions are exposed to the user in the “Manage Instances” modal. From there they can trigger creation of an instance, view monitoring information, delete an instance and change the state.

  • Before an instance is created, the user is provided an opportunity to edit the instance properties. That is, those properties that have not been marked as common.

  • The user can change the state of the instance by using the “Change” button on the “Manage Instances” modal. This is effectively where the user can deploy and undeploy an instance.

  • Priming and De-priming take place as a result of the actions of commissioning and decommissioning a tosca service template. A more complete discussion of priming and de-priming is found here: The CLAMP Control Loop Participant Protocol.

  • As part of the “Manage Instances” modal, we can monitor the state of the instances in 2 ways. The color of the instance highlight in the table indicates the state (grey - uninitialised, yellow - passive, green - running). Also, there is a monitoring button that allows the user to view the individual elements’ state.

2.2 Policy Clamp Backend

The only Rest API that the ControlLoop frontend (and CLAMP frontend) communicates with directly is the Clamp backend. The backend is written in the Springboot framework and has many functions. In this document, we will only discuss the ControlLoop related functionality. Further description of non-ControlLoop Clamp and its architecture can be found in Policy/CLAMP - Control Loop Automation Management Platform. The backend receives the calls from the frontend and forwards the requests to other relevant APIs. In the case of the ControlLoop project, the only Rest API that it currently requires communication with is the runtime ControlLoop API. ControlLoop adopts the same “request forwarding” method as the non-ControlLoop elements in the CLAMP GUI. This forwarding is performed by Apache Camel Exchanges, which are specified in XML and can be found in the directory shown below in the Clamp repository.

_images/CamelDirectory.png

The Rest Endpoints for the GUI to call are defined in “clamp-api-v2.xml” and all of the runtime ControlLoop rest endpoints that GUI requests are forwarded to are defined in ControlLoop-flows.xml. If an Endpoint is added to the runtime ControlLoop component, or some other component you wish the GUI to communicate with, a Camel XML exchange must be defined for it here.

2.3 ControlLoop Runtime

This is where all of the endpoints for operations on ControlLoops are defined thus far. Commissioning, decommissioning, control loop creation, control loop state change and control loop deletion are all performed here. The component is written using the Springboot framework and all of the code is housed in the runtime-ControlLoop directory shown below:

_images/RuntimeControlloopDirectory.png

The rest endpoints are split over two main classes: CommissioningController.java and InstantiationController.java. There are also some rest endpoints defined in MonitoringQueryController. These classes have minimal business logic defined in them and delegate their operations to other classes within the controlloop.runtime package. The ControlLoop Runtime writes all data received on its endpoints regarding commissioning and instantiation to its database, where it can be easily accessed later by the UI.

The Runtime also communicates with the participants over DMAAP. Commissioning a control loop definition writes it to the database but also triggers priming of the definitions over DMAAP. The participants then receive those definitions and hold them in memory. Similarly, upon decommissioning, a message is sent over DMAAP to the participants to trigger de-priming.

Using DMAAP, the Runtime can send updates to the control loop definitions, change the state of control loops, receive information about participants, receive state information about control loops and effectively supervise the control loops. This data is then made available via Rest APIs that can be queried by the frontend. This is how the GUI performs its monitoring operations.

More detail on the design of the Runtime ControlLoop can be found in The CLAMP Control Loop Runtime.

2.4 DMAAP

DMaaP is a component that provides data movement services, transporting and processing data from any source to any target. It provides the capability to:

  • Support the transfer of messages between ONAP components, as well as to other components

  • Support the transfer of data between ONAP components, as well as to other components

  • Data filtering capabilities

  • Data processing capabilities

  • Data routing (file based transport)

  • Message routing (event based transport)

  • Batch and event based processing

Specifically, regarding the communication between the ControlLoop Runtime and the ControlLoop Participants, both components publish and subscribe to a specific topic, over which data and updates from the participants and control loops are sent. The ControlLoop Runtime updates the current statuses sent from the participants in the database and makes them available to the GUI over the Rest API.

2.5 The Participants

The purpose of the ControlLoop participants is to communicate with different services on behalf of the ControlLoop Runtime. As there are potentially many different services that a ControlLoop might require access to, there can be many different participants. For example, the kubernetes participant is responsible for carrying out operations on a kubernetes cluster with helm. As of the time of writing, there are three participants defined for the ControlLoop project: the policy participant, the kubernetes participant and the http participant. The participants are housed in the directory shown below in the policy-clamp repo.

_images/ParticipantsDirectory.png

The participants communicate with the Runtime over DMAAP. Tosca service template specifications, ControlLoop updates and state changes are shared with the participants via messages from runtime ControlLoop through the topic “POLICY-CLRUNTIME-PARTICIPANT”.

3. GUI Sample Flows

The primary flows from the GUI to the backend, through DMAAP and the participants are shown in the diagram below. This diagram just serves as an illustration of the scenarios that the user will experience in the GUI. You can see factually complete dialogues in System Level Dialogues.

_images/GUI-Flow.png
Control Loop Participants

A Participant is a component that acts as a bridge between the CLAMP Control Loop runtime and components such as the Policy Framework, DCAE, or a Kubernetes cluster that are taking part in control loops. It listens to DMaaP to receive messages from the CLAMP runtime and performs operations towards components that are taking part in control loops. A participant has a Control Loop Element for each control loop in which it is taking part.

The implementation of a participant may use a common Participant Intermediary library, which carries out common message and state handling for Control Loop Elements in participants. The ParticipantImplementation is the component-specific implementation of a participant, which is specifically developed for each component that wishes to take part in control loops.

_images/participants.png

The figure above shows participants for various components that may take part in control loops.

Note

The figure above is for illustration. Not all the participants mentioned above have realizations in ONAP. Some of the participants in the figure above represent a type of participant. For example, a controller participant would be written for a specific controller such as CDS and a participant for an existing system would be written towards that existing system.

The detailed implementation of the CLAMP Participant ecosystem is described on the following pages:

Participant Intermediary

The CLAMP Participant Intermediary is a common library in ONAP, which does common message and state handling for participant implementations. It provides a Java API, which participant implementations implement to receive and send messages to the CLAMP runtime and to handle Control Loop Element state.

Terminology
  • Broadcast message: a message for all participants (participantId=null and participantType=null)

  • Message to a participant: a message only for a participant (participantId and participantType properly filled)

  • MessageSender: a class that takes care of sending messages from participant-intermediary

  • GUI: graphical user interface, Postman or a Front-End Application

Inbound messages to participants
  • PARTICIPANT_REGISTER_ACK: received as a response from controlloop runtime server as an acknowledgement to ParticipantRegister message sent from a participant

  • PARTICIPANT_DEREGISTER_ACK: received as a response from controlloop runtime server as an acknowledgement to ParticipantDeregister message sent from a participant

  • CONTROL_LOOP_STATE_CHANGE: a message received from controlloop runtime server for a state change of controlloop

  • CONTROL_LOOP_UPDATE: a message received from controlloop runtime server for a controlloop update with controlloop instances

  • PARTICIPANT_UPDATE: a message received from controlloop runtime server for a participant update with tosca definitions of controlloop

  • PARTICIPANT_STATUS_REQ: A status request received from controlloop runtime server to send an immediate ParticipantStatus from all participants

Outbound messages
  • PARTICIPANT_REGISTER: is sent by a participant during startup

  • PARTICIPANT_DEREGISTER: is sent by a participant during shutdown

  • PARTICIPANT_STATUS: is sent by a participant as heartbeat with the status and health of a participant

  • CONTROLLOOP_STATECHANGE_ACK: is an acknowledgement sent by a participant as a response to ControlLoopStateChange

  • CONTROLLOOP_UPDATE_ACK: is an acknowledgement sent by a participant as a response to ControlLoopUpdate

  • PARTICIPANT_UPDATE_ACK: is an acknowledgement sent by a participant as a response to ParticipantUpdate
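As an illustration of these dialogues, a registration message published on the POLICY-CLRUNTIME-PARTICIPANT topic might look like the sketch below; the exact field set (including messageType and timestamp) is an assumption, with participantId and participantType as described in the Terminology section above.

# Illustrative PARTICIPANT_REGISTER message; field set is an assumption
messageType: PARTICIPANT_REGISTER
timestamp: '2021-10-01T10:00:00Z'
participantId:
  name: HttpParticipant0
  version: 1.0.0
participantType:
  name: org.onap.k8s.controlloop.HttpControlLoopParticipant
  version: 2.3.4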

Design of a PARTICIPANT_REGISTER message
  • A participant starts and sends a PARTICIPANT_REGISTER message

  • ParticipantRegisterListener collects the message from DMaaP

  • if the participant is not present in the DB, it saves the participant reference with status UNKNOWN to the DB

  • if the participant is present in the DB, it triggers the execution to send a PARTICIPANT_UPDATE message to the registered participant (priming message)

  • the message is built by ParticipantUpdatePublisher using Tosca Service Template data (to fill the list of ParticipantDefinition)

  • It triggers the execution to send a PARTICIPANT_REGISTER_ACK message to the registered participant

  • MessageIntercept intercepts that event; if a PARTICIPANT_UPDATE message has been sent, it adds a task to handle PARTICIPANT_REGISTER in SupervisionScanner

  • SupervisionScanner starts the monitoring for participantUpdate

Design of a PARTICIPANT_DEREGISTER message
  • A participant shuts down and sends a PARTICIPANT_DEREGISTER message

  • ParticipantDeregisterListener collects the message from DMaaP

  • if the participant is not present in the DB, nothing is done

  • if the participant is present in the DB, it triggers the execution to send a PARTICIPANT_UPDATE message to the participant (de-priming message)

  • the message is built by ParticipantUpdatePublisher with null Tosca Service Template data

  • ParticipantHandler removes the stored tosca definitions

  • It triggers the execution to send a PARTICIPANT_DEREGISTER_ACK message to the participant

  • The participant is no longer monitored.

Design of a creation of a Control Loop Type
  • If there are participants registered with CL-runtime, it triggers the execution to send a broadcast PARTICIPANT_UPDATE message

  • the message is built by ParticipantUpdatePublisher using Tosca Service Template data (to fill the list of ParticipantDefinition)

  • Participant-intermediary receives the PARTICIPANT_UPDATE message and stores the Tosca Service Template data in ParticipantHandler

Design of a deletion of a Control Loop Type
  • if there are participants registered, CL-runtime triggers the execution to send a broadcast PARTICIPANT_UPDATE message

  • the message is built by ParticipantUpdatePublisher with an empty list of ParticipantDefinition

  • It deletes the Control Loop Type from DB

  • Participant-intermediary receives the PARTICIPANT_UPDATE message and deletes the Tosca Service Template data from ParticipantHandler

Design of a creation of a Control Loop
  • CONTROL_LOOP_UPDATE message with instantiation details and UNINITIALISED state is sent to participants

  • Participant-intermediary validates the current state change

  • Participant-intermediary receives the CONTROL_LOOP_UPDATE message and sends the details of the ControlLoopElements to participants

  • Each participant performs its designated job of deployment by interacting with respective frameworks

Design of a deletion of a Control Loop
  • CONTROL_LOOP_STATE_CHANGE message with UNINITIALISED state is sent to participants

  • Participant-intermediary validates the current state change

  • Participant-intermediary receives the CONTROL_LOOP_STATE_CHANGE message and sends the details of the ControlLoopElements to participants

  • Each participant performs its designated job of undeployment by interacting with respective frameworks

Design of “issues control loop commands to control loops” - case UNINITIALISED to PASSIVE
  • CONTROL_LOOP_STATE_CHANGE message with state changed from UNINITIALISED to PASSIVE is sent to participants

  • Participant-intermediary validates the current state change

  • Participant-intermediary receives the CONTROL_LOOP_STATE_CHANGE message and sends the details of the state change to participants

  • Each participant performs its designated job of state change by interacting with respective frameworks

Design of “issues control loop commands to control loops” - case PASSIVE to UNINITIALISED
  • CONTROL_LOOP_STATE_CHANGE message with state changed from PASSIVE to UNINITIALISED is sent to participants

  • Participant-intermediary validates the current state change

  • Participant-intermediary receives the CONTROL_LOOP_STATE_CHANGE message and sends the details of the state change to participants

  • Each participant performs its designated job of state change by interacting with respective frameworks

Design of “issues control loop commands to control loops” - case PASSIVE to RUNNING
  • CONTROL_LOOP_STATE_CHANGE message with state changed from PASSIVE to RUNNING is sent to participants

  • Participant-intermediary validates the current state change

  • Participant-intermediary receives the CONTROL_LOOP_STATE_CHANGE message and sends the details of the state change to participants

  • Each participant performs its designated job of state change by interacting with respective frameworks

Design of “issues control loop commands to control loops” - case RUNNING to PASSIVE
  • CONTROL_LOOP_STATE_CHANGE message with state changed from RUNNING to PASSIVE is sent to participants

  • Participant-intermediary validates the current state change

  • Participant-intermediary receives the CONTROL_LOOP_STATE_CHANGE message and sends the details of the state change to participants

  • Each participant performs its designated job of state change by interacting with respective frameworks

Design of a PARTICIPANT_STATUS message
  • A participant sends a scheduled PARTICIPANT_STATUS message

  • This message will hold the state and healthStatus of all the participants running actively

  • The PARTICIPANT_STATUS message holds a special attribute to return Tosca definitions; this attribute is populated only in response to a PARTICIPANT_STATUS_REQ

Design of a CONTROLLOOP_UPDATE_ACK message
  • A participant sends a CONTROLLOOP_UPDATE_ACK message in response to a CONTROLLOOP_UPDATE message, one for each CL element moved to the ordered state indicated by the CONTROLLOOP_UPDATE

  • ControlLoopUpdateAckListener in CL-runtime collects the messages from DMaaP

  • It checks the status of all control loop elements and checks if the control loop is primed

  • It updates the controlloop in DB accordingly

The design of a CONTROLLOOP_STATECHANGE_ACK is similar to the design of a CONTROLLOOP_UPDATE_ACK.

HTTP Participant

The CLAMP HTTP participant receives configuration information from the CLAMP runtime, maps the configuration information to a REST URL, and makes a REST call on the URL. Typically, the HTTP participant is used together with another participant such as the Kubernetes participant, which brings up the microservice that runs a REST server. Once the microservice is up, the HTTP participant can be used to configure the microservice over its REST interface. Of course, the HTTP participant works towards any REST service; it is not restricted to REST services started by participants.

_images/http-participant.png

The HTTP participant runs a Control Loop Element to handle the REST dialogues for a particular application domain. The REST dialogues are whatever REST calls that are required to implement the functionality for the application domain.

The HTTP participant allows the REST dialogues for a Control Loop to be managed. A particular Control Loop may require many things to be configured and managed and this may require many REST dialogues to achieve.

When a control loop is initialized, the HTTP participant starts a HTTP Control Loop Element for the control loop. It reads the configuration information sent from the Control Loop Runtime and runs a HTTP client to talk to the REST endpoint that is receiving the REST requests. A HTTP participant can simultaneously manage HTTP Control Loop Elements towards multiple REST endpoints, as shown in the diagram above, where the HTTP participant is running two HTTP Control Loop Elements, one for Control Loop A and one for Control Loop B.

Configuring a Control Loop Element on the HTTP participant for a Control Loop

A Configuration Entity describes a concept that is managed by the HTTP participant. A Configuration Entity can be Created, Read, Updated, and Deleted (CRUD). The user defines the Configuration Entities that they want the HTTP Control Loop Element to manage and provides a sequence of parameterized REST commands to Create, Read, Update, and Delete each Configuration Entity.

A sample tosca template defining a http participant and a control loop element for a control loop: click here

The user configures the following properties in the TOSCA for the HTTP participant:

Property                Type   Description
baseUrl                 URL    A well formed URL pointing at the REST server that is processing the REST requests
httpHeaders             map    A map of <String, String> defining the HTTP headers to send on all REST calls
configurationEntities   map    A map of <String, ConfigurationEntity> describing the names and definitions of the Configuration Entities that are managed by this HTTP Control Loop Element

The ConfigurationEntity type is described in the following table:

Field          Type                     Description
ID             ToscaConceptIdentifier   The name and version of the Configuration Entity
restSequence   List<RestRequest>        A list of REST requests to manage the Configuration Entity

The RestRequest type is described in the following table:

Field              Type         Description
httpMethod         HttpMethod   An enum for the HTTP method {GET, PUT, POST, DELETE}
path               String       The path of the REST endpoint relative to the baseUrl
body               String       The body of the request for POST and PUT methods
expectedResponse   HttpStatus   The expected HTTP response code for the REST request
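Putting the three tables together, the properties of a single HTTP Control Loop Element might look like the sketch below. The property and field names come from the tables above; the base URL, header, path and body values are invented for the example.

# Illustrative HTTP Control Loop Element properties; endpoint values are invented
baseUrl: https://pmsh-service:8080
httpHeaders:
  Content-Type: application/json
configurationEntities:
  PMSH_Subscription:
    ID:
      name: PMSH_Subscription
      version: 1.0.0
    restSequence:
    - httpMethod: POST
      path: subscription/create      # relative to baseUrl
      body: '{"subscription": {"subscriptionName": "subscriptionA"}}'
      expectedResponse: 200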

Http participant Interactions:

The http participant interacts with the Control Loop Runtime on the northbound side via DMaaP, and with any microservice on the southbound side over http for configuration.

Control loop updates and state change requests are sent from the Control Loop Runtime to the participant via DMaaP. The participant invokes the appropriate http endpoint of the microservice based on the messages received from the Control Loop Runtime.

startPhase:

The http participant is often used along with the Kubernetes participant to configure a microservice after its deployment. This requires the Control Loop Element of the http participant to be started after the deployment of the microservice completes, which can be achieved by setting the startPhase property in the Control Loop Element of the http participant. The Control Loop Runtime starts the elements based on the startPhase value defined in the Tosca. The default startPhase value is ‘0’, and elements with startPhase ‘0’ are started before elements with startPhase ‘1’. Http Control Loop Elements are therefore defined with the value ‘1’ so that they are started in the second phase.

Http participant Workflow:

Once the participant is started, it sends a “REGISTER” event to the DMaap topic which is then consumed by the Control Loop Runtime to register this participant on the runtime database. The user can commission the tosca definitions from the Policy Gui to the Control Loop Runtime that further updates the participant with these definitions via DMaap. Once the control loop definitions are available in the runtime database, the Control Loop can be instantiated with the default state “UNINITIALISED” from the Policy Gui.

When the state of the Control Loop is changed from “UNINITIALISED” to “PASSIVE” from the Policy GUI, the HTTP participant receives the control loop state change event from the runtime and configures the microservice of the corresponding Control Loop Element over HTTP. A Configuration Entity for a microservice is associated with each Control Loop Element of the HTTP participant. The HTTP participant holds the information on the executed HTTP requests along with the responses received.

The participant is used in a generic way to configure any entity over HTTP; it does not hold information about the microservice, so it cannot unconfigure or revert the configuration when the state of the Control Loop changes from “PASSIVE” to “UNINITIALISED”.

Kubernetes Participant

The kubernetes participant receives helm chart information from the CLAMP runtime and installs the helm chart into the k8s cluster in the specified namespace. It can fetch the helm chart from remote helm repositories as well as from any of the repositories that are configured on the helm client. The participant acts as a wrapper around the helm client and creates the required resources in the k8s cluster.

The kubernetes participant also exposes REST endpoints for onboarding, installing, and uninstalling helm charts from the local chart database, which allows the user to also use this component as a standalone application for helm operations.

In the Istanbul version, the kubernetes participant supports the following methods of installing helm charts:

  • Installation of helm charts from configured helm repositories and remote repositories passed via TOSCA in CLAMP.

  • Installation of helm charts from the local chart database via the participant’s REST API.

Prerequisites for using the Kubernetes participant in the Istanbul version:
  • A running Kubernetes cluster.

    Note:

    • If the kubernetes participant is deployed outside the cluster, the config file of the k8s cluster needs to be copied to the ./kube folder of the kubernetes participant’s home directory to make the participant work with the external cluster.

    • If the participant needs additional permission to create resources on the cluster, a cluster-admin role binding can be created for the service account of the participant with the command below.

      Example: kubectl create clusterrolebinding k8s-participant-admin-binding --clusterrole=cluster-admin --serviceaccount=<k8s participant service account>

_images/k8s-participant.png
Defining a TOSCA CL definition for kubernetes participant:

A chart parameter map describes the helm chart parameters in the TOSCA template for a microservice that is used by the kubernetes participant for the deployment. A Control Loop Element in TOSCA is mapped to the kubernetes participant and holds the helm chart parameters for a microservice under the properties of the Control Loop Element.

For a sample TOSCA template defining a participant and a Control Loop Element for a control loop, click here.

Configuring a Control Loop Element on the kubernetes participant for a Control Loop

The user configures the following properties in the TOSCA template for the kubernetes participant:

  • chartId (ToscaConceptIdentifier): the name and version of the helm chart to be managed by the kubernetes participant

  • namespace (String): the namespace in the k8s cluster where the helm chart is to be installed

  • releaseName (String): the helm deployment name that identifies the installed component in the k8s cluster

  • repository (map, optional): a map of <String, String> defining the helm repository parameters for the chart

  • overrideParams (map, optional): a map of <String, String> defining the helm chart parameters to be overridden

Note: The repository property can be skipped if the helm chart is available in the local chart database or in a repository that is already configured on the helm client. The participant does a chart lookup by default.

The repository type has the following fields:

  • repoName (String): the name of the helm repository to be configured on the helm client

  • protocol (String): the protocol (http/https) used to connect to the repository URL

  • address (String): the IP address or host name of the repository

  • port (String, optional): the port where the repository service is running

  • userName (String, optional): the username to log in to the helm repository

  • password (String, optional): the password to log in to the helm repository
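
Putting the two tables together, the fragment below sketches how these properties might appear under the properties of a kubernetes participant Control Loop Element; the chart, namespace, and repository values are purely illustrative.

# Minimal sketch of kubernetes participant Control Loop Element properties
properties:
  chartId:
    name: sample-microservice        # illustrative chart name and version
    version: 1.0.0
  namespace: onap                    # namespace where the chart is installed
  releaseName: sample-microservice   # helm deployment name
  repository:                        # optional; can be skipped if the chart is already known
    repoName: chartmuseum
    protocol: http
    address: chart-museum
    port: "80"
  overrideParams:                    # optional chart parameter overrides
    replicaCount: "2"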

Kubernetes participant Interactions:

The kubernetes participant interacts with the Control Loop Runtime on the northbound via DMaaP. It interacts with the helm client on the southbound to perform various helm operations on the k8s cluster.

Control loop updates and state change requests are sent from the Control Loop Runtime to the participant via DMaaP. The participant performs the appropriate operations on the k8s cluster via the helm client based on the messages received from the Control Loop Runtime.

kubernetes participant Workflow:

Once the participant is started, it sends a “REGISTER” event to the DMaaP topic, which is then consumed by the Control Loop Runtime to register this participant in the runtime database. The user can commission the TOSCA definitions from the Policy GUI to the Control Loop Runtime, which further updates the participant with these definitions via DMaaP. Once the control loop definitions are available in the runtime database, the Control Loop can be instantiated with the default state “UNINITIALISED” from the Policy GUI.

When the state of the Control Loop is changed from “UNINITIALISED” to “PASSIVE” from the Policy GUI, the kubernetes participant receives the control loop state change event from the runtime and deploys the helm charts associated with each Control Loop Element, creating the appropriate namespace on the cluster. If the repository of the helm chart is not passed via TOSCA, the participant looks for the helm chart in the helm repositories configured on the helm client. It also performs a chart lookup in the local chart database where helm charts are onboarded via the participant’s REST API.

The participant monitors the deployed pods for the next 3 minutes, until the pods reach the RUNNING state. It holds the deployment information of the pods, including their current status, after the deployment.

When the state of the Control Loop is changed back from “PASSIVE” to “UNINITIALISED”, the participant undeploys the helm charts that are part of the Control Loop Elements from the cluster.

REST APIs on Kubernetes participant

The kubernetes participant can also be installed as a standalone application, which exposes REST endpoints for onboarding, installing, and uninstalling helm charts from the local chart database.

_images/k8s-rest.png

Download Kubernetes participant API Swagger

The CLAMP Policy Framework Participant

Control Loop Elements in the Policy Framework Participant are configured using TOSCA metadata defined for the Policy Control Loop Element type.

The Policy Framework participant receives messages through participant-intermediary common code, and handles them by invoking REST APIs towards policy-framework.

For example, when a ControlLoopUpdate message is received by the Policy participant, it contains the full ToscaServiceTemplate describing all components participating in a control loop. When the control loop element state changes from UNINITIALIZED to PASSIVE, the Policy participant triggers the creation of the policy types and policies in the Policy Framework.

When the state changes from PASSIVE to UNINITIALIZED, the Policy participant deletes the policies and policy types by invoking REST APIs towards the Policy Framework.

Run Policy Framework Participant command line using Maven

mvn spring-boot:run -Dspring-boot.run.arguments="--server.port=8082"

Run Policy Framework Participant command line using Jar

java -jar -Dserver.port=8082 -DtopicServer=localhost target/policy-clamp-participant-impl-policy-6.1.2-SNAPSHOT.jar

Distributing Policies

The Policy Framework participant uses the Policy PAP API to deploy and undeploy policies.

When a Policy Framework Control Loop Element changes from state PASSIVE to state RUNNING, the policy is deployed. When it changes from state RUNNING to state PASSIVE, the policy is undeployed.

The PDP group to which the policy should be deployed is specified in the Control Loop Element metadata, see the Policy Control Loop Element type definition. If the PDP group specified for policy deployment does not exist, an error is reported.

The PAP Policy Status API and Policy Deployment Status API are used to retrieve data to report on the deployment status of policies in Participant Status messages.

The PDP Statistics API is used to get statistics for the statistics report sent from the Policy Framework Participant back to the CLAMP runtime.

Policy Type and Policy References

The Policy Framework participant uses the policyType and policyId properties defined in the Policy Control Loop Element type to reference the policy type and policy that a Policy Control Loop Element should use.

The Policy Type and Policy specified in the policyType and policyId references must of course be available in the Policy Framework in order for them to be used in Control Loop instances. In some cases, the Policy Type and/or the Policy may already be loaded in the Policy Framework; in other cases, the Policy Framework participant must load the Policy Type and/or the Policy.
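
As an illustration, the fragment below sketches how such references might appear in the properties of a Policy Control Loop Element; the policy type and policy names are purely illustrative.

# Minimal sketch: policy type and policy references on a Policy Control Loop Element
properties:
  policyType:
    name: onap.policies.monitoring.sample   # illustrative policy type reference
    version: 1.0.0
  policyId:
    name: onap.policies.sample.instance     # illustrative policy reference
    version: 1.0.0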

Policy Type References

The Policy Participant uses the following steps for Policy Type References:

  1. The Policy Participant reads the Policy Type ID from the policyType property specified for the Control Loop Element.

  2. It checks if a Policy Type with that Policy Type ID has been specified in the ToscaServiceTemplateFragment field in the ControlLoopElement definition in the ControlLoopUpdate message, see The CLAMP Control Loop Participant Protocol.

  1. If the Policy Type has been specified, the Participant stores the Policy Type in the Policy framework. If the Policy Type is successfully stored, execution proceeds, otherwise an error is reported.

  2. If the Policy Type has not been specified, the Participant checks that the Policy Type is already in the Policy framework. If the Policy Type already exists, execution proceeds, otherwise an error is reported.

Policy References

The Policy Participant uses the following steps for Policy References:

  1. The Policy Participant reads the Policy ID from the policyId property specified for the Control Loop Element.

  2. It checks if a Policy with that Policy ID has been specified in the ToscaServiceTemplateFragment field in the ControlLoopElement definition in the ControlLoopUpdate message, see The CLAMP Control Loop Participant Protocol.

  1. If the Policy has been specified, the Participant stores the Policy in the Policy framework. If the Policy is successfully stored, execution proceeds, otherwise an error is reported.

  2. If the Policy has not been specified, the Participant checks that the Policy is already in the Policy framework. If the Policy already exists, execution proceeds, otherwise an error is reported.

Participant Simulator

The Participant Simulator can be used for simulation testing when there are no actual participant frameworks or no full deployment. It can edit the states of ControlLoopElements and Participants so that other control loop components can be verified early. All control loop components should be set up except the participant frameworks (for example, no Policy Framework components are needed); the Participant Simulator acts as the respective participant framework, and state changes can be made with the following REST APIs.

Participant Simulator API

This API allows a Participant Simulator to be started and run for test purposes.

Download Policy Participant Simulator API Swagger

PUT /onap/participantsim/v2/elements

Updates simulated control loop elements

  • Description: Updates simulated control loop elements, returning the updated control loop definition IDs

  • Consumes: [‘application/json’]

  • Produces: [‘application/json’, ‘application/yaml’]

Parameters

  • body (body): Body of a control loop element

  • X-ONAP-RequestID (header, string): RequestID for the HTTP transaction

Responses

200 - OK

201 - Created

401 - Authentication Error

403 - Authorization Error

404 - Not Found

500 - Internal Server Error

GET /onap/participantsim/v2/elements/{name}/{version}

Query details of the requested simulated control loop elements

  • Description: Queries details of the requested simulated control loop elements, returning all control loop element details

  • Produces: [‘application/json’, ‘application/yaml’]

Parameters

  • name (path, string): Control loop element name

  • version (path, string): Control loop element version

  • X-ONAP-RequestID (header, string): RequestID for the HTTP transaction

Responses

200 - OK

401 - Authentication Error

403 - Authorization Error

404 - Not Found

500 - Internal Server Error

PUT /onap/participantsim/v2/participants

Updates simulated participants

  • Description: Updates simulated participants, returning the updated control loop definition IDs

  • Consumes: [‘application/json’]

  • Produces: [‘application/json’, ‘application/yaml’]

Parameters

  • body (body): Body of a participant

  • X-ONAP-RequestID (header, string): RequestID for the HTTP transaction

Responses

200 - OK

201 - Created

401 - Authentication Error

403 - Authorization Error

404 - Not Found

500 - Internal Server Error

GET /onap/participantsim/v2/participants/{name}/{version}

Query details of the requested simulated participants

  • Description: Queries details of the requested simulated participants, returning all participant details

  • Produces: [‘application/json’, ‘application/yaml’]

Parameters

  • name (path, string): Participant name

  • version (path, string): Participant version

  • X-ONAP-RequestID (header, string): RequestID for the HTTP transaction

Responses

200 - OK

401 - Authentication Error

403 - Authorization Error

404 - Not Found

500 - Internal Server Error

Note

Policy/CLAMP was merged into the Policy Framework in the Honolulu release of ONAP. Prior to that release, it was a separate project. The release notes for CLAMP when it existed as a separate project are located below.

Pre Migration (Guilin and earlier) Release Notes for CLAMP

Warning

The CLAMP project was migrated to policy-clamp in the Policy Framework in the Honolulu release. For CLAMP release notes for the Honolulu and subsequent releases, please see the policy-clamp related release notes in the Policy Framework Release Notes.

Version: 5.1.0 (Guilin)

Release Date:

2020-11-19

New Features

The Guilin release of the Control Loop Automation Management Platform (CLAMP).

The main goal of the Guilin release was to:

  • Complete integration to CDS for Actor/Action selection.

  • SECCOM Perform Software Composition Analysis - Vulnerability tables (TSC must have).

  • SECCOM Password removal from OOM HELM charts (TSC must have) - implementation of certInitializer to get AAF certificates at OOM deployment time.

Bug Fixes

Known Issues

Security Notes

Fixed Security Issues

Known Security Issues

Known Vulnerabilities in Used Modules

CLAMP code has been formally scanned during build time using NexusIQ and all Critical vulnerabilities have been addressed, items that remain open have been assessed for risk and actions to be taken in future release.

Quick Links:

Upgrade Notes

  • The Upgrade strategy for Guilin can be found here: https://wiki.onap.org/display/DW/Frankfurt+CLAMP+Container+upgrade+strategy

  • New Docker Containers are available. The list of containers composing this release is below:
    • clamp-backend: nexus3.onap.org:10001/onap/clamp-backend 5.1.5

    • clamp-frontend: nexus3.onap.org:10001/onap/clamp-frontend 5.1.5

    • clamp-dash-es: nexus3.onap.org:10001/onap/clamp-dashboard-elasticsearch 5.0.4

    • clamp-dash-kibana: nexus3.onap.org:10001/onap/clamp-dashboard-kibana 5.0.4

    • clamp-dash-logstash: nexus3.onap.org:10001/onap/clamp-dashboard-logstash 5.0.4

Version: 5.0.7 (Frankfurt maintenance release tag 6.0.0)

Release Date:

2020-08-17

Bug Fixes

  • CLAMP-878 Clamp backend pod fails with mariaDB server error

  • CLAMP-885 CLAMP update documentation

Known Issues

Security Notes

N/A

Fixed Security Issues

Known Security Issues

Known Vulnerabilities in Used Modules

CLAMP code has been formally scanned during build time using NexusIQ and all Critical vulnerabilities have been addressed, items that remain open have been assessed for risk and actions to be taken in future release.

Quick Links:

Upgrade Notes

  • The Upgrade strategy for Frankfurt can be found here: https://wiki.onap.org/display/DW/Frankfurt+CLAMP+Container+upgrade+strategy

  • New Docker Containers are available. The list of containers composing this release is below:

    • clamp-backend-filebeat-onap: docker.elastic.co/beats/filebeat 5.5.0

    • clamp-backend: nexus3.onap.org:10001/onap/clamp-backend 5.0.7

    • clamp-frontend: nexus3.onap.org:10001/onap/clamp-frontend 5.0.7

    • clamp-dash-es: nexus3.onap.org:10001/onap/clamp-dashboard-elasticsearch 5.0.3

    • clamp-dash-kibana: nexus3.onap.org:10001/onap/clamp-dashboard-kibana 5.0.3

    • clamp-dash-logstash: nexus3.onap.org:10001/onap/clamp-dashboard-logstash 5.0.3

Version: 5.0.1 (Frankfurt)

Release Date:

2020-05-12

New Features

The Frankfurt release is the seventh release of the Control Loop Automation Management Platform (CLAMP).

The main goal of the Frankfurt release was to:

  • Implement a new Control Loop creation flow: Self-Serve Control Loop (partially done, to be continued in the next release).

  • Add Tosca policy-model support for Operational Policies definitions.

  • Add integration to CDS for Actor/Action selection.

  • Move from SearchGuard to OpenDistro.

  • Document(high level) current upgrade component strategy (TSC must have).

  • SECCOM Perform Software Composition Analysis - Vulnerability tables (TSC must have).

  • SECCOM Password removal from OOM HELM charts (TSC must have).

  • SECCOM HTTPS communication vs. HTTP (TSC must have)

Bug Fixes

Known Issues
  • CLAMP-856 CLAMP should not display all CDS workflow properties

Security Notes

Fixed Security Issues

Known Security Issues

Known Vulnerabilities in Used Modules

CLAMP code has been formally scanned during build time using NexusIQ and all Critical vulnerabilities have been addressed, items that remain open have been assessed for risk and actions to be taken in future release.

Quick Links:

Upgrade Notes

  • The Upgrade strategy for Frankfurt can be found here: https://wiki.onap.org/display/DW/Frankfurt+CLAMP+Container+upgrade+strategy

  • New Docker Containers are available. The list of containers composing this release is below:
    • clamp-backend-filebeat-onap: docker.elastic.co/beats/filebeat 5.5.0

    • clamp-backend: nexus3.onap.org:10001/onap/clamp-backend 5.0.6

    • clamp-frontend: nexus3.onap.org:10001/onap/clamp-frontend 5.0.6

    • clamp-dash-es: nexus3.onap.org:10001/onap/clamp-dashboard-elasticsearch 5.0.3

    • clamp-dash-kibana: nexus3.onap.org:10001/onap/clamp-dashboard-kibana 5.0.3

    • clamp-dash-logstash: nexus3.onap.org:10001/onap/clamp-dashboard-logstash 5.0.3

Version: 4.1.3 (El-Alto)

Release Date:

2019-10-11

New Features

The El Alto release is the sixth release of the Control Loop Automation Management Platform (CLAMP).

The main goal of the El Alto release was to:

  • Fix as many security issues as possible, especially the Angular-related issues, by moving to React.

Bug Fixes

  • The full list of implemented user stories and epics is available on El Alto CLAMP user stories done. This includes the list of bugs that were fixed during the course of this release.

Known Issues

  • CLAMP-506 Elastic Search Clamp image cannot be built anymore(SearchGuard DMCA issue)

  • Due to the uncertainties with the DMCA SearchGuard issue, the ELK stack has been removed from the El Alto release, meaning the CLAMP “Control Loop Dashboard” is not part of the El Alto release.

  • CLAMP-519 Clamp cannot authenticate to AAF(Local authentication as workaround)

Security Notes

Fixed Security Issues

Known Security Issues

Known Vulnerabilities in Used Modules

CLAMP code has been formally scanned during build time using NexusIQ and all Critical vulnerabilities have been addressed, items that remain open have been assessed for risk and actions to be taken in future release. The CLAMP open Critical security vulnerabilities and their risk assessment have been documented as part of the project in El Alto.

Quick Links:

Upgrade Notes

New Docker Containers are available.

Version: 4.1.0 (El-Alto Early Drop)

Release Date:

2019-08-19

New Features

The El Alto-Early Drop release is the fifth release of the Control Loop Automation Management Platform (CLAMP).

The main goal of the El Alto-Early Drop release was to:

  • Fix as many security issues as possible, especially the Angular-related issues, by moving to React.

Bug Fixes

  • The full list of implemented user stories and epics is available on CLAMP R5 - Early Drop RELEASE. This includes the list of bugs that were fixed during the course of this release.

Known Issues

  • CLAMP-384 Loop State in UI is not reflecting the current state

Security Notes

Fixed Security Issues

  • OJSI-166 Port 30290 exposes unprotected service outside of cluster.

Known Security Issues

Known Vulnerabilities in Used Modules

CLAMP code has been formally scanned during build time using NexusIQ and all Critical vulnerabilities have been addressed, items that remain open have been assessed for risk and actions to be taken in future release. The CLAMP open Critical security vulnerabilities and their risk assessment have been documented as part of the project in El Alto Early Drop.

Quick Links:

Upgrade Notes

New Docker Containers are available.

Version: 4.0.5 (Dublin)

Release Date:

2019-06-06

New Features

The Dublin release is the fourth release of the Control Loop Automation Management Platform (CLAMP).

The main goal of the Dublin release was to:

  • Stabilize Platform maturity by stabilizing CLAMP maturity matrix see Wiki for Dublin.

  • CLAMP supports of Policy-model based Configuration Policy

  • CLAMP supports new Policy Engine direct Rest API (no longer based on jar provided by Policy Engine)

  • CLAMP main Core/UI have been reworked, removal of security issues reported by Nexus IQ.

Bug Fixes

  • The full list of implemented user stories and epics is available on DUBLIN RELEASE. This includes the list of bugs that were fixed during the course of this release.

Known Issues

  • CLAMP-384 Loop State in UI is not reflecting the current state

Security Notes

Fixed Security Issues

  • OJSI-128 In default deployment CLAMP (clamp) exposes HTTP port 30258 outside of cluster.

  • OJSI-147 In default deployment CLAMP (cdash-kibana) exposes HTTP port 30290 outside of cluster.

  • OJSI-152 In default deployment CLAMP (clamp) exposes HTTP port 30295 outside of cluster.

Known Security Issues

Known Vulnerabilities in Used Modules

CLAMP code has been formally scanned during build time using NexusIQ and all Critical vulnerabilities have been addressed, items that remain open have been assessed for risk and actions to be taken in future release. The CLAMP open Critical security vulnerabilities and their risk assessment have been documented as part of the project in Dublin.

Quick Links:

Upgrade Notes

New Docker Containers are available.

Version: 3.0.4 - maintenance release

Release Date:

2019-04-06

New Features: none

Bug Fixes: none

Known Issues: CLAMP certificates have been renewed to extend their expiry dates

  • CLAMP-335 Update Certificates on Casablanca release.

Version: 3.0.3 - maintenance release

Release Date:

2019-02-06

New Features: none

Bug Fixes: none

Known Issues: one documentation issue was fixed; this issue does not require a new docker image:

  • CLAMP-257 User Manual for CLAMP : nothing on readthedocs.

Version: 3.0.3 (Casablanca)

Release Date:

2018-11-30

New Features

The Casablanca release is the third release of the Control Loop Automation Management Platform (CLAMP).

The main goal of the Casablanca release was to:

  • Enhance Platform maturity by improving CLAMP maturity matrix see Wiki for Casablanca.

  • CLAMP Dashboard improvements for the monitoring of active Closed Loops

  • CLAMP logs alignment on the ONAP platform.

  • CLAMP is now integrated with AAF for authentication and permissions retrieval (AAF server is pre-loaded by default with the required permissions)

  • CLAMP improvement for configuring the policies (support of Scale Out use case)

  • CLAMP main Core/UI have been reworked, removal of security issues reported by Nexus IQ on JAVA/JAVASCRIPT code (Libraries upgrade or removal/replacement when possible)

  • As a POC, the javascript coverage can now be enabled in SONAR (Disabled for now)

Bug Fixes

  • The full list of implemented user stories and epics is available on CASABLANCA RELEASE. This includes the list of bugs that were fixed during the course of this release.

Known Issues

  • None

Security Notes

CLAMP code has been formally scanned during build time using NexusIQ and all Critical vulnerabilities have been addressed, items that remain open have been assessed for risk and actions to be taken in future release. The CLAMP open Critical security vulnerabilities and their risk assessment have been documented as part of the project in Casablanca.

Quick Links:

Upgrade Notes

New Docker Containers are available; an ELK stack is also now part of CLAMP deployments.

Deprecation Notes

The CLAMP Designer Menu (in the CLAMP UI) is deprecated since Beijing; the design time is being onboarded into SDC - DCAE D.

Other

The CLAMP Dashboard is now implemented; it allows monitoring of running Closed Loops by retrieving CL events on DMaaP.

How to - Videos

Version: 2.0.2 (Beijing)

Release Date:

2018-06-07

New Features

The Beijing release is the second release of the Control Loop Automation Management Platform (CLAMP).

The main goal of the Beijing release was to:

  • Enhance Platform maturity by improving CLAMP maturity matrix see Wiki for Beijing.

  • Focus CLAMP on Closed loop runtime operations and control - this is reflected by the move of the design part to DCAE-D.

  • Introduce CLAMP Dashboard for monitoring of active Closed Loops.

  • CLAMP is integrated with MSB.

  • CLAMP has integrated SWAGGER.

  • CLAMP main Core has been reworked for improved flexibility.

Bug Fixes

  • The full list of implemented user stories and epics is available on BEIJING RELEASE. This includes the list of bugs that were fixed during the course of this release.

Known Issues

  • CLAMP-69 Deploy action does not always work.

    The “Deploy” action does not work directly after submitting it.

    Workaround:

    You have to close the CL and reopen it again; the Deploy action will then work.

Security Notes

CLAMP code has been formally scanned during build time using NexusIQ and all Critical vulnerabilities have been addressed, items that remain open have been assessed for risk and determined to be false positive. The CLAMP open Critical security vulnerabilities and their risk assessment have been documented as part of the project in Beijing.

Quick Links:

Upgrade Notes

New Docker Containers are available; an ELK stack is also now part of CLAMP deployments.

Deprecation Notes

The CLAMP Designer UI is now deprecated and unavailable; the design time is being onboarded into SDC - DCAE D.

Other

The CLAMP Dashboard is now implemented; it allows monitoring of running Closed Loops by retrieving CL events on DMaaP.

Version: 1.1.0 (Amsterdam)

Release Date:

2017-11-16

New Features

The Amsterdam release is the first release of the Control Loop Automation Management Platform (CLAMP).

The main goal of the Amsterdam release was to:

  • Support the automation of provisioning for the Closed Loops of the vFW, vDNS and vCPE through TCA.

  • Support the automation of provisioning for the Closed Loops of vVoLTE (Holmes)

  • Demonstrate complete interaction with Policy, DCAE, SDC and Holmes.

Bug Fixes

  • The full list of implemented user stories and epics is available on AMSTERDAM RELEASE. This is technically the first release of CLAMP; the previous release was the seed code contribution. As such, the defects fixed in this release were raised during the course of the release. Anything not closed is captured below under Known Issues. If you want to review the defects fixed in the Amsterdam release, refer to the Jira link above.

Known Issues
  • CLAMP-68 ResourceVF not always provisioned.

In Closed Loop -> Properties CL: when opening the popup window, the first service in the list does not show Resource-VF, even though there is a resource instance in the service in SDC.

Workaround:

If you have multiple services available (if not, create a dummy one in SDC), just click on another one and then click back on the first one in the list. The ResourceVF should now be provisioned.

  • CLAMP-69 Deploy action does not always work.

    The “Deploy” action does not work directly after submitting it.

    Workaround:

    You have to close the CL and reopen it again; the Deploy action will then work.

Security Issues

CLAMP is following the CII Best Practices Badge Program; results, including the security assessment, can be found on the project page.

Upgrade Notes

N/A

Deprecation Notes

N/A

Other


End of Release Notes

Using the Monitoring GUI

Here is an example of running the Monitoring GUI on a native Windows computer.

Environment setup

Create and run docker images from the following tar packages:

docker load -i pdp.tar
docker load -i mariadb.tar
docker load -i api.tar
docker load -i apex.tar
docker load -i pap.tar
docker load -i xacml.tar

Download the latest source from gerrit and create a tar with a command like:

tar cvf example.tar example

Download the latest drools-pdp source from gerrit.

Prepare eclipse for starting drools-pdp.

Configure the drools-pdp dependency in eclipse:

  • create a config folder inside drools-pdp/policy-management and copy feature-lifecycle.properties into this folder

    Create the config folder

    lifecycle.pdp.group=${envd:POLICY_PDP_PAP_GROUP:defaultGroup}

    dmaap.source.topics=POLICY-PDP-PAP
    dmaap.sink.topics=POLICY-PDP-PAP

    dmaap.source.topics.POLICY-PDP-PAP.servers=localhost:3904
    dmaap.source.topics.POLICY-PDP-PAP.managed=false

    dmaap.sink.topics.POLICY-PDP-PAP.servers=localhost:3904
    dmaap.sink.topics.POLICY-PDP-PAP.managed=false
    
  • update run property “classpath” of “drools.system.Main” in Eclipse

    Update run Property

    Lifecycle classpath setting

Prepare Postman for sending REST requests to components during the demo.

Import “demo.postman_collection.json” into Postman.

Import JSON in PostMan


Clean the docker environment:

# docker rm $(docker ps -aq)

Demo steps

Use docker-compose to start mariadb and message-router. Mariadb must be started in a separate console because it needs several seconds to finish startup, and the other docker startups depend on it.

# docker-compose up -d mariadb message-router

Use docker-compose to start the other components: API, PAP, APEX-PDP, XACML-PDP.

# docker-compose up -d pdp xacml pap api
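
For orientation, the fragment below is a minimal docker-compose sketch of the services used in this demo; the service and image names are assumptions and must be adapted to the images loaded from the tar packages above.

# Minimal docker-compose sketch (image names are assumptions)
services:
  mariadb:
    image: mariadb            # from mariadb.tar
  message-router:
    image: message-router     # DMaaP message router
  api:
    image: api                # from api.tar
    depends_on: [mariadb]
  pap:
    image: pap                # from pap.tar
    depends_on: [mariadb, message-router]
  pdp:
    image: apex               # from apex.tar
  xacml:
    image: xacml              # from xacml.tar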

start “drools.system.Main” in eclipse

verify that the PDPs are registered in the database

  • start the PAP statistics monitoring GUI

    java -jar client/client-monitoring/target/client-monitoring-uber-2.2.0-SNAPSHOT.jar

  • open the monitor in a browser (the endpoint can also be checked with curl)

    curl localhost:18999

Set up the PAP parameters:

Pap parameter

Input the parameters:

Set up pap parameter

Fetch the PDP lists:

Fetch Pdp Lists

With no Engine Worker started, only the health check result is shown when clicking on the instance APEX statistics:

No engine worker started

View the XACML statistics:

XACML statistics

Use Postman to send requests to the API to create a policy type, create a policy, and deploy the policy:

API_Create Policy Type
API_Create Policy
Simple Deploy Policy

Now the APEX PDP statistics data includes Engine Worker statistics, and the monitoring GUI updates automatically (every 2 minutes):

Engine worker started

Use Postman to send a request to DMaaP to add one xacml-pdp statistics message manually; the monitoring GUI updates to show the new XACML statistics:

xacml-pdp statistics update

Update XACML statistics

System Attributes: Handling, Integration, and Management of the Policy Framework

Using Policy DB Migrator

Policy DB Migrator is a set of shell scripts used to install the database tables required to run ONAP Policy Framework.

Note

Currently the Istanbul versions of the PAP and API components require db-migrator to run prior to initialization.

Package contents

Policy DB Migrator is run as a docker container and consists of the following scripts:

prepare_upgrade.sh
prepare_downgrade.sh
db-migrator

prepare_upgrade.sh is included as part of the docker image and is used to copy the upgrade sql files to the run directory. This script takes one parameter: <SCHEMA NAME>.

prepare_downgrade.sh is included as part of the docker image and is used to copy the downgrade sql files to the run directory. This script takes one parameter: <SCHEMA NAME>.

db-migrator is included as part of the docker image and is used to run either the upgrade or downgrade operation depending on user requirements. This script can take up to four parameters:

  • operation (-o): upgrade/downgrade/report

  • schema (-s): e.g. policyadmin

  • to (-t): e.g. 0800/0900

  • from (-f): e.g. 0800/0900

The container also consists of several sql files which are used to upgrade/downgrade the policy database.

The following environment variables need to be set to enable db-migrator to run and connect to the database:

  • SQL_HOST (example: mariadb)

  • SQL_DB (example: policyadmin)

  • SQL_USER (example: policy_user)

  • SQL_PASSWORD (example: policy_user)

  • POLICY_HOME (example: /opt/app/policy)
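
As an illustration, the fragment below sketches how these variables might be supplied in a docker-compose service definition for the db-migrator container; the image tag (taken from the Istanbul release notes later in this document) and the credential values are assumptions.

# Sketch of a docker-compose service definition for db-migrator (values are assumptions)
policy-db-migrator:
  image: onap/policy-db-migrator:2.3.2
  environment:
    SQL_HOST: mariadb
    SQL_DB: policyadmin
    SQL_USER: policy_user
    SQL_PASSWORD: policy_user
    POLICY_HOME: /opt/app/policy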

Prepare Upgrade

Prior to upgrading, the following script is run:

/opt/app/policy/bin/prepare_upgrade.sh <SCHEMA NAME>

This will copy the upgrade files from /home/policy/sql to $POLICY_HOME/etc/db/migration/<SCHEMA NAME>/sql/

Each individual sql file that makes up that release will be run as part of the upgrade.

Prepare Downgrade

Prior to downgrading, the following script is run:

/opt/app/policy/bin/prepare_downgrade.sh <SCHEMA NAME>

This will copy the downgrade files from /home/policy/sql to $POLICY_HOME/etc/db/migration/<SCHEMA NAME>/sql/

Each individual sql file that makes up that release will be run as part of the downgrade.

Upgrade

/opt/app/policy/bin/db-migrator -s <SCHEMA NAME> -o upgrade -f 0800 -t 0900

If the -f and -t flags are not specified, the script will attempt to run all available sql files greater than the current version.

The script will return either 1 or 0 depending on successful completion.

Downgrade

/opt/app/policy/bin/db-migrator -s <SCHEMA NAME> -o downgrade -f 0900 -t 0800

If the -f and -t flags are not specified, the script will attempt to run all available sql files less than the current version.

The script will return either 1 or 0 depending on successful completion.

Logging

After every upgrade/downgrade db-migrator runs the report operation to show the contents of the db-migrator log table.

/opt/app/policy/bin/db-migrator -s <SCHEMA NAME> -o report

Console output will also show the sql script command as in the example below:

upgrade 0100-jpapdpgroup_properties.sql
--------------
CREATE TABLE IF NOT EXISTS jpapdpgroup_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL,
PROPERTIES VARCHAR(255) NULL, PROPERTIES_KEY VARCHAR(255) NULL)

Migration Schema

The migration schema contains two tables which belong to db-migrator.

  • schema_versions - table to store the schema version currently installed by db-migrator

    name: policyadmin
    version: 0900

  • policyadmin_schema_changelog - table which stores a record of each sql file that has been run

An example record:

    ID: 1
    script: 0100-jpapdpgroup_properties.sql
    operation: upgrade
    from_version: 0
    to_version: 0800
    tag: 1309210909250800u
    success: 1
    atTime: 2021-09-13 09:09:26

  • ID: Sequence number of the operation

  • script: name of the sql script which was run

  • operation: operation type - upgrade/downgrade

  • from_version: starting version

  • to_version: target version

  • tag: tag to identify operation batch

  • success: 1 if script succeeded and 0 if it failed

  • atTime: time script was run

Partial Upgrade/Downgrade

If an upgrade or downgrade ends with a failure status (success=0) the next time an upgrade or downgrade is run it will start from the point of failure rather than re-run scripts that succeeded. This allows the user to perform a partial upgrade or downgrade depending on their requirements.

Running db-migrator

The script that runs db-migrator is part of the database configuration and is in the following directory:

oom/kubernetes/policy/resources/config/db_migrator_policy_init.sh

This script is mounted from the host file system to the policy-db-migrator container. It is set up to run an upgrade by default.

/opt/app/policy/bin/prepare_upgrade.sh ${SQL_DB}
/opt/app/policy/bin/db-migrator -s ${SQL_DB} -o upgrade
rc=$?
/opt/app/policy/bin/db-migrator -s ${SQL_DB} -o report
exit $rc

The following describes what each line does:

  • /opt/app/policy/bin/prepare_upgrade.sh ${SQL_DB}: prepares the upgrade scripts for the <SQL_DB> schema

  • /opt/app/policy/bin/db-migrator -s ${SQL_DB} -o upgrade: runs the upgrade

  • rc=$?: assigns the return code from db-migrator to a variable

  • /opt/app/policy/bin/db-migrator -s ${SQL_DB} -o report: runs the db-migrator report for the <SQL_DB> schema

  • exit $rc: exits with the return code from db-migrator

To alter how db-migrator is run, the first two lines need to be modified. The first line can be changed to call either prepare_upgrade.sh or prepare_downgrade.sh. The second line can be changed to use different input parameters for db-migrator:

  • -o: upgrade or downgrade (required)

  • -s: ${SQL_DB} (required)

  • -f: current version, e.g. 0800 (optional)

  • -t: target version, e.g. 0900 (optional)

This is an example of how a downgrade from version 0900 to version 0800 could be run:

/opt/app/policy/bin/prepare_downgrade.sh ${SQL_DB}
/opt/app/policy/bin/db-migrator -s ${SQL_DB} -o downgrade -f 0900 -t 0800
rc=$?
/opt/app/policy/bin/db-migrator -s ${SQL_DB} -o report
exit $rc

Additional Information

If the target version of your upgrade or downgrade is the same as the current version, no sql files are run.

If an upgrade is run on a database where tables already exist in the policy schema, the current schema version is set to 0800 and only sql scripts from later versions are run.

Note

It is advisable to take a backup of your database prior to running this utility. Please refer to the MariaDB documentation on how to do this.

End of Document

Policy Release Notes

Version: 9.0.1

Release Date:

2022-02-17 (Istanbul Maintenance Release #1)

Artifacts

Artifacts released (repository: Java artifact; Docker images, if applicable):

  • policy/parent: 3.4.4 (N/A)

  • policy/docker: 2.3.2 (onap/policy-jdk-alpine:2.3.2, onap/policy-jre-alpine:2.3.2, onap/policy-db-migrator:2.3.2)

  • policy/common: 1.9.2 (N/A)

  • policy/models: 2.5.2 (N/A)

  • policy/api: 2.5.2 (onap/policy-api:2.5.2)

  • policy/pap: 2.5.2 (onap/policy-pap:2.5.2)

  • policy/drools-pdp: 1.9.2 (onap/policy-drools:1.9.2)

  • policy/apex-pdp: 2.6.2 (onap/policy-apex-pdp:2.6.2)

  • policy/xacml-pdp: 2.5.2 (onap/policy-xacml-pdp:2.5.2)

  • policy/drools-applications: 1.9.2 (onap/policy-pdpd-cl:1.9.2)

  • policy/clamp: 6.1.4 (onap/policy-clamp-backend:6.1.4, onap/policy-clamp-frontend:6.1.4, onap/policy-clamp-cl-pf-ppnt:6.1.4, onap/policy-clamp-cl-k8s-ppnt:6.1.4, onap/policy-clamp-cl-http-ppnt:6.1.4, onap/policy-clamp-cl-runtime:6.1.4)

  • policy/gui: 2.1.2 (onap/policy-gui:2.1.2)

  • policy/distribution: 2.6.2 (onap/policy-distribution:2.6.2)

Bug Fixes and Necessary Enhancements

  • [POLICY-3862] - Check all code for Log4J before version 2.15.0 and upgrade if necessary

Version: 9.0.0

Release Date:

2021-11-04 (Istanbul Release)

New features

Artifacts released (repository: Java artifact; Docker images, if applicable):

  • policy/parent: 3.4.3 (N/A)

  • policy/docker: 2.3.1 (onap/policy-jdk-alpine:2.3.1, onap/policy-jre-alpine:2.3.1, onap/policy-db-migrator:2.3.1)

  • policy/common: 1.9.1 (N/A)

  • policy/models: 2.5.1 (N/A)

  • policy/api: 2.5.1 (onap/policy-api:2.5.1)

  • policy/pap: 2.5.1 (onap/policy-pap:2.5.1)

  • policy/drools-pdp: 1.9.1 (onap/policy-drools:1.9.1)

  • policy/apex-pdp: 2.6.1 (onap/policy-apex-pdp:2.6.1)

  • policy/xacml-pdp: 2.5.1 (onap/policy-xacml-pdp:2.5.1)

  • policy/drools-applications: 1.9.1 (onap/policy-pdpd-cl:1.9.1)

  • policy/clamp: 6.1.3 (onap/policy-clamp-backend:6.1.3, onap/policy-clamp-frontend:6.1.3, onap/policy-clamp-cl-pf-ppnt:6.1.3, onap/policy-clamp-cl-k8s-ppnt:6.1.3, onap/policy-clamp-cl-http-ppnt:6.1.3, onap/policy-clamp-cl-runtime:6.1.3)

  • policy/gui: 2.1.1 (onap/policy-gui:2.1.1)

  • policy/distribution: 2.6.1 (onap/policy-distribution:2.6.1)

Key Updates

Clamp -> policy Control Loop Database

  • REQ-684 - Merge CLAMP functionality into Policy Framework project
    • keep CLAMP functions in ONAP

    • reduce ONAP footprint

    • consolidate the UI (Control loop UI and policy)

    • enables code sharing and common handling for REST and TOSCA

    • introduces the Spring Framework into the Policy Framework

    • see the CLAMP documentation

  • REQ-716 - Control Loop in TOSCA LCM
    • Allows Control Loops to be defined and described in Metadata using TOSCA

    • Control loops can run on the fly on any component that implements a participant API

    • Control Loops can be commissioned into Policy/CLAMP; they can be parameterized, initiated on arbitrary participants, activated, and monitored

    • See the CLAMP TOSCA Control Loop documentation

  • CLAMP Client Policy and TOSCA Handling
    • Push existing policy(tree) into pdp

    • Handling of PDP Groups

    • Handling of Policy Types

    • Handling of TOSCA Service Templates

    • Push of Policies to PDPs

    • Support multiple PDP Groups per Policy Type

    • Tree view in Policies list

    • Integration of new TOSCA Control Loop GUI into CLAMP GUI

  • Policy Handling Improvements
    • Support delta policies in PDPs

    • Allow XACML rules to specify EventManagerService

    • Sending of notifications to Kafka & Rest in apex-pdp policies

    • External configuration of groups other than defaultGroup

    • XACML Decision support for Multiple Requests

    • Updated query parameter names and support for wildcards in APIs

    • Added new APIs for Policy Audit capabilities

    • Capability to send multiple output events from a state in APEX-PDP

  • System Attribute Improvements
    • Support for upgrade and rollback, starting with upgrade from the Honolulu release to the Istanbul release

    • Consolidated health check

    • Phase 1 of Spring Framework introduction

    • Phase 1 of Prometheus introduction, base Prometheus metrics

Known Limitations, Issues and Workarounds

System Limitations

N/A

Known Vulnerabilities

N/A

Workarounds

N/A

Security Notes

POLICY-3169 - Remove security issues reported by NEXUS-IQ
POLICY-3315 - Review license scan issues
POLICY-3327 - OOM AAF generated certificates contain invalid SANs entries
POLICY-3338 - Upgrade CDS dependency to the latest version
POLICY-3384 - Use signed certificates in the CSITs
POLICY-3431 - Review license scan issues
POLICY-3516 - Upgrade CDS dependency to the 1.1.5 version
POLICY-3590 - Address security vulnerabilities and License issues in Policy Framework
POLICY-3697 - Review license scan issues

Functional Improvements

REQ-684 - Merge CLAMP functionality into Policy Framework project
REQ-716 - Control Loop in TOSCA LCM
POLICY-1787 - Support mariadb upgrade/rollback functionality
POLICY-2535 - Query deployed policies by regex on the name, for a given policy type
POLICY-2618 - PDP-D make legacy configuration interface (used by brmsgw) an optional feature
POLICY-2769 - Support multiple PAP instances
POLICY-2865 - Add support and documentation on how an application can control what info is returned in Decision API
POLICY-2896 - Improve consolidated health check to include dependencies
POLICY-2920 - policy-clamp ui is capable to push and existing policy(tree) into pdp
POLICY-2921 - use the policy-clamp ui to manage pdp groups
POLICY-2923 - use the policy-clamp ui to manage policy types
POLICY-2930 - clamp-backend rest api to push policies to pdp
POLICY-2931 - clamp GUI to push policy to pdp
POLICY-3072 - clamp ui support multiple pdp group per policy type
POLICY-3107 - Support delta policies in PDPs
POLICY-3165 - Implement tree view in policies list
POLICY-3209 - CLAMP Component Lifecycle Management using Spring Framework
POLICY-3218 - Integrate CLAMP GUIs (Instantiation/Monitoring) in the policy-gui repo
POLICY-3227 - Implementation of context album improvements in apex-pdp
POLICY-3228 - Implement clamp backend part to add policy models api
POLICY-3229 - Implement the front end part to add tosca model
POLICY-3230 - Make default PDP-D and PDP-D-APPS work out of the box
POLICY-3260 - Allow rules to specify EventManagerService
POLICY-3324 - Design a solution for sending notifications to Kafka & Rest in apex-pdp policies
POLICY-3331 - PAP: should allow for external configuration of groups other than defaultGroup
POLICY-3340 - Create REST API’s in PAP to fetch the audit information stored in DB
POLICY-3514 - XACML Decision support for Multiple Requests
POLICY-3524 - Explore options to integrate prometheus with policy framework components
POLICY-3527 - Update query parameter names in policy audit api’s
POLICY-3533 - PDP-D: make DB port provisionable
POLICY-3538 - Export basic metrics from policy components for prometheus
POLICY-3545 - Use generic create policy url in policy/distribution
POLICY-3557 - Export basic prometheus metrics from clamp

Necessary Improvements and Bug Fixes

Necessary Improvements
POLICY-2418 - Refactor XACML PDP POJO’s into Bean objects in order to perform validation more simply
POLICY-2429 - Mark policy/engine read-only and remove ci-management jobs for it
POLICY-2542 - Improve the REST parameter validation for PAP api’s
POLICY-2767 - Improve error handling of drools-pdp when requestID in onset is not valid UUID
POLICY-2899 - Store basic audit details of deploy/undeploy operations in PAP
POLICY-2996 - Address technical debt left over from Honolulu
POLICY-3059 - Fix name of target-database property in persistence.xml files
POLICY-3062 - Update the ENTRYPOINT in APEX-PDP Dockerfile
POLICY-3078 - Support SSL communication in Kafka IO plugin of Apex-PDP
POLICY-3087 - Use sl4fj instead of EELFLogger
POLICY-3089 - Cleanup logs for success/failure consumers in apex-pdp
POLICY-3096 - Fix intermittent test failures in APEX
POLICY-3128 - Use command command-line handler across policy repos
POLICY-3129 - Refactor command-line handling across policy-repos
POLICY-3132 - Apex-pdp documentation refers to missing logos.png
POLICY-3134 - Use base image for policy-jdk docker images
POLICY-3136 - Ignore jacoco and checkstyle when in eclipse
POLICY-3143 - Remove keystore files from policy repos
POLICY-3145 - HTTPS clients should not allow self-signed certificates
POLICY-3147 - Xacml-pdp should not use RestServerParameters for client parameters
POLICY-3155 - Use python3 for CSITs
POLICY-3160 - Use “sh” instead of “ash” where possible
POLICY-3163 - Remove spaces from xacml file name
POLICY-3166 - Use newer onap base image in clamp
POLICY-3171 - Fix sporadic error in models provider junits
POLICY-3175 - Minor clean-up of drools-apps
POLICY-3182 - Update npm repo
POLICY-3189 - Create a new key class which uses the @GeneratedValue annotation
POLICY-3190 - Investigate handling of context albums in Apex-PDP for failure responses (ex - AAI)
POLICY-3198 - Remove VirtualControlLoopEvent from OperationsHistory classes
POLICY-3211 - Parameter Handling and Parameter Validation
POLICY-3214 - Change Monitoring UI implementation to use React
POLICY-3215 - Update CLAMP Module structure to Multi Module Maven approach
POLICY-3221 - wrong lifecycle state information in INFO.yaml for policy/clamp
POLICY-3222 - Use existing clamp gui to set the parameters during CL instantiation
POLICY-3235 - gui-editor-apex fails to start
POLICY-3257 - Update csit test cases to include policy status & statistics api’s
POLICY-3261 - Rules need a way to release locks
POLICY-3262 - Extract more common code from UsecasesEventManager
POLICY-3292 - Update the XACML PDP Tutorial docker compose files to point to release Honolulu images
POLICY-3298 - Add key names to IndexedXxx factory class toString() methods
POLICY-3299 - Merge policy CSITs into docker/csit
POLICY-3300 - PACKAGES UPGRADES IN DIRECT DEPENDENCIES FOR ISTANBUL
POLICY-3303 - Update the default logback.xml in APEX to log to STDOUT
POLICY-3305 - Ensure XACML PDP application/translator methods are extendable
POLICY-3306 - Fix issue where apex-pdp test is failing in gitlab
POLICY-3307 - Turn off frankfurt CSITs
POLICY-3333 - bean validator should use SerializedName
POLICY-3336 - APEX CLI/Model: multiple outputs for nextState NULL
POLICY-3337 - Move clamp documentation to policy/parent
POLICY-3366 - PDP-D: support configuration of overarching DMAAP https flag
POLICY-3367 - oom: policy-clamp-create-tables.sql: add IF NOT EXISTS clauses
POLICY-3374 - Docker registry should be defined in the parent pom
POLICY-3378 - Move groovy scripts to separate/common file
POLICY-3382 - Create document for policy chaining in drools-pdp
POLICY-3383 - Standardize policy deployment vs undeployment count in PdpStatistics
POLICY-3388 - policy/gui merge jobs failing
POLICY-3389 - Use lombok annotations instead of hashCode, equals, toString, get, set
POLICY-3404 - Rolling DB errors in log output for API, PAP, and DB components
POLICY-3419 - Remove operationshistory10 DB
POLICY-3450 - PAP should support turning on/off via configuration storing PDP statistics
POLICY-3456 - Use new RestClientParameters class instead of BusTopicParams
POLICY-3457 - Topic source should not go into fast-fail loop when dmaap is unreachable
POLICY-3459 - Document how to turn off collection of PdpStatistics
POLICY-3473 - CSIT for xacml doesn’t check dmaap msg status
POLICY-3474 - Delete extra simulators from policy-models
POLICY-3486 - policy-jdk docker image should have at least one up to date image
POLICY-3499 - Improve Apex-PDP logs to avoid printing errors for irrelevant events in multiple policy deployment
POLICY-3501 - Refactor guard actor
POLICY-3511 - Limit statistics record count
POLICY-3525 - Improve policy/pap csit automation test cases
POLICY-3528 - Update documents & postman collection for pdp statistics api’s
POLICY-3531 - PDP-X: initialization delays causes liveness checks to be missed under OOM deployment
POLICY-3532 - Add Honolulu Maintenance Release notes to read-the-docs
POLICY-3539 - Use RestServer from policy/common in apex-pdp
POLICY-3547 - METADATA tables for policy/docker db-migrator should be different than counterpart in policy/drools-pdp seed
POLICY-3556 - Document xacml REST server limitations
POLICY-3605 - Enhance dmaap simulator to support “”/topics” endpoint
POLICY-3609 - Add CSIT test case for policy consolidated health check
Bug Fixes
POLICY-2845 - Policy dockers contain GPLv3
POLICY-3066 - Stackoverflow error in APEX standalone after changing to onap java image
POLICY-3161 - OOM clamp BE/FE do not start properly when clamp db exists in the cluster
POLICY-3174 - POLICY-APEX log does not include the DATE in STDOUT
POLICY-3176 - POLICY-DROOLS log does not include the DATE in STDOUT
POLICY-3177 - POLICY-PAP log does not include the DATE in STDOUT
POLICY-3201 - fix CRITICAL weak-cryptography issues identified in sonarcloud
POLICY-3202 - PDP-D: no locking feature: service loader not locking the no-lock-manager
POLICY-3203 - Update the PDP deployment in policy window failure
POLICY-3204 - Clamp UI does not accept to deploy policy to PDP
POLICY-3205 - The submit operation in Clamp cannot be achieved successfully
POLICY-3225 - Clamp policy UI does not send right pdp command
POLICY-3226 - Clamp policy UI does 2 parallel queries to policy list
POLICY-3248 - PdpHeartbeats are not getting processed by PAP
POLICY-3301 - Apex Avro Event Schemas - Not support for colon ‘:’ character in field names
POLICY-3322 - gui-editor-apex doesn’t contain webapp correctly
POLICY-3332 - Issues around delta policy deployment in APEX
POLICY-3369 - Modify NSSI closed loop not running
POLICY-3445 - Version conflicts in spring boot dependency jars in CLAMP
POLICY-3454 - PDP-D CL APPS: swagger mismatched libraries cause telemetry shell to fail
POLICY-3468 - PDPD-CL APPS: Clean up library transitive dependencies conflicts (jackson version) from new CDS libraries
POLICY-3507 - CDS Operation Policy execution runtime error
POLICY-3526 - OOM start of policy-distribution fails (keyStore values)
POLICY-3558 - Delete Instance Properties if Instantiation is Unitialized
POLICY-3600 - Some REST calls in Clamp GUI do not include pathname
POLICY-3601 - Static web resource paths in gui-editor-apex are incorrect
POLICY-3602 - Context schema table is not populated in Apex Editor
POLICY-3603 - gui-pdp-monitoring broken in gui docker image
POLICY-3608 - LASTUPDATE column in pdp table causing Nullpointer Exception in PAP initialization
POLICY-3610 - PDP-D-APPS: audit and metric logging information is incorrect
POLICY-3611 - “API,PAP: decrease eclipselink verbosity in persistence.xml”
POLICY-3625 - Terminated PDPs are not being removed by PAP
POLICY-3637 - Policy-mariadb connection intermittently fails from PF components
POLICY-3639 - CLAMP_REST_URL environment variable is not needed
POLICY-3647 - Cannot create Instance from Policy GUI
POLICY-3649 - SSL Handshake failure between CL participants and DMaap
POLICY-3650 - Disable apex-editor and pdp-monitoring in gui docker
POLICY-3660 - DB-Migrator job completes even during failed upgrade
POLICY-3678 - K8s participants tests are skipped due to json parsing error.
POLICY-3679 - Modify pdpstatistics to prevent duplicate keys
POLICY-3680 - PDP Monitoring GUI fails to parse JSON from PAP
POLICY-3682 - Unable to list the policies in Policy UI
POLICY-3683 - clamp-fe & policy-gui: useless rolling logs
POLICY-3684 - Unable to select a PDP group & Subgroup when configuring a control loop policy
POLICY-3685 - Fix CL state change issues in runtime and participants
POLICY-3686 - Update Participant Status after Commissioning
POLICY-3687 - Continuous sending CONTROL_LOOP_STATE_CHANGE message
POLICY-3688 - Register participant in ParticipantRegister message
POLICY-3689 - Handle ParticipantRegister
POLICY-3691 - Problems Parsing Service Template
POLICY-3695 - Tosca Constraint “in_range” not supported by policy/models
POLICY-3706 - Telemetry not working in drools-pdp
POLICY-3707 - Cannot delete a loop in design state

References

For more information on the ONAP Istanbul release, please see:

  1. ONAP Home Page

  2. ONAP Documentation

  3. ONAP Release Downloads

  4. ONAP Wiki Page

Quick Links:

Version: 8.0.1

Release Date:

2021-08-12 (Honolulu Maintenance Release #1)

Artifacts

Artifacts released (repository: Java artifact; Docker images, if applicable):

  • policy/parent: 3.3.2

  • policy/common: 1.8.2

  • policy/models: 2.4.4

  • policy/api: 2.4.4 (onap/policy-api:2.4.4)

  • policy/pap: 2.4.5 (onap/policy-pap:2.4.5)

  • policy/drools-pdp: 1.8.4 (onap/policy-drools:1.8.4)

  • policy/apex-pdp: 2.5.4 (onap/policy-apex-pdp:2.5.4)

  • policy/xacml-pdp: 2.4.5 (onap/policy-xacml-pdp:2.4.5)

  • policy/drools-applications: 1.8.4 (onap/policy-pdpd-cl:1.8.4)

  • policy/distribution: 2.5.4 (onap/policy-distribution:2.5.4)

  • policy/docker: 2.2.1 (onap/policy-jdk-alpine:2.2.1, onap/policy-jre-alpine:2.2.1)

Bug Fixes and Necessary Enhancements

  • [POLICY-3062] - Update the ENTRYPOINT in APEX-PDP Dockerfile

  • [POLICY-3066] - Stackoverflow error in APEX standalone after changing to onap java image

  • [POLICY-3078] - Support SSL communication in Kafka IO plugin of Apex-PDP

  • [POLICY-3173] - APEX-PDP incorrectly reports successful policy deployment to PAP

  • [POLICY-3202] - PDP-D: no locking feature: service loader not locking the no-lock-manager

  • [POLICY-3227] - Implementation of context album improvements in apex-pdp

  • [POLICY-3230] - Make default PDP-D and PDP-D-APPS work out of the box

  • [POLICY-3248] - PdpHeartbeats are not getting processed by PAP

  • [POLICY-3301] - Apex Avro Event Schemas - No support for colon ‘:’ character in field names

  • [POLICY-3305] - Ensure XACML PDP application/translator methods are extendable

  • [POLICY-3331] - PAP: should allow for external configuration of groups other than defaultGroup

  • [POLICY-3338] - Upgrade CDS dependency to the latest version

  • [POLICY-3366] - PDP-D: support configuration of overarching DMAAP https flag

  • [POLICY-3450] - PAP should support turning on/off via configuration storing PDP statistics

  • [POLICY-3454] - PDP-D CL APPS: swagger mismatched libraries cause telemetry shell to fail

  • [POLICY-3485] - Limit statistics record count

  • [POLICY-3507] - CDS Operation Policy execution runtime error

  • [POLICY-3516] - Upgrade CDS dependency to the 1.1.5 version

Known Limitations

The APIs provided by xacml-pdp (e.g., healthcheck, statistics, decision) are always active. While PAP controls which policies are deployed to a xacml-pdp, it does not control whether or not the APIs are active. In other words, xacml-pdp will respond to decision requests, regardless of whether PAP has made it ACTIVE or PASSIVE.
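
For illustration, the sketch below shows a minimal decision request body, assuming the standard xacml-pdp Decision API endpoint (POST /policy/pdpx/v1/decision); the field values are placeholders, not taken from these notes. A request like this is answered even when PAP has set the PDP to PASSIVE:

    {
      "ONAPName": "DCAE",
      "ONAPComponent": "PolicyHandler",
      "ONAPInstance": "dcae-instance-1",
      "action": "configure",
      "resource": {
        "policy-id": "onap.restart.tca"
      }
    }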

Version: 8.0.0

Release Date:

2021-04-29 (Honolulu Release)

New features

Artifacts released:

Repository                    Java Artifact    Docker Image (if applicable)
--------------------------    -------------    ----------------------------
policy/parent                 3.3.0
policy/common                 1.8.0
policy/models                 2.4.2
policy/api                    2.4.2            onap/policy-api:2.4.2
policy/pap                    2.4.2            onap/policy-pap:2.4.2
policy/drools-pdp             1.8.2            onap/policy-drools:1.8.2
policy/apex-pdp               2.5.2            onap/policy-apex-pdp:2.5.2
policy/xacml-pdp              2.4.2            onap/policy-xacml-pdp:2.4.2
policy/drools-applications    1.8.2            onap/policy-pdpd-cl:1.8.2
policy/distribution           2.5.2            onap/policy-distribution:2.5.2
policy/docker                 2.2.1            onap/policy-jdk-alpine:2.2.1, onap/policy-jre-alpine:2.2.1

Key Updates

  • Enhanced statistics
    • PDPs provide statistics, retrievable via PAP REST API

  • PDP deployment status
    • Policy deployment API enhanced to reflect actual policy deployment status in PDPs

    • Make PAP component stateless

  • Policy support
    • Upgrade XACML 3.0 code to use new Time Extensions

    • Enhancements for interoperability between Native Policies and other policy types

    • Support for arbitrary policy types on the Drools PDP

    • Improve handling of multiple policies in APEX PDP

    • Update policy-models TOSCA handling with Control Loop Entities

  • Alternative locking mechanisms
    • Support NO locking feature in Drools-PDP

  • Security
    • Remove credentials in code from the Apex JMS plugin

  • Actor enhancements
    • Actors should give better warnings than NPE when data is missing

    • Remove old event-specific actor code

  • PDP functional assignments
    • Make PDP type configurable in drools-pdp

    • Make PDP type configurable in xacml-pdp

  • Performance improvements
    • Support policy updates between PAP and the PDPs, phase 1

  • Maintainability
    • Use ONAP base docker image

    • Remove GPLv3 components from docker containers

    • Move CSITs to Policy repos

    • Deprecate server pool feature in drools-pdp

  • PoCs
    • Merge CLAMP functionality into Policy Framework project

    • TOSCA Defined Control Loop

Known Limitations, Issues and Workarounds

System Limitations

The policy API component requires a fresh database when migrating to the Honolulu release. Therefore, upgrades require a fresh database installation. Please see the Installing or Upgrading Policy section for appropriate procedures.

Known Vulnerabilities

Workarounds
  • POLICY-2998 - Provide a script to periodically purge the statistics table

Security Notes

  • POLICY-3005 - Bump direct dependency versions
    • Upgrade org.onap.dmaap.messagerouter.dmaapclient to 1.1.12

    • Upgrade org.eclipse.persistence to 2.7.8

    • Upgrade org.glassfish.jersey.containers to 2.33

    • Upgrade com.fasterxml.jackson.module to 2.11.3

    • Upgrade com.google.re2j to 1.5

    • Upgrade org.mariadb.jdbc to 2.7.1

    • Upgrade commons-codec to 1.15

    • Upgrade com.thoughtworks.xstream to 1.4.15

    • Upgrade org.apache.httpcomponents:httpclient to 4.5.13

    • Upgrade org.apache.httpcomponents:httpcore to 4.4.14

    • Upgrade org.json to 20201115

    • Upgrade org.projectlombok to 1.18.16

    • Upgrade org.yaml to 1.27

    • Upgrade io.cucumber to 6.9.1

    • Upgrade org.apache.commons:commons-lang3 to 3.11

    • Upgrade commons-io to 2.8.0

  • POLICY-2943 - Review license scan issues
    • Upgrade com.hazelcast to 4.1.1

    • Upgrade io.netty to 4.1.58.Final

  • POLICY-2936 - Upgrade to latest version of CDS API
    • Upgrade io.grpc to 1.35.0

    • Upgrade com.google.protobuf to 3.14.0

References

For more information on the ONAP Honolulu release, please see:

  1. ONAP Home Page

  2. ONAP Documentation

  3. ONAP Release Downloads

  4. ONAP Wiki Page


Version: 7.0.0

Release Date:

2020-12-03 (Guilin Release)

New features

Artifacts released:

Repository                    Java Artifact    Docker Image (if applicable)
--------------------------    -------------    ----------------------------
policy/parent                 3.2.0
policy/common                 1.7.1
policy/models                 2.3.5
policy/api                    2.3.3            onap/policy-api:2.3.3
policy/pap                    2.3.3            onap/policy-pap:2.3.3
policy/drools-pdp             1.7.4            onap/policy-drools:1.7.4
policy/apex-pdp               2.4.4            onap/policy-apex-pdp:2.4.4
policy/xacml-pdp              2.3.3            onap/policy-xacml-pdp:2.3.3
policy/drools-applications    1.7.5            onap/policy-pdpd-cl:1.7.5
policy/distribution           2.4.3            onap/policy-distribution:2.4.3
policy/docker                 2.1.1            onap/policy-jdk-alpine:2.1.1, onap/policy-jre-alpine:2.1.1

Key Updates

  • Kubernetes integration
    • All components return with non-zero exit code in case of application failure

    • All components log to standard out (i.e., k8s logs) by default

    • Continue to write log files inside individual pods, as well

  • E2E Network Slicing
    • Added ModifyNSSI operation to SO actor

  • Consolidated health check
    • Indicate failure if there aren’t enough PDPs registered

  • Legacy operational policies
    • Removed from all components

  • OOM helm charts refactoring
    • Name standardization

    • Automated certificate generation

  • Actor Model
    • Support various use cases and provide more flexibility to Policy Designers

    • Reintroduced the “usecases” controller into drools-pdp, supporting the use cases under the revised actor architecture

  • Guard Application
    • Support policy filtering

  • Matchable Application - Support for ONAP or 3rd party components to create matchable policy types out of the box

  • Policy Lifecycle & Administration API
    • Query/Delete by policy name & version without policy type

  • Apex-PDP enhancements
    • Support multiple event & response types coming from a single endpoint

    • Standalone installation now supports Tosca-based policies

    • Legacy policy format has been removed

    • Support chaining/handling of gRPC failure responses

  • Policy Distribution
    • HPA decoders & related classes have been removed

  • Policy Engine
    • Deprecated

Known Limitations, Issues and Workarounds

System Limitations

The policy API component requires a fresh database when migrating to the Guilin release. Therefore, upgrades require a fresh database installation. Please see the Installing or Upgrading Policy section for appropriate procedures.

Known Vulnerabilities
  • POLICY-2463 - In APEX Policy javascript task logic, JSON.stringify causing stackoverflow exceptions

Workarounds
  • POLICY-2463 - Use the stringify method of the execution context

Security Notes

  • POLICY-2878 - Dependency upgrades
    • Upgrade com.fasterxml.jackson to 2.11.1

  • POLICY-2387 - Dependency upgrades
    • Upgrade org.json to 20200518

    • Upgrade com.google.re2j to 1.4

    • Upgrade com.thoughtworks.xstream to 1.4.12

    • Upgrade org.eclipse.persistence to 2.2.1

    • Upgrade org.apache.httpcomponents to 4.5.12

    • Upgrade org.projectlombok to 1.18.12

    • Upgrade org.slf4j to 1.7.30

    • Upgrade org.codehaus.plexus to 3.3.0

    • Upgrade com.h2database to 1.4.200

    • Upgrade io.cucumber to 6.1.2

    • Upgrade org.assertj to 3.16.1

    • Upgrade com.openpojo to 0.8.13

    • Upgrade org.mockito to 3.3.3

    • Upgrade org.awaitility to 4.0.3

    • Upgrade org.onap.aaf.authz to 2.1.21

  • POLICY-2668 - Dependency upgrades
    • Upgrade org.java-websocket to 1.5.1

  • POLICY-2623 - Remove log4j dependency

  • POLICY-1996 - Dependency upgrades
    • Upgrade org.onap.dmaap.messagerouter.dmaapclient to 1.1.11

References

For more information on the ONAP Guilin release, please see:

  1. ONAP Home Page

  2. ONAP Documentation

  3. ONAP Release Downloads

  4. ONAP Wiki Page


Version: 6.0.1

Release Date:

2020-08-21 (Frankfurt Maintenance Release #1)

Artifacts

Artifacts released:

Repository                    Java Artifact    Docker Image (if applicable)
--------------------------    -------------    ----------------------------
policy/drools-applications    1.6.4            onap/policy-pdpd-cl:1.6.4

Bug Fixes

Security Notes

Fixed Security Issues

  • [POLICY-2678] - policy/engine tomcat upgrade for CVE-2020-11996

Version: 6.0.0

Release Date:

2020-06-04 (Frankfurt Release)

New features

Artifacts released:

Repository                    Java Artifact    Docker Image (if applicable)
--------------------------    -------------    ----------------------------
policy/parent                 3.1.3
policy/common                 1.6.5
policy/models                 2.2.6
policy/api                    2.2.4            onap/policy-api:2.2.4
policy/pap                    2.2.3            onap/policy-pap:2.2.3
policy/drools-pdp             1.6.3            onap/policy-drools:1.6.3
policy/apex-pdp               2.3.2            onap/policy-apex-pdp:2.3.2
policy/xacml-pdp              2.2.2            onap/policy-xacml-pdp:2.2.2
policy/drools-applications    1.6.4            onap/policy-pdpd-cl:1.6.4
policy/engine                 1.6.4            onap/policy-pe:1.6.4
policy/distribution           2.3.2            onap/policy-distribution:2.3.2
policy/docker                 2.0.1            onap/policy-jdk-alpine:2.0.1, onap/policy-jre-alpine:2.0.1, onap/policy-jdk-debian:2.0.1, onap/policy-jre-debian:2.0.1

Summary

New features include policy update notifications, native policy support, streamlined health check for the Policy Administration Point (PAP), configurable pre-loading/pre-deployment of policies, new APIs (e.g. to create one or more Policies with a single call), new experimental PDP monitoring GUI, and enhancements to all three PDPs: XACML, Drools, APEX.

Common changes in all policy components

  • Upgraded all policy components to Java 11.

  • Logback file can now be loaded using an OOM configmap.
    • If needed, the logback file can be loaded as a configmap during the OOM deployment. For this, just put the logback.xml file in the corresponding config directory in the OOM charts.

  • TOSCA changes:
    • “tosca_definitions_version” is now “tosca_simple_yaml_1_1_0”

    • typeVersion → type_version, int → integer, bool → boolean, String → string, Map → map, List → list (see the sketch after this list)

  • SupportedPolicyTypes are now removed from the pdp status message.
    • All PDPs now send the PdpGroup to which they belong in the registration message.

    • SupportedPolicyTypes are no longer sent.

  • Native Policy Support
    • Each PDP engine has its own native policy language. A new Policy Type, onap.policies.Native, was created and is supported by each PDP engine so that native policies can be carried in TOSCA.
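
As a minimal sketch of the revised TOSCA conventions above, a service template now carries the new header value and the renamed type_version key. The policy name, policy type, and empty properties here are placeholders only:

    {
      "tosca_definitions_version": "tosca_simple_yaml_1_1_0",
      "topology_template": {
        "policies": [
          {
            "onap.restart.tca": {
              "type": "onap.policies.monitoring.cdap.tca.hi.lo.app",
              "type_version": "1.0.0",
              "version": "1.0.0",
              "properties": {}
            }
          }
        ]
      }
    }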

POLICY-PAP

  • Policy Update Notifications
    • PAP now generates notifications via the DMaaP Message Router when policies are successfully or unsuccessfully deployed (or undeployed) from all relevant PDPs.

  • PAP API to fetch Policy deployment status
    • Clients can poll the PAP API to find out when policies have been successfully or unsuccessfully deployed to the PDPs.

  • Removing supportedPolicyTypes from PdpStatus
    • PDPs are assigned to a PdpGroup based on the group mentioned in the heartbeat. Previously, this was done based on the supportedPolicyTypes.

  • Support policy types with wild-cards, Preload wildcard supported type in PAP

  • PAP should NOT make a PDP passive if it cannot deploy a policy.
    • If a PDP fails to deploy one or more policies specified in a PDP-UPDATE message, PAP will undeploy those policies that failed to deploy to the PDP. This entails removing the policies from the Pdp Group(s), issuing new PDP-UPDATE requests, and updating the notification tracking data.

    • Also, a PDP is re-registered if it is not found in the DB during heartbeat processing.

  • Consolidated health check in PAP
    • PAP can now report the health check for ALL the policy components. The PDPs’ health is tracked based on heartbeats, and the other components’ REST APIs are used for healthcheck.

    • “healthCheckRestClientParameters” (REST parameters for API and Distribution healthcheck) are added to the startup config file in PAP.

  • PDP statistics from PAP
    • All PDPs send statistics data as part of the heartbeat. PAP reads this and saves this data to the database, and this statistics data can be accessed from the monitoring GUI.

  • PAP API for Create or Update PdpGroups
    • A new API is now available just for creating/updating PDP Groups. Policies cannot be added or updated during PDP Group create/update operations; a separate API exists for that, so any policies provided in the create/update group request are ignored. Supported policy types are defined during PDP Group creation and cannot be updated once the group is created. Refer to this for details: https://github.com/onap/policy-parent/blob/master/docs/pap/pap.rst#id8

  • PAP API to deploy policies to PdpGroups
    • A new API is introduced to deploy policies on specific PDPGroups (see the sketch below). Each subgroup includes an “action” property, which indicates whether the policies are being added (POST) to the subgroup, deleted (DELETE) from the subgroup, or whether the subgroup’s entire set of policies is being replaced (PATCH) by a new set of policies.
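
The sketch below shows what a deployment request body for this API could look like. The group name, PDP type, and policy identifiers are placeholders, and the field names reflect our reading of the PAP deployment API rather than a verbatim example from these notes:

    {
      "groups": [
        {
          "name": "defaultGroup",
          "deploymentSubgroups": [
            {
              "pdpType": "xacml",
              "action": "POST",
              "policies": [
                { "name": "onap.restart.tca", "version": "1.0.0" }
              ]
            }
          ]
        }
      ]
    }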

POLICY-API

  • A new simplified API to create one or more policies in one call.
    • This simplified API doesn’t require policy type id & policy type version to be part of the URL.

    • The simple URI “policy/api/v1/policies” with a POST input body takes in a ToscaServiceTemplate with the policies in it (see the sketch at the end of this section).

  • List of Preloaded policy types are made configurable
    • Until El Alto, the list of pre-loaded policy types was hardcoded. Now it is configurable: the list can be specified in the startup config file for the API component under “preloadPolicyTypes”. The list is ignored if the DB already contains one or more policy types.

  • Preload default policies for ONAP components
    • The ability to configure the preloading of initial default policies into the system upon startup.

  • A lot of improvements to the API code and validations corresponding to the changes in policy-models.
    • Creating the same policyType/policy repeatedly, without any change in the request body, always succeeds with a 200 response.

    • Any change in the request body must be accompanied by a new version; if a change is posted without a version change, a 406 error response is returned.

  • There are known versioning issues in Policy Type handling.
    • https://jira.onap.org/browse/POLICY-2377 covers the versioning issues in Policy. Basically, multiple versions of a Policy Type cannot be handled in TOSCA, so in Frankfurt only the latest version of a policy type is examined. This will be looked into further in Guilin.

  • Cascaded GET of PolicyTypes and Policies
    • Fetching/GET PolicyType now returns all of the referenced/parent policyTypes and dataTypes as well.

    • Fetching/GET Policy now allows specifying a mode.

    • By default the mode is “BARE”, which returns only the requested Policy in the response. If the mode is specified as “REFERENCED”, all of the referenced/parent policyTypes and dataTypes are returned as well.

  • The /deployed API is removed from policy/api
    • This runtime administration task of checking the deployment status of a policy is now possible via PAP.

  • Changes related to design and support of TOSCA Compliant Policy Types for the operational and guard policy models.
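
As a sketch of the simplified creation API described at the top of this section, a single POST of a ToscaServiceTemplate to policy/api/v1/policies can create several policies in one call. The policy names and the policy type below are placeholders:

    {
      "tosca_definitions_version": "tosca_simple_yaml_1_1_0",
      "topology_template": {
        "policies": [
          {
            "sample.policy.one": {
              "type": "onap.policies.monitoring.cdap.tca.hi.lo.app",
              "type_version": "1.0.0",
              "version": "1.0.0",
              "properties": {}
            }
          },
          {
            "sample.policy.two": {
              "type": "onap.policies.monitoring.cdap.tca.hi.lo.app",
              "type_version": "1.0.0",
              "version": "1.0.0",
              "properties": {}
            }
          }
        ]
      }
    }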

POLICY-DISTRIBUTION

  • From the Frankfurt release, the policy-distribution component uses the APIs provided by Policy-API and Policy-PAP for the creation of policy types and policies, and for the deployment of policies.
    • Note: policies are deployed using the PAP endpoint only if the “deployPolicies” field in the startup config file is true (see the sketch below).

  • Policy/engine & apex-pdp dependencies are removed from policy-distribution.
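
As a sketch of the flag noted above, the distribution startup config file carries a boolean “deployPolicies” field. The surrounding configuration is omitted here, and the exact placement of the field within the file is an assumption:

    {
      "deployPolicies": true
    }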

POLICY-APEX-PDP

  • Changed the JavaScript executor from Nashorn to Rhino as part of the Java 11 upgrade.
  • APEX supports multiple policy deployment in Frankfurt.
    • Up through El Alto, APEX-PDP could take in only a single ToscaPolicy: when PAP sent a list of Tosca Policies in a PdpUpdate, only the first was taken and deployed in APEX. This is fixed in Frankfurt, and APEX can now deploy a list of Tosca Policies into the engine together.

    • Note: there shouldn’t be any duplicates in the deployed policies (e.g., the same input/output parameter names, or the same event/task names).

    • For example, if 3 policies are deployed and the 2nd and 3rd share a duplicate (say, the same input, task, or any such concept), then APEX-PDP ignores the 3rd policy and executes only the 1st and 2nd policies. APEX-PDP also responds to PAP with a message saying that “only Policy 1 and 2 are deployed. Others failed due to duplicate concept”.

  • Context retainment during policy upgrade.
    • In APEX-PDP, context is referred to by the apex concept ‘contextAlbum’. When there is no major version change in the upgraded policy to be deployed, the existing context of the currently running policy is retained. When the upgraded policy starts running, it has access to this context as well.

    • For example, Policy A v1.1 is currently deployed to APEX. It has a contextAlbum named HeartbeatContext, and heartbeats are currently added to the HeartbeatContext based on events coming in to the policy execution. Now, when Policy A v1.2 (with some other changes and the same HeartbeatContext) is deployed, Policy A v1.1 is replaced by Policy A v1.2 in the APEX engine, but the content in HeartbeatContext is retained for Policy A v1.2.

  • APEX-PDP now specifies which PdpGroup it belongs to.
    • Up through El Alto, PAP assigned each PDP to a PDP group based on the supportedPolicyTypes it sent in the heartbeat. In Frankfurt, each PDP comes up stating which PdpGroup it belongs to, and this is sent to PAP in the heartbeat. PAP then registers the PDP in the PdpGroup specified by the PDP. If no group name is specified, PAP assigns the PDP to the defaultGroup by default. SupportedPolicyTypes are no longer sent to PAP by the PDP.

    • In APEX-PDP, this can be specified in the startup config file (OnapPfConfig.json): “pdpGroup”: “<groupName>” is added under “pdpStatusParameters” in the config file (see the sketch at the end of this section).

  • APEX-PDP now sends PdpStatistics data in heartbeat.
    • Apex now sends the PdpStatistics data in every heartbeat sent to PAP. PAP saves this data to the database, and this statistics data can be accessed from the monitoring GUI.

  • Removed “content” section from ToscaPolicy properties in APEX.
    • Up through El Alto, APEX-specific policy information was placed under properties|content in a ToscaPolicy. The “content” level is now removed and the information is kept directly under properties. So, the ToscaPolicy structure has APEX-specific policy information in properties|engineServiceParameters, properties|eventInputParameters, and properties|eventOutputParameters.

  • Passing parameters from ApexConfig to policy logic.
  • GRPC support for APEX-CDS interaction.
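
A minimal sketch of the PdpGroup setting described above in the APEX startup config file (OnapPfConfig.json); all other parameters are omitted, and the group name is a placeholder:

    {
      "pdpStatusParameters": {
        "pdpGroup": "defaultGroup"
      }
    }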

POLICY-XACML-PDP

  • Added an optional parameter to the Decision API for monitor decisions that returns abbreviated results.
    • Return only an abbreviated list of policies (e.g. metadata Policy Id and Version) without the actual contents of the policies (e.g. the Properties).

  • XACML PDP now supports PASSIVE_MODE.

  • Added support to return a status and error if the pdp-x fails to load a policy.

  • Changed optimization Decision API application to support “closest matches” algorithm.

  • Changed xacml-pdp to report the PDP group defined in the XacmlPdpParameters config file as part of the heartbeat. Also, removed supportedPolicyType from the pdpStatus message.

  • Designed the TOSCA policy model for SDNC naming policies and implemented an application that translates it to a working policy available via the Decision API.

  • XACML PDP support for Control Loop Coordination
    • Added policies for SON and PCI to support each blocking the other, with test cases and appropriate requests

  • Extended PDP-X capabilities so that it can load and enforce native XACML policies deployed from PAP.

POLICY-DROOLS-PDP

  • Support for PDP-D in offline mode to support locked deployments. This is the default ONAP installation.

  • Parameterize maven repository URLs for easier CI/CD integration.

  • Support for Tosca Compliant Operational Policies.

  • Support for TOSCA Compliant Native Policies that allows creation and deployment of new drools-applications.

  • Validation of Operational and Native Policies against their policy type.

  • Support for a generic Drools-PDP docker image to host any type of application.

  • Experimental Server Pool feature that supports multiple active Drools PDP hosts.

POLICY-DROOLS-APPLICATIONS

  • Removal of DCAE ONSET alarm duplicates (with different request IDs).

  • Support of a new controller (frankfurt) that supports the ONAP use cases under the new actor architecture.

  • Deprecated the “usecases” controller supporting the use cases under the legacy actor architecture.

  • Deleted the unsupported “amsterdam” controller related projects.

Known Limitations, Issues and Workarounds

System Limitations

The policy API component requires a fresh database when migrating to the Frankfurt release. Therefore, upgrades require a fresh database installation. Please see the Installing or Upgrading Policy section for appropriate procedures.

Known Vulnerabilities
  • POLICY-2463 - In APEX Policy javascript task logic, JSON.stringify causing stackoverflow exceptions

  • POLICY-2487 - policy/api hangs in loop if preload policy does not exist

Workarounds
  • POLICY-2463 - Parse incoming object using JSON.Parse() or cast the object to a String

Security Notes

  • POLICY-2221 - Password removal from helm charts

  • POLICY-2064 - Allow overriding of keystore and truststore in policy helm charts

  • POLICY-2381 - Dependency upgrades
    • Upgrade drools 7.33.0

    • Upgrade jquery to 3.4.1 in jquery-ui

    • Upgrade snakeyaml to 1.26

    • Upgrade org.infinispan infinispan-core 10.1.5.Final

    • upgrade io.netty 4.1.48.Final

    • exclude org.glassfish.jersey.media jersey-media-jaxb artifact

    • Upgrade com.fasterxml.jackson.core 2.10.0.pr3

    • Upgrade org.org.jgroups 4.1.5.Final

    • Upgrade commons-codec 20041127.091804

    • Upgrade com.github.ben-manes.caffeine 2.8.0

Version: 5.0.2

Release Date:

2020-08-24 (El Alto Maintenance Release #1)

New Features

Artifacts released:

Repository                    Java Artifact    Docker Image (if applicable)
--------------------------    -------------    ----------------------------
policy/api                    2.1.3            onap/policy-api:2.1.3
policy/pap                    2.1.3            onap/policy-pap:2.1.3
policy/drools-pdp             1.5.3            onap/policy-drools:1.5.3
policy/apex-pdp               2.2.3            onap/policy-apex-pdp:2.2.3
policy/xacml-pdp              2.1.3            onap/policy-xacml-pdp:2.1.3
policy/drools-applications    1.5.4            onap/policy-pdpd-cl:1.5.4
policy/engine                 1.5.3            onap/policy-pe:1.5.3
policy/distribution           2.2.2            onap/policy-distribution:2.2.2
policy/docker                 1.4.0            onap/policy-common-alpine:1.4.0, onap/policy/base-alpine:1.4.0

Bug Fixes

  • [PORTAL-760] - Access to Policy portal is impossible

  • [POLICY-2107] - policy/distribution license issue in resource needs to be removed

  • [POLICY-2169] - SDC client interface change caused compile error in policy distribution

  • [POLICY-2171] - Upgrade elalto branch models and drools-applications

  • [POLICY-1509] - Investigate Apex org.python.jython-standalone.2.7.1

  • [POLICY-2062] - APEX PDP logs > 4G filled local storage

Security Notes

Fixed Security Issues

Version: 5.0.1

Release Date:

2019-10-24 (El Alto Release)

New Features

Artifacts released:

Repository                    Java Artifact    Docker Image (if applicable)
--------------------------    -------------    ----------------------------
policy/parent                 3.0.1
policy/common                 1.5.2
policy/models                 2.1.4
policy/api                    2.1.2            onap/policy-api:2.1.2
policy/pap                    2.1.2            onap/policy-pap:2.1.2
policy/drools-pdp             1.5.2            onap/policy-drools:1.5.2
policy/apex-pdp               2.2.1            onap/policy-apex-pdp:2.2.1
policy/xacml-pdp              2.1.2            onap/policy-xacml-pdp:2.1.2
policy/drools-applications    1.5.3            onap/policy-pdpd-cl:1.5.3
policy/engine                 1.5.2            onap/policy-pe:1.5.2
policy/distribution           2.2.1            onap/policy-distribution:2.2.1
policy/docker                 1.4.0            onap/policy-common-alpine:1.4.0, onap/policy/base-alpine:1.4.0

The El Alto release for POLICY delivered the following Epics. For a full list of stories and tasks delivered in the El Alto release, refer to JiraPolicyElAlto.

  • [POLICY-1727] - This epic covers technical debt left over from Dublin

  • POLICY-969 Docker improvement in policy framework modules

  • POLICY-1074 Fix checkstyle warnings in every repository

  • POLICY-1121 RPM build for Apex

  • POLICY-1223 CII Silver Badging Requirements

  • POLICY-1600 Clean up hash code equality checks, cloning and copying in policy-models

  • POLICY-1646 Replace uses of getCanonicalName() with getName()

  • POLICY-1652 Move PapRestServer to policy/common

  • POLICY-1732 Enable maven-checkstyle-plugin in apex-pdp

  • POLICY-1737 Upgrade oParent 2.0.0 - change daily jobs to staging jobs

  • POLICY-1742 Make HTTP return code handling configurable in APEX

  • POLICY-1743 Make URL configurable in REST Requestor and REST Client

  • POLICY-1744 Remove topic.properties and incorporate into overall properties

  • POLICY-1770 PAP REST API for PDPGroup Healthcheck

  • POLICY-1771 Boost policy/api JUnit code coverage

  • POLICY-1772 Boost policy/xacml-pdp JUnit code coverage

  • POLICY-1773 Enhance the policy/xacml-pdp S3P Stability and Performance tests

  • POLICY-1784 Better Handling of “version” field value with clients

  • POLICY-1785 Deploy same policy with a new version simply adds to the list

  • POLICY-1786 Create a simple way to populate the guard database for testing

  • POLICY-1791 Address Sonar issues in new policy repos

  • POLICY-1795 PAP: bounced apex and xacml pdps show deleted instance in pdp status through APIs.

  • POLICY-1800 API|PAP components use different version formats

  • POLICY-1805 Build up stability test for api component to follow S3P requirements

  • POLICY-1806 Build up S3P performance test for api component

  • POLICY-1847 Add control loop coordination as a preloaded policy type

  • POLICY-1871 Change policy/distribution to support ToscaPolicyType & ToscaPolicy

  • POLICY-1881 Upgrade policy/distribution to latest SDC artifacts

  • POLICY-1885 Apex-pdp: Extend CLIEditor to generate policy in ToscaServiceTemplate format

  • POLICY-1898 Move apex-pdp & distribution documents to policy/parent

  • POLICY-1942 Boost policy/apex-pdp JUnit code coverage

  • POLICY-1953 Create addTopic taking BusTopicParams instead of Properties in policy/endpoints

  • Additional items delivered with the release.

  • POLICY-1637 Remove “version” from PdpGroup

  • POLICY-1653 Remove isNullVersion() method

  • POLICY-1966 Fix more sonar issues in policy drools

  • POLICY-1988 Generate El Alto AAF Certificates

  • [POLICY-1823] - This epic covers the work to develop features that will be deployed dark in El Alto.

  • POLICY-1762 Create CDS API model implementation

  • POLICY-1763 Create CDS Actor

  • POLICY-1899 Update optimization xacml application to support more flexible Decision API

  • POLICY-1911 XACML PDP must be able to retrieve Policy Type from API

Bug Fixes

The following bug fixes have been deployed with this release:

  • [POLICY-1671] - policy/engine JUnit tests now take over 30 minutes to run

  • [POLICY-1725] - XACML PDP returns 500 vs 400 for bad syntax JSON

  • [POLICY-1793] - API|MODELS: Retrieving Legacy Operational Policy as a Tosca Policy with wrong version

  • [POLICY-1795] - PAP: bounced apex and xacml pdps show deleted instance in pdp status through APIs.

  • [POLICY-1800] - API|PAP components use different version formats

  • [POLICY-1802] - Apex-pdp: context album is mandatory for policy model to compile

  • [POLICY-1803] - PAP should undeploy policies when subgroup is deleted

  • [POLICY-1807] - Latest version is always returned when using the endpoint to retrieve all versions of a particular policy

  • [POLICY-1808] - API|PAP|PDP-X [new] should publish docker images with the following tag X.Y-SNAPSHOT-latest

  • [POLICY-1810] - API: support “../deployed” REST API (URLs) for legacy policies

  • [POLICY-1811] - The endpoint of retrieving the latest version of TOSCA policy does not return the latest one, especially when there are double-digit versions

  • [POLICY-1818] - APEX does not allow arbitrary Kafka parameters to be specified

  • [POLICY-1838] - Drools-pdp error log is missing data in ErrorDescription field

  • [POLICY-1839] - Policy Model currently needs to be escaped

  • [POLICY-1843] - Decision API not returning monitoring policies when calling api with policy-type

  • [POLICY-1844] - XACML PDP does not update policy statistics

  • [POLICY-1858] - Usecase DRL - named query should not be invoked

  • [POLICY-1859] - Drools rules should not timeout when given timeout=0 - should be treated as infinite

  • [POLICY-1872] - brmsgw fails building a jar - trafficgenerator dependency does not exist

  • [POLICY-2047] - TOSCA Policy Types should be map not a list

  • [POLICY-2060] - ToscaProperties object is missing metadata field

  • [POLICY-2156] - missing field in create VF module request to SO

Security Notes

Fixed Security Issues

Known Security Issues

Known Vulnerabilities in Used Modules

POLICY code has been formally scanned during build time using NexusIQ, and all Critical vulnerabilities have been addressed; items that remain open have been assessed for risk and determined to be false positives. The POLICY open Critical security vulnerabilities and their risk assessment have been documented as part of the project (El Alto Release).


Known Issues

The following known issues will be addressed in a future release:

  • [POLICY-1276] - JRuby interpreter shutdown fails on second and subsequent runs

  • [POLICY-1291] - Maven Error when building Apex documentation in Windows

  • [POLICY-1578] - PAP pushPolicies.sh in startup fails due to race condition in some environments

  • [POLICY-1832] - API|PAP: data race condition seem to appear sometimes when creating and deploying policy

  • [POLICY-2103] - policy/distribution may need to re-synch if SDC gets reinstalled

  • [POLICY-2062] - APEX PDP logs > 4G filled local storage

  • [POLICY-2080] - drools-pdp JUnit fails intermittently in feature-active-standby-management

  • [POLICY-2111] - PDP-D APPS: AAF Cadi conflicts with Aether libraries

  • [POLICY-2158] - PAP loses synchronization with PDPs

  • [POLICY-2159] - PAP console (legacy): cannot edit policies with GUI

Version: 4.0.0

Release Date:

2019-06-26 (Dublin Release)

New Features

Artifacts released:

Repository                    Java Artifact    Docker Image (if applicable)
--------------------------    -------------    ----------------------------
policy/parent                 2.1.0
policy/common                 1.4.0
policy/models                 2.0.2
policy/api                    2.0.1            onap/policy-api:2.0.1
policy/pap                    2.0.1            onap/policy-pap:2.0.1
policy/drools-pdp             1.4.0            onap/policy-drools:1.4.0
policy/apex-pdp               2.1.0            onap/policy-apex-pdp:2.1.0
policy/xacml-pdp              2.1.0            onap/policy-xacml-pdp:2.1.0
policy/drools-applications    1.4.2            onap/policy-pdpd-cl:1.4.2
policy/engine                 1.4.1            onap/policy-pe:1.4.1
policy/distribution           2.1.0            onap/policy-distribution:2.1.0
policy/docker                 1.4.0            onap/policy-common-alpine:1.4.0, onap/policy/base-alpine:1.4.0

The Dublin release for POLICY delivered the following Epics. For a full list of stories and tasks delivered in the Dublin release, refer to JiraPolicyDublin.

  • [POLICY-1068] - This epic covers the work to cleanup, enhance, fix, etc. any Control Loop based code base.
    • POLICY-1195 Separate model code from drools-applications into other repositories

    • POLICY-1367 Spike - Experimentation for management of Drools templates and Operational Policies

    • POLICY-1397 PDP-D: NOOP Endpoints Support to test Operational Policies.

    • POLICY-1459 PDP-D [Control Loop] : Create a Control Loop flavored PDP-D image

  • [POLICY-1069] - This epic covers the work to harden the codebase for the Policy Framework project.
    • POLICY-1007 Remove Jackson from policy framework components

    • POLICY-1202 policy-engine & apex-pdp are using different version of eclipselink

    • POLICY-1250 Fix issues reported by sonar in policy modules

    • POLICY-1368 Remove hibernate from policy repos

    • POLICY-1457 Use Alpine in base docker images

  • [POLICY-1072] - This epic covers the work to support S3P Performance criteria.
    • S3P Performance related items

  • [POLICY-1171] - Enhance CLC Facility
    • POLICY-1173 High-level specification of coordination directives

  • [POLICY-1220] - This epic covers the work to support S3P Security criteria
    • POLICY-1538 Upgrade Elasticsearch to 6.4.x to clear security issue

  • [POLICY-1269] - R4 Dublin - ReBuild Policy Infrastructure
    • POLICY-1270 Policy Lifecycle API RESTful HealthCheck/Statistics Main Entry Point

    • POLICY-1271 PAP RESTful HealthCheck/Statistics Main Entry Point

    • POLICY-1272 Create the S3P JMeter tests for API, PAP, XACML (2nd Gen)

    • POLICY-1273 Policy Type Application Design Requirements

    • POLICY-1436 XACML PDP RESTful HealthCheck/Statistics Main Entry Point

    • POLICY-1440 XACML PDP RESTful Decision API Main Entry Point

    • POLICY-1441 Policy Lifecycle API RESTful Create/Read Main Entry Point for Policy Types

    • POLICY-1442 Policy Lifecycle API RESTful Create/Read Main Entry Point for Concrete Policies

    • POLICY-1443 PAP Dmaap PDP Register/UnRegister Main Entry Point

    • POLICY-1444 PAP Dmaap Policy Deploy/Undeploy Policies Main Entry Point

    • POLICY-1445 XACML PDP upgrade to xacml 2.0.0

    • POLICY-1446 Policy Lifecycle API RESTful Delete Main Entry Point for Policy Types

    • POLICY-1447 Policy Lifecycle API RESTful Delete Main Entry Point for Concrete Policies

    • POLICY-1449 XACML PDP Dmaap Register/UnRegister Functionality

    • POLICY-1451 XACML PDP Dmaap Deploy/UnDeploy Functionality

    • POLICY-1452 Apex PDP Dmaap Register/UnRegister Functionality

    • POLICY-1453 Apex PDP Dmaap Deploy/UnDeploy Functionality

    • POLICY-1454 Drools PDP Dmaap Register/UnRegister Functionality

    • POLICY-1455 Drools PDP Dmaap Deploy/UnDeploy Functionality

    • POLICY-1456 Policy Architecture and Roadmap Documentation

    • POLICY-1458 Create S3P JMeter Tests for Policy API

    • POLICY-1460 Create S3P JMeter Tests for PAP

    • POLICY-1461 Create S3P JMeter Tests for Policy XACML Engine (2nd Generation)

    • POLICY-1462 Create S3P JMeter Tests for Policy SDC Distribution

    • POLICY-1471 Policy Application Designer - Develop Guard and Control Loop Coordination Policy Type application

    • POLICY-1474 Modifications of Control Loop Operational Policy to support new Policy Lifecycle API

    • POLICY-1515 Prototype Policy Lifecycle API Swagger Entry Points

    • POLICY-1516 Prototype the Policy Decision API

    • POLICY-1541 PAP REST API for PDPGroup Query, Statistics & Delete

    • POLICY-1542 PAP REST API for PDPGroup Deployment, State Management & Health Check

  • [POLICY-1399] - This epic covers the work to support model drive control loop design as defined by the Control Loop Subcommittee
    • Model drive control loop related items

  • [POLICY-1404] - This epic covers the work to support the CCVPN Use Case for Dublin
    • POLICY-1405 Develop SDNC API for trigger bandwidth

  • [POLICY-1408] - This epic covers the work done with the Casablanca release
    • POLICY-1410 List Policy API

    • POLICY-1413 Dashboard enhancements

    • POLICY-1414 Push Policy and DeletePolicy API enhancement

    • POLICY-1416 Model enhancements to support CLAMP

    • POLICY-1417 Resiliency improvements

    • POLICY-1418 PDP APIs - make ClientAuth optional

    • POLICY-1419 Better multi-role support

    • POLICY-1420 Model enhancement to support embedded JSON

    • POLICY-1421 New audit data for push/delete

    • POLICY-1422 Enhanced encryption

    • POLICY-1423 Save original model file

    • POLICY-1427 Controller Logging Feature

    • POLICY-1489 PDP-D: Nested JSON Event Filtering support with JsonPath

    • POLICY-1499 Mdc Filter Feature

  • [POLICY-1438] - This epic covers the work to support 5G OOF PCI Use Case
    • POLICY-1463 Functional code changes in Policy for OOF SON use case

    • POLICY-1464 Config related aspects for OOF SON use case

  • [POLICY-1450] - This epic covers the work to support the Scale Out Use Case.
    • POLICY-1278 AAI named-queries are being deprecated and should be replaced with custom-queries

    • POLICY-1545 E2E Automation - Parse the newly added model ids from operation policy

  • Additional items delivered with the release.
    • POLICY-1159 Move expectException to policy-common/utils-test

    • POLICY-1176 Work on technical debt introduced by CLC POC

    • POLICY-1266 A&AI Modularity

    • POLICY-1274 further improvement in PSSD S3P test

    • POLICY-1401 Build onap.policies.Monitoring TOSCA Policy Template

    • POLICY-1465 Support configurable Heap Memory Settings for JVM processes

Bug Fixes

The following bug fixes have been deployed with this release:

  • [POLICY-1241] - Test failure in drools-pdp if JAVA_HOME is not set

  • [POLICY-1289] - Apex only considers 200 response codes as successful result codes

  • [POLICY-1437] - Fix issues in FileSystemReceptionHandler of policy-distribution component

  • [POLICY-1501] - policy-engine JUnit tests are not independent

  • [POLICY-1627] - APEX does not support specification of a partitioner class for Kafka

Security Notes

Fixed Security Issues

  • [OJSI-117] - In default deployment POLICY (nexus) exposes HTTP port 30236 outside of cluster.

  • [OJSI-157] - In default deployment POLICY (policy-api) exposes HTTP port 30240 outside of cluster.

  • [OJSI-118] - In default deployment POLICY (policy-apex-pdp) exposes HTTP port 30237 outside of cluster.

  • [OJSI-184] - In default deployment POLICY (brmsgw) exposes HTTP port 30216 outside of cluster.

Known Security Issues

Known Vulnerabilities in Used Modules

POLICY code has been formally scanned during build time using NexusIQ, and all Critical vulnerabilities have been addressed; items that remain open have been assessed for risk and determined to be false positives. The POLICY open Critical security vulnerabilities and their risk assessment have been documented as part of the project (Dublin Release).


Known Issues

The following known issues will be addressed in a future release:

  • [POLICY-1795] - PAP: bounced apex and xacml pdps show deleted instance in pdp status through APIs.

  • [POLICY-1810] - API: ensure that the REST APISs (URLs) are supported and consistent regardless the type of policy: operational, guard, tosca-compliant.

  • [POLICY-1277] - policy config takes too long time to become retrievable in PDP

  • [POLICY-1378] - add support to append value into policyScope while one policy could be used by several services

  • [POLICY-1650] - Policy UI doesn’t show left menu or any content

  • [POLICY-1671] - policy/engine JUnit tests now take over 30 minutes to run

  • [POLICY-1725] - XACML PDP returns 500 vs 400 for bad syntax JSON

  • [POLICY-1793] - API|MODELS: Retrieving Legacy Operational Policy as a Tosca Policy with wrong version

  • [POLICY-1800] - API|PAP components use different version formats

  • [POLICY-1802] - Apex-pdp: context album is mandatory for policy model to compile

  • [POLICY-1808] - API|PAP|PDP-X [new] should publish docker images with the following tag X.Y-SNAPSHOT-latest

  • [POLICY-1818] - APEX does not allow arbitrary Kafka parameters to be specified

  • [POLICY-1276] - JRuby interpreter shutdown fails on second and subsequent runs

  • [POLICY-1803] - PAP should undeploy policies when subgroup is deleted

  • [POLICY-1291] - Maven Error when building Apex documentation in Windows

  • [POLICY-1872] - brmsgw fails building a jar - trafficgenerator dependency does not exist

Version: 3.0.2

Release Date:

2019-03-31 (Casablanca Maintenance Release #2)

The following items were deployed with the Casablanca Maintenance Release:

Bug Fixes

  • [POLICY-1522] - Policy doesn’t send “payload” field to APPC

Security Fixes

  • [POLICY-1538] - Upgrade Elasticsearch to 6.4.x to clear security issue

License Issues

  • [POLICY-1433] - Remove proprietary licenses in PSSD test CSAR

Known Issues

The following known issue will be addressed in a future release.

  • [POLICY-1650] - Policy UI doesn’t show left menu or any content

A workaround for this issue consists of bypassing the Portal UI when accessing the Policy UI. See the PAP recipes for the specific procedure.

Version: 3.0.1

Release Date:

2019-01-31 (Casablanca Maintenance Release)

The following items were deployed with the Casablanca Maintenance Release:

New Features

  • [POLICY-1221] - Policy distribution application to support HTTPS communication

  • [POLICY-1222] - Apex policy PDP to support HTTPS Communication

Bug Fixes

Version: 3.0.0

Release Date:

2018-11-30 (Casablanca Release)

New Features

The Casablanca release for POLICY delivered the following Epics. For a full list of stories and tasks delivered in the Casablanca release, refer to JiraPolicyCasablanca.

  • [POLICY-701] - This epic covers the work to integrate Policy into the SDC Service Distribution

The policy team introduced a new application into the framework that provides integration of the Service Distribution Notifications from SDC to Policy.

  • [POLICY-719] - This epic covers the work to build the Policy Lifecycle API

  • [POLICY-726] - This epic covers the work to distribute policy from the PAP to the PDPs into the ONAP platform

  • [POLICY-876] - This epic covers the work to re-build how the PAP organizes the PDPs into groups.

The policy team did some forward-looking spike work towards re-building the Software Architecture.

  • [POLICY-809] - Maintain and implement performance

  • [POLICY-814] - 72 hour stability testing (component and platform)

The policy team made enhancements to the Drools PDP to further support S3P Performance. For the new Policy SDC Distribution application and the newly ingested Apex PDP, the team established S3P performance standards and performed 72-hour stability tests.

  • [POLICY-824] - maintain and implement security

The policy team established an AAF Root Certificate for HTTPS communication and CADI/AAF integration into the MVP applications. In addition, many Java dependencies were upgraded to clear CLM security issues.

  • [POLICY-840] - Flexible control loop coordination facility.

Work towards a POC for control loop coordination policies was implemented.

  • [POLICY-841] - Covers the work required to support HPA

Enhancements were made to support the HPA use case through the use of the new Policy SDC Service Distribution application.

  • [POLICY-842] - This epic covers the work to support the Auto Scale Out functional requirements

Enhancements were made to support the Scale Out Use Case, to enforce new guard policies, and to use the updated SO and A&AI APIs.

  • [POLICY-851] - This epic covers the work to bring in the Apex PDP code

A new Apex PDP engine was ingested into the platform, and work was done to ensure the code cleared CLM security issues, Sonar issues, and checkstyle checks.

  • [POLICY-1081] - This epic covers the contribution for the 5G OOF PCI Optimization use case.

Policy template changes were submitted that supported the 5G OOF PCI optimization use case.

  • [POLICY-1182] - Covers the work to support CCVPN use case

Policy template changes were submitted that supported the CCVPN use case.

Bug Fixes

The following bug fixes have been deployed with this release:

  • [POLICY-799] - Policy API Validation Does Not Validate Required Parent Attributes in the Model

  • [POLICY-869] - Control Loop Drools Rules should not have exceptions as well as die upon an exception

  • [POLICY-872] - investigate potential race conditions during rules version upgrades during call loads

  • [POLICY-878] - pdp-d: feature-pooling disables policy-controllers preventing processing of onset events

  • [POLICY-909] - get_ZoneDictionaryDataByName class type error

  • [POLICY-920] - Hard-coded path in junit test

  • [POLICY-921] - XACML Junit test cannot find property file

  • [POLICY-1083] - Mismatch in action cases between Policy and APPC

Security Notes

POLICY code has been formally scanned during build time using NexusIQ, and all Critical vulnerabilities have been addressed; items that remain open have been assessed for risk and determined to be false positives. The POLICY open Critical security vulnerabilities and their risk assessment have been documented as part of the project (Casablanca Release).


Known Issues

Version: 2.0.0

Release Date:

2018-06-07 (Beijing Release)

New Features

The Beijing release for POLICY delivered the following Epics. For a full list of stories and tasks delivered in the Beijing release, refer to JiraPolicyBeijing.

  • [POLICY-390] - This epic covers the work to harden the Policy platform software base (incl 50% JUnit coverage)
    • POLICY-238 policy/drools-applications: clean up maven structure

    • POLICY-336 Address Technical Debt

    • POLICY-338 Address JUnit Code Coverage

    • POLICY-377 Policy Create API should validate input matches DCAE microservice template

    • POLICY-389 Cleanup Jenkins CI/CD processes

    • POLICY-449 Policy API + Console : Common Policy Validation

    • POLICY-568 Integration with org.onap AAF project

    • POLICY-610 Support vDNS scale out for multiple times in Beijing release

  • [POLICY-391] - This epic covers the work to support Release Planning activities
    • POLICY-552 ONAP Licensing Scan - Use Restrictions

  • [POLICY-392] - Platform Maturity Requirements - Performance Level 1
    • POLICY-529 Platform Maturity Performance - Drools PDP

    • POLICY-567 Platform Maturity Performance - PDP-X

  • [POLICY-394] - This epic covers the work required to support a Policy developer environment in which Policy Developers can create, update policy templates/rules separate from the policy Platform runtime platform.
    • POLICY-488 pap should not add rules to official template provided in drools applications

  • [POLICY-398] - This epic covers the body of work involved in supporting policy that is platform specific.
    • POLICY-434 need PDP /getConfig to return an indicator of where to find the config data - in config.content versus config field

  • [POLICY-399] - This epic covers the work required to policy enable Hardware Platform Enablement
    • POLICY-622 Integrate OOF Policy Model into Policy Platform

  • [POLICY-512] - This epic covers the work to support Platform Maturity Requirements - Stability Level 1
    • POLICY-525 Platform Maturity Stability - Drools PDP

    • POLICY-526 Platform Maturity Stability - XACML PDP

  • [POLICY-513] - Platform Maturity Requirements - Resiliency Level 2
    • POLICY-527 Platform Maturity Resiliency - Policy Engine GUI and PAP

    • POLICY-528 Platform Maturity Resiliency - Drools PDP

    • POLICY-569 Platform Maturity Resiliency - BRMS Gateway

    • POLICY-585 Platform Maturity Resiliency - XACML PDP

    • POLICY-586 Platform Maturity Resiliency - Planning

    • POLICY-681 Regression Test Use Cases

  • [POLICY-514] - This epic covers the work to support Platform Maturity Requirements - Security Level 1
    • POLICY-523 Platform Maturity Security - CII Badging - Project Website

  • [POLICY-515] - This epic covers the work to support Platform Maturity Requirements - Scalability Level 1
    • POLICY-531 Platform Maturity Scalability - XACML PDP

    • POLICY-532 Platform Maturity Scalability - Drools PDP

    • POLICY-623 Docker image re-design

  • [POLICY-516] - This epic covers the work to support Platform Maturity Requirements - Manageability Level 1
    • POLICY-533 Platform Maturity Manageability L1 - Logging

    • POLICY-534 Platform Maturity Manageability - Instantiation < 1 hour

  • [POLICY-517] - This epic covers the work to support Platform Maturity Requirements - Usability Level 1
    • POLICY-535 Platform Maturity Usability - User Guide

    • POLICY-536 Platform Maturity Usability - Deployment Documentation

    • POLICY-537 Platform Maturity Usability - API Documentation

  • [POLICY-546] - R2 Beijing - Various enhancements requested by clients to the way we handle TOSCA models.

Bug Fixes

The following bug fixes have been deployed with this release:

  • [POLICY-484] - Extend election handler run window and clean up error messages

  • [POLICY-494] - POLICY EELF Audit.log not in ECOMP Standards Compliance

  • [POLICY-501] - Fix issues blocking election handler and add directed interface for opstate

  • [POLICY-509] - Add IntelliJ file to .gitignore

  • [POLICY-510] - Do not enforce hostname validation

  • [POLICY-518] - StateManagement creation of EntityManagers.

  • [POLICY-519] - Correctly initialize the value of allSeemsWell in DroolsPdpsElectionHandler

  • [POLICY-629] - Fixed a bug on editor screen

  • [POLICY-684] - Fix regex for brmsgw dependency handling

  • [POLICY-707] - ONAP-PAP-REST unit tests fail on first build on clean checkout

  • [POLICY-717] - Fix a bug in checking required fields if the object has include function

  • [POLICY-734] - Fix Fortify Header Manipulation Issue

  • [POLICY-743] - Fixed data name since its name was changed on server side

  • [POLICY-753] - Policy Health Check failed with multi-node cluster

  • [POLICY-765] - junit test for guard fails intermittently

Security Notes

POLICY code has been formally scanned during build time using NexusIQ, and all Critical vulnerabilities have been addressed; items that remain open have been assessed for risk and determined to be false positives. The POLICY open Critical security vulnerabilities and their risk assessment have been documented as part of the project.


Known Issues

The following known issues will be addressed in a future release:

  • [POLICY-522] - PAP REST APIs undesired HTTP response body for 500 responses

  • [POLICY-608] - xacml components : remove hardcoded secret key from source code

  • [POLICY-764] - Policy Engine PIP Configuration JUnit Test fails intermittently

  • [POLICY-776] - OOF Policy TOSCA models are not correctly rendered

  • [POLICY-799] - Policy API Validation Does Not Validate Required Parent Attributes in the Model

  • [POLICY-801] - fields mismatch for OOF flavorFeatures between implementation and wiki

  • [POLICY-869] - Control Loop Drools Rules should not have exceptions as well as die upon an exception

  • [POLICY-872] - investigate potential race conditions during rules version upgrades during call loads

Version: 1.0.2

Release Date:

2018-01-18 (Amsterdam Maintenance Release)

Bug Fixes

The following fixes were deployed with the Amsterdam Maintenance Release:

  • [POLICY-486] - pdp-x api pushPolicy fails to push latest version

Version: 1.0.1

Release Date:

2017-11-16 (Amsterdam Release)

New Features

The Amsterdam release continued evolving the design driven architecture of and functionality for POLICY. The following is a list of Epics delivered with the release. For a full list of stories and tasks delivered in the Amsterdam release, refer to JiraPolicyAmsterdam.

  • [POLICY-31] - Stabilization of Seed Code
    • POLICY-25 Replace any remaining openecomp reference by onap

    • POLICY-32 JUnit test code coverage

    • POLICY-66 PDP-D Feature mechanism enhancements

    • POLICY-67 Rainy Day Decision Policy

    • POLICY-93 Notification API

    • POLICY-158 policy/engine: SQL injection Mitigation

    • POLICY-269 Policy API Support for Rainy Day Decision Policy and Dictionaries

  • [POLICY-33] - This epic covers the body of work involved in deploying the Policy Platform components
    • POLICY-40 MSB Integration

    • POLICY-124 Integration with oparent

    • POLICY-41 OOM Integration

    • POLICY-119 PDP-D: noop sinks

  • [POLICY-34] - This epic covers the work required to support a Policy developer environment in which Policy Developers can create, update policy templates/rules separate from the policy Platform runtime platform.
    • POLICY-57 VF-C Actor code development

    • POLICY-43 Amsterdam Use Case Template

    • POLICY-173 Deployment of Operational Policies Documentation

  • [POLICY-35] - This epic covers the body of work involved in supporting policy that is platform specific.
    • POLICY-68 TOSCA Parsing for nested objects for Microservice Policies

  • [POLICY-36] - This epic covers the work required to capture policy during VNF on-boarding.

  • [POLICY-37] - This epic covers the work required to capture, update, extend Policy(s) during Service Design.
    • POLICY-64 CLAMP Configuration and Operation Policies for vFW Use Case

    • POLICY-65 CLAMP Configuration and Operation Policies for vDNS Use Case

    • POLICY-48 CLAMP Configuration and Operation Policies for vCPE Use Case

    • POLICY-63 CLAMP Configuration and Operation Policies for VOLTE Use Case

  • [POLICY-38] - This epic covers the work required to support service distribution by SDC.

  • [POLICY-39] - This epic covers the work required to support the Policy Platform during runtime.
    • POLICY-61 vFW Use Case - Runtime

    • POLICY-62 vDNS Use Case - Runtime

    • POLICY-59 vCPE Use Case - Runtime

    • POLICY-60 VOLTE Use Case - Runtime

    • POLICY-51 Runtime Policy Update Support

    • POLICY-328 vDNS Use Case - Runtime Testing

    • POLICY-324 vFW Use Case - Runtime Testing

    • POLICY-320 VOLTE Use Case - Runtime Testing

    • POLICY-316 vCPE Use Case - Runtime Testing

  • [POLICY-76] - This epic covers the body of work involved in supporting R1 Amsterdam Milestone Release Planning Milestone Tasks.
    • POLICY-77 Functional Test case definition for Control Loops

    • POLICY-387 Deliver the released policy artifacts

Bug Fixes
  • This is technically the first release of POLICY; the previous release was the seed code contribution. As such, the defects fixed in this release were raised during the course of the release. Anything not closed is captured below under Known Issues. For a list of defects fixed in the Amsterdam release, refer to JiraPolicyAmsterdam.

Known Issues
  • The operational policy template has been tested with the vFW, vCPE, vDNS and VOLTE use cases. Additional development may/may not be required for other scenarios.

  • For vLBS Use Case, the following steps are required to setup the service instance:
    • Create a Service Instance via VID.

    • Create a VNF Instance via VID.

    • Preload SDNC with topology data used for the actual VNF instantiation (both base and DNS scaling modules). NOTE: you may want to set “vlb_name_0” in the base VF module data to something unique. This is the vLB server name that DCAE will pass to Policy during closed loop. If the same name is used multiple times, the Policy name-query to AAI will show multiple entries, one for each occurrence of that vLB VM name in the OpenStack zone. Note that this is not a limitation; typically, server names in a domain are supposed to be unique.

    • Instantiate the base VF module (vLB, vPacketGen, and one vDNS) via VID. NOTE: the name of the VF module MUST start with Vfmodule_. The same name MUST appear in the SDNC preload of the base VF module topology. We’ll relax this naming requirement in the Beijing Release.

    • Run heatbridge from the Robot VM using Vfmodule_ _ as the stack name (it is the actual stack name in OpenStack)

    • Populate AAI with a dummy VF module for vDNS scaling.

Security Issues
  • None at this time

Other
  • None at this time

End of Release Notes