Policy Framework Architecture

Abstract

This document describes the ONAP Policy Framework. It lays out the architecture of the framework and shows the APIs provided to other components that interwork with the framework. It describes the implementation of the framework, mapping out the components, software structure, and execution ecosystem of the framework.

TOSCA Policy Primer

This page gives a short overview of how Policy is modelled in the TOSCA Simple Profile in YAML.

TOSCA defines three concepts for Policy: Policy Type, Policy, and Trigger.

_images/TOSCAPolicyConcepts.svg

Policy Type

A Policy Type is used to specify the types of policies that may be used in a service. The parameter definitions for a policy of this type, the entity types to which it applies, and what triggers policies of this type may be specified.

The types of policies that are used in a service are defined in the policy_types section of the TOSCA service template as a Policy Type. More formally, TOSCA defines a Policy Type as an artifact that “defines a type of requirement that affects or governs an application or service’s topology at some stage of its life cycle, but is not explicitly part of the topology itself”. In the definition of a Policy Type in TOSCA, you specify:

  • its properties, which define the type of configuration parameters that the policy takes

  • its targets, which define the node types and/or groups to which the policy type applies

  • its triggers, which specify the conditions in which policies of this type are fired
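
As an illustration, a minimal Policy Type definition might look like the following sketch (the type name and property are invented for this example and are not part of any ONAP model):

tosca_definitions_version: tosca_simple_yaml_1_1_0
policy_types:
  com.example.ExamplePolicyType:        # illustrative policy type name
    derived_from: tosca.policies.Root
    version: 1.0.0
    description: An example policy type
    properties:
      threshold:                        # a configuration parameter policies of this type take
        type: integer
        required: true
    targets: [ tosca.nodes.Compute ]    # node types/groups the policy type applies to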

Policy

A Policy is used to specify the actual instances of policies that are used in a service. The parameter values of the policy and the actual entities to which it applies may be specified.

The policies that are used in a service are defined in the policies section of the TOSCA topology template as a Policy. More formally, TOSCA defines a Policy as an artifact that “defines a policy that can be associated with a TOSCA topology or top-level entity definition”. In the definition of a Policy in TOSCA, you specify:

  • its properties, which define the values of the configuration parameters that the policy takes

  • its targets, which define the node types and/or group types to which the policy applies

Note that policy triggers are specified on the Policy Type definition and are not specified on the Policy itself.
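
A matching Policy definition in the topology template might be sketched as follows (again, all names are invented for illustration):

tosca_definitions_version: tosca_simple_yaml_1_1_0
topology_template:
  policies:
    - com.example.my-policy:              # illustrative policy name
        type: com.example.ExamplePolicyType   # the Policy Type this policy instantiates
        version: 1.0.0
        properties:
          threshold: 10                   # value for the property defined in the type
        targets: [ my_compute_node ]      # the actual entity this policy acts on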

Trigger

A Trigger defines an event, condition, and action that is used to initiate execution of a policy associated with it. The definition of the Trigger allows specification of the type of events to trigger on, the filters on those events, conditions and constraints for trigger firing, the action to perform on triggering, and various other parameters.

The triggers that are used in a service are defined as reusable modules in the TOSCA service template as a Trigger. More formally, TOSCA defines a Trigger as an artifact that “defines the event, condition and action that is used to “trigger” a policy it is associated with”. In the definition of a Trigger in TOSCA, you specify:

  • its event_type, which defines the name of the event that fires the policy

  • its schedule, which defines the time interval in which the trigger is active

  • its target_filter, which defines specific filters for firing such as specific characteristics of the nodes or relations for which the trigger should or should not fire

  • its condition, which defines extra conditions on the incoming event for firing the trigger

  • its constraint, which defines extra conditions on the incoming event for not firing the trigger

  • its period, which defines the period to use for evaluating conditions and constraints

  • its evaluations, which defines the number of evaluations that must be performed over the period to assert the condition or constraint exists

  • its method, the method to use for evaluation of conditions and constraints

  • its action, the workflow or operation to invoke when the trigger fires

Note that how a Trigger actually works with a Policy is not clear from the specification.
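
The sketch below shows how a trigger definition using these keywords might look (the event name, values, and action are invented for illustration, and, as noted above, the precise runtime semantics are not clear from the specification):

triggers:
  downtime_trigger:                              # illustrative trigger name
    event_type: com.example.DowntimeBreached     # the event that fires the policy
    schedule:                                    # time interval in which the trigger is active
      start_time: '2020-01-01T00:00:00Z'
      end_time: '2021-01-01T00:00:00Z'
    target_filter:
      node: my_vpn_service                       # only fire for this node
    condition: { downtime: { greater_than: 5 } } # extra condition on the incoming event
    period: 60 sec                               # period for evaluating conditions/constraints
    evaluations: 1                               # evaluations required over the period
    method: average                              # evaluation method
    action:
      mitigate: {}                               # workflow/operation to invoke when fired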

End of Document

1. Overview

The ONAP Policy Framework is a comprehensive policy design, deployment, and execution environment. The Policy Framework is the decision making component in an ONAP system. It allows you to specify, deploy, and execute the governance of the features and functions in your ONAP system, be they closed loop, orchestration, or more traditional open loop use case implementations. The Policy Framework is the component that is the source of truth for all policy decisions.

One of the most important goals of the Policy Framework is to support Policy Driven Operational Management during the execution of ONAP control loops at run time. In addition, use case implementations such as orchestration and control benefit from the ONAP Policy Framework because they can use the capabilities of the framework to manage and execute their policies rather than embedding the decision making in their applications.

The Policy Framework is deployment agnostic: it manages Policy Execution (in PDPs) and Enforcement (in PEPs) regardless of how the PDPs and PEPs are deployed. This allows policy execution and enforcement to be deployed in a manner that meets the performance requirements of a given application or use case. In one deployment, policy execution could be deployed in a separate executing entity in a Docker container. In another, policy execution could be co-deployed with an application to increase performance. An example of co-deployment is the Drools PDP Control Loop image, which is a Docker image that combines the ONAP Drools use case application and dependencies with the Drools PDP engine.

The ONAP Policy Framework architecture separates policies from the platform that is supporting them. The framework supports development, deployment, and execution of any type of policy in ONAP. The Policy Framework is metadata (model) driven so that policy development, deployment, and execution is as flexible as possible and can support modern rapid development ways of working such as DevOps. A metadata driven approach also allows the amount of programmed support required for policies to be reduced or ideally eliminated.

We have identified five capabilities as being essential for the framework:

  1. Most obviously, the framework must be capable of being triggered by an event or invoked, and making decisions at run time.

  2. It must be deployment agnostic, capable of managing policies for various Policy Decision Points (PDPs) or policy engines.

  3. It must be metadata driven, allowing policies to be deployed, modified, upgraded, and removed as the system executes.

  4. It must provide a flexible model driven policy design approach for policy type programming and specification of policies.

  5. It must be extensible, allowing straightforward integration of new PDPs, policy formats, and policy development environments.

Another important aim of the architecture of a model driven policy framework is that it enables much more flexible policy specification. The ONAP Policy Framework complies with the TOSCA modelling approach for policies; see the TOSCA Policy Primer for more information on how policies are modeled in TOSCA.

  1. A Policy Type describes the properties, targets, and triggers that the policy for a feature can have. A Policy type is implementation independent. It is the metadata that specifies:

  • the configuration data that the policy can take. The Policy Type describes each property that a policy of a given type can take. A Policy Type definition also allows the default value, optionality, and the ranges of properties to be defined.

  • the targets such as network element types, functions, services, or resources on which a policy of the given type can act.

  • the triggers such as the event type, filtered event, scheduled trigger, or conditions that can activate a policy of the given type.

Policy Types are hierarchical: a Policy Type can inherit from a parent Policy Type, inheriting the properties, targets, and triggers of its parent. Policy Types are developed by domain experts in consultation with the developers that implement the logic and rules for the Policy Type.

  2. A Policy is defined using a Policy Type. The Policy defines:

  • the values for each property of the policy type

  • the specific targets (network elements, functions, services, resources) on which this policy will act

  • the specific triggers that trigger this policy.

  3. A Policy Type Implementation, or Raw Policy, is the logic that implements the policy. It is implemented by a skilled policy developer in consultation with domain experts. The implementation has software that reads the Policy Type and parses the incoming configuration properties. The software has domain logic that is triggered when one of the triggers described in the Policy Type occurs. The software logic executes and acts on the targets specified in the Policy Type.

For example, a Policy Type could be written to describe how to manage Service Level Agreements for VPNs. The VPN Policy Type can be used to create VPN policies for a bank network, a car dealership network, or a university with many campuses. The Policy Type has two parameters:

  • The maximumDowntime parameter allows the maximum downtime allowed per year to be specified

  • The mitigationStrategy parameter allows one of three strategies to be selected for downtime breaches:

    • allocateMoreResources, which will automatically allocate more resources to mitigate the problem

    • report, which reports the downtime breach to a trouble ticketing system

    • ignore, which logs the breach and takes no further action

The Policy Type defines a trigger event, an event that is received from an analytics system when the maximum downtime value for a VPN is breached. The target of the policy type is an instance of the VPN service.

A Policy Type Implementation is developed that can configure the maximum downtime parameter in an analytics system, receive a trigger from the analytics system when the maximum downtime is breached, and then either request more resources, report an issue to a trouble ticketing system, or log the breach.

VPN Policies are created by specifying values for the properties, triggers, and targets specified in the VPN Policy Type.

In the case of the bank network, the maximumDowntime threshold is specified as 5 minutes of downtime per year, the mitigationStrategy is defined as allocateMoreResources, and the target is specified as the bank’s VPN service ID. When a breach is detected by the analytics system, the policy is executed, the target is identified as being the bank’s network, and more resources are allocated by the policy.

For the car dealership VPN policy, a less stringent downtime threshold of 60 minutes per year is specified, and the mitigation strategy is to issue a trouble ticket. The university network is best effort, so a downtime of 4 days per year is specified. Breaches are logged and mitigated as routine network administration tasks.
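
The VPN example might be rendered in TOSCA roughly as follows (the type name, property definitions, and service identifier are invented for illustration; the actual ONAP policy types are described later in this document):

tosca_definitions_version: tosca_simple_yaml_1_1_0
policy_types:
  com.example.vpn.SlaPolicy:                  # illustrative VPN SLA policy type
    derived_from: tosca.policies.Root
    version: 1.0.0
    properties:
      maximumDowntime:
        type: string
        description: Maximum downtime allowed per year
      mitigationStrategy:
        type: string
        constraints:
          - valid_values: [ allocateMoreResources, report, ignore ]
topology_template:
  policies:
    - bank-vpn-sla:                           # the bank's policy created from the type
        type: com.example.vpn.SlaPolicy
        version: 1.0.0
        properties:
          maximumDowntime: 5 minutes per year
          mitigationStrategy: allocateMoreResources
        targets: [ bank-vpn-service ]         # the bank's VPN service instance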

In ONAP, specific ONAP Policy Types are used to create specific policies that drive the ONAP Platform and Components. For more detailed information on designing Policy Types and developing an implementation for that policy type, see Policy Design and Development.

The ONAP Policy Framework for building, configuring and deploying PDPs is extendable. It allows the use of ONAP PDPs as is, the extension of ONAP PDPs, and lastly provides the capability for users to create and deploy their own PDPs. The ONAP Policy Framework provides distributed policy management for all policies in ONAP at run time. Not only does this provide unified policy access and version control, it provides life cycle control for policies and allows detection of conflicts across all policies running in an ONAP installation.

2. Architecture

The diagram below shows the architecture of the ONAP Policy Framework at its highest level.

_images/PFHighestLevel.svg

The PolicyDevelopment component implements the functionality for development of policy types and policies. PolicyAdministration is responsible for the deployment life cycle of policies as well as interworking with the mechanisms required to orchestrate the nodes and containers on which policies run. PolicyAdministration is also responsible for the administration of policies at run time; ensuring that policies are available to users, that policies are executing correctly, and that the state and status of policies is monitored. PolicyExecution is the set of PDPs running in the ONAP system and is responsible for making policy decisions and for managing the administrative state of the PDPs as directed by PolicyAdministration.

PolicyDevelopment provides APIs that allow creation of policy artifacts and supporting information in the policy database. PolicyAdministration reads those artifacts and the supporting information from the policy database whilst deploying policy artifacts. Once the policy artifacts are deployed, PolicyAdministration handles the run-time management of the PDPs on which the policies are running. PolicyDevelopment interacts with the database, and has no programmatic interface with PolicyAdministration, PolicyExecution or any other run-time ONAP components.

The diagram below shows a more detailed view of the architecture, as inspired by RFC-2753 and RFC-3198.

_images/PFDesignAndAdmin.svg

PolicyDevelopment provides a CRUD API for policy types and policies. The policy types and policy artifacts and their metadata (information about policies, policy types, and their interrelations) are stored in the PolicyDB. The PolicyDevGUI, PolicyDistribution, and other applications such as CLAMP can use the PolicyDevelopment API to create, update, delete, and read policy types and policies.

PolicyAdministration has two important functions:

  • Management of the life cycle of PDPs in an ONAP installation. PDPs register with PolicyAdministration when they come up. PolicyAdministration handles the allocation of PDPs to PDP Groups and PDP Subgroups, so that they can be managed as microservices in infrastructure management systems such as Kubernetes.

  • Management of the deployment of policies to PDPs in an ONAP installation. PolicyAdministration gives each PDP group a set of domain policies to execute.

PolicyAdministration handles PDPs and policy allocation to PDPs using asynchronous messaging over DMaaP. It provides three APIs:

  • a CRUD API for policy groups and subgroups

  • an API that allows the allocation of policies to PDP groups and subgroups to be controlled

  • an API that allows policy execution to be managed, showing the status of policy execution on PDP Groups, subgroups, and individual PDPs as well as the life cycle state of PDPs
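
Purely as an illustration of this grouping concept, a PDP group allocation could be pictured as in the sketch below. This is a hypothetical rendering, not the concrete PAP API payload format; see the PAP documentation referenced below for the actual APIs.

pdpGroups:                                   # hypothetical rendering, for illustration only
  - name: SON
    pdpSubgroups:
      - pdpType: drools
        desiredInstanceCount: 4
        policies:
          - { name: son-handling-policy, version: 1.0.0 }   # illustrative policy
      - pdpType: xacml
        desiredInstanceCount: 2
        policies:
          - { name: son-guard-policy, version: 1.0.0 }      # illustrative policy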

PolicyExecution is the set of running PDPs that are executing policies, logically partitioned into PDP groups and subgroups.

_images/PolicyExecution.svg

The figure above shows how PolicyExecution looks at run time with PDPs running in Kubernetes. A PDPGroup is a purely logical construct that collects all the PDPs that are running policies for a particular domain together. A PDPSubGroup is a group of PDPs of the same type that are running the same policies. A PDPSubGroup is deployed as a Kubernetes Deployment. PDPs are defined as Kubernetes Pods. At run time, the actual number of PDPs in each PDPSubGroup is specified in the configuration of the Deployment of that PDPSubGroup in Kubernetes. This structuring of PDPs is required because, in order to simplify deployment and scaling of PDPs in Kubernetes, we gather all the PDPs of the same type that are running the same policies together for deployment.

For example, assume we have policies for the SON (Self Organizing Network) and ACPS (Advanced Customer Premises Service) domains. For SON, we have XACML, Drools, and APEX policies, and for ACPS we have XACML and Drools policies. The table below shows the resulting PDPGroup, PDPSubGroup, and PDP allocations:

PDP Group | PDP Subgroup | Kubernetes Deployment | Kubernetes Deployment Strategy                                             | PDPs in Pods
SON       | SON-XACML    | SON-XACML-Dep         | Always 2, be geo-redundant                                                 | 2 PDP-X
SON       | SON-Drools   | SON-Drools-Dep        | At least 4, scale up on 70% load, scale down on 40% load, be geo-redundant | >= 4 PDP-D
SON       | SON-APEX     | SON-APEX-Dep          | At least 3, scale up on 70% load, scale down on 40% load, be geo-redundant | >= 3 PDP-A
ACPS      | ACPS-XACML   | ACPS-XACML-Dep        | Always 2                                                                   | 2 PDP-X
ACPS      | ACPS-Drools  | ACPS-Drools-Dep       | At least 2, scale up on 80% load, scale down on 50% load                   | >= 2 PDP-D
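
As a hedged sketch, the SON-XACML subgroup in the table might map onto a Kubernetes Deployment such as the following (the labels and image name are invented for illustration):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: son-xacml-dep              # the SON-XACML-Dep Deployment from the table
spec:
  replicas: 2                      # the "Always 2" strategy from the table
  selector:
    matchLabels:
      pdpGroup: son
      pdpSubgroup: son-xacml
  template:
    metadata:
      labels:
        pdpGroup: son
        pdpSubgroup: son-xacml
    spec:
      containers:
        - name: pdp-x
          image: example/pdp-x:latest   # illustrative PDP-X image name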

For more details on PolicyAdministration APIs and management of PDPGroup and PDPSubGroup, see the documentation for Policy Administration Point (PAP) Architecture.

2.1 Policy Framework Object Model

This section describes the structure of and relations between the main concepts in the Policy Framework. This model is implemented as a common model and is used by PolicyDevelopment, PolicyDeployment, and PolicyExecution.

_images/ClassStructure.svg

The UML class diagram above shows the Policy Framework Object Model.

2.2 Policy Design Architecture

This section describes the architecture of the model driven system used to develop policy types and to create policies using policy types. The output of Policy Design is deployment-ready artifacts and Policy metadata in the Policy Framework database.

Policy types that are expressed via natural language or a model require an implementation that allows them to be translated into runtime policies. Some Policy Type implementations are set up and available in the platform during startup, such as Control Loop Operational Policy models, OOF placement models, and DCAE microservice models. Policy type implementations can also be loaded and deployed at run time.

2.2.1 Policy Type Design

Policy Type Design is the task of creating policy types that capture the generic and vendor independent aspects of a policy for a particular domain use case.

All policy types are specified in TOSCA service templates. Once policy types are defined and created in the system, PolicyDevelopment manages them and uses them to allow policies to be created from these policy types in a uniform way regardless of the domain that the policy type is addressing or the PDP technology that will execute the policy.

A PolicyTypeImpl is developed for a policy type for a certain type of PDP (for example, XACML oriented for decision policies, Drools rules or Apex state machines oriented for ECA policies). While a policy type is implementation independent, a policy type implementation is specific to the technology of the PDP on which policies that use that implementation will execute. A Policy Type may have many implementations. A PolicyTypeImpl is the specification of the specific rules or tasks, the flow of the policy, its internal states and data structures, and other relevant information. A PolicyTypeImpl can be specific to a particular policy type or it can be more general, providing the implementation of a class of policy types. Further, the design environment and tool chain for developing policy type implementations is specific to the technology of the PDP on which the implementation will run.

In the xacml-pdp and drools-pdp, an application is written for a given category of policy types. Such an application may have logic written in Java or another programming language, and may have additional artifacts such as scripts and SQL queries. The application unmarshals and marshals events going into and out of policies as well as handling the sequencing of events for interactions of the policies with other components in ONAP. For example, drools-applications handles the interactions for operational policies running in the drools PDP. In the apex-pdp, all unmarshaling, marshaling, and component interactions are captured in the state machine, logic, and configuration of the policy; a Java application is not used.

PolicyDevelopment provides the RESTful Policy Design API, which allows other components to query policy types. Those components can then create policies that specify values for the properties, triggers, and targets specified in a policy type. This API is used by components such as CLAMP and PolicyDistribution to create policies from policy types.

Consider a policy type created for managing faults on vCPE equipment in a vendor independent way. The policy type implementation captures the generic logic required to manage the faults and specifies the vendor specific information that must be supplied to the type for specific vendor vCPE VFs. The actual vCPE policy that is used for managing particular vCPE equipment is created by setting the properties specified in the policy type for that vendor model of vCPE.

2.2.1.1 Generating Policy Types

It is possible to generate policy types using MDD (Model Driven Development) techniques. Policy types are expressed using a DSL (Domain Specific Language) or a policy specification environment for a particular application domain. For example, policy types for specifying SLAs could be expressed in a SLA DSL and policy types for managing SON features could be generated from a visual SON management tool. The ONAP Policy framework provides an API that allows tool chains to create policy types, see the Policy Design and Development page.

_images/PolicyTypeDesign.svg

A GUI implementation in another ONAP component (a PolicyTypeDesignClient) may use the API_User API to create and edit ONAP policy types.

2.2.1.2 Programming Policy Type Implementations

For skilled developers, the most straightforward way to create a policy type is to program it. Programming a policy type might simply mean creating and editing text files, thus manually creating the TOSCA Policy Type YAML file and the policy type implementation for the policy type.

A more formal approach is preferred. For policy type implementations, programmers use a specific Eclipse project type for developing each type of implementation, a Policy Type Implementation SDK. The project is under source control in git. This Eclipse project is structured correctly for creating implementations for a specific type of PDP. It includes the correct POM files for generating the policy type implementation and has editors and perspectives that aid programmers in their work.

2.2.2 Policy Design

The PolicyCreation function of PolicyDevelopment creates policies from a policy type. The information expressed during policy type design is used to parameterize a policy type to create an executable policy. A service designer and/or operations team can use tooling that reads the TOSCA Policy Type specifications to express and capture a policy at its highest abstraction level. Alternatively, the parameters for the policy can be expressed in a raw JSON or YAML file and posted over the policy design API described on the Policy Design and Development page.

A number of mechanisms for policy creation are supported in ONAP. The process in PolicyDevelopment for creating a policy is the same for all mechanisms. The most general mechanism for creating a policy is using the RESTful Policy Design API, which provides a full interface to the policy creation support of PolicyDevelopment. This API may be exercised directly using utilities such as curl.

In future releases, the Policy Framework may provide a command line tool that will be a loose wrapper around the API. It may also provide a general purpose Policy GUI in the ONAP Portal for policy creation, which again would be a general purpose wrapper around the policy creation API. The Policy GUI would interpret any TOSCA Model that has been loaded into it and flexibly presents a GUI for a user to create policies from. The development of these mechanisms will be phased over a number of ONAP releases.

A number of ONAP components use policy in manners which are specific to their particular needs. The manner in which the policy creation process is triggered and the way in which information required to create a policy is specified and accessed is specialized for these ONAP components.

For example, CLAMP provides a GUI for creation of Control Loop policies, which reads the Policy Type associated with a control loop, presents the properties as fields in its GUI, and creates a policy using the property values entered by the user.

The following subsections outline the mechanisms for policy creation and modification supported by the ONAP Policy Framework.

2.2.2.1 Policy Design in the ONAP Policy Framework

Policy creation in PolicyDevelopment follows the general sequence shown in the sequence diagram below. An API_USER is any component that wants to create a policy from a policy type. PolicyDevelopment supplies a REST interface that exposes the API and also provides a command line tool and general purpose client that wraps the API.

_images/PolicyDesign.svg

An API_User first gets a reference to and the metadata for the Policy type for the policy they want to work on from PolicyDevelopment. PolicyDevelopment reads the metadata and artifact for the policy type from the database. The API_User then asks for a reference and the metadata for the policy. PolicyDevelopment looks up the policy in the database. If the policy already exists, PolicyDevelopment reads the artifact and returns the reference of the existing policy to the API_User with the metadata for the existing policy. If the policy does not exist, PolicyDevelopment informs the API_User.

The API_User may now proceed with a policy specification session, where the parameters are set for the policy using the policy type specification. Once the API_User is happy that the policy is completely and correctly specified, it requests PolicyDevelopment to create the policy. PolicyDevelopment creates the policy and stores the created policy artifact and its metadata in the database.

2.2.2.2 Model Driven VF (Virtual Function) Policy Design via VNF SDK Packaging

VF vendors express policies such as SLA, Licenses, hardware placement, run-time metric suggestions, etc. These details are captured within the VNF SDK and uploaded into the SDC Catalog. The SDC Distribution APIs are used to interact with SDC. For example, SLA and placement policies may be captured via TOSCA specification. License policies can be captured via TOSCA or an XACML specification. Run-time metric vendor recommendations can be captured via the VES Standard specification.

The sequence diagram below is a high level view of SDC-triggered concrete policy generation for some arbitrary entity EntityA. The parameters to create a policy are read from a TOSCA Policy specification read from a CSAR received from SDC.

_images/ModelDrivenPolicyDesign.svg

PolicyDesign uses the PolicyDistribution component for managing SDC-triggered policy creation and update requests. PolicyDistribution is an API_User; it uses the Policy Design API for policy creation and update. It reads the information it needs to populate the policy type from a TOSCA specification in a CSAR received from SDC and then uses this information to automatically generate a policy.

Note that SDC provides a wrapper for the SDC API as a Java Client and also provides a TOSCA parser. See the documentation for the Policy Distribution Component.

In Step 4 above, the PolicyDesign must download the CSAR file. If the policy is to be composed from the TOSCA definition, it must also parse the TOSCA definition.

In Step 11 above, the PolicyDesign must send back/publish status events to SDC such as DOWNLOAD_OK, DOWNLOAD_ERROR, DEPLOY_OK, DEPLOY_ERROR, NOTIFIED.

2.2.2.3 Scripted Model Driven Policy Design

Service policies such as optimization and placement policies can be specified as a TOSCA Policy at design time. These policies use a TOSCA Policy Type specification as their schemas. Therefore, scripts can be used to create TOSCA policies using TOSCA Policy Types.

_images/ScriptedPolicyDesign.svg

One straightforward way of generating policies from Policy types is to use commands specified in a script file. A command line utility such as curl is an API_User. Commands read policy types using the Policy Type API, parse the policy type, and use the properties of the policy type to prepare a TOSCA Policy. Further commands then use the Policy API to create policies.

2.2.3 Policy Design Process

All policy types must be certified as being fit for deployment prior to run time deployment. Where design is executed using the SDC application, it is assumed that the life cycle implemented by SDC certifies any policy types that are declared within the ONAP Service CSAR. For other policy types and policy type implementations, the life cycle associated with the applied software development process suffices. Since policy types and their implementations are designed and implemented using software development best practices, they can be utilized and configured for various environments (e.g. development, testing, production) as desired.

2.3 Policy Runtime Architecture

The Policy Framework Platform components are themselves designed as microservices that are easy to configure and deploy via Docker images and Kubernetes, both of which support resiliency and scalability if required. PAPs and PDPs are deployed by the underlying ONAP management infrastructure and are designed to comply with the ONAP interfaces for deploying containers.

The PAPs keep track of PDPs, support the deployment of PDP groups and the deployment of a policy set across those PDP groups. A PAP is stateless in a RESTful sense. Therefore, if there is more than one PAP deployed, it does not matter which PAP a user contacts to handle a request. The PAP uses the database (persistent storage) to keep track of ongoing sessions with PDPs. Policy management on PDPs is the responsibility of PAPs; management of policy sets or policies in any other manner is not permitted.

In the ONAP Policy Framework, the interfaces to the PDP are designed to be as streamlined as possible. Because the PDP is the main unit of scalability in the Policy Framework, the framework is designed to allow PDPs in a PDP group to arbitrarily appear and disappear and for policy consistency across all PDPs in a PDP group to be easily maintained. Therefore, PDPs have just two interfaces: an interface that users use to execute policies, and an interface to the PAP for administration, life cycle management, and monitoring. The PAP is responsible for controlling the state across the PDPs in a PDP group. The PAP interacts with the Policy database and transfers policy sets to PDPs, and may cache the policy sets for PDP groups.

See also Section 2 of the Policy Design and Development page, where the mechanisms for PDP Deployment and Registration with PAP are explained.

2.3.1 Policy Framework Services

The ONAP Policy Framework follows the architectural approach for microservices recommended by the ONAP Architecture Subcommittee.

The ONAP Policy Framework uses an infrastructure such as Kubernetes Services to manage the life cycle of Policy Framework executable components at runtime. A Kubernetes service allows, among other parameters, the number of instances (pods in Kubernetes terminology) that should be deployed for a particular service to be specified and a common endpoint for that service to be defined. Once the service is started in Kubernetes, Kubernetes ensures that the specified number of instances is always kept running. As requests are received on the common endpoint, they are distributed across the service instances. More complex call distribution and instance deployment strategies may be used; please see the Kubernetes Services documentation for those details.

For example, assume a service called policy-pdpd-control-loop is defined that runs 5 PDP-D instances. The service has the end point https://policy-pdpd-control-loop.onap/<service-specific-path>. When the service is started, Kubernetes spins up 5 PDP-Ds. Calls to the end point https://policy-pdpd-control-loop.onap/<service-specific-path> are distributed across the 5 PDP-D instances. Note that the .onap part of the service endpoint is the namespace being used and is specified for the full ONAP Kubernetes installation.
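
A hedged sketch of how such a Kubernetes service might be declared follows (the ports and selector are illustrative assumptions, not actual ONAP deployment values):

apiVersion: v1
kind: Service
metadata:
  name: policy-pdpd-control-loop
  namespace: onap                    # the ".onap" part of the service endpoint
spec:
  selector:
    app: policy-pdpd-control-loop    # matches the pods of the 5 PDP-D instances
  ports:
    - port: 443                      # illustrative port for the HTTPS endpoint
      targetPort: 6969               # illustrative container port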

The following services will be required for the ONAP Policy Framework:

  • PAP (endpoint https://policy-pap): The PAP service, used for policy administration and deployment. See Policy Design and Development for details of the API for this service.

  • PDP-X-domain (endpoint https://policy-pdpx-domain), PDP-D-domain (endpoint https://policy-pdpd-domain), and PDP-A-domain (endpoint https://policy-pdpa-domain): A PDP service is defined for each PDP group. A PDP group is identified by the domain on which it operates. For example, there could be two PDP-X domains, one for admission policies for ONAP proper and another for admission policies for VNFs of operator Supacom; in that case, two PDP-X services are defined.

There is one and only one PAP service, which handles policy deployment, administration, and monitoring for all policies in all PDPs and PDP groups in the system. There are multiple PDP services, one PDP service for each domain for which there are policies.

2.3.2 The Policy Framework Information Structure

The following diagram captures the relationship between Policy Framework concepts at run time.

_images/RuntimeRelationships.svg

There is a one to one relationship between a PDP SubGroup, a Kubernetes PDP service, and the set of policies assigned to run in the PDP subgroup. Each PDP service runs a single PDP subgroup with multiple PDPs, which executes a specific Policy Set containing a number of policies that have been assigned to that PDP subgroup. Having and maintaining this principle makes policy deployment and administration much more straightforward than it would be if complex relationships between PDP services, PDP subgroups, and policy sets were allowed.

The topology of the PDPs and their policy sets is held in the Policy Framework database and is administered by the PAP service.

_images/PolicyDatabase.svg

The diagram above gives an indicative structure of the run time topology information in the Policy Framework database. Note that the PDP_SUBGROUP_STATE and PDP_STATE fields hold state information for life cycle management of PDP groups and PDPs.

2.3.3 Startup, Shutdown and Restart

This section describes the interactions between Policy Framework components themselves and with other ONAP components at startup, shutdown and restart.

2.3.3.1 PAP Startup and Shutdown

The sequence diagram below shows the actions of the PAP at startup.

_images/PAPStartStop.svg

The PAP is the run time point of coordination for the ONAP Policy Framework. When it is started, it initializes itself using data from the database. It then waits for periodic PDP status updates and for administration requests.

PAP shutdown is trivial. On receipt of a shutdown request, the PAP completes or aborts any ongoing operations and shuts down gracefully.

2.3.3.2 PDP Startup and Shutdown

The sequence diagram below shows the actions of the PDP at startup. See also Section 4 of the Policy Design and Development page for the API used to implement this sequence.

_images/PDPStartStop.svg

At startup, the PDP initializes itself. At this point it is in PASSIVE mode. The PDP begins sending periodic Status messages to the PAP. The first Status message initializes the process of loading the correct Policy Set on the PDP in the PAP.

On receipt of a shutdown request, the PDP completes or aborts any ongoing policy executions and shuts down gracefully.

2.3.4 Policy Execution

Policy execution is the execution of a policy in a PDP. Policy enforcement occurs in the component that receives a policy decision.

_images/PolicyExecutionFlow.svg

Policy execution can be synchronous or asynchronous. In synchronous policy execution, the component requiring a policy decision requests it and waits for the result. The PDP-X and PDP-A implement synchronous policy execution. In asynchronous policy execution, the component that requests a policy decision does not wait for the decision. Indeed, the decision may be passed to another component. The PDP-D and PDP-A implement asynchronous policy execution.

Policy execution is carried out using the current life cycle mode of operation of the PDP. While the actual implementation of the mode may vary somewhat between PDPs of different types, the principles below hold true for all PDP types:

  • PASSIVE MODE: Policy execution is always rejected irrespective of PDP type.

  • ACTIVE MODE: Policy execution is executed in the live environment by the PDP.

  • SAFE MODE*: Policy execution proceeds, but changes to domain state or context are not carried out. The PDP returns an indication that it is running in SAFE mode together with the action it would have performed if it were operating in ACTIVE mode. The PDP type and the policy types it is running must support SAFE mode operation.

  • TEST MODE*: Policy execution proceeds and changes to domain and state are carried out in a test or sandbox environment. The PDP returns an indication that it is running in TEST mode together with the action it has performed on the test environment. The PDP type and the policy types it is running must support TEST mode operation.

* SAFE Mode and TEST Mode will be implemented in future versions of the Policy Framework.

2.3.5 Policy Lifecycle Management

Policy lifecycle management manages the deployment and life cycle of policies in PDP groups at run time. Policy sets can be deployed at run time without restarting PDPs or stopping policy execution. PDPs preserve state for minor/patch version upgrades and rollbacks.

2.3.5.1 Load/Update Policies on PDP

The sequence diagram below shows how policies are loaded or updated on a PDP.

_images/DownloadPoliciesToPDP.svg

This sequence can be initiated in two ways: from the PDP or from a user action.

  1. A PDP sends regular status update messages to the PAP. If this message indicates that the PDP has no policies or outdated policies loaded, then this sequence is initiated

  2. A user may explicitly trigger this sequence to load policies on a PDP

The PAP controls the entire process. The PAP reads the current PDP metadata and the required policy and policy set artifacts from the database. It then builds the policy set for the PDP. Once the policies are ready, the PAP sets the mode of the PDP to PASSIVE. The Policy Set is transparently passed to the PDP by the PAP. The PDP loads all the policies in the policy set including any models, rules, tasks, or flows in the policy set in the policy implementations.

Once the Policy Set is loaded, the PAP orders the PDP to enter the life cycle mode that has been specified for it (ACTIVE/SAFE*/TEST*). The PDP begins to execute policies in the specified mode (see section 2.3.4).

* SAFE Mode and TEST Mode will be implemented in future versions of the Policy Framework.

2.3.5.2 Policy Rollout

A policy set steps through a number of life cycle modes when it is rolled out.

_images/PolicyRollout.svg

The user defines the set of policies for a PDP group. It is deployed to a PDP group and is initially in PASSIVE mode. The user sets the PDP Group into TEST mode. The policies are run in a test or sandboxed environment for a period of time. The test results are passed back to the user. The user may revert the policy set to PASSIVE mode a number of times and upgrade the policy set during test operation.

When the user is satisfied with policy set execution and when quality criteria have been reached for the policy set, the PDP group is set to run in SAFE mode. In this mode, the policies run on the target environment but do not actually exercise any actions or change any context in the target environment. Again, as in TEST mode, the operator may decide to revert back to TEST mode or even PASSIVE mode if issues arise with a policy set.

Finally, when the user is satisfied with policy set execution and when quality criteria have been reached, the PDP group is set into ACTIVE state and the policy set executes on the target environment. The results of target operation are reported. The PDP group can be reverted to SAFE, TEST, or even PASSIVE mode at any time if problems arise.

* SAFE Mode and TEST Mode will be implemented in future versions of the Policy Framework. In current versions, policies transition directly from PASSIVE mode to ACTIVE mode.

2.3.5.3 Policy Upgrade and Rollback

There are a number of approaches for managing policy upgrade and rollback. Upgrade and rollback will be implemented in future versions of the Policy Framework.

The most straightforward approach is to use the approach described in section 2.3.5.2 Policy Rollout for upgrading and rolling back policy sets. In order to upgrade a policy set, one follows the process in 2.3.5.2 Policy Rollout with the new policy set version. For rollback, one follows the process in 2.3.5.2 Policy Rollout with the older policy set, most probably setting the old policy set into ACTIVE mode immediately. The advantage of this approach is that the approach is straightforward. The obvious disadvantage is that the PDP group is not executing on the target environment while the new policy set is in PASSIVE, TEST, and SAFE mode.

A second manner to tackle upgrade and rollback is to use a spare-wheel approach. A special upgrade PDP group service is set up as a Kubernetes service in parallel with the active one during the upgrade procedure. The spare wheel service is used to execute the process described in 2.3.5.2 Policy Rollout. When the time comes to activate the policy set, the references for the active and spare wheel services are simply swapped. The advantage of this approach is that the down time during upgrade is minimized, the spare wheel PDP group can be abandoned at any time without affecting the in-service PDP group, and the upgrade can be rolled back easily for a period simply by preserving the old service for a time. The disadvantage is that this approach is more complex and uses more resources than the first approach.

A third approach is to have two policy sets running in each PDP, an active set and a standby set. However such an approach would increase the complexity of implementation in PDPs significantly.

2.3.6 Policy Monitoring

PDPs provide a periodic report of their status to the PAP. All PDPs report using a standard reporting format that is extended to provide information for specific PDP types. PDPs provide at least the information below:

  • State: Lifecycle state (PASSIVE/TEST*/SAFE*/ACTIVE)

  • Timestamp: Time the report record was generated

  • InvocationCount: The number of execution invocations the PDP has processed since the last report

  • LastInvocationTime: The time taken to process the last execution invocation

  • AverageInvocationTime: The average time taken to process an invocation since the last report

  • StartTime: The start time of the PDP

  • UpTime: The length of time the PDP has been executing

  • RealTimeInfo: Real time information on running policies

* SAFE Mode and TEST Mode will be implemented in future versions of the Policy Framework.

Currently, policy monitoring is supported by PAP and by pdp-apex. Policy monitoring for all PDPs will be supported in future versions of the Policy Framework.
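
A hedged YAML rendering of a status record carrying the fields above is sketched below; the field names follow the table, while the values, units, and exact wire format are illustrative assumptions.

State: ACTIVE
Timestamp: 2020-06-01T12:00:00Z
InvocationCount: 1044              # invocations since the last report
LastInvocationTime: 15             # e.g. milliseconds; units are illustrative
AverageInvocationTime: 12          # e.g. milliseconds; units are illustrative
StartTime: 2020-06-01T09:00:00Z
UpTime: 10800                      # e.g. seconds; units are illustrative
RealTimeInfo:
  runningPolicies: 3               # PDP-type-specific extension, illustrative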

2.3.7 PEP Registration and Enforcement Guidelines

In ONAP there are several applications outside the Policy Framework that enforce policy decisions based on models provided to the Policy Framework. These applications are considered Policy Enforcement Engines (PEP) and roles will be provided to those applications using AAF/CADI to ensure only those applications can make calls to the Policy Decision APIs. Some example PEPs are: DCAE, OOF, and SDNC.

See Section 3.4 of the Policy Design and Development page for more information on the Decision APIs.

3. APIs Provided by the Policy Framework

See the Policy Design and Development page.

4. Terminology

PAP (Policy Administration Point)

A component that administers and manages policies

PDP (Policy Decision Point)

A component that executes a policy artifact (One or many?)

PDP_<>

A specific type of PDP

PDP Group

A group of PDPs that execute the same set of policies

Policy Development

The development environment for policies

Policy Type

A generic prototype definition of a type of policy in TOSCA, see the TOSCA Policy Primer

Policy

An executable policy defined in TOSCA and created using a Policy Type, see the TOSCA Policy Primer

Policy Set

A set of policies that are deployed on a PDP group. One and only one Policy Set is deployed on a PDP group

End of Document

Policy Design and Development

This document describes the design principles that should be used to write, deploy, and run policies of various types using the Policy Framework. It explains the APIs that are available for Policy Framework users. It provides copious examples to illustrate policy design and API usage.

The figure below shows the Artifacts (Blue) in the ONAP Policy Framework, the Activities (Yellow) that manipulate them, and important components (Salmon) that interact with them. The Policy Framework is fully TOSCA compliant, and uses TOSCA to model policies. Please see the TOSCA Policy Primer page for an introduction to TOSCA policy concepts.

_images/APIsInPolicyFramework.svg

TOSCA defines the concept of a PolicyType, the definition of a type of policy that can be applied to a service. It also defines the concept of a Policy, an instance of a PolicyType. In the Policy Framework, we handle and manage these TOSCA definitions and tie them to real implementations of policies that can run on PDPs.

The diagram above outlines how this is achieved. Each TOSCA PolicyType must have a corresponding PolicyTypeImpl in the Policy Framework. The TOSCA PolicyType definition can be used to create a TOSCA Policy definition, either directly by the Policy Framework, by CLAMP, or by some other system. Once the Policy artifact exists, it can be used together with the PolicyTypeImpl artifact to create a PolicyImpl artifact. A PolicyImpl artifact is an executable policy implementation that can run on a PDP.

The TOSCA PolicyType artifact defines the external characteristics of the policy: its properties, the types of entities it acts on, and its triggers. A PolicyTypeImpl artifact is an XACML, Drools, or APEX implementation of that policy definition. PolicyType and PolicyTypeImpl artifacts may be preloaded, may be loaded manually, or may be created using the Lifecycle API. Alternatively, PolicyType definitions may be loaded over the Lifecycle API for preloaded PolicyTypeImpl artifacts. A TOSCA PolicyType artifact can be used by clients (such as CLAMP or CLI tools) to create, parse, serialize, and/or deserialize an actual Policy.

The TOSCA Policy artifact is used internally by the Policy Framework, or is input by CLAMP or other systems. This artifact specifies the values of the properties for the policy and specifies the specific entities the policy acts on. Policy Design uses the TOSCA Policy artifact and the PolicyTypeImpl artifact to create an executable PolicyImpl artifact.

ONAP Policy Types

Policy Type Design manages TOSCA PolicyType artifacts and their PolicyTypeImpl implementations.

A TOSCA PolicyType may ultimately be defined by the modeling team but for now is defined by the Policy Framework project. Various editors and GUIs are available for creating PolicyTypeImpl implementations. However, systematic integration of PolicyTypeImpl implementation is outside the scope of the ONAP Dublin release.

The PolicyType definitions and implementations listed below can be preloaded so that they are available for use in the Policy Framework upon platform installation. For a full listing of available preloaded policy types, see the Policy API Preloaded Policy Type List.

  • onap.policies.Monitoring: Base model that supports Policy driven DCAE microservice components used in Control Loops

  • onap.policies.controlloop.operational.Common: Base Control Loop operational policy common definitions

  • onap.policies.controlloop.guard.Common: Control Loop Guard Policy common definitions

  • onap.policies.Optimization: Base OOF Optimization Policy Type definition

  • onap.policies.Naming: Base SDNC Naming Policy Type definition

  • onap.policies.Native: Base Native Policy Type for PDPs to inherit from in order to provide their own native policy type

Note

The El Alto onap.policies.controlloop.Guard policy types were deprecated and removed in Frankfurt.

1 Base Policy Type: onap.policies.Monitoring

This is a base Policy Type that supports Policy driven DCAE microservice components used in Control Loops. The implementation of this Policy Type is done in the XACML PDP. The Decision API is used by the DCAE Policy Handler to retrieve a decision on which policy to enforce during runtime.

Base Policy Type definition for onap.policies.Monitoring
tosca_definitions_version: tosca_simple_yaml_1_1_0
topology_template:
  policy_types:
    - onap.policies.Monitoring:
        derived_from: tosca.policies.Root
        version: 1.0.0
        description: a base policy type for all policies that govern monitoring provision

The PolicyTypeImpl implementation of the onap.policies.Monitoring Policy Type is generic to support definition of TOSCA PolicyType artifacts in the Policy Framework using the Policy Type Design API. Therefore many TOSCA PolicyType artifacts will use the same PolicyTypeImpl implementation with different property types and towards different targets. This allows dynamically generated DCAE microservice component Policy Types to be created at Design Time.

Please be sure to name your Policy Type appropriately by prepending it with onap.policies.monitoring., for example onap.policies.monitoring.Custom. Notice the lowercase “m” in monitoring, which follows TOSCA conventions, and the capitalized “C” for your analytics policy type name.

Example PolicyType onap.policies.monitoring.Mycomponent derived from onap.policies.Monitoring
tosca_definitions_version: tosca_simple_yaml_1_1_0
policy_types:
  - onap.policies.monitoring.Mycomponent:
      derived_from: onap.policies.Monitoring
      version: 1.0.0
      properties:
        my_property_1:
          type: string
          description: A description of this property

For more examples of monitoring policy type definitions, please refer to the examples in the ONAP policy-models gerrit repository. Please note that some of the examples do not adhere to TOSCA naming conventions due to backward compatibility.

2 Base Policy Type onap.policies.controlloop.operational.Common

This is the new Operational Policy Type introduced in the Frankfurt release to provide full TOSCA Policy Type support. There are common properties and datatypes that are independent of the PDP engine used to enforce this Policy Type.

Operational Policy Type Inheritance
2.1 onap.policies.controlloop.operational.common.Drools

Drools PDP Control Loop Operational Policy definition extends the base common policy type by adding a property for controllerName.

Please see the definition of the Drools Operational Policy Type

2.2 onap.policies.controlloop.operational.common.Apex

Apex PDP Control Loop Operational Policy definition extends the base common policy type by adding additional properties.

Please see the definition of the Apex Operational Policy Type

3 Base Policy Type: onap.policies.controlloop.guard.Common

This base policy type is the type definition for Control Loop guard policies for frequency limiting, blacklisting and min/max guards to help protect runtime Control Loop Actions from doing harm to the network. This policy type is developed using the XACML PDP to support question/answer Policy Decisions during runtime for the Drools and APEX onap.controlloop.Operational policy type implementations.

Guard Policy Type Inheritance

Please see the definition of the Common Guard Policy Type

3.1 Frequency Limiter Guard onap.policies.controlloop.guard.common.FrequencyLimiter

The frequency limiter supports limiting the frequency of actions being taken by an Actor.

Please see the definition of the Guard Frequency Limiter Policy Type

3.2 Min/Max Guard onap.policies.controlloop.guard.common.MinMax

The Min/Max Guard supports specifying a minimum and maximum number of entities for scaling operations.

Please see the definition of the Guard Min/Max Policy Type

3.3 Blacklist Guard onap.policies.controlloop.guard.common.Blacklist

The Blacklist Guard supports blacklisting control loop actions from being performed on specific entity IDs.

Please see the definition of the Guard Blacklist Policy Type

3.4 Filter Guard onap.policies.controlloop.guard.common.Filter

The Filter Guard supports filtering control loop actions from being performed on specific entity IDs.

Please see the definition of the Guard Filter Policy Type

4 Optimization onap.policies.Optimization

The Optimization Base Policy Type supports the OOF optimization policies. The base Policy Type has common properties shared by all its derived policy types.

Optimization Policy Type Inheritance

Please see the definition of the Base Optimization Policy Type.

These Policy Types are unique in that some properties have an additional metadata property matchable set to true which indicates that this property can be used to support more fine-grained Policy Decisions. For more information, see the XACML Optimization application implementation.

4.1 Optimization Service Policy Type onap.policies.optimization.Service

This policy type further extends the base onap.policies.Optimization type by defining additional properties specific to a service. For more information:

Service Optimization Base Policy Type

Several additional policy types inherit from the Service Optimization Policy Type. For more information, see the XACML Optimization application implementation.

4.2 Optimization Resource Policy Type onap.policies.optimization.Resource

This policy type further extends the base onap.policies.Optimization type by defining additional properties specific to a resource. For more information:

Resource Optimization Base Policy Type

Several additional policy types inherit from the Resource Optimization Policy Type. For more information, see the XACML Optimization application implementation.

5 Naming onap.policies.Naming

Naming policies are used in SDNC to enforce which naming policy should be used during instantiation.

Policies of this type are composed using the Naming Policy Type Model.

6 Native Policy Types onap.policies.Native

This is the Base Policy Type used by PDP engines to support their native language policies. PDP engines inherit from this base policy type to implement support for their own custom policy type:

tosca_definitions_version: tosca_simple_yaml_1_1_0
policy_types:
    onap.policies.Native:
        derived_from: tosca.policies.Root
        description: a base policy type for all native PDP policies
        version: 1.0.0
6.1 Policy Type: onap.policies.native.drools.Controller

This policy type supports creation of native PDP-D controllers via policy. A controller is an abstraction on the PDP-D that groups communication channels, message mapping rules, and any other arbitrary configuration data to realize an application.

Policies of this type are composed using the onap.policies.native.drools.Controller policy type specification.

6.2 Policy Type: onap.policies.native.drools.Artifact

This policy type supports the dynamic association of a native PDP-D controller with rules and dependent java libraries. This policy type is used in conjunction with the onap.policies.native.drools.Controller type to create or upgrade a drools application on a live PDP-D.

Policies of this type are composed against the onap.policies.native.drools.Artifact policy type specification.

6.3 Policy Type: onap.policies.native.Xacml

This policy type supports XACML OASIS 3.0 XML policies. The policies are URL-encoded so that they can be easily transported via the Lifecycle API json and yaml Content-Types. When deployed to the XACML PDP (PDP-X), they are managed by the native application. The PDP-X routes XACML Request/Response RESTful API calls to the native application, which manages those decisions.

XACML Native Policy Type

6.4 Policy Type: onap.policies.native.Apex

This policy type supports Apex native policy types.

Apex Native Policy Type

Policy Offered APIs

The Policy Framework supports the public APIs listed in the links below:

Policy Life Cycle API

The purpose of this API is to support CRUD of TOSCA PolicyType and Policy entities. This API is provided by the PolicyDevelopment component of the Policy Framework; see The ONAP Policy Framework Architecture page. The Policy design API backend runs in an independent building block component of the policy framework that provides REST services for the aforementioned CRUD behaviors. The Policy design API component interacts with a policy database for storing and fetching new policies or policy types as needed. Apart from CRUD, the API also allows clients to retrieve the healthcheck status of the API REST service and a statistics report that includes a variety of counters reflecting the history of API invocation.

We strictly follow the TOSCA specification to define policy types and policies. A policy type defines the schema for a policy, expressing the properties, targets, and triggers that a policy may have. The type (string, int, etc.) and constraints (such as the range of legal values) of each property are defined in the Policy Type. Both Policy Types and Policies are included in a TOSCA Service Template, which is used as the entity passed into an API POST call and the entity returned by API GET and DELETE calls. More details are presented in the following sections. Policy Types and Policies can be composed for any given domain of application. All Policy Types and Policies must be composed as well-formed TOSCA Service Templates. One Service Template can contain multiple policies and policy types.

Child policy types can inherit from parent policy types, so a hierarchy of policy types can be built up. For example, the HpaPolicy Policy Type in the table below is a child of a Resource Policy Type, which is a child of an Optimization policy type. See also the examples in GitHub.

onap.policies.Optimization.yaml
 onap.policies.optimization.Resource.yaml
  onap.policies.optimization.resource.AffinityPolicy.yaml
  onap.policies.optimization.resource.DistancePolicy.yaml
  onap.policies.optimization.resource.HpaPolicy.yaml
  onap.policies.optimization.resource.OptimizationPolicy.yaml
  onap.policies.optimization.resource.PciPolicy.yaml
  onap.policies.optimization.resource.Vim_fit.yaml
  onap.policies.optimization.resource.VnfPolicy.yaml
 onap.policies.optimization.Service.yaml
  onap.policies.optimization.service.QueryPolicy.yaml
  onap.policies.optimization.service.SubscriberPolicy.yaml

Custom data types can be defined in TOSCA for properties specified in Policy Types. Data types can also inherit from parents, so a hierarchy of data types can also be built up.
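
For illustration, here is a sketch of a service template that defines a custom data type and a child policy type inheriting from a parent; all names in this sketch are hypothetical:

tosca_definitions_version: tosca_simple_yaml_1_1_0
data_types:
  org.example.datatypes.Threshold:
    derived_from: tosca.datatypes.Root
    properties:
      limit:
        type: integer
        required: true
policy_types:
  org.example.policies.ParentPolicy:
    derived_from: tosca.policies.Root
    version: 1.0.0
    description: a hypothetical parent policy type
  org.example.policies.ChildPolicy:
    derived_from: org.example.policies.ParentPolicy
    version: 1.0.0
    description: a hypothetical child policy type using a custom data type
    properties:
      threshold:
        type: org.example.datatypes.Threshold
        required: true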

Warning

When creating a Policy Type, the ancestors of the Policy Type and all its custom Data Type definitions and ancestors MUST either already exist in the database or MUST also be defined in the incoming TOSCA Service Template. Requests with missing or bad references are rejected by the API.

Each Policy Type can have multiple Policy instances created from it. Therefore, many Policy instances of the HpaPolicy Policy Type above can be created. When a policy is created, its Policy Type is specified in the type and type_version fields of the policy.
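
For example, a policy created from the hypothetical child policy type sketched earlier would reference that type as follows:

tosca_definitions_version: tosca_simple_yaml_1_1_0
topology_template:
  policies:
    - org.example.mypolicy:
        type: org.example.policies.ChildPolicy
        type_version: 1.0.0
        version: 1.0.0
        properties:
          threshold:
            limit: 100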

Warning

The Policy Type specified for a Policy MUST exist in the database before the policy can be created. Requests with missing or bad Policy Type references are rejected by the API.

The API allows applications to create, update, delete, and query PolicyType entities so that they become available for use in ONAP by applications such as CLAMP. Some Policy Type entities are preloaded in the Policy Framework.

Warning

If a TOSCA entity (Data Type, Policy Type, or Policy with a certain version) already exists in the database and an attempt is made to re-create the entity with different fields, the API will reject the request with the error message “entity in incoming fragment does not equal existing entity”. In such cases, delete the Policy or Policy Type and re-create it using the API.

The TOSCA fields below are valid on API calls:

Field | GET | POST | DELETE | Comment
----- | --- | ---- | ------ | -------
(name) | M | M | M | The definition of the reference to the Policy Type; GET allows ranges to be specified
version | O | M | C | GET allows ranges to be specified; must be specified if more than one version of the Policy Type exists and a specific version is required
description | R | O | N/A | Description of the Policy Type
derived_from | R | C | N/A | Must be specified when a Policy Type is derived from another Policy Type, such as in the case of derived Monitoring Policy Types. The referenced Policy Type must either already exist in the database or be defined as another policy type in the incoming TOSCA service template
metadata | R | O | N/A | Metadata for the Policy Type
properties | R | M | N/A | This field holds the specification of the specific Policy Type in ONAP. Any user-defined data types specified on properties must either already exist in the database or be defined in the incoming TOSCA service template
targets | R | O | N/A | A list of node types and/or group types to which the Policy Type can be applied
triggers | R | O | N/A | Specification of policy triggers; not currently supported in ONAP

Note

On this and subsequent tables, use the following legend: M-Mandatory, O-Optional, R-Read-only, C-Conditional. Conditional means the field is mandatory when some other field is present.

Note

Preloaded policy types may only be queried over this API, modification or deletion of preloaded policy type implementations is disabled.

Note

Policy types that are in use (referenced by defined Policies and/or child policy types) may not be deleted.

Note

The group types of targets in TOSCA are groups of TOSCA nodes, not PDP groups; the target concept in TOSCA is equivalent to the Policy Enforcement Point (PEP) concept.

To ease policy creation, we preload several widely used policy types in the policy database. Below is a table listing the preloaded policy types.

Policy Type Name | Payload
---------------- | -------
Monitoring.TCA | onap.policies.monitoring.tcagen2.yaml
Monitoring.Collectors | onap.policies.monitoring.dcaegen2.collectors.datafile.datafile-app-server.yaml
Optimization | onap.policies.Optimization.yaml
Optimization.Resource | onap.policies.optimization.Resource.yaml
Optimization.Resource.AffinityPolicy | onap.policies.optimization.resource.AffinityPolicy.yaml
Optimization.Resource.DistancePolicy | onap.policies.optimization.resource.DistancePolicy.yaml
Optimization.Resource.HpaPolicy | onap.policies.optimization.resource.HpaPolicy.yaml
Optimization.Resource.OptimizationPolicy | onap.policies.optimization.resource.OptimizationPolicy.yaml
Optimization.Resource.PciPolicy | onap.policies.optimization.resource.PciPolicy.yaml
Optimization.Resource.Vim_fit | onap.policies.optimization.resource.Vim_fit.yaml
Optimization.Resource.VnfPolicy | onap.policies.optimization.resource.VnfPolicy.yaml
Optimization.Service | onap.policies.optimization.Service.yaml
Optimization.Service.QueryPolicy | onap.policies.optimization.service.QueryPolicy.yaml
Optimization.Service.SubscriberPolicy | onap.policies.optimization.service.SubscriberPolicy.yaml
Controlloop.Guard.Common | onap.policies.controlloop.guard.Common.yaml
Controlloop.Guard.Common.Blacklist | onap.policies.controlloop.guard.common.Blacklist.yaml
Controlloop.Guard.Common.FrequencyLimiter | onap.policies.controlloop.guard.common.FrequencyLimiter.yaml
Controlloop.Guard.Common.MinMax | onap.policies.controlloop.guard.common.MinMax.yaml
Controlloop.Guard.Common.Filter | onap.policies.controlloop.guard.common.Filter.yaml
Controlloop.Guard.Coordination.FirstBlocksSecond | onap.policies.controlloop.guard.coordination.FirstBlocksSecond.yaml
Controlloop.Operational.Common | onap.policies.controlloop.operational.Common.yaml
Controlloop.Operational.Common.Apex | onap.policies.controlloop.operational.common.Apex.yaml
Controlloop.Operational.Common.Drools | onap.policies.controlloop.operational.common.Drools.yaml
Naming | onap.policies.Naming.yaml
Native.Drools | onap.policies.native.Drools.yaml
Native.Xacml | onap.policies.native.Xacml.yaml
Native.Apex | onap.policies.native.Apex.yaml

We also preload a policy in the policy database. Below is a table listing the preloaded policy.

Policy Name | Payload
----------- | -------
SDNC.Naming | sdnc.policy.naming.input.tosca.yaml

Below is a table containing sample well-formed TOSCA compliant policies.

Policy Name | Payload
----------- | -------
vCPE.Monitoring.Tosca | vCPE.policy.monitoring.input.tosca.yaml vCPE.policy.monitoring.input.tosca.json
vCPE.Optimization.Tosca | vCPE.policies.optimization.input.tosca.yaml vCPE.policies.optimization.input.tosca.json
vCPE.Operational.Tosca | vCPE.policy.operational.input.tosca.yaml vCPE.policy.operational.input.tosca.json
vDNS.Guard.FrequencyLimiting.Tosca | vDNS.policy.guard.frequencylimiter.input.tosca.yaml
vDNS.Guard.MinMax.Tosca | vDNS.policy.guard.minmaxvnfs.input.tosca.yaml
vDNS.Guard.Blacklist.Tosca | vDNS.policy.guard.blacklist.input.tosca.yaml
vDNS.Monitoring.Tosca | vDNS.policy.monitoring.input.tosca.yaml vDNS.policy.monitoring.input.tosca.json
vDNS.Operational.Tosca | vDNS.policy.operational.input.tosca.yaml vDNS.policy.operational.input.tosca.json
vFirewall.Monitoring.Tosca | vFirewall.policy.monitoring.input.tosca.yaml vFirewall.policy.monitoring.input.tosca.json
vFirewall.Operational.Tosca | vFirewall.policy.operational.input.tosca.yaml vFirewall.policy.operational.input.tosca.json
vFirewallCDS.Operational.Tosca | vFirewallCDS.policy.operational.input.tosca.yaml

Below is a global API table from which the Swagger JSON for each type of policy design API can be downloaded.

Global API Table

API name | Swagger JSON
-------- | ------------
Healthcheck API | link
Statistics API | link
Tosca Policy Type API | link
Tosca Policy API | link

API Swagger

It is worth noting that we use basic authorization for API access, with username and password set to healthcheck and zb!XztG34, respectively. The APIs support both http and https.

For every API call, the client is encouraged to insert a UUID-type requestID as a parameter. It is helpful for tracking each http transaction and facilitates debugging. Most importantly, it complies with Logging requirements v1.2. If a client does not provide the requestID in an API call, one will be randomly generated and attached to the response header, x-onap-requestid.
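
For example, a client can pass its own request ID as a header on any call; here is a sketch against the healthcheck endpoint, with {ip} and {port} as placeholders as in the sample commands later in this document:

curl --user 'healthcheck:zb!XztG34' -X GET "http://{ip}:{port}/policy/api/v1/healthcheck" -H "Accept: application/json" -H "X-ONAP-RequestID: e1763e61-9eef-4911-b952-1be1edd9812b"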

In accordance with ONAP API Common Versioning Strategy Guidelines, in the response of each API call, several custom headers are added:

x-latestversion: 1.0.0
x-minorversion: 0
x-patchversion: 0
x-onap-requestid: e1763e61-9eef-4911-b952-1be1edd9812b

x-latestversion is used only to communicate an API's latest version.

x-minorversion is used to request or communicate a MINOR version back from the client to the server, and from the server back to the client.

x-patchversion is used only to communicate a PATCH version in a response for troubleshooting purposes, and will not be provided by the client on request.

x-onap-requestid is used to track REST transactions for logging purpose, as described above.

HealthCheck

GET /policy/api/v1/healthcheck

Perform a system healthcheck

  • Description: Returns healthy status of the Policy API component

  • Produces: [‘application/json’, ‘application/yaml’]

Parameters

Name | Position | Description | Type
---- | -------- | ----------- | ----
X-ONAP-RequestID | header | RequestID for http transaction | string

Responses

200 - successful operation; Healthcheck report will be returned.

401 - Authentication Error

403 - Authorization Error

500 - Internal Server Error

Statistics

GET /policy/api/v1/statistics

Retrieve current statistics

  • Description: Returns current statistics including the counters of API invocation

  • Produces: [‘application/json’, ‘application/yaml’]

Parameters

Name | Position | Description | Type
---- | -------- | ----------- | ----
X-ONAP-RequestID | header | RequestID for http transaction | string

Responses

200 - successful operation; All statistics counters of API invocation will be returned.

401 - Authentication Error

403 - Authorization Error

500 - Internal Server Error

PolicyType

GET /policy/api/v1/policytypes

Retrieve existing policy types

  • Description: Returns a list of existing policy types stored in Policy Framework

  • Produces: [‘application/json’, ‘application/yaml’]

Parameters

Name | Position | Description | Type
---- | -------- | ----------- | ----
X-ONAP-RequestID | header | RequestID for http transaction | string

Responses

200 - successful operation; All policy types will be returned.

401 - Authentication Error

403 - Authorization Error

500 - Internal Server Error

POST /policy/api/v1/policytypes

Create a new policy type

  • Description: Create a new policy type. Client should provide TOSCA body of the new policy type

  • Consumes: [‘application/json’, ‘application/yaml’]

  • Produces: [‘application/json’, ‘application/yaml’]

Parameters

Name | Position | Description | Type
---- | -------- | ----------- | ----
body | body | Entity body of policy type | ToscaServiceTemplate
X-ONAP-RequestID | header | RequestID for http transaction | string

Responses

200 - successful operation; The newly created policy type will be returned.

400 - Invalid Body

401 - Authentication Error

403 - Authorization Error

406 - Not Acceptable Version

500 - Internal Server Error

GET /policy/api/v1/policytypes/{policyTypeId}

Retrieve all available versions of a policy type

  • Description: Returns a list of all available versions for the specified policy type

  • Produces: [‘application/json’, ‘application/yaml’]

Parameters

Name | Position | Description | Type
---- | -------- | ----------- | ----
policyTypeId | path | ID of policy type | string
X-ONAP-RequestID | header | RequestID for http transaction | string

Responses

200 - successful operation; All versions of specified policy type will be returned.

401 - Authentication Error

403 - Authorization Error

404 - Resource Not Found

500 - Internal Server Error

GET /policy/api/v1/policytypes/{policyTypeId}/versions/{versionId}

Retrieve one particular version of a policy type

  • Description: Returns a particular version for the specified policy type

  • Produces: [‘application/json’, ‘application/yaml’]

Parameters

Name | Position | Description | Type
---- | -------- | ----------- | ----
policyTypeId | path | ID of policy type | string
versionId | path | Version of policy type | string
X-ONAP-RequestID | header | RequestID for http transaction | string

Responses

200 - successful operation; One specified version of specified policy type will be returned.

401 - Authentication Error

403 - Authorization Error

404 - Resource Not Found

500 - Internal Server Error

DELETE /policy/api/v1/policytypes/{policyTypeId}/versions/{versionId}

Delete one version of a policy type

  • Description: Delete one version of a policy type. It must follow two rules. Rule 1: pre-defined policy types cannot be deleted; Rule 2: policy types that are in use (parameterized by a TOSCA policy) cannot be deleted. The parameterizing TOSCA policies must be deleted first.

  • Produces: [‘application/json’, ‘application/yaml’]

Parameters

Name | Position | Description | Type
---- | -------- | ----------- | ----
policyTypeId | path | ID of policy type | string
versionId | path | Version of policy type | string
X-ONAP-RequestID | header | RequestID for http transaction | string

Responses

200 - successful operation; Newly deleted policy type will be returned.

401 - Authentication Error

403 - Authorization Error

404 - Resource Not Found

409 - Delete Conflict, Rule Violation

500 - Internal Server Error

GET /policy/api/v1/policytypes/{policyTypeId}/versions/latest

Retrieve latest version of a policy type

  • Description: Returns latest version for the specified policy type

  • Produces: [‘application/json’, ‘application/yaml’]

Parameters

Name | Position | Description | Type
---- | -------- | ----------- | ----
policyTypeId | path | ID of policy type | string
X-ONAP-RequestID | header | RequestID for http transaction | string

Responses

200 - successful operation; Latest version of specified policy type will be returned.

401 - Authentication Error

403 - Authorization Error

404 - Resource Not Found

500 - Internal Server Error

Policy

GET /policy/api/v1/policytypes/{policyTypeId}/versions/{policyTypeVersion}/policies

Retrieve all versions of a policy created for a particular policy type version

  • Description: Returns a list of all versions of specified policy created for the specified policy type version

  • Produces: [‘application/json’, ‘application/yaml’]

Parameters

Name | Position | Description | Type
---- | -------- | ----------- | ----
policyTypeId | path | ID of policy type | string
policyTypeVersion | path | Version of policy type | string
X-ONAP-RequestID | header | RequestID for http transaction | string

Responses

200 - successful operation; All policies matching specified policy type will be returned.

401 - Authentication Error

403 - Authorization Error

404 - Resource Not Found

500 - Internal Server Error

POST /policy/api/v1/policytypes/{policyTypeId}/versions/{policyTypeVersion}/policies

Create a new policy for a policy type version

  • Description: Create a new policy for a policy type. Client should provide TOSCA body of the new policy

  • Consumes: [‘application/json’, ‘application/yaml’]

  • Produces: [‘application/json’, ‘application/yaml’]

Parameters

Name | Position | Description | Type
---- | -------- | ----------- | ----
policyTypeId | path | ID of policy type | string
policyTypeVersion | path | Version of policy type | string
X-ONAP-RequestID | header | RequestID for http transaction | string
body | body | Entity body of policy | ToscaServiceTemplate

Responses

200 - successful operation; Newly created policy matching specified policy type will be returned.

400 - Invalid Body

401 - Authentication Error

403 - Authorization Error

404 - Resource Not Found

406 - Not Acceptable Version

500 - Internal Server Error

GET /policy/api/v1/policytypes/{policyTypeId}/versions/{policyTypeVersion}/policies/{policyId}

Retrieve all version details of a policy created for a particular policy type version

  • Description: Returns a list of all version details of the specified policy

  • Produces: [‘application/json’, ‘application/yaml’]

Parameters

Name | Position | Description | Type
---- | -------- | ----------- | ----
policyTypeId | path | ID of policy type | string
policyTypeVersion | path | Version of policy type | string
policyId | path | ID of policy | string
X-ONAP-RequestID | header | RequestID for http transaction | string

Responses

200 - successful operation; All versions of specified policy matching specified policy type will be returned.

401 - Authentication Error

403 - Authorization Error

404 - Resource Not Found

500 - Internal Server Error

GET /policy/api/v1/policytypes/{policyTypeId}/versions/{policyTypeVersion}/policies/{policyId}/versions/{policyVersion}

Retrieve one version of a policy created for a particular policy type version

  • Description: Returns a particular version of specified policy created for the specified policy type version

  • Produces: [‘application/json’, ‘application/yaml’]

Parameters

Name | Position | Description | Type
---- | -------- | ----------- | ----
policyTypeId | path | ID of policy type | string
policyTypeVersion | path | Version of policy type | string
policyId | path | ID of policy | string
policyVersion | path | Version of policy | string
X-ONAP-RequestID | header | RequestID for http transaction | string

Responses

200 - successful operation; The specified policy matching specified policy type will be returned.

401 - Authentication Error

403 - Authorization Error

404 - Resource Not Found

500 - Internal Server Error

DELETE /policy/api/v1/policytypes/{policyTypeId}/versions/{policyTypeVersion}/policies/{policyId}/versions/{policyVersion}

Delete a particular version of a policy

  • Description: Delete a particular version of a policy. It must follow one rule. Rule: the version that has been deployed in PDP group(s) cannot be deleted

  • Produces: [‘application/json’, ‘application/yaml’]

Parameters

Name | Position | Description | Type
---- | -------- | ----------- | ----
policyTypeId | path | ID of policy type | string
policyTypeVersion | path | Version of policy type | string
policyId | path | ID of policy | string
policyVersion | path | Version of policy | string
X-ONAP-RequestID | header | RequestID for http transaction | string

Responses

200 - successful operation; Newly deleted policy matching specified policy type will be returned.

401 - Authentication Error

403 - Authorization Error

404 - Resource Not Found

409 - Delete Conflict, Rule Violation

500 - Internal Server Error

GET /policy/api/v1/policytypes/{policyTypeId}/versions/{policyTypeVersion}/policies/{policyId}/versions/latest

Retrieve the latest version of a particular policy

  • Description: Returns the latest version of specified policy

  • Produces: [‘application/json’, ‘application/yaml’]

Parameters

Name | Position | Description | Type
---- | -------- | ----------- | ----
policyTypeId | path | ID of policy type | string
policyTypeVersion | path | Version of policy type | string
policyId | path | ID of policy | string
X-ONAP-RequestID | header | RequestID for http transaction | string

Responses

200 - successful operation; Latest version of specified policy matching specified policy type will be returned.

401 - Authentication Error

403 - Authorization Error

404 - Resource Not Found

500 - Internal Server Error

GET /policy/api/v1/policies/{policyId}/versions/{policyVersion}

Retrieve specific version of a specified policy

  • Description: Returns a particular version of specified policy

  • Consumes: [‘application/json’, ‘application/yaml’]

  • Produces: [‘application/json’, ‘application/yaml’]

Parameters

Name | Position | Description | Type
---- | -------- | ----------- | ----
policyId | path | Name of policy | string
policyVersion | path | Version of policy | string
X-ONAP-RequestID | header | RequestID for http transaction | string
mode | query | Fetch mode for policies: BARE for bare policies (default), REFERENCED for fully referenced policies | string

Responses

200 - successful operation

401 - Authentication Error

403 - Authorization Error

404 - Resource Not Found

500 - Internal Server Error

DELETE /policy/api/v1/policies/{policyId}/versions/{policyVersion}

Delete a particular version of a policy

  • Description: Rule: the version that has been deployed in PDP group(s) cannot be deleted

  • Consumes: [‘application/json’, ‘application/yaml’]

  • Produces: [‘application/json’, ‘application/yaml’]

Parameters

Name | Position | Description | Type
---- | -------- | ----------- | ----
policyId | path | ID of policy | string
policyVersion | path | Version of policy | string
X-ONAP-RequestID | header | RequestID for http transaction | string

Responses

200 - successful operation

401 - Authentication Error

403 - Authorization Error

404 - Resource Not Found

409 - Delete Conflict, Rule Violation

500 - Internal Server Error

GET /policy/api/v1/policies

Retrieve all versions of available policies

  • Description: Returns all versions of available policies

  • Consumes: [‘application/json’, ‘application/yaml’]

  • Produces: [‘application/json’, ‘application/yaml’]

Parameters

Name | Position | Description | Type
---- | -------- | ----------- | ----
X-ONAP-RequestID | header | RequestID for http transaction | string
mode | query | Fetch mode for policies: BARE for bare policies (default), REFERENCED for fully referenced policies | string

Responses

200 - successful operation

401 - Authentication Error

403 - Authorization Error

404 - Resource Not Found

500 - Internal Server Error

POST /policy/api/v1/policies

Create one or more new policies

  • Description: Create one or more new policies. Client should provide TOSCA body of the new policies

  • Consumes: [‘application/json’, ‘application/yaml’]

  • Produces: [‘application/json’, ‘application/yaml’]

Parameters

Name | Position | Description | Type
---- | -------- | ----------- | ----
X-ONAP-RequestID | header | RequestID for http transaction | string
body | body | Entity body of policies | ToscaServiceTemplate

Responses

200 - successful operation; Newly created policies will be returned.

400 - Invalid Body

401 - Authentication Error

403 - Authorization Error

404 - Resource Not Found

406 - Not Acceptable Version

500 - Internal Server Error

When making a POST policy API call, the client must not only provide well-formed JSON/YAML, but must also conform to the TOSCA specification. For example, the "type" field for a TOSCA policy should strictly match the name of the policy type from which it is derived. Please check out the sample policies in the policy table above.

Also, in the POST payload passed into each policy or policy type creation call (i.e. POST API invocation), the client needs to explicitly specify the version of the policy or policy type to create. That is, the "version" field is mandatory in the TOSCA service template formatted policy or policy type payload. If the version is missing, the POST call will return "406 - Not Acceptable" and the policy or policy type will not be stored in the database.

To avoid inconsistent versions between the database and policies deployed in the PDPs, the policy API REST service employs enforcement rules that validate the version specified in the POST payload when a new version is to be created or an existing version updated. Policy API will not blindly override the version of the policy or policy type to create or update. Instead, we encourage the client to carefully select a version for the policy or policy type to change; the policy API will check the validity of that version and return an informative warning to the client if the specified version is not valid. To be specific, the following rules are implemented to enforce the version:

  1. If the incoming version is not in the database, we simply insert it. For example: if policy version 1.0.0 is stored in the database and now a client wants to create the same policy with updated version 3.0.0, this POST call will succeed and return “200” to the client.

  2. If the incoming version is already in the database and the incoming payload is different from the same version in the database, “406 - Not Acceptable” will be returned. This forces the client to update the version of the policy if the policy is changed.

  3. If a client creates a version of a policy and wishes to update a property on the policy, they must delete that version of the policy and re-create it.

  4. If multiple policies are included in the POST payload, the policy API will also check whether duplicate versions exist between any two policies or policy types provided in the payload. For example, if a client provides a POST payload that includes two policies with the same name and version but different policy properties, the POST call will fail and return a "406" error to the calling application along with a message such as "duplicate policy {name}:{version} found in the payload".

  5. The same version validation is applied to policy types too.

  6. To avoid unnecessary id/version inconsistency between the ones specified in the entity fields and the ones returned in the metadata field, "policy-id" and "policy-version" in the metadata will only be set by the policy API. Any incoming explicit specification in the POST payload will be ignored. For example, a POST payload has a policy with name "sample-policy-name1" and version "1.0.0" specified. In this policy, the metadata also includes "policy-id": "sample-policy-name2" and "policy-version": "2.0.0". The 200 return of this POST call will have this created policy with metadata including "policy-id": "sample-policy-name1" and "policy-version": "1.0.0" (see the sketch below).
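
Below is a YAML sketch of rule 6; the policy type shown is only an example:

# POST payload fragment: the client tries to set the metadata explicitly
sample-policy-name1:
  type: onap.policies.monitoring.tcagen2
  type_version: 1.0.0
  version: 1.0.0
  metadata:
    policy-id: sample-policy-name2    # ignored by the API
    policy-version: 2.0.0             # ignored by the API

# 200 response fragment: the metadata is set by the API from the entity fields
sample-policy-name1:
  metadata:
    policy-id: sample-policy-name1
    policy-version: 1.0.0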

Regarding DELETE APIs for TOSCA compliant policies, we only expose APIs to delete one particular version of a policy or policy type at a time, for safety purposes. If a client needs to delete multiple policies or policy types, or a group of them, they must be deleted one by one (see the sketch below).
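
Since only one version can be deleted per call, removing several versions of a policy is just a sequence of DELETE calls; a sketch, with a hypothetical policy name and versions, and {ip}:{port} as placeholders:

for v in 1.0.0 2.0.0 3.0.0; do
  curl --user 'healthcheck:zb!XztG34' -X DELETE \
    "http://{ip}:{port}/policy/api/v1/policies/sample-policy-name1/versions/${v}" \
    -H "Accept: application/json"
done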

Sample API Curl Commands

From an API client perspective, using http or https does not make much difference to the curl command. Here we list some sample curl commands (using http) for POST, GET, and DELETE of the monitoring and operational policies used in the vFirewall use case. The JSON payload for the POST calls can be downloaded from the policy table above.

If you are accessing the API from within the container network, the default address would be https://policy-api:6969/policy/api/v1/.

Create vFirewall Monitoring Policy::

curl --user 'healthcheck:zb!XztG34' -X POST "http://{ip}:{port}/policy/api/v1/policytypes/onap.policies.monitoring.tcagen2/versions/1.0.0/policies" -H "Accept: application/json" -H "Content-Type: application/json" -d @vFirewall.policy.monitoring.input.tosca.json

Get vFirewall Monitoring Policy::

curl --user 'healthcheck:zb!XztG34' -X GET "http://{ip}:{port}/policy/api/v1/policytypes/onap.policies.monitoring.tcagen2/versions/1.0.0/policies/onap.vfirewall.tca/versions/1.0.0" -H "Accept: application/json" -H "Content-Type: application/json"

Delete vFirewall Monitoring Policy::

curl --user 'healthcheck:zb!XztG34' -X DELETE "http://{ip}:{port}/policy/api/v1/policytypes/onap.policies.monitoring.tcagen2/versions/1.0.0/policies/onap.vfirewall.tca/versions/1.0.0" -H "Accept: application/json" -H "Content-Type: application/json"

Create vFirewall Operational Policy::

curl --user 'healthcheck:zb!XztG34' -X POST "http://{ip}:{port}/policy/api/v1/policytypes/onap.policies.controlloop.operational.common.Drools/versions/1.0.0/policies" -H "Accept: application/json" -H "Content-Type: application/json" -d @vFirewall.policy.operational.input.tosca.json

Get vFirewall Operational Policy::

curl --user 'healthcheck:zb!XztG34' -X GET "http://{ip}:{port}/policy/api/v1/policytypes/onap.policies.controlloop.operational.common.Drools/versions/1.0.0/policies/operational.modifyconfig/versions/1.0.0" -H "Accept: application/json" -H "Content-Type: application/json"

Delete vFirewall Operational Policy::

curl --user 'healthcheck:zb!XztG34' -X DELETE "http://{ip}:{port}/policy/api/v1/policytypes/onap.policies.controlloop.operational.common.Drools/versions/1.0.0/policies/operational.modifyconfig/versions/1.0.0" -H "Accept: application/json" -H "Content-Type: application/json"

Get all available policies::

curl --user 'healthcheck:zb!XztG34' -X GET "http://{ip}:{port}/policy/api/v1/policies" -H "Accept: application/json" -H "Content-Type: application/json"

Get version 1.0.0 of vFirewall Monitoring Policy::

curl --user 'healthcheck:zb!XztG34' -X GET "http://{ip}:{port}/policy/api/v1/policies/onap.vfirewall.tca/versions/1.0.0" -H "Accept: application/json" -H "Content-Type: application/json"
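
The mode query parameter described earlier also applies here; for example, to fetch the fully referenced form of the same policy:

curl --user 'healthcheck:zb!XztG34' -X GET "http://{ip}:{port}/policy/api/v1/policies/onap.vfirewall.tca/versions/1.0.0?mode=REFERENCED" -H "Accept: application/json"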

Delete version 1.0.0 of vFirewall Monitoring Policy::

curl --user 'healthcheck:zb!XztG34' -X DELETE "http://{ip}:{port}/policy/api/v1/policies/onap.vfirewall.tca/versions/1.0.0" -H "Accept: application/json" -H "Content-Type: application/json"

Policy Administration Point (PAP) Architecture

The Internal Policy Framework PAP-PDP API

This page describes the API between the PAP and PDPs. The APIs in this section are implemented using DMaaP API messaging. The APIs in this section are used for internal communication in the Policy Framework. The APIs are NOT supported for use by components outside the Policy Framework and are subject to revision and change at any time.

There are four messages on the API:

  1. PDP_STATUS: PDP→PAP, used by PDPs to report to the PAP

  2. PDP_UPDATE: PAP→PDP, used by the PAP to update the policies running on PDPs, triggers a PDP_STATUS message with the result of the PDP_UPDATE operation

  3. PDP_STATE_CHANGE: PAP→PDP, used by the PAP to change the state of PDPs, triggers a PDP_STATUS message with the result of the PDP_STATE_CHANGE operation

  4. PDP_HEALTH_CHECK: PAP→PDP, used by the PAP to order a health check on PDPs, triggers a PDP_STATUS message with the result of the PDP_HEALTH_CHECK operation

The fields in the table below are valid on API calls:

Field | PDP STATUS | PDP UPDATE | PDP STATE CHANGE | PDP HEALTH CHECK | Comment
----- | ---------- | ---------- | ---------------- | ---------------- | -------
(message_name) | M | M | M | M | pdp_status, pdp_update, pdp_state_change, or pdp_health_check
name | M | M | C | C | The name of the PDP; for state changes and health checks, the PDP group and subgroup can be used to specify the scope of the operation
version | M | N/A | N/A | N/A | The version of the PDP
pdp_type | M | M | N/A | N/A | The type of the PDP, currently xacml, drools, or apex
state | M | N/A | M | N/A | The administrative state of the PDP group: PASSIVE, SAFE, TEST, ACTIVE, or TERMINATED
healthy | M | N/A | N/A | N/A | The result of the latest health check on the PDP: HEALTHY/NOT_HEALTHY/TEST_IN_PROGRESS
description | O | O | N/A | N/A | The description of the PDP
pdp_group | O | M | C | C | The PDP group to which the PDP belongs; the PDP group and subgroup can be used to specify the scope of the operation
pdp_subgroup | O | M | C | C | The PDP subgroup to which the PDP belongs; the PDP group and subgroup can be used to specify the scope of the operation
supported_policy_types | M | N/A | N/A | N/A | A list of the policy types supported by the PDP
policies | O | M | N/A | N/A | The list of policies running on the PDP
->(name) | O | M | N/A | N/A | The name of a TOSCA policy running on the PDP
->policy_type | O | M | N/A | N/A | The TOSCA policy type of the policy
->policy_type_version | O | M | N/A | N/A | The version of the TOSCA policy type of the policy
->properties | O | M | N/A | N/A | The properties of the policy; see the XACML, Drools, or APEX PDP documentation for details
instance | M | N/A | N/A | N/A | The instance ID of the PDP running in a Kubernetes Pod
deployment_instance_info | M | N/A | N/A | N/A | Information on the node running the PDP
properties | O | O | N/A | N/A | Other properties specific to the PDP
statistics | M | N/A | N/A | N/A | Statistics on policy execution in the PDP
->policy_download_count | M | N/A | N/A | N/A | The number of policies downloaded into the PDP
->policy_download_success_count | M | N/A | N/A | N/A | The number of policies successfully downloaded into the PDP
->policy_download_fail_count | M | N/A | N/A | N/A | The number of policies downloaded into the PDP where the download failed
->policy_executed_count | M | N/A | N/A | N/A | The number of policy executions on the PDP
->policy_executed_success_count | M | N/A | N/A | N/A | The number of policy executions on the PDP that completed successfully
->policy_executed_fail_count | M | N/A | N/A | N/A | The number of policy executions on the PDP that failed
response | O | N/A | N/A | N/A | The response to the last operation that the PAP executed on the PDP
->response_to | M | N/A | N/A | N/A | The PAP to PDP message to which this is a response
->response_status | M | N/A | N/A | N/A | SUCCESS or FAIL
->response_message | O | N/A | N/A | N/A | Message giving further information on the successful or failed operation

YAML is used for illustrative purposes in the examples in this section. JSON (application/json) is used as the content type in the implementation of this API.

1 PAP API for PDPs

The purpose of this API is for PDPs to provide heartbeat, status, health, and statistical information to Policy Administration. There is a single PDP_STATUS message on this API. PDPs send this message to the PAP using the POLICY_PDP_PAP DMaaP topic. The PAP listens on this topic for messages.

When a PDP starts, it commences periodic sending of PDP_STATUS messages on DMaaP. The PAP receives these messages and acts in whatever manner is appropriate. PDP_UPDATE, PDP_STATE_CHANGE, and PDP_HEALTH_CHECK operations trigger a PDP_STATUS message as a response.

The PDP_STATUS message is used for PDP heartbeat monitoring. A PDP sends a PDP_STATUS message with a state of TERMINATED when it terminates normally. If a PDP_STATUS message is not received from a PDP periodically, or in response to a pdp_update, pdp_state_change, or pdp_health_check message within a configurable time period, then the PAP assumes the PDP has failed.

A PDP may be preconfigured with its PDP group, PDP subgroup, and policies. If the PDP group, subgroup, or any policy sent to the PAP in a PDP_STATUS message is unknown to the PAP, the PAP locks the PDP in state PASSIVE.
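
For illustration, a minimal sketch of the final heartbeat a PDP might send when shutting down normally (most fields omitted for brevity):

pdp_status:
  name: xacml_1
  version: 1.2.3
  pdp_type: xacml
  state: terminated
  healthy: true
  # remaining fields omitted for brevity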

PDP_STATUS message from an XACML PDP running control loop policies
pdp_status:
  name: xacml_1
  version: 1.2.3
  pdp_type: xacml
  state: active
  healthy: true
  description: XACML PDP running control loop policies
  pdp_group: onap.pdpgroup.controlloop.operational
  pdp_subgroup: xacml
  supported_policy_types:
    - onap.policies.controlloop.guard.FrequencyLimiter
    - onap.policies.controlloop.guard.BlackList
    - onap.policies.controlloop.guard.MinMax
  policies:
    - onap.policies.controlloop.guard.frequencylimiter.EastRegion:
        policy_type: onap.policies.controlloop.guard.FrequencyLimiter
        policy_type_version: 1.0.0
        properties:
          # Omitted for brevity
    - onap.policies.controlloop.guard.blacklist.eastRegion:
        policy_type: onap.policies.controlloop.guard.BlackList
        policy_type_version: 1.0.0
        properties:
          # Omitted for brevity
    - onap.policies.controlloop.guard.minmax.eastRegion:
        policy_type: onap.policies.controlloop.guard.MinMax
        policy_type_version: 1.0.0
        properties:
          # Omitted for brevity
  instance: xacml_1
  deployment_instance_info:
    node_address: xacml_1_pod
    # Other deployment instance info
  statistics:
    policy_download_count: 0
    policy_download_success_count: 0
    policy_download_fail_count: 0
    policy_executed_count: 123
    policy_executed_success_count: 122
    policy_executed_fail_count: 1
PDP_STATUS message from a Drools PDP running control loop policies
pdp_status:
  name: drools_2
  version: 2.3.4
  pdp_type: drools
  state: safe
  healthy: true
  description: Drools PDP running control loop policies
  pdp_group: onap.pdpgroup.controlloop.operational
  pdp_subgroup: drools
  supported_policy_types:
    - onap.controllloop.operational.drools.vCPE
    - onap.controllloop.operational.drools.vFW
  policies:
    - onap.controllloop.operational.drools.vcpe.EastRegion:
        policy_type: onap.controllloop.operational.drools.vCPE
        policy_type_version: 1.0.0
        properties:
          # Omitted for brevity
    - onap.controllloop.operational.drools.vfw.EastRegion:
        policy_type: onap.controllloop.operational.drools.vFW
        policy_type_version: 1.0.0
        properties:
          # Omitted for brevity
  instance: drools_2
  deployment_instance_info:
    node_address: drools_2_pod
    # Other deployment instance info
  statistics:
    policy_download_count: 3
    policy_download_success_count: 3
    policy_download_fail_count: 0
    policy_executed_count: 123
    policy_executed_success_count: 122
    policy_executed_fail_count: 1
  response:
    response_to: PDP_HEALTH_CHECK
    response_status: SUCCESS
PDP_STATUS message from an APEX PDP running control loop policies
pdp_status:
  name: apex_3
  version: 2.3.4
  pdp_type: apex
  state: safe
  healthy: true
  description: APEX PDP running control loop policies
  pdp_group: onap.pdpgroup.controlloop.operational
  pdp_subgroup: apex
  supported_policy_types:
    - onap.controllloop.operational.apex.BBS
  policies:
    - onap.controllloop.operational.apex.bbs.EastRegion:
        policy_type: onap.controllloop.operational.apex.BBS
        policy_type_version: 1.0.0
        properties:
          # Omitted for brevity
  instance: apex_3
  deployment_instance_info:
    node_address: apex_3_pod
    # Other deployment instance info
  statistics:
    policy_download_count: 3
    policy_download_success_count: 3
    policy_download_fail_count: 0
    policy_executed_count: 123
    policy_executed_success_count: 122
    policy_executed_fail_count: 1
  response:
    response_to: PDP_HEALTH_CHECK
    response_status: SUCCESS
PDP_STATUS message from an XACML PDP running monitoring policies
pdp_status:
  name: xacml_1
  version: 1.2.3
  pdp_type: xacml
  state: active
  healthy: true
  description: XACML PDP running monitoring policies
  pdp_group: onap.pdpgroup.Monitoring
  pdp_subgroup: xacml
  supported_policy_types:
    - onap.monitoring.tcagen2
  policies:
    - onap.scaleout.tca:
        policy_type: onap.policies.monitoring.tcagen2
        policy_type_version: 1.0.0
        properties:
          # Omitted for brevity
  instance: xacml_1
  deployment_instance_info:
    node_address: xacml_1_pod
    # Other deployment instance info
  statistics:
    policy_download_count: 0
    policy_download_success_count: 0
    policy_download_fail_count: 0
    policy_executed_count: 123
    policy_executed_success_count: 122
    policy_executed_fail_count: 1
2 PDP API for PAPs

The purpose of this API is for the PAP to load and update policies on PDPs and to change the state of PDPs. It also allows the PAP to order health checks to run on PDPs. The PAP sends PDP_UPDATE, PDP_STATE_CHANGE, and PDP_HEALTH_CHECK messages to PDPs using the POLICY_PAP_PDP DMaaP topic. PDPs listen on this topic for messages.

The PAP can set the scope of PDP_STATE_CHANGE and PDP_HEALTH_CHECK messages:

  • PDP Group: If a PDP group is specified in a message, then the PDPs in that PDP group respond to the message and all other PDPs ignore it.

  • PDP Group and subgroup: If a PDP group and subgroup are specified in a message, then only the PDPs of that subgroup in the PDP group respond to the message and all other PDPs ignore it.

  • Single PDP: If the name of a PDP is specified in a message, then only that PDP responds to the message and all other PDPs ignore it.

Note: PDP_UPDATE messages must be issued individually to PDPs because the PDP_UPDATE operation can change the PDP group to which a PDP belongs.

2.1 PDP Update

The PDP_UPDATE operation allows the PAP to modify the PDP group to which a PDP belongs and the policies in a PDP.

The following examples illustrate how the operation is used.

PDP_UPDATE message to upgrade XACML PDP control loop policies to version 1.0.1
pdp_update:
  name: xacml_1
  pdp_type: xacml
  description: XACML PDP running control loop policies, Upgraded
  pdp_group: onap.pdpgroup.controlloop.operational
  pdp_subgroup: xacml
  policies:
    - onap.policies.controlloop.guard.frequencylimiter.EastRegion:
        policy_type: onap.policies.controlloop.guard.FrequencyLimiter
        policy_type_version: 1.0.1
        properties:
          # Omitted for brevity
    - onap.policies.controlloop.guard.blackList.EastRegion:
        policy_type: onap.policies.controlloop.guard.BlackList
        policy_type_version: 1.0.1
        properties:
          # Omitted for brevity
    - onap.policies.controlloop.guard.minmax.EastRegion:
        policy_type: onap.policies.controlloop.guard.MinMax
        policy_type_version: 1.0.1
        properties:
          # Omitted for brevity
PDP_UPDATE message to a Drools PDP to add an extra control loop policy
pdp_update:
  name: drools_2
  pdp_type: drools
  description: Drools PDP running control loop policies, extra policy added
  pdp_group: onap.pdpgroup.controlloop.operational
  pdp_subgroup: drools
  policies:
    - onap.controllloop.operational.drools.vcpe.EastRegion:
        policy_type: onap.controllloop.operational.drools.vCPE
        policy_type_version: 1.0.0
        properties:
          # Omitted for brevity
    - onap.controllloop.operational.drools.vfw.EastRegion:
        policy_type: onap.controllloop.operational.drools.vFW
        policy_type_version: 1.0.0
        properties:
          # Omitted for brevity
    - onap.controllloop.operational.drools.vfw.WestRegion:
        policy_type: onap.controllloop.operational.drools.vFW
        policy_type_version: 1.0.0
        properties:
          # Omitted for brevity
PDP_UPDATE message to an APEX PDP to remove a control loop policy
pdp_update:
  name: apex_3
  pdp_type: apex
  description: APEX PDP updated to remove a control loop policy
  pdp_group: onap.pdpgroup.controlloop.operational
  pdp_subgroup: apex
  policies:
    - onap.controllloop.operational.apex.bbs.EastRegion:
        policy_type: onap.controllloop.operational.apex.BBS
        policy_type_version: 1.0.0
        properties:
          # Omitted for brevity
2.2 PDP State Change

The PDP_STATE_CHANGE operation allows the PAP to order state changes on PDPs in PDP groups and subgroups. The following examples illustrate how the operation is used.

Change the state of all control loop Drools PDPs to ACTIVE
pdp_state_change:
  state: active
  pdp_group: onap.pdpgroup.controlloop.Operational
  pdp_subgroup: drools
Change the state of all monitoring PDPs to SAFE
pdp_state_change:
  state: safe
  pdp_group: onap.pdpgroup.Monitoring
Change the state of a single APEX PDP to TEST
pdp_state_change:
  state: test
  name: apex_3
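
As noted earlier, JSON (application/json) is the content type used in the implementation; as a sketch, the same single-PDP state change would be carried as:

{
  "pdp_state_change": {
    "state": "test",
    "name": "apex_3"
  }
}
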
2.3 PDP Health Check

The PDP_HEALTH_CHECK operation allows the PAP to order health checks on PDPs in PDP groups and subgroups. The following examples illustrate how the operation is used.

Perform a health check on all control loop Drools PDPs
pdp_health_check:
  pdp_group: onap.pdpgroup.controlloop.Operational
  pdp_subgroup: drools
Perform a health check on all monitoring PDPs
pdp_health_check:
  pdp_group: onap.pdpgroup.Monitoring
Perform a health check on a single APEX PDP
pdp_health_check:
  name: apex_3

The Policy Administration Point (PAP) keeps track of PDPs, supporting the deployment of PDP groups and the deployment of policies across those PDP groups. Policies are created using the Policy API, but are deployed via the PAP.

The PAP is stateless in a RESTful sense, using the database (persistent storage) to track PDPs and the deployment of policies to those PDPs. In short, policy management on PDPs is the responsibility of the PAP; management of policies in any other manner is not permitted.

Because the PDP is the main unit of scalability in the Policy Framework, the framework is designed to allow PDPs in a PDP group to arbitrarily appear and disappear and for policy consistency across all PDPs in a PDP group to be easily maintained. The PAP is responsible for controlling the state across the PDPs in a PDP group. The PAP interacts with the policy database and transfers policies to PDPs.

The unit of execution and scaling in the Policy Framework is a PolicyImpl entity. A PolicyImpl entity runs on a PDP. As is explained above, a PolicyImpl entity is a PolicyTypeImpl implementation parameterized with a TOSCA Policy.

_images/PolicyImplPDPSubGroup.svg

In order to achieve horizontal scalability, we group the PDPs running instances of a given PolicyImpl entity logically together into a PDPSubGroup. The number of PDPs in a PDPSubGroup can then be scaled up and down using Kubernetes. In other words, all PDPs in a subgroup run the same PolicyImpl, that is the same policy template implementation (in XACML, Drools, or APEX) with the same parameters.
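
For example, if a PDP subgroup is realized as a Kubernetes Deployment, scaling it is an ordinary replica change; a sketch, with a hypothetical deployment name:

kubectl scale deployment my-apex-pdp-subgroup --replicas=3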

The figure above shows the layout of PDPGroup and PDPSubGroup entities. The figure shows examples of PDP groups for Control Loop and Monitoring policies on the right.

The health of PDPs is monitored by the PAP in order to alert operations teams managing policies. The PAP manages the life cycle of policies running on PDPs.

The table below shows the deployment methods in which PolicyImpl entities can be deployed to PDP Subgroups.

Method | Description | Advantages | Disadvantages
------ | ----------- | ---------- | -------------
Cold | The PolicyImpl (PolicyTypeImpl and TOSCA Policy) is predeployed on the PDP; the PDP is fully configured and ready to execute when started. PDPs register with the PAP when they start, providing the pdpGroup they have been preconfigured with. | No run time configuration is required and run time administration is simple. | Very restrictive; no run time configuration of PDPs is possible.
Warm | The PolicyTypeImpl entity is predeployed on the PDP. A TOSCA Policy may be loaded at startup. The PDP may be configured or reconfigured with a new or updated TOSCA Policy at run time. PDPs register with the PAP when they start, providing the pdpGroup they have been predeployed with, if any. The PAP may update the TOSCA Policy on a PDP at any time after registration. | The configuration, parameters, and PDP group of PDPs may be changed at run time by loading or updating a TOSCA Policy into the PDP. TOSCA Policy entity life cycle management is supported, allowing features such as PolicyImpl Safe Mode and PolicyImpl retirement. | Administration and management are required; the configuration and life cycle of the TOSCA policies can change at run time and must be administered and managed.
Hot | The PolicyImpl (PolicyTypeImpl and TOSCA Policy) is deployed at run time and may be loaded at startup. The PDP may be configured or reconfigured with a new or updated PolicyTypeImpl and/or TOSCA Policy at run time. PDPs register with the PAP when they start, providing the pdpGroup they have been preconfigured with, if any. The PAP may update the TOSCA Policy and PolicyTypeImpl on a PDP at any time after registration. | The policy logic, rules, configuration, parameters, and PDP group of PDPs may be changed at run time by loading or updating a TOSCA Policy and PolicyTypeImpl into the PDP. Life cycle management of TOSCA Policy entities and PolicyTypeImpl entities is supported, allowing features such as PolicyImpl Safe Mode and PolicyImpl retirement. | Administration and management are more complex; the PolicyImpl itself and its configuration and life cycle, as well as the life cycle of the TOSCA policies, can change at run time and must be administered and managed.

1 APIs

The APIs in the subchapters below are supported by the PAP.

1.1 REST API

The purpose of this API is to support CRUD of PDP groups and subgroups and to support the deployment and life cycles of policies on PDP subgroups and PDPs. This API is provided by the PolicyAdministration component (PAP) of the Policy Framework; see the ONAP Policy Framework Architecture page.

PDP groups and subgroups may be predefined in the system. Predefined groups and subgroups may be modified or deleted over this API. The policies running on predefined groups or subgroups, as well as the instance counts and properties, may also be modified.

A PDP may be preconfigured with its PDP group, PDP subgroup, and policies. The PDP sends this information to the PAP when it starts. If the PDP group, subgroup, or any policy is unknown to the PAP, the PAP locks the PDP in state PASSIVE.

PAP supports the operations listed in the following table, via its REST API:

Operation | Description
--------- | -----------
Health check | Queries the health of the PAP
Consolidated healthcheck | Queries the health of all policy components
Statistics | Queries various statistics
PDP state change | Changes the state of all PDPs in a PDP Group
PDP Group create/update | Creates/updates PDP Groups
PDP Group delete | Deletes a PDP Group
PDP Group query | Queries all PDP Groups
Deployment update | Deploys/undeploys one or more policies in specified PdpGroups
Deploy policy | Deploys one or more policies to the PDPs
Undeploy policy | Undeploys a policy from the PDPs
Policy Status | Queries the status of all policies
Policy deployment status | Queries the status of all deployed policies
PDP statistics | Queries the statistics of PDPs

1.2 DMaaP API

PAP interacts with the PDPs via the DMaaP Message Router. The messages listed in the following table are transmitted via DMaaP:

Message | Direction | Description
------- | --------- | -----------
PDP status | Incoming | Registers a PDP with PAP; also sent as a periodic heart beat; also sent in response to requests from the PAP
PDP update | Outgoing | Assigns a PDP to a PDP Group and Subgroup; also deploys or undeploys policies from the PDP
PDP state change | Outgoing | Changes the state of a PDP or all PDPs within a PDP Group or Subgroup

In addition, PAP generates notifications via the DMaaP Message Router when policies are successfully or unsuccessfully deployed (or undeployed) from all relevant PDPs.

Here is a sample notification:

{
    "deployed-policies": [
        {
            "policy-type": "onap.policies.monitoring.tcagen2",
            "policy-type-version": "1.0.0",
            "policy-id": "onap.scaleout.tca",
            "policy-version": "2.0.0",
            "success-count": 3,
            "failure-count": 0
        }
    ],
    "undeployed-policies": [
        {
            "policy-type": "onap.policies.monitoring.tcagen2",
            "policy-type-version": "1.0.0",
            "policy-id": "onap.firewall.tca",
            "policy-version": "6.0.0",
            "success-count": 3,
            "failure-count": 0
        }
    ]
}

2 PAP REST API Swagger

It is worth noting that we use basic authorization for access, with username and password set to healthcheck and zb!XztG34, respectively.

For every call, the client is encouraged to insert a UUID-type requestID as a parameter. It is helpful for tracking each http transaction and facilitates debugging. More importantly, it complies with Logging requirements v1.2. If the client does not provide the requestID in a call, one will be randomly generated and attached to the response header, x-onap-requestid.

In accordance with ONAP API Common Versioning Strategy Guidelines, several custom headers are added in the response to each call:

Header | Example value | Description
------ | ------------- | -----------
x-latestversion | 1.0.0 | latest version of the API
x-minorversion | 0 | MINOR version of the API
x-patchversion | 0 | PATCH version of the API
x-onap-requestid | e1763e61-9eef-4911-b952-1be1edd9812b | described above; used for logging purposes

Download Health Check PAP API Swagger

HealthCheck

GET /policy/pap/v1/healthcheck

Perform healthcheck

  • Description: Returns healthy status of the Policy Administration component

  • Produces: [‘application/json’]

Parameters: none

Responses

200 - successful operation

401 - Authentication Error

403 - Authorization Error

500 - Internal Server Error

This operation performs a health check on the PAP.

Here is a sample response:

{
    "code": 200,
    "healthy": true,
    "message": "alive",
    "name": "Policy PAP",
    "url": "self"
}

Download Consolidated Health Check PAP API Swagger

Consolidated Healthcheck

GET /policy/pap/v1/components/healthcheck

Returns health status of all policy components, including PAP, API, Distribution, and PDPs

  • Description: Queries health status of all policy components, returning all policy components health status

  • Produces: [‘application/json’, ‘application/yaml’]

Parameters

Name | Position | Description | Type
---- | -------- | ----------- | ----
X-ONAP-RequestID | header | RequestID for http transaction | string

Responses

200 - successful operation

401 - Authentication Error

403 - Authorization Error

500 - Internal Server Error

This operation performs a health check of all policy components. The response contains the health check result of each component. The consolidated health check is reported as healthy only if all the components are healthy, otherwise the “healthy” flag is marked as false.

Here is a sample response:

{
  "pdps": {
    "xacml": [
      {
        "instanceId": "dev-policy-xacml-pdp-5b6697c845-9j8lb",
        "pdpState": "ACTIVE",
        "healthy": "HEALTHY"
      }
    ],
    "drools": [
      {
        "instanceId": "dev-drools-0",
        "pdpState": "ACTIVE",
        "healthy": "HEALTHY"
      }
    ],
    "apex": [
      {
        "instanceId": "dev-policy-apex-pdp-0",
        "pdpState": "ACTIVE",
        "healthy": "HEALTHY",
        "message": "Pdp Heartbeat"
      }
    ]
  },
  "healthy": true,
  "api": {
    "name": "Policy API",
    "url": "https://dev-policy-api-7fb479754f-7nr5s:6969/policy/api/v1/healthcheck",
    "healthy": true,
    "code": 200,
    "message": "alive"
  },
  "distribution": {
    "name": "Policy SSD",
    "url": "https://dev-policy-distribution-84854cd6c7-zn8vh:6969/healthcheck",
    "healthy": true,
    "code": 200,
    "message": "alive"
  },
  "pap": {
    "name": "Policy PAP",
    "url": "https://dev-pap-79fd8f78d4-hwx7j:6969/policy/pap/v1/healthcheck",
    "healthy": true,
    "code": 200,
    "message": "alive"
  }
}

Download Statistics PAP API Swagger

Statistics

GET /policy/pap/v1/statistics

Fetch current statistics

  • Description: Returns current statistics of the Policy Administration component

  • Produces: [‘application/json’]

Parameters: none

Responses

200 - successful operation

401 - Authentication Error

403 - Authorization Error

500 - Internal Server Error

This operation allows statistics for PDP groups, PDP subgroups, and individual PDPs to be retrieved.

Note

While this API is supported, most of the statistics are not currently updated; that work has been deferred to a later release.

Here is a sample response:

{
    "code": 200,
    "policyDeployFailureCount": 0,
    "policyDeploySuccessCount": 0,
    "policyDownloadFailureCount": 0,
    "policyDownloadSuccessCount": 0,
    "totalPdpCount": 0,
    "totalPdpGroupCount": 0,
    "totalPolicyDeployCount": 0,
    "totalPolicyDownloadCount": 0
}

Download State Change PAP Swagger

PdpGroup State Change

PUT /policy/pap/v1/pdps/groups/{name}

Change state of a PDP Group

  • Description: Changes state of PDP Group, returning optional error details

  • Produces: [‘application/json’]

Parameters

Name | Position | Description | Type
---- | -------- | ----------- | ----
X-ONAP-RequestID | header | RequestID for http transaction | string
name | path | PDP Group Name | string
state | query | PDP Group State | string

Responses

200 - successful operation

401 - Authentication Error

403 - Authorization Error

500 - Internal Server Error

The state of PDP groups is managed by this operation. PDP groups can be in states PASSIVE, TEST, SAFE, or ACTIVE. For a full description of PDP group states, see the ONAP Policy Framework Architecture page.
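For example, a PDP group can be activated with a curl call such as the sketch below; the host, port, and credentials are placeholders for your installation:

curl -k --user '{user_id}:{password}' -X PUT \
  'https://{pap-host}:6969/policy/pap/v1/pdps/groups/SampleGroup?state=ACTIVE'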

Download Group Batch PAP API Swagger

PdpGroup Create/Update

POST /policy/pap/v1/pdps/groups/batch

Create or update PDP Groups

  • Description: Create or update one or more PDP Groups, returning optional error details

  • Produces: [‘application/json’, ‘application/yaml’]

Parameters

  • X-ONAP-RequestID (header, string): RequestID for http transaction

  • body (body): List of PDP Group Configuration

Responses

200 - successful operation

401 - Authentication Error

403 - Authorization Error

500 - Internal Server Error

This operation allows PDP groups and subgroups to be created and updated. Many PDP groups can be created or updated in a single POST operation by specifying more than one PDP group in the POST body. A group is created by providing all its details, including the supported policy types for each subgroup. The operation cannot be used to update policies; that is done using one of the deployment requests, so the “policies” property of this request is ignored. The operation can also be used to update a PDP Group, but supported policy types cannot be changed once the group is created, so the “policies” and “supportedPolicyTypes” properties are ignored if provided during a PDP Group update.

The “desiredInstanceCount” specifies the minimum number of PDPs of the given type that should be registered with PAP. Currently, this is just used for health check purposes; if the number of PDPs registered with PAP drops below the given value, then PAP will return an “unhealthy” indicator if a “Consolidated Health Check” is performed.

Note

If a subgroup is to be deleted from a PDP Group, then the policies must be removed from the subgroup first.

Note

Policies cannot be added or updated during PDP Group create/update operations; if provided, they are ignored. Supported policy types are defined during PDP Group creation and cannot be updated afterwards, so supportedPolicyTypes are expected during PDP Group create but ignored if provided during PDP Group update.

Here is a sample request:

{
    "groups": [
        {
            "name": "SampleGroup",
            "pdpGroupState": "ACTIVE",
            "properties": {},
            "pdpSubgroups": [
                {
                    "pdpType": "apex",
                    "desiredInstanceCount": 2,
                    "properties": {},
                    "supportedPolicyTypes": [
                        {
                            "name": "onap.policies.controlloop.operational.common.Apex",
                            "version": "1.0.0"
                        }
                    ],
                    "policies": []
                },
                {
                    "pdpType": "xacml",
                    "desiredInstanceCount": 1,
                    "properties": {},
                    "supportedPolicyTypes": [
                        {
                            "name": "onap.policies.monitoring.tcagen2",
                            "version": "1.0.0"
                        }
                    ],
                    "policies": []
                }
            ]
        }
    ]
}

Download Group Delete PAP API Swagger

PdpGroup Delete

DELETE /policy/pap/v1/pdps/groups/{name}

Delete PDP Group

  • Description: Deletes a PDP Group, returning optional error details

  • Produces: [‘application/json’]

Parameters

  • X-ONAP-RequestID (header, string): RequestID for http transaction

  • name (path, string): PDP Group Name

Responses

200 - successful operation

401 - Authentication Error

403 - Authorization Error

500 - Internal Server Error

The API also allows PDP groups to be deleted. DELETE operations are only permitted on PDP groups in PASSIVE state.
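As a sketch, a group that has first been set to PASSIVE state can be deleted as follows (placeholder host and credentials):

curl -k --user '{user_id}:{password}' -X DELETE \
  'https://{pap-host}:6969/policy/pap/v1/pdps/groups/SampleGroup'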

Download Group Query PAP API Swagger

PdpGroup Query

GET /policy/pap/v1/pdps

Query details of all PDP groups

  • Description: Queries details of all PDP groups, returning all group details

  • Produces: [‘application/json’]

Parameters

  • X-ONAP-RequestID (header, string): RequestID for http transaction

Responses

200 - successful operation

401 - Authentication Error

403 - Authorization Error

500 - Internal Server Error

This operation allows the PDP groups and subgroups to be listed as well as the policies that are deployed on each PDP group and subgroup.

Here is a sample response:

{
    "groups": [
        {
            "description": "This group should be used for managing all control loop related policies and pdps",
            "name": "controlloop",
            "pdpGroupState": "ACTIVE",
            "pdpSubgroups": [
                {
                    "currentInstanceCount": 0,
                    "desiredInstanceCount": 1,
                    "pdpInstances": [],
                    "pdpType": "apex",
                    "policies": [],
                    "properties": {},
                    "supportedPolicyTypes": [
                        {
                            "name": "onap.policies.controlloop.operational.common.Apex",
                            "version": "1.0.0"
                        }
                    ]
                },
                {
                    "currentInstanceCount": 0,
                    "desiredInstanceCount": 1,
                    "pdpInstances": [],
                    "pdpType": "drools",
                    "policies": [],
                    "properties": {},
                    "supportedPolicyTypes": [
                        {
                            "name": "onap.policies.controlloop.operational.common.Drools",
                            "version": "1.0.0"
                        }
                    ]
                },
                {
                    "currentInstanceCount": 0,
                    "desiredInstanceCount": 1,
                    "pdpInstances": [],
                    "pdpType": "xacml",
                    "policies": [],
                    "properties": {},
                    "supportedPolicyTypes": [
                        {
                            "name": "onap.policies.controlloop.Guard",
                            "version": "1.0.0"
                        }
                    ]
                }
            ],
            "properties": {}
        },
        {
            "description": "This group should be used for managing all monitoring related policies and pdps",
            "name": "monitoring",
            "pdpGroupState": "ACTIVE",
            "pdpSubgroups": [
                {
                    "currentInstanceCount": 0,
                    "desiredInstanceCount": 1,
                    "pdpInstances": [],
                    "pdpType": "xacml",
                    "policies": [],
                    "properties": {},
                    "supportedPolicyTypes": [
                        {
                            "name": "onap.policies.Monitoring",
                            "version": "1.0.0"
                        }
                    ]
                }
            ],
            "properties": {}
        },
        {
            "description": "The default group that registers all supported policy types and pdps.",
            "name": "defaultGroup",
            "pdpGroupState": "ACTIVE",
            "pdpSubgroups": [
                {
                    "currentInstanceCount": 0,
                    "desiredInstanceCount": 1,
                    "pdpInstances": [],
                    "pdpType": "apex",
                    "policies": [],
                    "properties": {},
                    "supportedPolicyTypes": [
                        {
                            "name": "onap.policies.controlloop.operational.common.Apex",
                            "version": "1.0.0"
                        },
                        {
                            "name": "onap.policies.native.Apex",
                            "version": "1.0.0"
                        }
                    ]
                },
                {
                    "currentInstanceCount": 0,
                    "desiredInstanceCount": 1,
                    "pdpInstances": [],
                    "pdpType": "drools",
                    "policies": [],
                    "properties": {},
                    "supportedPolicyTypes": [
                        {
                            "name": "onap.policies.controlloop.operational.common.Drools",
                            "version": "1.0.0"
                        },
                        {
                            "name": "onap.policies.native.drools.Controller",
                            "version": "1.0.0"
                        },
                        {
                            "name": "onap.policies.native.drools.Artifact",
                            "version": "1.0.0"
                        }
                    ]
                },
                {
                    "currentInstanceCount": 0,
                    "desiredInstanceCount": 1,
                    "pdpInstances": [],
                    "pdpType": "xacml",
                    "policies": [],
                    "properties": {},
                    "supportedPolicyTypes": [
                        {
                            "name": "onap.policies.controlloop.guard.FrequencyLimiter",
                            "version": "1.0.0"
                        },
                        {
                            "name": "onap.policies.controlloop.guard.MinMax",
                            "version": "1.0.0"
                        },
                        {
                            "name": "onap.policies.controlloop.guard.Blacklist",
                            "version": "1.0.0"
                        },
                        {
                            "name": "onap.policies.controlloop.guard.coordination.FirstBlocksSecond",
                            "version": "1.0.0"
                        },
                        {
                            "name": "onap.Monitoring",
                            "version": "1.0.0"
                        },
                        {
                            "name": "onap.policies.monitoring.*",
                            "version": "1.0.0"
                        },
                        {
                            "name": "onap.policies.optimization.AffinityPolicy",
                            "version": "1.0.0"
                        },
                        {
                            "name": "onap.policies.optimization.DistancePolicy",
                            "version": "1.0.0"
                        },
                        {
                            "name": "onap.policies.optimization.HpaPolicy",
                            "version": "1.0.0"
                        },
                        {
                            "name": "onap.policies.optimization.OptimizationPolicy",
                            "version": "1.0.0"
                        },
                        {
                            "name": "onap.policies.optimization.PciPolicy",
                            "version": "1.0.0"
                        },
                        {
                            "name": "onap.policies.optimization.QueryPolicy",
                            "version": "1.0.0"
                        },
                        {
                            "name": "onap.policies.optimization.SubscriberPolicy",
                            "version": "1.0.0"
                        },
                        {
                            "name": "onap.policies.optimization.Vim_fit",
                            "version": "1.0.0"
                        },
                        {
                            "name": "onap.policies.optimization.VnfPolicy",
                            "version": "1.0.0"
                        }
                    ]
                }
            ],
            "properties": {}
        }
    ]
}

Download Deployments Batch PAP API Swagger

Deployments Update

POST /policy/pap/v1/pdps/deployments/batch

Updates policy deployments within specific PDP groups

  • Description: Updates policy deployments within specific PDP groups, returning optional error details

  • Produces: [‘application/json’, ‘application/yaml’]

Parameters

  • X-ONAP-RequestID (header, string): RequestID for http transaction

  • body (body): List of PDP Group Deployments

Responses

202 - operation accepted

401 - Authentication Error

403 - Authorization Error

500 - Internal Server Error

This operation allows policies to be deployed on specific PDP groups. Each subgroup includes an “action” property, which is used to indicate that the policies are being added (POST) to the subgroup, deleted (DELETE) from the subgroup, or that the subgroup’s entire set of policies is being replaced (PATCH) by a new set of policies. As such, a subgroup may appear more than once in a single request, one time to delete some policies and another time to add new policies to the same subgroup.

Here is a sample request:

{
    "groups": [
        {
            "name": "SampleGroup",
            "deploymentSubgroups": [
                {
                    "pdpType": "apex",
                    "action": "POST",
                    "policies": [
                        {
                            "name": "onap.policies.native.apex.bbs.EastRegion",
                            "version": "1.0.0"
                        }
                    ]
                }
            ]
        }
    ]
}

Here is a sample response:

{
    "message": "Use the policy status url to fetch the latest status. Kindly note that when a policy is successfully undeployed, it will no longer appear in policy status response.",
    "uri": "/policy/pap/v1/policies/status"
}

Download Deploy PAP API Swagger

Deploy Policy

POST /policy/pap/v1/pdps/policies

Deploy or update PDP Policies

  • Description: Deploys or updates PDP Policies, returning optional error details

  • Produces: [‘application/json’]

Parameters

  • X-ONAP-RequestID (header, string): RequestID for http transaction

  • body (body): PDP Policies; only the name is required

Responses

202 - operation accepted

401 - Authentication Error

403 - Authorization Error

500 - Internal Server Error

This operation allows policies to be deployed across all relevant PDP groups. PAP will deploy the specified policies to all relevant subgroups. Only the policies supported by a given subgroup will be deployed to that subgroup.

Note

The policy version is optional. If left unspecified, then the latest version of the policy is deployed. On the other hand, if it is specified, it may be an integer, or it may be a fully qualified version (e.g., “3.0.2”). In addition, a subgroup to which a policy is being deployed must have at least one PDP instance, otherwise the request will be rejected.

Here is a sample request:

{
  "policies": [
    {
      "policy-id": "onap.scaleout.tca",
      "policy-version": 1
    },
    {
      "policy-id": "ControlLoop-vDNS-6f37f56d-a87d-4b85-b6a9-cc953cf779b3"
    },
    {
      "policy-id": "guard.frequency.ControlLoop-vDNS-6f37f56d-a87d-4b85-b6a9-cc953cf779b3"
    },
    {
      "policy-id": "guard.minmax.ControlLoop-vDNS-6f37f56d-a87d-4b85-b6a9-cc953cf779b3"
    }
  ]
}

Here is a sample response:

{
    "message": "Use the policy status url to fetch the latest status. Kindly note that when a policy is successfully undeployed, it will no longer appear in policy status response.",
    "uri": "/policy/pap/v1/policies/status"
}

Download Undeploy PAP API Swagger

Undeploy Policy

DELETE /policy/pap/v1/pdps/policies/{name}

Undeploy a PDP Policy from PDPs

  • Description: Undeploys the latest version of a policy from the PDPs, returning optional error details

  • Produces: [‘application/json’]

Parameters

  • X-ONAP-RequestID (header, string): RequestID for http transaction

  • name (path, string): PDP Policy Name

Responses

202 - operation accepted

401 - Authentication Error

403 - Authorization Error

500 - Internal Server Error

DELETE /policy/pap/v1/pdps/policies/{name}/versions/{version}

Undeploy version of a PDP Policy from PDPs

  • Description: Undeploys a specific version of a policy from the PDPs, returning optional error details

  • Produces: [‘application/json’]

Parameters

  • X-ONAP-RequestID (header, string): RequestID for http transaction

  • name (path, string): PDP Policy Name

  • version (path, string): PDP Policy Version

Responses

202 - operation accepted

401 - Authentication Error

403 - Authorization Error

500 - Internal Server Error

This operation allows policies to be undeployed from PDP groups.

Note

If the policy version is specified, then it may be an integer, or it may be a fully qualified version (e.g., “3.0.2”). On the other hand, if left unspecified, then the latest deployed version will be undeployed.

Note

Due to current limitations, a fully qualified policy version must always be specified.
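For example, undeploying a specific version of a policy might look like the following sketch, with placeholder host and credentials and a fully qualified version:

curl -k --user '{user_id}:{password}' -X DELETE \
  'https://{pap-host}:6969/policy/pap/v1/pdps/policies/onap.scaleout.tca/versions/1.0.0'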

Here is a sample response:

{
    "message": "Use the policy status url to fetch the latest status. Kindly note that when a policy is successfully undeployed, it will no longer appear in policy status response.",
    "uri": "/policy/pap/v1/policies/status"
}

Download Policy Status PAP API Swagger

Policy Status

GET /policy/pap/v1/policies/status

Queries status of policies in all PdpGroups

  • Description: Queries status of policies in all PdpGroups, returning status of policies in all the PDPs belonging to all PdpGroups

  • Produces: [‘application/json’, ‘application/yaml’]

Parameters

  • X-ONAP-RequestID (header, string): RequestID for http transaction

Responses

200 - successful operation

401 - Authentication Error

403 - Authorization Error

500 - Internal Server Error

GET /policy/pap/v1/policies/status/{pdpGroupName}

Queries status of policies in a specific PdpGroup

  • Description: Queries status of policies in a specific PdpGroup, returning status of policies in all the PDPs belonging to the PdpGroup

  • Produces: [‘application/json’, ‘application/yaml’]

Parameters

  • pdpGroupName (path, string): Name of the PdpGroup

  • X-ONAP-RequestID (header, string): RequestID for http transaction

Responses

200 - successful operation

401 - Authentication Error

403 - Authorization Error

404 - Resource not found

500 - Internal Server Error

GET /policy/pap/v1/policies/status/{pdpGroupName}/{policyName}

Queries status of all versions of a specific policy in a specific PdpGroup

  • Description: Queries status of all versions of a specific policy in a specific PdpGroup, returning status of all versions of the policy in the PDPs belonging to the PdpGroup

  • Produces: [‘application/json’, ‘application/yaml’]

Parameters

  • pdpGroupName (path, string): Name of the PdpGroup

  • policyName (path, string): Name of the Policy

  • X-ONAP-RequestID (header, string): RequestID for http transaction

Responses

200 - successful operation

401 - Authentication Error

403 - Authorization Error

404 - Resource not found

500 - Internal Server Error

GET /policy/pap/v1/policies/status/{pdpGroupName}/{policyName}/{policyVersion}

Queries status of a specific version of a specific policy in a specific PdpGroup

  • Description: Queries status of a specific version of a specific policy in a specific PdpGroup, returning status of the policy in the PDPs belonging to the PdpGroup

  • Produces: [‘application/json’, ‘application/yaml’]

Parameters

  • pdpGroupName (path, string): Name of the PdpGroup

  • policyName (path, string): Name of the Policy

  • policyVersion (path, string): Version of the Policy

  • X-ONAP-RequestID (header, string): RequestID for http transaction

Responses

200 - successful operation

401 - Authentication Error

403 - Authorization Error

404 - Resource not found

500 - Internal Server Error

This operation allows the status of all policies, deployed or undeployed, to be listed together. The result can be filtered by PDP group name, policy name, and version.
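For example, the most specific form of the query might be invoked as in this sketch (placeholder host, credentials, and names):

curl -k --user '{user_id}:{password}' \
  'https://{pap-host}:6969/policy/pap/v1/policies/status/defaultGroup/onap.policies.apex.Controlloop/1.0.0'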

Note

When a policy is successfully undeployed, it will no longer appear in the policy status response.

Here is a sample response:

[
    {
        "pdpGroup": "defaultGroup",
        "pdpType": "apex",
        "pdpId": "policy-apex-pdp-0",
        "policy": {
            "name": "onap.policies.apex.Controlloop",
            "version": "1.0.0"
        },
        "policyType": {
            "name": "onap.policies.native.Apex",
            "version": "1.0.0"
        },
        "deploy": true,
        "state": "SUCCESS"
    },
    {
        "pdpGroup": "defaultGroup",
        "pdpType": "drools",
        "pdpId": "policy-drools-pdp-0",
        "policy": {
            "name": "OPERATIONAL_vFW_CDS_Service_v2_0_Drools_1_0_0_6SN",
            "version": "1.0.0"
        },
        "policyType": {
            "name": "onap.policies.controlloop.operational.common.Drools",
            "version": "1.0.0"
        },
        "deploy": true,
        "state": "SUCCESS"
    }
]

Download Deployed Policy PAP API Swagger

Policy Deployment Status

GET /policy/pap/v1/policies/deployed

Queries status of all deployed policies

  • Description: Queries status of all deployed policies, returning success and failure counts of the PDPs

  • Produces: [‘application/json’, ‘application/yaml’]

Parameters

  • X-ONAP-RequestID (header, string): RequestID for http transaction

Responses

200 - successful operation

401 - Authentication Error

403 - Authorization Error

500 - Internal Server Error

GET /policy/pap/v1/policies/deployed/{name}

Queries status of specific deployed policies

  • Description: Queries status of specific deployed policies, returning success and failure counts of the PDPs

  • Produces: [‘application/json’, ‘application/yaml’]

Parameters

  • name (path, string): Policy Id

  • X-ONAP-RequestID (header, string): RequestID for http transaction

Responses

200 - successful operation

401 - Authentication Error

403 - Authorization Error

500 - Internal Server Error

GET /policy/pap/v1/policies/deployed/{name}/{version}

Queries status of a specific deployed policy

  • Description: Queries status of a specific deployed policy, returning success and failure counts of the PDPs

  • Produces: [‘application/json’, ‘application/yaml’]

Parameters

  • name (path, string): Policy Id

  • version (path, string): Policy Version

  • X-ONAP-RequestID (header, string): RequestID for http transaction

Responses

200 - successful operation

401 - Authentication Error

403 - Authorization Error

500 - Internal Server Error

This operation allows the deployed policies to be listed together with their respective deployment status. The result can be filtered by policy name and version.

Here is a sample response:

[
  {
    "policy-type": "onap.policies.monitoring.tcagen2",
    "policy-type-version": "1.0.0",
    "policy-id": "MICROSERVICE_vFW_CDS_Service_v2_0_app_1_0_0_I95",
    "policy-version": "1.0.0",
    "success-count": 1,
    "failure-count": 0,
    "incomplete-count": 0
  },
  {
    "policy-type": "onap.policies.monitoring.tcagen2",
    "policy-type-version": "1.0.0",
    "policy-id": "MICROSERVICE_vFW_CDS_Service_v2_0_app_1_0_0_WNX",
    "policy-version": "1.0.0",
    "success-count": 1,
    "failure-count": 0,
    "incomplete-count": 0
  },
  {
    "policy-type": "onap.policies.controlloop.operational.common.Drools",
    "policy-type-version": "1.0.0",
    "policy-id": "OPERATIONAL_vFW_CDS_Service_v2_0_Drools_1_0_0_6SN",
    "policy-version": "1.0.0",
    "success-count": 1,
    "failure-count": 0,
    "incomplete-count": 0
  }
]

Download PDP Statistics PAP API Swagger

PDP Statistics

GET /policy/pap/v1/pdps/statistics

Fetch statistics for all PDP Groups and subgroups in the system

  • Description: Returns for all PDP Groups and subgroups statistics of the Policy Administration component

  • Produces: [‘application/json’, ‘application/yaml’]

Parameters

  • X-ONAP-RequestID (header, string): RequestID for http transaction

Responses

200 - successful operation

401 - Authentication Error

403 - Authorization Error

500 - Internal Server Error

GET /policy/pap/v1/pdps/statistics/{group}

Fetch current statistics for given PDP Group

  • Description: Returns statistics for given PDP Group of the Policy Administration component

  • Produces: [‘application/json’, ‘application/yaml’]

Parameters

  • X-ONAP-RequestID (header, string): RequestID for http transaction

  • group (path, string): PDP Group Name

Responses

200 - successful operation

401 - Authentication Error

403 - Authorization Error

500 - Internal Server Error

GET /policy/pap/v1/pdps/statistics/{group}/{type}

Fetch statistics for the specified subgroup

  • Description: Returns statistics for the specified subgroup of the Policy Administration component

  • Produces: [‘application/json’, ‘application/yaml’]

Parameters

  • X-ONAP-RequestID (header, string): RequestID for http transaction

  • group (path, string): PDP Group Name

  • type (path, string): PDP SubGroup type

Responses

200 - successful operation

401 - Authentication Error

403 - Authorization Error

500 - Internal Server Error

GET /policy/pap/v1/pdps/statistics/{group}/{type}/{pdp}

Fetch statistics for the specified pdp

  • Description: Returns statistics for the specified pdp of the Policy Administration component

  • Produces: [‘application/json’, ‘application/yaml’]

Parameters

  • X-ONAP-RequestID (header, string): RequestID for http transaction

  • group (path, string): PDP Group Name

  • type (path, string): PDP SubGroup type

  • pdp (path, string): PDP Instance name

  • recordCount (query, integer): Record Count

Responses

200 - successful operation

401 - Authentication Error

403 - Authorization Error

500 - Internal Server Error

This operation allows the PDP statistics to be retrieved for all registered PDPs. The result can be filtered by PDP group, PDP subgroup, and PDP instance.

Here is a sample response:

{
  "defaultGroup": {
    "apex": [
      {
        "pdpInstanceId": "dev-policy-apex-pdp-0",
        "timeStamp": "Apr 29, 2020, 6:15:29 PM",
        "pdpGroupName": "defaultGroup",
        "pdpSubGroupName": "apex",
        "policyDeployCount": 0,
        "policyDeploySuccessCount": 0,
        "policyDeployFailCount": 0,
        "policyExecutedCount": 0,
        "policyExecutedSuccessCount": 0,
        "policyExecutedFailCount": 0,
        "engineStats": []
      },
      {
        "pdpInstanceId": "dev-policy-apex-pdp-0",
        "timeStamp": "Apr 29, 2020, 6:17:29 PM",
        "pdpGroupName": "defaultGroup",
        "pdpSubGroupName": "apex",
        "policyDeployCount": 0,
        "policyDeploySuccessCount": 0,
        "policyDeployFailCount": 0,
        "policyExecutedCount": 0,
        "policyExecutedSuccessCount": 0,
        "policyExecutedFailCount": 0,
        "engineStats": []
      },
      {
        "pdpInstanceId": "dev-policy-apex-pdp-0",
        "timeStamp": "Apr 29, 2020, 6:19:29 PM",
        "pdpGroupName": "defaultGroup",
        "pdpSubGroupName": "apex",
        "policyDeployCount": 0,
        "policyDeploySuccessCount": 0,
        "policyDeployFailCount": 0,
        "policyExecutedCount": 0,
        "policyExecutedSuccessCount": 0,
        "policyExecutedFailCount": 0,
        "engineStats": []
      }
    ]
  }
}

3 Future Features

3.1 Disable policies in PDP

This operation will allow disabling individual policies running in PDP engine. It is mainly beneficial in scenarios where network operators/administrators want to disable a particular policy in PDP engine for a period of time due to a failure in the system or for scheduled maintenance.

End of Document

Decision API

The Decision API is used by ONAP components that enforce policies and need a decision on which policy to enforce for a specific situation. The Decision API closely mimics the XACML request standard in that it supports a subject, an action, and a resource.

  • ONAPName (required; XACML subject): The name of the ONAP project making the call

  • ONAPComponent (required; XACML subject): The name of the ONAP sub component making the call

  • ONAPInstance (optional; XACML subject): An optional instance ID for that sub component

  • action (required; XACML action): The action being performed

  • resource (required; XACML resource): An object specific to the action that contains properties describing the resource
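For illustration, a minimal decision request built from these fields might look like the sketch below; the values, and the use of a policy-id filter in the resource object, are assumptions for a hypothetical caller rather than a normative example:

{
  "ONAPName": "DCAE",
  "ONAPComponent": "PolicyHandler",
  "ONAPInstance": "622431a4-9dea-4eae-b443-3b2164639c64",
  "action": "configure",
  "resource": {
    "policy-id": "onap.scaleout.tca"
  }
}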

It is worth noting that basic authorization is used for API access, with the username and password set to healthcheck and zb!XztG34 respectively. Also, the new APIs support both http and https.

For every API call, the client is encouraged to insert a UUID-type requestID as a parameter. It is helpful for tracking each http transaction and facilitates debugging. Most importantly, it complies with Logging requirements v1.2. If the client does not provide the requestID in the API call, one will be randomly generated and attached to the response header x-onap-requestid.

In accordance with ONAP API Common Versioning Strategy Guidelines, in the response of each API call, several custom headers are added:

x-latestversion: 1.0.0
x-minorversion: 0
x-patchversion: 0
x-onap-requestid: e1763e61-9eef-4911-b952-1be1edd9812b

x-latestversion is used only to communicate an API’s latest version.

x-minorversion is used to request or communicate a MINOR version back from the client to the server, and from the server back to the client.

x-patchversion is used only to communicate a PATCH version in a response for troubleshooting purposes, and will be provided to the client on request.

x-onap-requestid is used to track REST transactions for logging purpose, as described above.
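For example, a client can supply its own request ID on any call; in this sketch the host, credentials, and UUID are placeholders:

curl -k --user '{user_id}:{password}' \
  -H 'X-ONAP-RequestID: e1763e61-9eef-4911-b952-1be1edd9812b' \
  'https://{pdpx-host}:6969/policy/pdpx/v1/healthcheck'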

Download the Decision API Swagger

HealthCheck

GET /policy/pdpx/v1/healthcheck

Perform a system healthcheck

  • Description: Provides healthy status of the Policy Xacml PDP component

  • Consumes: [‘application/json’, ‘application/yaml’]

  • Produces: [‘application/json’, ‘application/yaml’]

Parameters

  • X-ONAP-RequestID (header, string): RequestID for http transaction

Responses

200 - successful operation

401 - Authentication Error

403 - Authorization Error

500 - Internal Server Error

Decision

POST /policy/pdpx/v1/xacml

Fetch the decision using specified decision parameters

  • Description: Returns the policy decision from Policy Xacml PDP

  • Consumes: [‘application/xacml+json’, ‘application/xacml+xml’]

  • Produces: [‘application/xacml+json’, ‘application/xacml+xml’]

Parameters

  • body (body)

  • X-ONAP-RequestID (header, string): RequestID for http transaction

Responses

200 - successful operation

400 - Bad Request

401 - Authentication Error

403 - Authorization Error

500 - Internal Server Error

POST /policy/pdpx/v1/decision

Fetch the decision using specified decision parameters

  • Description: Returns the policy decision from Policy Xacml PDP

  • Consumes: [‘application/json’, ‘application/yaml’]

  • Produces: [‘application/json’, ‘application/yaml’]

Parameters

  • body (body)

  • X-ONAP-RequestID (header, string): RequestID for http transaction

Responses

200 - successful operation

400 - Bad Request

401 - Authentication Error

403 - Authorization Error

500 - Internal Server Error

Statistics

GET /policy/pdpx/v1/statistics

Fetch current statistics

  • Description: Provides current statistics of the Policy Xacml PDP component

  • Consumes: [‘application/json’, ‘application/yaml’]

  • Produces: [‘application/json’, ‘application/yaml’]

Parameters

  • X-ONAP-RequestID (header, string): RequestID for http transaction

Responses

200 - successful operation

401 - Authentication Error

403 - Authorization Error

500 - Internal Server Error

End of Document

Postman Environment for API Testing

The following Postman environment file can be used for testing the APIs. All you need to do is fill in the IP and port information for the installation that you have created.

link

Postman Collection for API Testing

Postman collection for Policy Framework Lifecycle API

Postman collection for Policy Framework Administration API

Postman collection for Policy Framework Decision API

API Swagger Generation

The standard for API definition in the RESTful API world is the OpenAPI Specification (OAS). The OAS, which is based on the original “Swagger Specification”, is widely used in API development.

Execute the curl command below to generate the swagger document, filling in the authorization details and the IP and port information:

curl -k --user '{user_id}:{password}' https://{ip}:{port}/swagger.json

Policy Component Installation

Policy OOM Installation

Policy OOM Charts

The policy K8S charts are located in the OOM repository.

Please refer to the OOM documentation on how to install and deploy ONAP.

Policy Pods

To get a listing of the Policy Pods, run the following command:

kubectl get pods -n onap | grep dev-policy

dev-policy-59684c7b9c-5gd6r                        2/2     Running            0          8m41s
dev-policy-apex-pdp-0                              1/1     Running            0          8m41s
dev-policy-api-56f55f59c5-nl5cg                    1/1     Running            0          8m41s
dev-policy-distribution-54cc59b8bd-jkg5d           1/1     Running            0          8m41s
dev-policy-mariadb-0                               1/1     Running            0          8m41s
dev-policy-xacml-pdp-765c7d58b5-l6pr7              1/1     Running            0          8m41s

Note

To get a listing of the Policy services, run this command: kubectl get svc -n onap | grep policy

Accessing Policy Containers

Accessing the policy docker containers is the same as for any kubernetes container. Here is an example:

kubectl -n onap exec -it dev-policy-policy-xacml-pdp-584844b8cf-9zptx bash

Installing or Upgrading Policy

The assumption is that you have cloned the charts from the OOM repository into a local directory.

Step 1 Go into local copy of OOM charts

From your local copy, edit any of the values.yaml files in the policy tree to make desired changes.

Step 2 Build the charts

make policy
make SKIP_LINT=TRUE onap

Note

SKIP_LINT is only to reduce the “make” time

Step 3 Undeploy Policy

After undeploying policy, loop on monitoring the policy pods until they go away.

helm del --purge dev-policy
kubectl get pods -n onap | grep dev-policy

Step 4 Delete NFS persisted data for Policy

rm -fr /dockerdata-nfs/dev/policy

Step 5 Make sure there is no orphan policy database persistent volume or claim.

First, find if there is an orphan database PV or PVC with the following commands:

kubectl get pvc -n onap | grep policy
kubectl get pv -n onap | grep policy

If there are any orphan resources, delete them with

kubectl delete pvc <orphan-policy-mariadb-resource>
kubectl delete pv <orphan-policy-mariadb-resource>

Step 6 Re-Deploy Policy pods

After deploying policy, loop on monitoring the policy pods until they come up.

helm deploy dev-policy local/onap --namespace onap
kubectl get pods -n onap | grep dev-policy

Restarting a faulty component

Each policy component can be restarted independently by issuing the following command:

kubectl delete pod <policy-pod> -n onap

Exposing ports

For security reasons, the ports for the policy containers are configured as ClusterIP and thus not exposed. If you find you need those ports in a development environment, then the following will expose them.

kubectl -n onap expose service policy-api --port=7171 --target-port=6969 --name=api-public --type=NodePort
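Alternatively, for ad-hoc access during development, port-forwarding avoids creating an extra NodePort service; this sketch assumes the service is named policy-api:

kubectl -n onap port-forward svc/policy-api 6969:6969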

Overriding certificate stores

Policy components package default key and trust stores that support https-based communication with other AAF-enabled ONAP components. Each store can be overridden at installation.

To override a default keystore, the new certificate store (policy-keystore) file should be placed at the appropriate helm chart locations below:

  • oom/kubernetes/policy/charts/drools/resources/secrets/policy-keystore drools pdp keystore override.

  • oom/kubernetes/policy/charts/policy-apex-pdp/resources/config/policy-keystore apex pdp keystore override.

  • oom/kubernetes/policy/charts/policy-api/resources/config/policy-keystore api keystore override.

  • oom/kubernetes/policy/charts/policy-distribution/resources/config/policy-keystore distribution keystore override.

  • oom/kubernetes/policy/charts/policy-pap/resources/config/policy-keystore pap keystore override.

  • oom/kubernetes/policy/charts/policy-xacml-pdp/resources/config/policy-keystore xacml pdp keystore override.

In the event that the truststore (policy-truststore) needs to be overridden as well, place it at the appropriate location below:

  • oom/kubernetes/policy/charts/drools/resources/configmaps/policy-truststore drools pdp truststore override.

  • oom/kubernetes/policy/charts/policy-apex-pdp/resources/config/policy-truststore apex pdp truststore override.

  • oom/kubernetes/policy/charts/policy-api/resources/config/policy-truststore api truststore override.

  • oom/kubernetes/policy/charts/policy-distribution/resources/config/policy-truststore distribution truststore override.

  • oom/kubernetes/policy/charts/policy-pap/resources/config/policy-truststore pap truststore override.

  • oom/kubernetes/policy/charts/policy-xacml-pdp/resources/config/policy-truststore xacml pdp truststore override.

When the keystore passwords are changed, the corresponding component configuration (1) should also change:

  • oom/kubernetes/policy/charts/drools/values.yaml

  • oom/kubernetes/policy/charts/policy-apex-pdp/resources/config/config.json

  • oom/kubernetes/policy/charts/policy-distribution/resources/config/config.json

This procedure is applicable to an installation that requires either AAF or non-AAF derived certificates. The reader is referred to the AAF documentation when new AAF-compliant keystores are desired.

After these changes, follow the procedures in the Installing or Upgrading Policy section to make use of the new stores.
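Putting the steps together, a sketch of overriding the pap keystore and redeploying might look like this, assuming a local OOM clone and the dev-policy release name used elsewhere in this document:

cp my-policy-keystore oom/kubernetes/policy/charts/policy-pap/resources/config/policy-keystore
cd oom/kubernetes
make policy
make SKIP_LINT=TRUE onap
helm del --purge dev-policy
helm deploy dev-policy local/onap --namespace onap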

Additional PDP-D Customizations

Credentials and other configuration parameters can be set as values when deploying the policy (drools) subchart. Please refer to PDP-D Default Values for the current default values. It is strongly recommended that sensitive information is secured appropriately before using in production.

Additional customization can be applied to the PDP-D. Custom configuration goes under the “resources” directory of the drools subchart (oom/kubernetes/policy/charts/drools/resources). This requires rebuilding the policy subchart (see section Installing or Upgrading Policy).

Configuration is done by adding or modifying configmaps and/or secrets. Configmaps are placed under “drools/resources/configmaps”, and secrets under “drools/resources/secrets”.

Custom configuration supports these types of files:

  • *.conf files to support additional environment configuration.

  • features*.zip to add additional custom features.

  • *.pre.sh scripts to be executed before starting the PDP-D process.

  • *.post.sh scripts to be executed after starting the PDP-D process.

  • policy-keystore to override the PDP-D policy-keystore.

  • policy-truststore to override the PDP-D policy-truststore.

  • aaf-cadi.keyfile to override the PDP-D AAF key.

  • *.properties to override or add properties files.

  • *.xml to override or add xml configuration files.

  • *.json to override json configuration files.

  • *settings.xml to override the maven repositories configuration.

Examples

To disable AAF, simply override the “aaf.enabled” value when deploying the helm chart (see the OOM installation instructions mentioned above).

To override the PDP-D keystore or truststore, add suitable replacement(s) under “drools/resources/secrets”. Modify the drools chart values.yaml with the new credentials, and follow the procedures described in Installing or Upgrading Policy to redeploy the chart.

To disable https for the DMaaP configuration topic, add a copy of engine.properties with “dmaap.source.topics.PDPD-CONFIGURATION.https” set to “false”, or alternatively create a “.pre.sh” script (see above) that edits this file before the PDP-D is started.
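As a sketch, such a script (the file name and the exact property syntax are assumptions) could look like:

#!/bin/bash
# hypothetical https.pre.sh: force https off for the DMaaP configuration topic
# before the PDP-D process starts
sed -i 's/^dmaap.source.topics.PDPD-CONFIGURATION.https=.*/dmaap.source.topics.PDPD-CONFIGURATION.https=false/' \
    $POLICY_HOME/config/engine.properties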

To use noop topics for standalone testing, add a “noop.pre.sh” script under oom/kubernetes/policy/charts/drools/resources/configmaps/:

#!/bin/bash
sed -i "s/^dmaap/noop/g" $POLICY_HOME/config/*.properties

Footnotes

1

There is a limitation that store passwords are not configurable for policy-api, policy-pap, and policy-xacml-pdp.

Policy Docker Installation

Building the ONAP Policy Framework Docker Images

The instructions here are based on the instructions in the file ~/git/onap/policy/docker/README.md.

Step 1: Build the Policy API Docker image

cd ~/git/onap/policy/api/packages
mvn clean install -P docker

Step 2: Build the Policy PAP Docker image

cd ~/git/onap/policy/pap/packages
mvn clean install -P docker

Step 3: Build the Drools PDP docker image.

This image is a standalone vanilla Drools engine, which does not contain any pre-built drools rules or applications.

cd ~/git/onap/policy/drools-pdp/
mvn clean install -P docker

Step 4: Build the Drools Application Control Loop image.

This image has the drools use case application and the supporting software built together with the Drools PDP engine. It is recommended to use this image if you are working with ONAP Policy for the first time and wish to test or learn how the use cases work.

cd ~/git/onap/policy/drools-applications
mvn clean install -P docker

Step 5: Build the Apex PDP docker image:

cd ~/git/onap/policy/apex-pdp
mvn clean install -P docker

Step 6: Build the XACML PDP docker image:

cd ~/git/onap/policy/xacml-pdp/packages
mvn clean install -P docker

Step 7: Build the policy engine docker image (If working with the legacy Policy Architecture/API):

cd ~/git/onap/policy/engine/
mvn clean install -P docker

Step 8: Build the Policy SDC Distribution docker image:

cd ~/git/onap/policy/distribution/packages
mvn clean install -P docker

Starting the ONAP Policy Framework Docker Images

In order to run the containers, you can use docker-compose. This uses the docker-compose.yml file to bring up the ONAP Policy Framework. This file is located in the policy/docker repository.

Step 1: Set the environment variable MTU to be a suitable MTU size for the application.

export MTU=9126

Step 2: Determine if you want the legacy Policy Engine to have policies pre-loaded or not. By default, all the configuration and operational policies will be pre-loaded by the docker compose script. If you do not wish for that to happen, then export this variable:

Note

This applies ONLY to the legacy Engine and not the Policy Lifecycle policies

export PRELOAD_POLICIES=false

Step 3: Run the system using docker-compose. Note that on some systems you may have to run the docker-compose command as root or using sudo, and that the command can take a number of minutes to execute on a laptop or desktop computer.

docker-compose up -d

You now have a full standalone ONAP Policy framework up and running!

Policy Platform Development

Policy Platform Development Tools

This article explains how to build the ONAP Policy Framework for development purposes and how to run stability/performance tests for a variety of components. To start, the developer should consult the latest ONAP Wiki to familiarize themselves with developer best practices and how-tos to set up their environment; see https://wiki.onap.org/display/DW/Developer+Best+Practices.

This article assumes that:

  • You are using a *nix operating system such as linux or macOS.

  • You are using a directory called git off your home directory (~/git) for your git repositories

  • Your local maven repository is in the location ~/.m2/repository

  • You have copied the settings.xml from oparent to ~/.m2/ directory

  • You have added settings to access the ONAP Nexus to your M2 configuration, see Maven Settings Example (bottom of the linked page)

The procedure documented in this article has been verified to work on a MacBook laptop running macOS Mojave Version 10.14.6 and an Ubuntu 18.04 VM.

Cloning All The Policy Repositories

Run a script such as the script below to clone the required modules from the ONAP git repository. This script clones all the ONAP Policy Framework repositories.

ONAP Policy Framework has dependencies on the ONAP Parent oparent module, the ONAP ECOMP SDK ecompsdkos module, and the A&AI Schema module.

Typical ONAP Policy Framework Clone Script
#!/usr/bin/env bash

## script name for output
MOD_SCRIPT_NAME=`basename $0`

## the ONAP clone directory, defaults to "onap"
clone_dir="onap"

## the ONAP repos to clone
onap_repos="\
policy/parent \
policy/common \
policy/models \
policy/docker \
policy/api \
policy/pap \
policy/apex-pdp \
policy/drools-pdp \
policy/drools-applications \
policy/xacml-pdp \
policy/distribution \
policy/gui \
policy/engine "

##
## Help screen and exit condition (i.e. too few arguments)
##
Help()
{
    echo ""
    echo "$MOD_SCRIPT_NAME - clones all required ONAP git repositories"
    echo ""
    echo "       Usage:  $MOD_SCRIPT_NAME [-options]"
    echo ""
    echo "       Options"
    echo "         -d          - the ONAP clone directory, defaults to '.'"
    echo "         -h          - this help screen"
    echo ""
    exit 255;
}

##
## read command line
##
while [ $# -gt 0 ]
do
    case $1 in
        #-d ONAP clone directory
        -d)
            shift
            if [ -z "$1" ]; then
                echo "$MOD_SCRIPT_NAME: no clone directory"
                exit 1
            fi
            clone_dir=$1
            shift
        ;;

        #-h prints help and exits
        -h)
            Help;exit 0;;

        *)    echo "$MOD_SCRIPT_NAME: undefined CLI option - $1"; exit 255;;
    esac
done

if [ -f "$clone_dir" ]; then
    echo "$MOD_SCRIPT_NAME: requested clone directory '$clone_dir' exists as file"
    exit 2
fi
if [ -d "$clone_dir" ]; then
    echo "$MOD_SCRIPT_NAME: requested clone directory '$clone_dir' exists as directory"
    exit 2
fi

mkdir $clone_dir
if [ $? != 0 ]
then
    echo cannot clone ONAP repositories, could not create directory '"'$clone_dir'"'
    exit 3
fi

for repo in $onap_repos
do
    repoDir=`dirname "$repo"`
    repoName=`basename "$repo"`

    # create the parent directory for the repository if it does not yet exist
    if [ ! -d "$clone_dir/$repoDir" ]
    then
        mkdir -p "$clone_dir/$repoDir"
        if [ $? != 0 ]
        then
            echo cannot clone ONAP repositories, could not create directory '"'$clone_dir/$repoDir'"'
            exit 4
        fi
    fi

    git clone https://gerrit.onap.org/r/${repo} $clone_dir/$repo
done

echo ONAP has been cloned into '"'$clone_dir'"'

Execution of the script above results in the following directory hierarchy in your ~/git directory:

  • ~/git/onap

  • ~/git/onap/policy

  • ~/git/onap/policy/parent

  • ~/git/onap/policy/common

  • ~/git/onap/policy/models

  • ~/git/onap/policy/api

  • ~/git/onap/policy/pap

  • ~/git/onap/policy/gui

  • ~/git/onap/policy/docker

  • ~/git/onap/policy/drools-applications

  • ~/git/onap/policy/drools-pdp

  • ~/git/onap/policy/engine

  • ~/git/onap/policy/apex-pdp

  • ~/git/onap/policy/xacml-pdp

  • ~/git/onap/policy/distribution

Building ONAP Policy Framework Components

Step 1: Optionally, for a completely clean build, remove the previously built ONAP modules from your local repository.

rm -fr ~/.m2/repository/org/onap

Step 2: A pom such as the one below can be used to build the ONAP Policy Framework modules. Create the pom.xml file in the directory ~/git/onap/policy.

Typical pom.xml to build the ONAP Policy Framework
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>org.onap</groupId>
    <artifactId>onap-policy</artifactId>
    <version>1.0.0-SNAPSHOT</version>
    <packaging>pom</packaging>
    <name>${project.artifactId}</name>
    <inceptionYear>2017</inceptionYear>
    <organization>
        <name>ONAP</name>
    </organization>

    <modules>
        <module>parent</module>
        <module>common</module>
        <module>models</module>
        <module>api</module>
        <module>pap</module>
        <module>apex-pdp</module>
        <module>xacml-pdp</module>
        <module>drools-pdp</module>
        <module>drools-applications</module>
        <module>distribution</module>
        <module>gui</module>
        <!-- The engine repo is being deprecated,
        and can be omitted if not working with
        legacy api and components. -->
        <module>engine</module>
    </modules>
</project>

Policy Architecture/API Transition

In Dublin, a new Policy Architecture was introduced. The legacy architecture runs in parallel with the new architecture and will be deprecated after the Frankfurt release. If the developer is only interested in working with the new architecture components, the engine sub-module can be omitted.

Step 3: You can now build the Policy framework.

Java artifacts only:

cd ~/git/onap/policy
mvn clean install

With docker images:

cd ~/git/onap/policy
mvn clean install -P docker

Developing and Debugging each Policy Component

Running a MariaDb Instance

The Policy Framework requires a running MariaDb instance. The easiest way to provide one is to run a docker image locally.
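For instance, a throwaway local instance can be started directly with docker; the credentials, database name, and image tag in this sketch are illustrative, not the values used by the ONAP charts:

docker run -d --name mariadb -p 3306:3306 \
  -e MYSQL_ROOT_PASSWORD=secret \
  -e MYSQL_DATABASE=policyadmin \
  mariadb:10.5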

One example on how to do this is to use the scripts used by the policy/api S3P tests.

Simulator Setup Script Example

cd ~/git/onap/policy/api/testsuites/stability/src/main/resources/simulatorsetup
./setup_components.sh

Another example of how to run MariaDb is to use the docker compose file used by the Policy API CSITs:

Example Compose Script to run MariaDB

Running the API component standalone

Assuming you have successfully built the codebase using the instructions above, the only requirement for the API component to run is a running MariaDb database instance. The easiest way to provide this is to run the docker image; please see the mariadb documentation for the latest information on doing so. Once the mariadb is up and running, a configuration file must be provided to the api so that it knows how to connect to the mariadb. You can locate the default configuration file in the packaging of the api component:

Default API Configuration

You will want to change the fields pertaining to “host”, “port” and “databaseUrl” to your local environment settings.
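The fragment of interest looks roughly like the sketch below; the enclosing parameter-group names shown here are assumptions, so consult the packaged default configuration for the authoritative structure:

{
    "restServerParameters": {
        "host": "0.0.0.0",
        "port": 6969
    },
    "databaseProviderParameters": {
        "databaseUrl": "jdbc:mariadb://localhost:3306/policyadmin"
    }
}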

Running the API component using Docker Compose

An example of running the api using a docker compose script is located in the Policy Integration CSIT test repository.

Policy CSIT API Docker Compose

Running the Stability/Performance Tests

The following links contain instructions on how to run the S3P Stability and Performance tests. These may be helpful to developers to become familiar with the Policy Framework components and test any local changes.

Policy API S3P Tests
72 Hours Stability Test of Policy API
Introduction

The 72 hour stability test of policy API has the goal of verifying the stability of the running policy design API REST service by ingesting a steady flow of transactions in a multi-threaded fashion to simulate multiple clients’ behaviors. All the transaction flows are initiated from a test client server running JMeter for the duration of 72 hours.

Setup Details

The stability test was performed on a default ONAP OOM installation in the Intel Wind River Lab environment. JMeter was installed on a separate VM to inject the traffic defined in the API stability script with the following command:

jmeter.sh --nongui --testfile policy_api_stability.jmx --logfile result.jtl
Test Plan

The 72+ hour stability test runs the following steps sequentially in multi-threaded loops. The thread number is set to 5 to simulate 5 API clients’ behaviors (they can be calling the same policy CRUD API simultaneously). Each thread creates a different version of the policy types and policies so as not to interfere with the others while operating simultaneously. The point version of each entity is set to the running thread number.

Setup Thread (will be running only once)

  • Get policy-api Healthcheck

  • Get API Counter Statistics

  • Get Preloaded Policy Types

API Test Flow (5 threads running the same steps in the same loop)

  • Create a new Monitoring Policy Type with Version 6.0.#

  • Create a new Monitoring Policy Type with Version 7.0.#

  • Create a new Optimization Policy Type with Version 6.0.#

  • Create a new Guard Policy Type with Version 6.0.#

  • Create a new Native APEX Policy Type with Version 6.0.#

  • Create a new Native Drools Policy Type with Version 6.0.#

  • Create a new Native XACML Policy Type with Version 6.0.#

  • Get All Policy Types

  • Get All Versions of the new Monitoring Policy Type

  • Get Version 6.0.# of the new Monitoring Policy Type

  • Get Version 6.0.# of the new Optimization Policy Type

  • Get Version 6.0.# of the new Guard Policy Type

  • Get Version 6.0.# of the new Native APEX Policy Type

  • Get Version 6.0.# of the new Native Drools Policy Type

  • Get Version 6.0.# of the new Native XACML Policy Type

  • Get the Latest Version of the New Monitoring Policy Type

  • Create Monitoring Policy Ver 6.0.# w/Monitoring Policy Type Ver 6.0.#

  • Create Monitoring Policy Ver 7.0.# w/Monitoring Policy Type Ver 7.0.#

  • Create Optimization Policy Ver 6.0.# w/Optimization Policy Type Ver 6.0.#

  • Create Guard Policy Ver 6.0.# w/Guard Policy Type Ver 6.0.#

  • Create Native APEX Policy Ver 6.0.# w/Native APEX Policy Type Ver 6.0.#

  • Create Native Drools Policy Ver 6.0.# w/Native Drools Policy Type Ver 6.0.#

  • Create Native XACML Policy Ver 6.0.# w/Native XACML Policy Type Ver 6.0.#

  • Get Version 6.0.# of the new Monitoring Policy

  • Get Version 6.0.# of the new Optimization Policy

  • Get Version 6.0.# of the new Guard Policy

  • Get Version 6.0.# of the new Native APEX Policy

  • Get Version 6.0.# of the new Native Drools Policy

  • Get Version 6.0.# of the new Native XACML Policy

  • Get the Latest Version of the new Monitoring Policy

  • Delete Version 6.0.# of the new Monitoring Policy

  • Delete Version 7.0.# of the new Monitoring Policy

  • Delete Version 6.0.# of the new Optimization Policy

  • Delete Version 6.0.# of the new Guard Policy

  • Delete Version 6.0.# of the new Native APEX Policy

  • Delete Version 6.0.# of the new Native Drools Policy

  • Delete Version 6.0.# of the new Native XACML Policy

  • Delete Monitoring Policy Type with Version 6.0.#

  • Delete Monitoring Policy Type with Version 7.0.#

  • Delete Optimization Policy Type with Version 6.0.#

  • Delete Guard Policy Type with Version 6.0.#

  • Delete Native APEX Policy Type with Version 6.0.#

  • Delete Native Drools Policy Type with Version 6.0.#

  • Delete Native XACML Policy Type with Version 6.0.#

TearDown Thread (will only be running after API Test Flow is completed)

  • Get policy-api Healthcheck

  • Get Preloaded Policy Types

Test Results

Summary

No errors were found during the 72 hours of the Policy API stability run. The load was performed against a non-tweaked ONAP OOM installation.

Test Statistics

Total # of requests:          627746
Success %:                    100%
TPS:                          2.42
Avg. time taken per request:  2058 ms
Min. time taken per request:  26 ms
Max. time taken per request:  72809 ms

_images/api-s3p-jm-1_H.png

JMeter Results

The following graphs show the response time distributions. The “Get Policy Types” API calls are the most expensive, averaging a response time of more than 10 seconds.

_images/api-response-time-distribution_H.png _images/api-response-time-overtime_H.png
Performance Test of Policy API
Introduction

The performance test of policy-api has the goal of measuring the min/avg/max processing time and REST call throughput when the number of requests is large enough to saturate the resource and expose the bottleneck.

Setup Details

The performance test was performed on a default ONAP OOM installation in the Intel Wind River Lab environment. JMeter was installed on a separate VM to inject the traffic defined in the API performance script with the following command:

jmeter.sh --nongui --testfile policy_api_performance.jmx --logfile result.jtl
Test Plan

The performance test plan is the same as the stability test plan above. The only differences are that the performance test increases the number of threads to 20 (simulating 20 users’ behaviour at the same time) and reduces the test time to 2.5 hours.

Run Test

Running/triggering the performance test is the same as for the stability test: launch JMeter pointing to the corresponding .jmx test plan. The API_HOST and API_PORT are already set up in the .jmx file.

Test Statistics

  • Total # of requests: 4082

  • Success %: 100%

  • TPS: 0.45

  • Avg. time taken per request: 1297 ms

  • Min. time taken per request: 4 ms

  • Max. time taken per request: 63612 ms

_images/api-s3p-jm-2_H.png
Test Results

The following graphs show the response time distributions.

_images/api-response-time-distribution_performance_H.png _images/api-response-time-overtime_performance_H.png
Policy PAP component

Both the Performance and the Stability tests were executed by performing requests against Policy components installed as part of a full ONAP OOM deployment in Nordix lab.

Setup Details
  • Policy-PAP along with all policy components deployed as part of a full ONAP OOM deployment.

  • A second instance of APEX-PDP is spun up in the setup. The configuration file (OnapPfConfig.json) is updated so that the PDP can register to the new group created by PAP in the tests.

  • Both tests were run via JMeter, which was installed on a separate VM.

Stability Test of PAP
Test Plan

The 72-hour stability test ran the following steps sequentially in a single-threaded loop.

  • Create Policy defaultDomain - creates an operational policy using policy/api component

  • Create Policy sampleDomain - creates an operational policy using policy/api component

  • Check Health - checks the health status of pap

  • Check Statistics - checks the statistics of pap

  • Change state to ACTIVE - changes the state of defaultGroup PdpGroup to ACTIVE

  • Check PdpGroup Query - makes a PdpGroup query request and verifies that PdpGroup is in the ACTIVE state.

  • Deploy defaultDomain Policy - deploys the policy defaultDomain in the existing PdpGroup

  • Check status of defaultGroup - checks the status of defaultGroup PdpGroup with the defaultDomain policy 1.0.0.

  • Create/Update PDP Group - creates a new PDPGroup named sampleGroup.

  • Check PdpGroup Query - makes a PdpGroup query request and verifies that 2 PdpGroups are in the ACTIVE state and defaultGroup has a policy deployed on it.

  • Deployment Update sampleDomain - deploys the policy sampleDomain in sampleGroup PdpGroup using pap api

  • Check status of sampleGroup - checks the status of the sampleGroup PdpGroup.

  • Check status of PdpGroups - checks the status of both PdpGroups.

  • Check PdpGroup Query - makes a PdpGroup query request and verifies that the defaultGroup has a policy defaultDomain deployed on it and sampleGroup has policy sampleDomain deployed on it.

  • Check Consolidated Health - checks the consolidated health status of all policy components.

  • Check Deployed Policies - checks for all the deployed policies using pap api.

  • Undeploy Policy sampleDomain - undeploys the policy sampleDomain from sampleGroup PdpGroup using pap api

  • Undeploy Default Policy - undeploys the policy defaultDomain from PdpGroup

  • Change state to PASSIVE(sampleGroup) - changes the state of sampleGroup PdpGroup to PASSIVE

  • Delete PdpGroup SampleGroup - deletes the sampleGroup PdpGroup using pap api

  • Change State to PASSIVE(defaultGroup) - changes the state of defaultGroup PdpGroup to PASSIVE

  • Check PdpGroup Query - makes a PdpGroup query request and verifies that PdpGroup is in the PASSIVE state.

  • Delete Policy defaultDomain - deletes the operational policy defaultDomain using policy/api component

  • Delete Policy sampleDomain - deletes the operational policy sampleDomain using policy/api component

The following steps can be used to configure the parameters of test plan.

  • HTTP Authorization Manager - used to store user/password authentication details.

  • HTTP Header Manager - used to store headers which will be used for making HTTP requests.

  • User Defined Variables - used to store following user defined parameters.

  • PAP_HOST - IP Address or host name of PAP component

  • PAP_PORT - Port number of PAP for making REST API calls

  • API_HOST - IP Address or host name of API component

  • API_PORT - Port number of API for making REST API calls

The test was run in the background via “nohup”, to prevent it from being interrupted:

nohup ./jMeter/apache-jmeter-5.3/bin/jmeter.sh -n -t stabil.jmx -l testresults.jtl
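If the test plan reads these variables through JMeter’s __P() property function, they can also be supplied on the command line instead of being edited into the .jmx file; a hypothetical invocation:

nohup ./jMeter/apache-jmeter-5.3/bin/jmeter.sh -n -t stabil.jmx -l testresults.jtl \
    -JPAP_HOST=10.0.0.10 -JPAP_PORT=6969 -JAPI_HOST=10.0.0.11 -JAPI_PORT=6969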
Test Results

Summary

Stability test plan was triggered for 24 hours.

Note

As part of the OOM deployment, another APEX-PDP pod is spun up with the pdpGroup name specified as ‘sampleGroup’. After creating the new group called ‘sampleGroup’ as part of the test, a time delay of 2 minutes is added so that the PDP registers to the newly created group. This results in a spike in the average time taken per request, but it is required to make proper assertions and to support the consolidated health check.

Test Statistics

  • Total # of requests: 11921

  • Success %: 100.00 %

  • Error %: 0.00 %

  • Average time taken per request: 382 ms

Note

There were no failures during the 24-hour test.

JMeter Screenshot

_images/pap-s3p-stability-result-jmeter.PNG

Memory and CPU usage

Memory and CPU usage can be monitored by running the “top” command in the PAP pod. A snapshot is taken before and after test execution to monitor the changes in resource utilization.
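For example, a one-shot snapshot can be captured from outside the pod with kubectl; the namespace and pod name below are placeholders:

kubectl -n onap exec <pap-pod-name> -- top -b -n 1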

Memory and CPU usage before test execution:

_images/pap-s3p-mem-bt.PNG

Memory and CPU usage after test execution:

_images/pap-s3p-mem-at.PNG
Performance Test of PAP
Introduction

The performance test of PAP has the goal of measuring the min/avg/max processing time and REST call throughput when multiple requests are issued at the same time.

Setup Details

The performance test is performed on a setup similar to the stability test. The JMeter VM sends a large number of REST requests to the PAP component and collects the statistics.

Test Plan

The performance test plan is the same as the stability test plan above, except for the differences listed below.

  • Increase the number of threads up to 5 (simulating 5 users’ behaviours at the same time).

  • Reduce the test time to 2 hours.

  • Usage of counters so that the ‘Create/Update PDP Group’ test case creates a different group on each iteration.

  • The delay waiting for the new PDP to be registered is removed, along with the corresponding assertions that validate the Pdp instance’s registration to the newly created group.

Run Test

Running/triggering the performance test is the same as for the stability test: launch JMeter pointing to the corresponding .jmx test plan. The API_HOST, API_PORT, PAP_HOST and PAP_PORT are already set up in the .jmx file.

nohup ./jMeter/apache-jmeter-5.3/bin/jmeter.sh -n -t perf.jmx -l perftestresults.jtl

Once the test execution is completed, execute the below script to get the statistics:

$ cd /home/ubuntu/pap/testsuites/performance/src/main/resources/testplans
$ ./results.sh /home/ubuntu/pap_perf/resultTree.log
Test Results

Test results are shown as below.

Test Statistics

  • Total # of requests: 46314

  • Success %: 100 %

  • Error %: 0.00 %

  • Average time taken per request: 1084 ms

JMeter Screenshot

_images/pap-s3p-performance-result-jmeter.PNG
Policy APEX PDP component
Setting up Stability Tests in APEX
Introduction

The 72-hour stability test for apex-pdp has the goal of introducing a steady flow of transactions initiated from a test client server running JMeter. The PDP is configured to start a REST server internally, taking input from REST clients (JMeter) and sending output back to those clients.

The input events are submitted through the REST interface of apex-pdp, and the results are verified using the REST responses coming back from apex-pdp.

The test is performed in a multi-threaded environment where 20 threads running in JMeter keep sending events to apex-pdp every 500 milliseconds for a duration of 72 hours.

Setup details

The stability test is performed on VMs running in an OpenStack cloud environment. There are two separate VMs: one runs apex-pdp, and the other runs JMeter to simulate a steady flow of transactions.

Install & Configure VisualVM

VisualVM needs to be installed on the virtual machine running apex-pdp. It is used to monitor CPU, memory, and GC for apex-pdp while the stability test is running.

Install visualVM

sudo apt-get install visualvm

Log in to the VM using a graphical interface (X forwarding) in a separate terminal window.

ssh -X <user>@<VM-IP-ADDRESS>

Open visualVM

visualvm &

Connect to the apex-pdp JVM’s JMX agent:

  1. Right click on “Local” in the left panel of the screen and select “Add Local JMX Connection…”

  2. Enter localhost:9911 for “Connection”, and click OK

  3. Double click on the newly added node under “Local” to start monitoring CPU, Memory & GC.
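This assumes the apex-pdp JVM exposes a JMX agent on port 9911. If it does not, the standard JMX system properties can be added to the JVM options, for example (no authentication or SSL, suitable for lab use only):

-Dcom.sun.management.jmxremote
-Dcom.sun.management.jmxremote.port=9911
-Dcom.sun.management.jmxremote.authenticate=false
-Dcom.sun.management.jmxremote.ssl=false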

Sample Screenshot of visualVM

_images/stability-visualvm1.PNG _images/stability-visualvm2.PNG
Test Plan

The 72-hour stability test runs the following steps in a 5-thread loop.

  • Send Input Event - sends an input message to rest interface of apex-pdp.

  • Assert Response Code - assert the response code coming from apex-pdp.

  • Assert Response Message - assert the response message coming from apex-pdp.

The following steps can be used to configure the parameters of test plan.

  • HTTP Header Manager - used to store headers which will be used for making HTTP requests.

  • HTTP Request Defaults - used to store HTTP request details like Server Name or IP, Port, Protocol etc.

  • User Defined Variables - used to store following user defined parameters.

  • wait - Wait time after each request (in milliseconds). Default: 500

  • threads - Number of threads to run test cases in parallel. Default: 5

  • threadsTimeOutInMs - Synchronization timer for threads running in parallel (in milliseconds). Default: 5000

Download and update the jmx file provided in the apex-pdp git repository - jmx file path.

  • HTTPSampler.domain - The IP address of the VM in which the apex container is running

  • HTTPSampler.port - The listening port; here it is 23324

  • ThreadGroup.duration - Set the duration to 72 hours (in seconds)

Use the CLI mode to start the test

./jmeter.sh -n -t ~/apexPdpStabilityTestPlan.jmx -Jusers=1 -l ~/stability.log
Stability Test Result

Summary

Stability test plan was triggered for 72 hours injecting input events to apex-pdp from 5 client threads.

Once the test has completed, we can generate an HTML test report via the following command:

~/jMeter/apache-jmeter-5.2.1/bin/jmeter -g stability.log -o ./result/

  • Number of Client Threads running in JMeter: 5

  • Total number of input events: 129326

  • Success %: 100%

  • Error %: 0%

  • Average Time per Request: 6716.12

_images/stability-jmeter.PNG

Download: result.zip (apex-s3p-results/apex_s3p_results.zip)

Stability Test of Apex PDP

The 72-hour stability test for apex-pdp has the goal of introducing a steady flow of transactions using JMeter.

The input event is submitted through the REST interface of DMaaP, which then triggers a gRPC request to CDS. Based on the response, another DMaaP event is triggered.

This test is performed in an OOM deployment setup, in a multi-threaded environment where 5 threads running in JMeter keep sending events for the duration of 72 hours.

Test Plan

The 72-hour stability test runs the following steps in a 5-thread loop.

  • Create Policy - creates a policy using the policy/api component

  • Deploy Policy - deploys the policy in the existing PdpGroup

  • Check Health - checks the health status of apex

  • Send Input Event - triggers the ‘unauthenticated.DCAE_CL_OUTPUT’ event of DMaaP (see the sketch after this list).

  • Get Output Event Response - check for the triggered output event.

  • Undeploy Policy - undeploys the policy from PdpGroup

  • Delete Policy - deletes the policy using the policy/api component
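For reference, publishing to and polling a DMaaP Message Router topic are plain REST calls; the sketch below shows what the “Send Input Event” and “Get Output Event Response” steps amount to (the host variables, consumer group/ID, output topic name, and event file are placeholders):

# publish the trigger event
curl -s -X POST "http://${DMAAP_HOST}:${DMAAP_PORT}/events/unauthenticated.DCAE_CL_OUTPUT" \
     -H 'Content-Type: application/json' -d @input-event.json

# poll for the resulting output event
curl -s "http://${DMAAP_HOST}:${DMAAP_PORT}/events/<output-topic>/cg1/c1?timeout=15000"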

The following steps can be used to configure the parameters of the test plan.

  • HTTP Header Manager - used to store headers which will be used for making HTTP requests.

  • HTTP Request Defaults - used to store HTTP request details like Server Name or IP, Port, Protocol etc.

  • User Defined Variables - used to store the following user defined parameters:

  • wait - Wait time after each request (in milliseconds). Default: 120000

  • threads - Number of threads to run test cases in parallel. Default: 5

  • threadsTimeOutInMs - Synchronization timer for threads running in parallel (in milliseconds). Default: 150000

  • PAP_PORT - Port number of PAP for making REST API calls

  • API_PORT - Port number of API for making REST API calls

  • APEX_PORT - Port number of APEX for making REST API calls

  • DMAAP_PORT - Port number of DMAAP for making REST API calls

Download and update the jmx file provided in the apex-pdp git repository - jmx file path.

  • HTTPSampler.domain - The IP address of the VM in which the apex container is running

  • HTTPSampler.port - The listening port; here it is 23324

  • ThreadGroup.duration - Set the duration to 72 hours (in seconds)

Use the CLI mode to start the test

nohup ./jmeter.sh -n -t ~/apexPdpStabilityTestPlan.jmx -Jusers=1 -l ~/stability.log
Stability Test Results

The stability test plan was triggered for 72 hours, injecting input events into the apex-pdp pod from 5 client threads running in JMeter.

The stability tests were executed as part of a full ONAP OOM deployment in Nordix lab.

Once the tests complete, we can generate an HTML test report via the command:

~/jMeter/apache-jmeter-5.2.1/bin/jmeter -g stability.log -o ./result/

  • Number of Client Threads running in JMeter: 5

  • Total number of input events: 129326

  • Success %: 100%

  • Error %: 0%

  • Average Time per Request: 6716.12

JMeter Screenshot

_images/apex_s3p_jm-1.png _images/apex_s3p_jm-2.png

Download: result.zip (apex-s3p-results/apex_s3p_results.zip)

Setting up Performance Tests in APEX

The performance test is performed on a setup similar to the stability test. JMeter sends a large number of REST requests and then collects the responses.

The performance test plan is the same as the stability test plan, except for the differences listed below:

  • Increase the number of threads from 5 to 20.

  • Reduce test time to ninety minutes.

  • Calculate the number of requests handled in the time frame.

Run Test

Running the performance test is the same as the stability test: launch JMeter pointing to the corresponding .jmx test plan. The API_HOST, API_PORT, PAP_HOST and PAP_PORT are already set up in the .jmx file.

nohup ./jmeter.sh -n -t ~/performance.jmx -Jusers=1 -l ~/perf.log

Once the tests have completed, run the following to gather the results.

~/jMeter/apache-jmeter-5.2.1/bin/jmeter -g perf.log -o ./performance_result/
Performance Test Result

Summary

Performance test was triggered for 90 minutes. The results are shown below.

Test Statistics

  • Total Number of Requests: 32304

  • Success: 99.99 %

  • Error: 0.01 %

  • Average Time Taken per Request: 8746.50 ms

JMeter Screenshot

_images/apex_perf_jm_1.PNG _images/apex_perf_jm_2.PNG
Policy Drools PDP component

Both the Performance and the Stability tests were executed against a default ONAP installation in the policy-k8s tenant in the Wind River lab, from an independent VM running the JMeter tool to inject the load.

General Setup

The Kubernetes installation allocated all policy components, along with some additional ones, to the same worker node VM. The worker VM hosting the policy components has the following spec:

  • 16GB RAM

  • 8 VCPU

  • 160GB Ephemeral Disk

The standalone VM designated to run JMeter has the same configuration. The JMeter JVM was instantiated with a max heap of 12G.

The drools-pdp container uses the default JVM memory settings from a default OOM installation:

VM settings:
    Max. Heap Size (Estimated): 989.88M
    Using VM: OpenJDK 64-Bit Server VM

Other ONAP components used during the stability tests are:

  • Policy XACML PDP to process guard queries for each transaction.

  • DMaaP to carry PDP-D and jmeter initiated traffic to complete transactions.

  • Policy API to create (and delete at the end of the tests) policies for each scenario under test.

  • Policy PAP to deploy (and undeploy at the end of the tests) policies for each scenario under test.

The following components are simulated during the tests.

  • SO actor for the vDNS use case.

  • APPC responses for the vCPE and vFW use cases.

  • AAI to answer queries for the use cases under test.

To avoid interference with the APPC component while running the tests, the APPC component was disabled.

The SO and A&AI actors were simulated within the PDP-D JVM by enabling the feature-controlloop-utils before running the tests.

PDP-D Setup

The Kubernetes charts were modified prior to the installation with the changes below.

The feature-controlloop-utils was started by adding the following script:

oom/kubernetes/policy/charts/drools/resources/configmaps/features.pre.sh:

#!/bin/sh
sh -c "features enable controlloop-utils"
Stability Test of Policy PDP-D
PDP-D performance

The test set focused on the following use cases:

  • vCPE

  • vDNS

  • vFirewall

For 72 hours the following 5 scenarios ran in parallel:

  • vCPE success scenario

  • vCPE failure scenario (failure returned by simulated APPC recipient through DMaaP).

  • vDNS success scenario.

  • vDNS failure scenario.

  • vFirewall success scenario.

Five threads ran in parallel, one for each scenario. The transactions were initiated by each JMeter thread group. Each thread initiated a transaction, monitored it, and as soon as it detected that the transaction had ended, it initiated the next one, back to back with no pauses.

All transactions completed as expected in each scenario.

The command executed was:

./jmeter -n -t /home/ubuntu/drools-applications/testsuites/stability/src/main/resources/s3p.jmx  -l /home/ubuntu/jmeter_result/jmeter.jtl -e -o /home/ubuntu/jmeter_result > /dev/null 2>&1

The results were computed by monitoring the statistics REST endpoint, accessible through the telemetry shell or APIs.
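As an illustration, the same statistics can be pulled directly from the PDP-D telemetry API; a hypothetical query (the port, credentials, and exact resource path depend on the installation and the controllers deployed):

curl -sk -u "${TELEMETRY_USER}:${TELEMETRY_PASS}" \
     "https://localhost:9696/policy/pdp/engine/controllers"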

vCPE Success scenario

ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e:

# Times are in milliseconds

# Previous to the run, there was 1 failure as a consequence of testing
# the flows before the stability load was initiated.   There was
# an additional failure encountered during the execution.

"ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e": {
    "policyExecutedCount": 161328,
    "policyExecutedSuccessCount": 161326,
    "totalElapsedTime": 44932780,
    "averageExecutionTime": 278.5181741545175,
    "birthTime": 1616092087842,
    "lastStart": 1616356511841,
    "lastExecutionTime": 1616356541972,
    "policyExecutedFailCount": 2
}
vCPE Failure scenario

ControlLoop-vCPE-Fail:

# Times are in milliseconds

"ControlLoop-vCPE-Fail": {
    "policyExecutedCount": 250172,
    "policyExecutedSuccessCount": 0,
    "totalElapsedTime": 63258856,
    "averageExecutionTime": 252.8614553187407,
    "birthTime": 1616092143137,
    "lastStart": 1616440688824,
    "lastExecutionTime": 1616440689010,
    "policyExecutedFailCount": 250172
}
vDNS Success scenario

ControlLoop-vDNS-6f37f56d-a87d-4b85-b6a9-cc953cf779b3:

# Times are in milliseconds

"ControlLoop-vDNS-6f37f56d-a87d-4b85-b6a9-cc953cf779b3": {
    "policyExecutedCount": 235438,
    "policyExecutedSuccessCount": 235438,
    "totalElapsedTime": 37564263,
    "averageExecutionTime": 159.550552587093,
    "birthTime": 1616092578063,
    "lastStart": 1616356511253,
    "lastExecutionTime": 1616356511653,
    "policyExecutedFailCount": 0
}
vDNS Failure scenario

ControlLoop-vDNS-Fail:

# Times are in milliseconds

"ControlLoop-vDNS-Fail": {
    "policyExecutedCount": 2754574,
    "policyExecutedSuccessCount": 0,
    "totalElapsedTime": 14396495,
    "averageExecutionTime": 5.22639616869977,
    "birthTime": 1616092659237,
    "lastStart": 1616440696444,
    "lastExecutionTime": 1616440696444,
    "policyExecutedFailCount": 2754574
}
vFirewall Success scenario

ControlLoop-vFirewall-d0a1dfc6-94f5-4fd4-a5b5-4630b438850a:

# Times are in milliseconds

# Previous to the run, there were 2 failures as a consequence of testing
# the flows before the stability load was initiated.   There was
# an additional failure encountered during the execution.

"ControlLoop-vFirewall-d0a1dfc6-94f5-4fd4-a5b5-4630b438850a": {
    "policyExecutedCount": 145197,
    "policyExecutedSuccessCount": 145194,
    "totalElapsedTime": 33100249,
    "averageExecutionTime": 227.96785746261975,
    "birthTime": 1616092985229,
    "lastStart": 1616356511732,
    "lastExecutionTime": 1616356541972,
    "policyExecutedFailCount": 3
}

Performance Test of Policy XACML PDP

The Performance test was executed by performing requests against the Policy RESTful APIs residing on the XACML PDP installed on a Cloud based Virtual Machine.

VM Configuration:

  • 16GB RAM

  • 8 VCPU

  • 1TB Disk

ONAP was deployed using a K8s Configuration on a separate VM.

Summary

The Performance test was executed, and the result analyzed, via:

jmeter -Jduration=1200 -Jusers=10 \
    -Jxacml_ip=$ip -Jpap_ip=$ip -Japi_ip=$ip \
    -Jxacml_port=31104 -Jpap_port=32425 -Japi_port=30709 \
    -n -t perf.jmx -l testresults.jtl

Note: the ports listed above correspond to port 6969 of the respective components.
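The NodePort mappings for those components can be confirmed with a command along the following lines (the namespace and service names are assumptions for a typical OOM deployment):

kubectl -n onap get svc policy-xacml-pdp policy-pap policy-api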

The performance test, perf.jmx, runs the following, all in parallel:

  • Healthcheck, 10 simultaneous threads

  • Statistics, 10 simultaneous threads

  • Decisions, 10 simultaneous threads, each running the following in sequence:

    • Monitoring Decision

    • Monitoring Decision, abbreviated

    • Naming Decision

    • Optimization Decision

    • Default Guard Decision (always “Permit”)

    • Frequency Limiter Guard Decision

    • Min/Max Guard Decision

When the script starts up, it uses policy-api to create, and policy-pap to deploy, the policies that are needed by the test. It assumes that the “naming” policy has already been created and deployed. Once the test completes, it undeploys and deletes the policies that it previously created.
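Each “Decision” sample boils down to a POST against the XACML PDP decision API; a minimal sketch of such a request (the credentials and the policy-id in the resource filter are placeholders):

curl -sk -u "${XACML_USER}:${XACML_PASS}" -X POST \
     "https://${xacml_ip}:${xacml_port}/policy/pdp/v1/decision" \
     -H 'Content-Type: application/json' \
     -d '{"ONAPName": "DCAE", "ONAPComponent": "policy-handler", "action": "configure", "resource": {"policy-id": "onap.monitoring.example"}}'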

Results

The test was run for 20 minutes at a time, for different numbers of users (i.e., threads), with the following results:

  • 10 users: throughput 8929 requests/second, average latency 3.10 ms

  • 20 users: throughput 10827 requests/second, average latency 5.05 ms

  • 40 users: throughput 11800 requests/second, average latency 9.35 ms

  • 80 users: throughput 11750 requests/second, average latency 18.62 ms

Stability Test of Policy XACML PDP

The stability test was executed by performing requests against the Policy RESTful APIs residing on the XACML PDP installed in the Wind River lab. This was running on a Kubernetes pod with the following configuration:

  • 16GB RAM

  • 8 VCPU

  • 160GB Disk

The test was run via JMeter, which was installed on a separate VM so as not to impact the performance of the XACML-PDP being tested.

Summary

The stability test was performed on a default ONAP OOM installation in the Intel Wind River Lab environment. JMeter was installed on a separate VM to inject the traffic defined in the XACML PDP stability script with the following command:

jmeter.sh -Jduration=259200 -Jusers=2 -Jxacml_ip=$ip -Jpap_ip=$ip -Japi_ip=$ip \
    -Jxacml_port=31104 -Jpap_port=32425 -Japi_port=30709 --nongui --testfile stability.jmx

Note: the ports listed above correspond to port 6969 of the respective components.

The default log level of the root and org.eclipse.jetty.server.RequestLog loggers in the logback.xml of the XACML PDP (oom/kubernetes/policy/components/policy-xacml-pdp/resources/config/logback.xml) was set to ERROR, since the OOM installation did not have log rotation enabled for the container logs on the Kubernetes worker nodes.

The stability test, stability.jmx, runs the following, all in parallel:

  • Healthcheck, 2 simultaneous threads

  • Statistics, 2 simultaneous threads

  • Decisions, 2 simultaneous threads, each running the following tasks in sequence:
    • Monitoring Decision

    • Monitoring Decision, abbreviated

    • Naming Decision

    • Optimization Decision

    • Default Guard Decision (always “Permit”)

    • Frequency Limiter Guard Decision

    • Min/Max Guard Decision

When the script starts up, it uses policy-api to create, and policy-pap to deploy the policies that are needed by the test. It assumes that the “naming” policy has already been created and deployed. Once the test completes, it undeploys and deletes the policies that it previously created.

Results

The stability summary results were reported by JMeter with the following summary line:

summary =  207771010 in 72:00:01 =  801.6/s Avg:     6 Min:     0 Max:   411 Err:     0 (0.00%)

The XACML PDP offered good performance with JMeter for the traffic mix described above, sustaining roughly 801 requests per second under the injected load. No errors were encountered, and no significant CPU spikes were noted. The average transaction time was 6 ms, with a maximum of 411 ms.

Policy Distribution component
72h Stability and 4h Performance Tests of Distribution
VM Details

The stability and performance tests are performed on VMs running in the OpenStack cloud environment in the ONAP integration lab. There are two separate VMs: one runs the backend policy services that policy distribution needs, and the other runs the policy distribution service itself and JMeter.

OpenStack environment details

  • Version: Windriver Titanium

Policy Backend VM details (VM1)

  • OS: Ubuntu 18.04.5 LTS

  • CPU: 8 core, Intel Xeon E3-12xx v2 (Ivy Bridge), 2693.668 MHz, 16384 kB cache

  • RAM: 32 GB

  • HardDisk: 200 GB

  • Docker version 19.03.8, build afacb8b7f0

  • Java: openjdk 11.0.8 2020-07-14

JMeter and Distribution VM details (VM2)

  • OS: Ubuntu 18.04.5 LTS

  • CPU: 8 core, Intel Xeon E3-12xx v2 (Ivy Bridge), 2693.668 MHz, 16384 kB cache

  • RAM: 32 GB

  • HardDisk: 200 GB

  • Docker version 19.03.8, build afacb8b7f0

  • Java: openjdk 11.0.8 2020-07-14

  • JMeter: 5.1.1

VM1 & VM2: Common Setup

Make sure to execute the commands below on both VM1 & VM2

Update the ubuntu software installer

sudo apt update

Install Java

sudo apt install -y openjdk-11-jdk

Ensure that the Java version that is executing is OpenJDK version 11

$ java --version
openjdk 11.0.8 2020-07-14
OpenJDK Runtime Environment (build 11.0.8+10-post-Ubuntu-0ubuntu118.04.1)
OpenJDK 64-Bit Server VM (build 11.0.8+10-post-Ubuntu-0ubuntu118.04.1, mixed mode, sharing)

Install Docker

# Add docker repository
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt update

# Check available docker versions (if necessary)
apt-cache policy docker-ce

# Install docker
sudo apt install -y docker-ce=5:19.03.8~3-0~ubuntu-bionic docker-ce-cli=5:19.03.8~3-0~ubuntu-bionic containerd.io

Change the permissions of the Docker socket file

sudo chmod 666 /var/run/docker.sock

Check the status of the Docker service and ensure it is running correctly

$ systemctl status --no-pager docker
docker.service - Docker Application Container Engine
   Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
   Active: active (running) since Wed 2020-10-14 13:59:40 UTC; 1 weeks 0 days ago
   # ... (truncated for brevity)

$ docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES

Clone the policy-distribution repo to access the test scripts

git clone https://gerrit.onap.org/r/policy/distribution
VM1 Only: Install Simulators, Policy-PAP, Policy-API and MariaDB

Modify the setup_components.sh script located at:

  • ~/distribution/testsuites/stability/src/main/resources/simulatorsetup/setup_components.sh

Ensure the correct docker image versions are specified - e.g. for Guilin-RC0

  • nexus3.onap.org:10001/onap/policy-api:2.3.2

  • nexus3.onap.org:10001/onap/policy-pap:2.3.2

Run the setup_components.sh script to start the test support components:

~/distribution/testsuites/stability/src/main/resources/simulatorsetup/setup_components.sh

After installation, ensure the following docker containers are up and running:

$ docker ps
CONTAINER ID        IMAGE                                         COMMAND                  CREATED             STATUS              PORTS                    NAMES
a187cb0ff08a        nexus3.onap.org:10001/onap/policy-pap:2.3.2   "bash ./policy-pap.sh"   4 days ago          Up 4 days           0.0.0.0:7000->6969/tcp   policy-pap
2f7632fe90c3        nexus3.onap.org:10001/onap/policy-api:2.3.2   "bash ./policy-api.sh"   4 days ago          Up 4 days           0.0.0.0:6969->6969/tcp   policy-api
70fa27d6d992        pdp/simulator:latest                          "bash pdp-sim.sh"        4 days ago          Up 4 days                                    pdp-simulator
3c9ff28ba050        dmaap/simulator:latest                        "bash dmaap-sim.sh"      4 days ago          Up 4 days           0.0.0.0:3904->3904/tcp   message-router
60cfcf8cfe65        mariadb:10.2.14                               "docker-entrypoint.s…"   4 days ago          Up 4 days           0.0.0.0:3306->3306/tcp   mariadb
VM2 Only: Install Distribution

Modify the setup_distribution.sh script located at:

  • ~/distribution/testsuites/stability/src/main/resources/distributionsetup/setup_distribution.sh

Ensure the correct docker image version is specified - e.g. for Guilin-RC0:

  • nexus3.onap.org:10001/onap/policy-distribution:2.4.2

Run the setup_distribution.sh script to install the distribution service, provide the IP of VM1 (twice) as the arguments to the script:

~/distribution/testsuites/stability/src/main/resources/distributionsetup/setup_distribution.sh <vm1-ipaddr> <vm1-ipaddr>

Ensure the distribution container is running.

$ docker ps
CONTAINER ID        IMAGE                                                  COMMAND                  CREATED             STATUS              PORTS                                            NAMES
9a8db2bad156        nexus3.onap.org:10001/onap/policy-distribution:2.4.2   "bash ./policy-dist.…"   29 hours ago        Up 29 hours         0.0.0.0:6969->6969/tcp, 0.0.0.0:9090->9090/tcp   policy-distribution
VM2 Only: Install JMeter

Download and install JMeter

# Install required packages
sudo apt install -y wget unzip

# Install JMeter
mkdir -p jmeter
wget https://archive.apache.org/dist/jmeter/binaries/apache-jmeter-5.1.1.zip
unzip -qd jmeter apache-jmeter-5.1.1.zip
rm apache-jmeter-5.1.1.zip
VM2 Only: Install & configure visualVM

VisualVM needs to be installed in the virtual machine running Distribution (VM2). It will be used to monitor CPU, Memory and GC for Distribution while the stability tests are running.

sudo apt install -y visualvm

Run these commands to configure permissions

# Create Java security policy file for VisualVM (use tee so the write happens as root)
sudo tee /usr/lib/jvm/java-11-openjdk-amd64/bin/visualvm.policy > /dev/null << EOF
grant codebase "jrt:/jdk.jstatd" {
   permission java.security.AllPermission;
};
grant codebase "jrt:/jdk.internal.jvmstat" {
   permission java.security.AllPermission;
};
EOF

# Set globally accessible permissions on policy file
sudo chmod 777 /usr/lib/jvm/java-11-openjdk-amd64/bin/visualvm.policy

Run the following command to start jstatd using port 1111

/usr/lib/jvm/java-11-openjdk-amd64/bin/jstatd -p 1111 -J-Djava.security.policy=/usr/lib/jvm/java-11-openjdk-amd64/bin/visualvm.policy &

Run visualVM to connect to localhost:9090

visualvm &

This will load up the visualVM GUI

Connect to Distribution JMX Port.

  1. Right click on “Local” in the left panel of the screen and select “Add JMX Connection”

  2. Enter the port 9090; this is the JMX port exposed by the distribution container

  3. Double click on the newly added nodes under “Local” to start monitoring CPU, Memory & GC.

Example Screenshot of visualVM

_images/distribution-s3p-vvm-sample.png
Stability Test of Policy Distribution
Introduction

The 72-hour stability test for policy distribution has the goal of introducing a steady flow of transactions initiated from a test client server running JMeter. Policy distribution is configured with a special FileSystemReception plugin that monitors a local directory for newly added CSAR files, which it then processes. The input CSAR is added/removed by the test client (JMeter), and the result is pulled from the backend (PAP and Policy API) by the test client (JMeter).

The test is performed in an environment where JMeter continuously adds/removes a test CSAR in the directory that policy distribution monitors, and then fetches the processed results from PAP and the Policy API to verify successful deployment of the policy. The policy is then undeployed, and the test loops continuously until 72 hours have elapsed.

Test Plan Sequence

The 72h stability test runs the following steps sequentially in a single-threaded loop.

  • Delete Old CSAR - Checks if a CSAR already exists in the watched directory and, if so, deletes it

  • Add CSAR - Adds CSAR to the directory that distribution is watching

  • Get Healthcheck - Ensures Healthcheck is returning 200 OK (see the probe sketch after this list)

  • Get Statistics - Ensures Statistics is returning 200 OK

  • CheckPDPGroupQuery - Checks that PDPGroupQuery contains the deployed policy

  • CheckPolicyDeployed - Checks that the policy is deployed

  • Undeploy Policy - Undeploys the policy

  • Delete Policy - Deletes the Policy for the next loop

  • Check PDP Group for Deletion - Ensures the policy has been removed and does not exist
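For reference, the healthcheck and statistics steps are simple GETs against the distribution service; a hypothetical probe, assuming the standard policy component paths, port 6969 from the container mapping shown earlier, and placeholder credentials:

curl -sk -u "${DIST_USER}:${DIST_PASS}" "https://localhost:6969/healthcheck"
curl -sk -u "${DIST_USER}:${DIST_PASS}" "https://localhost:6969/statistics"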

The following steps can be used to configure the parameters of the test plan.

  • HTTP Authorization Manager - used to store user/password authentication details.

  • HTTP Header Manager - used to store headers which will be used for making HTTP requests.

  • User Defined Variables - used to store following user defined parameters.

  • PAP_HOST - IP Address or host name of PAP component

  • PAP_PORT - Port number of PAP for making REST API calls

  • API_HOST - IP Address or host name of API component

  • API_PORT - Port number of API for making REST API calls

  • DURATION - Duration of Test

Screenshot of Distribution stability test plan

_images/distribution-s3p-testplan.png
Running the Test Plan

Edit the /tmp folder permissions to allow the test plan to insert the CSAR into the /tmp/policydistribution/distributionmount folder

sudo mkdir -p /tmp/policydistribution/distributionmount
sudo chmod -R a+trwx /tmp

From the apache JMeter folder run the test for 72h, pointing it towards the stability.jmx file inside the testplans folder and specifying a logfile to collect the results

~/jmeter/apache-jmeter-5.1.1/bin/jmeter -n -t ~/distribution/testsuites/stability/src/main/resources/testplans/stability.jmx -Jduration=259200 -l ~/distr-stability.jtl &
Test Results

Summary

  • Stability test plan was triggered for 72 hours.

  • No errors were reported

Test Statistics

_images/dist_stability_statistics.PNG _images/dist_stability_threshold.PNG

VisualVM Screenshots

_images/dist_stability_monitor.PNG _images/dist_stability_threads.PNG
Performance Test of Policy Distribution
Introduction

The 4h performance test of policy distribution has the goal of measuring the min/avg/max processing time and REST call throughput when the number of requests is large enough to saturate the resource and expose the bottleneck.

It also tests that distribution can handle multiple policy CSARs and that these are deployed within 30 seconds consistently.

Setup Details

The performance test is based on the same setup as the distribution stability tests.

Test Plan Sequence

The performance test plan differs from the stability test plan.

  • Instead of handling one policy CSAR at a time, multiple CSARs are deployed to the watched folder at the same time.

  • We expect all policies from these CSARs to be deployed within 30 seconds.

  • There are also multi-threaded tests running against the healthcheck and statistics endpoints of the distribution service.

Running the Test Plan

Edit the /tmp folder permissions to allow the test plan to insert the CSAR into the /tmp/policydistribution/distributionmount folder.

sudo mkdir -p /tmp/policydistribution/distributionmount
sudo chmod -R a+trwx /tmp

From the apache JMeter folder run the test for 4h, pointing it towards the performance.jmx file inside the testplans folder and specifying a logfile to collect the results

~/jmeter/apache-jmeter-5.1.1/bin/jmeter -n -t ~/distribution/testsuites/performance/src/main/resources/testplans/performance.jmx -Jduration=14400 -l ~/distr-performance.jtl &
Test Results

Summary

  • Performance test plan was triggered for 4 hours.

  • No errors were reported

Test Statistics

_images/dist_perf_statistics.PNG _images/dist_perf_threshold.PNG

VisualVM Screenshots

_images/20201020-1730-distr-performance-20201020T2025-monitor.png _images/20201020-1730-distr-performance-20201020T2025-threads.png

Policy Platform Actor Development Guidelines

Actor Design Overview

Intro

An actor/operation is any ONAP component that an operational policy can use to control a VNF, VM, etc., during execution of the policy when a control loop event is triggered.

_images/topview.png

An Actor Service object contains one or more Actor objects, which are found and created using ServiceLoader. Each Actor object, in turn, creates one or more Operator objects. All of these components, the Actor Service, the Actor, and the Operator are typically singletons that are created once, at start-up (or on the first request). The Actor Service includes several methods, configure(), start(), and stop(), which are cascaded to the Actors and then to the Operators.

Operation objects, on the other hand, are not singletons; a new Operation object is created for each operation that an application wishes to perform. For instance, if an application wishes to use the “SO” Actor to add two new modules, then two separate Operation objects would be created, one for each module.

Actors are configured by invoking the Actor Service configure() method, passing it a set of properties. The configure() method extracts the properties that are relevant to each Actor and passes them to the Actor’s configure() method. Similarly, the Actor’s configure() method extracts the properties that are relevant to each Operator and passes them to the Operator’s configure() method. Note: Actors typically extract “default” properties from their respective property sets and include those when invoking each Operator’s configure() method.

Once the Actor Service has been configured, it can be started via start(). It will then continue to run until no longer needed, at which point stop() can be invoked.

Note: it is possible to create separate instances of an Actor Service, each with its own set of properties. In that case, each Actor Service will get its own instances of Actors and Operators.

Components

This section describes things to consider when creating a new Actor/Operator.

Actor
  • The constructor should use addOperator() to add operators

  • By convention, the name of the actor is specified by a static field, “NAME”

  • An actor is registered via the Java ServiceLoader by including its jar on the classpath and adding its class name to this file, typically contained within the jar (see the illustration after this list):

    onap.policy.controlloop.actorServiceProvider.spi

  • Actor loading is ordered, so that those having a lower (i.e., earlier) sequence number are loaded first. If a later actor has the same name as one that has already been loaded, a warning will be generated and the later actor discarded. This makes it possible for an organization to override an actor implementation.

  • An implementation for a specific Actor will typically be derived from HttpActor or BidirectionalTopicActor, depending on whether it is HTTP/REST-based or DMaaP-topic-based. These superclasses provide most of the functionality needed to configure the operators, extracting operator-specific properties and adding default, actor-level properties.
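As an illustration, the registration file named above contains one fully qualified implementation class name per line. Assuming a hypothetical actor class, the file would look like this:

src/main/resources/META-INF/services/onap.policy.controlloop.actorServiceProvider.spi:

org.onap.policy.controlloop.actor.myactor.MyActor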

Operator
  • Typically, developers don’t have to implement any Operator classes; they just use HttpOperator or BidirectionalTopicOperator

Operation
  • Most operations require guard checks to be performed first. Thus, at a minimum, they should override startPreprocessorAsync() and have it invoke startGuardAsync()

  • In addition, if the operation depends on data being previously gathered and placed into the context, then it should override startPreprocessorAsync() and have it invoke obtain(). Note: obtain() and the guard can be performed in parallel by using the allOf() method. If the guard happens to depend on the same data, then it will block until the data is available, and then continue; the invoker need not deal with the dependency

  • Subclasses will typically derive from HttpOperation or BidirectionalTopicOperation, though if neither of those suffice, then they can extend OperationPartial, or even just implement a raw Operation. OperationPartial is the super class of HttpOperation and BidirectionalTopicOperation and provides most of the methods used by the Operation subclasses, including a number of utility methods (e.g., cancellable allOf)

  • Operation subclasses should be written so as to avoid any blocking I/O. If this proves too difficult, then the implementation should override doOperation() instead of startOperationAsync()

  • Operations return a “future” when start() is invoked. Typically, if the “future” is canceled, then any outstanding operation should be canceled. For instance, HTTP connections should be closed without waiting for a response

  • If an operation sets the outcome to “FAILURE”, it will be automatically retried; other failure types are not retried

ControlLoopParams
  • Identifies the operation to be performed

  • Includes timeout and retry information, though the actors typically provide default values if they are not specified in the parameters

  • Includes the event “context”

  • Includes “Policy” fields (e.g., “actor” and “operation”)

Context (aka, Event Context)
  • Includes:

    • the original onset event

    • enrichment data associated with the event

    • results of A&AI queries

XxxParams and XxxConfig
  • XxxParams objects are POJOs into which the property Maps are decoded when configuring actors or operators

  • XxxConfig objects contain a single Operator’s (or Actor’s) configuration information, based on what was in the XxxParams. For instance, the HttpConfig contains a reference to the HttpClient that is used to perform HTTP operations, while the associated HttpParams just contains the name of the HttpClient. XxxConfig objects are shared by all operations created by a single Operator. As a result, it should not contain any data associated with an individual operation; such data should be stored within the Operation object, itself

Junit tests
  • Operation Tests may choose to subclass from BasicHttpOperation, which provides some supporting utilities and mock objects

  • Should include a test to verify that the Actor, and possibly each Operator, can be retrieved via an Actor Service

  • Tests against an actual REST server are performed within HttpOperationTest, so they need not be repeated in subclasses. Instead, subclasses can catch the callback to the get(), post(), etc., methods and pass the rawResponse there. That being said, a number of actors spin up a simulator to verify end-to-end request/response processing

Clients (e.g., drools-applications)
  • When using callbacks, a client may want to use the isFor() method to verify that the outcome is for the desired operation, as callbacks are invoked with the outcome of all operations performed, including any preprocessor steps

Flow of operation
  • PDP:

    • Populates a ControlLoopParams using ControlLoopParams.builder()

    • Invokes start() on the ControlLoopParams

  • ControlLoopParams:

    • Finds the actor/operator

    • Uses it to invoke buildOperation()

    • Invokes start() on the Operation

  • Operation:

    • start() invokes startPreprocessorAsync() and then startOperationAsync()

    • Exceptions that occur while constructing the operation pipeline propagate back to the client that invoked start()

    • Exceptions that occur while executing the operation pipeline are caught and turned into an OperationOutcome whose result is FAILURE_EXCEPTION. In addition, the “start” callback (i.e., specified via the ControlLoopParams) will be invoked, if it hasn’t been invoked yet, and then the “complete” callback will be invoked

    • By default, startPreprocessorAsync() does nothing, thus most subclasses will override it to:

      • Do any A&AI query that is needed (beyond enrichment, which is already available in the Context)

      • Use Context obtain() to request the data asynchronously

      • Invoke startGuardAsync()

    • By default, startGuardAsync() will simply perform a guard check, passing it the “standard” payload

    • Subclasses may override makeGuardPayload() to add extra fields to the payload (e.g., some SO operations add the VF count)

    • If any preprocessing step fails, then the “start” and “complete” callbacks will be invoked to indicate a failure of the operation as a whole. Otherwise, the flow will continue on to startOperationAsync(), after the “start” callback is invoked

    • StartOperationAsync() will perform whatever needs to be done to start the operation

    • Once it completes, the “complete” callback will be invoked with the outcome of the operation. StartOperationAsync() should not invoke the callback, as that is handled automatically by OperationPartial, which is the superclass of most Operations

A&AI Actor

Overview of A&AI Actor

ONAP Policy Framework enables various actors, several of which require additional data to be gathered from A&AI via a REST call. Previously, the request was built, and the REST call made, by the application. However, A&AI queries have now been implemented using the new Actor framework.

Each operation supported by the actor is associated with its own java class, which is responsible for populating the request structure and invoking the REST service. The class hierarchy is shown below.

_images/classHierarchy.png

Currently, the following operations are supported:

  • Tenant

  • Pnf

  • CustomQuery

One thing that sets the A&AI Actor implementation apart from the other Actor implementations is that it is typically used to gather data for input to the other actors. Consequently, when an A&AI operation completes, it places its response into the properties field of the context, which is passed via the ControlLoopOperationParams. The names of the keys within the properties field are typically of the form, “AAI.<operation>.<targetEntity>”, where “operation” is the name of the operation, and “targetEntity” is the targetEntity passed via the ControlLoopOperationParams. For example, the response for the Tenant query for a target entity named “ozVserver” would be stored as a property named “AAI.Tenant.ozVserver”.

On the other hand, as there is only one “custom query” for a given ONSET, the Custom Query operation deviates from this, in that it always stores its response using the key, “AAI.AaiCqResponse”.

Request

Most of the A&AI operations use “GET” requests and thus do not populate a request structure. However, for those that do, the request structure is described below.

Note: the Custom Query Operation requires tenant data, thus it performs a Tenant operation before sending its request. The tenant data is gathered for the vserver whose name is found in the “vserver.vserver-name” field of the enrichment data provided by DCAE with the ONSET event.

Custom Query:

  • start (string) - Extracted from the result-data[0].resource-link field of the Tenant query response.

Examples

Suppose the ControlLoopOperationParams were populated as follows, with the tenant query having already been performed:

{
    "actor": "AAI",
    "operation": "CustomQuery",
    "context": {
        "enrichment": {
            "vserver.vserver-name": "Ete_vFWCLvFWSNK_7ba1fbde_0"
        },
        "properties": {
            "AAI.Tenant.Ete_vFWCLvFWSNK_7ba1fbde_0": {
                "result-data": [
                    {
                        "resource-type": "vserver",
                        "resource-link": "/aai/v15/cloud-infrastructure/cloud-regions/cloud-region/CloudOwner/RegionOne/tenants/tenant/3f2aaef74ecb4b19b35e26d0849fe9a2/vservers/vserver/6c3b3714-e36c-45af-9f16-7d3a73d99497"
                    }
                ]
            }
        }
    }
}

An example of a Custom Query request constructed by the actor using the above parameters, sent to the A&AI REST server:

{
  "start": "/aai/v15/cloud-infrastructure/cloud-regions/cloud-region/CloudOwner/RegionOne/tenants/tenant/3f2aaef74ecb4b19b35e26d0849fe9a2/vservers/vserver/6c3b3714-e36c-45af-9f16-7d3a73d99497",
  "query": "query/closed-loop"
}

An example response received from the A&AI REST service:

{
    "results": [
        {
            "vserver": {
                "vserver-id": "f953c499-4b1e-426b-8c6d-e9e9f1fc730f",
                "vserver-name": "Ete_vFWCLvFWSNK_7ba1fbde_0",
                "vserver-name2": "Ete_vFWCLvFWSNK_7ba1fbde_0",
                "prov-status": "ACTIVE",
                "vserver-selflink": "http://10.12.25.2:8774/v2.1/41d6d38489bd40b09ea8a6b6b852dcbd/servers/f953c499-4b1e-426b-8c6d-e9e9f1fc730f",
                "in-maint": false,
                "is-closed-loop-disabled": false,
    ...
}
Configuration of the A&AI Actor

The following fields should be provided to configure the A&AI actor.

  • clientName (string) - Name of the HTTP client to use to send the request to the A&AI REST server.

  • timeoutSec (integer, optional) - Maximum time, in seconds, to wait for a response to be received from the REST server. Defaults to 90s.

  • path (string) - URI appended to the URL. This field only applies to individual operations; it does not apply at the actor level. Note: the path should not include a leading or trailing slash.

The individual operations are configured using these same field names. However, all of them, except the path, are optional, as they inherit their values from the corresponding actor-level fields.

APPC Legacy Actor

Overview of APPC Legacy Actor

ONAP Policy Framework enables APPC Legacy as one of the supported actors. APPC Legacy uses a single DMaaP topic for both requests and responses. As a result, the actor implementation must cope with the fact that requests may appear on the same stream from which it is reading responses, thus it must use the message content to distinguish responses from requests. This particular implementation uses the Status field to identify responses.

In addition, APPC may generate more than one response for a particular request, the first response simply indicating that the request was accepted, while the second response indicates completion of the request. For each request, a unique sub-request ID is generated. This is used to match the received responses with the published requests.

Each operation supported by the actor is associated with its own java class, which is responsible for populating the request structure appropriately. The operation-specific classes are all derived from the AppcOperation class, which is, itself, derived from BidirectionalTopicOperation.

Request
CommonHeader

The “CommonHeader” field in the request is built by policy. Its fields are:

  • SubRequestID (string) - Generated by Policy. A UUID, used internally by policy to match the response with the request.

  • RequestID (string) - Inserted by Policy. Maps to the UUID sent by DCAE, i.e. the ID used throughout the closed loop lifecycle to identify a request.

Action

The “Action” field uniquely identifies the operation to perform. Currently, only “ModifyConfig” is supported.

Payload

  • generic-vnf.vnf-id (string) - The ID of the VNF selected from the A&AI Custom Query response using the Target resource ID specified in the ControlLoopOperationParams.

Additional fields are populated based on the payload specified within the ControlLoopOperationParams. Each value found within the payload is treated as a JSON string and is decoded into a POJO, which is then inserted into the request payload using the same key.

Examples

Suppose the ControlLoopOperationParams were populated as follows:

{
    "actor": "APPC",
    "operation": "ModifyConfig",
    "target": {
        "resourceID": "2246ebc9-9b9f-42d0-a5e4-0248324fb884"
    },
    "payload": {
        "my-key-A": "{\"input\":\"hello\"}",
        "my-key-B": "{\"output\":\"world\"}"
    },
    "context": {
        "event": {
            "requestId": "c7c6a4aa-bb61-4a15-b831-ba1472dd4a65"
        },
        "cqdata": {
            "generic-vnf": [
                {
                    "vnfId": "my-vnf",
                    "vf-modules": [
                        {
                            "model-invariant-id": "2246ebc9-9b9f-42d0-a5e4-0248324fb884"
                        }
                    ]
                }
            ]
        }
    }
}

An example of a request constructed by the actor using the above parameters, published to the APPC topic:

{
  "CommonHeader": {
    "TimeStamp": 1589400050910,
    "APIver": "1.01",
    "RequestID": "c7c6a4aa-bb61-4a15-b831-ba1472dd4a65",
    "SubRequestID": "ee3f2dc0-a2e0-4ae8-98c3-478c784b8eb5",
    "RequestTrack": [],
    "Flags": []
  },
  "Action": "ModifyConfig",
  "Payload": {
    "my-key-B": {
      "output": "world"
    },
    "my-key-A": {
      "input": "hello"
    },
    "generic-vnf.vnf-id": "my-vnf"
  }
}

An example initial response received from APPC on the same topic:

{
  "CommonHeader": {
    "TimeStamp": 1589400050923,
    "APIver": "1.01",
    "RequestID": "c7c6a4aa-bb61-4a15-b831-ba1472dd4a65",
    "SubRequestID": "ee3f2dc0-a2e0-4ae8-98c3-478c784b8eb5",
    "RequestTrack": [],
    "Flags": []
  },
  "Status": {
    "Code": 100,
    "Value": "ACCEPTED"
  }
}

An example final response received from APPC on the same topic:

{
  "CommonHeader": {
    "TimeStamp": 1589400050934,
    "APIver": "1.01",
    "RequestID": "c7c6a4aa-bb61-4a15-b831-ba1472dd4a65",
    "SubRequestID": "ee3f2dc0-a2e0-4ae8-98c3-478c784b8eb5",
    "RequestTrack": [],
    "Flags": []
  },
  "Status": {
    "Code": 400,
    "Value": "SUCCESS"
  }
}
Configuration of the APPC Legacy Actor

The following fields should be provided to configure the APPC Legacy actor.

  • sinkTopic (string) - Name of the topic to which the request should be published.

  • sourceTopic (string) - Name of the topic from which the response should be read.

  • timeoutSec (integer, optional) - Maximum time, in seconds, to wait for a response to be received on the topic.

The individual operations are configured using these same field names. However, all of them are optional, as they inherit their values from the corresponding actor-level fields.

APPC LCM Actor

Overview of APPC LCM Actor

ONAP Policy Framework enables APPC as one of the supported actors. The APPC LCM Actor contains operations supporting both the LCM interface and the legacy interface. As such, this actor supersedes the APPC Legacy actor. Its sequence number is lower than the APPC Legacy actor’s sequence number, which ensures that it is loaded first.

APPC Legacy uses a single DMaaP topic for both requests and responses. The class(es) supporting this interface are described in APPC Legacy Actor. The APPC LCM Actor only supports the APPC Legacy operation, ModifyConfig.

The APPC LCM interface, on the other hand, uses two DMaaP topics, one to which requests are published, and another from which responses are received. As with the legacy interface, APPC LCM may generate more than one response for a particular request: the first response simply indicates that the request was accepted, while the second indicates completion of the request.

For each request, a unique sub-request ID is generated. This is used to match the received responses with the published requests. (APPC LCM also has a “correlation-id” field, which could potentially be used to match the response to the request, but apparently APPC LCM has not implemented that capability yet.)

All APPC LCM operations are currently supported by a single java class, AppcLcmOperation, which is responsible for populating the request structure appropriately. This class is derived from BidirectionalTopicOperation.

The remainder of this discussion describes the operations that are specific to APPC LCM.

Request
CommonHeader

The “common-header” field in the request is built by policy.

“common-header” field name

type

Description

sub-request-id

string

Generated by Policy. Is a UUID and used internally by policy to match the response with the request.

request-id

string

Inserted by Policy. Maps to the UUID sent by DCAE, i.e., the ID used throughout the closed loop lifecycle to identify a request.

originator-id

string

Copy of the request-id.

Action

The “action” field uniquely identifies the operation to perform. Currently, the following operations are supported:

  • Restart

  • Rebuild

  • Migrate

  • ConfigModify

The valid operations are listed in AppcLcmConstants. These are the values that must be specified in the policy. However, before being placed into the “action” field, they are converted to camel case, stripping any hyphens and converting the first character to upper case, if it is not already.
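For instance, if a policy specified hyphenated or lower-case names (illustrative inputs only; the constants in AppcLcmConstants are the authoritative values), the conversion would produce:

restart        ->  Restart
config-modify  ->  ConfigModify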

Action Identifiers

Currently, the “action-identifiers” field contains only the VNF ID, which should be the targetEntity specified within the ControlLoopOperationParams.

Payload

The “payload” field is populated based on the payload specified within the ControlLoopOperationParams. Unlike the APPC Legacy operations, which inject POJOs into the “payload” field, the APPC LCM operations simply encode the entire parameter payload into a JSON string, and then place the encoded string into the “payload” field of the request.

Examples

Suppose the ControlLoopOperationParams were populated as follows:

{
    "actor": "APPC",
    "operation": "Restart",
    "targetEntity": "my-target",
    "payload": {
        "my-key-A": "hello",
        "my-key-B": "world"
    },
    "context": {
        "event": {
            "requestId": "664be3d2-6c12-4f4b-a3e7-c349acced200"
        }
    }
}

An example of a request constructed by the actor using the above parameters, published to the APPC LCM request topic:

{
  "version": "2.0",
  "rpc-name": "restart",
  "correlation-id": "664be3d2-6c12-4f4b-a3e7-c349acced200-8c4c1914-00ed-4be0-ae3b-49dd22e8f461",
  "type": "request",
  "body": {
    "input": {
      "common-header": {
        "timestamp": "2020-05-14T19:19:32.619890900Z",
        "api-ver": "2.00",
        "originator-id": "664be3d2-6c12-4f4b-a3e7-c349acced200",
        "request-id": "664be3d2-6c12-4f4b-a3e7-c349acced200",
        "sub-request-id": "8c4c1914-00ed-4be0-ae3b-49dd22e8f461",
        "flags": {}
      },
      "action": "Restart",
      "action-identifiers": {
        "vnf-id": "my-target"
      },
      "payload": "{\"my-key-A\":\"hello\", \"my-key-B\":\"world\"}"
    }
  }
}

An example initial response received from the APPC LCM response topic:

{
  "body": {
    "output": {
      "common-header": {
        "timestamp": "2020-05-14T19:19:32.619897000Z",
        "api-ver": "5.00",
        "originator-id": "664be3d2-6c12-4f4b-a3e7-c349acced200",
        "request-id": "664be3d2-6c12-4f4b-a3e7-c349acced200",
        "sub-request-id": "8c4c1914-00ed-4be0-ae3b-49dd22e8f461",
        "flags": {}
      },
      "status": {
        "code": 100,
        "message": "Restart accepted"
      }
    }
  },
  "version": "2.0",
  "rpc-name": "restart",
  "correlation-id": "664be3d2-6c12-4f4b-a3e7-c349acced200-1",
  "type": "response"
}

An example final response received from the APPC LCM on the same response topic:

{
  "body": {
    "output": {
      "common-header": {
        "timestamp": "2020-05-14T19:19:32.619898000Z",
        "api-ver": "5.00",
        "originator-id": "664be3d2-6c12-4f4b-a3e7-c349acced200",
        "request-id": "664be3d2-6c12-4f4b-a3e7-c349acced200",
        "sub-request-id": "8c4c1914-00ed-4be0-ae3b-49dd22e8f461",
        "flags": {}
      },
      "status": {
        "code": 400,
        "message": "Restart Successful"
      }
    }
  },
  "version": "2.0",
  "rpc-name": "restart",
  "correlation-id": "664be3d2-6c12-4f4b-a3e7-c349acced200-1",
  "type": "response"
}
Configuration of the APPC LCM Actor

The following table specifies the fields that should be provided to configure the APPC LCM actor.

Field name

type

Description

sinkTopic

string

Name of the topic to which the request should be published.

sourceTopic

string

Name of the topic from which the response should be read. This must not be the same as the sinkTopic.

timeoutSec

integer (optional)

Maximum time, in seconds, to wait for a response to be received on the topic.

The individual operations are configured using these same field names. However, all of them are optional, as they inherit their values from the corresponding actor-level fields. That being said, the APPC Legacy operation(s) use a different topic than the APPC LCM operations. As a result, the sink and source topics should be specified for each APPC Legacy operation supported by this actor.
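As a sketch, assuming the same JSON rendering used above, the actor-level fields would name the two LCM topics, while the legacy ModifyConfig operation overrides both topics with the single legacy topic. The LCM topic names shown follow common ONAP lab conventions and are illustrative only; APPC-CL is the legacy topic used in this document's examples.

{
  "sinkTopic": "APPC-LCM-READ",
  "sourceTopic": "APPC-LCM-WRITE",
  "timeoutSec": 90,
  "operations": {
    "ModifyConfig": {
      "sinkTopic": "APPC-CL",
      "sourceTopic": "APPC-CL"
    }
  }
}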

CDS actor support in Policy

1. Overview of CDS Actor support in Policy

ONAP Policy Framework now enables Controller Design Studio (CDS) as one of the supported actors. This allows users to configure an operational policy to use CDS as an actor to remedy a situation.

Behind the scenes, when an incoming event is received and validated against the rules, Policy uses gRPC to trigger the CBA (Controller Blueprint Archive, a CDS artifact) configured in the operational policy, providing CDS with all the input parameters required to execute the chosen CBA.

2. Objective

The goal of this user guide is to clarify the contract between Policy and CDS so that a CBA developer can honor this input contract when implementing the CBA.

3. Contract between Policy and CDS

Upon receiving an incoming event from DCAE, Policy fires the rules and decides which actor to trigger. If the CDS actor is chosen, Policy triggers the CBA execution using gRPC.

The parameters required for the execution of a CBA are internally handled by Policy. It makes use of the incoming event, the configured operational policy, and an AAI lookup to build the CDS request payload.

3.1 CDS Blueprint Execution Payload format as invoked by Policy

Below are the details of the contract established between Policy and CDS to enable the automation of a remediation action within the scope of a closed loop usecase in ONAP.

The input payload for CDS follows the guidelines below, so a CBA developer must take them into account when implementing the CBA logic. For the sake of simplicity, a JSON payload is shown instead of a gRPC payload, and each attribute of the child nodes is documented.

3.1.1 CommonHeader

The “commonHeader” field in the CBA execute payload is built by policy.

“commonHeader” field name

type

Description

subRequestId

string

Generated by Policy. Is a UUID and used internally by policy.

requestId

string

Inserted by Policy. Maps to the UUID sent by DCAE i.e. the ID used throughout the closed loop lifecycle to identify a request.

originatorId

string

Generated by Policy and fixed to “POLICY”.

3.1.2 ActionIdentifiers

The “actionIdentifiers” field uniquely identifies the CBA and the workflow to execute.

“actionIdentifiers” field name

type

Description

mode

string

Inserted by Policy and fixed to “sync” presently.

blueprintName

string

Inserted by Policy. Maps to the attribute that holds the blueprint-name in the operational policy configuration.

blueprintVersion

string

Inserted by Policy. Maps to the attribute that holds the blueprint-version in the operational policy configuration.

actionName

string

Inserted by Policy. Maps to the attribute that holds the action-name in the operational policy configuration.

3.1.3 Payload

The “payload” JSON node is generated by Policy for the action-name specified in the “actionIdentifiers” field, which is eventually supplied through the operational policy configuration, as indicated above.

3.1.3.1 Action request object

The “$actionName-request” object is generated by Policy for the action-name specified in the “actionIdentifiers” field.

The “$actionName-request” object contains:

  • a field called “resolution-key” which CDS uses to store the resolved parameters into the CDS context

  • a child node object called “$actionName-properties” which holds a map of all the parameters that serve as inputs to the CBA. It presently holds the below information:

    • all the AAI enriched attributes

    • additional parameters embedded in the Control Loop Event format which is sent by DCAE (analytics application).

    • any static information supplied through operational policy configuration which is not specific to an event but applies across all the events.

The data description for the action request object fields is as below:

  • Resolution-key

“$actionName-request” field name

type

Description

resolution-key

string

Generated by Policy. Is a UUID, generated anew each time a CBA execute request is invoked.

  • Action properties object

“$actionName-properties” field name

type

Description

[$aai_node_type.$aai_attribute]

map

Inserted by Policy after performing AAI enrichment. Is a map that contains AAI parameters for the target and conforms to the notation: $aai_node_type.$aai_attribute. E.g., for a PNF target the map looks like the one below.

{
  "pnf.equip-vendor":"Vendor-A",
  "pnf.ipaddress-v4-oam":"10.10.10.10",
  "pnf.in-maint":false,
  "pnf.pnf-ipv4-address":"3.3.3.3",
  "pnf.resource-version":"1570746989505",
  "pnf.nf-role":"ToR DC101",
  "pnf.equip-type":"Router",
  "pnf.equip-model":"model-123456",
  "pnf.frame-id":"3",
  "pnf.pnf-name":"demo-pnf"
}

data

json object OR string

Inserted by Policy. Maps to the static payload supplied through the operational policy configuration. Used to hold any static information which applies across all events, as described above. If the value of the data field is a valid JSON string, it is converted to a JSON object; otherwise it is retained as a string.

[$additionalEventParams]

map

Inserted by Policy. Maps to the map of additionalEvent parameters embedded into the Control Loop Event message from DCAE.

3.1.4 Summing it up: CBA execute payload generation as done by Policy

Putting all the above information together, below is the REST equivalent of the CDS blueprint execute gRPC request generated by Policy.

REST equivalent of the gRPC request from Policy to CDS to execute a CBA.

curl -X POST \
  'http://{{ip}}:{{port}}/api/v1/execution-service/process' \
  -H 'Authorization: Basic Y2NzZGthcHBzOmNjc2RrYXBwcw==' \
  -H 'Content-Type: application/json' \
  -H 'cache-control: no-cache' \
  -d '{
    "commonHeader":{
        "subRequestId":"{generated_by_policy}",
        "requestId":"{req_id_from_DCAE}",
        "originatorId":"POLICY"
    },
    "actionIdentifiers":{
        "mode":"sync",
        "blueprintName":"{blueprint_name_from_operational_policy_config}",
        "blueprintVersion":"{blueprint_version_from_operational_policy_config}",
        "actionName":"{blueprint_action_name_from_operational_policy_config}"
    },
    "payload":{
        "$actionName-request":{
            "resolution-key":"{generated_by_policy}",
            "$actionName-properties":{
                "$aai_node_type.$aai_attribute_1":"",
                "$aai_node_type.$aai_attribute_2":"",
                .........
                "data":"{static_payload_data_from_operational_policy_config}",
                "$additionalEventParam_1":"",
                "$additionalEventParam_2":"",
                .........
            }
        }
    }
}'
3.1.5 Examples

Sample CBA execute request generated by Policy for a PNF target type, when the “data” field is a string:

curl -X POST \
  'http://{{ip}}:{{port}}/api/v1/execution-service/process' \
  -H 'Authorization: Basic Y2NzZGthcHBzOmNjc2RrYXBwcw==' \
  -H 'Content-Type: application/json' \
  -H 'cache-control: no-cache' \
  -d '{
    "commonHeader":{
        "subRequestId":"14384b21-8224-4055-bb9b-0469397db801",
        "requestId":"d57709fb-bbec-491d-a2a6-8a25c8097ee8",
        "originatorId":"POLICY"
    },
    "actionIdentifiers":{
        "mode":"sync",
        "blueprintName":"PNF-demo",
        "blueprintVersion":"1.0.0",
        "actionName":"reconfigure-pnf"
    },
    "payload":{
        "reconfigure-pnf-request":{
            "resolution-key":"8338b828-51ad-4e7c-ac8b-08d6978892e2",
            "reconfigure-pnf-properties":{
                "pnf.equip-vendor":"Vendor-A",
                "pnf.ipaddress-v4-oam":"10.10.10.10",
                "pnf.in-maint":false,
                "pnf.pnf-ipv4-address":"3.3.3.3",
                "pnf.resource-version":"1570746989505",
                "pnf.nf-role":"ToR DC101",
                "pnf.equip-type":"Router",
                "pnf.equip-model":"model-123456",
                "pnf.frame-id":"3",
                "pnf.pnf-name":"demo-pnf",
                "data": "peer-as=64577",
                "peer-group":"demo-peer-group",
                "neighbor-address":"4.4.4.4"
            }
        }
    }
}'

Sample CBA execute request generated by Policy for a VNF target type, when the “data” field is a valid JSON string:

curl -X POST \
  'http://{{ip}}:{{port}}/api/v1/execution-service/process' \
  -H 'Authorization: Basic Y2NzZGthcHBzOmNjc2RrYXBwcw==' \
  -H 'Content-Type: application/json' \
  -H 'cache-control: no-cache' \
  -d '{
    "commonHeader":{
        "subRequestId":"14384b21-8224-4055-bb9b-0469397db801",
        "requestId":"d57709fb-bbec-491d-a2a6-8a25c8097ee8",
        "originatorId":"POLICY"
    },
    "actionIdentifiers":{
        "mode":"sync",
        "blueprintName":"vFW-CDS",
        "blueprintVersion":"1.0.0",
        "actionName":"config-deploy"
    },
    "payload":{
        "config-deploy-request":{
            "resolution-key":"6128eb53-0eac-4c79-855c-ff56a7b81141",
            "config-deploy-properties":{
                "service-instance.service-instance-id":"40004db6-c51f-45b0-abab-ea4156bae422",
                "generic-vnf.vnf-id":"8d09e3bd-ae1d-4765-b26e-4a45f568a092",
                "data":{
                    "active-streams":"7"
                }
            }
        }
    }
}'
4. Operational Policy configuration to use CDS as an actor
4.1 TOSCA compliant Control Loop Operational Policy to support CDS actor

A common base TOSCA policy type is defined for operational policies; the PDP-specific operational policy types below are derived from it.

The APEX PDP specific operational policy type is derived from the common operational TOSCA policy type, as defined at:

  • https://gerrit.onap.org/r/gitweb?p=policy/models.git;a=blob;f=models-examples/src/main/resources/policytypes/onap.policies.controlloop.operational.common.Apex.yaml;h=54b69c2d8a78ab7fd8d41d3f7c05632c4d7e433d;hb=HEAD

The Drools PDP specific operational policy type is also derived from the common operational TOSCA policy type, as defined at:

  • https://gerrit.onap.org/r/gitweb?p=policy/models.git;a=blob;f=models-examples/src/main/resources/policytypes/onap.policies.controlloop.operational.common.Drools.yaml;h=69d73db5827cb6743172f9e0b1930eca8ba4ec0c;hb=HEAD

For integration testing, the CLAMP UI can be used to configure the operational policy.

E.g., a sample operational policy definition for the vFW usecase, using CDS as an actor:

tosca_definitions_version: tosca_simple_yaml_1_1_0
topology_template:
    policies:
    -   operational.modifyconfig.cds:
            type: onap.policies.controlloop.operational.common.Drools
            type_version: 1.0.0
            version: 1.0.0
            metadata:
                policy-id: operational.modifyconfig.cds
            properties:
                id: ControlLoop-vFirewall-d0a1dfc6-94f5-4fd4-a5b5-4630b438850a
                timeout: 1200
                abatement: false
                trigger: unique-policy-id-1-modifyConfig
                operations:
                -   id: unique-policy-id-1-modifyConfig
                    description: Modify the packet generator
                    operation:
                        actor: CDS
                        operation: ModifyConfig
                        target:
                            targetType: VNF
                            entityId:
                                resourceID: bbb3cefd-01c8-413c-9bdd-2b92f9ca3d38
                        payload:
                            artifact_name: vfw-cds
                            artifact_version: 1.0.0
                            mode: async
                            data: '{"active-streams":"7"}'
                    timeout: 300
                    retries: 0
                    success: final_success
                    failure: final_failure
                    failure_timeout: final_failure_timeout
                    failure_retries: final_failure_retries
                    failure_exception: final_failure_exception
                    failure_guard: final_failure_guard
                controllerName: usecases
4.2 API to configure the Control Loop Operational policy
4.2.1 Policy creation

The Policy API endpoint is used to create the policy, i.e., an instance of the TOSCA compliant operational policy type. E.g., for the vFW usecase the policy type is “onap.policies.controlloop.operational.common.Drools”.

In the REST endpoint below, the hostname points to the K8S service “policy-api” and the internal port 6969.

curl -X POST 'https://{$POLICY_API_URL}:{$POLICY_API_SERVICE_PORT}/policy/api/v1/policytypes/onap.policies.controlloop.operational.common.Drools/versions/1.0.0/policies/operational.modifyconfig/versions/1.0.0' \
-H 'Accept: application/json' \
-H 'Content-Type: application/json' \
-H 'Authorization: Basic aGVhbHRoY2hlY2s6ZmlyZWFudHNfZGV2QHBvbGljeSE=' \
-d '{$vfw-tosca-policy}'

Note: in order to create an operational policy for the APEX PDP, use the policy type “onap.policies.controlloop.operational.common.Apex”.
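Under that assumption, the create request would differ from the one above only in the policy-type segment of the URI; the policy body placeholder below is hypothetical:

curl -X POST 'https://{$POLICY_API_URL}:{$POLICY_API_SERVICE_PORT}/policy/api/v1/policytypes/onap.policies.controlloop.operational.common.Apex/versions/1.0.0/policies/operational.modifyconfig/versions/1.0.0' \
-H 'Accept: application/json' \
-H 'Content-Type: application/json' \
-H 'Authorization: Basic {$auth}' \
-d '{$apex-tosca-policy}'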

4.2.2 Policy deployment to PDP

The Policy PAP endpoint is used to deploy the policy to the appropriate PDP instance. In the REST endpoint URI, the hostname points to the K8S service “policy-pap” and the internal port 6969.

curl -X POST 'https://{$POLICY_PAP_URL}:{$POLICY_PAP_SERVICE_PORT}/policy/pap/v1/pdps/deployments/batch' \
-H 'Content-Type: application/json' \
-H 'Accept: application/json' \
-H 'Authorization: Basic {$auth}' \
-d '{
    "groups": [
        {
            "name": "defaultGroup",
            "deploymentSubgroups": [
                {
                    "pdpType": "drools",
                    "action": "POST",
                    "policies": [{
                            "name": "operational.modifyconfig.cds",
                            "version": "1.0.0"
                        }]
                }
            ]
        }
    ]
}'

To view the configured policies, use the REST APIs below.

curl -X GET 'https://{$POLICY_API_URL}:{$POLICY_API_SERVICE_PORT}/policy/api/v1/policytypes/onap.policies.controlloop.operational.common.Drools/versions/1.0.0/policies/operational.modifyconfig/versions/1.0.0' \
-H 'Accept: application/json' \
-H 'Authorization: Basic {$auth}'
curl --location --request GET 'https://{$POLICY_PAP_URL}:{$POLICY_PAP_SERVICE_PORT}/policy/pap/v1/pdps' \
-H 'Accept: application/json' \
-H 'Authorization: Basic {$auth}'

GUARD Actor

Overview of GUARD Actor

Within the ONAP Policy Framework, a guard is typically an implicit check, performed at the start of each operation by making a REST call to the XACML-PDP. Previously, the request was built, and the REST call made, by the application. However, guard checks have now been implemented using the new Actor framework.

Currently, there is a single operation, Decision, which is implemented by the java class, GuardOperation. This class is derived from HttpOperation.

Request

A number of the request fields are populated from values specified in the actor/operation’s configuration parameters (e.g., “onapName”). Additional fields are specified below.

Request ID

The “requestId” field is set to a UUID.

Resource

The “resource” field is populated with a Map containing a single item, “guard”. The value of the item is set to the contents of the payload specified within the ControlLoopOperationParams.

Examples

Suppose the ControlLoopOperationParams were populated as follows:

{
    "actor": "GUARD",
    "operation": "Decision",
    "payload": {
      "actor": "SO",
      "operation": "VF Module Create",
      "target": "OzVServer",
      "requestId": "c7c6a4aa-bb61-4a15-b831-ba1472dd4a65",
      "clname": "ControlLoop-vDNS-6f37f56d-a87d-4b85-b6a9-cc953cf779b3",
      "vfCount": 2
    }
}

An example of a request constructed by the actor using the above parameters, sent to the GUARD REST server:

{
  "ONAPName": "Policy",
  "ONAPComponent": "Drools PDP",
  "ONAPInstance": "Usecases",
  "requestId": "90ee99d2-f2d8-4d90-b162-605203c30180",
  "action": "guard",
  "resource": {
    "guard": {
      "actor": "SO",
      "operation": "VF Module Create",
      "target": "OzVServer",
      "requestId": "c7c6a4aa-bb61-4a15-b831-ba1472dd4a65",
      "clname": "ControlLoop-vDNS-6f37f56d-a87d-4b85-b6a9-cc953cf779b3",
      "vfCount": 2
    }
  }
}

An example response received from the GUARD REST service:

{
    "status": "Permit",
    "advice": {},
    "obligations": {},
    "policies": {}
}
Configuration of the GUARD Actor

The following table specifies the fields that should be provided to configure the GUARD actor.

Field name

type

Description

clientName

string

Name of the HTTP client to use to send the request to the GUARD REST server.

timeoutSec

integer (optional)

Maximum time, in seconds, to wait for a response to be received from the REST server. Defaults to 90s.

path

string

URI appended to the URL. This field only applies to individual operations; it does not apply at the actor level. Note: the path should not include a leading or trailing slash.

onapName

string

ONAP Name (e.g., “Policy”)

onapComponent

string

ONAP Component (e.g., “Drools PDP”)

onapInstance

string

ONAP Instance (e.g., “Usecases”)

action

string (optional)

Used to populate the “action” request field. Defaults to “guard”.

disabled

boolean (optional)

True, to disable guard checks, false otherwise. Defaults to false.

The individual operations are configured using these same field names. However, all of them, except the path, are optional, as they inherit their values from the corresponding actor-level fields.
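A minimal sketch of such a configuration is shown below, again assuming a JSON rendering with a nested "operations" block. The client name is hypothetical; the onapName, onapComponent, and onapInstance values are those shown in the request example above; and the decision path shown is the conventional XACML-PDP decision endpoint, written without leading or trailing slash per the note in the table.

{
  "clientName": "XACML",
  "onapName": "Policy",
  "onapComponent": "Drools PDP",
  "onapInstance": "Usecases",
  "operations": {
    "Decision": {
      "path": "policy/pdpx/v1/decision"
    }
  }
}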

SDNC Actor

Overview of SDNC Actor

ONAP Policy Framework enables SDNC as one of the supported actors. SDNC uses a REST-based interface, and supports the following operations: BandwidthOnDemand, Reroute.

Each operation supported by the actor is associated with its own java class, which is responsible for populating the request structure appropriately. The operation-specific classes are all derived from the SdncOperation class, which is, itself, derived from HttpOperation. Each operation class implements its own makeRequest() method to construct a request appropriate to the operation.

Request

A number of nested structures are populated within the request. The following table lists the contents of some of the fields that appear within these structures.

Field Name

Type

Description

top level:

requestId

string

Inserted by Policy. Maps to the UUID sent by DCAE, i.e., the ID used throughout the closed loop lifecycle to identify a request.

sdnc-request-header:

svc-action

string

Set by Policy, based on the operation.

svc-request-id

string

Generated by Policy. Is a UUID.

request-information:

request-action

string

Set by Policy, based on the operation.

network-information:

Applicable to Reroute.

network-id

string

Set by Policy, using the “network-information.network-id” property found within the enrichment data provided by DCAE with the ONSET event.

vnf-information:

Applicable to BandwidthOnDemand.

vnf-id

string

Set by Policy, using the “vnfId” property found within the enrichment data provided by DCAE with the ONSET event.

vf-module-input-parameters:

Applicable to BandwidthOnDemand.

param[0]

string

Set by Policy, using the “bandwidth” property found within the enrichment data provided by DCAE with the ONSET event.

param[1]

string

Set by Policy, using the “bandwidth-change-time” property found within the enrichment data provided by DCAE with the ONSET event.

vf-module-information:

Applicable to BandwidthOnDemand.

vf-module-id

string

Set by Policy to the empty string (“”).

Examples

Suppose the ControlLoopOperationParams were populated as follows:

{
    "actor": "SDNC",
    "operation": "Reroute",
    "context": {
        "enrichment": {
            "service-instance.service-instance-id": "c7c6a4aa-bb61-4a15-b831-ba1472dd4a65",
            "network-information.network-id": "2246ebc9-9b9f-42d0-a5e4-0248324fb884"
        }
    }
}

An example of a request constructed by the actor using the above parameters, sent to the SDNC REST server:

{
    "input": {
        "sdnc-request-header": {
            "svc-request-id": "2612653e-d946-423b-96d9-a8d5e8e39618",
            "svc-action": "reoptimize"
        },
        "request-information": {
            "request-action": "ReoptimizeSOTNInstance"
        },
        "service-information": {
            "service-instance-id": "c7c6a4aa-bb61-4a15-b831-ba1472dd4a65"
        },
        "network-information": {
            "network-id": "2246ebc9-9b9f-42d0-a5e4-0248324fb884"
        }
    }
}

An example response received from the SDNC REST service:

{
  "output": {
    "svc-request-id": "2612653e-d946-423b-96d9-a8d5e8e39618",
    "response-code": "200",
    "ack-final-indicator": "Y"
  }
}
Configuration of the SDNC Actor

The following table specifies the fields that should be provided to configure the SDNC actor.

Field name

type

Description

clientName

string

Name of the HTTP client to use to send the request to the SDNC REST server.

timeoutSec

integer (optional)

Maximum time, in seconds, to wait for a response to be received from the REST server. Defaults to 90s.

path

string

URI appended to the URL. This field only applies to individual operations; it does not apply at the actor level. Note: the path should not include a leading or trailing slash.

The individual operations are configured using these same field names. However, all of them, except the path, are optional, as they inherit their values from the corresponding actor-level fields.
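As an illustrative sketch (JSON rendering assumed), the actor-level fields carry the client name and timeout, while each operation supplies its own path. The path values below reflect the usual SDNC GENERIC-RESOURCE-API operations but should be treated as indicative rather than authoritative.

{
  "clientName": "SDNC",
  "timeoutSec": 90,
  "operations": {
    "Reroute": {
      "path": "restconf/operations/GENERIC-RESOURCE-API:network-topology-operation"
    },
    "BandwidthOnDemand": {
      "path": "restconf/operations/GENERIC-RESOURCE-API:vf-module-topology-operation"
    }
  }
}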

SDNR Actor

Overview of SDNR Actor

ONAP Policy Framework enables SDNR as one of the supported actors. SDNR uses two DMaaP topics, one to which requests are published, and another from which responses are received. SDNR may generate more than one response for a particular request, the first response simply indicating that the request was accepted, while the second response indicates completion of the request. For each request, a unique sub-request ID is generated. This is used to match the received responses with the published requests.

When an SDNR request completes, whether successfully or unsuccessfully, the actor populates the controlLoopResponse within the OperationOutcome. The application will typically publish this to a notification topic so that downstream systems can take appropriate action.

All SDNR operations are currently supported by a single java class, SdnrOperation, which is responsible for populating the request structure appropriately. This class is derived from BidirectionalTopicOperation.

Request
CommonHeader

The “CommonHeader” field in the request is built by policy.

“CommonHeader” field name

type

Description

SubRequestID

string

Generated by Policy. Is a UUID and used internally by policy to match the response with the request.

RequestID

string

Inserted by Policy. Maps to the UUID sent by DCAE, i.e., the ID used throughout the closed loop lifecycle to identify a request.

Action

The “action” field uniquely identifies the operation to perform. Operation names are not validated. Instead, they are passed to SDNR, untouched.

RPC Name

The “rpc-name” field is the same as the “action” field, with everything mapped to lower case.

Payload

The “payload” field is populated with the payload text that is provided within the ONSET event; no additional transformation is applied.

Examples

Suppose the ControlLoopOperationParams were populated as follows:

{
    "actor": "SDNR",
    "operation": "ModifyConfig",
    "context": {
        "event": {
            "requestId": "664be3d2-6c12-4f4b-a3e7-c349acced200",
            "payload": "some text"
        }
    }
}

An example of a request constructed by the actor using the above parameters, published to the SDNR request topic:

{
  "body": {
    "input": {
      "CommonHeader": {
        "TimeStamp": "2020-05-18T14:43:58.550499700Z",
        "APIVer": "1.0",
        "RequestID": "664be3d2-6c12-4f4b-a3e7-c349acced200",
        "SubRequestID": "848bfd15-b189-43a1-bdea-80982b41fa24",
        "RequestTrack": {},
        "Flags": {}
      },
      "Action": "ModifyConfig",
      "Payload": "some text"
    }
  },
  "version": "1.0",
  "rpc-name": "modifyconfig",
  "correlation-id": "664be3d2-6c12-4f4b-a3e7-c349acced200-848bfd15-b189-43a1-bdea-80982b41fa24",
  "type": "request"
}

An example initial response received from the SDNR response topic:

{
    "body": {
        "output": {
            "CommonHeader": {
                "TimeStamp": "2020-05-18T14:44:10.000Z",
                "APIver": "1.0",
                "RequestID": "664be3d2-6c12-4f4b-a3e7-c349acced200",
                "SubRequestID": "848bfd15-b189-43a1-bdea-80982b41fa24",
                "RequestTrack": [],
                "Flags": []
            },
            "Status": {
                "Code": 100,
                "Value": "ACCEPTED"
            }
        }
    },
    "version": "1.0",
    "rpc-name": "modifyconfig",
    "correlation-id": "664be3d2-6c12-4f4b-a3e7-c349acced200-848bfd15-b189-43a1-bdea-80982b41fa24",
    "type": "response"
}

An example final response received from the SDNR on the same response topic:

{
    "body": {
        "output": {
            "CommonHeader": {
                "TimeStamp": "2020-05-18T14:44:20.000Z",
                "APIver": "1.0",
                "RequestID": "664be3d2-6c12-4f4b-a3e7-c349acced200",
                "SubRequestID": "848bfd15-b189-43a1-bdea-80982b41fa24",
                "RequestTrack": [],
                "Flags": []
            },
            "Status": {
                "Code": 200,
                "Value": "SUCCESS"
            },
            "Payload": "{ \"Configurations\":[ { \"Status\": { \"Code\": 200, \"Value\": \"SUCCESS\" }, \"data\":{\"FAPService\":{\"alias\":\"Chn0330\",\"X0005b9Lte\":{\"phyCellIdInUse\":6,\"pnfName\":\"ncserver23\"},\"CellConfig\":{\"LTE\":{\"RAN\":{\"Common\":{\"CellIdentity\":\"Chn0330\"}}}}}} } ] }"
        }
    },
    "version": "1.0",
    "rpc-name": "modifyconfig",
    "correlation-id": "664be3d2-6c12-4f4b-a3e7-c349acced200-848bfd15-b189-43a1-bdea-80982b41fa24",
    "type": "response"
}
Configuration of the SDNR Actor

The following table specifies the fields that should be provided to configure the SDNR actor.

Field name

type

Description

sinkTopic

string

Name of the topic to which the request should be published.

sourceTopic

string

Name of the topic from which the response should be read. This must not be the same as the sinkTopic.

timeoutSec

integer (optional)

Maximum time, in seconds, to wait for a response to be received on the topic.

The individual operations are configured using these same field names. However, all of them are optional, as they inherit their values from the corresponding actor-level fields.

SO Actor

Overview of SO Actor

ONAP Policy Framework enables SO as one of the supported actors. SO uses a REST-based interface. However, as requests may not complete right away, a REST-based polling interface is used to check the status of the request. The requestId is extracted from the initial response and is appended to the pathGet configuration parameter to generate the URL used to poll for completion.

Each operation supported by the actor is associated with its own java class, which is responsible for populating the request structure appropriately and sending the request. Note: the request may be issued via POST, DELETE, etc., depending on the operation. The operation-specific classes are all derived from the SoOperation class, which is, itself, derived from HttpOperation. The following operations are currently supported:

  • VF Module Create

  • VF Module Delete

Request

A number of nested structures are populated within the request. Several of them are populated with data extracted from the A&AI Custom Query response that is retrieved using the Target resource ID specified in the ControlLoopOperationParams. The following table lists the contents of some of the fields that appear within these structures.

Field Name

Type

Description

top level:

operationType

string

Inserted by Policy. Name of the operation.

requestDetails:

requestParameters

Applicable to VF Module Create. Set by Policy from the requestParameters specified in the payload of the ControlLoopOperationParams. The value is treated as a JSON string and decoded into an SoRequestParameters object that is placed into this field.

configurationParameters

Applicable to VF Module Create. Set by Policy from the configurationParameters specified in the payload of the ControlLoopOperationParams. The value is treated as a JSON string and decoded into a List of Maps that is placed into this field.

modelInfo:

Set by Policy. Copied from the target specified in the ControlLoopOperationParams.

cloudConfiguration:

tenantId

string

The ID of the “default” Tenant selected from the A&AI Custom Query response.

lcpCloudRegionId

string

The ID of the “default” Cloud Region selected from the A&AI Custom Query response.

relatedInstanceList[0]:

Applicable to VF Module Create. The “default” Service Instance selected from the A&AI Custom Query response.

relatedInstanceList[1]:

Applicable to VF Module Create. The VNF selected from the A&AI Custom Query response.

Examples

Suppose the ControlLoopOperationParams were populated as follows:

{
    "actor": "SO",
    "operation": "Reroute",
    "target": {
        "modelInvariantId": "2246ebc9-9b9f-42d0-a5e4-0248324fb884",
        "modelName": "VlbCdsSb00..vdns..module-3",
        "modelVersion": "1",
        "modelCustomizationUuid": "3a74410a-6c74-4a32-94b2-71488be6da1a",
        "modelVersionId": "1f94cedb-f656-4ddb-9f55-60ba1fc7d4b1",
        "modelCustomizationId": "3a74410a-6c74-4a32-94b2-71488be6da1a",
        "modelUuid": "1f94cedb-f656-4ddb-9f55-60ba1fc7d4b1",
        "modelInvariantUuid": "2246ebc9-9b9f-42d0-a5e4-0248324fb884"
    },
    "context": {
        "cqdata": {
            "tenant": {
                "id": "41d6d38489bd40b09ea8a6b6b852dcbd"
            },
            "cloud-region": {
                "id": "RegionOne"
            },
            "service-instance": {
                "id": "c14e61b5-1ee6-4925-b4a9-b9c8dbfe3f34",
                "modelInvariantId": "6418bb39-61e1-45fc-a36b-3f211bb846c7",
                "modelName": "vLB_CDS_SB00_02",
                "modelVersion": "1.0",
                "modelVersionId": "d01d9dec-afb6-4a53-bd9e-2eb10ca07a51",
                "modelUuid": "d01d9dec-afb6-4a53-bd9e-2eb10ca07a51",
                "modelInvariantUuid": "6418bb39-61e1-45fc-a36b-3f211bb846c7"
            },
            "generic-vnf": [
                {
                    "vnfId": "6636c4d5-f608-4376-b6d8-7977e98cb16d",
                    "vf-modules": [
                        {
                            "modelInvariantId": "827356a9-cb60-4976-9713-c30b4f850b41",
                            "modelName": "vLB_CDS_SB00",
                            "modelVersion": "1.0",
                            "modelCustomizationUuid": "6478f94b-0b20-4b44-afc0-94e48070586a",
                            "modelVersionId": "ca3c4797-0cdd-4797-8bec-9a3ce78ac4da",
                            "modelCustomizationId": "6478f94b-0b20-4b44-afc0-94e48070586a",
                            "modelUuid": "ca3c4797-0cdd-4797-8bec-9a3ce78ac4da",
                            "modelInvariantUuid": "827356a9-cb60-4976-9713-c30b4f850b41"
                        }
                    ]
                }
            ]
        }
    },
    "payload": {
        "requestParameters": "{\"usePreload\": false}",
        "configurationParameters": "[{\"ip-addr\": \"$.vf-module-topology.vf-module-parameters.param[16].value\", \"oam-ip-addr\": \"$.vf-module-topology.vf-module-parameters.param[30].value\"}]"
    }
}

An example of a request constructed by the actor using the above parameters, sent to the SO REST server:

{
  "requestDetails": {
    "modelInfo": {
        "modelInvariantId": "2246ebc9-9b9f-42d0-a5e4-0248324fb884",
        "modelType": "vfModule",
        "modelName": "VlbCdsSb00..vdns..module-3",
        "modelVersion": "1",
        "modelCustomizationUuid": "3a74410a-6c74-4a32-94b2-71488be6da1a",
        "modelVersionId": "1f94cedb-f656-4ddb-9f55-60ba1fc7d4b1",
        "modelCustomizationId": "3a74410a-6c74-4a32-94b2-71488be6da1a",
        "modelUuid": "1f94cedb-f656-4ddb-9f55-60ba1fc7d4b1",
        "modelInvariantUuid": "2246ebc9-9b9f-42d0-a5e4-0248324fb884"
    },
    "cloudConfiguration": {
        "tenantId": "41d6d38489bd40b09ea8a6b6b852dcbd",
        "lcpCloudRegionId": "RegionOne"
    },
    "requestInfo": {
      "instanceName": "vfModuleName",
      "source": "POLICY",
      "suppressRollback": false,
      "requestorId": "policy"
    },
    "relatedInstanceList": [
      {
        "relatedInstance": {
            "instanceId": "c14e61b5-1ee6-4925-b4a9-b9c8dbfe3f34",
            "modelInfo": {
                "modelInvariantId": "6418bb39-61e1-45fc-a36b-3f211bb846c7",
                "modelType": "service",
                "modelName": "vLB_CDS_SB00_02",
                "modelVersion": "1.0",
                "modelVersionId": "d01d9dec-afb6-4a53-bd9e-2eb10ca07a51",
                "modelUuid": "d01d9dec-afb6-4a53-bd9e-2eb10ca07a51",
                "modelInvariantUuid": "6418bb39-61e1-45fc-a36b-3f211bb846c7"
            }
        }
      },
      {
        "relatedInstance": {
            "instanceId": "6636c4d5-f608-4376-b6d8-7977e98cb16d",
            "modelInfo": {
                "modelInvariantId": "827356a9-cb60-4976-9713-c30b4f850b41",
                "modelType": "vnf",
                "modelName": "vLB_CDS_SB00",
                "modelVersion": "1.0",
                "modelCustomizationUuid": "6478f94b-0b20-4b44-afc0-94e48070586a",
                "modelVersionId": "ca3c4797-0cdd-4797-8bec-9a3ce78ac4da",
                "modelCustomizationId": "6478f94b-0b20-4b44-afc0-94e48070586a",
                "modelUuid": "ca3c4797-0cdd-4797-8bec-9a3ce78ac4da",
                "modelInvariantUuid": "827356a9-cb60-4976-9713-c30b4f850b41"
            }
        }
      }
    ],
    "requestParameters": {
        "usePreload": false
    },
    "configurationParameters": [
        {
            "ip-addr": "$.vf-module-topology.vf-module-parameters.param[16].value",
            "oam-ip-addr": "$.vf-module-topology.vf-module-parameters.param[30].value"
        }
    ]
  }
}

An example response received to the initial request, from the SO REST service:

{
    "requestReferences": {
        "requestId": "70f28791-c271-4cae-b090-0c2a359e26d9",
        "instanceId": "68804843-18e0-41a3-8838-a6d90a035e1a",
        "requestSelfLink": "http://so.onap:8080/orchestrationRequests/v7/b789e4e6-0b92-42c3-a723-1879af9c799d"
    }
}

An example URL used for the “get” (i.e., poll) request subsequently sent to SO:

GET https://so.onap:6969/orchestrationRequests/v5/70f28791-c271-4cae-b090-0c2a359e26d9

An example response received to the poll request, when SO has not completed the request:

{
    "request": {
        "requestId": "70f28791-c271-4cae-b090-0c2a359e26d9",
        "startTime": "Fri, 15 May 2020 12:12:50 GMT",
        "requestScope": "vfModule",
        "requestType": "scaleOut",
        "requestDetails": {
            "modelInfo": {
                "modelInvariantId": "2246ebc9-9b9f-42d0-a5e4-0248324fb884",
                "modelType": "vfModule",
                "modelName": "VlbCdsSb00..vdns..module-3",
                "modelVersion": "1",
                "modelCustomizationUuid": "3a74410a-6c74-4a32-94b2-71488be6da1a",
                "modelVersionId": "1f94cedb-f656-4ddb-9f55-60ba1fc7d4b1",
                "modelCustomizationId": "3a74410a-6c74-4a32-94b2-71488be6da1a",
                "modelUuid": "1f94cedb-f656-4ddb-9f55-60ba1fc7d4b1",
                "modelInvariantUuid": "2246ebc9-9b9f-42d0-a5e4-0248324fb884"
            },
            "requestInfo": {
                "source": "POLICY",
                "instanceName": "vfModuleName",
                "suppressRollback": false,
                "requestorId": "policy"
            },
            "relatedInstanceList": [
                {
                    "relatedInstance": {
                        "instanceId": "c14e61b5-1ee6-4925-b4a9-b9c8dbfe3f34",
                        "modelInfo": {
                            "modelInvariantId": "6418bb39-61e1-45fc-a36b-3f211bb846c7",
                            "modelType": "service",
                            "modelName": "vLB_CDS_SB00_02",
                            "modelVersion": "1.0",
                            "modelVersionId": "d01d9dec-afb6-4a53-bd9e-2eb10ca07a51",
                            "modelUuid": "d01d9dec-afb6-4a53-bd9e-2eb10ca07a51",
                            "modelInvariantUuid": "6418bb39-61e1-45fc-a36b-3f211bb846c7"
                        }
                    }
                },
                {
                    "relatedInstance": {
                        "instanceId": "6636c4d5-f608-4376-b6d8-7977e98cb16d",
                        "modelInfo": {
                            "modelInvariantId": "827356a9-cb60-4976-9713-c30b4f850b41",
                            "modelType": "vnf",
                            "modelName": "vLB_CDS_SB00",
                            "modelVersion": "1.0",
                            "modelCustomizationUuid": "6478f94b-0b20-4b44-afc0-94e48070586a",
                            "modelVersionId": "ca3c4797-0cdd-4797-8bec-9a3ce78ac4da",
                            "modelCustomizationId": "6478f94b-0b20-4b44-afc0-94e48070586a",
                            "modelUuid": "ca3c4797-0cdd-4797-8bec-9a3ce78ac4da",
                            "modelInvariantUuid": "827356a9-cb60-4976-9713-c30b4f850b41"
                        }
                    }
                }
            ],
            "cloudConfiguration": {
                "tenantId": "41d6d38489bd40b09ea8a6b6b852dcbd",
                "tenantName": "Integration-SB-00",
                "cloudOwner": "CloudOwner",
                "lcpCloudRegionId": "RegionOne"
            },
            "requestParameters": {
                "usePreload": false
            },
            "configurationParameters": [
                {
                    "ip-addr": "$.vf-module-topology.vf-module-parameters.param[16].value",
                    "oam-ip-addr": "$.vf-module-topology.vf-module-parameters.param[30].value"
                }
            ]
        },
        "instanceReferences": {
            "serviceInstanceId": "c14e61b5-1ee6-4925-b4a9-b9c8dbfe3f34",
            "vnfInstanceId": "6636c4d5-f608-4376-b6d8-7977e98cb16d",
            "vfModuleInstanceId": "68804843-18e0-41a3-8838-a6d90a035e1a",
            "vfModuleInstanceName": "vfModuleName"
        },
        "requestStatus": {
            "requestState": "IN_PROGRESS",
            "statusMessage": "FLOW STATUS: Execution of ActivateVfModuleBB has completed successfully, next invoking ConfigurationScaleOutBB (Execution Path progress: BBs completed = 4; BBs remaining = 2). TASK INFORMATION: Last task executed: Call SDNC RESOURCE STATUS: The vf module was found to already exist, thus no new vf module was created in the cloud via this request",
            "percentProgress": 68,
            "timestamp": "Fri, 15 May 2020 12:13:41 GMT"
        }
    }
}

An example response received to the poll request, when SO has completed the request:

{
    "request": {
        "requestId": "70f28791-c271-4cae-b090-0c2a359e26d9",
        "startTime": "Fri, 15 May 2020 12:12:50 GMT",
        "finishTime": "Fri, 15 May 2020 12:14:21 GMT",
        "requestScope": "vfModule",
        "requestType": "scaleOut",
        "requestDetails": {
            "modelInfo": {
                "modelInvariantId": "2246ebc9-9b9f-42d0-a5e4-0248324fb884",
                "modelType": "vfModule",
                "modelName": "VlbCdsSb00..vdns..module-3",
                "modelVersion": "1",
                "modelCustomizationUuid": "3a74410a-6c74-4a32-94b2-71488be6da1a",
                "modelVersionId": "1f94cedb-f656-4ddb-9f55-60ba1fc7d4b1",
                "modelCustomizationId": "3a74410a-6c74-4a32-94b2-71488be6da1a",
                "modelUuid": "1f94cedb-f656-4ddb-9f55-60ba1fc7d4b1",
                "modelInvariantUuid": "2246ebc9-9b9f-42d0-a5e4-0248324fb884"
            },
            "requestInfo": {
                "source": "POLICY",
                "instanceName": "vfModuleName",
                "suppressRollback": false,
                "requestorId": "policy"
            },
            "relatedInstanceList": [
                {
                    "relatedInstance": {
                        "instanceId": "c14e61b5-1ee6-4925-b4a9-b9c8dbfe3f34",
                        "modelInfo": {
                            "modelInvariantId": "6418bb39-61e1-45fc-a36b-3f211bb846c7",
                            "modelType": "service",
                            "modelName": "vLB_CDS_SB00_02",
                            "modelVersion": "1.0",
                            "modelVersionId": "d01d9dec-afb6-4a53-bd9e-2eb10ca07a51",
                            "modelUuid": "d01d9dec-afb6-4a53-bd9e-2eb10ca07a51",
                            "modelInvariantUuid": "6418bb39-61e1-45fc-a36b-3f211bb846c7"
                        }
                    }
                },
                {
                    "relatedInstance": {
                        "instanceId": "6636c4d5-f608-4376-b6d8-7977e98cb16d",
                        "modelInfo": {
                            "modelInvariantId": "827356a9-cb60-4976-9713-c30b4f850b41",
                            "modelType": "vnf",
                            "modelName": "vLB_CDS_SB00",
                            "modelVersion": "1.0",
                            "modelCustomizationUuid": "6478f94b-0b20-4b44-afc0-94e48070586a",
                            "modelVersionId": "ca3c4797-0cdd-4797-8bec-9a3ce78ac4da",
                            "modelCustomizationId": "6478f94b-0b20-4b44-afc0-94e48070586a",
                            "modelUuid": "ca3c4797-0cdd-4797-8bec-9a3ce78ac4da",
                            "modelInvariantUuid": "827356a9-cb60-4976-9713-c30b4f850b41"
                        }
                    }
                }
            ],
            "cloudConfiguration": {
                "tenantId": "41d6d38489bd40b09ea8a6b6b852dcbd",
                "tenantName": "Integration-SB-00",
                "cloudOwner": "CloudOwner",
                "lcpCloudRegionId": "RegionOne"
            },
            "requestParameters": {
                "usePreload": false
            },
            "configurationParameters": [
                {
                    "ip-addr": "$.vf-module-topology.vf-module-parameters.param[16].value",
                    "oam-ip-addr": "$.vf-module-topology.vf-module-parameters.param[30].value"
                }
            ]
        },
        "instanceReferences": {
            "serviceInstanceId": "c14e61b5-1ee6-4925-b4a9-b9c8dbfe3f34",
            "vnfInstanceId": "6636c4d5-f608-4376-b6d8-7977e98cb16d",
            "vfModuleInstanceId": "68804843-18e0-41a3-8838-a6d90a035e1a",
            "vfModuleInstanceName": "vfModuleName"
        },
        "requestStatus": {
            "requestState": "COMPLETE",
            "statusMessage": "STATUS: ALaCarte-VfModule-scaleOut request was executed correctly. FLOW STATUS: Successfully completed all Building Blocks RESOURCE STATUS: The vf module was found to already exist, thus no new vf module was created in the cloud via this request",
            "percentProgress": 100,
            "timestamp": "Fri, 15 May 2020 12:14:21 GMT"
        }
    }
}
Configuration of the SO Actor

The following table specifies the fields that should be provided to configure the SO actor.

Field name

type

Description

clientName

string

Name of the HTTP client to use to send the request to the SO REST server.

timeoutSec

integer (optional)

Maximum time, in seconds, to wait for a response to be received from the REST server. Defaults to 90s.

path

string

URI appended to the URL. This field only applies to individual operations; it does not apply at the actor level. Note: the path should not include a leading or trailing slash.

maxGets

integer (optional)

Maximum number of get/poll requests to make to determine the final outcome of the request. Defaults to 20.

waitSecGet

integer (optional)

Time, in seconds, to wait between issuing “get” requests. Defaults to 20s.

pathGet

string (optional)

Path to use when polling (i.e., issuing “get” requests). Note: this should include a trailing slash, but no leading slash.

The individual operations are configured using these same field names. However, all of them, except the path, are optional, as they inherit their values from the corresponding actor-level fields.
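For illustration, a sketch of an SO actor configuration follows (JSON rendering assumed). The pathGet value matches the poll URL shown earlier, including the trailing slash required by the table above; the per-operation path values are placeholders, as the actual paths depend on the SO API version in use.

{
  "clientName": "SO",
  "timeoutSec": 90,
  "maxGets": 20,
  "waitSecGet": 20,
  "pathGet": "orchestrationRequests/v5/",
  "operations": {
    "VF Module Create": {
      "path": "{vf-module-create-path}"
    },
    "VF Module Delete": {
      "path": "{vf-module-delete-path}"
    }
  }
}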

VFC Actor

Overview of VFC Actor

ONAP Policy Framework enables VFC as one of the supported actors.

There has not been any support given to the Policy Framework project for the VFC Actor in several releases. Thus, the code and information provided are to the best of the team’s knowledge. If there are any questions or problems, please consult the VFC Project for guidance.

VFC uses a REST-based interface. However, as requests may not complete right away, a REST-based polling interface is used to check the status of the request. The jobId is extracted from each response and is appended to the pathGet configuration parameter to generate the URL used to poll for completion.

Each operation supported by the actor is associated with its own java class, which is responsible for populating the request structure appropriately and sending the request. The operation-specific classes are all derived from the VfcOperation class, which is, itself, derived from HttpOperation. The following operations are currently supported:

  • Restart

Request

A number of nested structures are populated within the request. Several of them are populated from items found within the A&AI “enrichment” data provided by DCAE with the ONSET event. The following table lists the contents of some of the fields that appear within these structures.

Field Name

Type

Description

top level:

requestId

string

Inserted by Policy. Maps to the UUID sent by DCAE, i.e., the ID used throughout the closed loop lifecycle to identify a request.

nsInstanceId

string

Set by Policy, using the “service-instance.service-instance-id” property found within the enrichment data.

healVnfData:

cause

string

Set by Policy to the name of the operation.

vnfInstanceId

string

Set by Policy, using the “generic-vnf.vnf-id” property found within the enrichment data.

additionalParams:

action

Set by Policy to the name of the operation.

actionvminfo:

vmid

string

Set by Policy, using the “vserver.vserver-id” property found within the enrichment data.

vmname

string

Set by Policy, using the “vserver.vserver-name” property found within the enrichment data.

Examples

Suppose the ControlLoopOperationParams were populated as follows:

{
    TBD
}

An example of a request constructed by the actor using the above parameters, sent to the VFC REST server:

{
    TBD
}

An example response received to the initial request, from the VFC REST service:

{
    TBD
}

An example URL used for the “get” (i.e., poll) request subsequently sent to VFC:

TBD

An example response received to the poll request, when VFC has not completed the request:

{
    TBD
}

An example response received to the poll request, when VFC has completed the request:

{
    TBD
}
Configuration of the VFC Actor

The following table specifies the fields that should be provided to configure the VFC actor.

Field name

type

Description

clientName

string

Name of the HTTP client to use to send the request to the VFC REST server.

timeoutSec

integer (optional)

Maximum time, in seconds, to wait for a response to be received from the REST server. Defaults to 90s.

The individual operations are configured using these same field names. However, all of them are optional, as they inherit their values from the corresponding actor-level fields. The following additional fields are specified at the individual operation level.

Field name

type

Description

path

string

URI appended to the URL. Note: this should not include a leading or trailing slash.

maxGets

integer (optional)

Maximum number of get/poll requests to make to determine the final outcome of the request. Defaults to 0 (i.e., no polling).

waitSecGet

integer (optional)

Time, in seconds, to wait between issuing “get” requests. Defaults to 20s.

pathGet

string

Path to use when polling (i.e., issuing “get” requests). Note: this should include a trailing slash, but no leading slash.

Policy Drools PDP Engine

The Drools PDP, aka PDP-D, is the PDP in the Policy Framework that uses the Drools BRMS to enforce policies.

The PDP-D functionality has been partitioned into two functional areas:

  • PDP-D Engine.

  • PDP-D Applications.

PDP-D Engine

The PDP-D Engine is the infrastructure that policy applications use. It provides networking services, resource grouping, and diagnostics.

The PDP-D Engine supports the following Tosca Native Policy Types:

  • onap.policies.native.Drools

  • onap.policies.native.drools.Controller

These types are used to dynamically add and configure new application controllers.

The PDP-D Engine hosts applications by means of controllers. Controllers may support other Tosca Policy Types. The types supported by the Control Loop applications are:

  • onap.policies.controlloop.operational.common.Drools

  • onap.policies.controlloop.Operational

PDP-D Applications

A PDP-D application, i.e. a controller, contains references to the resources that the application needs. These include networked endpoint references and maven coordinates.

Control Loop applications are used in ONAP to enforce operational policies.

The following guides offer more information in these two functional areas.

PDP-D Engine

Overview

The PDP-D Core Engine provides infrastructure and services for Drools-based applications in the context of Policies and ONAP.

A PDP-D supports applications by means of controllers. A controller is a named grouping of resources. These typically include references to communication endpoints, maven artifact coordinates, and coders for message mapping.

Controllers use communication endpoints to interact with remote networked entities, typically using messaging (DMaaP or UEB) or HTTP.

PDP-D Engine capabilities can be extended via features. Integration with other Policy Framework components (API, PAP, and PDP-X) is through one of them (feature-lifecycle).

The PDP-D Engine infrastructure provides mechanisms for data migration, diagnostics, and application management.

Software

Source Code repositories

The PDP-D software is mainly located in the policy/drools repository with the communication endpoints software residing in the policy/common repository and Tosca policy models in the policy/models repository.

Docker Image

Check the drools-pdp released versions page for the latest versions. At the time of this writing, 1.6.3 is the latest version.

docker pull onap/policy-drools:1.6.3

A container instantiated from this image will run under the non-privileged policy account.

The PDP-D root directory is located at /opt/app/policy (or $POLICY_HOME), with the exception of $HOME/.m2, which contains the local maven repository. The PDP-D configuration resides in the following directories:

  • /opt/app/policy/config: ($POLICY_HOME/config or $POLICY_CONFIG) contains engine, controllers, and endpoint configuration.

  • /home/policy/.m2: ($HOME/.m2) maven repository configuration.

  • /opt/app/policy/etc/: ($POLICY_HOME/etc) miscellaneous configuration such as certificate stores.

The following command can be used to explore the directory layout.

docker run --rm -it nexus3.onap.org:10001/onap/policy-drools:1.6.3 bash

Communication Endpoints

The PDP-D supports the following networked infrastructures, also referred to as communication infrastructures in the source code:

  • DMaaP

  • UEB

  • NOOP

  • Http Servers

  • Http Clients

The source code is located at the policy-endpoints module in the policy/common repository.

These network resources are named and typically have global scope; they are therefore visible to the PDP-D engine (for administration purposes), application controllers, and features.

DMaaP, UEB, and NOOP are message-based communication infrastructures, hence the terminology of sources and sinks, denoting directionality into or out of the controller, respectively.

An endpoint can be either managed or unmanaged. The default is managed, meaning that the endpoint is globally accessible by name and administered by the PDP-D engine. Unmanaged topics are used when neither global visibility nor centralized PDP-D management is desired; the software that uses unmanaged topics is responsible for their lifecycle management.

DMaaP Endpoints

These are messaging endpoints that use DMaaP as the communication infrastructure.

Typically, a managed endpoint configuration is stored in the <topic-name>-topic.properties files.

For example, the DCAE_TOPIC-topic.properties is defined as

dmaap.source.topics=DCAE_TOPIC

dmaap.source.topics.DCAE_TOPIC.effectiveTopic=${env:DCAE_TOPIC}
dmaap.source.topics.DCAE_TOPIC.servers=${env:DMAAP_SERVERS}
dmaap.source.topics.DCAE_TOPIC.consumerGroup=${env:DCAE_CONSUMER_GROUP}
dmaap.source.topics.DCAE_TOPIC.https=true

In this example, the generic name of the source endpoint is DCAE_TOPIC. This is known as the canonical name. The actual topic used in communication exchanges in a physical lab is contained in the $DCAE_TOPIC environment variable. This environment variable is usually set up by devops on a per-installation basis to meet the needs of each lab.
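
For instance, a lab installation might export values such as the following before starting the PDP-D (the topic and server names shown are illustrative only, not mandated by the PDP-D):

export DCAE_TOPIC=unauthenticated.DCAE_CL_OUTPUT
export DMAAP_SERVERS=message-router
export DCAE_CONSUMER_GROUP=policy-group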

In the previous example, DCAE_TOPIC is a source-only topic.

Sink topics are specified similarly, but indicate that they are sink endpoints from the perspective of the controller. For example, the APPC-CL topic is configured as

dmaap.source.topics=APPC-CL
dmaap.sink.topics=APPC-CL

dmaap.source.topics.APPC-CL.servers=${env:DMAAP_SERVERS}
dmaap.source.topics.APPC-CL.https=true

dmaap.sink.topics.APPC-CL.servers=${env:DMAAP_SERVERS}
dmaap.sink.topics.APPC-CL.https=true

Although not shown in these examples, additional configuration options are available such as user name, password, security keys, consumer group and consumer instance.

UEB Endpoints

UEB endpoints are likewise messaging endpoints, similar to the DMaaP ones.

For example, the DCAE_TOPIC-topic.properties file can be converted to a UEB one by replacing the dmaap prefix with ueb:

ueb.source.topics=DCAE_TOPIC

ueb.source.topics.DCAE_TOPIC.effectiveTopic=${env:DCAE_TOPIC}
ueb.source.topics.DCAE_TOPIC.servers=${env:DMAAP_SERVERS}
ueb.source.topics.DCAE_TOPIC.consumerGroup=${env:DCAE_CONSUMER_GROUP}
ueb.source.topics.DCAE_TOPIC.https=true
NOOP Endpoints

NOOP (no-operation) endpoints are messaging endpoints that don’t have any network attachments. They are used for testing convenience. To convert the DCAE_TOPIC-topic.properties to a NOOP endpoint, simply replace the dmaap prefix with noop:

noop.source.topics=DCAE_TOPIC
noop.source.topics.DCAE_TOPIC.effectiveTopic=${env:DCAE_TOPIC}
HTTP Clients

HTTP client configurations are typically stored in files following the <name>-http-client.properties naming convention. One such example is the AAI HTTP client:

http.client.services=AAI

http.client.services.AAI.managed=true
http.client.services.AAI.https=true
http.client.services.AAI.host=${envd:AAI_HOST}
http.client.services.AAI.port=${envd:AAI_PORT}
http.client.services.AAI.userName=${envd:AAI_USERNAME}
http.client.services.AAI.password=${envd:AAI_PASSWORD}
http.client.services.AAI.contextUriPath=${envd:AAI_CONTEXT_URI}
HTTP Servers

HTTP server configurations are stored in files that follow a similar naming convention: <name>-http-server.properties. The following is an example of a server named CONFIG, getting most of its configuration from environment variables.

http.server.services=CONFIG

http.server.services.CONFIG.host=${envd:TELEMETRY_HOST}
http.server.services.CONFIG.port=7777
http.server.services.CONFIG.userName=${envd:TELEMETRY_USER}
http.server.services.CONFIG.password=${envd:TELEMETRY_PASSWORD}
http.server.services.CONFIG.restPackages=org.onap.policy.drools.server.restful
http.server.services.CONFIG.managed=false
http.server.services.CONFIG.swagger=true
http.server.services.CONFIG.https=true
http.server.services.CONFIG.aaf=${envd:AAF:false}

Endpoints configuration resides in the $POLICY_HOME/config (or $POLICY_CONFIG) directory in a container.

Controllers

Controllers are the means for the PDP-D to run applications. Controllers are defined in <name>-controller.properties files.

For example, see the usecases controller configuration.

This configuration file has two sections: a) application maven coordinates, and b) endpoint references and coders.

Maven Coordinates

The coordinates section (rules) points to the controller-usecases kjar artifact. It is the brain of the control loop application.

controller.name=usecases

rules.groupId=${project.groupId}
rules.artifactId=controller-usecases
rules.version=${project.version}
.....

This kjar contains the usecases DRL file (there may be more than one DRL file included).

...
rule "NEW.TOSCA.POLICY"
    when
        $policy : ToscaPolicy()
    then

    ...

    ControlLoopParams params = ControlLoopUtils.toControlLoopParams($policy);
    if (params != null) {
        insert(params);
    }
end
...

The DRL, in conjunction with the dependent java libraries in the kjar pom, realizes the application’s function. For instance, it realizes the vFirewall, vCPE, and vDNS use cases in ONAP.

..
<dependency>
    <groupId>org.onap.policy.models.policy-models-interactions.model-actors</groupId>
    <artifactId>actor.appclcm</artifactId>
    <version>${policy.models.version}</version>
    <scope>provided</scope>
</dependency>
...
Endpoints References and Coders

The usecases-controller.properties configuration also contains a mix of source (of incoming controller traffic) and sink (of outgoing controller traffic) configuration. This configuration also contains specific filtering and mapping rules for incoming and outgoing dmaap messages known as coders.

...
dmaap.source.topics=DCAE_TOPIC,APPC-CL,APPC-LCM-WRITE,SDNR-CL-RSP
dmaap.sink.topics=APPC-CL,APPC-LCM-READ,POLICY-CL-MGT,SDNR-CL,DCAE_CL_RSP


dmaap.source.topics.APPC-LCM-WRITE.events=org.onap.policy.appclcm.AppcLcmDmaapWrapper
dmaap.source.topics.APPC-LCM-WRITE.events.org.onap.policy.appclcm.AppcLcmDmaapWrapper.filter=[?($.type == 'response')]
dmaap.source.topics.APPC-LCM-WRITE.events.custom.gson=org.onap.policy.appclcm.util.Serialization,gson

dmaap.sink.topics.APPC-CL.events=org.onap.policy.appc.Request
dmaap.sink.topics.APPC-CL.events.custom.gson=org.onap.policy.appc.util.Serialization,gsonPretty
...

In this example, the coders specify that incoming messages over the DMaaP endpoint reference APPC-LCM-WRITE that have a field named type under the root JSON object, with value response, are allowed into the controller application. In this case, the incoming message is converted into an object (fact) of type org.onap.policy.appclcm.AppcLcmDmaapWrapper. The coder has a custom implementation attached, provided by the application class org.onap.policy.appclcm.util.Serialization. Note that the coder filter is expressed in JSONPath notation.
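
As an illustration, a simplified (and hypothetical) message such as the following would pass the [?($.type == 'response')] filter and be delivered to the controller as an AppcLcmDmaapWrapper fact:

{
    "version": "2.0",
    "type": "response",
    "correlation-id": "664be3d2-6c12-4f4b-a3e7-c349acced200",
    "body": {
        "output": {
            "status": { "code": 400, "message": "Restart Successful" }
        }
    }
}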

Note that not all communication endpoints need to be explicitly referenced within the controller configuration file; for example, HTTP clients do not. The reasons are historical: the PDP-D was initially intended to communicate only through messaging-based protocols such as UEB or DMaaP, in asynchronous unidirectional mode. The introduction of HTTP, with synchronous bi-directional communication with remote endpoints, made it more convenient for the application to manage each network exchange itself.

Controllers configuration resides in the $POLICY_HOME/config (or $POLICY_CONFIG) directory in a container.

Other Configuration Files

There are other types of configuration files that controllers can use, for example, .environment files, which provide a means to share data across applications. The controlloop.properties.environment file is one such example.

Tosca Policies

The PDP-D supports Tosca Policies through the lifecycle feature (feature-lifecycle). The PDP-D receives its policy set from the PAP. A policy conforms to its Policy Type specification. Policy Types and policies are created via the API component, while policy deployments are orchestrated by the PAP.

All communication between PAP and PDP-D is over the DMaaP POLICY-PDP-PAP topic.

Native Policy Types

The PDP-D Engine supports two (native) Tosca policy types by means of the lifecycle feature:

  • onap.policies.native.drools.Controller

  • onap.policies.native.drools.Artifact

These types can be used to dynamically deploy or undeploy application controllers, assign policy types, and upgrade or downgrade their attached maven artifact versions.

For instance, an example native controller policy is shown below.

{
    "tosca_definitions_version": "tosca_simple_yaml_1_0_0",
    "topology_template": {
        "policies": [
            {
                "example.controller": {
                    "type": "onap.policies.native.drools.Controller",
                    "type_version": "1.0.0",
                    "version": "1.0.0",
                    "name": "example.controller",
                    "metadata": {
                        "policy-id": "example.controller"
                    },
                    "properties": {
                        "controllerName": "lifecycle",
                        "sourceTopics": [
                            {
                                "topicName": "DCAE_TOPIC",
                                "events": [
                                    {
                                        "eventClass": "java.util.HashMap",
                                        "eventFilter": "[?($.closedLoopEventStatus == 'ONSET')]"
                                    },
                                    {
                                        "eventClass": "java.util.HashMap",
                                        "eventFilter": "[?($.closedLoopEventStatus == 'ABATED')]"
                                    }
                                ]
                            }
                        ],
                        "sinkTopics": [
                            {
                                "topicName": "APPC-CL",
                                "events": [
                                    {
                                        "eventClass": "java.util.HashMap",
                                        "eventFilter": "[?($.CommonHeader && $.Status)]"
                                    }
                                ]
                            }
                        ],
                        "customConfig": {
                            "field1" : "value1"
                        }
                    }
                }
            }
        ]
    }
}

The actual application coordinates are provided with a policy of type onap.policies.native.drools.Artifact; see the example native artifact below.

{
    "tosca_definitions_version": "tosca_simple_yaml_1_0_0",
    "topology_template": {
        "policies": [
            {
                "example.artifact": {
                    "type": "onap.policies.native.drools.Artifact",
                    "type_version": "1.0.0",
                    "version": "1.0.0",
                    "name": "example.artifact",
                    "metadata": {
                        "policy-id": "example.artifact"
                    },
                    "properties": {
                        "rulesArtifact": {
                            "groupId": "org.onap.policy.drools.test",
                            "artifactId": "lifecycle",
                            "version": "1.0.0"
                        },
                        "controller": {
                            "name": "lifecycle"
                        }
                    }
                }
            }
        ]
    }
}
Operational Policy Types

The PDP-D also recognizes Tosca Operational Policies, although it needs an application controller that understands them to execute them. These are:

  • onap.policies.controlloop.operational.common.Drools

  • onap.policies.controlloop.Operational

At least one application controller that supports these capabilities must be installed in order to honor these operational policy types. One such controller is the usecases controller residing in the policy/drools-applications repository.

Controller Policy Type Support

Note that a controller may support other policy types. A controller may declare them explicitly in a native onap.policies.native.drools.Controller policy.

"customConfig": {
    "controller.policy.types" : "policy.type.A"
}

Alternatively, the controller application can declare its supported policy types in the kjar. For example, the usecases controller packages this information in the kmodule.xml. One advantage of this approach is that the PDP-D would only commit to executing policies against these policy types if a supporting controller is up and running.

<kmodule xmlns="http://jboss.org/kie/6.0.0/kmodule">
    <kbase name="onap.policies.controlloop.operational.common.Drools" default="false" equalsBehavior="equality"/>
    <kbase name="onap.policies.controlloop.Operational" equalsBehavior="equality"
           packages="org.onap.policy.controlloop" includes="onap.policies.controlloop.operational.common.Drools">
        <ksession name="usecases"/>
    </kbase>
</kmodule>

Software Architecture

The PDP-D is divided into two layers:

Core Layer

The core layer directly interfaces with the drools libraries through two main abstractions:

Policy Container and Sessions

The PolicyContainer abstracts the drools KieContainer, while a PolicySession abstracts a drools KieSession. PDP-D uses stateful sessions in active mode (fireUntilHalt) (please visit the drools website for additional documentation).

Management Layer

The management layer manages the PDP-D and builds on top of the core capabilities.

PolicyEngine

The PDP-D PolicyEngine is the top-level abstraction for the PDP-D and all the resources it holds. A reader exploring the source code can start at this component and work downwards. Note that the PolicyEngine abstraction should not be confused with the software in the policy/engine repository; there is no relationship whatsoever other than in the naming.

The PolicyEngine represents the PDP-D, holds all PDP-D resources, and orchestrates activities among those.

The PolicyEngine manages applications via the PolicyController abstractions in the base code. The relationship between the PolicyEngine and PolicyController is one to many.

The PolicyEngine holds other global resources such as a thread pool, policies validator, telemetry server, and unmanaged topics for administration purposes.

The PolicyEngine has interception points that allow features to observe and alter the default PolicyEngine behavior.

The PolicyEngine implements the Startable and Lockable interfaces. Since the PolicyEngine is the top-level entity, these operations cascade to the resources it holds, affecting controllers and endpoints. These capabilities are intended to be used for extensions, for example active/standby multi-node capabilities. This programmability is exposed via the telemetry API and feature hooks.
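
For orientation, the indicative shape of these two contracts is sketched below. The method lists are assumptions drawn from the behavior described here; consult the policy/drools-pdp source for the authoritative definitions.

// Indicative shapes only (assumed); not the authoritative definitions.
interface Startable {
    boolean start();     // bring the entity and, by cascade, its resources up
    boolean stop();      // bring it down, allowing a later restart
    void shutdown();     // tear it down permanently
    boolean isAlive();   // true while started
}

interface Lockable {
    boolean lock();      // stop processing transactions, cascading downwards
    boolean unlock();    // resume processing transactions
    boolean isLocked();  // true while locked
}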

Configuration

PolicyEngine-related configuration is located in the engine.properties and engine-system.properties files.

The engine configuration files reside in the $POLICY_CONFIG directory.

PolicyController

A PolicyController represents an application. Each PolicyController has an instance of a DroolsController. The PolicyController provides the means to group application specific resources into a single unit. Such resources include the application’s maven coordinates, endpoint references, and coders.

A PolicyController uses a DroolsController to interface with the core layer (PolicyContainer and PolicySession).

The relationship between the PolicyController and the DroolsController is one-to-one. The DroolsController currently supports two implementations: the MavenDroolsController and the NullDroolsController. The DroolsController’s polymorphic behavior depends on whether a maven artifact is attached to the controller or not.

Configuration

The controllers configuration resides in the $POLICY_CONFIG directory.

Programmability

PDP-D is programmable through:

  • Features and Event Listeners.

  • Maven-Drools applications.

Using Features and Listeners

Features hook into the interception points provided by the PDP-D main entities.

Endpoint Listeners can be used in conjunction with features for additional capabilities.

Using Maven-Drools applications

Maven-based drools applications can run any arbitrary functionality structured with rules and java logic.

Telemetry Extensions

It is recommended that features (extensions) offer a diagnostics REST API that integrates with the telemetry API. This is done by placing JAX-RS files under the package org.onap.policy.drools.server.restful. The root context path for all the telemetry services is /policy/pdp/engine.
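
The sketch below shows what such an extension might look like. The class name, sub-path, and payload are hypothetical; only the package name and root context path come from the text above.

package org.onap.policy.drools.server.restful;

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;

/**
 * Hypothetical diagnostics endpoint for a feature. Because it lives in the
 * org.onap.policy.drools.server.restful package, the telemetry server can
 * discover it and serve it under the /policy/pdp/engine root context path.
 */
@Path("engine/tools/myfeature")   // hypothetical sub-path
@Produces(MediaType.APPLICATION_JSON)
public class RestMyFeatureTelemetry {

    /** Reports a trivial liveness indication for the feature. */
    @GET
    @Path("status")
    public Response status() {
        return Response.ok("{\"alive\":true}").build();
    }
}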

Features

Features are an extension mechanism for PDP-D functionality. Features can be toggled on and off. A feature is composed of:

  • Java libraries.

  • Scripts and configuration files.

Java Extensions

Additional functionality can be provided in the form of java libraries that hook into the PolicyEngine, PolicyController, DroolsController, and PolicySession interception points to observe or alter the PDP-D logic.

See the Feature APIs available in the management and core layers.

The naming convention for these extension modules is api-<name> for interfaces and feature-<name> for the actual java extensions.

Configuration Items

A feature may also carry installation items such as scripts, SQL, maven artifacts, and configuration files.

The reader can refer to the policy/drools-pdp and policy/drools-applications repositories for miscellaneous feature implementations.

Layout

A feature is packaged in a feature-<name>.zip and has this internal layout:

# #######################################################################################
# Features Directory Layout:
#
# $POLICY_HOME/
#   L─ features/
#        L─ <feature-name>*/
#            L─ [config]/
#            |   L─ <config-file>+
#            L─ [bin]/
#            |   L─ <bin-file>+
#            L─ lib/
#            |   L─ [dependencies]/
#            |   |   L─ <dependent-jar>+
#            │   L─ feature/
#            │       L─ <feature-jar>
#            L─ [db]/
#            │   L─ <db-name>/+
#            │       L─ sql/
#            │           L─ <sql-scripts>*
#            L─ [artifacts]/
#                L─ <artifact>+
#            L─ [install]
#                L─ [enable]
#                L─ [disable]
#                L─ [other-directories-or-files]
#
# notes:  [] = optional , * = 0 or more , + = 1 or more
#   <feature-name> directory without "feature-" prefix.
#   [config]       feature configuration directory that contains all configuration
#                  needed for this feature
#   [config]/<config-file>  preferably named with "feature-<feature-name>" prefix to
#                  precisely match it against the exact features, source code, and
#                  associated wiki page for configuration details.
#   [bin]       feature bin directory that contains helper scripts for this feature
#   [bin]/<executable-file>  preferably named with "feature-<feature-name>" prefix.
#   lib            jar libraries needed by this feature
#   lib/[dependencies]  3rd party jar dependencies not provided by base installation
#                  of pdp-d that are necessary for <feature-name> to operate
#                  correctly.
#   lib/feature    the single feature jar that implements the feature.
#   [db]           database directory, if the feature contains sql.
#   [db]/<db-name> database to which underlying sql scripts should be applied.
#                  ideally, <db-name> = <feature-name> so it is easy to associate
#                  the db data with a feature itself.   In addition, since a feature is
#                  a somewhat independent isolated unit of functionality, the <db-name>
#                  database ideally isolates all its data.
#   [db]/<db-name>/sql  directory with all the sql scripts.
#   [db]/<db-name>/sql/<sql-scripts>  for this feature, sql
#                  upgrade scripts should be suffixed with ".upgrade.sql"
#                  and downgrade scripts should be suffixed with ".downgrade.sql"
#   [artifacts]    maven artifacts to be deployed in a maven repository.
#   [artifacts]/<artifact>  maven artifact with identifiable maven coordinates embedded
#                  in the artifact.
#   [install]      custom installation directory where custom enable or disable scripts
#                  and other free-form data is included, to be used by the enable
#                  and disable scripts.
#   [install]/[enable] enable script executed when the enable operation is invoked in
#                  the feature.
#   [install]/[disable] disable script executed when the disable operation is invoked in
#                  the feature.
#   [install]/[other-directories-or-files] other executables, or data that can be used
#                  by the feature for any of its operations.   The content is determined
#                  by the feature designer.
# ########################################################################################

The features command-line tool is used for feature administration:

Usage:  features status
            Get enabled/disabled status on all features
        features enable <feature> ...
            Enable the specified feature
        features disable <feature> ...
            Disable the specified feature
        features install [ <feature> | <file-name> ] ...
            Install the specified feature
        features uninstall <feature> ...
            Uninstall the specified feature
Features available in the Docker image

The only enabled feature in the onap/policy-drools image is:

  • lifecycle: enables the lifecycle capability to integrate with the Policy Framework components.

The following features are included in the image but disabled:

  • distributed locking: distributed resource locking.

  • healthcheck: basic PDP-D Engine healthcheck.

Healthcheck

The Healthcheck feature provides reports used to verify the health of PolicyEngine.manager in addition to the construction, operation, and deconstruction of HTTP server/client objects.

When enabled, the feature takes as input a properties file named feature-healthcheck.properties. This file should contain configuration properties necessary for the construction of HTTP client and server objects.
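
A hedged sketch of what this file might contain follows, reusing the http.server/http.client property conventions shown earlier. The service names, ports, and packages are illustrative assumptions, not the packaged defaults.

# Illustrative sketch only; see the packaged feature-healthcheck.properties
# for the authoritative contents.
http.server.services=HEALTHCHECK
http.server.services.HEALTHCHECK.host=0.0.0.0
http.server.services.HEALTHCHECK.port=6969
http.server.services.HEALTHCHECK.restPackages=org.onap.policy.drools.healthcheck

http.client.services=ENGINE
http.client.services.ENGINE.host=localhost
http.client.services.ENGINE.port=9696
http.client.services.ENGINE.contextUriPath=healthcheck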

Upon initialization, the feature first constructs HTTP server and client objects using the properties from its properties file. A healthCheck operation is then triggered. The logic of the healthCheck verifies that PolicyEngine.manager is alive, and iteratively tests each HTTP server object by sending HTTP GET requests using its respective client object. If a server returns a “200 OK” message, it is marked as “healthy” in its individual report. Any other return code results in an “unhealthy” report.

After the testing of the server objects has completed, the feature returns a single consolidated report.

Lifecycle

The “lifecycle” feature enables a PDP-D to work with the architectural framework introduced in the Dublin release.

The lifecycle feature maintains three states: TERMINATED, PASSIVE, and ACTIVE. The PAP interacts with the lifecycle feature to put a PDP-D in PASSIVE or ACTIVE states. The PASSIVE state allows for Tosca Operational policies to be deployed. Policy execution is enabled when the PDP-D transitions to the ACTIVE state.

This feature can coexist side by side with the legacy mode of operation that pre-dates the Dublin release.

Distributed Locking

The Distributed Locking Feature provides locking of resources across a pool of PDP-D hosts. The list of locks is maintained in a database, where each record includes a resource identifier, an owner identifier, and an expiration time. Typically, a drools application will unlock the resource when its operation completes. However, if it fails to do so, the resource will be automatically released when the lock expires, thus preventing a resource from becoming permanently locked.
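
The sketch below illustrates how a drools application might use such a lock. The createLock signature, class names, and imports are assumptions about the engine's lock API; verify them against the policy/drools-pdp source before use.

import org.onap.policy.drools.core.lock.Lock;
import org.onap.policy.drools.core.lock.LockCallback;
import org.onap.policy.drools.system.PolicyEngineConstants;

public class LockSketch {
    public void lockAndWork(String resourceId) {
        // Ask the engine for a lock on the resource, held for at most 60s.
        // If the application dies before free() is called, the lock simply
        // expires, as described above.
        Lock lock = PolicyEngineConstants.getManager().createLock(
                resourceId,   // resource identifier, e.g., a VNF id
                "owner-key",  // owner identifier (hypothetical value)
                60,           // expiration (hold) time, in seconds
                new LockCallback() {
                    @Override
                    public void lockAvailable(Lock l) {
                        // granted: perform the operation, then release it
                        l.free();
                    }

                    @Override
                    public void lockUnavailable(Lock l) {
                        // denied: retry later or fail the operation
                    }
                },
                false);       // do not wait if the lock is currently held
    }
}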

Other features

The following features have been contributed to the policy/drools-pdp repository but are either unnecessary or have not been thoroughly tested:

Feature: Active/Standby Management

When the Feature Session Persistence is enabled, there can only be one active/providing service Drools PDP due to the behavior of Drools persistence. The Active/Standby Management Feature controls the selection of the Drools PDP that is providing service. It utilizes its own database and the State Management Feature database in the election algorithm. All Drools PDP nodes periodically run the election algorithm and, since they all use the same data, all nodes come to the same conclusion with the “elected” node assuming an active/providingservice state. Thus, the algorithm is distributed and has no single point of failure - assuming the database is configured for high availability.

When the algorithm selects a Drools PDP to be active/providing service, the controllers and topic endpoints are unlocked and allowed to process transactions. When a Drools PDP transitions to a hotstandby or coldstandby state, the controllers and topic endpoints are locked, preventing the Drools PDP from handling transactions.

Enabling and Disabling Feature State Management

The Active/Standby Management Feature is enabled from the command line when logged in as policy after configuring the feature properties file (see Description Details section). From the command line:

  • > features status - Lists the status of features

  • > features enable active-standby-management - Enables the Active-Standby Management Feature

  • > features disable active-standby-management - Disables the Active-Standby Management Feature

The Drools PDP must be stopped prior to enabling/disabling features and then restarted after the features have been enabled/disabled.

Enabling Active/Standby Management Feature
 policy@hyperion-4:/opt/app/policy$ policy stop
 [drools-pdp-controllers]
  L []: Stopping Policy Management... Policy Management (pid=354) is stopping... Policy Management has stopped.
 policy@hyperion-4:/opt/app/policy$ features enable active-standby-management
 name                      version         status
 ----                      -------         ------
 controlloop-utils         1.1.0-SNAPSHOT  disabled
 healthcheck               1.1.0-SNAPSHOT  disabled
 test-transaction          1.1.0-SNAPSHOT  disabled
 eelf                      1.1.0-SNAPSHOT  disabled
 state-management          1.1.0-SNAPSHOT  disabled
 active-standby-management 1.1.0-SNAPSHOT  enabled
 session-persistence       1.1.0-SNAPSHOT  disabled
Description Details
Election Algorithm

The election algorithm selects the active/providingservice Drools PDP. The algorithm on each node reads the standbystatus from the StateManagementEntity table for all other nodes to determine if they are providingservice or in a hotstandby state and able to assume an active status. It uses the DroolsPdpEntity table to verify that other node election algorithms are currently functioning and when the other nodes were last designated as the active Drools PDP.

In general terms, the election algorithm periodically gathers the standbystatus and designation status for all the Drools PDPs. If the node which is currently designated as providingservice is “current” in updating its status, no action is required. If the designated node is either not current or has a standbystatus other than providingservice, it is time to choose another designated DroolsPDP. The algorithm will build a list of all DroolsPDPs that are current and have a standbystatus of hotstandby. It will then give preference to DroolsPDPs within the same site, choosing the DroolsPDP with the lowest lexicographic value of the droolsPdpId (resourceName). If the chosen DroolsPDP is itself, it will promote its standbystatus from hotstandby to providingservice. If the chosen DroolsPDP is other than itself, it will do nothing.

When the DroolsPDP promotes its standbystatus from hotstandby to providing service, a state change notification will occur and the Standby State Change Handler will take appropriate action.

Standby State Change Handler

The Standby State Change Handler (PMStandbyStateChangeHandler class) extends the IntegrityMonitor StateChangeNotifier class, which implements the Observer interface. When the DroolsPDP is constructed, an instance of the handler is constructed and registered with StateManagement. Whenever StateManagement implements a state transition, it calls the handleStateChange() method of the handler. If the StandbyStatus transitions to hot or cold standby, the handler makes a call into the lower-level management layer to lock the application controllers and topic endpoints, preventing them from handling transactions. If the StandbyStatus transitions to providingservice, the handler makes a call into the lower-level management layer to unlock the application controllers and topic endpoints, allowing them to handle transactions.

Database

The Active/Standby Feature creates a database named activestandbymanagement with a single table, droolspdpentity. The election handler uses that table to determine which DroolsPDP was/is designated as the active DroolsPDP and which DroolsPDP election handlers are healthy enough to periodically update their status.

The droolspdpentity table has the following columns:
  • pdpId - The unique identifier for the DroolsPDP. It is the same as the resourceName.

  • designated - Has a value of 1 if the DroolsPDP is designated as active/providingservice, and a value of 0 otherwise.

  • priority - Indicates the priority level of the DroolsPDP for the election handler. In general, this is ignored and all nodes have the same priority.

  • updatedDate - The timestamp of the most recent update of the record.

  • designatedDate - The timestamp indicating when the designated column was most recently set to a value of 1.

  • site - The name of the site in which the node is hosted.

Properties

The properties are found in the feature-active-standby-management.properties file. In general, the properties are adequately described in the properties file. Parameters which must be replaced prior to usage are indicated thus: ${{parameter to be replaced}}

feature-active-standby-management.properties
 # DB properties
 javax.persistence.jdbc.driver=org.mariadb.jdbc.Driver
 javax.persistence.jdbc.url=jdbc:mariadb://${{SQL_HOST}}:3306/activestandbymanagement
 javax.persistence.jdbc.user=${{SQL_USER}}
 javax.persistence.jdbc.password=${{SQL_PASSWORD}}

 # Must be unique across the system
 resource.name=pdp1
 # Name of the site in which this node is hosted
 site_name=site1

 # Needed by DroolsPdpsElectionHandler
 pdp.checkInterval=1500 # The interval in ms between updates of the updatedDate
 pdp.updateInterval=1000 # The interval in ms between executions of the election handler
 #pdp.timeout=3000
 # Need long timeout, because testTransaction is only run every 10 seconds.
 pdp.timeout=15000
 #how long do we wait for the pdp table to populate on initial startup
 pdp.initialWait=20000

End of Document

Feature: Controller Logging

The controller logging feature provides a way to log network topic messages to a separate controller log file for each controller. This allows a clear separation of network traffic between all of the controllers.

To enable the feature, type “features enable controller-logging”. The feature will then display as “enabled”.

_images/ctrlog_enablefeature.png

When the feature’s enable script is executed, it will search the $POLICY_HOME/config directory for any logback files containing the prefix “logback-include-”. These logger configuration files are typically provided with a feature that installs a controlloop (ex: controlloop-amsterdam and controlloop-casablanca features). Once these configuration files are found by the enable script, the logback.xml config file will be updated to include the configurations.

_images/ctrlog_logback.png
Controller Logger Configuration

The contents of a logback-include-*.xml file follow the same configuration syntax as the logback.xml file. They contain the configurations for the logger associated with the given controller.

Note

A controller logger MUST be configured with the same name as the controller (ex: a controller named “casablanca” will have a logger named “casablanca”).

_images/ctrlog_config.png
Viewing the Controller Logs

Once a logger for the controller is configured, start the drools-pdp and navigate to the $POLICY_LOGS directory. A new controller-specific network log will be added that contains all the network topic traffic of the controller.

_images/ctrlog_view.png

The original network log remains and will append traffic information from all topics regardless of which controller it is for. To abbreviate and customize messages for the network log, refer to the Feature MDC Filters documentation.

End of Document

Feature: EELF (Event and Error Logging Framework)

The EELF feature provides backwards compatibility with R0 logging functionality. It supports the use of EELF/Common Framework style logging at the same time as traditional logging.

See also

Additional information for EELF logging can be found at EELF wiki.

To utilize the EELF logging capabilities, first stop the policy engine and then enable the feature using the “features” command.

Enabling EELF Feature
 policy@hyperion-4:/opt/app/policy$ policy stop
 [drools-pdp-controllers]
  L []: Stopping Policy Management... Policy Management (pid=354) is stopping... Policy Management has stopped.
 policy@hyperion-4:/opt/app/policy$ features enable eelf
 name                      version         status
 ----                      -------         ------
 controlloop-utils         1.1.0-SNAPSHOT  disabled
 healthcheck               1.1.0-SNAPSHOT  disabled
 test-transaction          1.1.0-SNAPSHOT  disabled
 eelf                      1.1.0-SNAPSHOT  enabled
 state-management          1.1.0-SNAPSHOT  disabled
 active-standby-management 1.1.0-SNAPSHOT  disabled
 session-persistence       1.1.0-SNAPSHOT  disabled

The output of the enable command will indicate whether or not the feature was enabled successfully.

The policy engine can then be started as usual.

End of Document

Feature: MDC Filters

The MDC Filter Feature provides configurable properties for network topics to extract fields from JSON strings and place them in a mapped diagnostic context (MDC).

Before enabling the feature, the network log contains the entire content of each message received on a topic. Below is a sample message from the network log. Note that the topic used for this tutorial is DCAE-CL.

[2019-03-22T16:36:42.942+00:00|DMAAP-source-DCAE-CL][IN|DMAAP|DCAE-CL]
{"closedLoopControlName":"ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e","closedLoopAlarmStart":1463679805324,"closedLoopEventClient":"DCAE_INSTANCE_ID.dcae-tca","closedLoopEventStatus":"ONSET","requestID":"664be3d2-6c12-4f4b-a3e7-c349acced200","target_type":"VNF","target":"generic-vnf.vnf-id","AAI":{"vserver.is-closed-loop-disabled":"false","vserver.prov-status":"ACTIVE","generic-vnf.vnf-id":"vCPE_Infrastructure_vGMUX_demo_app"},"from":"DCAE","version":"1.0.2"}

The network log can become voluminous if the messages received on the various topics for the various controllers are large. With the MDC Filter Feature, users can define keywords in JSON messages to extract and structure according to a desired format. This is done through configuring the feature’s properties.

Configuring the MDC Filter Feature

To configure the feature, first enable it using the following command:

features enable mdc-filters
_images/mdc_enablefeature.png

Once the feature is enabled, there will be a new properties file in $POLICY_HOME/config called feature-mdc-filters.properties.

_images/mdc_properties.png

The properties file contains filters that extract key data from messages on the network topics; the extracted values are saved in an MDC, which can be referenced in logback.xml. The configuration format is as follows:

<protocol>.<type>.topics.<topic-name>.mdcFilters=<filters>

Where:
   <protocol> = ueb, dmaap, noop
   <type> = source, sink
   <topic-name> = Name of DMaaP or UEB topic
   <filters> = Comma separated list of key/json-path(s)

The filters consist of an MDC key used by logback.xml (see below) and the JSON path(s) to the desired data. The path always begins with ‘$’, which signifies the root of the JSON document. The underlying library, JsonPath, uses a query syntax for searching through a JSON file. The query syntax and some examples can be found at https://github.com/json-path/JsonPath. An example filter for the DCAE-CL topic is provided below:

dmaap.source.topics.DCAE-CL.mdcFilters=requestID=$.requestID

This filter specifies that, for the dmaap source topic DCAE-CL, each received message will be searched by following the path starting at the root ($) and looking for the field requestID. If the field is found, its value is placed in the MDC under the key “requestID”, as given by the left-hand side of the filter before the “=”.
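
For illustration only (this is not the feature's code), the snippet below shows how the underlying JsonPath library resolves such paths, including the deep-scan “..” notation described below:

import com.jayway.jsonpath.JsonPath;
import java.util.List;

public class JsonPathDemo {
    public static void main(String[] args) {
        String json = "{\"requestID\":\"664be3d2-6c12-4f4b-a3e7-c349acced200\","
                + "\"AAI\":{\"closedLoopControlName\":\"ControlLoop-vCPE\"}}";

        // "$" is the document root; this selects the top-level requestID field.
        String requestId = JsonPath.read(json, "$.requestID");

        // ".." is a deep scan: it finds the field anywhere in the document.
        List<String> names = JsonPath.read(json, "$..closedLoopControlName");

        System.out.println(requestId + " / " + names);
    }
}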

Configuring Multiple Filters and Paths

Multiple fields can be extracted from a given JSON document by specifying a comma-separated list of <mdcKey,jsonPath> pairs. Building on the previous example, another filter is added by appending a comma and the new pair:

dmaap.source.topics.DCAE-CL.mdcFilters=requestID=$.requestID,closedLoopName=$.closedLoopControlName

The feature will now search for both requestID and closedLoopControlName in a JSON message using the specified “$.” path notations, and put them in the MDC using the keys “requestID” and “closedLoopName”, respectively. To further refine the filter, if a topic receives different message structures (ex: a response message structure vs an error message structure), the “|” notation allows multiple paths to a key to be defined. The feature will search through each specified path until a match is found. An example can be found below:

dmaap.source.topics.DCAE-CL.mdcFilters=requestID=$.requestID,closedLoopName=$.closedLoopControlName|$.AAI.closedLoopControlName

Now, when the filter is searching for closedLoopControlName, it will check the first path “$.closedLoopControlName”; if the field is not present, it will try the second path “$.AAI.closedLoopControlName”. If the user is unsure of the path to a field, JsonPath supports a deep scan by using the “..” notation. This will search the entire JSON document for the field without specifying the path.

Accessing the MDC Values in logback.xml

Once the feature properties have been defined, logback.xml contains an “abstractNetworkPattern” property that will hold the desired message structure defined by the user. The user has the flexibility to define the message structure however they choose, but for this tutorial the following pattern is used:

<property name="abstractNetworkPattern" value="[%d{yyyy-MM-dd'T'HH:mm:ss.SSS+00:00, UTC}] [%X{networkEventType:-NULL}|%X{networkProtocol:-NULL}|%X{networkTopic:-NULL}|%X{requestID:-NULL}|%X{closedLoopName:-NULL}]%n" />

The “value” portion consists of two headers in bracket notation: the first header defines the timestamp, while the second references the keys from the MDC filters defined in the feature properties. The standard logback syntax is used; more information on the syntax can be found in the logback documentation. Note that some of the fields here were not defined in the feature properties file. The feature automatically puts the network infrastructure information in the keys that are prepended with “network”. The currently supported network infrastructure information is listed below.

Field               Values
networkEventType    IN, OUT
networkProtocol     DMAAP, UEB, NOOP
networkTopic        The name of the topic that received the message

To reference the keys from the feature properties, the syntax “%X{KEY_DEFINED_IN_PROPERTIES}” provides access to the value. An optional addition is to append “:-”, which specifies a default value to display in the log if the field was not found in the message received. For this tutorial, a default of “NULL” is displayed for any of the fields that were not found while filtering. The “|” has no special meaning and is just used as a field separator for readability; the user can decorate the log format to their desired visual appeal.

Network Log Structure After Feature Enabled

Once the feature and logback.xml are configured to the user’s desired settings, start the PDP-D by running “policy start”. Based on the configurations from the previous sections of this tutorial, the following log message is written to the network log when a message is received on the DCAE-CL topic:

[2019-03-22T16:38:23.884+00:00] [IN|DMAAP|DCAE-CL|664be3d2-6c12-4f4b-a3e7-c349acced200|ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e]

The message has now been filtered to display the network infrastructure information and the extracted data from the JSON message based on the feature properties. In order to view the entire message received from a topic, a complementary feature was developed to display the entire message on a per controller basis while preserving the compact network log. Refer to the Feature Controller Logging documentation for details.

End of Document

Feature: Pooling

The Pooling feature provides the ability to load-balance work across a “pool” of active-active Drools-PDP hosts. This particular implementation uses a DMaaP topic for communication between the hosts within the pool.

The pool is adjusted automatically, with no manual intervention, when:
  • a new host is brought online

  • a host goes offline, whether gracefully or due to a failure in the host or in the network

Assumptions and Limitations
  • Session persistence is not required

  • Data may be lost when processing is moved from one host to another

  • The entire pool may shut down if the inter-host DMaaP topic becomes inaccessible

_images/poolingDesign.png
Key Points
  • Requests are received on a common DMaaP topic
    • DMaaP distributes the requests randomly to the hosts

    • The request topic should have at least as many partitions as there are hosts

  • Uses a single, internal DMaaP topic for all inter-host communication

  • Allocates buckets to each host
    • Requests are assigned to buckets based on their respective “request IDs”

  • No session persistence

  • No objects copied between hosts

  • Requires feature(s): distributed-locking

  • Precludes feature(s): session-persistence, active-standby, state-management

Example Scenario
  1. Incoming DMaaP message is received on a topic — all hosts are listening, but only one random host receives the message

  2. Decode message to determine “request ID” key (message-specific operation)

  3. Hash request ID to determine the bucket number

  4. Look up host associated with hash bucket (most likely remote)

  5. Publish “forward” message to internal DMaaP topic, including remote host, bucket number, DMaaP topic information, and message body

  6. Remote host verifies ownership of bucket, and routes the DMaaP message to its own rule engine for processing

The figure below shows several different hosts in a pool. Each host has a copy of the bucket assignments, which specifies which buckets are assigned to which hosts. Incoming requests are mapped to a bucket, and a bucket is mapped to a host, to which the request is routed. The host table includes an entry for each active host in the pool, to which one or more buckets are mapped.
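
A minimal Java sketch of this request-to-bucket-to-host mapping follows. The hash function, bucket count, and host table are illustrative assumptions, not the feature's actual implementation.

public class BucketDemo {
    /** Map a request ID to a bucket; floorMod keeps the result non-negative. */
    static int bucketOf(String requestId, int numBuckets) {
        return Math.floorMod(requestId.hashCode(), numBuckets);
    }

    public static void main(String[] args) {
        // Hypothetical assignment of 8 buckets across 4 hosts.
        String[] bucketToHost = {
            "host-1", "host-2", "host-3", "host-4",
            "host-1", "host-2", "host-3", "host-4"
        };
        String requestId = "664be3d2-6c12-4f4b-a3e7-c349acced200";
        int bucket = bucketOf(requestId, bucketToHost.length);
        System.out.println("route to " + bucketToHost[bucket]);
    }
}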

_images/poolingPdps.png
Bucket Reassignment
  • When a host goes up or down, buckets are rebalanced

  • Attempts to maintain an even distribution

  • Leaves buckets with their current owner, where possible

  • Takes a few buckets from each host to assign to new hosts

For example, in the diagram below, the left side shows how 32 buckets might be assigned among four different hosts. When the first host fails, the buckets from host 1 would be reassigned among the remaining hosts, similar to what is shown on the right side of the diagram. Any requests that were being processed by host 1 will be lost and must be restarted. However, the buckets that had already been assigned to the remaining hosts are unchanged, thus requests associated with those buckets are not impacted by the loss of host 1.

_images/poolingBuckets.png
Usage

For pooling to be enabled, the distributed-locking feature must also be enabled.

Enable Feature Pooling
 policy stop

 features enable distributed-locking
 features enable pooling-dmaap

The configuration is located at:

  • $POLICY_HOME/config/feature-pooling-dmaap.properties

Start the PDP-D using pooling
 policy start
Disable the pooling feature
 policy stop
 features disable pooling-dmaap
 policy start

End of Document

Feature: Session Persistence

The session persistence feature allows Drools KIE sessions to be persisted in a database, surviving PDP-D restarts.

Enable session persistence
policy stop
features enable session-persistence

The configuration is located at:

  • $POLICY_HOME/config/feature-session-persistence.properties

Each controller that wants to be started with persistence should contain the following line in its <controller-name>-controller.properties

  • persistence.type=auto

Start the PDP-D using session-persistence
db-migrator -o upgrade -s ALL
policy start

Facts will survive a PDP-D restart using native drools capabilities, at the cost of a performance overhead.

Disable the session-persistence feature
policy stop
features disable session-persistence
sed -i "/persistence.type=auto/d" <controller-name>-controller.properties
db-migrator -o erase -s sessionpersistence   # delete all its database data (optional)
policy start

End of Document

Feature: State Management

The State Management Feature provides:

  • Node-level health monitoring

  • Monitoring the health of dependency nodes - nodes on which a particular node is dependent

  • Ability to lock/unlock a node and suspend or resume all application processing

  • Ability to suspend application processing on a node that is disabled or in a standby state

  • Interworking/Coordination of state values

  • Support for ITU X.731 states and state transitions for:
    • Administrative State

    • Operational State

    • Availability Status

    • Standby Status

Enabling and Disabling Feature State Management

The State Management Feature is enabled from the command line when logged in as policy after configuring the feature properties file (see Description Details section). From the command line:

  • > features status - Lists the status of features

  • > features enable state-management - Enables the State Management Feature

  • > features disable state-management - Disables the State Management Feature

The Drools PDP must be stopped prior to enabling/disabling features and then restarted after the features have been enabled/disabled.

Enabling State Management Feature
 policy@hyperion-4:/opt/app/policy$ policy stop
 [drools-pdp-controllers]
  L []: Stopping Policy Management... Policy Management (pid=354) is stopping... Policy Management has stopped.
 policy@hyperion-4:/opt/app/policy$ features enable state-management
 name                      version         status
 ----                      -------         ------
 controlloop-utils         1.1.0-SNAPSHOT  disabled
 healthcheck               1.1.0-SNAPSHOT  disabled
 test-transaction          1.1.0-SNAPSHOT  disabled
 eelf                      1.1.0-SNAPSHOT  disabled
 state-management          1.1.0-SNAPSHOT  enabled
 active-standby-management 1.1.0-SNAPSHOT  disabled
 session-persistence       1.1.0-SNAPSHOT  disabled
Description Details
State Model
The state model follows the ITU X.731 standard for state management. The supported state values are:
Administrative State:
  • Locked - All application transaction processing is prohibited

  • Unlocked - Application transaction processing is allowed

Administrative State Transitions:
  • The transition from Unlocked to Locked state is triggered with a Lock operation

  • The transition from the Locked to Unlocked state is triggered with an Unlock operation

Operational State:
  • Enabled - The node is healthy and able to process application transactions

  • Disabled - The node is not healthy and not able to process application transactions

Operational State Transitions:
  • The transition from Enabled to Disabled is triggered with a disableFailed or disableDependency operation

  • The transition from Disabled to Enabled is triggered with an enableNotFailed and enableNoDependency operation

Availability Status:
  • Null - The Operational State is Enabled

  • Failed - The Operational State is Disabled because the node is no longer healthy

  • Dependency - The Operational State is Disabled because all members of a dependency group are disabled

  • Dependency.Failed - The Operational State is Disabled because the node is no longer healthy and all members of a dependency group are disabled

Availability Status Transitions:
  • The transition from Null to Failed is triggered with a disableFailed operation

  • The transition from Null to Dependency is triggered with a disableDependency operation

  • The transition from Failed to Dependency.Failed is triggered with a disableDependency operation

  • The transition from Dependency to Dependency.Failed is triggered with a disableFailed operation

  • The transition from Dependency.Failed to Failed is triggered with an enableNoDependency operation

  • The transition from Dependency.Failed to Dependency is triggered with an enableNotFailed operation

  • The transition from Failed to Null is triggered with an enableNotFailed operation

  • The transition from Dependency to Null is triggered with an enableNoDependency operation

Standby Status:
  • Null - The node does not support active-standby behavior

  • ProvidingService - The node is actively providing application transaction service

  • HotStandby - The node is capable of providing application transaction service, but is currently waiting to be promoted

  • ColdStandby - The node is not capable of providing application service because of a failure

Standby Status Transitions:
  • The transition from Null to HotStandby is triggered by a demote operation when the Operational State is Enabled

  • The transition from Null to ColdStandby is triggered by a demote operation when the Operational State is Disabled

  • The transition from ColdStandby to HotStandby is triggered by a transition of the Operational State from Disabled to Enabled

  • The transition from HotStandby to ColdStandby is triggered by a transition of the Operational State from Enabled to Disabled

  • The transition from ProvidingService to ColdStandby is triggered by a transition of the Operational State from Enabled to Disabled

  • The transition from HotStandby to ProvidingService is triggered by a Promote operation

  • The transition from ProvidingService to HotStandby is triggered by a Demote operation

Database

The State Management feature creates a StateManagement database having three tables:

StateManagementEntity - This table has the following columns:
  • id - Automatically created unique identifier

  • resourceName - The unique identifier for a node

  • adminState - The Administrative State

  • opState - The Operational State

  • availStatus - The Availability Status

  • standbyStatus - The Standby Status

  • created_Date - The timestamp the resource entry was created

  • modifiedDate - The timestamp the resource entry was last modified

ForwardProgressEntity - This table has the following columns:
  • forwardProgressId - Automatically created unique identifier

  • resourceName - The unique identifier for a node

  • fpc_count - A forward progress counter which is periodically incremented if the node is healthy

  • created_date - The timestamp the resource entry was created

  • last_updated - The timestamp the resource entry was last updated

ResourceRegistrationEntity - This table has the following columns:
  • ResourceRegistrationId - Automatically created unique identifier

  • resourceName - The unique identifier for a node

  • resourceUrl - The JMX URL used to check the health of a node

  • site - The name of the site in which the resource resides

  • nodeType - The type of the node (i.e., pdp_xacml, pdp_drools, pap, pap_admin, logparser, brms_gateway, astra_gateway, elk_server, pypdp)

  • created_date - The timestamp the resource entry was created

  • last_updated - The timestamp the resource entry was last updated

Node Health Monitoring

Application Monitoring

Application monitoring can be implemented using the startTransaction() and endTransaction() methods. Whenever a transaction is started, the startTransaction() method is called. If the node is locked, disabled or in a hot/cold standby state, the method will throw an exception. Otherwise, it resets the timer which triggers the default testTransaction() method.

When a transaction completes, calling endTransaction() increments the forward progress counter in the ForwardProgressEntity DB table. As long as this counter is updating, the integrity monitor will assume the node is healthy/sane.

If the startTransaction() method is not called within a provisioned period of time, a timer will expire which calls the testTransaction() method. The default implementation of this method simply increments the forward progress counter. The testTransaction() method may be overridden to perform a more meaningful test of system sanity, if desired.

If the forward progress counter stops incrementing, the integrity monitoring routine will assume the node application has lost sanity and it will trigger a statechange (disableFailed) to cause the operational state to become disabled and the availability status attribute to become failed. Once the forward progress counter again begins incrementing, the operational state will return to enabled.
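
A hedged sketch of the transaction-monitoring pattern described above follows. The enclosing class and wiring are hypothetical; only the startTransaction()/endTransaction() calls come from the text.

import org.onap.policy.common.im.IntegrityMonitor;

public class MonitoredApp {
    private final IntegrityMonitor im;  // assumed to be provided by policy/common

    public MonitoredApp(IntegrityMonitor im) {
        this.im = im;
    }

    public void handle(Object request) {
        try {
            im.startTransaction();  // throws if locked, disabled, or in standby
        } catch (Exception e) {
            return;                 // node cannot accept traffic right now
        }
        try {
            process(request);       // application-specific work
        } finally {
            im.endTransaction();    // increments the forward progress counter
        }
    }

    private void process(Object request) {
        // application logic goes here
    }
}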

Application Monitoring with AllSeemsWell

The IntegrityMonitor class provides a facility for applications to directly control updates of the forwardprogressentity table. As previously described, startTransaction() and endTransaction() are provided to monitor the forward progress of transactions. This, however, does not monitor things such as internal threads that may be blocked or have died. An example is the feature-state-management DroolsPdpElectionHandler.run() method.

The run() method is monitored by a timer task, checkWaitTimer(). If the run() method is stalled for an extended period of time, the checkWaitTimer() method will call StateManagementFeature.allSeemsWell(<className>, <AllSeemsWell State>, <String message>) with the AllSeemsWell state of Boolean.FALSE.

The IntegrityMonitor instance owned by StateManagementFeature will then store an entry in the allSeemsWellMap and block updates of the forwardprogressentity table. This, in turn, will cause the Drools PDP operational state to be set to “disabled” and the availability status to be set to “failed”.

Once the blocking condition is cleared, the checkWaitTimer() will again call the allSeemsWell() method with an AllSeemsWell state of Boolean.TRUE. This will cause the IntegrityMonitor to remove the entry for that className from the allSeemsWellMap and allow updating of the forwardprogressentity table, so long as there are no other entries in the map.

Dependency Monitoring

When a Drools PDP (or other node using the IntegrityMonitor policy/common module) is dependent upon other nodes to perform its function, those other nodes can be defined as dependencies in the properties file. In order for the dependency algorithm to function, the other nodes must also be running the IntegrityMonitor. Periodically, the Drools PDP checks the state of its dependencies. If all nodes of a given dependency type have failed, the Drools PDP will declare that it can no longer function and change the operational state to disabled and the availability status to dependency.

In addition to monitoring other policy node types, the IntegrityMonitor periodically calls a subsystemTest() method. In the Drools PDP, subsystemTest() has been overridden to execute an audit of the Database and of the Maven Repository. If the audit is unable to verify the function of either the DB or the Maven Repository, the Drools PDP will declare that it can no longer function and change the operational state to disabled and the availability status to dependency.

When a failed dependency returns to normal operation, the IntegrityMonitor will change the operational state to enabled and the availability status to null.
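
Schematically, the subsystemTest() hook can be pictured as below. The base class and method signature are hypothetical stand-ins; the real hook is defined by the IntegrityMonitor in policy/common and overridden in the Drools PDP.

// Schematic only -- MonitorBase stands in for the real IntegrityMonitor.
abstract class MonitorBase {
    /** Hypothetical signature for the periodic dependency test hook. */
    public abstract void subsystemTest() throws Exception;
}

class DroolsPdpMonitorSketch extends MonitorBase {
    @Override
    public void subsystemTest() throws Exception {
        if (!databaseAuditPasses() || !mavenRepoAuditPasses()) {
            // A failing test drives the operational state to disabled and
            // the availability status to dependency.
            throw new Exception("dependency audit failed");
        }
    }

    private boolean databaseAuditPasses() { return true; }   // placeholder audit
    private boolean mavenRepoAuditPasses() { return true; }  // placeholder audit
}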

External Health Monitoring Interface

The Drools PDP has an HTTP test interface which, when called, returns 200 if all seems well and 500 otherwise. The test interface URL is defined in the properties file.
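
A caller can probe this interface with any HTTP client. The sketch below uses Java's built-in client; the URL shown (host, port 9981, and the /test path) is an assumption for illustration and should be taken from the properties file of the actual deployment.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class HealthProbe {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:9981/test")) // assumed URL
                .GET()
                .build();
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        // 200 => all seems well; 500 => the node reports a problem
        System.out.println("health status: " + response.statusCode());
    }
}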

Site Manager

The Site Manager is not deployed with the Drools PDP, but it is available in the policy/common repository in the site-manager directory. The Site Manager provides a lock/unlock interface for nodes and a way to display node information and status.

The following is from the README file included with the Site Manager.

Site Manager README extract
 Before using 'siteManager', the file 'siteManager.properties' needs to be
 edited to configure the parameters used to access the database:

     javax.persistence.jdbc.driver - typically 'org.mariadb.jdbc.Driver'

     javax.persistence.jdbc.url - URL referring to the database,
         which typically has the form: 'jdbc:mariadb://<host>:<port>/<db>'
         ('<db>' is probably 'xacml' in this case)

     javax.persistence.jdbc.user - the user id for accessing the database

     javax.persistence.jdbc.password - password for accessing the database

 Once the properties file has been updated, the 'siteManager' script can be
 invoked as follows:

     siteManager show [ -s <site> | -r <resourceName> ] :
         display node information (Site, NodeType, ResourceName, AdminState,
                                   OpState, AvailStatus, StandbyStatus)

     siteManager setAdminState { -s <site> | -r <resourceName> } <new-state> :
         update admin state on selected nodes

     siteManager lock { -s <site> | -r <resourceName> } :
         lock selected nodes

     siteManager unlock { -s <site> | -r <resourceName> } :
         unlock selected nodes

Note that the ‘siteManager’ script assumes that the script, ‘site-manager-${project.version}.jar’ file and ‘siteManager.properties’ file are all in the same directory. If the files are separated, the ‘siteManager’ script will need to be modified so it can locate the jar and properties files.

Properties

The feature-state-management.properties file controls the function of the State Management Feature. In general, the properties have adequate descriptions in the file. Parameters which must be replaced prior to usage are indicated thus: ${{parameter to be replaced}}.

feature-state-management.properties
 # DB properties
 javax.persistence.jdbc.driver=org.mariadb.jdbc.Driver
 javax.persistence.jdbc.url=jdbc:mariadb://${{SQL_HOST}}:3306/statemanagement
 javax.persistence.jdbc.user=${{SQL_USER}}
 javax.persistence.jdbc.password=${{SQL_PASSWORD}}

 # DroolsPDPIntegrityMonitor Properties
 # Test interface host and port defaults may be overwritten here
 http.server.services.TEST.host=0.0.0.0
 http.server.services.TEST.port=9981
 #These properties will default to the following if no other values are provided:
 # http.server.services.TEST.restClasses=org.onap.policy.drools.statemanagement.IntegrityMonitorRestManager
 # http.server.services.TEST.managed=false
 # http.server.services.TEST.swagger=true

 #IntegrityMonitor Properties

 # Must be unique across the system
 resource.name=pdp1
 # Name of the site in which this node is hosted
 site_name=site1
 # Forward Progress Monitor update interval seconds
 fp_monitor_interval=30
 # Failed counter threshold before failover
 failed_counter_threshold=3
 # Interval between test transactions when no traffic seconds
 test_trans_interval=10
 # Interval between writes of the FPC to the DB seconds
 write_fpc_interval=5
 # Node type Note: Make sure you don't leave any trailing spaces, or you'll get an 'invalid node type' error!
 node_type=pdp_drools
 # Dependency groups are groups of resources upon which a node's operational state depends.
 # Each group is a comma-separated list of resource names and groups are separated by a semicolon.  For example:
 # dependency_groups=site_1.astra_1,site_1.astra_2;site_1.brms_1,site_1.brms_2;site_1.logparser_1;site_1.pypdp_1
 dependency_groups=
 # When set to true, dependent health checks are performed by using JMX to invoke test() on the dependent.
 # The default false is to use state checks for health.
 test_via_jmx=true
 # This is the max number of seconds beyond which a non incrementing FPC is considered a failure
 max_fpc_update_interval=120
 # Run the state audit every 60 seconds (60000 ms).  The state audit finds stale DB entries in the
 # forwardprogressentity table and marks the node as disabled/failed in the statemanagemententity
 # table. NOTE! It will only run on nodes that have a standbystatus = providingservice.
 # A value of <= 0 will turn off the state audit.
 state_audit_interval_ms=60000
 # The refresh state audit is run every (default) 10 minutes (600000 ms) to clean up any state corruption in the
 # DB statemanagemententity table. It only refreshes the DB state entry for the local node.  That is, it does not
 # refresh the state of any other nodes.  A value <= 0 will turn the audit off. Any other value will override
 # the default of 600000 ms.
 refresh_state_audit_interval_ms=600000

 # Repository audit properties
 # Assume it's the releaseRepository that needs to be audited,
 # because that's the one BRMGW will publish to.
 repository.audit.id=${{releaseRepositoryID}}
 repository.audit.url=${{releaseRepositoryUrl}}
 repository.audit.username=${{repositoryUsername}}
 repository.audit.password=${{repositoryPassword}}
 repository2.audit.id=${{releaseRepository2ID}}
 repository2.audit.url=${{releaseRepository2Url}}
 repository2.audit.username=${{repositoryUsername2}}
 repository2.audit.password=${{repositoryPassword2}}

 # Repository Audit Properties
 # Flag to control the execution of the subsystemTest for the Nexus Maven repository
 repository.audit.is.active=false
 repository.audit.ignore.errors=true
 repository.audit.interval_sec=86400
 repository.audit.failure.threshold=3

 # DB Audit Properties
 # Flag to control the execution of the subsystemTest for the Database
 db.audit.is.active=false

End of Document

Feature: Test Transaction

The Test Transaction feature provides a mechanism by which the health of drools policy controllers can be tested.

When enabled, the feature functions by injecting an event object (identified by a UUID) into the drools session of each policy controller that is active in the system. Only an object with this UUID can trigger the Test Transaction-specific drools logic to execute.

The injection of the event triggers the “TT” rule (see TestTransactionTemplate.drl below) to fire. The “TT” rule simply increments a ForwardProgress counter object, thereby confirming that the drools session for this particular controller is active and firing its rules accordingly. This cycle repeats at 20 second intervals.

If it is ever the case that a drools controller does not have the “TT” rule present in its .drl, or that the forward progress counter is not incremented, the Test Transaction thread for that particular drools session (i.e. controller) is terminated and a message is logged to error.log.
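
Conceptually, the injection mechanism looks like the sketch below: an EventObject whose source is the well-known UUID is inserted into a controller's drools session, which fires the “TT” rule. The KieSession wiring is illustrative only (it requires the kie-api dependency); the real feature locates sessions through the PDP-D controller infrastructure.

import java.util.EventObject;

import org.kie.api.runtime.KieSession;

public class TestTransactionSketch {
    static final String TT_UUID = "43868e59-d1f3-43c2-bd6f-86f89a61eea5";

    static void injectTestTransaction(KieSession session) {
        session.insert(new EventObject(TT_UUID)); // matches $tt in the "TT" rule
        session.fireAllRules();                   // rule increments ForwardProgress
    }
}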

Prior to being enabled, the following drools rules need to be appended to the rules templates of any use-case that is to be monitored by the feature.

TestTransactionTemplate.drl
 /*
  * ============LICENSE_START=======================================================
  * feature-test-transaction
  * ================================================================================
  * Copyright (C) 2017 AT&T Intellectual Property. All rights reserved.
  * ================================================================================
  * Licensed under the Apache License, Version 2.0 (the "License");
  * you may not use this file except in compliance with the License.
  * You may obtain a copy of the License at
  *
  *      http://www.apache.org/licenses/LICENSE-2.0
  *
  * Unless required by applicable law or agreed to in writing, software
  * distributed under the License is distributed on an "AS IS" BASIS,
  * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  * See the License for the specific language governing permissions and
  * limitations under the License.
  * ============LICENSE_END=========================================================
  */

 package org.onap.policy.drools.rules;

 import java.util.EventObject;

 declare ForwardProgress
     counter : Long
 end

 rule "TT.SETUP"
 when
 then
     ForwardProgress fp = new ForwardProgress();
     fp.setCounter(0L);
     insert(fp);
 end

 rule "TT"
 when
     $fp : ForwardProgress()
     $tt : EventObject(source == "43868e59-d1f3-43c2-bd6f-86f89a61eea5")
 then
     $fp.setCounter($fp.getCounter() + 1);
     retract($tt);
 end

 query "TT.FPC"
     ForwardProgress(counter >= 0, $ttc : counter)
 end

Once the proper artifacts are built and deployed with the addition of the TestTransactionTemplate rules, the feature can then be enabled by entering the following commands:

PDPD Features Command
 policy@hyperion-4:/opt/app/policy$ policy stop
 [drools-pdp-controllers]
  L []: Stopping Policy Management... Policy Management (pid=354) is stopping... Policy Management has stopped.
 policy@hyperion-4:/opt/app/policy$ features enable test-transaction
 name                      version         status
 ----                      -------         ------
 controlloop-utils         1.1.0-SNAPSHOT  disabled
 healthcheck               1.1.0-SNAPSHOT  disabled
 test-transaction          1.1.0-SNAPSHOT  enabled
 eelf                      1.1.0-SNAPSHOT  disabled
 state-management          1.1.0-SNAPSHOT  disabled
 active-standby-management 1.1.0-SNAPSHOT  disabled
 session-persistence       1.1.0-SNAPSHOT  disabled

The output of the enable command will indicate whether or not the feature was enabled successfully.

Policy engine can then be started as usual.

End of Document

Feature: no locking

The no-locking feature allows applications to use a Lock Manager that always succeeds. It does not deny acquiring resource locks.

To utilize the no-locking feature, first stop policy engine, disable other locking features, and then enable it using the “features” command.

In an official OOM installation, place a script with a .pre.sh suffix:

features.pre.sh
 #!/bin/sh
 sh -c "features disable distributed-locking"
 sh -c "features enable no-locking"

under the directory:

and rebuild the policy charts.

At container initialization, the distributed-locking will be disabled, and the no-locking feature will be enabled.

End of Document

Data Migration

PDP-D data is migrated across releases with the db-migrator.

The migration occurs when data from a different release is detected. db-migrator will look under $POLICY_HOME/etc/db/migration for databases and SQL scripts to migrate.

$POLICY_HOME/etc/db/migration/<schema-name>/sql/<sql-file>

where <sql-file> is of the form:

<VERSION>-<pdp|feature-name>[-description](.upgrade|.downgrade).sql

The db-migrator tool syntax is

syntax: db-migrator
     -s <schema-name>
     [-b <migration-dir>]
     [-f <from-version>]
     [-t <target-version>]
     -o <operations>

     where <operations>=upgrade|downgrade|auto|version|erase|report|ok

Configuration Options:
     -s|--schema|--database:  schema to operate on ('ALL' to apply on all)
     -b|--basedir: overrides base DB migration directory
     -f|--from: overrides current release version for operations
     -t|--target: overrides target release to upgrade/downgrade

Operations:
     upgrade: upgrade operation
     downgrade: performs a downgrade operation
     auto: autonomous operation, determines upgrade or downgrade
     version: returns current version; in conjunction with '-f', sets the current version
     erase: erase all data related <schema> (use with care)
     report: detailed migration report on a schema
     ok: is the migration status valid

See the feature-distributed-locking sql directory for an example of upgrade/downgrade scripts.

The following command will provide a report on the upgrade or downgrade activities:

db-migrator -s ALL -o report

For example in the official guilin delivery:

policy@dev-drools-0:/tmp/policy-install$ db-migrator -s ALL -o report
+---------+---------+
| name    | version |
+---------+---------+
| pooling | 1811    |
+---------+---------+
+-------------------------------------+-----------+---------+---------------------+
| script                              | operation | success | atTime              |
+-------------------------------------+-----------+---------+---------------------+
| 1804-distributedlocking.upgrade.sql | upgrade   | 1       | 2020-05-22 19:33:09 |
| 1811-distributedlocking.upgrade.sql | upgrade   | 1       | 2020-05-22 19:33:09 |
+-------------------------------------+-----------+---------+---------------------+

In order to use the db-migrator tool, the system must be configured with a database.

SQL_HOST=mariadb

Maven Repositories

The drools libraries in the PDP-D use maven to fetch rules artifacts and software dependencies.

The default settings.xml file specifies the repositories to search. This configuration can be overridden with a custom copy that would sit in a mounted configuration directory. See an example of the OOM override settings.xml.

The default ONAP installation of the control loop child image onap/policy-pdpd-cl:1.6.4 is OFFLINE. In this configuration, the rules artifact and its dependencies are all retrieved from the local maven repository. Of course, this requires that the maven dependencies are preloaded in the local repository for it to work.

An offline configuration requires two items:

  • OFFLINE environment variable set to true.

  • an override settings.xml customization (see settings.xml).

The default mode in the onap/policy-drools:1.6.3 is ONLINE instead.

In ONLINE mode, the controller initialization can take a significant amount of time.

The Policy ONAP installation includes a nexus repository component that can be used to host any arbitrary artifacts that a PDP-D application may require. The following environment variables configure its location:

SNAPSHOT_REPOSITORY_ID=policy-nexus-snapshots
SNAPSHOT_REPOSITORY_URL=http://nexus:8080/nexus/content/repositories/snapshots/
RELEASE_REPOSITORY_ID=policy-nexus-releases
RELEASE_REPOSITORY_URL=http://nexus:8080/nexus/content/repositories/releases/
REPOSITORY_OFFLINE=false

The deploy-artifact tool is used to deploy artifacts to the local or remote maven repositories. It also allows for dependencies to be installed locally. The features tool invokes it when artifacts are to be deployed as part of a feature. The tool can be useful for developers to test a new application in a container.

syntax: deploy-artifact
     [-f|-l|-d]
     -s <custom-settings>
     -a <artifact>

Options:
     -f|--file-repo: deploy in the file repository
     -l|--local-repo: install in the local repository
     -d|--dependencies: install dependencies in the local repository
     -s|--settings: custom settings.xml
     -a|--artifact: file artifact (jar or pom) to deploy and/or install

AAF

Policy can talk to AAF for authorization requests. To enable AAF, set the following environment variables:

AAF=true
AAF_NAMESPACE=org.onap.policy
AAF_HOST=aaf-locate.onap

By default AAF is disabled.

Policy Tool

The policy tool can be used to stop, start, and provide status on the PDP-D.

syntax: policy [--debug] status|start|stop

The status option provides generic status of the system.

[drools-pdp-controllers]
 L []: Policy Management (pid 408) is running
    0 cron jobs installed.

[features]
name                   version         status
----                   -------         ------
healthcheck            1.6.3           enabled
distributed-locking    1.6.3           enabled
lifecycle              1.6.3           enabled
controlloop-management 1.6.4           enabled
controlloop-utils      1.6.4           enabled
controlloop-trans      1.6.4           enabled
controlloop-usecases   1.6.4           enabled

[migration]
pooling: OK @ 1811

It contains 3 sections:

  • PDP-D running status

  • features applied

  • Data migration status on a per-database basis.

The start and stop commands are useful for developers testing functionality on a docker container instance.

Telemetry Shell

PDP-D offers an ample set of REST APIs to debug, introspect, and change state on a running PDP-D. This is known as the telemetry API. The telemetry shell wraps these APIs for shell-like access using http-prompt.

policy@dev-drools-0:~$ telemetry
Version: 1.0.0
https://localhost:9696/policy/pdp/engine> get controllers
HTTP/1.1 200 OK
Content-Length: 13
Content-Type: application/json
Date: Thu, 04 Jun 2020 01:07:38 GMT
Server: Jetty(9.4.24.v20191120)

[
    "usecases"
]

https://localhost:9696/policy/pdp/engine> exit
Goodbye!
policy@dev-drools-0:~$

Other tools

Refer to the $POLICY_HOME/bin/ directory for additional tooling.

PDP-D Docker Container Configuration

Both the PDP-D onap/policy-drools and onap/policy-pdpd-cl images can be used without other components.

There are 2 types of configuration data provided to the container:

  1. environment variables.

  2. configuration files and shell scripts.

Environment variables

As shown in the controller and endpoint sections, PDP-D configuration can rely on environment variables. In a container environment, these variables are set up by the user in the host environment.

Configuration Files and Shell Scripts

PDP-D is very flexible in its configuration.

The following file types are recognized when mounted under /tmp/policy-install/config.

These are the configuration items that can reside externally and override the default configuration:

  • settings.xml if working with external nexus repositories.

  • standalone-settings.xml if an external policy nexus repository is not available.

  • *.conf files containing environment variables. This is an alternative to using environment variables, as these files are sourced in before the PDP-D starts.

  • features*.zip to load any arbitrary feature not present in the image.

  • *.pre.sh scripts that will be executed before the PDP-D starts.

  • *.post.sh scripts that will be executed after the PDP-D starts.

  • policy-keystore to override the default PDP-D java keystore.

  • policy-truststore to override the default PDP-D java truststore.

  • aaf-cadi.keyfile to override the default AAF CADI Key generated by AAF.

  • *.properties to override or add any properties file for the PDP-D; this includes controller, endpoint, engine, or system configurations.

  • logback*.xml to override the default logging configuration.

  • *.xml to override other .xml configuration that may be used, for example, by an application.

  • *.json to provide JSON configuration that may be used by an application.

Running PDP-D with a single container

Environment File

First create an environment file (in this example env.conf) to configure the PDP-D.

# SYSTEM software configuration

POLICY_HOME=/opt/app/policy
POLICY_LOGS=/var/log/onap/policy/pdpd
KEYSTORE_PASSWD=Pol1cy_0nap
TRUSTSTORE_PASSWD=Pol1cy_0nap

# Telemetry credentials

TELEMETRY_PORT=9696
TELEMETRY_HOST=0.0.0.0
TELEMETRY_USER=demo@people.osaaf.org
TELEMETRY_PASSWORD=demo123456!

# nexus repository

SNAPSHOT_REPOSITORY_ID=
SNAPSHOT_REPOSITORY_URL=
RELEASE_REPOSITORY_ID=
RELEASE_REPOSITORY_URL=
REPOSITORY_USERNAME=
REPOSITORY_PASSWORD=
REPOSITORY_OFFLINE=true

# Relational (SQL) DB access

SQL_HOST=
SQL_USER=
SQL_PASSWORD=

# AAF

AAF=false
AAF_NAMESPACE=org.onap.policy
AAF_HOST=aaf.api.simpledemo.onap.org

# PDP-D DMaaP configuration channel

PDPD_CONFIGURATION_TOPIC=PDPD-CONFIGURATION
PDPD_CONFIGURATION_API_KEY=
PDPD_CONFIGURATION_API_SECRET=
PDPD_CONFIGURATION_CONSUMER_GROUP=
PDPD_CONFIGURATION_CONSUMER_INSTANCE=
PDPD_CONFIGURATION_PARTITION_KEY=

# PAP-PDP configuration channel

POLICY_PDP_PAP_TOPIC=POLICY-PDP-PAP
POLICY_PDP_PAP_API_KEY=
POLICY_PDP_PAP_API_SECRET=

# DMaaP

DMAAP_SERVERS=localhost

Note that the SQL_HOST and REPOSITORY variables are empty, so the PDP-D does not attempt to integrate with those components.

Configuration

To avoid noise in the logs related to dmaap configuration, a startup script (noop.pre.sh) that converts dmaap endpoints to noop is placed in the host directory to be mounted.

noop.pre.sh
#!/bin/bash -x

sed -i "s/^dmaap/noop/g" $POLICY_HOME/config/*.properties
active.post.sh

To put the controller directly in active mode at initialization, place an active.post.sh script under the mounted host directory:

#!/bin/bash -x

bash -c "http --verify=no -a ${TELEMETRY_USER}:${TELEMETRY_PASSWORD} PUT https://localhost:9696/policy/pdp/engine/lifecycle/state/ACTIVE"
Bring up the PDP-D
docker run --rm -p 9696:9696 -v ${PWD}/config:/tmp/policy-install/config --env-file ${PWD}/env/env.conf -it --name PDPD -h pdpd nexus3.onap.org:10001/onap/policy-drools:1.6.3

To run the container in detached mode, add the -d flag.

Note that in this command, we are opening the 9696 telemetry API port to the outside world, the config directory (where the noop.pre.sh customization script resides) is mounted as /tmp/policy-install/config, and the customization environment variables (env/env.conf) are passed into the container.

To open a shell into the PDP-D:

docker exec -it PDPD bash

Once in the container, run tools such as telemetry, db-migrator, policy to look at the system state:

To run the telemetry shell and other tools from the host:

docker exec -it PDPD bash -c "/opt/app/policy/bin/telemetry"
docker exec -it PDPD bash -c "/opt/app/policy/bin/policy status"
docker exec -it PDPD bash -c "/opt/app/policy/bin/db-migrator -s ALL -o report"
Controlled instantiation of the PDP-D

Sometimes a developer may want to start and stop the PDP-D manually:

# start a bash

docker run --rm -p 9696:9696 -v ${PWD}/config:/tmp/policy-install/config --env-file ${PWD}/env/env.conf -it --name PDPD -h pdpd nexus3.onap.org:10001/onap/policy-drools:1.6.3 bash

# use this command to start policy applying host customizations from /tmp/policy-install/config

pdpd-entrypoint.sh vmboot

# or use this command to start policy without host customization

policy start

# at any time use the following command to stop the PDP-D

policy stop

# and this command to start the PDP-D back again

policy start

Running PDP-D with nexus and mariadb

docker-compose can be used to test the PDP-D with other components. This is an example configuration that brings up nexus, mariadb, and the PDP-D (docker-compose-pdp.yml):

docker-compose-pdp.yml
version: '3'
services:
   mariadb:
      image: mariadb:10.2.25
      container_name: mariadb
      hostname: mariadb
      command: ['--lower-case-table-names=1', '--wait_timeout=28800']
      env_file:
         - ${PWD}/db/db.conf
      volumes:
         - ${PWD}/db:/docker-entrypoint-initdb.d
      ports:
         - "3306:3306"
   nexus:
      image: sonatype/nexus:2.14.8-01
      container_name: nexus
      hostname: nexus
      ports:
         - "8081:8081"
   drools:
      image: nexus3.onap.org:10001/onap/policy-drools:1.6.3
      container_name: drools
      depends_on:
         - mariadb
         - nexus
      hostname: drools
      ports:
         - "9696:9696"
      volumes:
         - ${PWD}/config:/tmp/policy-install/config
      env_file:
         - ${PWD}/env/env.conf

with ${PWD}/db/db.conf:

db.conf
MYSQL_ROOT_PASSWORD=secret
MYSQL_USER=policy_user
MYSQL_PASSWORD=policy_user

and ${PWD}/db/db.sh:

db.sh
# create the PDP-D databases and grant privileges to the configured user
for db in support onap_sdk log migration operationshistory10 pooling policyadmin operationshistory
do
    mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "CREATE DATABASE IF NOT EXISTS ${db};"
    mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "GRANT ALL PRIVILEGES ON \`${db}\`.* TO '${MYSQL_USER}'@'%' ;"
done

mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "FLUSH PRIVILEGES;"
env.conf

The environment file env/env.conf for PDP-D can be set up with appropriate variables to point to the nexus instance and the mariadb database:

# SYSTEM software configuration

POLICY_HOME=/opt/app/policy
POLICY_LOGS=/var/log/onap/policy/pdpd
KEYSTORE_PASSWD=Pol1cy_0nap
TRUSTSTORE_PASSWD=Pol1cy_0nap

# Telemetry credentials

TELEMETRY_PORT=9696
TELEMETRY_HOST=0.0.0.0
TELEMETRY_USER=demo@people.osaaf.org
TELEMETRY_PASSWORD=demo123456!

# nexus repository

SNAPSHOT_REPOSITORY_ID=policy-nexus-snapshots
SNAPSHOT_REPOSITORY_URL=http://nexus:8081/nexus/content/repositories/snapshots/
RELEASE_REPOSITORY_ID=policy-nexus-releases
RELEASE_REPOSITORY_URL=http://nexus:8081/nexus/content/repositories/releases/
REPOSITORY_USERNAME=admin
REPOSITORY_PASSWORD=admin123
REPOSITORY_OFFLINE=false

MVN_SNAPSHOT_REPO_URL=https://nexus.onap.org/content/repositories/snapshots/
MVN_RELEASE_REPO_URL=https://nexus.onap.org/content/repositories/releases/

# Relational (SQL) DB access

SQL_HOST=mariadb
SQL_USER=policy_user
SQL_PASSWORD=policy_user

# AAF

AAF=false
AAF_NAMESPACE=org.onap.policy
AAF_HOST=aaf.api.simpledemo.onap.org

# PDP-D DMaaP configuration channel

PDPD_CONFIGURATION_TOPIC=PDPD-CONFIGURATION
PDPD_CONFIGURATION_API_KEY=
PDPD_CONFIGURATION_API_SECRET=
PDPD_CONFIGURATION_CONSUMER_GROUP=
PDPD_CONFIGURATION_CONSUMER_INSTANCE=
PDPD_CONFIGURATION_PARTITION_KEY=

# PAP-PDP configuration channel

POLICY_PDP_PAP_TOPIC=POLICY-PDP-PAP
POLICY_PDP_PAP_API_KEY=
POLICY_PDP_PAP_API_SECRET=

# DMaaP

DMAAP_SERVERS=localhost
prepare.pre.sh

A pre-start script config/prepare.pre.sh can be added to the custom config directory to prepare the PDP-D to activate the distributed-locking feature (using the database) and to use “noop” topics instead of dmaap topics:

#!/bin/bash

bash -c "/opt/app/policy/bin/features enable distributed-locking"
sed -i "s/^dmaap/noop/g" $POLICY_HOME/config/*.properties
active.post.sh

A post-start script config/active.post.sh can place the PDP-D in active mode at initialization:


bash -c "http --verify=no -a ${TELEMETRY_USER}:${TELEMETRY_PASSWORD} PUT https://localhost:9696/policy/pdp/engine/lifecycle/state/ACTIVE"

Bring up the PDP-D, nexus, and mariadb

To bring up the containers:

docker-compose -f docker-compose-pdp.yml up -d

To take it down:

docker-compose -f docker-compose-pdp.yml down -v
Other examples

The reader can also look at the integration/csit repository, whose directories contain examples of other PDP-D configurations.

Configuring the PDP-D in an OOM Kubernetes installation

The PDP-D OOM chart can be customized at the following locations:

  • values.yaml: custom values for your installation.

  • configmaps: place in this directory any configuration extensions or overrides to customize the PDP-D that does not contain sensitive information.

  • secrets: place in this directory any configuration extensions or overrides to customize the PDP-D that does contain sensitive information.

The same customization techniques described in the docker sections for the PDP-D fully apply here, by placing the corresponding files or scripts in the configmaps and secrets directories.

Additional information

For additional information, please see the Drools PDP Development and Testing (In Depth) page.

PDP-D Applications

Overview

PDP-D applications use the PDP-D Engine middleware to provide domain-specific services. See PDP-D Engine for the description of the PDP-D infrastructure.

At this time, Control Loops are the only type of application supported.

Control Loop applications must support at least one of the following Policy Types:

  • onap.policies.controlloop.Operational (Operational Policies for Legacy Control Loops)

  • onap.policies.controlloop.operational.common.Drools (Tosca Compliant Operational Policies)

Software

Source Code repositories

The PDP-D Applications software resides on the policy/drools-applications repository. The actor libraries introduced in the frankfurt release reside in the policy/models repository.

At this time, the control loop application is the only application supported in ONAP. All the application projects reside under the controlloop directory.

Docker Image

See the drools-applications released versions for the latest images:

docker pull onap/policy-pdpd-cl:1.6.4

At the time of this writing 1.6.4 is the latest version.

The onap/policy-pdpd-cl image extends the onap/policy-drools image with the usecases controller that realizes the control loop application.

Usecases Controller

The usecases controller is the control loop application in ONAP.

There are three parts in this controller:

The kmodule.xml specifies only one session, and declares in the kbase section the two operational policy types that it supports.

The Usecases controller relies on the new Actor framework to interact with remote components, part of a control loop transaction. The reader is referred to the Policy Platform Actor Development Guidelines in the documentation for further information.

Operational Policy Types

The usecases controller supports the two Operational policy types:

  • onap.policies.controlloop.Operational.

  • onap.policies.controlloop.operational.common.Drools.

The onap.policies.controlloop.Operational is the legacy operational type, used before the frankfurt release. The onap.policies.controlloop.operational.common.Drools is the Tosca compliant policy type introduced in frankfurt.

The legacy operational policy type is defined at the onap.policies.controlloop.Operational.yaml.

The Tosca Compliant Operational Policy Type is defined at the onap.policies.controlloop.operational.common.Drools.

An example of a Legacy Operational Policy can be found here.

An example of a Tosca Compliant Operational Policy can be found here.

Policy Chaining

The usecases controller supports chaining of multiple operations inside a Tosca Operational Policy. The next operation can be chained based on the result/output from an operation. The possibilities available for chaining are:

  • success: chain after the result of operation is success

  • failure: chain after the result of operation is failure due to issues with controller/actor

  • failure_timeout: chain after the result of operation is failure due to timeout

  • failure_retries: chain after the result of operation is failure after all retries

  • failure_exception: chain after the result of operation is failure due to exception

  • failure_guard: chain after the result of operation is failure due to guard not allowing the operation

An example of policy chaining for VNF can be found here.

An example of policy chaining for PNF can be found here.
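
As a conceptual model only (not the PDP-D's actual implementation), chaining amounts to a lookup from an operation's outcome to the next operation id declared in the policy:

import java.util.Map;

public class ChainingSketch {
    /** Returns the next operation id for an outcome, or null to end the chain. */
    static String nextOperation(Map<String, String> transitions, String outcome) {
        // outcome is one of: success, failure, failure_timeout,
        // failure_retries, failure_exception, failure_guard
        return transitions.get(outcome);
    }

    public static void main(String[] args) {
        // Hypothetical transitions mirroring the fields of a Tosca operation.
        Map<String, String> scaleUp = Map.of(
                "success", "final_success",
                "failure_guard", "final_failure_guard");
        System.out.println(nextOperation(scaleUp, "success")); // final_success
    }
}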

Features

Since the PDP-D Control Loop Application image was created from the PDP-D Engine one (onap/policy-drools), it inherits all features and functionality.

The enabled features in the onap/policy-pdpd-cl image are:

  • distributed-locking: distributed resource locking.

  • healthcheck: healthcheck.

  • lifecycle: enables the lifecycle APIs.

  • controlloop-trans: control loop transaction tracking.

  • controlloop-management: generic controller capabilities.

  • controlloop-usecases: new controller introduced in the guilin release to realize the ONAP use cases.

The following features are installed but disabled:

  • controlloop-frankfurt: controller used in the frankfurt release.

  • controlloop-tdjam: experimental java-only controller to be deprecated post guilin.

  • controlloop-utils: actor simulators.

Control Loops Transaction (controlloop-trans)

It tracks Control Loop Transactions and Operations. These are recorded in the $POLICY_LOGS/audit.log and $POLICY_LOGS/metrics.log, and accessible through the telemetry APIs.

Control Loops Management (controlloop-management)

It installs common control loop application resources, and provides telemetry API extensions. Actor configurations are packaged in this feature.

Usecases Controller (controlloop-usecases)

It is the guilin release implementation of the ONAP use cases. It relies on the new Actor model framework to carry out a policy’s execution.

Frankfurt Controller (controlloop-frankfurt)

This is the frankfurt controller that will be deprecated after the guilin release.

TDJAM Controller (controlloop-tdjam)

This is an experimental, java-only controller that will be deprecated after the guilin release.

Utilities (controlloop-utils)

Enables actor simulators for testing purposes.

Offline Mode

The default ONAP installation in onap/policy-pdpd-cl:1.6.4 is OFFLINE. In this configuration, the rules artifact and the dependencies are all in the local maven repository. This requires that the maven dependencies are preloaded in the local repository.

An offline configuration requires two configuration items:

  • OFFLINE environment variable set to true (see values.yaml).

  • an override of the default settings.xml (see settings.xml).

Running the PDP-D Control Loop Application in a single container

Environment File

First create an environment file (in this example env.conf) to configure the PDP-D.

# SYSTEM software configuration

POLICY_HOME=/opt/app/policy
POLICY_LOGS=/var/log/onap/policy/pdpd
KEYSTORE_PASSWD=Pol1cy_0nap
TRUSTSTORE_PASSWD=Pol1cy_0nap

# Telemetry credentials

TELEMETRY_PORT=9696
TELEMETRY_HOST=0.0.0.0
TELEMETRY_USER=demo@people.osaaf.org
TELEMETRY_PASSWORD=demo123456!

# nexus repository

SNAPSHOT_REPOSITORY_ID=
SNAPSHOT_REPOSITORY_URL=
RELEASE_REPOSITORY_ID=
RELEASE_REPOSITORY_URL=
REPOSITORY_USERNAME=
REPOSITORY_PASSWORD=
REPOSITORY_OFFLINE=true

MVN_SNAPSHOT_REPO_URL=
MVN_RELEASE_REPO_URL=

# Relational (SQL) DB access

SQL_HOST=
SQL_USER=
SQL_PASSWORD=

# AAF

AAF=false
AAF_NAMESPACE=org.onap.policy
AAF_HOST=aaf.api.simpledemo.onap.org

# PDP-D DMaaP configuration channel

PDPD_CONFIGURATION_TOPIC=PDPD-CONFIGURATION
PDPD_CONFIGURATION_API_KEY=
PDPD_CONFIGURATION_API_SECRET=
PDPD_CONFIGURATION_CONSUMER_GROUP=
PDPD_CONFIGURATION_CONSUMER_INSTANCE=
PDPD_CONFIGURATION_PARTITION_KEY=

# PAP-PDP configuration channel

POLICY_PDP_PAP_TOPIC=POLICY-PDP-PAP
POLICY_PDP_PAP_GROUP=defaultGroup

# Symmetric Key for encoded sensitive data

SYMM_KEY=

# Healthcheck Feature

HEALTHCHECK_USER=demo@people.osaaf.org
HEALTHCHECK_PASSWORD=demo123456!

# Pooling Feature

POOLING_TOPIC=POOLING

# PAP

PAP_HOST=
PAP_USERNAME=
PAP_PASSWORD=

# PAP legacy

PAP_LEGACY_USERNAME=
PAP_LEGACY_PASSWORD=

# PDP-X

PDP_HOST=localhost
PDP_PORT=6669
PDP_CONTEXT_URI=pdp/api/getDecision
PDP_USERNAME=policy
PDP_PASSWORD=password
GUARD_DISABLED=true

# DCAE DMaaP

DCAE_TOPIC=unauthenticated.DCAE_CL_OUTPUT
DCAE_SERVERS=localhost
DCAE_CONSUMER_GROUP=dcae.policy.shared

# Open DMaaP

DMAAP_SERVERS=localhost

# AAI

AAI_HOST=localhost
AAI_PORT=6666
AAI_CONTEXT_URI=
AAI_USERNAME=policy
AAI_PASSWORD=policy

# SO

SO_HOST=localhost
SO_PORT=6667
SO_CONTEXT_URI=
SO_URL=https://localhost:6667/
SO_USERNAME=policy
SO_PASSWORD=policy

# VFC

VFC_HOST=localhost
VFC_PORT=6668
VFC_CONTEXT_URI=api/nslcm/v1/
VFC_USERNAME=policy
VFC_PASSWORD=policy

# SDNC

SDNC_HOST=localhost
SDNC_PORT=6670
SDNC_CONTEXT_URI=restconf/operations/
Configuration
noop.pre.sh

To avoid noise in the logs related to dmaap configuration, a startup script (noop.pre.sh) that converts dmaap endpoints to noop is placed in the host directory to be mounted.

#!/bin/bash -x

sed -i "s/^dmaap/noop/g" $POLICY_HOME/config/*.properties
features.pre.sh

We can enable the controlloop-utils and disable the distributed-locking feature to avoid using the database.

#!/bin/bash -x

bash -c "/opt/app/policy/bin/features disable distributed-locking"
bash -c "/opt/app/policy/bin/features enable controlloop-utils"
active.post.sh

The active.post.sh script makes the PDP-D active.

#!/bin/bash -x

bash -c "http --verify=no -a ${TELEMETRY_USER}:${TELEMETRY_PASSWORD} PUT https://localhost:9696/policy/pdp/engine/lifecycle/state/ACTIVE"
Actor Properties

In the guilin release, some actor configurations need to be overridden to support http, for compatibility with the controlloop-utils feature.

AAI-http-client.properties
http.client.services=AAI

http.client.services.AAI.managed=true
http.client.services.AAI.https=false
http.client.services.AAI.host=${envd:AAI_HOST}
http.client.services.AAI.port=${envd:AAI_PORT}
http.client.services.AAI.userName=${envd:AAI_USERNAME}
http.client.services.AAI.password=${envd:AAI_PASSWORD}
http.client.services.AAI.contextUriPath=${envd:AAI_CONTEXT_URI}
SDNC-http-client.properties
http.client.services=SDNC

http.client.services.SDNC.managed=true
http.client.services.SDNC.https=false
http.client.services.SDNC.host=${envd:SDNC_HOST}
http.client.services.SDNC.port=${envd:SDNC_PORT}
http.client.services.SDNC.userName=${envd:SDNC_USERNAME}
http.client.services.SDNC.password=${envd:SDNC_PASSWORD}
http.client.services.SDNC.contextUriPath=${envd:SDNC_CONTEXT_URI}
VFC-http-client.properties
http.client.services=VFC

http.client.services.VFC.managed=true
http.client.services.VFC.https=false
http.client.services.VFC.host=${envd:VFC_HOST}
http.client.services.VFC.port=${envd:VFC_PORT}
http.client.services.VFC.userName=${envd:VFC_USERNAME}
http.client.services.VFC.password=${envd:VFC_PASSWORD}
http.client.services.VFC.contextUriPath=${envd:VFC_CONTEXT_URI:api/nslcm/v1/}
settings.xml

The standalone-settings.xml file is the default maven settings override in the container.

<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0 http://maven.apache.org/xsd/settings-1.0.0.xsd">

    <offline>true</offline>

    <profiles>
        <profile>
            <id>policy-local</id>
            <repositories>
                <repository>
                    <id>file-repository</id>
                    <url>file:${user.home}/.m2/file-repository</url>
                    <releases>
                        <enabled>true</enabled>
                        <updatePolicy>always</updatePolicy>
                    </releases>
                    <snapshots>
                        <enabled>true</enabled>
                        <updatePolicy>always</updatePolicy>
                    </snapshots>
                </repository>
            </repositories>
        </profile>
    </profiles>

    <activeProfiles>
        <activeProfile>policy-local</activeProfile>
    </activeProfiles>

</settings>
Bring up the PDP-D Control Loop Application
docker run --rm -p 9696:9696 -v ${PWD}/config:/tmp/policy-install/config --env-file ${PWD}/env/env.conf -it --name PDPD -h pdpd nexus3.onap.org:10001/onap/policy-pdpd-cl:1.6.4

To run the container in detached mode, add the -d flag.

Note that we are opening the 9696 telemetry API port to the outside world, mounting the config host directory, and setting environment variables.

To open a shell into the PDP-D:

docker exec -it PDPD bash

Once in the container, run tools such as telemetry, db-migrator, policy to look at the system state:

docker exec -it PDPD bash -c "/opt/app/policy/bin/telemetry"
docker exec -it PDPD bash -c "/opt/app/policy/bin/policy status"
docker exec -it PDPD bash -c "/opt/app/policy/bin/db-migrator -s ALL -o report"
Controlled instantiation of the PDP-D Control Loop Application

Sometimes a developer may want to start and stop the PDP-D manually:

# start a bash

docker run --rm -p 9696:9696 -v ${PWD}/config:/tmp/policy-install/config --env-file ${PWD}/env/env.conf -it --name PDPD -h pdpd nexus3.onap.org:10001/onap/policy-pdpd-cl:1.6.4 bash

# use this command to start policy applying host customizations from /tmp/policy-install/config

pdpd-cl-entrypoint.sh vmboot

# or use this command to start policy without host customization

policy start

# at any time use the following command to stop the PDP-D

policy stop

# and this command to start the PDP-D back again

policy start

Scale-out use case testing

The first step is to create the operational.scaleout policy.

policy.vdns.json
{
  "type": "onap.policies.controlloop.operational.common.Drools",
  "type_version": "1.0.0",
  "name": "operational.scaleout",
  "version": "1.0.0",
  "metadata": {
    "policy-id": "operational.scaleout"
  },
  "properties": {
    "id": "ControlLoop-vDNS-6f37f56d-a87d-4b85-b6a9-cc953cf779b3",
    "timeout": 60,
    "abatement": false,
    "trigger": "unique-policy-id-1-scale-up",
    "operations": [
      {
        "id": "unique-policy-id-1-scale-up",
        "description": "Create a new VF Module",
        "operation": {
          "actor": "SO",
          "operation": "VF Module Create",
          "target": {
            "targetType": "VFMODULE",
            "entityIds": {
              "modelInvariantId": "e6130d03-56f1-4b0a-9a1d-e1b2ebc30e0e",
              "modelVersionId": "94b18b1d-cc91-4f43-911a-e6348665f292",
              "modelName": "VfwclVfwsnkBbefb8ce2bde..base_vfw..module-0",
              "modelVersion": 1,
              "modelCustomizationId": "47958575-138f-452a-8c8d-d89b595f8164"
            }
          },
          "payload": {
            "requestParameters": "{\"usePreload\":true,\"userParams\":[]}",
            "configurationParameters": "[{\"ip-addr\":\"$.vf-module-topology.vf-module-parameters.param[9]\",\"oam-ip-addr\":\"$.vf-module-topology.vf-module-parameters.param[16]\",\"enabled\":\"$.vf-module-topology.vf-module-parameters.param[23]\"}]"
          }
        },
        "timeout": 20,
        "retries": 0,
        "success": "final_success",
        "failure": "final_failure",
        "failure_timeout": "final_failure_timeout",
        "failure_retries": "final_failure_retries",
        "failure_exception": "final_failure_exception",
        "failure_guard": "final_failure_guard"
      }
    ]
  }
}

To provision the scale-out policy, issue the following command:

http --verify=no -a "${TELEMETRY_USER}:${TELEMETRY_PASSWORD}" https://localhost:9696/policy/pdp/engine/lifecycle/policies @usecases/policy.vdns.json

Verify that the policy shows with the telemetry tools:

docker exec -it PDPD bash -c "/opt/app/policy/bin/telemetry"
> get /policy/pdp/engine/lifecycle/policies
> get /policy/pdp/engine/controllers/usecases/drools/facts/usecases/controlloops
dcae.vdns.onset.json
{
  "closedLoopControlName": "ControlLoop-vDNS-6f37f56d-a87d-4b85-b6a9-cc953cf779b3",
  "closedLoopAlarmStart": 1463679805324,
  "closedLoopEventClient": "microservice.stringmatcher",
  "closedLoopEventStatus": "ONSET",
  "requestID": "c7c6a4aa-bb61-4a15-b831-ba1472dd4a65",
  "target_type": "VNF",
  "target": "vserver.vserver-name",
  "AAI": {
    "vserver.is-closed-loop-disabled": "false",
    "vserver.prov-status": "ACTIVE",
    "vserver.vserver-name": "OzVServer"
  },
  "from": "DCAE",
  "version": "1.0.2"
}

To initiate a control loop transaction, simulate a DCAE ONSET to Policy:

http --verify=no -a "${TELEMETRY_USER}:${TELEMETRY_PASSWORD}" PUT https://localhost:9696/policy/pdp/engine/topics/sources/noop/DCAE_TOPIC/events @dcae.vdns.onset.json Content-Type:'text/plain'

This will trigger the scale-out control loop transaction, which will interact with the SO simulator to complete the transaction.

Verify in $POLICY_LOGS/network.log that a FINAL: SUCCESS notification is sent over the POLICY-CL-MGT channel. An entry in the $POLICY_LOGS/audit.log should indicate successful completion as well.

vCPE use case testing

The first step is to create the operational.restart policy.

policy.vcpe.json
{
  "type": "onap.policies.controlloop.operational.common.Drools",
  "type_version": "1.0.0",
  "name": "operational.restart",
  "version": "1.0.0",
  "metadata": {
    "policy-id": "operational.restart"
  },
  "properties": {
    "id": "ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e",
    "timeout": 300,
    "abatement": false,
    "trigger": "unique-policy-id-1-restart",
    "operations": [
      {
        "id": "unique-policy-id-1-restart",
        "description": "Restart the VM",
        "operation": {
          "actor": "APPC",
          "operation": "Restart",
          "target": {
            "targetType": "VNF"
          }
        },
        "timeout": 240,
        "retries": 0,
        "success": "final_success",
        "failure": "final_failure",
        "failure_timeout": "final_failure_timeout",
        "failure_retries": "final_failure_retries",
        "failure_exception": "final_failure_exception",
        "failure_guard": "final_failure_guard"
      }
    ]
  }
}

To provision the operational.restart policy, issue the following command:

http --verify=no -a "${TELEMETRY_USER}:${TELEMETRY_PASSWORD}" https://localhost:9696/policy/pdp/engine/lifecycle/policies @usecases/policy.vcpe.json

Verify that the policy shows with the telemetry tools:

docker exec -it PDPD bash -c "/opt/app/policy/bin/telemetry"
> get /policy/pdp/engine/lifecycle/policies
> get /policy/pdp/engine/controllers/usecases/drools/facts/usecases/controlloops
dcae.vcpe.onset.json
{
  "closedLoopControlName": "ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e",
  "closedLoopAlarmStart": 1463679805324,
  "closedLoopEventClient": "DCAE_INSTANCE_ID.dcae-tca",
  "closedLoopEventStatus": "ONSET",
  "requestID": "664be3d2-6c12-4f4b-a3e7-c349acced200",
  "target_type": "VNF",
  "target": "generic-vnf.vnf-id",
  "AAI": {
    "vserver.is-closed-loop-disabled": "false",
    "vserver.prov-status": "ACTIVE",
    "generic-vnf.vnf-id": "vCPE_Infrastructure_vGMUX_demo_app"
  },
  "from": "DCAE",
  "version": "1.0.2"
}

To initiate a control loop transaction, simulate a DCAE ONSET to Policy:

http --verify=no -a "${TELEMETRY_USER}:${TELEMETRY_PASSWORD}" PUT https://localhost:9696/policy/pdp/engine/topics/sources/noop/DCAE_TOPIC/events @dcae.vcpe.onset.json Content-Type:'text/plain'

This will spawn a vCPE control loop transaction in the PDP-D. Policy will send a restart message over the APPC-LCM-READ channel to APPC and wait for a response.

Verify that you see this message in the network.log by looking for APPC-LCM-READ messages.

Note the sub-request-id value from the restart message in the APPC-LCM-READ channel.

Replace REPLACEME in the appc.vcpe.success.json with this sub-request-id.

appc.vcpe.success.json
{
  "body": {
    "output": {
      "common-header": {
        "timestamp": "2017-08-25T21:06:23.037Z",
        "api-ver": "5.00",
        "originator-id": "664be3d2-6c12-4f4b-a3e7-c349acced200",
        "request-id": "664be3d2-6c12-4f4b-a3e7-c349acced200",
        "sub-request-id": "REPLACEME",
        "flags": {}
      },
      "status": {
        "code": 400,
        "message": "Restart Successful"
      }
    }
  },
  "version": "2.0",
  "rpc-name": "restart",
  "correlation-id": "664be3d2-6c12-4f4b-a3e7-c349acced200-1",
  "type": "response"
}

Send a simulated APPC response back to the PDP-D over the APPC-LCM-WRITE channel.

http --verify=no -a "${TELEMETRY_USER}:${TELEMETRY_PASSWORD}" PUT https://localhost:9696/policy/pdp/engine/topics/sources/noop/APPC-LCM-WRITE/events @appc.vcpe.success.json  Content-Type:'text/plain'

Verify in $POLICY_LOGS/network.log that a FINAL: SUCCESS notification is sent over the POLICY-CL-MGT channel, and an entry is added to the $POLICY_LOGS/audit.log indicating successful completion.

vFirewall use case testing

The first step is to create the operational.modifyconfig policy.

policy.vfw.json
{
  "type": "onap.policies.controlloop.operational.common.Drools",
  "type_version": "1.0.0",
  "name": "operational.modifyconfig",
  "version": "1.0.0",
  "metadata": {
    "policy-id": "operational.modifyconfig"
  },
  "properties": {
    "id": "ControlLoop-vFirewall-d0a1dfc6-94f5-4fd4-a5b5-4630b438850a",
    "timeout": 300,
    "abatement": false,
    "trigger": "unique-policy-id-1-modifyConfig",
    "operations": [
      {
        "id": "unique-policy-id-1-modifyConfig",
        "description": "Modify the packet generator",
        "operation": {
          "actor": "APPC",
          "operation": "ModifyConfig",
          "target": {
            "targetType": "VNF",
            "entityIds": {
              "resourceID": "bbb3cefd-01c8-413c-9bdd-2b92f9ca3d38"
            }
          },
          "payload": {
            "streams": "{\"active-streams\": 5 }"
          }
        },
        "timeout": 240,
        "retries": 0,
        "success": "final_success",
        "failure": "final_failure",
        "failure_timeout": "final_failure_timeout",
        "failure_retries": "final_failure_retries",
        "failure_exception": "final_failure_exception",
        "failure_guard": "final_failure_guard"
      }
    ]
  }
}

To provision the operational.modifyconfig policy, issue the following command:

http --verify=no -a "${TELEMETRY_USER}:${TELEMETRY_PASSWORD}" https://localhost:9696/policy/pdp/engine/lifecycle/policies @usecases/policy.vfw.json

Verify that the policy shows with the telemetry tools:

docker exec -it PDPD bash -c "/opt/app/policy/bin/telemetry"
> get /policy/pdp/engine/lifecycle/policies
> get /policy/pdp/engine/controllers/usecases/drools/facts/usecases/controlloops
dcae.vfw.onset.json
{
  "closedLoopControlName": "ControlLoop-vFirewall-d0a1dfc6-94f5-4fd4-a5b5-4630b438850a",
  "closedLoopAlarmStart": 1463679805324,
  "closedLoopEventClient": "microservice.stringmatcher",
  "closedLoopEventStatus": "ONSET",
  "requestID": "c7c6a4aa-bb61-4a15-b831-ba1472dd4a65",
  "target_type": "VNF",
  "target": "generic-vnf.vnf-name",
  "AAI": {
    "vserver.is-closed-loop-disabled": "false",
    "vserver.prov-status": "ACTIVE",
    "generic-vnf.vnf-name": "fw0002vm002fw002",
    "vserver.vserver-name": "OzVServer"
  },
  "from": "DCAE",
  "version": "1.0.2"
}

To initiate a control loop transaction, simulate a DCAE ONSET to Policy:

http --verify=no -a "${TELEMETRY_USER}:${TELEMETRY_PASSWORD}" PUT https://localhost:9696/policy/pdp/engine/topics/sources/noop/DCAE_TOPIC/events @dcae.vfw.onset.json Content-Type:'text/plain'

This will spawn a vFW control loop transaction in the PDP-D. Policy will send a ModifyConfig message over the APPC-CL channel to APPC and wait for a response. This can be seen by searching the network.log for APPC-CL.

Note the SubRequestId field in the ModifyConfig message in the APPC-CL topic in the network.log.

Send a simulated APPC response back to the PDP-D over the APPC-CL channel. To do this, change the REPLACEME text in the appc.vcpe.success.json with this SubRequestId.

appc.vcpe.success.json
{
  "CommonHeader": {
    "TimeStamp": 1506051879001,
    "APIver": "1.01",
    "RequestID": "c7c6a4aa-bb61-4a15-b831-ba1472dd4a65",
    "SubRequestID": "REPLACEME",
    "RequestTrack": [],
    "Flags": []
  },
  "Status": {
    "Code": 400,
    "Value": "SUCCESS"
  },
  "Payload": {
    "generic-vnf.vnf-id": "f17face5-69cb-4c88-9e0b-7426db7edddd"
  }
}
http --verify=no -a "${TELEMETRY_USER}:${TELEMETRY_PASSWORD}" PUT https://localhost:9696/policy/pdp/engine/topics/sources/noop/APPC-CL/events @appc.vcpe.success.json Content-Type:'text/plain'

Verify in $POLICY_LOGS/network.log that a FINAL: SUCCESS notification is sent over the POLICY-CL-MGT channel, and an entry is added to the $POLICY_LOGS/audit.log indicating successful completion.

Running PDP-D Control Loop Application with other components

The reader can also look at the integration/csit repository. More specifically, these directories have examples of other PDP-D Control Loop configurations:

  • plans: startup scripts.

  • scripts: docker-compose and related files.

  • tests: test plan.

Additional information

For additional information, please see the Drools PDP Development and Testing (In Depth) page.

Policy XACML PDP Engine

The ONAP XACML Policy PDP Engine uses an open source implementation of the OASIS XACML 3.0 Standard to support fine-grained policy decisions in ONAP. The XACML 3.0 Standard is a language for both policies and requests/responses for access control decisions. The ONAP XACML PDP translates TOSCA Compliant Policies into the XACML policy language, loads the policies into the XACML engine, and exposes a Decision API which uses the XACML request/response language to render decisions for ONAP components.

ONAP XACML PDP Supported Policy Types

The following Policy Types are supported by the XACML PDP Engine (PDP-X):

Supported Base Policy Types

Application   Base Policy Type                         Action     Description
-----------   ---------------------------------------  ---------  -----------------------------------------------------
Monitoring    onap.policies.Monitoring                 configure  Control Loop DCAE Monitoring Policies
Guard         onap.policies.controlloop.guard.Common   guard      Control Loop Guard and Coordination Policies
Optimization  onap.policies.Optimization               optimize   Optimization policy types used by OOF
Naming        onap.policies.Naming                     naming     Naming policy types used by SDNC
Native        onap.policies.native.Xacml               native     Native XACML Policies
Match         onap.policies.Match                      native     Matchable Policy Types for the ONAP community to use

Each Policy Type is implemented as an application that extends the XacmlApplicationServiceProvider, and provides a ToscaPolicyTranslator that translates the TOSCA representation of the policy into a XACML OASIS 3.0 standard policy.
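
Schematically, an application pairs a service provider with a translator, along the lines of the stand-in interfaces below. These are simplified sketches, not the real contracts; consult XacmlApplicationServiceProvider and ToscaPolicyTranslator in policy/xacml-pdp for the actual method signatures.

// Simplified, hypothetical stand-ins for the real policy/xacml-pdp contracts.
interface ToscaPolicyTranslatorSketch {
    /** Translate one TOSCA policy into a XACML 3.0 policy document. */
    String convertPolicy(String toscaPolicyJson);
}

interface XacmlApplicationSketch {
    String applicationName();                   // e.g. "monitoring"
    java.util.List<String> actionsSupported();  // e.g. ["configure"]
    ToscaPolicyTranslatorSketch translator();   // translator used at load time
}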

By cloning the policy/xacml-pdp repository, a developer can run the JUnit tests for the applications to get a better understanding of how applications are built using translators, and of the XACML policies that are generated for each Policy Type. Each application supports one or more Policy Types and an associated “action” used by the Decision API when making these calls.

See the Policy Platform Development Tools for more information on cloning and developing the policy repositories.

XACML-PDP applications are located in the ‘applications’ sub-module of the policy/xacml-pdp repo.

XACML PDP TOSCA Translators

The following common translators are available in ONAP for use by developers. Each is used or extended by the standard PDP-X applications in ONAP.

StdCombinedPolicyResultsTranslator

A simple translator that wraps the TOSCA policy into a XACML policy and performs matching of the policy based on either policy-id and/or policy-type. The use of this translator is discouraged, as it behaves like a database call and does not take advantage of the fine-grained decision making features described by the XACML OASIS 3.0 standard. It is used to support backward compatibility of legacy “configure” policies.

Implementation of Combined Results Translator.

The Monitoring and Naming applications use this translator.

StdMatchableTranslator

A more robust translator that searches the metadata of TOSCA properties for a matchable field set to true. The translator then uses those “matchable” properties to translate a policy into a XACML OASIS 3.0 policy, which allows for fine-grained decision making such that ONAP applications can retrieve the appropriate policy(s) to be enforced during runtime.

Each of the properties designated as “matchable” is treated relative to the others as an “AND” during a decision request. In addition, each value of a “matchable” property that is an array is treated as an “OR”. The more properties specified in a decision request, the more fine-grained the returned policy will be. In addition, “policy-type” can be used in a decision request to further filter the decision results to a specific type of policy.

Implementation of Matchable Translator.

The Optimization application uses this translator.
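
To illustrate the AND/OR semantics, the sketch below builds a hypothetical decision resource: the two matchable properties must both match (AND), while the values inside the array are alternatives (OR). The property names are illustrative, not a fixed schema.

import java.util.List;
import java.util.Map;

public class MatchableRequestSketch {
    public static void main(String[] args) {
        // Both entries must match (AND); within "services", vCPE OR vFW matches.
        Map<String, Object> resource = Map.of(
                "geography", List.of("US"),
                "services", List.of("vCPE", "vFW"));
        System.out.println(resource);
    }
}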

GuardTranslator and CoordinationGuardTranslator

These two translators are used by the Guard application and are very specific to those Policy Types. They are good examples of how to build your own translator for a very specific implementation of a policy type. This can be the case if any of the Std* translators are not appropriate to use directly or to override for your application.

Implementation of Guard Translator

Implementation of Coordination Translator

Native XACML OASIS 3.0 XML Policy Translator

This translator pulls a URL encoded XML XACML policy from a TOSCA Policy and loads it into a XACML Engine. This allows native XACML policies to be used to support complex use cases in which a translation from TOSCA to XACML is too difficult.

Implementation of Native Policy Translator

Monitoring Policy Types

These Policy Types are used by Control Loop DCAE microservice components to support monitoring of VNF/PNF entities as part of a Control Loop implementation. The DCAE Platform makes a call to the Decision API to request the contents of these policies. The implementation involves creating an overarching XACML policy that contains the TOSCA policy as a payload, which is returned to the DCAE Platform.

The following policy types derive from onap.policies.Monitoring:

Derived Policy Type | Action | Description
onap.policies.monitoring.tcagen2 | configure | TCA DCAE microservice gen2 component
onap.policies.monitoring.dcaegen2.collectors.datafile.datafile-app-server | configure | REST Collector
onap.policies.monitoring.docker.sonhandler.app | configure | SON Handler microservice component

Note

The DCAE project deprecated the original TCA microservice in favor of its gen2 microservice. Thus, the policy type onap.policies.monitoring.cdap.tca.hi.lo.app was removed from the Policy Framework.

This is an example Decision API payload made to retrieve a decision for all deployed Monitoring Policies of a specific policy type.

{
  "ONAPName": "DCAE",
  "ONAPComponent": "PolicyHandler",
  "ONAPInstance": "622431a4-9dea-4eae-b443-3b2164639c64",
  "action": "configure",
  "resource": {
      "policy-type": "onap.policies.monitoring.tcagen2"
  }
}

This is an example Decision API payload made to retrieve a decision for a Monitoring Policy by id. This is not recommended, as users may change a policy’s id; it remains available for backward compatibility.

{
  "ONAPName": "DCAE",
  "ONAPComponent": "PolicyHandler",
  "ONAPInstance": "622431a4-9dea-4eae-b443-3b2164639c64",
  "action": "configure",
  "resource": {
      "policy-id": "onap.scaleout.tca"
  }
}

Guard and Control Loop Coordination Policy Types

These Policy Types are used by the Control Loop Drools Engine to guard control loop operations and to coordinate Control Loops during runtime execution.

Policy Type | Action | Description
onap.policies.controlloop.guard.common.FrequencyLimiter | guard | Limits frequency of actions over a specified time period
onap.policies.controlloop.guard.common.Blacklist | guard | Blacklists a regexp of VNF IDs
onap.policies.controlloop.guard.common.MinMax | guard | For scaling, enforces a min/max number of VNFs
onap.policies.controlloop.guard.common.Filter | guard | Used for filtering entities in A&AI from Control Loop actions
onap.policies.controlloop.guard.coordination.FirstBlocksSecond | guard | Gives priority to one control loop vs another

This is an example Decision API payload made to retrieve a decision for a Guard Policy Type.

{
  "ONAPName": "Policy",
  "ONAPComponent": "drools-pdp",
  "ONAPInstance": "usecase-template",
  "requestId": "unique-request-id-1",
  "action": "guard",
  "resource": {
      "guard": {
          "actor": "SO",
          "operation": "VF Module Create",
          "clname": "ControlLoop-vDNS-6f37f56d-a87d-4b85-b6a9-cc953cf779b3",
          "target": "vLoadBalancer-00",
          "vfCount": "1"
      }
  }
}

The returned decision simply contains “Permit” or “Deny” in the response to tell the calling application whether it is allowed to perform the operation.

{"status":"Permit"}
Guard Common Base Policy Type

Each guard Policy Type derives from the onap.policies.controlloop.guard.Common base policy type, so they all share a set of common properties.

Common Properties for all Guards

Property | Examples | Required | Type | Description
actor | APPC, SO | Required | String | Identifies the actor involved in the Control Loop operation.
operation | Restart, VF Module Create | Required | String | Identifies the Control Loop operation the actor must perform.
timeRange | start_time: T00:00:00Z end_time: T08:00:00Z | Optional | tosca.datatypes.TimeInterval | The time range in which the guard is in effect. Per the TOSCA specification, the format should be ISO 8601.
id | control-loop-id | Optional | String | A specific Control Loop id for which the guard is in effect.

Common Guard Policy Type

Frequency Limiter Guard Policy Type

The Frequency Limiter Guard is used to specify limits as to how many operations can occur over a given time period.

Frequency Guard Properties

Property | Examples | Required | Type | Description
timeWindow | 10, 60 | Required | integer | The time window to count the actions against.
timeUnits | second, minute, hour, day, week, month, year | Required | String | The units of time for the window being counted.
limit | 5 | Required | integer | The limit value to be checked against.

tosca_definitions_version: tosca_simple_yaml_1_1_0
topology_template:
  policies:
    -
      guard.frequency.scaleout:
        type: onap.policies.controlloop.guard.common.FrequencyLimiter
        type_version: 1.0.0
        version: 1.0.0
        name: guard.frequency.scaleout
        description: Here we limit the number of Restarts for my-controlloop to 3 in a ten minute period.
        metadata:
          policy-id : guard.frequency.scaleout
        properties:
          actor: APPC
          operation: Restart
          id: my-controlloop
          timeWindow: 10
          timeUnits: minute
          limit: 3

Frequency Limiter Guard Policy Type

Min/Max Guard Policy Type

The Min/Max Guard is used to specify a minimum and/or maximum number of instantiated entities in A&AI; typically this is a VF Module for scaling operations. Specify a min value, a max value, or both; at least one must be specified.

Min/Max Guard Properties

Property | Examples | Required | Type | Description
target | e6130d03-56f1-4b0a-9a1d-e1b2ebc30e0e | Required | String | The target entity that has scaling restricted.
min | 1 | Optional | integer | Minimum value; may be omitted only if max is specified.
max | 5 | Optional | integer | Maximum value; may be omitted only if min is specified.

tosca_definitions_version: tosca_simple_yaml_1_1_0
topology_template:
    policies:
    -   guard.minmax.scaleout:
            type: onap.policies.controlloop.guard.common.MinMax
            type_version: 1.0.0
            version: 1.0.0
            name: guard.minmax.scaleout
            metadata:
                policy-id: guard.minmax.scaleout
            properties:
                actor: SO
                operation: VF Module Create
                id: my-controlloop
                target: the-vfmodule-id
                min: 1
                max: 2

Min/Max Guard Policy Type

Blacklist Guard Policy Type

The Blacklist Guard is used to specify a list of A&AI entities that are blacklisted from having an operation performed on them. It is recommended to use the vnf-id for the A&AI entity.

Blacklist Guard Properties

Property | Examples | Required | Type | Description
blacklist | e6130d03-56f1-4b0a-9a1d-e1b2ebc30e0e | Required | list of string | List of target entities that are blacklisted from an operation.

tosca_definitions_version: tosca_simple_yaml_1_1_0
topology_template:
    policies:
    -   guard.blacklist.scaleout:
            type: onap.policies.controlloop.guard.common.Blacklist
            type_version: 1.0.0
            version: 1.0.0
            name: guard.blacklist.scaleout
            metadata:
                policy-id: guard.blacklist.scaleout
            properties:
                actor: APPC
                operation: Restart
                id: my-controlloop
                blacklist:
                - vnf-id-1
                - vnf-id-2

Blacklist Guard Policy Type

Filter Guard Policy Type

The Filter Guard is a more robust guard for blacklisting and whitelisting A&AI entities when performing control loop operations. The intent of this guard is to filter in or out a block of entities, while still allowing specific entities to be filtered in or out. This allows a DevOps team to control the introduction of a Control Loop for a region or for specific VNFs, as well as to block specific VNFs that are being negatively affected when poor network conditions arise. Care and testing should be taken to understand the ramifications of combining multiple filters, as well as of their use in conjunction with other Guard Policy Types.

Filter Guard Properties

Property | Examples | Required | Type | Description
algorithm | blacklist-overrides | Required | String | blacklist-overrides or whitelist-overrides are the valid values; indicates whether blacklisting or whitelisting has precedence.
filters | see table below | Required | list of onap.datatypes.guard.filter | List of datatypes that describe the filter.

Filter Guard onap.datatypes.guard.filter Properties

Property | Examples | Required | Type | Description
field | generic-vnf.vnf-name | Required | String | The field to filter on; must be a string value. See the Policy Type below for valid values.
filter | vnf-id-1 | Required | String | The filter being applied.
function | string-equal | Required | String | The function that is applied to the filter. See the Policy Type below for valid values.
blacklist | true | Required | boolean | Whether the result of the filter function applied to the filter is blacklisted or whitelisted (e.g. Deny or Permit).

tosca_definitions_version: tosca_simple_yaml_1_1_0
topology_template:
   policies:
   -  filter.block.region.allow.one.vnf:
         description: Block this region from Control Loop actions, but allow a specific vnf.
         type: onap.policies.controlloop.guard.common.Filter
         type_version: 1.0.0
         version: 1.0.0
         properties:
            actor: SO
            operation: VF Module Create
            algorithm: whitelist-overrides
            filters:
            -  field: cloud-region.cloud-region-id
               filter: RegionOne
               function: string-equal
               blacklist: true
            -  field: generic-vnf.vnf-id
               filter: e6130d03-56f1-4b0a-9a1d-e1b2ebc30e0e
               function: string-equal
               blacklist: false
   -  filter.allow.region.block.one.vnf:
         description: allow this region to do Control Loop actions, but block a specific vnf.
         type: onap.policies.controlloop.guard.common.Filter
         type_version: 1.0.0
         version: 1.0.0
         properties:
            actor: SO
            operation: VF Module Create
            algorithm: blacklist-overrides
            filters:
            -  field: cloud-region.cloud-region-id
               filter: RegionTwo
               function: string-equal
               blacklist: false
            -  field: generic-vnf.vnf-id
               filter: f17face5-69cb-4c88-9e0b-7426db7edddd
               function: string-equal
               blacklist: true

Filter Guard Policy Type

Optimization Policy Types

These Policy Types are designed to be used by the OOF project to support several domains, including VNF placement, in ONAP. The OOF platform makes a call to the Decision API to request these policies based on the values specified in the onap.policies.Optimization properties. Each of these properties is treated relative to the others as an “AND”. In addition, each value within a property is itself treated as an “OR”.

Policy Type | Action
onap.policies.Optimization | optimize
onap.policies.optimization.Service | optimize
onap.policies.optimization.Resource | optimize
onap.policies.optimization.resource.AffinityPolicy | optimize
onap.policies.optimization.resource.DistancePolicy | optimize
onap.policies.optimization.resource.HpaPolicy | optimize
onap.policies.optimization.resource.OptimizationPolicy | optimize
onap.policies.optimization.resource.PciPolicy | optimize
onap.policies.optimization.service.QueryPolicy | optimize
onap.policies.optimization.service.SubscriberPolicy | optimize
onap.policies.optimization.resource.Vim_fit | optimize
onap.policies.optimization.resource.VnfPolicy | optimize

The optimization application extends the StdMatchableTranslator in that the application applies a “closest match” algorithm internally after a XACML decision. This filters the results of the decision to return the policy or policies that match the incoming decision request as closely as possible. In addition, there is special consideration for the Subscriber Policy Type: if a decision request contains subscriber context attributes, the application internally applies an initial decision to retrieve the scope of the subscriber. The resulting scope attributes are then added into a final internal decision call.

This is an example Decision API payload made to retrieve a decision for an Optimization Policy Type.

{
  "ONAPName": "OOF",
  "ONAPComponent": "OOF-component",
  "ONAPInstance": "OOF-component-instance",
  "action": "optimize",
  "resource": {
      "scope": [],
      "services": ["vCPE"],
      "resources": ["vGMuxInfra", "vG"],
      "geography": ["US", "INTERNATIONAL"]
  }
}

Native XACML Policy Type

This Policy Type is used by any client or ONAP component that needs native XACML evaluation. A native XACML policy or policy set encoded in XML can be created from this policy type and loaded into the XACML PDP engine by invoking the PAP policy deployment API. Native XACML requests encoded in either JSON or XML can then be sent to the XACML PDP engine for evaluation by invoking the native decision API, and native XACML responses will be returned upon evaluating the requests against the matching XACML policies. These native XACML policies, policy sets, requests, and responses all follow the OASIS XACML 3.0 standard.

Policy Type | Action | Description
onap.policies.native.Xacml | native | any client or ONAP component

According to the XACML 3.0 specification, two content-types are supported and used to present the native requests/responses. They are formally defined as “application/xacml+json” and “application/xacml+xml”.

This is an example Native Decision API payload made to retrieve a decision for whether Julius Hibbert can read http://medico.com/record/patient/BartSimpson.

{
    "Request": {
        "ReturnPolicyIdList": false,
        "CombinedDecision": false,
        "AccessSubject": [
            {
                "Attribute": [
                    {
                        "IncludeInResult": false,
                        "AttributeId": "subject-id",
                        "Value": "Julius Hibbert"
                    }
                ]
            }
        ],
        "Resource": [
            {
                "Attribute": [
                    {
                        "IncludeInResult": false,
                        "AttributeId": "resource-id",
                        "Value": "http://medico.com/record/patient/BartSimpson",
                        "DataType": "anyURI"
                    }
                ]
            }
        ],
        "Action": [
            {
                "Attribute": [
                    {
                        "IncludeInResult": false,
                        "AttributeId": "action-id",
                        "Value": "read"
                    }
                ]
            }
        ],
        "Environment": []
    }
}

Match Policy Type

This Policy Type lets you design your own Policy Type that utilizes the StdMatchableTranslator, without having to build a custom application. You design your Policy Type by inheriting from the Match policy type (e.g. onap.policies.match.<YourPolicyType>) and adding a matchable metadata field set to true for the properties that you would like to request a decision on. A user then only needs to use the Policy Lifecycle API to add the Policy Type, create policies from it, and deploy those policies to the XACML PDP; decisions are then available without customizing the ONAP installation.

Here is an example Policy Type:

tosca_definitions_version: tosca_simple_yaml_1_1_0
policy_types:
   onap.policies.match.Test:
      derived_from: onap.policies.Match
      version: 1.0.0
      name: onap.policies.match.Test
      description: Test Matching Policy Type to test matchable policies
      properties:
         matchable:
            type: string
            metadata:
               matchable: true
            required: true
         nonmatchable:
            type: string
            required: true

Here are example Policies:

tosca_definitions_version: tosca_simple_yaml_1_1_0
topology_template:
    policies:
    -   test_match_1:
            type: onap.policies.match.Test
            version: 1.0.0
            type_version: 1.0.0
            name: test_match_1
            properties:
               matchable: foo
               nonmatchable: value1
    -   test_match_2:
            type: onap.policies.match.Test
            version: 1.0.0
            type_version: 1.0.0
            name: test_match_2
            properties:
               matchable: bar
               nonmatchable: value2

This is an example Decision API request that can be made:

{
  "ONAPName": "my-ONAP",
  "ONAPComponent": "my-component",
  "ONAPInstance": "my-instance",
  "requestId": "unique-request-1",
  "action": "match",
  "resource": {
      "matchable": "foo"
  }
}

Which would render the following decision response:

{
  "policies": {
    "test_match_1": {
      "type": "onap.policies.match.Test",
      "type_version": "1.0.0",
      "properties": {
        "matchable": "foo",
        "nonmatchable": "value1"
      },
      "name": "test_match_1",
      "version": "1.0.0",
      "metadata": {
        "policy-id": "test_match_1",
        "policy-version": "1.0.0"
      }
    }
  }
}

Supporting Your Own Policy Types and Translators

To support your own custom Policy Type in the XACML PDP Engine, you need to build a Java service application that extends the XacmlApplicationServiceProvider interface and implements a ToscaPolicyTranslator. Your application should register itself as a Java service and be exposed on the classpath so it can be loaded into the ONAP XACML PDP Engine. Ensure you define and create the TOSCA Policy Type according to the Policy Design and Development documentation. You should be able to load your custom Policy Type using the Policy Lifecycle API; once that succeeds, you can start creating policies from your custom Policy Type.

XacmlApplicationServiceProvider

Interface for XacmlApplicationServiceProvider

See each of the ONAP Policy Type application implementations which re-use the StdXacmlApplicationServiceProvider class. This implementation can be used as a basis for your own custom applications.

Standard Application Service Provider implementation

ToscaPolicyTranslator

Your custom XacmlApplicationServiceProvider must provide an implementation of a ToscaPolicyTranslator.

Interface for ToscaPolicyTranslator

See each of the ONAP Policy Type application implementations, which each have their own ToscaPolicyTranslator. Most use or extend the StdBaseTranslator.

Standard Tosca Policy Translator implementation: https://github.com/onap/policy-xacml-pdp/blob/master/applications/common/src/main/java/org/onap/policy/pdp/xacml/application/common/std/StdBaseTranslator.java

XACML Application and Enforcement Tutorials

The following tutorials can be helpful to get started on building your own decision application as well as building enforcement into your application.

Policy XACML - Custom Application Tutorial

This tutorial shows how to build a XACML application for a Policy Type. Please be sure to clone the policy repositories before going through the tutorial. See Policy Platform Development Tools for details.

Design a Policy Type

Follow the TOSCA Policy Primer for more information. For the tutorial, we will use this example Policy Type, in which an ONAP PEP client would like to enforce an authorize action for a user to execute a permission on an entity. See here for the latest Tutorial Policy Type.

Example Tutorial Policy Type
tosca_definitions_version: tosca_simple_yaml_1_1_0
policy_types:
    onap.policies.Authorization:
        derived_from: tosca.policies.Root
        version: 1.0.0
        description: Example tutorial policy type for doing user authorization
        properties:
            user:
                type: string
                required: true
                description: The unique user name
            permissions:
                type: list
                required: true
                description: A list of resource permissions
                entry_schema:
                    type: onap.datatypes.Tutorial
data_types:
    onap.datatypes.Tutorial:
        derived_from: tosca.datatypes.Root
        version: 1.0.0
        properties:
            entity:
                type: string
                required: true
                description: The resource
            permission:
                type: string
                required: true
                description: The permission level
                constraints:
                    - valid_values: [read, write, delete]

We would then expect to be able to create the following policies, which allow the demo user to read/write an entity called foo, while the audit user can only read the entity called foo. Neither user has delete permission. See here for the latest Tutorial Policies.

Example Policies Derived From Tutorial Policy Type
tosca_definitions_version: tosca_simple_yaml_1_1_0
topology_template:
    policies:
        -
            onap.policy.tutorial.demo:
                type: onap.policies.Authorization
                type_version: 1.0.0
                version: 1.0.0
                metadata:
                    policy-id: onap.policy.tutorial.demo
                    policy-version: 1
                properties:
                    user: demo
                    permissions:
                        -
                            entity: foo
                            permission: read
                        -
                            entity: foo
                            permission: write
        -
            onap.policy.tutorial.audit:
                type: onap.policies.Authorization
                version: 1.0.0
                type_version: 1.0.0
                metadata:
                    policy-id: onap.policy.tutorial.bar
                    policy-version: 1
                properties:
                    user: audit
                    permissions:
                        -
                            entity: foo
                            permission: read
Design Decision Request and expected Decision Response

For the PEP (Policy Enforcement Point) client applications that call the Decision API, you need to design how the Decision API Request resource fields will be sent via the PEP.

Example Decision Request
{
  "ONAPName": "TutorialPEP",
  "ONAPComponent": "TutorialPEPComponent",
  "ONAPInstance": "TutorialPEPInstance",
  "requestId": "unique-request-id-tutorial",
  "action": "authorize",
  "resource": {
    "user": "demo",
    "entity": "foo",
    "permission" : "write"
  }
}

For simplicity, this tutorial expects only a Permit or Deny in the Decision Response. However, one could customize the Decision Response object and send back whatever information is desired.

Example Decision Response
{
    "status":"Permit"
}
Create A Maven Project

Use whatever tool or environment you prefer to create your application project. This tutorial assumes you use Maven to build it.

Add Dependencies Into Application pom.xml

Here we import the XACML PDP Application common dependency which has the interfaces we need to implement. In addition, we are importing a testing dependency that has common code for producing a JUnit test.

pom.xml dependencies
  <dependency>
    <groupId>org.onap.policy.xacml-pdp.applications</groupId>
    <artifactId>common</artifactId>
    <version>2.3.3</version>
  </dependency>
  <dependency>
    <groupId>org.onap.policy.xacml-pdp</groupId>
    <artifactId>xacml-test</artifactId>
    <version>2.3.3</version>
    <scope>test</scope>
  </dependency>
Create META-INF to expose Java Service

The ONAP XACML PDP Engine will not be able to find the tutorial application unless there is a file located in src/main/resources/META-INF/services declaring the class that implements the service.

The name of the file must be org.onap.policy.pdp.xacml.application.common.XacmlApplicationServiceProvider, and its contents must be the single line org.onap.policy.tutorial.tutorial.TutorialApplication.

META-INF/services/org.onap.policy.pdp.xacml.application.common.XacmlApplicationServiceProvider
  org.onap.policy.tutorial.tutorial.TutorialApplication
Create A Java Class That Extends StdXacmlApplicationServiceProvider

You could implement XacmlApplicationServiceProvider directly if you wish, but for simplicity, extending StdXacmlApplicationServiceProvider gives you a lot of the implementation up front. All that remains to implement is providing a custom translator.

Custom Tutorial Application Service Provider
package org.onap.policy.tutorial.tutorial;

import org.onap.policy.pdp.xacml.application.common.ToscaPolicyTranslator;
import org.onap.policy.pdp.xacml.application.common.std.StdXacmlApplicationServiceProvider;

public class TutorialApplication extends StdXacmlApplicationServiceProvider {

      @Override
      protected ToscaPolicyTranslator getTranslator(String type) {
              // TODO Auto-generated method stub
              return null;
      }

}
Override Methods for Tutorial

Override these methods to differentiate Tutorial from other applications so that the XACML PDP Engine can determine how to route policy types and policies to the application.

Custom Tutorial Application Service Provider
package org.onap.policy.tutorial.tutorial;

import java.util.Arrays;
import java.util.List;

import org.onap.policy.models.tosca.authorative.concepts.ToscaPolicyTypeIdentifier;
import org.onap.policy.pdp.xacml.application.common.ToscaPolicyTranslator;
import org.onap.policy.pdp.xacml.application.common.std.StdXacmlApplicationServiceProvider;

public class TutorialApplication extends StdXacmlApplicationServiceProvider {

  private final ToscaPolicyTypeIdentifier supportedPolicyType = new ToscaPolicyTypeIdentifier();

  @Override
  public String applicationName() {
      return "tutorial";
  }

  @Override
  public List<String> actionDecisionsSupported() {
      return Arrays.asList("authorize");
  }

  @Override
  public synchronized List<ToscaPolicyTypeIdentifier> supportedPolicyTypes() {
      return Arrays.asList(supportedPolicyType);
  }

  @Override
  public boolean canSupportPolicyType(ToscaPolicyTypeIdentifier policyTypeId) {
      return supportedPolicyType.equals(policyTypeId);
  }

  @Override
      protected ToscaPolicyTranslator getTranslator(String type) {
      // TODO Auto-generated method stub
      return null;
  }

}
Create A Translation Class that extends the ToscaPolicyTranslator Class

Please be sure to review the existing translators in the policy/xacml-pdp repo to see if they could be re-used for your policy type. For the tutorial, we will create our own translator.

The custom translator is not only responsible for translating Policies derived from the Tutorial Policy Type, but also for translating Decision API Requests/Responses to/from the appropriate XACML requests/response objects the XACML engine understands.

Custom Tutorial Translator Class
package org.onap.policy.tutorial.tutorial;

import org.onap.policy.models.decisions.concepts.DecisionRequest;
import org.onap.policy.models.decisions.concepts.DecisionResponse;
import org.onap.policy.models.tosca.authorative.concepts.ToscaPolicy;
import org.onap.policy.pdp.xacml.application.common.ToscaPolicyConversionException;
import org.onap.policy.pdp.xacml.application.common.ToscaPolicyTranslator;

import com.att.research.xacml.api.Request;
import com.att.research.xacml.api.Response;

import oasis.names.tc.xacml._3_0.core.schema.wd_17.PolicyType;

public class TutorialTranslator implements ToscaPolicyTranslator {

  public PolicyType convertPolicy(ToscaPolicy toscaPolicy) throws ToscaPolicyConversionException {
      // TODO Auto-generated method stub
      return null;
  }

  public Request convertRequest(DecisionRequest request) {
      // TODO Auto-generated method stub
      return null;
  }

  public DecisionResponse convertResponse(Response xacmlResponse) {
      // TODO Auto-generated method stub
      return null;
  }

}
Implement the TutorialTranslator Methods

This is the part where knowledge of the XACML OASIS 3.0 specification is required. Please refer to that specification for the many ways to design a XACML policy.

For the tutorial, we will build code that translates the TOSCA policy into one XACML policy that matches on the user and action. It will then have one or more rules for each entity and permission combination. The combining algorithm used for the XACML rules is “Deny Unless Permit”.
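As a rough sketch only (the real code is in the tutorial example linked below), the skeleton of convertPolicy could look like this; PolicyType is the XACML JAXB class already imported in the translator skeleton above, and the combining-algorithm URN is taken from the XACML 3.0 standard:

public PolicyType convertPolicy(ToscaPolicy toscaPolicy) throws ToscaPolicyConversionException {
    // Sketch only: build a XACML policy whose rules are combined as "Deny Unless Permit"
    PolicyType policy = new PolicyType();
    policy.setPolicyId(toscaPolicy.getName());
    policy.setVersion(toscaPolicy.getVersion());
    policy.setRuleCombiningAlgId(
            "urn:oasis:names:tc:xacml:3.0:rule-combining-algorithm:deny-unless-permit");
    // ... then add a target matching the user and action, and one rule
    // per entity/permission combination found in the TOSCA properties.
    return policy;
}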

See the tutorial example for details on how the translator is implemented.

Note

There are many ways to build the policy based on the attributes. How to do so is a matter of experience and fine tuning using the many options for combining algorithms, target and/or condition matching and the rich set of functions available.

Use the TutorialTranslator in the TutorialApplication

Be sure to go back to the TutorialApplication and create an instance of the translator to return to the StdXacmlApplicationServiceProvider. The StdXacmlApplicationServiceProvider uses the translator to convert a policy when a new policy is deployed to the ONAP XACML PDP Engine. See the Tutorial Application Example.

Final TutorialApplication Class
package org.onap.policy.tutorial.tutorial;

import java.util.Arrays;
import java.util.List;
import org.onap.policy.models.tosca.authorative.concepts.ToscaPolicyTypeIdentifier;
import org.onap.policy.pdp.xacml.application.common.ToscaPolicyTranslator;
import org.onap.policy.pdp.xacml.application.common.std.StdXacmlApplicationServiceProvider;

public class TutorialApplication extends StdXacmlApplicationServiceProvider {

    private final ToscaPolicyTypeIdentifier supportedPolicyType =
            new ToscaPolicyTypeIdentifier("onap.policies.Authorization", "1.0.0");
    private final TutorialTranslator translator = new TutorialTranslator();

    @Override
    public String applicationName() {
        return "tutorial";
    }

    @Override
    public List<String> actionDecisionsSupported() {
        return Arrays.asList("authorize");
    }

    @Override
    public synchronized List<ToscaPolicyTypeIdentifier> supportedPolicyTypes() {
        return Arrays.asList(supportedPolicyType);
    }

    @Override
    public boolean canSupportPolicyType(ToscaPolicyTypeIdentifier policyTypeId) {
        return supportedPolicyType.equals(policyTypeId);
    }

    @Override
    protected ToscaPolicyTranslator getTranslator(String type) {
        return translator;
    }

}
Create a XACML Request from ONAP Decision Request

The easiest way to do this is to use the annotation feature of the XACML PDP library to create an example XACML request, then create an instance and simply populate it from an incoming ONAP Decision Request.

See the Tutorial Request
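A hedged sketch of such an annotated request class (the real class is the tutorial’s TutorialRequest; field names here mirror the Decision Request resource fields, and the annotations come from the com.att.research.xacml library):

package org.onap.policy.tutorial.tutorial;

import com.att.research.xacml.api.Request;
import com.att.research.xacml.std.annotations.RequestParser;
import com.att.research.xacml.std.annotations.XACMLAction;
import com.att.research.xacml.std.annotations.XACMLRequest;
import com.att.research.xacml.std.annotations.XACMLResource;
import com.att.research.xacml.std.annotations.XACMLSubject;

@XACMLRequest
public class TutorialRequestSketch {

    @XACMLSubject
    private String user;        // populated from the "user" resource field

    @XACMLAction
    private String action;      // "authorize"

    @XACMLResource
    private String entity;      // populated from the "entity" resource field

    @XACMLResource
    private String permission;  // populated from the "permission" resource field

    // getters/setters omitted for brevity

    // Parse the populated object into a XACML Request for the engine
    public static Request toXacmlRequest(TutorialRequestSketch sketch) throws Exception {
        return RequestParser.parseRequest(sketch);
    }
}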

Create a JUnit and use the TestUtils.java class in xacml-test dependency

Be sure to create a JUnit that will test your translator and application code. You can utilize the TestUtils.java class from the policy/xacml-pdp repo’s xacml-test submodule, which provides utility methods for building the JUnit test.
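A minimal JUnit sketch, not using TestUtils and assuming the implemented TutorialTranslator from the previous steps, might look like this:

package org.onap.policy.tutorial.tutorial;

import static org.junit.Assert.assertNotNull;

import java.util.Map;
import org.junit.Test;
import org.onap.policy.models.decisions.concepts.DecisionRequest;

public class TutorialTranslatorTest {

    @Test
    public void testConvertRequest() {
        // Build a Decision Request equivalent to the tutorial's example payload
        DecisionRequest request = new DecisionRequest();
        request.setAction("authorize");
        Map<String, Object> resource = Map.of("user", "demo", "entity", "foo", "permission", "write");
        request.setResource(resource);

        // The implemented translator should produce a non-null XACML request
        assertNotNull(new TutorialTranslator().convertRequest(request));
    }
}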

Build the code and run the JUnit test. It is easiest to run it via the command line in a terminal using Maven commands.

Running Maven Commands
> mvn clean install
Building Docker Image

To build a Docker image that incorporates your application with the XACML PDP Engine, the XACML PDP Engine must be able to find your Java service in the classpath. This is easy to do: create a jar file for your application and copy it into the same directory used to start up the XACML PDP.

Here is a Dockerfile as an example:

Dockerfile
FROM onap/policy-xacml-pdp

ADD maven/${project.build.finalName}.jar /opt/app/policy/pdpx/lib/${project.build.finalName}.jar

RUN mkdir -p /opt/app/policy/pdpx/apps/tutorial

COPY --chown=policy:policy xacml.properties /opt/app/policy/pdpx/apps/tutorial
Download Tutorial Application Example

If you clone the XACML-PDP repo, the tutorial is included for local testing without building your own.

Tutorial code located in xacml-pdp repo

There is an example Docker compose script that you can use to run the Policy Framework components locally and test the tutorial out.

Docker compose script

In addition, there is a POSTMAN collection available for setting up and running tests against a running instance of ONAP Policy Components (api, pap, dmaap-simulator, tutorial-xacml-pdp).

POSTMAN collection for testing

Policy XACML - Policy Enforcement Tutorial

This tutorial shows how to build Policy Enforcement into your application. Please be sure to clone the policy repositories before going through the tutorial. See Policy Platform Development Tools for details.

This tutorial can be found in the XACML PDP repository. See the tutorial

Policy Type being Enforced

For this tutorial, we will be enforcing a Policy Type that inherits from the onap.policies.Monitoring Policy Type. This Policy Type is used by DCAE analytics for configuration purposes. Any inherited Policy Type is automatically supported by the XACML PDP for Decisions.

See the latest example Policy Type

Example Policy Type
  tosca_definitions_version: tosca_simple_yaml_1_1_0
  policy_types:
     onap.policies.Monitoring:
        derived_from: tosca.policies.Root
        version: 1.0.0
        name: onap.policies.Monitoring
        description: a base policy type for all policies that govern monitoring provisioning
     onap.policies.monitoring.MyAnalytic:
        derived_from: onap.policies.Monitoring
        version: 1.0.0
        description: Example analytic
        properties:
           myProperty:
              type: string
              required: true
Example Policy

See the latest example policy

Example Policy
  tosca_definitions_version: tosca_simple_yaml_1_1_0
  topology_template:
     policies:
       -
         policy1:
             type: onap.policies.monitoring.MyAnalytic
             type_version: 1.0.0
             version: 1.0.0
             name: policy1
             metadata:
               policy-id: policy1
               policy-version: 1.0.0
             properties:
               myProperty: value1
Example Decision Requests and Responses

For onap.policies.Monitoring Policy Types, the action used is configure. For configure actions, you can specify a resource by policy-id or policy-type. We recommend using policy-type, as a given policy-id may not necessarily be deployed. In addition, your application should request all the available policies for the policy-type it is enforcing.

Example Decision Request
  {
    "ONAPName": "myName",
    "ONAPComponent": "myComponent",
    "ONAPInstance": "myInstanceId",
    "requestId": "1",
    "action": "configure",
    "resource": {
        "policy-type": "onap.policies.monitoring.MyAnalytic"
    }
  }

The configure action will return a payload containing your full policy:
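For the example policy above, the response should look similar to this (based on the decision response format shown earlier):

{
  "policies": {
    "policy1": {
      "type": "onap.policies.monitoring.MyAnalytic",
      "type_version": "1.0.0",
      "properties": {
        "myProperty": "value1"
      },
      "name": "policy1",
      "version": "1.0.0",
      "metadata": {
        "policy-id": "policy1",
        "policy-version": "1.0.0"
      }
    }
  }
}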

Making Decision Call in your Application

Your application should be able to make a RESTful API call to the XACML PDP Decision API endpoint. If you have code that does this already, then utilize it to do something similar to the following curl command:
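This is a sketch only; the host, port, and credentials are assumptions that depend on your deployment, and decision-request.json stands for a file containing a Decision Request payload like the one above:

curl -k --user 'user:password' -X POST \
  -H "Content-Type: application/json" \
  -d @decision-request.json \
  https://policy-xacml-pdp:6969/policy/pdpx/v1/decision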

If your application does not have REST HTTP client code, you can use common code available in the policy/common repository for making HTTP calls.

Also, if your application wants to use common code to serialize/deserialize Decision Requests and Responses, you can include the following dependency:
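The Decision Request/Response classes used in the tutorial live under org.onap.policy.models.decisions.concepts in the policy/models repo; a dependency along these lines should pull them in (the version property is an assumption and depends on your ONAP release):

pom.xml dependency
  <dependency>
    <groupId>org.onap.policy.models</groupId>
    <artifactId>policy-models-decisions</artifactId>
    <!-- set to the policy-models version matching your ONAP release -->
    <version>${policy.models.version}</version>
  </dependency>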

Responding to Policy Update Notifications

Your application should also be able to respond to Policy Update Notifications that are published on the DMaaP topic POLICY-NOTIFICATION. If a user pushes an updated policy, your application should be able to dynamically start enforcing that policy without a restart.

If your application does not have DMaaP client code, you can use code available in policy/common to receive DMaaP events:
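A sketch of that dependency, assuming the policy-endpoints module of policy/common (the artifact name and version property are assumptions; check your release for exact coordinates):

pom.xml dependency
  <dependency>
    <groupId>org.onap.policy.common</groupId>
    <artifactId>policy-endpoints</artifactId>
    <!-- set to the policy-common version matching your ONAP release -->
    <version>${policy.common.version}</version>
  </dependency>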

To parse the JSON sent over the topic, your application can use the following dependency:
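Assuming the notification classes come from the policy/models PAP concepts (again, the coordinates and version property are assumptions; check your release), the dependency would look roughly like this:

pom.xml dependency
  <dependency>
    <groupId>org.onap.policy.models</groupId>
    <artifactId>policy-models-pap</artifactId>
    <!-- set to the policy-models version matching your ONAP release -->
    <version>${policy.models.version}</version>
  </dependency>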

Policy APEX PDP Engine

A short Introduction to APEX

Introduction to APEX

APEX stands for Adaptive Policy EXecution. It is a lightweight engine for the execution of policies. APEX allows you to specify logic as a policy, logic that you can adapt on the fly as your system executes. The APEX policies you design can be really simple, with a single snippet of logic, or very complex, with many states and tasks. APEX policies can even be designed to self-adapt at execution time; the choice is yours!

Simple APEX Overview

Figure 1. Simple APEX Overview

The Adaptive Policy Engine in APEX runs your policies. These policies are triggered by incoming events. The logic of the policies executes and produces a response event. The Incoming Context on the incoming event and the Outgoing Context on the outgoing event are simply the fields and attributes of the event. You design the policies that APEX executes and the trigger and action events that your policies accept and produce. Events are fed in and sent out as JSON or XML events over Kafka, a Websocket, a file or named pipe, or even standard input. If you run APEX as a library in your application, you can even feed and receive events over a Java API.

APEX States and Context

Figure 2. APEX States and Context

You design your policy as a chain of states, with each state being fed by the state before. The simplest policy can have just one state. We provide specific support for the four-state MEDA (Match Establish Decide Act) policy state model and the three-state ECA (Event Condition Action) policy state model. APEX is fully distributed. You can decide how many APEX engine instances to run for your application and on which real or virtual hosts to run them.

In APEX, you also have control of the Context used by your policies. Context is simply the state information and data used by your policies. You define what context your policies use and what the scope of that context is. Policy Context is private to a particular policy and is accessible only to whatever APEX engines are running that particular policy. Global Context is available to all policies. External Context is read-only context such as weather or topology information that is provided by other systems. APEX keeps context coordinated across all the instances running a particular policy. If a policy running in an APEX engine changes the value of a piece of context, that value is available to all other APEX engines that use that piece of context. APEX takes care of distribution, locking, writing of context to persistent storage, and monitoring of context.

The APEX Eco-System

Figure 3. The APEX Eco-System

The APEX engine (AP-EN) is available as a Java library for inclusion in your application, as a microservice running in a Docker container, or as a stand-alone service available for integration into your system. APEX also includes a policy editor (AP-AUTH) that allows you to design your policies and a web-based policy management console you use to deploy policies and to keep track of the state of policies and context in policies. Context handling (AP-CTX) is integrated into the APEX engine and policy deployment (AP-DEP) is provided as a servlet running under a web framework such as Apache Tomcat.

APEX Configuration

An APEX engine can be configured to use various combinations of event input handlers, event output handlers, event protocols, context handlers, and logic executors. The system is built using a plugin architecture. Each configuration option is realized by a plugin, which can be loaded and configured when the engine is started. New plugins can be added to the system at any time, though to benefit from a new plugin, an engine will need to be restarted.

APEX Configuration Matrix

Figure 4. APEX Configuration Matrix

The APEX distribution already comes with a number of plugins. The figure above shows the provided plugins. Any combination of input, output, event protocol, context handlers, and executors is possible.

APEX Policy Matrix

APEX offers a lot of flexibility for defining, deploying, and executing policies. Based on a theoretical model, it supports virtually any policy model and supports translation of legacy policies into the APEX execution format. However, the most important aspect of using APEX is to decide what policy is needed, what underlying policy concepts should be used, and how the decision logic should be realized. Once these aspects are decided, APEX can be used to execute the policies. If the policy evolves, say from a simple decision table to a fully adaptable policy, only the policy definition requires change. APEX supports all of that.

The figure below shows a (non-exhaustive) matrix, which will help to decide what policy is required to solve your problem. Read the matrix from left to right choosing one cell in each column.

APEX Policy Matrix

Figure 5. APEX Policy Matrix

The policy can support one of a number of stimuli with an associated purpose/model of the policy, for instance:

  • Configuration, i.e. what should happen. An example is an event that states an intended network configuration and the policy should provide the detailed actions for it. The policy can be realized for instance as an obligation policy, a promise or an intent.

  • Report, i.e. something did happen. An example is an event about an error or fault and the policy needs to repair that problem. The policy would usually be an obligation, utility function, or goal policy.

  • Monitoring, i.e. something does happen. An example is a notification about certain network conditions, to which the policy might (or might not) react. The policy will mitigate the monitored events or permit (deny) related actions as an obligation or authorization.

  • Analysis, i.e. why did something happen. An example is an analytics component sending insights about a situation that requires a policy to act on it. The policy can solve the problem, escalate it, or delegate it as a refrain or delegation policy.

  • Prediction, i.e. what will happen next. An example is a set of events that a policy uses to predict a future network condition. The policy can prevent or enforce the prediction as an adaptive policy, a utility function, or a goal.

  • Feedback, i.e. why did something happen or not happen. Similar to analysis, but here the feedback will be in the input event and the policy needs to do something with that information. Feedback can be related to history or experience, for instance a previous policy execution. The policy needs to be context-aware or be a meta-policy.

Once the purpose of the policy is decided, the next step is to look into what context information the policy will require to do its job. This can range from very simple to a lot of different information, for instance:

  • No context, nothing but a trigger event, e.g. a string or a number, is required

  • Event context, the incoming event provides all information (more than a string or number) for the policy

  • Policy context (read only), the policy has access to additional information related to its class but cannot change/alter them

  • Policy context (read and write), the policy has access to additional information related to its class and can alter this information (for instance to record historic information)

  • Global context (read only), the policy has access to additional information of any kind but cannot change/alter them

  • Global context (read and write), the policy has access to additional information of any kind and can alter this information (for instance to record historic information)

The next step is to decide how the policy should do its job, i.e. what flavor it has, how many states are needed, and how many tasks. There are many possible combinations, for instance:

  • Simple / God: a simple policy with 1 state and 1 task, which does everything for the decision-making. This is the ideal policy for simple situations, e.g. deciding on configuration parameters or simple access control.

  • Simple sequence: a simple policy with a number of states each having a single task. This is a very good policy for simple decision-making with different steps. For instance, a classic action policy (ECA) would have 3 states (E, C, and A) with some logic (1 task) in each state.

  • Simple selective: a policy with 1 state but more than one task. Here, the appropriate task (and its logic) will be selected at execution time. This policy is very good for dealing with similar (or the same) situations in different contexts. For instance, the tasks can be related to available external software, to the current workload on the compute node, or to the time of day.

  • Selective: any number of states having any number of tasks (usually more than 1 task). This is a combination of the two policies above, for instance an ECA policy with more than one task in E, C, and A.

  • Classic directed: a policy with more than one state, each having one task, but a non-sequential execution. This means that the sequence of the states is not pre-defined in the policy (as would be for all cases above) but calculated at runtime. This can be good to realize decision trees based on contextual information.

  • Super Adaptive: using the full potential of the APEX policy model, states and tasks and state execution are fully flexible and calculated at runtime (per policy execution). This policy is very close to a general programming system (with only a few limitations), but can solve very hard problems.

The final step is to select a response that the policy creates. Possible responses have been discussed in the literature for a very long time. A few examples are:

  • Obligation (deontic for what should happen)

  • Authorization (e.g. for rule-based or other access control or security systems)

  • Intent (instead of providing detailed actions the response is an intent statement and a further system processes that)

  • Delegation (hand the problem over to someone else, possibly with some information or instructions)

  • Fail / Error (the policy has encountered a problem, and reports it)

  • Feedback (why did the policy make a certain decision)

Flexible Deployment

APEX can be deployed in various ways. The following figure shows a few of these deployment options. Engine and (policy) executors are named UPe (universal policy engine, APEX engine) and UPx (universal policy executor, the APEX internal state machine executor).

APEX Deployment Options

Figure 6. APEX Deployment Options

  1. For an interface or class

    • Either UPx or UPe as association

  2. For an application

    • UPx as object for single policies

    • UPe as object for multiple policies

  3. For a component (as service)

    • UPe as service for requests

    • UPec as service for requests

  4. As a service (PolaS)

    • One or more UPe with service i/f

    • One or more UPec with service i/f

  5. In a control loop

    • UPe as decision making part

    • UPec as decision making part

  6. On cloud compute nodes

    • Nodes with only UPe or UPec

    • Nodes with any combination of UPe, UPec

  7. A cloud example

    • Left: 2 UPec managing several UPe on different cloud nodes

    • Right: 2 large UPec with different UPe/UPec deployments

Flexible Clustering

APEX can be clustered in various ways. The following figure shows a few of these clustering options. Cluster, engine and (policy) executors are named UPec (universal policy cluster), UPe (universal policy engine, APEX engine) and UPx (universal policy executor, the APEX internal state machine executor).

APEX Clustering Options

Figure 7. APEX Clustering Options

  1. Single source/target, single UPx

    • Simple forward

  2. Multiple sources/targets, single UPx

    • Simple forward

  3. Single source/target, multiple UPx

    • Multithreading (MT) in UPe

  4. Multiple sources/targets, multiple UPx instances

    • Simple forward & MT in UPe

  5. Multiple non-MT UPe in UPec

    • Simple event routing

  6. Multiple MT UPe in UPec

    • Simple event routing

  7. Mixed UPe in UPec

    • Simple event routing

  8. Multiple non-MT UPec in UPec

    • Intelligent event routing

  9. Multiple mixed UPec in UPec

    • Intelligent event routing

  10. Mix of UPec in multiple UPec

    • External intelligent event routing

    • Optimized with UPec internal routing

Resources

APEX User Manual

Installation of Apex

Requirements

APEX is 100% written in Java and runs on any platform that supports a JVM, e.g. Windows, Unix, Cygwin. Some APEX applications (such as the monitoring application) come as web archives; they require a WAR-capable web server to be installed.

Installation Requirements
  • Downloaded distribution: JAVA runtime environment (JRE, Java 11 or later, APEX is tested with the OpenJDK Java)

  • Building from source: JAVA development kit (JDK, Java 11 or later, APEX is tested with the OpenJDK Java)

  • A web archive capable webserver, for instance for the monitoring application

  • Sufficient rights to install APEX on the system

  • Installation tools depending on the installation method used:

    • ZIP to extract from a ZIP distribution

      • Windows for instance 7Zip

    • TAR and GZ to extract from that TAR.GZ distribution

      • Windows for instance 7Zip

    • DPKG to install from the DEB distribution

      • Install: sudo apt-get install dpkg

Feature Requirements

APEX supports a number of features that require extra software to be installed.

  • Apache Kafka to connect APEX to a Kafka message bus

  • Hazelcast to use distributed hash maps for context

  • Infinispan for distributed context and persistence

  • Docker to run APEX inside a Docker container

Build (Install from Source) Requirements

Installation from source requires a few development tools:

  • GIT to retrieve the source code

  • Java SDK, Java version 11 or later

  • Apache Maven 3 (the APEX build environment)

Get the APEX Source Code

The first APEX source code was hosted on Github in January 2018. By the end of 2018, APEX was added as a project in the ONAP Policy Framework, released later in the ONAP Casablanca release.

The APEX source code is hosted in ONAP as project APEX. The current stable version is in the master branch. Simply clone the master branch from ONAP using HTTPS.

git clone https://gerrit.onap.org/r/policy/apex-pdp
Build APEX

The examples in this document assume that the APEX source repositories are cloned to:

  • Unix, Cygwin: /usr/local/src/apex-pdp

  • Windows: C:\dev\apex-pdp

  • Cygwin: /cygdrive/c/dev/apex-pdp

Important

A build requires ONAP Nexus. APEX has a dependency on ONAP parent projects, so you might need to adjust your Maven M2 settings. The most current settings can be found in the ONAP oparent repo: Settings.

Important

A build needs space. Building APEX requires approximately 2-3 GB of hard disk space: 1 GB for the actual build with full distribution and 1-2 GB for the downloaded dependencies.

Important

A build requires Internet access (for the first build). During the build, many Maven dependencies will be downloaded and stored in the configured local Maven repository. The first standard build (and any first specific build) requires Internet access to download those dependencies.

Use Maven for a standard build without any tests.

Unix, Cygwin

Windows

# cd /usr/local/src/apex-pdp
# mvn clean install -Pdocker -DskipTests

>c:
>cd \dev\apex-pdp
>mvn clean install -Pdocker -DskipTests

The build takes 2-3 minutes on a standard development laptop. It should run through without errors, but with a lot of messages from the build process.

When Maven is finished with the build, the final screen should look similar to this (omitting some success lines):

[INFO] tools .............................................. SUCCESS [  0.248 s]
[INFO] tools-common ....................................... SUCCESS [  0.784 s]
[INFO] simple-wsclient .................................... SUCCESS [  3.303 s]
[INFO] model-generator .................................... SUCCESS [  0.644 s]
[INFO] packages ........................................... SUCCESS [  0.336 s]
[INFO] apex-pdp-package-full .............................. SUCCESS [01:10 min]
[INFO] Policy APEX PDP - Docker build 2.0.0-SNAPSHOT ...... SUCCESS [ 10.307 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 03:43 min
[INFO] Finished at: 2018-09-03T11:56:01+01:00
[INFO] ------------------------------------------------------------------------

The build will have created all artifacts required for an APEX installation. The following examples show how to change to the target directory and what its content should look like.

Unix, Cygwin

-rwxrwx---+ 1 esvevan Domain Users       772 Sep  3 11:55 apex-pdp-package-full_2.0.0~SNAPSHOT_all.changes*
-rwxrwx---+ 1 esvevan Domain Users 146328082 Sep  3 11:55 apex-pdp-package-full-2.0.0-SNAPSHOT.deb*
-rwxrwx---+ 1 esvevan Domain Users     15633 Sep  3 11:54 apex-pdp-package-full-2.0.0-SNAPSHOT.jar*
-rwxrwx---+ 1 esvevan Domain Users 146296819 Sep  3 11:55 apex-pdp-package-full-2.0.0-SNAPSHOT-tarball.tar.gz*
drwxrwx---+ 1 esvevan Domain Users         0 Sep  3 11:54 archive-tmp/
-rwxrwx---+ 1 esvevan Domain Users        89 Sep  3 11:54 checkstyle-cachefile*
-rwxrwx---+ 1 esvevan Domain Users     10621 Sep  3 11:54 checkstyle-checker.xml*
-rwxrwx---+ 1 esvevan Domain Users       584 Sep  3 11:54 checkstyle-header.txt*
-rwxrwx---+ 1 esvevan Domain Users        86 Sep  3 11:54 checkstyle-result.xml*
drwxrwx---+ 1 esvevan Domain Users         0 Sep  3 11:54 classes/
drwxrwx---+ 1 esvevan Domain Users         0 Sep  3 11:54 dependency-maven-plugin-markers/
drwxrwx---+ 1 esvevan Domain Users         0 Sep  3 11:54 etc/
drwxrwx---+ 1 esvevan Domain Users         0 Sep  3 11:54 examples/
drwxrwx---+ 1 esvevan Domain Users         0 Sep  3 11:55 install_hierarchy/
drwxrwx---+ 1 esvevan Domain Users         0 Sep  3 11:54 maven-archiver/

Windows

03/09/2018  11:55    <DIR>          .
03/09/2018  11:55    <DIR>          ..
03/09/2018  11:55       146,296,819 apex-pdp-package-full-2.0.0-SNAPSHOT-tarball.tar.gz
03/09/2018  11:55       146,328,082 apex-pdp-package-full-2.0.0-SNAPSHOT.deb
03/09/2018  11:54            15,633 apex-pdp-package-full-2.0.0-SNAPSHOT.jar
03/09/2018  11:55               772 apex-pdp-package-full_2.0.0~SNAPSHOT_all.changes
03/09/2018  11:54    <DIR>          archive-tmp
03/09/2018  11:54                89 checkstyle-cachefile
03/09/2018  11:54            10,621 checkstyle-checker.xml
03/09/2018  11:54               584 checkstyle-header.txt
03/09/2018  11:54                86 checkstyle-result.xml
03/09/2018  11:54    <DIR>          classes
03/09/2018  11:54    <DIR>          dependency-maven-plugin-markers
03/09/2018  11:54    <DIR>          etc
03/09/2018  11:54    <DIR>          examples
03/09/2018  11:55    <DIR>          install_hierarchy
03/09/2018  11:54    <DIR>          maven-archiver
               8 File(s)    292,652,686 bytes
               9 Dir(s)  14,138,720,256 bytes free

Install APEX

APEX can be installed in different ways:

  • Unix: automatically using dpkg from .deb archive

  • Windows, Unix, Cygwin: manually from a .tar.gz archive

  • Windows, Unix, Cygwin: build from source using Maven, then install manually

Install with DPKG

You can get the APEX debian package from the ONAP Nexus Repository.

The install distributions of APEX automatically install the system. The installation directory is /opt/app/policy/apex-pdp. Log files are located in /var/log/onap/policy/apex-pdp. The latest APEX version will be available as /opt/app/policy/apex-pdp/apex-pdp.

For the installation, a new user apexuser and a new group apexuser will be created. This user owns the installation directories and the log file location. The user is also used by the standard APEX start scripts to run APEX with this user’s permissions.

DPKG Installation

# sudo dpkg -i apex-pdp-package-full-2.0.0-SNAPSHOT.deb
Selecting previously unselected package apex-uservice.
(Reading database ... 288458 files and directories currently installed.)
Preparing to unpack apex-pdp-package-full-2.0.0-SNAPSHOT.deb ...
******************preinst*****************
arguments install
******************************************
creating group apexuser ...
creating user apexuser ...
Unpacking apex-uservice (2.0.0-SNAPSHOT) ...
Setting up apex-uservice (2.0.0-SNAPSHOT) ...
******************postinst**************
arguments configure
*******************************************

Once the installation is finished, APEX is fully installed and ready to run.

Install Manually from Archive (Unix, Cygwin)

You can download a tar.gz archive from the ONAP Nexus Repository.

Create a directory where APEX should be installed. Extract the tar archive. The following example shows how to install APEX in /opt/apex and create a link to /opt/apex/apex for the most recent installation.

# cd /opt
# mkdir apex
# cd apex
# mkdir apex-full-2.0.0-SNAPSHOT
# tar xvfz ~/Downloads/apex-pdp-package-full-2.0.0-SNAPSHOT.tar.gz -C apex-full-2.0.0-SNAPSHOT
# ln -s apex-full-2.0.0-SNAPSHOT apex
Install Manually from Archive (Windows, 7Zip, GUI)

You can download a tar.gz archive from the ONAP Nexus Repository.

Copy the tar.gz file into the install folder (in this example C:\apex). Assuming you are using 7Zip, right click on the file and extract the tar archive. Note: the screenshots might show an older version than you have.

Then right-click on the newly created TAR file and extract the actual APEX distribution.

Inside the new APEX folder you see the main directories: bin, etc, examples, lib, and war.

Once extracted, please rename the created folder to apex-full-2.0.0-SNAPSHOT. This will keep the directory name in line with the rest of this documentation.

Install Manually from Archive (Windows, 7Zip, CMD)

You can download a tar.gz archive from the ONAP Nexus Repository.

Copy the tar.gz file into the install folder (in this example C:\apex). Start cmd, for instance by typing Windows+R and then cmd in the dialog. Assuming 7Zip is installed in the standard folder, simply run the following commands (for the APEX version 2.0.0-SNAPSHOT full distribution):

>c:
>cd \apex
>"\Program Files\7-Zip\7z.exe" x apex-pdp-package-full-2.0.0-SNAPSHOT.tar.gz -so | "\Program Files\7-Zip\7z.exe" x -aoa -si -ttar -o"apex-full-2.0.0-SNAPSHOT"

APEX is now installed in the folder C:\apex\apex-full-2.0.0-SNAPSHOT.

Build from Source
Build and Install Manually (Unix, Windows, Cygwin)

Clone the APEX GIT repositories into a directory. Go to that directory. Use Maven to build APEX (all details on building APEX from source can be found in APEX HowTo: Build). Install from the created artifacts (rpm, deb, tar.gz, or copying manually).

The following example shows how to build the APEX system, without tests (-DskipTests) to save some time. It assumes that the APEX GIT repositories are cloned to:

  • Unix, Cygwin: /usr/local/src/apex

  • Windows: C:\dev\apex

Unix, Cygwin

# cd /usr/local/src/apex
# mvn clean install -Pdocker -DskipTests

Windows

>c:
>cd \dev\apex
>mvn clean install -Pdocker -DskipTests

The build takes about 2 minutes without tests and about 4-5 minutes with tests on a standard development laptop. It should run through without errors, but with a lot of messages from the build process. If built with tests (i.e. without -DskipTests), there will be error messages and stack trace prints from some tests. This is normal, as long as the build finishes successfully.

When Maven is finished with the build, the final screen should look similar to this (omitting some success lines):

[INFO] tools .............................................. SUCCESS [  0.248 s]
[INFO] tools-common ....................................... SUCCESS [  0.784 s]
[INFO] simple-wsclient .................................... SUCCESS [  3.303 s]
[INFO] model-generator .................................... SUCCESS [  0.644 s]
[INFO] packages ........................................... SUCCESS [  0.336 s]
[INFO] apex-pdp-package-full .............................. SUCCESS [01:10 min]
[INFO] Policy APEX PDP - Docker build 2.0.0-SNAPSHOT ...... SUCCESS [ 10.307 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 03:43 min
[INFO] Finished at: 2018-09-03T11:56:01+01:00
[INFO] ------------------------------------------------------------------------

The build will have created all artifacts required for an APEX installation. The following example shows how to change to the target directory and what it should look like.

Unix, Cygwin

# cd packages/apex-pdp-package-full/target
# ls -l
-rwxrwx---+ 1 esvevan Domain Users       772 Sep  3 11:55 apex-pdp-package-full_2.0.0~SNAPSHOT_all.changes*
-rwxrwx---+ 1 esvevan Domain Users 146328082 Sep  3 11:55 apex-pdp-package-full-2.0.0-SNAPSHOT.deb*
-rwxrwx---+ 1 esvevan Domain Users     15633 Sep  3 11:54 apex-pdp-package-full-2.0.0-SNAPSHOT.jar*
-rwxrwx---+ 1 esvevan Domain Users 146296819 Sep  3 11:55 apex-pdp-package-full-2.0.0-SNAPSHOT-tarball.tar.gz*
drwxrwx---+ 1 esvevan Domain Users         0 Sep  3 11:54 archive-tmp/
-rwxrwx---+ 1 esvevan Domain Users        89 Sep  3 11:54 checkstyle-cachefile*
-rwxrwx---+ 1 esvevan Domain Users     10621 Sep  3 11:54 checkstyle-checker.xml*
-rwxrwx---+ 1 esvevan Domain Users       584 Sep  3 11:54 checkstyle-header.txt*
-rwxrwx---+ 1 esvevan Domain Users        86 Sep  3 11:54 checkstyle-result.xml*
drwxrwx---+ 1 esvevan Domain Users         0 Sep  3 11:54 classes/
drwxrwx---+ 1 esvevan Domain Users         0 Sep  3 11:54 dependency-maven-plugin-markers/
drwxrwx---+ 1 esvevan Domain Users         0 Sep  3 11:54 etc/
drwxrwx---+ 1 esvevan Domain Users         0 Sep  3 11:54 examples/
drwxrwx---+ 1 esvevan Domain Users         0 Sep  3 11:55 install_hierarchy/
drwxrwx---+ 1 esvevan Domain Users         0 Sep  3 11:54 maven-archiver/

Windows

>cd packages\apex-pdp-package-full\target
>dir
03/09/2018  11:55    <DIR>          .
03/09/2018  11:55    <DIR>          ..
03/09/2018  11:55       146,296,819 apex-pdp-package-full-2.0.0-SNAPSHOT-tarball.tar.gz
03/09/2018  11:55       146,328,082 apex-pdp-package-full-2.0.0-SNAPSHOT.deb
03/09/2018  11:54            15,633 apex-pdp-package-full-2.0.0-SNAPSHOT.jar
03/09/2018  11:55               772 apex-pdp-package-full_2.0.0~SNAPSHOT_all.changes
03/09/2018  11:54    <DIR>          archive-tmp
03/09/2018  11:54                89 checkstyle-cachefile
03/09/2018  11:54            10,621 checkstyle-checker.xml
03/09/2018  11:54               584 checkstyle-header.txt
03/09/2018  11:54                86 checkstyle-result.xml
03/09/2018  11:54    <DIR>          classes
03/09/2018  11:54    <DIR>          dependency-maven-plugin-markers
03/09/2018  11:54    <DIR>          etc
03/09/2018  11:54    <DIR>          examples
03/09/2018  11:55    <DIR>          install_hierarchy
03/09/2018  11:54    <DIR>          maven-archiver
              8 File(s)    292,652,686 bytes
              9 Dir(s)  14,138,720,256 bytes free

Now, take the .deb or the .tar.gz file and install APEX. Alternatively, copy the content of the folder install_hierarchy to your APEX directory.

Installation Layout

A full installation of APEX comes with the following layout.

         $APEX_HOME
             ├───bin             (1)
             ├───etc             (2)
             │   ├───editor
             │   ├───hazelcast
             │   ├───infinispan
             │   └───META-INF
             ├───examples            (3)
             │   ├───config          (4)
             │   ├───docker          (5)
             │   ├───events          (6)
             │   ├───html            (7)
             │   ├───models          (8)
             │   └───scripts         (9)
             ├───lib             (10)
             │   └───applications        (11)
             └───war             (12)


   +-----------------------------------+-----------------------------------+
   | **1**                             | binaries, mainly scripts (bash    |
   |                                   | and bat) to start the APEX engine |
   |                                   | and applications                  |
   +-----------------------------------+-----------------------------------+
   | **2**                             | configuration files, such as      |
   |                                   | logback (logging) and third party |
   |                                   | library configurations            |
   +-----------------------------------+-----------------------------------+
   | **3**                             | example policy models to get      |
   |                                   | started                           |
   +-----------------------------------+-----------------------------------+
   | **4**                             | configurations for the examples   |
   |                                   | (with sub directories for         |
   |                                   | individual examples)              |
   +-----------------------------------+-----------------------------------+
   | **5**                             | Docker files and additional       |
   |                                   | Docker instructions for the       |
   |                                   | examples                          |
   +-----------------------------------+-----------------------------------+
   | **6**                             | example events for the examples   |
   |                                   | (with sub directories for         |
   |                                   | individual examples)              |
   +-----------------------------------+-----------------------------------+
   | **7**                             | HTML files for some examples,     |
   |                                   | e.g. the Decisionmaker example    |
   +-----------------------------------+-----------------------------------+
   | **8**                             | the policy models, generated for  |
   |                                   | each example (with sub            |
   |                                   | directories for individual        |
   |                                   | examples)                         |
   +-----------------------------------+-----------------------------------+
   | **9**                             | additional scripts for the        |
   |                                   | examples (with sub directories    |
   |                                   | for individual examples)          |
   +-----------------------------------+-----------------------------------+
   | **10**                            | the library folder with all Java  |
   |                                   | JAR files                         |
   +-----------------------------------+-----------------------------------+
   | **11**                            | applications, also known as jar   |
   |                                   | with dependencies (or fat jars),  |
   |                                   | individually deployable           |
   +-----------------------------------+-----------------------------------+
   | **12**                            | WAR files for web applications    |
   +-----------------------------------+-----------------------------------+
System Configuration

Once APEX is installed, a few configurations need to be done:

  • Create an APEX user and an APEX group (optional, if not installed using RPM and DPKG)

  • Create environment settings for APEX_HOME and APEX_USER, required by the start scripts

  • Change settings of the logging framework (optional)

  • Create directories for logging, required (execution might fail if directories do not exist or cannot be created)

APEX User and Group

On smaller installations and test systems, APEX can run as any user or group.

However, if APEX is installed in production, we strongly recommend you set up a dedicated user for running APEX. This will isolate the execution of APEX to that user. We recommend you use the userid apexuser but you may use any user you choose.

The following example, for UNIX, creates a group called apexuser, an APEX user called apexuser, adds the group to the user, and changes ownership of the APEX installation to the user. Substitute <apex-dir> with the directory where APEX is installed.

# sudo groupadd apexuser
# sudo useradd -g apexuser apexuser
# sudo chown -R apexuser:apexuser <apex-dir>

For other operating systems please consult your manual or system administrator.

Environment Settings: APEX_HOME and APEX_USER

The provided start scripts for APEX require two environment variables to be set:

  • APEX_USER with the user under whose name and permissions APEX should be started (Unix only)

  • APEX_HOME with the directory where APEX is installed (Unix, Windows, Cygwin)

The following example shows how to set these environment variables temporarily (assuming the user is apexuser) and how to verify the settings. The sections below explain how to make those variables permanent.

Unix, Cygwin (bash/tcsh)

# export APEX_USER=apexuser
# cd /opt/app/policy/apex-pdp
# export APEX_HOME=`pwd`

# env | grep APEX
APEX_USER=apexuser
APEX_HOME=/opt/app/policy/apex-pdp

Windows

>set APEX_HOME=C:\apex\apex-full-2.0.0-SNAPSHOT

>set APEX_HOME
APEX_HOME=C:\apex\apex-full-2.0.0-SNAPSHOT
Making Environment Settings Permanent (Unix, Cygwin)

For a per-user setting, edit the user's bash or tcsh settings in ~/.bashrc or ~/.tcshrc. For system-wide settings, edit /etc/profile (requires root permissions).

Making Environment Settings Permanent (Windows)

On Windows 7 do

  • Click on the Start Menu

  • Right click on Computer

  • Select Properties

On Windows 8/10 do

  • Click on the Start Menu

  • Select System

Then do the following

  • Select Advanced System Settings

  • On the Advanced tab, click the Environment Variables button

  • Edit an existing variable, or create a new System variable: 'Variable name'="APEX_HOME", 'Variable value'="C:\apex\apex-full-2.0.0-SNAPSHOT"

For the settings to take effect, an application needs to be restarted (e.g. any open cmd window).

Edit the APEX Logging Settings

Configure the APEX logging settings to your requirements, for instance:

  • change the directory where logs are written to, or

  • change the log levels

Edit the file $APEX_HOME/etc/logback.xml for any required changes. To change the log directory change the line

<property name="logDir" value="/var/log/onap/policy/apex-pdp/" />

to

<property name="logDir" value="/PATH/TO/LOG/DIRECTORY/" />

On Windows, it is recommended to change the log directory to:

<property name="logDir" value="C:/apex/apex-full-2.0.0-SNAPSHOT/logs" />

Note: Be careful about when to use \ vs. / as the path separator!

Create Directories for Logging

Make sure that the log directory exists. This is important when APEX was installed manually or when the log directory was changed in the settings (see above).

Unix, Cygwin

sudo mkdir -p /var/log/onap/policy/apex-pdp
sudo chown -R apexuser:apexuser /var/log/onap/policy/apex-pdp

Windows

>mkdir C:\apex\apex-full-2.0.0-SNAPSHOT\logs
Verify the APEX Installation

When APEX is installed and all settings are realized, the installation can be verified.

Verify Installation - run Engine

A simple verification of an APEX installation can be done by starting the APEX engine without specifying a TOSCA policy. On Unix (or Cygwin) start the engine using $APEX_HOME/bin/apexApps.sh engine. On Windows start the engine using %APEX_HOME%\bin\apexApps.bat engine. The engine will fail to fully start. However, if the output looks similar to the following lines, the APEX installation is working.

Starting Apex service with parameters [] . . .
start of Apex service failed.
org.onap.policy.apex.model.basicmodel.concepts.ApexException: Arguments validation failed.
 at org.onap.policy.apex.service.engine.main.ApexMain.populateApexParameters(ApexMain.java:238)
 at org.onap.policy.apex.service.engine.main.ApexMain.<init>(ApexMain.java:86)
 at org.onap.policy.apex.service.engine.main.ApexMain.main(ApexMain.java:351)
Caused by: org.onap.policy.apex.model.basicmodel.concepts.ApexException: Tosca Policy file was not specified as an argument
 at org.onap.policy.apex.service.engine.main.ApexCommandLineArguments.validateReadableFile(ApexCommandLineArguments.java:242)
 at org.onap.policy.apex.service.engine.main.ApexCommandLineArguments.validate(ApexCommandLineArguments.java:172)
 at org.onap.policy.apex.service.engine.main.ApexMain.populateApexParameters(ApexMain.java:235)
 ... 2 common frames omitted
Verify Installation - run an Example

A full APEX installation comes with several examples. Here, we can fully verify the installation by running one of the examples.

We use the example called SampleDomain and configure the engine to use standard in and standard out for events. Run the engine with the provided configuration. Note: Cygwin executes scripts as Unix scripts but runs Java as a Windows application, thus the configuration file must be given as a Windows path.

On Unix/Linux flavoured platforms, give the commands below:

sudo su - apexuser
export APEX_HOME=<path to apex installation>
export APEX_USER=apexuser

Create a TOSCA policy for the SampleDomain example using ApexCliToscaEditor, as explained in the section "The APEX CLI Tosca Editor". Assume the TOSCA policy file is named SampleDomain_tosca.json. You can then run APEX using this TOSCA policy.

# $APEX_HOME/bin/apexApps.sh engine -p $APEX_HOME/examples/SampleDomain_tosca.json (1)
>%APEX_HOME%\bin\apexApps.bat engine -p %APEX_HOME%\examples\SampleDomain_tosca.json (2)

(1) UNIX
(2) Windows

The engine should start successfully. Assuming the logging levels are set to info in the built system, the output should look similar to this (last few lines):

Starting Apex service with parameters [-p, /home/ubuntu/apex/SampleDomain_tosca.json] . . .
2018-09-05 15:16:42,800 Apex [main] INFO o.o.p.a.s.e.r.impl.EngineServiceImpl - Created apex engine MyApexEngine-0:0.0.1 .
2018-09-05 15:16:42,804 Apex [main] INFO o.o.p.a.s.e.r.impl.EngineServiceImpl - Created apex engine MyApexEngine-1:0.0.1 .
2018-09-05 15:16:42,804 Apex [main] INFO o.o.p.a.s.e.r.impl.EngineServiceImpl - Created apex engine MyApexEngine-2:0.0.1 .
2018-09-05 15:16:42,805 Apex [main] INFO o.o.p.a.s.e.r.impl.EngineServiceImpl - Created apex engine MyApexEngine-3:0.0.1 .
2018-09-05 15:16:42,805 Apex [main] INFO o.o.p.a.s.e.r.impl.EngineServiceImpl - APEX service created.
2018-09-05 15:16:43,962 Apex [main] INFO o.o.p.a.s.e.e.EngDepMessagingService - engine<-->deployment messaging starting . . .
2018-09-05 15:16:43,963 Apex [main] INFO o.o.p.a.s.e.e.EngDepMessagingService - engine<-->deployment messaging started
2018-09-05 15:16:44,987 Apex [main] INFO o.o.p.a.s.e.r.impl.EngineServiceImpl - Registering apex model on engine MyApexEngine-0:0.0.1
2018-09-05 15:16:45,112 Apex [main] INFO o.o.p.a.s.e.r.impl.EngineServiceImpl - Registering apex model on engine MyApexEngine-1:0.0.1
2018-09-05 15:16:45,113 Apex [main] INFO o.o.p.a.s.e.r.impl.EngineServiceImpl - Registering apex model on engine MyApexEngine-2:0.0.1
2018-09-05 15:16:45,113 Apex [main] INFO o.o.p.a.s.e.r.impl.EngineServiceImpl - Registering apex model on engine MyApexEngine-3:0.0.1
2018-09-05 15:16:45,120 Apex [main] INFO o.o.p.a.s.e.r.impl.EngineServiceImpl - Added the action listener to the engine
Started Apex service

The last two lines are important: they state that APEX has added the final action listener to the engine and that the engine has started.

The engine is configured to read events from standard input and write produced events to standard output. The policy model is a very simple policy.

The following table shows an input event in the left column and an output event in the right column. Paste the input event into the console where APEX is running, and the output event should appear in the console. Pasting the input event multiple times will produce output events with different values.

Input Event

Example Output Event

{
  "nameSpace": "org.onap.policy.apex.sample.events",
  "name": "Event0000",
  "version": "0.0.1",
  "source": "test",
  "target": "apex",
  "TestSlogan": "Test slogan for External Event0",
  "TestMatchCase": 0,
  "TestTimestamp": 1469781869269,
  "TestTemperature": 9080.866
}

{
  "name": "Event0004",
  "version": "0.0.1",
  "nameSpace": "org.onap.policy.apex.sample.events",
  "source": "Act",
  "target": "Outside",
  "TestActCaseSelected": 2,
  "TestActStateTime": 1536157104627,
  "TestDecideCaseSelected": 0,
  "TestDecideStateTime": 1536157104625,
  "TestEstablishCaseSelected": 0,
  "TestEstablishStateTime": 1536157104623,
  "TestMatchCase": 0,
  "TestMatchCaseSelected": 1,
  "TestMatchStateTime": 1536157104620,
  "TestSlogan": "Test slogan for External Event0",
  "TestTemperature": 9080.866,
  "TestTimestamp": 1469781869269
}

Terminate APEX by simply using CTRL+C in the console.

Verify a Full Installation - REST Client

APEX has a REST application for deploying, monitoring, and viewing policy models. The application can also be used to create new policy models close to the engine native policy language. Start the REST client as follows.

# $APEX_HOME/bin/apexApps.sh full-client
>%APEX_HOME%\bin\apexApps.bat full-client

The script will start a simple web server (Grizzly) and deploy a war web archive in it. Once the client is started, it will be available on localhost:18989. The last few lines of the messages should be:

Apex Editor REST endpoint (ApexServicesRestMain: Config=[ApexServicesRestParameters: URI=http://localhost:18989/apexservices/, TTL=-1sec], State=READY) starting at http://localhost:18989/apexservices/ . . .
Jul 02, 2020 2:57:39 PM org.glassfish.grizzly.http.server.NetworkListener start
INFO: Started listener bound to [localhost:18989]
Jul 02, 2020 2:57:39 PM org.glassfish.grizzly.http.server.HttpServer start
INFO: [HttpServer] Started.
Apex Editor REST endpoint (ApexServicesRestMain: Config=[ApexServicesRestParameters: URI=http://localhost:18989/apexservices/, TTL=-1sec], State=RUNNING) started at http://localhost:18989/apexservices/

Now open a browser (Firefox, Chrome, Opera, Internet Explorer) and use the URL http://localhost:18989/. This will connect the browser to the started REST client. Click on the “Policy Editor” button and the Policy Editor start screen should be as follows.

Figure 1. Policy Editor Start Screen

Now load a policy model by clicking the menu File and then Open. In the opened dialog, go to the directory where APEX is installed, then examples, models, SampleDomain, and there select the file SamplePolicyModelJAVA.json. This will load the policy model used to verify the policy engine (see above). Once loaded, the screen should look as follows.

Figure 2. Policy Editor with loaded SampleDomain Policy Model

Now you can use the Policy editor. To finish this verification, simply terminate your browser (or the tab), and then use CTRL+C in the console where you started the Policy editor.

Installing the WAR Application

The three APEX clients are packaged in a WAR file. This is a complete application that can be installed and run in an application server. The application is realized as a servlet. You can find the WAR application in the ONAP Nexus Repository.

Installing and using the WAR application requires a web server that can execute war web archives. We recommend Apache Tomcat; however, other web servers can be used as well.

Install Apache Tomcat including the Manager App, see V9.0 Docs for details. Start the Tomcat service, or make sure that Tomcat is running.

There are multiple ways to install the APEX WAR application:

  • copy the .war file into the Tomcat webapps folder

  • use the Tomcat Manager App to deploy via the web interface

  • deploy using a REST call to Tomcat

For details on how to install war files please consult the Tomcat Documentation or the Manager App HOW-TO. Once you have installed an APEX WAR application (and waited for Tomcat to finish the installation), open the Manager App in Tomcat. You should see the APEX WAR application installed and running.
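
As an illustration of the REST deployment option, the Tomcat Manager exposes a text interface for deployment. The following is a minimal sketch, assuming Tomcat runs on localhost:8080, a user with the manager-script role exists (the tomcat:tomcat credentials here are illustrative), and the WAR file is at the path shown:

curl -u tomcat:tomcat "http://localhost:8080/manager/text/deploy?path=/apex-client&war=file:/tmp/apex-client-full-2.0.0-SNAPSHOT.war"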

In case of errors, examine the log files in the Tomcat log directory. In a conventional install, those log files are in the logs directory where Tomcat is installed.

The WAR application file has a name similar to apex-client-full-<VERSION>.war.

Running APEX in Docker

Since APEX is in ONAP, we provide a full virtualization environment for the engine.

Run in ONAP

Running APEX from the ONAP docker repository only requires 2 commands:

  1. Log into the ONAP docker repo

         docker login -u docker -p docker nexus3.onap.org:10003

  2. Run the APEX docker image

         docker run -it --rm  nexus3.onap.org:10003/onap/policy-apex-pdp:latest
Build a Docker Image

Alternatively, one can use the Dockerfile defined in the Docker package to build an image.
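
For example, such a build could be run as follows. This is a minimal sketch that assumes the Dockerfile below and the packaged apex-pdp-package-full.tar.gz are in the current directory; the image tag is illustrative:

docker build -t onap/policy-apex-pdp:latest .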

APEX Dockerfile

#
# Docker file to build an image that runs APEX on Java 8 in Ubuntu
#
FROM ubuntu:16.04

RUN apt-get update && \
        apt-get upgrade -y && \
        apt-get install -y software-properties-common && \
        add-apt-repository ppa:openjdk-r/ppa -y && \
        apt-get update && \
        apt-get install -y openjdk-8-jdk

# Create apex user and group
RUN groupadd apexuser
RUN useradd --create-home -g apexuser apexuser

# Add Apex-specific directories and set ownership as the Apex admin user
RUN mkdir -p /opt/app/policy/apex-pdp
RUN mkdir -p /var/log/onap/policy/apex-pdp
RUN chown -R apexuser:apexuser /var/log/onap/policy/apex-pdp

# Unpack the tarball
RUN mkdir /packages
COPY apex-pdp-package-full.tar.gz /packages
RUN tar xvfz /packages/apex-pdp-package-full.tar.gz --directory /opt/app/policy/apex-pdp
RUN rm /packages/apex-pdp-package-full.tar.gz

# Ensure everything has the correct permissions
RUN find /opt/app -type d -perm 755
RUN find /opt/app -type f -perm 644
RUN chmod a+x /opt/app/policy/apex-pdp/bin/*

# Copy examples to Apex user area
RUN cp -pr /opt/app/policy/apex-pdp/examples /home/apexuser

RUN apt-get clean

RUN chown -R apexuser:apexuser /home/apexuser/*

USER apexuser
ENV PATH /opt/app/policy/apex-pdp/bin:$PATH
WORKDIR /home/apexuser
Running APEX in Standalone mode

The APEX engine can run in standalone mode by taking a TOSCA policy as an argument and executing it. Assume there is a TOSCA policy named ToscaPolicy.json in the APEX_HOME directory. This policy can be executed in standalone mode using any of the methods below.

Run in an APEX installation
# $APEX_HOME/bin/apexApps.sh engine -p $APEX_HOME/ToscaPolicy.json (1)
>%APEX_HOME%\bin\apexApps.bat engine -p %APEX_HOME%\ToscaPolicy.json (2)

(1) UNIX
(2) Windows

Run in a docker container
# docker run -p 6969:6969 -v $APEX_HOME/ToscaPolicy.json:/tmp/policy/ToscaPolicy.json \
  --name apex -it nexus3.onap.org:10001/onap/policy-apex-pdp:latest \
  -c "/opt/app/policy/apex-pdp/bin/apexEngine.sh -p /tmp/policy/ToscaPolicy.json"

APEX Configurations Explained

Introduction to APEX Configuration

An APEX engine can be configured to use various combinations of event input handlers, event output handlers, event protocols, context handlers, and logic executors. The system is built using a plugin architecture. Each configuration option is realized by a plugin, which can be loaded and configured when the engine is started. New plugins can be added to the system at any time, though to benefit from a new plugin an engine will need to be restarted.

Figure 3. APEX Configuration Matrix

The APEX distribution already comes with a number of plugins. The figure above shows the provided plugins. Any combination of input, output, event protocol, context handlers, and executors is possible.

General Configuration Format

The APEX configuration file is a JSON file containing a few main blocks for different parts of the configuration. Each block then holds the configuration details. The following code shows the main blocks:

{
  "engineServiceParameters":{
    ... (1)
    "engineParameters":{ (2)
      "executorParameters":{...}, (3)
      "contextParameters":{...} (4)
      "taskParameters":[...] (5)
    }
  },
  "eventInputParameters":{ (6)
    "input1":{ (7)
      "carrierTechnologyParameters":{...},
      "eventProtocolParameters":{...}
    },
    "input2":{...}, (8)
      "carrierTechnologyParameters":{...},
      "eventProtocolParameters":{...}
    },
    ... (9)
  },
  "eventOutputParameters":{ (10)
    "output1":{ (11)
      "carrierTechnologyParameters":{...},
      "eventProtocolParameters":{...}
    },
    "output2":{ (12)
      "carrierTechnologyParameters":{...},
      "eventProtocolParameters":{...}
    },
    ... (13)
  }
}

(1) main engine configuration
(2) engine parameters for plugin configurations (execution environments and context handling)
(3) engine specific parameters, mainly for executor plugins
(4) context specific parameters, e.g. for context schemas, persistence, etc.
(5) list of task parameters that should be made available in task logic (optional)
(6) configuration of the input interface
(7) an example input called input1 with carrier technology and event protocol
(8) an example input called input2 with carrier technology and event protocol
(9) any further input configuration
(10) configuration of the output interface
(11) an example output called output1 with carrier technology and event protocol
(12) an example output called output2 with carrier technology and event protocol
(13) any further output configuration

Engine Service Parameters

The configuration provides a number of parameters to configure the engine. An example configuration with explanations of all options is shown below.

"engineServiceParameters" : {
  "name"          : "AADMApexEngine", (1)
  "version"        : "0.0.1",  (2)
  "id"             :  45,  (3)
  "instanceCount"  : 4,  (4)
  "deploymentPort" : 12345,  (5)
  "policy_type_impl" : {...}, (6)
  "periodicEventPeriod": 1000, (7)
  "engineParameters":{ (8)
    "executorParameters":{...}, (9)
    "contextParameters":{...}, (10)
    "taskParameters":[...] (11)
  }
}

(1) a name for the engine. The engine name is used to create a key in a runtime engine. A name matching the regular expression [A-Za-z0-9\\-_\\.]+ can be used here
(2) a version of the engine; use semantic versioning as explained in Semantic Versioning. This version is used in a runtime engine to create a version of the engine. For that reason, the version must match the regular expression [A-Z0-9.]+
(3) a numeric identifier for the engine
(4) the number of threads (policy instances executed in parallel) the engine should use; use 1 for single threaded engines
(5) the port for the deployment Websocket connection to the engine
(6) the APEX policy model as a JSON or YAML block to load into the engine on startup when APEX is running a policy that has its logic and parameters specified in TOSCA (optional)
(7) an optional timer for periodic policies, in milliseconds (a defined periodic policy will be executed every X milliseconds); not used if not set or set to 0
(8) engine parameters for plugin configurations (execution environments and context handling)
(9) engine specific parameters, mainly for executor plugins
(10) context specific parameters, e.g. for context schemas, persistence, etc.
(11) list of task parameters that should be made available in task logic (optional)

The model file is optional; it can also be specified via the command line. In any case, make sure all execution and other required plug-ins for the loaded model are loaded as required.

Input and Output Interfaces

An APEX engine has two main interfaces:

  • An input interface to receive events: also known as ingress interface or consumer, receiving (consuming) events commonly named triggers, and

  • An output interface to publish produced events: also known as egress interface or producer, sending (publishing) events commonly named actions or action events.

The input and output interface is configured in terms of inputs and outputs, respectively. Each input and output is a combination of a carrier technology and an event protocol. Carrier technologies and event protocols are provided by plugins, each with its own specific configuration. Most carrier technologies can be configured for input as well as output. Most event protocols can be used for all carrier technologies. One exception is the JMS object event protocol, which can only be used for the JMS carrier technology. Some further restrictions apply (for instance for carrier technologies using bi- or uni-directional modes).

Input and output interface can be configured separately, in isolation, with any number of carrier technologies. The resulting general configuration options are:

  • Input interface with one or more inputs

    • each input with a carrier technology and an event protocol

    • some inputs with optional synchronous mode

    • some event protocols with additional parameters

  • Output interface with one or more outputs

    • each output with a carrier technology and an event encoding

    • some outputs with optional synchronous mode

    • some event protocols with additional parameters

The configuration for input and output is contained in eventInputParameters and eventOutputParameters, respectively. Inside these, one can configure any number of inputs and outputs. Each of them needs to have a unique identifier (name); the content of the name is free form. The example below shows a configuration for two inputs and two outputs.

"eventInputParameters": { (1)
  "FirstConsumer": { (2)
    "carrierTechnologyParameters" : {...}, (3)
    "eventProtocolParameters":{...}, (4)
    ... (5)
  },
  "SecondConsumer": { (6)
    "carrierTechnologyParameters" : {...}, (7)
    "eventProtocolParameters":{...}, (8)
    ... (9)
  },
},
"eventOutputParameters": { (10)
  "FirstProducer": { (11)
    "carrierTechnologyParameters":{...}, (12)
    "eventProtocolParameters":{...}, (13)
    ... (14)
  },
  "SecondProducer": { (15)
    "carrierTechnologyParameters":{...}, (16)
    "eventProtocolParameters":{...}, (17)
    ... (18)
  }
}

(1) input interface configuration, APEX input plugins
(2) first input called FirstConsumer
(3) carrier technology for plugin
(4) event protocol for plugin
(5) any other input configuration (e.g. event name filter, see below)
(6) second input called SecondConsumer
(7) carrier technology for plugin
(8) event protocol for plugin
(9) any other plugin configuration
(10) output interface configuration, APEX output plugins
(11) first output called FirstProducer
(12) carrier technology for plugin
(13) event protocol for plugin
(14) any other plugin configuration
(15) second output called SecondProducer
(16) carrier technology for plugin
(17) event protocol for plugin
(18) any other output configuration (e.g. event name filter, see below)
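
As a concrete illustration, the sketch below fills in the placeholders using the FILE carrier technology and the JSON event protocol described later in this document. The input and output names and file paths are illustrative only:

"eventInputParameters": {
  "TheFileInput": {
    "carrierTechnologyParameters" : {
      "carrierTechnology" : "FILE",
      "parameters" : {
        "fileName" : "examples/events/SampleDomain/EventsIn.json"
      }
    },
    "eventProtocolParameters" : {
      "eventProtocol" : "JSON"
    }
  }
},
"eventOutputParameters": {
  "TheFileOutput": {
    "carrierTechnologyParameters" : {
      "carrierTechnology" : "FILE",
      "parameters" : {
        "fileName" : "examples/events/SampleDomain/EventsOut.json"
      }
    },
    "eventProtocolParameters" : {
      "eventProtocol" : "JSON"
    }
  }
}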

Event Name

Any event defined in APEX has to be unique. The "name" of an event is used as an identifier for an ApexEvent. Every event has to be tagged to an eventName. This can be done in different ways. Either the actual event can have a field called "name". Or, the event has some other field that can act as the identifier, which can be specified using "nameAlias". In other cases, where a "name" or "nameAlias" cannot be specified, the incoming event coming over an endpoint can be manually tagged to an "eventName" before consuming it.

The "eventName" can hold a single event's name if the event coming over the endpoint always has to be mapped to the specified eventName's definition. Otherwise, if different events can come over the endpoint, the "eventName" field can consist of multiple event names separated by the "|" symbol. In this case, based on the received event's structure, it is mapped to one of the event names specified in the "eventName" field.

The following code shows some examples on how to specify the eventName field:

"eventInputParameters": {
  "Input1": {
    "carrierTechnologyParameters" : {...},
    "eventProtocolParameters":{...},
    "eventName" : "VesEvent" (1)
  },
  "Input2": {
    "carrierTechnologyParameters" : {...},
    "eventProtocolParameters":{...},
    "eventName" : "AAISuccessResponseEvent|AAIFailureResponseEvent" (2)
  }
}
Event Filters

APEX will always send an event after a policy execution is finished. For a successful execution, the event sent is the output event created by the policy. In case the policy does not create an output event, APEX will create a new event with all input event fields plus an additional field exceptionMessage with an exception message.

There are situations in which this auto-generated error event might not be required or wanted:

  • when a failing policy should not result in an event sent out via an output interface

  • when the auto-generated event goes back into an APEX engine (or the same APEX engine), which can create endless loops

  • when the auto-generated event should go to a special output interface or channel

All of these situations are supported by a filter option using a wildcard (regular expression) configuration on APEX I/O interfaces. The parameter is called eventNameFilter and the values are Java regular expressions. The following code shows some examples:

"eventInputParameters": {
  "Input1": {
    "carrierTechnologyParameters" : {...},
    "eventProtocolParameters":{...},
    "eventNameFilter" : "^E[Vv][Ee][Nn][Tt][0-9]004$" (1)
  }
},
"eventOutputParameters": {
  "Output1": {
    "carrierTechnologyParameters":{...},
    "eventProtocolParameters":{...},
    "eventNameFilter" : "^E[Vv][Ee][Nn][Tt][0-9]104$" (2)
  }
}
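
For instance, to keep auto-generated error events away from a normal output, an output can be given a filter that only passes the expected output events, while a second output catches everything else. The sketch below is illustrative only; the output names and event names are hypothetical:

"eventOutputParameters": {
  "NormalOutput": {
    "carrierTechnologyParameters":{...},
    "eventProtocolParameters":{...},
    "eventNameFilter" : "^Event0004$" (1)
  },
  "ErrorOutput": {
    "carrierTechnologyParameters":{...},
    "eventProtocolParameters":{...},
    "eventNameFilter" : "^.*ErrorEvent$" (2)
  }
}

(1) only the expected policy output event is sent here
(2) a separate output for error events, assuming a hypothetical error event naming convention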
Executors

Executors are plugins that realize the execution of logic contained in a policy model. Logic can be in a task selector, a task, and a state finalizer. Using plugins for execution environments makes APEX very flexible to support virtually any executable logic expressions.

APEX 2.0.0-SNAPSHOT supports the following executors:

  • Java, for Java implemented logic

    • This executor requires logic implemented using the APEX Java interfaces.

    • Generated JAR files must be in the classpath of the APEX engine at start time.

  • Javascript

  • JRuby

  • Jython

  • MVEL

    • This executor uses the latest version of the MVEL engine, which can be very hard to debug and can produce unwanted side effects during execution

Configure the Javascript Executor

The Javascript executor is added to the configuration as follows:

"engineServiceParameters":{
  "engineParameters":{
    "executorParameters":{
      "JAVASCRIPT":{
        "parameterClassName" :
        "org.onap.policy.apex.plugins.executor.javascript.JavascriptExecutorParameters"
      }
    }
  }
}
Configure the Jython Executor

The Jython executor is added to the configuration as follows:

"engineServiceParameters":{
  "engineParameters":{
    "executorParameters":{
      "JYTHON":{
        "parameterClassName" :
        "org.onap.policy.apex.plugins.executor.jython.JythonExecutorParameters"
      }
    }
  }
}
Configure the JRuby Executor

The JRuby executor is added to the configuration as follows:

"engineServiceParameters":{
  "engineParameters":{
    "executorParameters":{
      "JRUBY":{
        "parameterClassName" :
        "org.onap.policy.apex.plugins.executor.jruby.JrubyExecutorParameters"
      }
    }
  }
}
Configure the Java Executor

The Java executor is added to the configuration as follows:

"engineServiceParameters":{
  "engineParameters":{
    "executorParameters":{
      "JAVA":{
        "parameterClassName" :
        "org.onap.policy.apex.plugins.executor.java.JavaExecutorParameters"
      }
    }
  }
}
Configure the MVEL Executor

The MVEL executor is added to the configuration as follows:

"engineServiceParameters":{
  "engineParameters":{
    "executorParameters":{
      "MVEL":{
        "parameterClassName" :
        "org.onap.policy.apex.plugins.executor.mvel.MVELExecutorParameters"
      }
    }
  }
}
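
The executor plugins shown above can also be combined in a single configuration, so that one engine supports several logic flavours at the same time. A minimal sketch, assuming the parameter class names shown in the previous sections:

"engineServiceParameters":{
  "engineParameters":{
    "executorParameters":{
      "JAVASCRIPT":{
        "parameterClassName" :
        "org.onap.policy.apex.plugins.executor.javascript.JavascriptExecutorParameters"
      },
      "MVEL":{
        "parameterClassName" :
        "org.onap.policy.apex.plugins.executor.mvel.MVELExecutorParameters"
      }
    }
  }
}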
Context Handlers

Context handlers are responsible for all context processing. There are the following main areas:

  • Context schema: use schema handlers other than Java class (supported by default without configuration)

  • Context distribution: distribute context across multiple APEX engines

  • Context locking: mechanisms to lock context elements for read/write

  • Context persistence: mechanisms to persist context

APEX provides plugins for each of the main areas.

Configure AVRO Schema Handler

The AVRO schema handler is added to the configuration as follows:

"engineServiceParameters":{
  "engineParameters":{
    "contextParameters":{
      "parameterClassName" : "org.onap.policy.apex.context.parameters.ContextParameters",
      "schemaParameters":{
        "Avro":{
          "parameterClassName" :
            "org.onap.policy.apex.plugins.context.schema.avro.AvroSchemaHelperParameters"
        }
      }
    }
  }
}

Using the AVRO schema handler has one limitation: AVRO only supports field names that represent valid Java class names. This means only letters and the character _ are supported. Characters commonly used in field names, such as . and -, are not supported by AVRO. For more information see Avro Spec: Names.

To work around this limitation, the APEX Avro plugin will parse a given AVRO definition and replace all occurrences of . and - with _. This means that

  • In a policy model, if the AVRO schema defined a field as my-name the policy logic should access it as my_name

  • In a policy model, if the AVRO schema defined a field as my.name the policy logic should access it as my_name

  • There should be no field names that convert to the same internal name

    • For instance the simultaneous use of my_name, my.name, and my-name should be avoided

    • If not avoided, the event processing might create unwanted side effects

  • If field names use any other unsupported characters, the AVRO plugin will reject them

    • Since AVRO uses lazy initialization, this rejection might only become visible at runtime
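
As an illustration, the following hypothetical AVRO schema declares a field my-name; with the conversion described above, policy logic would access this field as my_name:

{
  "type": "record",
  "name": "ExampleContextItem",
  "fields": [
    { "name": "my-name", "type": "string" }
  ]
}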

Configure Task Parameters

The Task Parameters are added to the configuration as follows:

"engineServiceParameters": {
  "engineParameters": {
    "taskParameters": [
      {
        "key": "ParameterKey1",
        "value": "ParameterValue1"
      },
      {
        "taskId": "Task_Act0",
        "key": "ParameterKey2",
        "value": "ParameterValue2"
      }
    ]
  }
}

TaskParameters can be used to pass parameters from ApexConfig to the policy logic. In the config, these are optional. The task parameters provided in the config are added to the tasks, and existing task parameters in a task with the same key are overridden.

If taskId is provided in ApexConfig for an entry, then that parameter is updated only for that particular task. Otherwise, the task parameter is added to all tasks.

Carrier Technologies

Carrier technologies define how APEX receives (input) and sends (output) events. They can be used in any combination, using asynchronous or synchronous mode. There can also be any number of carrier technologies for the input (consume) and the output (produce) interface.

Supported input technologies are:

  • Standard input, read events from the standard input (console), not suitable for APEX background servers

  • File input, read events from a file

  • Kafka, read events from a Kafka system

  • Websockets, read events from a Websocket

  • JMS,

  • REST (synchronous and asynchronous), additionally as client or server

  • Event Requestor, allows reading of events that have been looped back into APEX

Supported output technologies are:

  • Standard output, write events to the standard output (console), not suitable for APEX background servers

  • File output, write events to a file

  • Kafka, write events to a Kafka system

  • Websockets, write events to a Websocket

  • JMS

  • REST (synchronous and asynchronous), additionally as client or server

  • Event Requestor, allows events to be looped back into APEX

New carrier technologies can be added as plugins to APEX or developed outside APEX and added to an APEX deployment.

Standard IO

Standard IO does not require a specific plugin; it is supported by default.

Standard Input

APEX will take events from its standard input. This carrier is good for testing, but certainly not for a use case where APEX runs as a server. The configuration is as follows:

         "carrierTechnologyParameters" : {
           "carrierTechnology" : "FILE", (1)
           "parameters" : {
             "standardIO" : true (2)
           }
         }

.. container:: colist arabic

   +-------+---------------------------------------+
   | **1** | standard input is considered a file   |
   +-------+---------------------------------------+
   | **2** | file descriptor set to standard input |
   +-------+---------------------------------------+
Standard Output

APEX will send events to its standard output. This carrier is good for testing, but certainly not for a use case where APEX runs as a server. The configuration is as follows:

"carrierTechnologyParameters" : {
  "carrierTechnology" : "FILE", (1)
  "parameters" : {
    "standardIO" : true  (2)
  }
}

(1) standard output is considered a file
(2) file descriptor set to standard output

File IO

File IO does not require a specific plugin; it is supported by default.

File Input

APEX will take events from a file. The same file should not be used as an output. The configuration is as follows:

"carrierTechnologyParameters" : {
  "carrierTechnology" : "FILE", (1)
  "parameters" : {
    "fileName" : "examples/events/SampleDomain/EventsIn.xmlfile" (2)
  }
}

(1) set file input
(2) the name of the file to read events from

File Output

APEX will write events to a file. The same file should not be used as an input. The configuration is as follows:

"carrierTechnologyParameters" : {
  "carrierTechnology" : "FILE", (1)
  "parameters" : {
    "fileName"  : "examples/events/SampleDomain/EventsOut.xmlfile" (2)
  }
}

(1) set file output
(2) the name of the file to write events to

Event Requestor IO

Event Requestor IO does not require a specific plugin; it is supported by default. It should only be used with the APEX event protocol.

Event Requestor Input

APEX will take events from APEX.

"carrierTechnologyParameters" : {
  "carrierTechnology": "EVENT_REQUESTOR" (1)
}

(1) set event requestor input

Event Requestor Output

APEX will write events to APEX.

"carrierTechnologyParameters" : {
  "carrierTechnology": "EVENT_REQUESTOR" (1)
}
Peering Event Requestors

When using event requestors, they need to be peered. This means an event requestor output needs to be peered (associated) with an event requestor input. The following example shows the use of an event requestor with the APEX event protocol and the peering of output and input.

"eventInputParameters": {
  "EventRequestorConsumer": {
    "carrierTechnologyParameters": {
      "carrierTechnology": "EVENT_REQUESTOR" (1)
    },
    "eventProtocolParameters": {
      "eventProtocol": "APEX" (2)
    },
    "eventNameFilter": "InputEvent", (3)
    "requestorMode": true, (4)
    "requestorPeer": "EventRequestorProducer", (5)
    "requestorTimeout": 500 (6)
  }
},
"eventOutputParameters": {
  "EventRequestorProducer": {
    "carrierTechnologyParameters": {
      "carrierTechnology": "EVENT_REQUESTOR" (7)
    },
    "eventProtocolParameters": {
      "eventProtocol": "APEX" (8)
    },
    "eventNameFilter": "EventListEvent", (9)
    "requestorMode": true, (10)
    "requestorPeer": "EventRequestorConsumer", (11)
    "requestorTimeout": 500 (12)
  }
}

(1) event requestor on a consumer
(2) with APEX event protocol
(3) optional filter (best to use a filter to prevent unwanted events on the consumer side)
(4) activate requestor mode
(5) the peer to the output (must match the output carrier)
(6) an optional timeout in milliseconds
(7) event requestor on a producer
(8) with APEX event protocol
(9) optional filter (best to use a filter to prevent unwanted events on the consumer side)
(10) activate requestor mode
(11) the peer to the input (must match the input carrier)
(12) an optional timeout in milliseconds

Kafka IO

Kafka IO is supported by the APEX Kafka plugin. The configurations below are examples. APEX will take any configuration inside the parameter object and forward it to Kafka. More information on Kafka specific configuration parameters can be found in the Kafka documentation.

Kafka Input

APEX will receive events from the Apache Kafka messaging system. The input is uni-directional: an engine will only receive events from the input and will not send any events to the input.

"carrierTechnologyParameters" : {
  "carrierTechnology" : "KAFKA", (1)
  "parameterClassName" :
    "org.onap.policy.apex.plugins.event.carrier.kafka.KAFKACarrierTechnologyParameters",
  "parameters" : {
    "bootstrapServers"  : "localhost:49092", (2)
    "groupId"           : "apex-group-id", (3)
    "enableAutoCommit"  : true, (4)
    "autoCommitTime"    : 1000, (5)
    "sessionTimeout"    : 30000, (6)
    "consumerPollTime"  : 100, (7)
    "consumerTopicList" : ["apex-in-0", "apex-in-1"], (8)
    "keyDeserializer"   :
        "org.apache.kafka.common.serialization.StringDeserializer", (9)
    "valueDeserializer" :
        "org.apache.kafka.common.serialization.StringDeserializer" (10)
    "kafkaProperties": [  (11)
                         [
                           "security.protocol",
                           "SASL_SSL"
                         ],
                         [
                           "ssl.truststore.type",
                           "JKS"
                         ],
                         [
                           "ssl.truststore.location",
                           "/opt/app/policy/apex-pdp/etc/ssl/test.jks"
                         ],
                         [
                           "ssl.truststore.password",
                           "policy0nap"
                         ],
                         [
                           "sasl.mechanism",
                           "SCRAM-SHA-512"
                         ],
                         [
                           "sasl.jaas.config",
                           "org.apache.kafka.common.security.scram.ScramLoginModule required username=\"policy\" password=\"policy\";"
                         ],
                         [
                           "ssl.endpoint.identification.algorithm",
                           ""
                         ]
                       ]
  }
}

(1) set Kafka as carrier technology
(2) bootstrap server and port
(3) a group identifier
(4) flag for auto-commit
(5) auto-commit timeout in milliseconds
(6) session timeout in milliseconds
(7) consumer poll time in milliseconds
(8) consumer topic list
(9) key for the Kafka de-serializer
(10) value for the Kafka de-serializer
(11) properties for Kafka connectivity

Kindly note that the above Kafka properties are just a reference; the actual properties required depend on the Kafka server installation.

In cases where the message produced in the Kafka topic has been serialized using KafkaAvroSerializer, the following parameters need to be added to kafkaProperties so that the consumer can properly deserialize the message while consuming.

[
  "value.deserializer",
  "io.confluent.kafka.serializers.KafkaAvroDeserializer"
],
[
  "schema.registry.url",
  "<url of the schema registry configured in Kafka cluster for registering Avro schemas>"
]

For more details on how to set up a schema registry for a Kafka cluster, take a look at the schema registry documentation.

Kafka Output

APEX will send events to the Apache Kafka messaging system. The output is uni-directional: an engine will send events to the output but will not receive any events from the output.

"carrierTechnologyParameters" : {
  "carrierTechnology" : "KAFKA", (1)
  "parameterClassName" :
    "org.onap.policy.apex.plugins.event.carrier.kafka.KAFKACarrierTechnologyParameters",
  "parameters" : {
    "bootstrapServers"  : "localhost:49092", (2)
    "acks"              : "all", (3)
    "retries"           : 0, (4)
    "batchSize"         : 16384, (5)
    "lingerTime"        : 1, (6)
    "bufferMemory"      : 33554432, (7)
    "producerTopic"     : "apex-out", (8)
    "keySerializer"     :
        "org.apache.kafka.common.serialization.StringSerializer", (9)
    "valueSerializer"   :
        "org.apache.kafka.common.serialization.StringSerializer" (10)
    "kafkaProperties": [  (11)
                         [
                           "security.protocol",
                           "SASL_SSL"
                         ],
                         [
                           "ssl.truststore.type",
                           "JKS"
                         ],
                         [
                           "ssl.truststore.location",
                           "/opt/app/policy/apex-pdp/etc/ssl/test.jks"
                         ],
                         [
                           "ssl.truststore.password",
                           "policy0nap"
                         ],
                         [
                           "sasl.mechanism",
                           "SCRAM-SHA-512"
                         ],
                         [
                           "sasl.jaas.config",
                           "org.apache.kafka.common.security.scram.ScramLoginModule required username=\"policy\" password=\"policy\";"
                         ],
                         [
                           "ssl.endpoint.identification.algorithm",
                           ""
                         ]
                       ]
  }
}

(1) set Kafka as carrier technology
(2) bootstrap server and port
(3) acknowledgement strategy
(4) number of retries
(5) batch size
(6) time to linger in milliseconds
(7) buffer memory in bytes
(8) producer topic
(9) key for the Kafka serializer
(10) value for the Kafka serializer
(11) properties for Kafka connectivity

Kindly note that the above Kafka properties are just a reference; the actual properties required depend on the Kafka server installation.

JMS IO

APEX supports the Java Messaging Service (JMS) as input as well as output. JMS IO is supported by the APEX JMS plugin. Input and output support an event encoding as text (JSON string) or object (serialized object). The input configuration is the same for both encodings, the output configuration differs.

JMS Input

APEX will receive events from a JMS messaging system. The input is uni-directional: an engine will only receive events from the input and will not send any events to the input.

"carrierTechnologyParameters" : {
  "carrierTechnology" : "JMS", (1)
  "parameterClassName" :
      "org.onap.policy.apex.plugins.event.carrier.jms.JMSCarrierTechnologyParameters",
  "parameters" : { (2)
    "initialContextFactory" :
        "org.jboss.naming.remote.client.InitialContextFactory", (3)
    "connectionFactory" : "ConnectionFactory", (4)
    "providerURL" : "remote://localhost:5445", (5)
    "securityPrincipal" : "guest", (6)
    "securityCredentials" : "IAmAGuest", (7)
    "consumerTopic" : "jms/topic/apexIn" (8)
  }
}

(1) set JMS as carrier technology
(2) set all JMS specific parameters
(3) the context factory, in this case from JBOSS (it requires the dependency org.jboss:jboss-remote-naming:2.0.4.Final or a different version to be in the directory $APEX_HOME/lib or %APEX_HOME%\lib)
(4) a connection factory for the JMS connection
(5) URL with host and port of the JMS provider
(6) access credentials, user name
(7) access credentials, user password
(8) the JMS topic to listen to

JMS Output with Text

The APEX engine sends events to a JMS messaging system. The output is uni-directional: an engine will send events to the output but will not receive any events from the output.

"carrierTechnologyParameters" : {
  "carrierTechnology" : "JMS", (1)
  "parameterClassName" :
      "org.onap.policy.apex.plugins.event.carrier.jms.JMSCarrierTechnologyParameters",
  "parameters" : { (2)
    "initialContextFactory" :
        "org.jboss.naming.remote.client.InitialContextFactory", (3)
    "connectionFactory" : "ConnectionFactory", (4)
    "providerURL" : "remote://localhost:5445", (5)
    "securityPrincipal" : "guest", (6)
    "securityCredentials" : "IAmAGuest", (7)
    "producerTopic" : "jms/topic/apexOut", (8)
    "objectMessageSending": "false" (9)
  }
}

1

set JMS as carrier technology

2

set all JMS specific parameters

3

the context factory, in this case from JBOSS (it requires the dependency org.jboss:jboss-remote-naming:2.0 .4.Final or a different version to be in the directory $APEX_HOME/lib or %APEX_HOME%\lib

4

a connection factory for the JMS connection

5

URL with host and port of the JMS provider

6

access credentials, user name

7

access credentials, user password

8

the JMS topic to write to

9

setting objectMessageSending to false means events are sent as JSON text

JMS Output with Object

To configure APEX for JMS objects on the output interface, use the same configuration as above (for output). Simply change the objectMessageSending parameter to true, as in the fragment below.
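
For reference, a sketch of the changed parameters block, with all other values taken from the text output example above:

"parameters" : {
  "initialContextFactory" :
      "org.jboss.naming.remote.client.InitialContextFactory",
  "connectionFactory" : "ConnectionFactory",
  "providerURL" : "remote://localhost:5445",
  "securityPrincipal" : "guest",
  "securityCredentials" : "IAmAGuest",
  "producerTopic" : "jms/topic/apexOut",
  "objectMessageSending": "true"
}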

Websocket (WS) IO

APEX supports Websockets as input as well as output. WS IO is supported by the APEX Websocket plugin. This carrier technology only supports uni-directional communication: APEX will not send events to a Websocket input, and any event sent to a Websocket output will result in an error log.

The input can be configured as client (APEX connects to an existing Websocket server) or as server (APEX starts a Websocket server). The same applies to the output. Input and output can both use a client configuration, both use a server configuration, or use separate configurations (input as client and output as server, or input as server and output as client). Each configuration should use its own dedicated port to avoid communication loops. The configuration of a Websocket client is the same for input and output, and the configuration of a Websocket server is likewise the same for input and output.

Websocket Client

APEX will connect to a given Websocket server. As input, it will receive events from the server but not send any events. As output, it will send events to the server and any event received from the server will result in an error log.

"carrierTechnologyParameters" : {
  "carrierTechnology" : "WEBSOCKET", (1)
  "parameterClassName" :
  "org.onap.policy.apex.plugins.event.carrier.websocket.WEBSOCKETCarrierTechnologyParameters",
  "parameters" : {
    "host" : "localhost", (2)
    "port" : 42451 (3)
  }
}

1

set Websocket as carrier technology

2

the host name on which a Websocket server is running

3

the port of that Websocket server

Websocket Server

APEX will start a Websocket server, which will accept connections from any Websocket client. As input, it will receive events from connected clients but not send any events. As output, it will send events to connected clients, and any event received from a client will result in an error log.

"carrierTechnologyParameters" : {
  "carrierTechnology" : "WEBSOCKET", (1)
  "parameterClassName" :
  "org.onap.policy.apex.plugins.event.carrier.websocket.WEBSOCKETCarrierTechnologyParameters",
  "parameters" : {
    "wsClient" : false, (2)
    "port"     : 42450 (3)
  }
}

1

set Websocket as carrier technology

2

disable client, so that APEX will start a Websocket server

3

the port for the Websocket server APEX will start

REST Client IO

APEX can act as REST client on the input as well as on the output interface. The media type is application/json, so this plugin only works with the JSON Event protocol.

REST Client Input

APEX will connect to a given URL to receive events, but will not send any events. The server is polled: APEX does an HTTP GET, takes the result, and then does the next GET. Any required timing needs to be handled by the server configured via the URL; for instance, the server could support a wait timeout via the URL as ?timeout=100ms. The httpCodeFilter is used for filtering the status code and can be configured as a regular expression string. The default httpCodeFilter is “[2][0-9][0-9]”, which matches successful response codes. A response with an HTTP status code that matches the given regular expression is forwarded to the task; otherwise it is logged as a failure.

"carrierTechnologyParameters" : {
  "carrierTechnology" : "RESTCLIENT", (1)
  "parameterClassName" :
    "org.onap.policy.apex.plugins.event.carrier.restclient.RESTClientCarrierTechnologyParameters",
  "parameters" : {
    "url" : "http://example.org:8080/triggers/events", (2)
    "httpMethod": "GET", (3)
    "httpCodeFilter" : "[2][0-9][0-9]", (4)
     "httpHeaders" : [ (5)
        ["Keep-Alive", "300"],
        ["Cache-Control", "no-cache"]
     ]
  }
}

1

set REST client as carrier technology

2

the URL of the HTTP server for events

3

the HTTP method to use (GET/PUT/POST/DELETE), optional, defaults to GET

4

use HTTP CODE FILTER for filtering status code, optional, defaults to [2][0-9][0-9]

5

HTTP headers to use on the REST request, optional

REST Client Output

APEX will connect to a given URL to send events, but will not receive any events. The default HTTP operation is POST (no configuration required); to change it to PUT, simply add the configuration parameter (as shown in the example below). The URL can be configured statically or tagged, as in example.{site}.org:8080/{trig}/events; all tags such as site and trig in the URL need to be set in the properties object available to the tasks. In addition, the keys should exactly match the tags defined in the URL. The scope of the properties object is per HTTP call, so key/value pairs set in the properties object by a task are only available for that specific HTTP call.

"carrierTechnologyParameters" : {
  "carrierTechnology" : "RESTCLIENT", (1)
  "parameterClassName" :
    "org.onap.policy.apex.plugins.event.carrier.restclient.RESTClientCarrierTechnologyParameters",
  "parameters" : {
    "url" : "http://example.com:8888/actions/events", (2)
    "url" : "http://example.{site}.com:8888/{trig}/events", (2')
    "httpMethod" : "PUT". (3)
    "httpHeaders" : [ (4)
       ["Keep-Alive", "300"],
       ["Cache-Control", "no-cache"]
    ]
  }
}

1

set REST client as carrier technology

2

the static URL of the HTTP server for events

2’

the tagged URL of the HTTP server for events

3

the HTTP method to use (GET/PUT/POST/DELETE), optional, defaults to POST

4

HTTP headers to use on the REST request, optional

REST Server IO

APEX supports a REST server for input and output.

The REST server plugin always uses a synchronous mode. A client does an HTTP GET on the APEX REST server with the input event and receives the generated output event in the server reply. This means that for the REST server there always has to be an input with an associated output; input only or output only is not permitted.

The plugin will start a Grizzly server as REST server for a normal APEX engine. If the APEX engine is executed as a servlet, for instance inside Tomcat, then Tomcat will be used as REST server (this case requires configuration on Tomcat as well).

Some configuration restrictions apply for all scenarios:

  • Minimum port: 1024

  • Maximum port: 65535

  • The media type is application/json, so this plugin only works with the JSON Event protocol.

The URL the client calls is created using

  • the configured host and port, e.g. http://localhost:12345

  • the standard path, e.g. /apex/

  • the name of the input/output, e.g. FirstConsumer/

  • the input or output name, e.g. EventIn.

The examples above lead to the URL http://localhost:12345/apex/FirstConsumer/EventIn.

A client can also get status information of the REST server using /Status, e.g. http://localhost:12345/apex/FirstConsumer/Status.
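
As a sketch of how a client can exercise this interface with curl, following the GET-based interaction described above (the event fields shown are hypothetical and depend on the policy model in use):

# check the status of the REST server input
curl http://localhost:12345/apex/FirstConsumer/Status

# send an input event and receive the generated output event in the reply
curl -X GET http://localhost:12345/apex/FirstConsumer/EventIn \
     -d '{"name":"MyEvent","version":"0.0.1","source":"test","target":"apex"}'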

REST Server Stand-alone

We need to configure a REST server input and a REST server output. Input and output are associated with each other via their names.

Timeouts for REST calls need to be set carefully. If they are too short, the call might time out before a policy has finished creating an event.

The following example configures the input named MyConsumer and associates an output named MyProducer with it.

"eventInputParameters": {
  "MyConsumer": {
    "carrierTechnologyParameters" : {
      "carrierTechnology" : "RESTSERVER", (1)
      "parameterClassName" :
        "org.onap.policy.apex.plugins.event.carrier.restserver.RESTServerCarrierTechnologyParameters",
      "parameters" : {
        "standalone" : true, (2)
        "host" : "localhost", (3)
        "port" : 12345 (4)
      }
    },
    "eventProtocolParameters":{
      "eventProtocol" : "JSON" (5)
    },
    "synchronousMode"    : true, (6)
    "synchronousPeer"    : "MyProducer", (7)
    "synchronousTimeout" : 500 (8)
  }
}

1

set REST server as carrier technology

2

set the server as stand-alone

3

set the server host

4

set the server listen port

5

use JSON event protocol

6

activate synchronous mode

7

associate an output MyProducer

8

set a timeout of 500 milliseconds

The following example configures the output named MyProducer and associates the input MyConsumer with it. Note that for the output there are no more parameters (such as host or port), since they are already configured in the associated input.

"eventOutputParameters": {
  "MyProducer": {
    "carrierTechnologyParameters":{
      "carrierTechnology" : "RESTSERVER",
      "parameterClassName" :
        "org.onap.policy.apex.plugins.event.carrier.restserver.RESTServerCarrierTechnologyParameters"
    },
    "eventProtocolParameters":{
      "eventProtocol" : "JSON"
    },
    "synchronousMode"    : true,
    "synchronousPeer"    : "MyConsumer",
    "synchronousTimeout" : 500
  }
}
REST Server Stand-alone, multi input

Any number of input/output pairs for REST servers can be configured. For instance, we can configure an input FirstConsumer with output FirstProducer and an input SecondConsumer with output SecondProducer. What is important is that inputs and outputs always come in pairs; a sketch of such a configuration is shown below.
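
A minimal sketch of the input side of such a two-pair configuration, assuming ports 12345 and 12346 (the port values are illustrative; the outputs FirstProducer and SecondProducer are configured analogously to the single-pair example above):

"eventInputParameters": {
  "FirstConsumer": {
    "carrierTechnologyParameters" : {
      "carrierTechnology" : "RESTSERVER",
      "parameterClassName" :
        "org.onap.policy.apex.plugins.event.carrier.restserver.RESTServerCarrierTechnologyParameters",
      "parameters" : {
        "standalone" : true,
        "host" : "localhost",
        "port" : 12345
      }
    },
    "eventProtocolParameters":{ "eventProtocol" : "JSON" },
    "synchronousMode"    : true,
    "synchronousPeer"    : "FirstProducer",
    "synchronousTimeout" : 500
  },
  "SecondConsumer": {
    "carrierTechnologyParameters" : {
      "carrierTechnology" : "RESTSERVER",
      "parameterClassName" :
        "org.onap.policy.apex.plugins.event.carrier.restserver.RESTServerCarrierTechnologyParameters",
      "parameters" : {
        "standalone" : true,
        "host" : "localhost",
        "port" : 12346
      }
    },
    "eventProtocolParameters":{ "eventProtocol" : "JSON" },
    "synchronousMode"    : true,
    "synchronousPeer"    : "SecondProducer",
    "synchronousTimeout" : 500
  }
}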

REST Server Stand-alone in Servlet

If APEX is executed as a servlet, e.g. inside Tomcat, the configuration becomes easier since the plugin can now use Tomcat as the REST server. In this scenario, there are no parameters (port, host, etc.), and the key standalone must not be used (or must be set to false).

For the Tomcat configuration, we need to add the REST server plugin, e.g.

<servlet>
  ...
  <init-param>
    ...
    <param-value>org.onap.policy.apex.plugins.event.carrier.restserver</param-value>
  </init-param>
  ...
</servlet>
REST Requestor IO

APEX can act as REST requestor on the input as well as on the output interface. The media type is application/json, so this plugin only works with the JSON Event protocol. This plugin allows APEX to send REST requests and to receive the reply of that request without tying up APEX resources while the request is being processed. The REST Requestor pairs a REST requestor producer and consumer together to handle the REST request and response. The REST request is created from an APEX output event and the REST response is input into APEX as a new input event.

REST Requestor Output (REST Request Producer)

APEX sends a REST request when events are output by APEX; the REST request configuration is specified on the REST Request Consumer (see below).

"carrierTechnologyParameters": {
  "carrierTechnology": "RESTREQUESTOR", (1)
  "parameterClassName": "org.onap.policy.apex.plugins.event.carrier.restrequestor.RESTRequestorCarrierTechnologyParameters"
},

1

set REST requestor as carrier technology

The settings below are required on the producer to define the event that triggers the REST request and to specify the peered consumer configuration for the REST request, for example:

"eventNameFilter": "GuardRequestEvent", (1)
"requestorMode": true, (2)
"requestorPeer": "GuardRequestorConsumer", (3)
"requestorTimeout": 500 (4)

1

a filter on the event

2

requestor mode must be set to true

3

the peered consumer for REST requests; that consumer specifies the full configuration for REST requests

4

the request timeout in milliseconds, overridden by the timeout on the consumer if that is set; optional, defaults to 500 milliseconds
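
Putting the pieces together, a minimal sketch of a complete REST requestor output configuration, assuming the requestor settings sit at the same level as the carrier technology and event protocol parameters; the names GuardRequestorProducer, GuardRequestorConsumer, and GuardRequestEvent are taken from the fragments above and are illustrative:

"eventOutputParameters": {
  "GuardRequestorProducer": {
    "carrierTechnologyParameters": {
      "carrierTechnology": "RESTREQUESTOR",
      "parameterClassName": "org.onap.policy.apex.plugins.event.carrier.restrequestor.RESTRequestorCarrierTechnologyParameters"
    },
    "eventProtocolParameters": { "eventProtocol": "JSON" },
    "eventNameFilter": "GuardRequestEvent",
    "requestorMode": true,
    "requestorPeer": "GuardRequestorConsumer",
    "requestorTimeout": 500
  }
}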

REST Requestor Input (REST Request Consumer)

APEX will connect to a given URL to issue a REST request and wait for a REST response. The URL can be configured statically or tagged, as in example.{site}.org:8080/{trig}/events; all tags such as site and trig in the URL need to be set in the properties object available to the tasks. In addition, the keys should exactly match the tags defined in the URL. The scope of the properties object is per HTTP call, so key/value pairs set in the properties object by a task are only available for that specific HTTP call. The httpCodeFilter is used for filtering the status code and can be configured as a regular expression string. The default httpCodeFilter is “[2][0-9][0-9]”, which matches successful response codes. A response with an HTTP status code that matches the given regular expression is forwarded to the task; otherwise it is logged as a failure.

"carrierTechnologyParameters": {
  "carrierTechnology": "RESTREQUESTOR", (1)
  "parameterClassName": "org.onap.policy.apex.plugins.event.carrier.restrequestor.RESTRequestorCarrierTechnologyParameters",
  "parameters": {
    "url": "http://localhost:54321/some/path/to/rest/resource", (2)
    "url": "http://localhost:54321/{site}/path/to/rest/{resValue}", (2')
    "httpMethod": "POST", (3)
    "requestorMode": true, (4)
    "requestorPeer": "GuardRequestorProducer", (5)
    "restRequestTimeout": 2000, (6)
    "httpCodeFilter" : "[2][0-9][0-9]" (7)
    "httpHeaders" : [ (8)
       ["Keep-Alive", "300"],
       ["Cache-Control", "no-cache"]
    ]                          }
},

1

set REST requestor as carrier technology

2

the static URL of the HTTP server for events

2’

the tagged URL of the HTTP server for events

3

the HTTP method to use (GET/PUT/POST/DELETE), optional, defaults to GET

4

requestor mode must be set to true

5

the peered producer for REST requests; that producer specifies the APEX output event that triggers the REST request

6

request timeout in milliseconds, overrides any value set in the REST Requestor Producer; optional, defaults to 500 milliseconds

7

use HTTP CODE FILTER for filtering the status code, optional, defaults to [2][0-9][0-9]

8

HTTP headers to use on the REST request, optional

Further settings may be required on the consumer to define the input event that is produced and forwarded into APEX, for example:

"eventName": "GuardResponseEvent", (1)
"eventNameFilter": "GuardResponseEvent" (2)

1

the event name

2

a filter on the event
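
Analogously, a minimal sketch of the complete REST requestor input configuration, combining the carrier technology fragment and the event settings above (names illustrative, parameter nesting as in the example above):

"eventInputParameters": {
  "GuardRequestorConsumer": {
    "carrierTechnologyParameters": {
      "carrierTechnology": "RESTREQUESTOR",
      "parameterClassName": "org.onap.policy.apex.plugins.event.carrier.restrequestor.RESTRequestorCarrierTechnologyParameters",
      "parameters": {
        "url": "http://localhost:54321/some/path/to/rest/resource",
        "httpMethod": "POST",
        "requestorMode": true,
        "requestorPeer": "GuardRequestorProducer",
        "restRequestTimeout": 2000
      }
    },
    "eventProtocolParameters": { "eventProtocol": "JSON" },
    "eventName": "GuardResponseEvent",
    "eventNameFilter": "GuardResponseEvent"
  }
}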

gRPC IO

APEX can send requests over gRPC on the output side and receive responses on the input side. This can be used to send requests to CDS over gRPC. The media type is application/json, so this plugin only works with the JSON Event protocol.

gRPC Output

APEX will connect to a given host to send a request over gRPC.

"carrierTechnologyParameters": {
  "carrierTechnology": "GRPC", (1)
  "parameterClassName": "org.onap.policy.apex.plugins.event.carrier.grpc.GrpcCarrierTechnologyParameters",
  "parameters": {
    "host": "cds-blueprints-processor-grpc", (2)
    "port": 9111, (2')
    "username": "ccsdkapps", (3)
    "password": ccsdkapps, (4)
    "timeout" : 10 (5)
  }
},

1

set GRPC as carrier technology

2

the host to which the request is sent

2’

the value for port

3

username required to initiate connection

4

password required to initiate connection

5

the timeout value for completing the request

Further settings are required on the producer to define the event that is requested, for example:

"eventName": "GRPCRequestEvent", (1)
"eventNameFilter": "GRPCRequestEvent", (2)
"requestorMode": true, (3)
"requestorPeer": "GRPCRequestConsumer", (4)
"requestorTimeout": 500 (5)

1

the event name

2

a filter on the event

3

the mode of the requestor

4

a peer for the requestor

5

a general request timeout

gRPC Input

APEX will connect to the host specified on the producer side and take in the response on the consumer side.

"carrierTechnologyParameters": {
  "carrierTechnology": "GRPC", (1)
  "parameterClassName": "org.onap.policy.apex.plugins.event.carrier.grpc.GrpcCarrierTechnologyParameters"
},

1

set GRPC as carrier technology

Further settings are required on the consumer to define the event that is requested, for example:

"eventNameFilter": "GRPCResponseEvent", (1)
"requestorMode": true, (2)
"requestorPeer": "GRPCRequestProducer", (3)
"requestorTimeout": 500 (4)

1

a filter on the event

2

the mode of the requestor

3

a peer for the requestor

4

a general request timeout
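
As with the REST requestor, a minimal sketch of how these settings sit in a complete consumer configuration (the names GRPCRequestConsumer, GRPCResponseEvent, and GRPCRequestProducer follow the fragments above and are illustrative):

"eventInputParameters": {
  "GRPCRequestConsumer": {
    "carrierTechnologyParameters": {
      "carrierTechnology": "GRPC",
      "parameterClassName": "org.onap.policy.apex.plugins.event.carrier.grpc.GrpcCarrierTechnologyParameters"
    },
    "eventProtocolParameters": { "eventProtocol": "JSON" },
    "eventNameFilter": "GRPCResponseEvent",
    "requestorMode": true,
    "requestorPeer": "GRPCRequestProducer",
    "requestorTimeout": 500
  }
}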

Event Protocols, Format and Encoding

Event protocols define what event formats APEX can receive (input) and should send (output). They can be used in any combination for input and output, unless further restricted by a carrier technology plugin (for instance for JMS output). There can be only one event protocol per event plugin.

Supported input event protocols are:

  • JSON, the event as a JSON string

  • APEX, an APEX event

  • JMS object, the event as a JMS object,

  • JMS text, the event as a JMS text,

  • XML, the event as an XML string,

  • YAML, the event as YAML text

Supported output event protocols are:

  • JSON, the event as a JSON string

  • APEX, an APEX event

  • JMS object, the event as a JMS object,

  • JMS text, the event as a JMS text,

  • XML, the event as an XML string,

  • YAML, the event as YAML text

New event protocols can be added as plugins to APEX or developed outside APEX and added to an APEX deployment.

JSON Event

The event protocol for JSON encoding does not require a specific plugin; it is supported by default. Furthermore, there is no difference in the configuration for the input and output interface.

For an input, APEX requires a well-formed JSON string. Well-formed here means according to the definitions of a policy. Any JSON string that is not defined as a trigger event (consume) will not be consumed (errors will be thrown). For output JSON events, APEX will always produce valid JSON strings according to the definition in the policy model.

The following JSON shows the configuration.

"eventProtocolParameters":{
  "eventProtocol" : "JSON"
}

For JSON events, there are a few more optional parameters, which allow a mapping for standard event fields to be defined. An APEX event must have the fields name, version, source, and target defined. Sometimes it is not possible to configure a trigger or actioning system to use those fields. However, they might be in an event generated outside APEX (or used outside APEX) just with different names. To configure APEX to map between the different event names, simply add the following parameters to a JSON event:

"eventProtocolParameters":{
  "eventProtocol" : "JSON",
  "nameAlias"     : "policyName", (1)
  "versionAlias"  : "policyVersion", (2)
  "sourceAlias"   : "from", (3)
  "targetAlias"   : "to", (4)
  "nameSpaceAlias": "my.name.space" (5)
}

1

mapping for the name field, here from a field called policyName

2

mapping for the version field, here from a field called policyVersion

3

mapping for the source field, here from a field called from (only for an input event)

4

mapping for the target field, here from a field called to (only for an output event)

5

mapping for the nameSpace field, here from a field called my.name.space
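
For illustration, a hypothetical incoming event matching the alias configuration above; APEX reads the name from policyName, the version from policyVersion, the source from from, and the target from to:

{
  "policyName"    : "SamplePolicy",
  "policyVersion" : "0.0.1",
  "from"          : "Outside",
  "to"            : "Match",
  "otherField"    : "some value"
}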

APEX Event

The event protocol for APEX events does not require a specific plugin; it is supported by default. Furthermore, there is no difference in the configuration for the input and output interface.

For input and output APEX uses APEX events.

The following JSON shows the configuration.

"eventProtocolParameters":{
  "eventProtocol" : "APEX"
}
JMS Event

The event protocol for JMS is provided by the APEX JMS plugin. The plugin supports encoding as JSON text or as object. There is no difference in the configuration for the input and output interface.

JMS Text

If used as input, APEX will take a JMS message and extract a JSON string, then proceed as if a JSON event was received. If used as output, APEX will take the event produced by a policy, create a JSON string, and then wrap it into a JMS message.

The configuration for JMS text is as follows:

"eventProtocolParameters":{
  "eventProtocol" : "JMSTEXT",
  "parameterClassName" :
    "org.onap.policy.apex.plugins.event.protocol.jms.JMSTextEventProtocolParameters"
}
JMS Object

If used as input, APEX will take a JMS message, extract a Java Bean from the ObjectMessage message, construct an APEX event, and put the bean on the APEX event as a parameter. If used as output, APEX will take the event produced by a policy, create a Java Bean, and send it as a JMS message.

The configuration for JMS object is as follows:

"eventProtocolParameters":{
  "eventProtocol" : "JMSOBJECT",
  "parameterClassName" :
    "org.onap.policy.apex.plugins.event.protocol.jms.JMSObjectEventProtocolParameters"
}
YAML Event

The event protocol for YAML is provided by the APEX YAML plugin. There is no difference in the configuration for the input and output interface.

If used as input, APEX will consume events as YAML and map them to policy trigger events. YAML that is not well-formed and trigger events that are not understood will be rejected. If used as output, APEX produces YAML-encoded events from the event a policy produces. Those events will always be well-formed according to the definition in the policy model.

The following code shows the configuration.

"eventProtocolParameters":{
  "eventProtocol" : "XML",
  "parameterClassName" :
      "org.onap.policy.apex.plugins.event.protocol.yaml.YamlEventProtocolParameters"
}
XML Event

The event protocol for XML is provided by the APEX XML plugin. There is no difference in the configuration for the input and output interface.

If used as input, APEX will consume events as XML and map them to policy trigger events. XML that is not well-formed and trigger events that are not understood will be rejected. If used as output, APEX produces XML-encoded events from the event a policy produces. Those events will always be well-formed according to the definition in the policy model.

The following code shows the configuration.

"eventProtocolParameters":{
  "eventProtocol" : "XML",
  "parameterClassName" :
      "org.onap.policy.apex.plugins.event.protocol.xml.XMLEventProtocolParameters"
}
A configuration example

The following example loads all available plug-ins.

Events are consumed from a Websocket, APEX as client. Consumed event format is JSON.

Events are produced to Kafka. Produced event format is XML.

{
  "engineServiceParameters" : {
    "name"          : "MyApexEngine",
    "version"        : "0.0.1",
    "id"             :  45,
    "instanceCount"  : 4,
    "deploymentPort" : 12345,
    "engineParameters"    : {
      "executorParameters" : {
        "JAVASCRIPT" : {
          "parameterClassName" :
              "org.onap.policy.apex.plugins.executor.javascript.JavascriptExecutorParameters"
        },
        "JYTHON" : {
          "parameterClassName" :
              "org.onap.policy.apex.plugins.executor.jython.JythonExecutorParameters"
        },
        "JRUBY" : {
          "parameterClassName" :
              "org.onap.policy.apex.plugins.executor.jruby.JrubyExecutorParameters"
        },
        "JAVA" : {
          "parameterClassName" :
              "org.onap.policy.apex.plugins.executor.java.JavaExecutorParameters"
        },
        "MVEL" : {
          "parameterClassName" :
              "org.onap.policy.apex.plugins.executor.mvel.MVELExecutorParameters"
        }
      },
      "contextParameters" : {
        "parameterClassName" :
            "org.onap.policy.apex.context.parameters.ContextParameters",
        "schemaParameters" : {
          "Avro":{
             "parameterClassName" :
                 "org.onap.policy.apex.plugins.context.schema.avro.AvroSchemaHelperParameters"
          }
        }
      }
    }
  },
  "producerCarrierTechnologyParameters" : {
    "carrierTechnology" : "KAFKA",
    "parameterClassName" :
        "org.onap.policy.apex.plugins.event.carrier.kafka.KAFKACarrierTechnologyParameters",
    "parameters" : {
      "bootstrapServers"  : "localhost:49092",
      "acks"              : "all",
      "retries"           : 0,
      "batchSize"         : 16384,
      "lingerTime"        : 1,
      "bufferMemory"      : 33554432,
      "producerTopic"     : "apex-out",
      "keySerializer"     : "org.apache.kafka.common.serialization.StringSerializer",
      "valueSerializer"   : "org.apache.kafka.common.serialization.StringSerializer"
    }
  },
  "producerEventProtocolParameters" : {
    "eventProtocol" : "XML",
         "parameterClassName" :
             "org.onap.policy.apex.plugins.event.protocol.xml.XMLEventProtocolParameters"
  },
  "consumerCarrierTechnologyParameters" : {
    "carrierTechnology" : "WEBSOCKET",
    "parameterClassName" :
        "org.onap.policy.apex.plugins.event.carrier.websocket.WEBSOCKETCarrierTechnologyParameters",
    "parameters" : {
      "host" : "localhost",
      "port" : 88888
    }
  },
  "consumerEventProtocolParameters" : {
    "eventProtocol" : "JSON"
  }
}

Engine and Applications of the APEX System

Introduction to APEX Engine and Applications

The core of APEX is the APEX Engine, also known as the APEX Policy Engine or the APEX PDP (since it is in fact a Policy Decision Point). Besides this engine, an APEX system comes with a few applications intended to help with policy authoring, deployment, and execution.

The engine itself and most applications are started from the command line with command line arguments. This is called a Command Line Interface (CLI). Some applications require an installation on a webserver, as for instance the REST Editor. Those applications can be accessed via a web browser.

You can also use the available APEX APIs and applications to develop other applications as required. This includes policy languages (and associated parsers and compilers / interpreters), GUIs to access APEX or to define policies, clients to connect to APEX, etc.

For this documentation, we assume an installation of APEX as a full system based on a current ONAP release.

CLI on Unix, Windows, and Cygwin

A note on APEX CLI applications: all applications and the engine itself have been deployed and tested on different operating systems: Red Hat, Ubuntu, Debian, Mac OSX, Windows, Cygwin. Each operating system comes with its own way of configuring and executing Java. The main items here are:

  • For UNIX systems (RHL, Ubuntu, Debian, Mac OSX), the provided bash scripts work as expected with absolute paths (e.g. /opt/app/policy/apex-pdp/apex-pdp-2.0.0-SNAPSHOT/examples), indirect and linked paths (e.g. ../apex/apex), and path substitutions using environment settings (e.g. $APEX_HOME/bin/)

  • For Windows systems, the provided batch files (.bat) work as expected with absolute paths (e.g. C:\apex\apex-2.0.0-SNAPSHOT\examples), and path substitutions using environment settings (e.g. %APEX_HOME%\bin\)

  • For Cygwin systems, we assume a standard Cygwin installation with standard tools (mainly bash) using a Windows Java installation. This means that the bash scripts can be used as in UNIX; however, any argument pointing to files and directories needs to use either a DOS path (e.g. C:\apex\apex-2.0.0-SNAPSHOT\examples\config...) or the command cygpath with a mixed option. The reason for that is: Cygwin executes Java using UNIX paths but then runs Java as a DOS/WINDOWS process, which requires DOS paths for file access.

The APEX Engine

The APEX engine can be started in different ways, depending on your requirements. All scripts are located in the APEX bin directory.

On UNIX and Cygwin systems use:

  • apexEngine.sh - this script will

    • Test if $APEX_USER is set and if the user exists, terminate with an error otherwise

    • Test if $APEX_HOME is set. If not set, it will use the default setting /opt/app/policy/apex-pdp/apex-pdp. The set directory is then tested for existence; the script will terminate if it does not exist.

    • When all tests are passed successfully, the script will call apexApps.sh with arguments to start the APEX engine.

  • apexApps.sh engine - this is the general APEX application launcher, which will

    • Start the engine with the argument engine

    • Test if $APEX_HOME is set and points to an existing directory. If not set or the directory does not exist, the script terminates.

    • Not test for any settings of $APEX_USER.

On Windows systems use apexEngine.bat and apexApps.bat engine respectively. Note: none of the Windows batch files will test for %APEX_USER%.

Summary of alternatives to start the APEX Engine:

Unix, Cygwin

Windows

# $APEX_HOME/bin/apexEngine.sh [args]
# $APEX_HOME/bin/apexApps.sh engine [args]
> %APEX_HOME%\bin\apexEngine.bat [args]
> %APEX_HOME%\bin\apexApps.bat engine [args]

The APEX engine comes with a few CLI arguments; the main one sets the TOSCA policy file for execution. The TOSCA policy file is always required. The option -h prints a help screen.

usage: org.onap.policy.apex.service.engine.main.ApexMain [options...]
options
-p,--tosca-policy-file <TOSCA_POLICY_FILE>     the full path to the ToscaPolicy file to use.
-h,--help                                      outputs the usage of this command
-v,--version                                   outputs the version of Apex
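
For example, starting the engine with a TOSCA policy (the policy file path is illustrative):

# $APEX_HOME/bin/apexEngine.sh -p $APEX_HOME/examples/MyToscaPolicy.json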
The APEX CLI Editor

The CLI Editor allows policies to be defined from the command line. The application uses a simple language and supports all elements of an APEX policy. It can be used in two different ways:

  • non-interactive, specifying a file with the commands to create a policy

  • interactive, using the editor's CLI to create a policy

When a policy is fully specified, the editor will generate the APEX core policy specification in JSON. This core specification is called the policy model in the APEX engine and can be used directly with the APEX engine.

On UNIX and Cygwin systems use:

  • apexCLIEditor.sh - simply starts the CLI editor, arguments to the script determine the mode of the editor

  • apexApps.sh cli-editor - simply starts the CLI editor, arguments to the script determine the mode of the editor

On Windows systems use:

  • apexCLIEditor.bat - simply starts the CLI editor, arguments to the script determine the mode of the editor

  • apexApps.bat cli-editor - simply starts the CLI editor, arguments to the script determine the mode of the editor

Summary of alternatives to start the APEX CLI Editor:

Unix, Cygwin

Windows

# $APEX_HOME/bin/apexCLIEditor.sh [args]
# $APEX_HOME/bin/apexApps.sh cli-editor [args]
> %APEX_HOME%\bin\apexCLIEditor.bat [args]
> %APEX_HOME%\bin\apexApps.bat cli-editor [args]

The option -h provides a help screen with all command line arguments.

usage: org.onap.policy.apex.auth.clieditor.ApexCLIEditorMain [options...]
options
 -a,--model-props-file <MODEL_PROPS_FILE>       name of the apex model properties file to use
 -c,--command-file <COMMAND_FILE>               name of a file containing editor commands to run into the editor
 -h,--help                                      outputs the usage of this command
 -i,--input-model-file <INPUT_MODEL_FILE>       name of a file that contains an input model for the editor
 -if,--ignore-failures <IGNORE_FAILURES_FLAG>   true or false, ignore failures of commands in command files and continue
                                                executing the command file
 -l,--log-file <LOG_FILE>                       name of a file that will contain command logs from the editor, will log
                                                to standard output if not specified or suppressed with "-nl" flag
 -m,--metadata-file <CMD_METADATA_FILE>         name of the command metadata file to use
 -nl,--no-log                                   if specified, no logging or output of commands to standard output or log
                                                file is carried out
 -nm,--no-model-output                          if specified, no output of a model to standard output or model output
                                                file is carried out, the user can use the "save" command in a script to
                                                save a model
 -o,--output-model-file <OUTPUT_MODEL_FILE>     name of a file that will contain the output model for the editor, will
                                                output model to standard output if not specified or suppressed with
                                                "-nm" flag
 -wd,--working-directory <WORKING_DIRECTORY>    the working directory that is the root for the CLI editor and is the
                                                root from which to look for included macro files
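
For example, a non-interactive run that executes a command file and writes the resulting policy model (the file names are illustrative):

# $APEX_HOME/bin/apexApps.sh cli-editor -c MyPolicy.apex -o MyPolicyModel.json -l editor.log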
The APEX CLI Tosca Editor

As per the new Policy LifeCycle API, policies are expected to be defined as a ToscaServiceTemplate. The CLI Tosca Editor is an extended version of the APEX CLI Editor which can generate policies as a ToscaServiceTemplate.

The APEX config file (.json), command file (.apex), and tosca template skeleton (.json) file paths need to be passed as input arguments to the CLI Tosca Editor. A policy in ToscaServiceTemplate format is generated as the output. This can be used as the input to the Policy API for creating policies.

On UNIX and Cygwin systems use:

  • apexCLIToscaEditor.sh - starts the CLI Tosca editor, all the arguments supported by the basic CLI Editor are supported in addition to the mandatory arguments needed to generate ToscaServiceTemplate.

  • apexApps.sh cli-tosca-editor - starts the CLI Tosca editor, all the arguments supported by the basic CLI Editor are supported in addition to the mandatory arguments needed to generate ToscaServiceTemplate.

On Windows systems use:

  • apexCLIToscaEditor.bat - starts the CLI Tosca editor, all the arguments supported by the basic CLI Editor are supported in addition to the mandatory arguments needed to generate ToscaServiceTemplate.

  • apexApps.bat cli-tosca-editor - starts the CLI Tosca editor, all the arguments supported by the basic CLI Editor are supported in addition to the mandatory arguments needed to generate ToscaServiceTemplate.

Summary of alternatives to start the APEX CLI Tosca Editor:

Unix, Cygwin

Windows

# $APEX_HOME/bin/apexCLIToscaEditor.sh [args]
# $APEX_HOME/bin/apexApps.sh cli-tosca-editor [args]
> %APEX_HOME%\bin\apexCLIToscaEditor.bat [args]
> %APEX_HOME%\bin\apexApps.bat cli-tosca-editor [args]

The option -h provides a help screen with all command line arguments.

usage: org.onap.policy.apex.auth.clieditor.tosca.ApexCliToscaEditorMain [options...]
options
 -a,--model-props-file <MODEL_PROPS_FILE>         name of the apex model properties file to use
 -ac,--apex-config-file <APEX_CONFIG_FILE>        name of the file containing apex configuration details
 -c,--command-file <COMMAND_FILE>                 name of a file containing editor commands to run into the editor
 -h,--help                                        outputs the usage of this command
 -i,--input-model-file <INPUT_MODEL_FILE>         name of a file that contains an input model for the editor
 -if,--ignore-failures <IGNORE_FAILURES_FLAG>     true or false, ignore failures of commands in command files and
                                                  continue executing the command file
 -l,--log-file <LOG_FILE>                         name of a file that will contain command logs from the editor, will
                                                  log to standard output if not specified or suppressed with "-nl" flag
 -m,--metadata-file <CMD_METADATA_FILE>           name of the command metadata file to use
 -nl,--no-log                                     if specified, no logging or output of commands to standard output or
                                                  log file is carried out
 -ot,--output-tosca-file <OUTPUT_TOSCA_FILE>      name of a file that will contain the output ToscaServiceTemplate
 -t,--tosca-template-file <TOSCA_TEMPLATE_FILE>   name of the input file containing tosca template which needs to be
                                                  updated with policy
 -wd,--working-directory <WORKING_DIRECTORY>      the working directory that is the root for the CLI editor and is the
                                                  root from which to look for included macro files

An example command to run the APEX CLI Tosca editor on a Windows machine is given below.

%APEX_HOME%\bin\apexCLIToscaEditor.bat -c %APEX_HOME%\examples\PolicyModel.apex -ot %APEX_HOME%\examples\test.json -l %APEX_HOME%\examples\test.log -ac %APEX_HOME%\examples\RESTServerStandaloneJsonEvent.json -t %APEX_HOME%\examples\ToscaTemplate.json
The APEX Client

The APEX Client combines the Policy Editor, the Monitoring Client, and the Deployment Client into a single application. The standard way to use the APEX Full Client is via an installation of the war file on a webserver. However, the Full Client can also be started via the command line. This will start a Grizzly webserver with the war deployed. Access to the Full Client is then via the provided URL.

On UNIX and Cygwin systems use:

  • apexApps.sh full-client - simply starts the webserver with the Full Client

On Windows systems use:

  • apexApps.bat full-client - simply starts the webserver with the Full Client

The option -h provides a help screen with all command line arguments.

usage: org.onap.policy.apex.client.full.rest.ApexServicesRestMain [options...]
-h,--help                        outputs the usage of this command
-p,--port <PORT>                 port to use for the Apex Services REST calls
-t,--time-to-live <TIME_TO_LIVE> the amount of time in seconds that the server will run for before terminating

If the Full Client is started without any arguments the final messages will look similar to this:

Apex Editor REST endpoint (ApexServicesRestMain: Config=[ApexServicesRestParameters: URI=http://localhost:18989/apexservices/, TTL=-1sec], State=READY) starting at http://localhost:18989/apexservices/ . . .
Sep 05, 2018 11:28:28 PM org.glassfish.grizzly.http.server.NetworkListener start
INFO: Started listener bound to [localhost:18989]
Sep 05, 2018 11:28:28 PM org.glassfish.grizzly.http.server.HttpServer start
INFO: [HttpServer] Started.
Apex Editor REST endpoint (ApexServicesRestMain: Config=[ApexServicesRestParameters: URI=http://localhost:18989/apexservices/, TTL=-1sec], State=RUNNING) started at http://localhost:18989/apexservices/

The last line states the URL on which the Monitoring Client can be accessed. The example above stated http://localhost:18989/apexservices. In a web browser use the URL http://localhost:18989.

The APEX Application Launcher

The standard applications (Engine and CLI Editor) come with dedicated start scripts. For all other APEX applications, we provide an application launcher.

On UNIX and Cygwin systems use:

  • apexApps.sh - simply starts the application launcher

On Windows systems use:

  • apexApps.bat - simply starts the application launcher

Summary of alternatives to start the APEX application launcher:

Unix, Cygwin

Windows

# $APEX_HOME/bin/apexApps.sh [args]
> %APEX_HOME%\bin\apexApps.bat [args]

The option -h provides a help screen with all launcher command line arguments.

apexApps.sh - runs APEX applications

       Usage:  apexApps.sh [options] | [<application> [<application options>]]

       Options
         -d <app>    - describes an application
         -l          - lists all applications supported by this script
         -h          - this help screen

Using -l lists all known applications the launcher can start.

apexApps.sh: supported applications:
 --> ws-echo engine eng-monitoring full-client eng-deployment tpl-event-json model-2-cli rest-editor cli-editor ws-console

Using the -d <name> option describes the named application, for instance for the ws-console:

apexApps.sh: application 'ws-console'
 --> a simple console sending events to APEX, connect to APEX consumer port

Launching an application is done by calling the script with only the application name and any CLI arguments for the application. For instance, starting the ws-echo application with port 8888:

apexApps.sh ws-echo -p 8888
Application: Create Event Templates

Status: Experimental

This application takes a policy model (JSON or XML encoded) and generates templates for events in JSON format. This can help when a policy defines rather complex trigger or action events or complex events between states. The application can produce events for the types: stimuli (policy trigger events), internal (events between policy states), and response (action events).

Unix, Cygwin

Windows

# $APEX_HOME/bin/apexApps.sh tpl-event-json [args]
> %APEX_HOME%\bin\apexApps.bat tpl-event-json [args]

The option -h provides a help screen.

gen-model2event v{release-version} - generates JSON templates for events generated from a policy model
usage: gen-model2event
 -h,--help                 prints this help and usage screen
 -m,--model <MODEL-FILE>   set the input policy model file
 -t,--type <TYPE>          set the event type for generation, one of:
                           stimuli (trigger events), response (action
                           events), internal (events between states)
 -v,--version              prints the application version

The created templates are not valid events; instead they use markup for values that need to be changed to actual values. For instance, running the tool with the Sample Domain policy model as:

apexApps.sh tpl-event-json -m $APEX_HOME/examples/models/SampleDomain/SamplePolicyModelJAVA.json -t stimuli

will produce the following status messages:

gen-model2event: starting Event generator
 --> model file: examples/models/SampleDomain/SamplePolicyModelJAVA.json
 --> type: stimuli

and then run the generator application producing two event templates. The first template is called Event0000.

{
        "name" : "Event0000",
        "nameSpace" : "org.onap.policy.apex.sample.events",
        "version" : "0.0.1",
        "source" : "Outside",
        "target" : "Match",
        "TestTemperature" : ###double: 0.0###,
        "TestTimestamp" : ###long: 0###,
        "TestMatchCase" : ###integer: 0###,
        "TestSlogan" : "###string###"
}

The values for the keys are marked with # and the expected type of the value. To create an actual stimuli event, all these markers need to be changed to actual values, for instance:

{
        "name" : "Event0000",
        "nameSpace" : "org.onap.policy.apex.sample.events",
        "version" : "0.0.1",
        "source" : "Outside",
        "target" : "Match",
        "TestTemperature" : 25,
        "TestTimestamp" : 123456789123456789,
        "TestMatchCase" : 1,
        "TestSlogan" : "Testing the Match Case with Temperature 25"
}
Application: Convert a Policy Model to CLI Editor Commands

Status: Experimental

This application takes a policy model (JSON or XML encoded) and generates commands for the APEX CLI Editor. This effectively reverses a policy specification realized with the CLI Editor.

Unix, Cygwin

Windows

# $APEX_HOME/bin/apexApps.sh model-2-cli [args]
> %APEX_HOME%\bin\apexApps.bat model-2-cli [args]

The option -h provides a help screen.

usage: gen-model2cli
 -h,--help                 prints this help and usage screen
 -m,--model <MODEL-FILE>   set the input policy model file
 -sv,--skip-validation     switch off validation of the input file
 -v,--version              prints the application version

For instance, running the tool with the Sample Domain policy model as:

apexApps.sh model-2-cli -m $APEX_HOME/examples/models/SampleDomain/SamplePolicyModelJAVA.json

will produce the following status messages:

gen-model2cli: starting CLI generator
 --> model file: examples/models/SampleDomain/SamplePolicyModelJAVA.json

and then run the generator application producing all CLI Editor commands and printing them to standard out.

Application: Websocket Clients (Echo and Console)

Status: Production

The application launcher also provides a Websocket echo client and a Websocket console client. The echo client connects to APEX and prints all events it receives from APEX. The console client connects to APEX, reads input from the command line, and sends this input as events to APEX.

Unix, Cygwin

Windows

# $APEX_HOME/bin/apexApps.sh ws-echo [args]
# $APEX_HOME/bin/apexApps.sh ws-console [args]
> %APEX_HOME%\bin\apexApps.bat ws-echo [args]
> %APEX_HOME%\bin\apexApps.bat ws-console [args]

The arguments are the same for both applications:

  • -p defines the Websocket port to connect to (defaults to 8887)

  • -s defines the host on which a Websocket server is running (defaults to localhost)
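
For example, connecting the console client to the Websocket server from the server example earlier (host and port values are illustrative):

apexApps.sh ws-console -s localhost -p 42450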

A discussion on how to use these two applications to build an APEX system is detailed in HowTo-Websockets.

APEX Logging

Introduction to APEX Logging

All APEX components make extensive use of logging, using the logging façade SLF4J with the backend Logback. Both are used off-the-shelf, so the standard documentation and configuration apply to APEX logging. For details on how to work with logback, please see the logback manual.

The logback configuration file for the APEX applications is $APEX_HOME/etc/logback.xml (Windows: %APEX_HOME%\etc\logback.xml). The logging backend is set to no debug, i.e. logs from the logging framework should be hidden at runtime.

The configurable log levels work as expected:

  • error (or ERROR) is used for serious errors in the APEX runtime engine

  • warn (or WARN) is used for warnings, which in general can be ignored but might indicate some deeper problems

  • info (or INFO) is used to provide generally interesting messages for startup and policy execution

  • debug (or DEBUG) provides more details on startup and policy execution

  • trace (or TRACE) gives full details on every aspect of the APEX engine from start to end

The loggers can also be configured as expected. The standard configuration (after installing APEX) uses log level info on all APEX classes (components).

The applications and scripts in $APEX_HOME/bin (Windows: %APEX_HOME%\bin) are configured to use the logback configuration $APEX_HOME/etc/logback.xml (Windows: %APEX_HOME%\etc\logback.xml). There are multiple ways to use different logback configurations, for instance:

  • Maintain multiple configurations in etc, for instance a logback-debug.xml for deep debugging and a logback-production.xml for APEX in production mode, then copy the required configuration file to the used logback.xml prior starting APEX

  • Edit the scripts in bin to use a different logback configuration file (only recommended if you are familiar with editing bash scripts or windows batch files)
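
For instance, following the first approach on a UNIX system (assuming a logback-debug.xml has been created in etc as suggested above):

# cp $APEX_HOME/etc/logback-debug.xml $APEX_HOME/etc/logback.xml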

Standard Logging Configuration

The standard logging configuration defines a context APEX, which is used in the standard output pattern. The location for log files is defined in the property logDir and set to /var/log/onap/policy/apex-pdp. The standard status listener is set to NOP and the overall logback configuration is set to no debug.

<configuration debug="false">
  <statusListener class="ch.qos.logback.core.status.NopStatusListener" />

  <contextName>Apex</contextName>
  <property name="logDir" value="/var/log/onap/policy/apex-pdp/" />

  ...appenders
  ...loggers
</configuration>

The first appender defined is called STDOUT for logs to standard out.

1<appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
2 <encoder>
3    <Pattern>%d %contextName [%t] %level %logger{36} - %msg%n</Pattern>
4  </encoder>
5</appender>

The root logger is then set to level info, using the standard out appender.

1<root level="info">
2  <appender-ref ref="STDOUT" />
3</root>

The second appender is called FILE. It writes logs to a file apex.log.

1<appender name="FILE" class="ch.qos.logback.core.FileAppender">
2  <file>${logDir}/apex.log</file>
3  <encoder>
4    <pattern>%d %-5relative [procId=${processId}] [%thread] %-5level %logger{26} - %msg %n %ex{full}</pattern>
5  </encoder>
6</appender>

The third appender is called CTXT_FILE. It writes logs to a file apex_ctxt.log.

1<appender name="CTXT_FILE" class="ch.qos.logback.core.FileAppender">
2  <file>${logDir}/apex_ctxt.log</file>
3  <encoder>
4    <pattern>%d %-5relative [procId=${processId}] [%thread] %-5level %logger{26} - %msg %n %ex{full}</pattern>
5  </encoder>
6</appender>

The last definitions are for specific loggers. The first logger captures all standard APEX classes. It is configured for log level info and uses the standard output and file appenders. The second logger captures APEX context classes responsible for context monitoring. It is configured for log level trace and uses the context file appender.

1<logger name="org.onap.policy.apex" level="info" additivity="false">
2  <appender-ref ref="STDOUT" />
3  <appender-ref ref="FILE" />
4</logger>
5
6<logger name="org.onap.policy.apex.core.context.monitoring" level="TRACE" additivity="false">
7  <appender-ref ref="CTXT_FILE" />
8</logger>
Adding Logback Status and Debug

To activate logback status messages, change the status listener from NOP to, for instance, the console status listener.

<statusListener class="ch.qos.logback.core.status.OnConsoleStatusListener" />

To activate all logback debugging, for instance to debug a new logback configuration, activate the debug attribute in the configuration.

<configuration debug="true">
...
</configuration>
Logging External Components

Logback can also be configured to log any other, external components APEX is using, if they are using the common logging framework.

For instance, the context component of APEX is using Infinispan and one can add a logger for this external component. The following example adds a logger for Infinispan using the standard output appender.

<logger name="org.infinispan" level="INFO" additivity="false">
  <appender-ref ref="STDOUT" />
</logger>

Another example is Apache Zookeeper. The following example adds a logger for Zookeeper using the standard output appender.

<logger name="org.apache.zookeeper.ClientCnxn" level="INFO" additivity="false">
  <appender-ref ref="STDOUT" />
</logger>
Configuring loggers for Policy Logic

The logging for the logic inside a policy (task logic, task selection logic, state finalizer logic) can be configured separately from standard logging. The logger for policy logic is org.onap.policy.apex.executionlogging. The following example defines

  • a new appender for standard out using a very simple pattern (simply the actual message)

  • a logger for policy logic to standard out using the new appender and the already described file appender.

<appender name="POLICY_APPENDER_STDOUT" class="ch.qos.logback.core.ConsoleAppender">
  <encoder>
    <pattern>policy: %msg\n</pattern>
  </encoder>
</appender>

<logger name="org.onap.policy.apex.executionlogging" level="info" additivity="false">
  <appender-ref ref="POLICY_APPENDER_STDOUT" />
  <appender-ref ref="FILE" />
</logger>

It is also possible to use specific logging for parts of policy logic. The following example defines a logger for task logic.

<logger name="org.onap.policy.apex.executionlogging.TaskExecutionLogging" level="TRACE" additivity="false">
  <appender-ref ref="POLICY_APPENDER_STDOUT" />
</logger>
Rolling File Appenders

Rolling file appenders are a good option for more complex logging of a production or complex testing APEX installation. The standard logback configuration can be used for these use cases. This section gives two examples for the standard logging and for context logging.

First the standard logging. The following example defines a rolling file appender. The appender rolls over on a daily basis. It allows for a file size of 100 MB.

<appender name="FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
  <file>${logDir}/apex.log</file>
  <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
    <!-- rollover daily -->
    <!-- <fileNamePattern>xstream-%d{yyyy-MM-dd}.%i.txt</fileNamePattern> -->
    <fileNamePattern>${logDir}/apex_%d{yyyy-MM-dd}.%i.log.gz
    </fileNamePattern>
    <maxHistory>4</maxHistory>
    <timeBasedFileNamingAndTriggeringPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedFNATP">
      <!-- or whenever the file size reaches 100MB -->
      <maxFileSize>100MB</maxFileSize>
    </timeBasedFileNamingAndTriggeringPolicy>
  </rollingPolicy>
  <encoder>
    <pattern>
      %d %-5relative [procId=${processId}] [%thread] %-5level %logger{26} - %msg %ex{full} %n
    </pattern>
  </encoder>
</appender>

A very similar configuration can be used for a rolling file appender logging APEX context.

<appender name="CTXT-FILE"
      class="ch.qos.logback.core.rolling.RollingFileAppender">
  <file>${logDir}/apex_ctxt.log</file>
  <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
    <fileNamePattern>${logDir}/apex_ctxt_%d{yyyy-MM-dd}.%i.log.gz
    </fileNamePattern>
    <maxHistory>4</maxHistory>
    <timeBasedFileNamingAndTriggeringPolicy
        class="ch.qos.logback.core.rolling.SizeAndTimeBasedFNATP">
      <maxFileSize>100MB</maxFileSize>
    </timeBasedFileNamingAndTriggeringPolicy>
  </rollingPolicy>
  <encoder>
    <pattern>
      %d %-5relative [procId=${processId}] [%thread] %-5level %logger{26} - %msg %ex{full} %n
    </pattern>
  </encoder>
</appender>
Example Configuration for Logging Logic

The following example shows a configuration that logs policy logic to standard out and to a file (info). All other APEX components log to a file (debug). This configuration can be used in a pre-production phase, with the APEX engine still running in a separate terminal to monitor policy execution. This logback configuration is in the APEX installation as etc/logback-logic.xml.

<configuration debug="false">
    <statusListener class="ch.qos.logback.core.status.NopStatusListener" />

    <contextName>Apex</contextName>
    <property name="logDir" value="/var/log/onap/policy/apex-pdp/" />

    <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <Pattern>%d %contextName [%t] %level %logger{36} - %msg%n</Pattern>
        </encoder>
    </appender>

    <appender name="FILE" class="ch.qos.logback.core.FileAppender">
        <file>${logDir}/apex.log</file>
        <encoder>
            <pattern>
                %d %-5relative [procId=${processId}] [%thread] %-5level%logger{26} - %msg %n %ex{full}
            </pattern>
        </encoder>
    </appender>

    <appender name="POLICY_APPENDER_STDOUT" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>policy: %msg\n</pattern>
        </encoder>
    </appender>

    <root level="error">
        <appender-ref ref="STDOUT" />
    </root>

    <logger name="org.onap.policy.apex" level="debug" additivity="false">
        <appender-ref ref="FILE" />
    </logger>

    <logger name="org.onap.policy.apex.executionlogging" level="info" additivity="false">
        <appender-ref ref="POLICY_APPENDER_STDOUT" />
        <appender-ref ref="FILE" />
    </logger>
</configuration>
Example Configuration for a Production Server

The following example shows a configuration that logs all APEX components, including policy logic, to a file (debug). This configuration can be used in a production phase, with the APEX engine being executed as a service on a system without console output. This logback configuration is in the APEX installation as logback-server.xml.

<configuration debug="false">
    <statusListener class="ch.qos.logback.core.status.NopStatusListener" />

    <contextName>Apex</contextName>
    <property name="logDir" value="/var/log/onap/policy/apex-pdp/" />

    <appender name="FILE" class="ch.qos.logback.core.FileAppender">
        <file>${logDir}/apex.log</file>
        <encoder>
            <pattern>
                %d %-5relative [procId=${processId}] [%thread] %-5level%logger{26} - %msg %n %ex{full}
            </pattern>
        </encoder>
    </appender>

    <root level="debug">
        <appender-ref ref="FILE" />
    </root>

    <logger name="org.onap.policy.apex.executionlogging" level="debug" additivity="false">
        <appender-ref ref="FILE" />
    </logger>
</configuration>

Unsupported Features

This section documents some legacy and unsupported features in apex-pdp. The documentation here has not been updated for recent versions of apex-pdp. For example, the apex-pdp models specified in this example should now be in TOSCA format.

Building a System with Websocket Backend
Websockets

Websocket is a protocol to run sockets over HTTP. Since it is in essence a socket, the connection is realized between a server (waiting for connections) and a client (connecting to a server). The server/client separation is only important for connection establishment; once connected, everyone can send/receive on the same socket (as any standard socket would allow).

Standard Websocket implementations are simple, with no publish/subscribe and no special event handling. Most servers simply send all incoming messages to all connections. There is a PubSub definition on top of Websocket called WAMP. APEX does not support WAMP at the moment.

Websocket in Java

In Java, JSR 356 defines the standard Websocket API. This JSR is part of the Java EE 7 standard. For Java SE, several implementations exist in open source. Since Websockets are a stable and simple standard, most implementations are stable and ready to use. A lot of products support Websockets, like Spring, JBoss, Netty, … there are also Kafka extensions for Websockets.

Websocket Example Code for Websocket clients (FOSS)

There are many implementations and examples of Websocket clients available on Github. If you are using Java EE 7, you can also use the native Websocket implementation. Good examples of clients using plain Java SE are here:

For Java EE, the native Websocket API is explained here:

BCP: Websocket Configuration

It is probably best to configure APEX as a Websocket server for both its input (ingress, consume) and output (egress, produce) interfaces. This means that APEX will start Websocket servers on named ports and wait for clients to connect. Advantage: once APEX is running, all connectivity infrastructure is running as well. Consequence: if APEX is not running, everyone else is in the dark, too.

The best protocol to use is JSON strings: each event on any interface is then a string with a JSON encoding. JSON strings are a little slower than byte code, but we doubt this will be noticeable. A further advantage of JSON strings over Websockets with APEX starting the servers: it is very easy to connect web browsers to such a system. Simply connect the web browser to the APEX sockets and send/read JSON strings.
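For illustration, a minimal browser-side Websocket client for such a setup could look like the following JavaScript sketch. The host and port are assumptions matching the demo configuration shown further below; adjust them to your own APEX engine.

// Sketch: connect a browser to an APEX Websocket server and read
// JSON-encoded events. Assumes a producer Websocket on localhost:42452,
// as in the demo configuration below.
var socket = new WebSocket("ws://localhost:42452");

socket.onopen = function() {
    console.log("connected to APEX");
};

// Each received message is a JSON string encoding one APEX output event.
socket.onmessage = function(message) {
    var apexEvent = JSON.parse(message.data);
    console.log("received event: " + apexEvent.name, apexEvent);
};

socket.onclose = function() {
    console.log("APEX closed the connection");
};

// To send events, open a second socket to the consumer port (42450 in the
// demo) and send JSON strings: socket.send(JSON.stringify(someEvent));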

Once APEX is started, you simply connect Websocket clients to it and send/receive events. When APEX is terminated, the Websocket servers go down and the clients are disconnected. APEX does not (yet) support automatic client reconnection or WAMP, so clients might need to be restarted or reconnected manually after an APEX restart.

Demo with VPN Policy Model

We assume that you have an APEX installation using the full package, i.e. APEX with all examples, of version 0.5.6 or higher. We will use the VPN policy from the APEX examples here.

Now, have the following ready to start the demo:

  • 3 terminals on the host where APEX is running (we need 1 for APEX and 1 for each client)

  • the events in the file $APEX_HOME/examples/events/VPN/SetupEvents.json open in an editor (we need to send those events to APEX)

  • the events in the file $APEX_HOME/examples/events/VPN/Link09Events.json open in an editor (we need to send those events to APEX)

A Websocket Configuration for the VPN Domain

Create a new APEX configuration using the VPN policy model and configuring APEX as discussed above for Websockets. Copy the following configuration into $APEX_HOME/examples/config/VPN/Ws2WsServerAvroContextJsonEvent.json (for Windows use %APEX_HOME%\examples\config\VPN\Ws2WsServerAvroContextJsonEvent.json):

{
  "engineServiceParameters" : {
    "name"           : "VPNApexEngine",
    "version"        : "0.0.1",
    "id"             : 45,
    "instanceCount"  : 1,
    "deploymentPort" : 12345,
    "policyModelFileName" : "examples/models/VPN/VPNPolicyModelAvro.json",
    "engineParameters"    : {
      "executorParameters" : {
        "MVEL" : {
          "parameterClassName" : "org.onap.policy.apex.plugins.executor.mvel.MVELExecutorParameters"
        }
      },
      "contextParameters" : {
        "parameterClassName" : "org.onap.policy.apex.context.parameters.ContextParameters",
        "schemaParameters" : {
          "Avro" : {
            "parameterClassName" : "org.onap.policy.apex.plugins.context.schema.avro.AvroSchemaHelperParameters"
          }
        }
      }
    }
  },
  "producerCarrierTechnologyParameters" : {
    "carrierTechnology" : "WEBSOCKET",
    "parameterClassName" : "org.onap.policy.apex.plugins.event.carrier.websocket.WEBSOCKETCarrierTechnologyParameters",
    "parameters" : {
      "wsClient" : false,
      "port"     : 42452
    }
  },
  "producerEventProtocolParameters" : {
    "eventProtocol" : "JSON"
  },
  "consumerCarrierTechnologyParameters" : {
    "carrierTechnology" : "WEBSOCKET",
    "parameterClassName" : "org.onap.policy.apex.plugins.event.carrier.websocket.WEBSOCKETCarrierTechnologyParameters",
    "parameters" : {
      "wsClient" : false,
      "port"     : 42450
    }
  },
  "consumerEventProtocolParameters" : {
    "eventProtocol" : "JSON"
  }
}

Start APEX Engine

In a new terminal, start APEX with the new configuration for Websocket-Server ingress/egress:

#: $APEX_HOME/bin/apexApps.sh engine -c $APEX_HOME/examples/config/VPN/Ws2WsServerAvroContextJsonEvent.json
> %APEX_HOME%\bin\apexApps.bat engine -c %APEX_HOME%\examples\config\VPN\Ws2WsServerAvroContextJsonEvent.json

Wait for APEX to start; it takes a while to create all Websocket servers (about 8 seconds on a standard laptop without cached binaries). Depending on your logging configuration, you will see few, some, or many log messages. If APEX starts correctly, the last few messages you should see are:

2017-07-28 13:17:20,834 Apex [main] INFO c.e.a.s.engine.runtime.EngineService - engine model VPNPolicyModelAvro:0.0.1 added to the engine-AxArtifactKey:(name=VPNApexEngine-0,version=0.0.1)
2017-07-28 13:17:21,057 Apex [Apex-apex-engine-service-0:0] INFO c.e.a.s.engine.runtime.EngineService - Engine AxArtifactKey:(name=VPNApexEngine-0,version=0.0.1) processing ...
2017-07-28 13:17:21,296 Apex [main] INFO c.e.a.s.e.r.impl.EngineServiceImpl - Added the action listener to the engine
Started Apex service

APEX is running in the new terminal and will produce output when the policy is triggered/executed.

Run the Websocket Echo Client

The echo client is included in an APEX full installation. To run the client, open a new shell (Unix, Cygwin) or command prompt (cmd on Windows). Then use the APEX application launcher to start the client.

Important

APEX engine needs to run first. The example assumes that an APEX engine, configured with Websocket as the producer carrier technology and JSON as the event protocol, is started first.

Unix, Cygwin

Windows

# $APEX_HOME/bin/apexApps.sh ws-echo [args]
> %APEX_HOME%\bin\apexApps.bat ws-echo [args]

Use the following command line arguments for the server and port of the Websocket server. The port should be the same as configured in the APEX engine, and the server host should be the host on which the APEX engine is running.

  • -p defines the Websocket port to connect to (defaults to 8887)

  • -s defines the host on which a Websocket server is running (defaults to localhost)

Let’s assume that there is an APEX engine running, configured as a Websocket server on port 42452 for its producer carrier technology, with JSON as the producer event protocol. If we start the echo client on the same host, we can omit the -s option. We start the echo client as:

# $APEX_HOME/bin/apexApps.sh ws-echo -p 42452 (1)
> %APEX_HOME%\bin\apexApps.bat ws-echo -p 42452 (2)

(1) Start client on Unix or Cygwin

(2) Start client on Windows

Once started successfully, the client will produce the following messages (assuming we used -p 42452 and an APEX engine is running on localhost with the same port):

ws-simple-echo: starting simple event echo
 --> server: localhost
 --> port: 42452

Once started, the application will simply print out all received events to standard out.
Each received event will be prefixed by '---' and suffixed by '===='


ws-simple-echo: opened connection to APEX (Web Socket Protocol Handshake)

Run the Websocket Console Client

The console client is included in an APEX full installation. To run the client, open a new shell (Unix, Cygwin) or command prompt (cmd on Windows). Then use the APEX application launcher to start the client.

Important

APEX engine needs to run first. The example assumes that an APEX engine, configured with Websocket as the consumer carrier technology and JSON as the event protocol, is started first.

Unix, Cygwin

Windows

# $APEX_HOME/bin/apexApps.sh ws-console [args]
> %APEX_HOME%\bin\apexApps.bat ws-console [args]

Use the following command line arguments for the server and port of the Websocket server. The port should be the same as configured in the APEX engine, and the server host should be the host on which the APEX engine is running.

  • -p defines the Websocket port to connect to (defaults to 8887)

  • -s defines the host on which a Websocket server is running (defaults to localhost)

Let’s assume that there is an APEX engine running, configured as a Websocket server on port 42450 for its consumer carrier technology, with JSON as the consumer event protocol. If we start the console client on the same host, we can omit the -s option. We start the console client as:

# $APEX_HOME/bin/apexApps.sh ws-console -p 42450 (1)
> %APEX_HOME%\bin\apexApps.bat ws-console -p 42450 (2)

(1) Start client on Unix or Cygwin

(2) Start client on Windows

Once started successfully, the client will produce the following messages (assuming we used -p 42450 and an APEX engine is running on localhost with the same port):

ws-simple-console: starting simple event console
 --> server: localhost
 --> port: 42450

 - terminate the application typing 'exit<enter>' or using 'CTRL+C'
 - events are created by a non-blank starting line and terminated by a blank line


ws-simple-console: opened connection to APEX (Web Socket Protocol Handshake)

Send Events

Now you have the full system up and running:

  • Terminal 1: APEX ready and loaded

  • Terminal 2: an echo client, printing received messages produced by the VPN policy

  • Terminal 3: a console client, waiting for input on the console (standard in) and sending text to APEX

We started the engine with the VPN policy example. So all the events we are using now are located in files in the following example directory:

#: $APEX_HOME/examples/events/VPN
> %APEX_HOME%\examples\events\VPN

To send events, simply copy the content of the event files into Terminal 3 (the console client). It will read multi-line JSON text and send the events. So copy the content of SetupEvents.json into the client. APEX will trigger a policy and produce some output, and the echo client will print some events created in the policy. In Terminal 1 (APEX) you’ll see some status messages from the policy, such as:

{Link=L09, LinkUp=true}
L09     true
outFields: {Link=L09, LinkUp=true}
{Link=L10, LinkUp=true}
L09     true
L10     true
outFields: {Link=L10, LinkUp=true}
{CustomerName=C, LinkList=L09 L10, SlaDT=300, YtdDT=300}
*** Customers ***
C       300     300     [L09, L10]
outFields: {CustomerName=C, LinkList=L09 L10, SlaDT=300, YtdDT=300}
{CustomerName=A, LinkList=L09 L10, SlaDT=300, YtdDT=50}
*** Customers ***
A       300     50      [L09, L10]
C       300     300     [L09, L10]
outFields: {CustomerName=A, LinkList=L09 L10, SlaDT=300, YtdDT=50}
{CustomerName=D, LinkList=L09 L10, SlaDT=300, YtdDT=400}
*** Customers ***
A       300     50      [L09, L10]
C       300     300     [L09, L10]
D       300     400     [L09, L10]
outFields: {CustomerName=D, LinkList=L09 L10, SlaDT=300, YtdDT=400}
{CustomerName=B, LinkList=L09 L10, SlaDT=300, YtdDT=299}
*** Customers ***
A       300     50      [L09, L10]
B       300     299     [L09, L10]
C       300     300     [L09, L10]
D       300     400     [L09, L10]
outFields: {CustomerName=B, LinkList=L09 L10, SlaDT=300, YtdDT=299}

In Terminal 2 (echo client) you see the received events; the last two should look like:

ws-simple-echo: received
---------------------------------
{
  "name": "VPNCustomerCtxtActEvent",
  "version": "0.0.1",
  "nameSpace": "org.onap.policy.apex.domains.vpn.events",
  "source": "Source",
  "target": "Target",
  "CustomerName": "C",
  "LinkList": "L09 L10",
  "SlaDT": 300,
  "YtdDT": 300
}
=================================

ws-simple-echo: received
---------------------------------
{
  "name": "VPNCustomerCtxtActEvent",
  "version": "0.0.1",
  "nameSpace": "org.onap.policy.apex.domains.vpn.events",
  "source": "Source",
  "target": "Target",
  "CustomerName": "D",
  "LinkList": "L09 L10",
  "SlaDT": 300,
  "YtdDT": 400
}
=================================

Congratulations, you have triggered a policy in APEX using Websockets. The policy ran through and created events, which were picked up by the echo client.

Now you can send the Link 09 and Link 10 events, which will trigger the actual VPN policy and cause some calculations to be made. Let’s take the Link 09 events from Link09Events.json and copy them all into Terminal 3 (the console client). APEX will run the policy (with some status output), and the echo client will receive and print events.

To terminate the applications, simply press CTRL+C in Terminal 1 (APEX). This also terminates the echo client in Terminal 2. Then type exit<enter> in Terminal 3 (or press CTRL+C) to terminate the console client.

APEX Policy Guide

APEX Policy Matrix

APEX offers a lot of flexibility for defining, deploying, and executing policies. Based on a theoretical model, it supports virtually any policy model and supports translation of legacy policies into the APEX execution format. However, the most important aspect of using APEX is to decide what policy is needed, what underlying policy concepts should be used, and how the decision logic should be realized. Once these aspects are decided, APEX can be used to execute the policies. If the policy evolves, say from a simple decision table to a fully adaptable policy, only the policy definition needs to change. APEX supports all of that.

The figure below shows a (non-exhaustive) matrix, which will help to decide what policy is required to solve your problem. Read the matrix from left to right choosing one cell in each column.

APEX Policy Matrix

Figure 1. APEX Policy Matrix

The policy can support one of a number of stimuli with an associated purpose/model of the policy, for instance:

  • Configuration, i.e. what should happen. An example is an event that states an intended network configuration and the policy should provide the detailed actions for it. The policy can be realized for instance as an obligation policy, a promise or an intent.

  • Report, i.e. something did happen. An example is an event about an error or fault and the policy needs to repair that problem. The policy would usually be an obligation, utility function, or goal policy.

  • Monitoring, i.e. something does happen. An example is a notification about certain network conditions, to which the policy might (or might not) react. The policy will mitigate the monitored events or permit (deny) related actions as an obligation or authorization.

  • Analysis, i.e. why did something happen. An example is an analytic component sending insights about a situation that requires a policy to act on it. The policy can solve the problem, escalate it, or delegate it as a refrain or delegation policy.

  • Prediction, i.e. what will happen next. An example is events that a policy uses to predict a future network condition. The policy can prevent or enforce the prediction as an adaptive policy, a utility function, or a goal.

  • Feedback, i.e. why did something happen or not happen. Similar to analysis, but here the feedback will be in the input event and the policy needs to do something with that information. Feedback can be related to history or experience, for instance a previous policy execution. The policy needs to be context-aware or be a meta-policy.

Once the purpose of the policy is decided, the next step is to look into what context information the policy will require to do its job. This can range from very simple to a lot of different information, for instance:

  • No context, nothing but a trigger event, e.g. a string or a number, is required

  • Event context, the incoming event provides all information (more than a string or number) for the policy

  • Policy context (read only), the policy has access to additional information related to its class but cannot change/alter it

  • Policy context (read and write), the policy has access to additional information related to its class and can alter this information (for instance to record historic information)

  • Global context (read only), the policy has access to additional information of any kind but cannot change/alter it

  • Global context (read and write), the policy has access to additional information of any kind and can alter this information (for instance to record historic information)

The next step is to decide how the policy should do its job, i.e. what flavor it has, how many states are needed, and how many tasks. There are many possible combinations, for instance:

  • Simple / God: a simple policy with 1 state and 1 task, which does everything for the decision-making. This is the ideal policy for simple situations, e.g. deciding on configuration parameters or simple access control.

  • Simple sequence: a simple policy with a number of states each having a single task. This is a very good policy for simple decision-making with different steps. For instance, a classic action policy (ECA) would have 3 states (E, C, and A) with some logic (1 task) in each state.

  • Simple selective: a policy with 1 state but more than one task. Here, the appropriate task (and its logic) will be selected at execution time. This policy is very good for dealing with similar (or the same) situations in different contexts. For instance, the tasks can be related to available external software, or to current work load on the compute node, or to time of day.

  • Selective: any number of states having any number of tasks (usually more than 1 task). This is a combination of the two policies above, for instance an ECA policy with more than one task in E, C, and A.

  • Classic directed: a policy with more than one state, each having one task, but a non-sequential execution. This means that the sequence of the states is not pre-defined in the policy (as would be for all cases above) but calculated at runtime. This can be good to realize decision trees based on contextual information.

  • Super Adaptive: using the full potential of the APEX policy model, states and tasks and state execution are fully flexible and calculated at runtime (per policy execution). This policy is very close to a general programming system (with only a few limitations), but can solve very hard problems.

The final step is to select a response that the policy creates. Possible responses have been discussed in the literature for a very long time. A few examples are:

  • Obligation (deontic for what should happen)

  • Authorization (e.g. for rule-based or other access control or security systems)

  • Intent (instead of providing detailed actions the response is an intent statement and a further system processes that)

  • Delegation (hand the problem over to someone else, possibly with some information or instructions)

  • Fail / Error (the policy has encountered a problem, and reports it)

  • Feedback (why did the policy make a certain decision)

APEX Policy Model

The APEX policy model is shown in UML notation in the figure below. A policy model can be stored in JSON or XML format in a file or can be held in a database. The APEX editor creates and modifies APEX policy models. APEX deployment deploys policy models, and a policy model is loaded into APEX engines so that the engines can run the policies in the policy model.

The figure shows four different views of the policy model:

  • The general model view shows the main parts of a policy: state, state output, event, and task. A task can also have parameters. Data types can be defined on a per-model basis using either standard atomic types (such as character, string, numbers) or complex types from a policy domain.

  • The logic model view emphasizes how decision-making logic is injected into a policy. There are essentially three different types of logic: task logic (for decision making in a task), task selection logic (to select a task if more than one is defined in a state), and state finalizer logic (to compute the final output event of a state and select an appropriate next state from the policy model).

  • The context model view shows how context is injected into a policy. States collect all context from their tasks. A task can define what context it requires for the decision making, i.e. what context the task logic will process. Context itself is a collection of items (individual context information) with data types. Context can be templated.

  • The event and field model view shows the events in the policy model. Tasks define what information they consume (input) and produce (output). This information is modeled as fields, essentially a key/type tuple in the model and a key/type/value triple at execution. Events then are collections of fields.

APEX Policy Model for Execution

Figure 2. APEX Policy Model for Execution

Concepts and Keys

Each element of the policy model is called a concept. Each concept is a subclass of the abstract Concept class, as shown in the next figure. Every concept implements the following abstract methods:

Concepts and Keys

Figure 3. Concepts and Keys

  • getKey() - gets the unique key for this concept instance in the system

  • validate() - validates the structure of this concept, its sub-concepts and its relationships

  • clean() - carries out housekeeping on the concept, such as trimming strings and removing any hanging references

  • clone() - creates a deep copy of an instance of this concept

  • equals() - checks if two instances of this concept are equal

  • toString() - returns a string representation of the concept

  • hashCode() - returns a hash code for the concept

  • copyTo() - carries out a deep copy of one instance of the concept to another instance, overwriting the target fields.

All concepts must have a key, which uniquely identifies a concept instance. The key of a subclass of a Concept must be either an ArtifactKey or a ReferenceKey. Concepts that have a stand-alone independent existence, such as Policy, Task, and Event, must have an ArtifactKey key. Concepts that are contained in other concepts and do not exist as stand-alone concepts must have a ReferenceKey key. Examples of such concepts are State and EventParameter.

An ArtifactKey has two fields: the Name of the concept it is the key for and the concept’s Version. A concept’s name must be unique in a given PolicyModel. A concept version is represented using the well-known major.minor.patch scheme as used in semantic versioning.

A ReferenceKey has three fields. The UserKeyName and UserKeyVersion fields identify the ArtifactKey of the concept in which the concept keyed by the ReferenceKey is contained. The LocalName field identifies the contained concept instance. The LocalName must be unique in the concepts of a given type contained by a parent.

For example, a policy called SalesPolicy with a Version of 1.12.4 has a state called Decide. The Decide state is linked to the SalesPolicy with a ReferenceKey with fields UserKeyName of SalesPolicy, UserKeyVersion of 1.12.4, and LocalName of Decide. There must not be another state called Decide in the policy SalesPolicy. However, there may well be a state called Decide in some other policy called PurchasingPolicy.
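As an illustration only, the two kinds of key for this example could be rendered as the following object literals. The field names follow the description above; the exact serialization used in APEX policy model files may differ.

// ArtifactKey of the stand-alone SalesPolicy concept
var salesPolicyKey = {
    name    : "SalesPolicy",
    version : "1.12.4"
};

// ReferenceKey of the Decide state contained in SalesPolicy
var decideStateKey = {
    userKeyName    : "SalesPolicy", // ArtifactKey name of the containing concept
    userKeyVersion : "1.12.4",      // ArtifactKey version of the containing concept
    localName      : "Decide"       // unique among the states of SalesPolicy
};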

Each concept in the model is also a JPA (Java Persistence API) entity. This means that every concept can be individually persisted, or the entire model can be persisted en bloc to any persistence mechanism, using a JPA framework such as Hibernate or EclipseLink.

Concept: PolicyModel

The PolicyModel concept is a container that holds the definition of a set of policies and their associated events, context maps, and tasks. A PolicyModel is implemented as four maps for policies, events, context maps, and tasks. Each map is indexed by the key of the policy, event, context map, or task. Any non-empty policy model must have at least one entry in its policy, event, and task map because all policies must have at least one input and output event and must execute at least one task.

A PolicyModel concept is keyed with an ArtifactKey key. Because a PolicyModel is an AxConcept, calling the validate() method on a policy model validates the concepts, structure, and relationships of the entire policy model.

Concept: DataType

Data types are tightly controlled in APEX in order to provide a very high degree of consistency in policies and to facilitate tracking of changes to context as policies execute. All context is modeled as a DataType concept. Each DataType concept instance is keyed with an ArtifactKey key. The DataType field identifies the Java class of objects that is used to represent concept instances that use this data type. All context has a DataType; incoming and outgoing context is represented by EventField concepts and all other context is represented by ContextItem concepts.

Concept: Event

An Event defines the structure of a message that passes into or out of an APEX engine or that passes between two states in an APEX engine. APEX supports message reception and sending in many formats and all messages are translated into an Event prior to processing by an APEX engine. Event concepts are keyed with an ArtifactKey key. The parameters of an event are held as a map of EventField concept instances with each parameter indexed by the LocalName of its ReferenceKey. An Event has three fields:

  • The NameSpace identifies the domain of application of the event

  • The Source of the event identifies the system that emitted the event

  • The Target of the event identifies the system that the event was sent to

A PolicyModel contains a map of all the events known to a given policy model. Although an empty model may have no events in its event map, any sane policy model must have at least one Event defined.
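The Websocket demo earlier in this document shows what an instance of such an event looks like on the wire. For example, the JSON-encoded VPNCustomerCtxtActEvent from that demo carries the NameSpace, Source, and Target fields alongside its parameter fields:

{
  "name": "VPNCustomerCtxtActEvent",
  "version": "0.0.1",
  "nameSpace": "org.onap.policy.apex.domains.vpn.events",
  "source": "Source",
  "target": "Target",
  "CustomerName": "C",
  "LinkList": "L09 L10",
  "SlaDT": 300,
  "YtdDT": 300
}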

Concept: EventField

The incoming context and outgoing context of an event are the fields of the event, with each field representing a single piece of incoming or outgoing context. Each field of an Event is represented by an instance of the EventField concept. Each EventField concept instance in an event is keyed with a ReferenceKey key, which references the event. The LocalName field of the ReferenceKey holds the name of the field. A reference to a DataType concept defines the data type that values of this field have at run time.

Concept: ContextMap

The set of context that is available for use by the policies of a PolicyModel is defined as ContextMap concept instances. The PolicyModel holds a map of all the ContextMap definitions. A ContextMap is itself a container for a group of related context items, each of which is represented by a ContextItem concept instance. ContextMap concepts are keyed with an ArtifactKey key. A developer can use the APEX Policy Editor to create context maps for their application domain.

A ContextMap uses a map to hold the context items. The ContextItem concept instances in the map are indexed by the LocalName of their ReferenceKey.

The ContextMapType field of a ContextMap defines the type of a context map. The type can have either of two values:

  • A BAG context map is a context map with fixed content. Each possible context item in the context map is defined at design time and is held in the ContextMap context instance as ContextItem concept definitions and only the values of the context items in the context map can be changed at run time. The context items in a BAG context map have mixed types and distinct ContextItem concept instances of the same type can be defined. A BAG context map is convenient for defining a group of context items that are diverse but are related by domain, such as the characteristics of a device. A fully defined BAG context map has a fully populated ContextItem map but its ContextItemTemplate reference is not defined.

  • A SAMETYPE context map is used to represent a group of ContextItem instances of the same type. Unlike a BAG context map, the ContextItem concept instances of a SAMETYPE context map can be added, modified, and deleted at runtime. All ContextItem concept instances in a SAMETYPE context map must be of the same type, and that context item type is defined as a single ContextItemTemplate concept instance at design time. At run time, the ContextItemTemplate definition is used to create new ContextItem concept instances for the context map on demand. A fully defined SAMETYPE context map has an empty ContextItem map and its ContextItemTemplate reference is defined.
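For example, task logic can create and update the items of a SAMETYPE context map at run time through the Map interface of its context album. The following JavaScript sketch assumes a SAMETYPE album named "BranchCounts" whose item type is a number; the same album is used in the task logic examples later in this guide.

// Sketch: updating a SAMETYPE context album from JavaScript task logic.
var bkey = executor.inFields.get("branch_ID");
var counts = executor.getContextAlbum("BranchCounts");

counts.lockForWriting(bkey);
if (counts.get(bkey) == null) {
    // a new ContextItem is created from the ContextItemTemplate on demand
    counts.put(bkey, 0);
}
counts.put(bkey, counts.get(bkey) + 1);
counts.unlockForWriting(bkey);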

The Scope of a ContextMap defines the range of applicability of a context map in APEX. The following scopes of applicability are defined:

  • EPHEMERAL scope means that the context map is owned, used, and modified by a single application but the context map only exists while that application is running

  • APPLICATION scope specifies that the context map is owned, used, and modified by a single application, the context map is persistent

  • GLOBAL scope specifies that the context map is globally owned and is used and modified by any application, the context map is persistent

  • EXTERNAL scope specifies that the context map is owned by an external system and may be used in a read-only manner by any application, the context map is persistent

A much more sophisticated scoping mechanism for context maps is envisaged for Apex in future work. In such a mechanism, the scope of a context map would work somewhat like the way roles work in security authentication systems.

Concept: ContextItem

Each piece of context in a ContextMap is represented by an instance of the ContextItem concept. Each ContextItem concept instance in a context map is keyed with a ReferenceKey key, which references the context map of the context item. The LocalName field of the ReferenceKey holds the name of the context item in the context map. A reference to a DataType concept defines the data type that values of this context item have at run time. The WritableFlag indicates if the context item is read only or read-write at run time.

Concept: ContextItemTemplate

In a SAMETYPE ContextMap, the ContextItemTemplate definition provides a template for the ContextItem instances that will be created on the context map at run time. Each ContextItem concept instance in the context map is created using the ContextItemTemplate template. It is keyed with a ReferenceKey key, which references the context map of the context item. The LocalName field of the ReferenceKey, supplied by the creator of the context item at run time, holds the name of the context item in the context map. A reference to a DataType concept defines the data type that values of this context item have at run time. The WritableFlag indicates if the context item is read only or read-write at run time.

Concept: Task

The smallest unit of logic in a policy is a Task. A task encapsulates a single atomic unit of logic, and is designed to be a single indivisible unit of execution. A task may be invoked by a single policy or by many policies. A task has a single trigger event, which is sent to the task when it is invoked. Tasks emit one or more outgoing events, which carry the result of the task execution. Tasks may use or modify context as they execute.

The Task concept definition captures the definition of an APEX task. Task concepts are keyed with an ArtifactKey key. The Trigger of the task is a reference to the Event concept that triggers the task. The OutgoingEvents of a task are a set of references to Event concepts that may be emitted by the task.

All tasks have logic, some code that is programmed to execute the work of the task. The Logic concept of the task holds the definition of that logic.

The Task definition holds a set of ContextItem and ContextItemTemplate context items that the task is allowed to access, as defined by the task developer at design time. The type of access (read-only or read-write) that a task has is determined by the WritableFlag on the individual context item definitions. At run time, a task may only access the context items specified in its context item set; the APEX engine makes only the context items in the task’s context item set available to the task.

A task can be configured with startup parameters. The set of parameters that can be configured on a task is defined as a set of TaskParameter concept definitions.

Concept: TaskParameter

Each configuration parameter of a task is represented as a TaskParameter concept keyed with a ReferenceKey key, which references the task. The LocalName field of the ReferenceKey holds the name of the parameter. The DefaultValue field defines the default value that the task parameter is set to. The value of TaskParameter instances can be overridden at deployment time by specifying their values in the configuration information passed to APEX engines.

The taskParameters field is specified under engineParameters in the ApexConfig. It can contain one or more task parameters, where each item can contain the parameter key, value as well as the taskId to which it is associated. If the taskId is not specified, then the parameters are added to all tasks.
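A sketch of such a configuration fragment is shown below. The parameter keys and values are illustrative, and the taskId follows the name:version convention used for task keys; a parameter without a taskId is added to all tasks.

"engineParameters" : {
    "taskParameters" : [
        {
            "key"    : "limitKey",
            "value"  : "100",
            "taskId" : "MorningBoozeCheck:0.0.1"
        },
        {
            "key"   : "sharedKey",
            "value" : "sharedValue"
        }
    ]
}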

Concept: Logic

The Logic concept instance holds the actual programmed task logic for a task defined in a Task concept or the programmed task selection logic for a state defined in a State concept. It is keyed with a ReferenceKey key, which references the task or state that owns the logic. The LocalName field of the Logic concept is the name of the logic.

The LogicCode field of a Logic concept definition is a string that holds the program code that is to be executed at run time. The LogicType field defines the language of the code. The standard values are the logic languages supported by APEX: JAVASCRIPT, JAVA, JYTHON, JRUBY, or MVEL.

The APEX engine uses the LogicType field value to decide which language interpreter to use for a task and then sends the logic defined in the LogicCode field to that interpreter.
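As an illustration only, a Logic concept definition could be sketched as the following object literal; the field names follow the description above, and the exact serialization used in APEX policy model files may differ.

// Illustrative sketch of a Logic concept, not the exact model-file format.
var morningBoozeCheckLogic = {
    key : {
        userKeyName    : "MorningBoozeCheck", // the Task that owns this logic
        userKeyVersion : "0.0.1",
        localName      : "TaskLogic"
    },
    logicType : "JAVASCRIPT",                 // selects the interpreter to use
    logicCode : "executor.logger.info(executor.subject.id); true;"
};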

Concept: Policy

The Policy concept defines a policy in APEX. The definition is rather straightforward. A policy is made up of a set of states with the flavor of the policy determining the structure of the policy states and the first state defining what state in the policy executes first. Policy concepts are keyed with an ArtifactKey key.

The PolicyFlavour of a Policy concept specifies the structure that will be used for the states in the policy. A number of commonly used policy patterns are supported as APEX policy flavors. The standard policy flavors are:

  • The MEDA flavor supports policies written to the MEDA policy pattern, which require a sequence of four states: namely Match, Establish, Decide, and Act.

  • The OODA flavor supports policies written to the OODA loop pattern, which require a sequence of four states: namely Observe, Orient, Decide, and Act.

  • The ECA flavor supports policies written to the ECA active rule pattern, which require a sequence of three states: namely Event, Condition, and Action.

  • The XACML flavor supports policies written in XACML, which require a single state: namely XACML.

  • The FREEFORM flavor supports policies written in an arbitrary style. A user can define a FREEFORM policy as an arbitrarily long chain of states.

The FirstState field of a Policy definition is the starting point for execution of a policy. Therefore, the trigger event of the state referenced in the FirstState field is also the trigger event for the entire policy.

Concept: State

The State concept represents a phase or a stage in a policy, with a policy being composed of a series of states. Each state has at least one but may have many tasks and, on each run of execution, a state executes one and only one of its tasks. If a state has more than one task, then its task selection logic is used to select which task to execute. Task selection logic is programmable logic provided by the state designer. That logic can use incoming, policy, global, and external context to select which task best accomplishes the purpose of the state in a given situation if more than one task has been specified on a state. A state calls one and only one task when it is executed.

Each state is triggered by an event, which means that all tasks of a state must also be triggered by that same event. The set of output events for a state is the union of all output events from all tasks of that state. In practice at the moment, because a state can only have a single input event, a state that is not the final state of a policy may only output a single event and all tasks of that state may also only output that single event. In future work, the concept of having a less restrictive trigger pattern will be examined.

A State concept is keyed with a ReferenceKey key, which references the Policy concept that owns the state. The LocalName field of the ReferenceKey holds the name of the state. As a state is part of a chain of states, the NextState field of a state holds the ReferenceKey key of the state in the policy to execute after this state.

The Trigger field of a state holds the ArtifactKey of the event that triggers this state. The OutgoingEvents field holds the ArtifactKey references of all possible events that may be output from the state. This is a set that is the union of all output events of all tasks of the state.

The Task concepts that hold the definitions of the task for the state are held as a set of ArtifactKey references in the state. The DefaultTask field holds a reference to the default task for the state, a task that is executed if no task selection logic is specified. If the state has only one task, that task is the default task.

The Logic concept referenced by a state holds the task selection logic for a state. The task selection logic uses the incoming context (parameters of the incoming event) and other context to determine the best task to use to execute its goals. The state holds a set of references to ContextItem and ContextItemTemplate definitions for the context used by its task selection logic.

Writing Logic

Writing APEX Task Logic

Task logic specifies the behavior of an Apex Task. This logic can be specified in a number of ways, exploiting Apex’s plug-in architecture to support a range of logic executors. In Apex, scripted Task Logic can be written in any of these languages: MVEL, JavaScript, Jython, or JRuby.

These languages were chosen because the scripts can be compiled into Java bytecode at runtime and then efficiently executed natively in the JVM. Task Logic can also be written directly in Java, but it needs to be compiled and the resulting classes added to the classpath. There are also a number of other Task Logic types (e.g. Fuzzy Logic), but these are not supported as yet. This guide will focus on the scripted Task Logic approaches, with MVEL and JavaScript being our favorite languages. In particular, this guide will focus on the Apex aspects of the scripts. However, this guide does not attempt to teach you the scripting languages themselves … that is up to you!

Tip

JVM-based scripting languages For more information on scripting for the Java platform see: https://docs.oracle.com/javase/8/docs/technotes/guides/scripting/prog_guide/index.html

Note

What do Tasks do? The function of an Apex Task is to provide the logic that can be executed for an Apex State as one of the steps in an Apex Policy. Each task receives some incoming fields, executes some logic (e.g. makes a decision based on shared state or context, incoming fields, external context, etc.), perhaps sets some shared state or context, and then emits outgoing fields. The state that uses the task is responsible for extracting the incoming fields from the state input event. The state also has an output mapper associated with the task, and this output mapper is responsible for mapping the outgoing fields from the task into an appropriate output event for the state.

First let’s start with a sample task, drawn from the “My First Apex Policy” example. The task “MorningBoozeCheck” from the “My First Apex Policy” example is available in both MVEL and JavaScript:

Javascript code for the MorningBoozeCheck task

/*
 * ============LICENSE_START=======================================================
 *  Copyright (C) 2016-2018 Ericsson. All rights reserved.
 *  Modifications Copyright (C) 2020 Nordix Foundation.
 * ================================================================================
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *      http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 *
 * SPDX-License-Identifier: Apache-2.0
 * ============LICENSE_END=========================================================
 */

executor.logger.info("Task Execution: '"+executor.subject.id+"'. Input Fields: '"+executor.inFields+"'");

executor.outFields.put("amount"      , executor.inFields.get("amount"));
executor.outFields.put("assistant_ID", executor.inFields.get("assistant_ID"));
executor.outFields.put("notes"       , executor.inFields.get("notes"));
executor.outFields.put("quantity"    , executor.inFields.get("quantity"));
executor.outFields.put("branch_ID"   , executor.inFields.get("branch_ID"));
executor.outFields.put("item_ID"     , executor.inFields.get("item_ID"));
executor.outFields.put("time"        , executor.inFields.get("time"));
executor.outFields.put("sale_ID"     , executor.inFields.get("sale_ID"));

item_id = executor.inFields.get("item_ID");

//All times in this script are in GMT/UTC since the policy and events assume time is in GMT.
var timenow_gmt =  new Date(Number(executor.inFields.get("time")));

var midnight_gmt = new Date(Number(executor.inFields.get("time")));
midnight_gmt.setUTCHours(0,0,0,0);

var eleven30_gmt = new Date(Number(executor.inFields.get("time")));
eleven30_gmt.setUTCHours(11,30,0,0);

var timeformatter = new java.text.SimpleDateFormat("HH:mm:ss z");

var itemisalcohol = false;
if(item_id != null && item_id >=1000 && item_id < 2000)
    itemisalcohol = true;

if( itemisalcohol
    && timenow_gmt.getTime() >= midnight_gmt.getTime()
    && timenow_gmt.getTime() <  eleven30_gmt.getTime()) {

  executor.outFields.put("authorised", false);
  executor.outFields.put("message", "Sale not authorised by policy task " +
    executor.subject.taskName+ " for time " + timeformatter.format(timenow_gmt.getTime()) +
    ". Alcohol can not be sold between " + timeformatter.format(midnight_gmt.getTime()) +
    " and " + timeformatter.format(eleven30_gmt.getTime()));
}
else{
  executor.outFields.put("authorised", true);
  executor.outFields.put("message", "Sale authorised by policy task " +
    executor.subject.taskName + " for time "+timeformatter.format(timenow_gmt.getTime()));
}

/*
This task checks if a sale request is for an item that is an alcoholic drink.
If the local time is between 00:00:00 GMT and 11:30:00 GMT then the sale is not
authorised. Otherwise the sale is authorised.
In this implementation we assume that items with item_ID value between 1000 and
2000 are all alcoholic drinks :-)
*/

true;

MVEL code for the MorningBoozeCheck task

/*
 * ============LICENSE_START=======================================================
 *  Copyright (C) 2016-2018 Ericsson. All rights reserved.
 *  Modifications Copyright (C) 2020 Nordix Foundation.
 * ================================================================================
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *      http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 *
 * SPDX-License-Identifier: Apache-2.0
 * ============LICENSE_END=========================================================
 */
import java.util.Date;
import java.util.Calendar;
import java.util.TimeZone;
import java.text.SimpleDateFormat;

logger.info("Task Execution: '"+subject.id+"'. Input Fields: '"+inFields+"'");

outFields.put("amount"      , inFields.get("amount"));
outFields.put("assistant_ID", inFields.get("assistant_ID"));
outFields.put("notes"       , inFields.get("notes"));
outFields.put("quantity"    , inFields.get("quantity"));
outFields.put("branch_ID"   , inFields.get("branch_ID"));
outFields.put("item_ID"     , inFields.get("item_ID"));
outFields.put("time"        , inFields.get("time"));
outFields.put("sale_ID"     , inFields.get("sale_ID"));

item_id = inFields.get("item_ID");

//The events used later to test this task use GMT timezone!
gmt = TimeZone.getTimeZone("GMT");
timenow = Calendar.getInstance(gmt);
df = new SimpleDateFormat("HH:mm:ss z");
df.setTimeZone(gmt);
timenow.setTimeInMillis(inFields.get("time"));

midnight = timenow.clone();
midnight.set(
    timenow.get(Calendar.YEAR),timenow.get(Calendar.MONTH),
    timenow.get(Calendar.DATE),0,0,0);
eleven30 = timenow.clone();
eleven30.set(
    timenow.get(Calendar.YEAR),timenow.get(Calendar.MONTH),
    timenow.get(Calendar.DATE),11,30,0);

itemisalcohol = false;
if(item_id != null && item_id >=1000 && item_id < 2000)
    itemisalcohol = true;

if( itemisalcohol
    && timenow.after(midnight) && timenow.before(eleven30)){
  outFields.put("authorised", false);
  outFields.put("message", "Sale not authorised by policy task "+subject.taskName+
    " for time "+df.format(timenow.getTime())+
    ". Alcohol can not be sold between "+df.format(midnight.getTime())+
    " and "+df.format(eleven30.getTime()));
  return true;
}
else{
  outFields.put("authorised", true);
  outFields.put("message", "Sale authorised by policy task "+subject.taskName+
    " for time "+df.format(timenow.getTime()));
  return true;
}

/*
This task checks if a sale request is for an item that is an alcoholic drink.
If the local time is between 00:00:00 GMT and 11:30:00 GMT then the sale is not
authorised. Otherwise the sale is authorised.
In this implementation we assume that items with item_ID value between 1000 and
2000 are all alcoholic drinks :-)
*/

The role of the task in this simple example is to copy the values in the incoming fields into the outgoing fields, then examine the values in some incoming fields (item_id and time), then set the values in some other outgoing fields (authorised and message).

Both MVEL and JavaScript, like most JVM-based scripting languages, can use standard Java libraries to perform complex tasks. Towards the top of the scripts you will see how to import Java classes and packages to be used directly in the logic. Another thing to notice is that Task Logic should return a java.lang.Boolean value true if the logic executed correctly. If the logic fails for some reason then false can be returned, but this will cause the policy invoking this task to fail and exit.

Note

How to return a value from task logic Some languages explicitly support returning values from the script (e.g. MVEL and JRuby) using an explicit return statement (e.g. return true), other languages do not (e.g. Jython). For languages that do not support the return statement, a special field called returnValue must be created to hold the result of the task logic operation (i.e. assign a java.lang.Boolean value to the returnValue field before completing the task). Also, in MVEL, if there is no explicit return statement then the return value of the last executed statement is returned (e.g. the statement a=(1+2) will return the value 3).

For JavaScript, the last statement of a script must be a statement that evaluates to true or false, indicating whether the script executed correctly or not. In the case where the script always executes to completion successfully, simply add a last line with the statement true;. In cases where success or failure is assessed in the script, create a boolean local variable with a name such as returnValue. In the execution of the script, set returnValue to true or false as appropriate. The last line of the script then should simply be returnValue;, which returns the value of returnValue.
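A minimal JavaScript skeleton following this pattern might look as follows; the field name "amount" is borrowed from the MorningBoozeCheck example above.

// Sketch of the returnValue pattern for JavaScript task logic.
var returnValue = true;

if (executor.inFields.get("amount") == null) {
    executor.logger.warn("mandatory field 'amount' is missing");
    returnValue = false;
} else {
    executor.outFields.put("amount", executor.inFields.get("amount"));
}

// Last statement: its value is the result of the task logic.
returnValue;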

Besides these imported classes and normal language features Apex provides some natively available parameters and functions that can be used directly. At run-time these parameters are populated by the Apex execution environment and made natively available to logic scripts each time the logic script is invoked. (These can be accessed using the executor keyword for most languages, or can be accessed directly without the executor keyword in MVEL):

Table 1. The executor Fields / Methods

Name

Type

Java type

Description

inFields

Fields

java.util.Map <String,Object>

The incoming task fields, implemented as a standard Java (unmodifiable) Map

Example:

executor.logger.debug("Incoming fields: " +executor.inFields.entrySet());
var item_id = executor.inFields["item_ID"];
if (item_id >=1000) { ... }

outFields

Fields

java.util.Map <String,Object>

The outgoing task fields. This is implemented as a standard initially empty Java (modifiable) Map. To create a new schema-compliant instance of a field object see the utility method subject.getOutFieldSchemaHelper() below

Example:

executor.outFields["authorised"] = false;

logger

Logger

org.slf4j.ext.XLogger

A helpful logger

Example:

executor.logger.info("Executing task: " +executor.subject.id);

TRUE/FALSE

boolean

java.lang.Boolean

Two helpful constants, useful for returning the correct return values from the task logic

Example:

var returnValue = executor.isTrue;
var returnValueType = Java.type("java.lang.Boolean");
var returnValue = new returnValueType(true);

subject

Task

TaskFacade

This provides some useful information about the task that contains this task logic. This object has some useful fields and methods:

  • AxTask task to get access to the full task definition of the host task

  • String getTaskName() to get the name of the host task

  • String getId() to get the ID of the host task

  • SchemaHelper getInFieldSchemaHelper( String fieldName ) to get a SchemaHelper helper object to manipulate incoming task fields in a schema-aware manner

  • SchemaHelper getOutFieldSchemaHelper( String fieldName ) to get a SchemaHelper helper object to manipulate outgoing task fields in a schema-aware manner, e.g. to instantiate new schema-compliant field objects to populate the executor.outFields outgoing fields map

Example:

executor.logger.info("Task name: " + executor.subject.getTaskName());
executor.logger.info("Task id: " + executor.subject.getId());
executor.logger.info("Task inputs definitions: "
  + "executor.subject.task.getInputFieldSet());
executor.logger.info("Task outputs definitions: "
  + "executor.subject.task.getOutputFieldSet());
executor.outFields["authorised"] = executor.subject
  .getOutFieldSchemaHelper("authorised").createNewInstance("false");

ContextAlbum getContextAlbum(String ctxtAlbumName)

A utility method to retrieve a ContextAlbum for use in the task. This is how you access the context used by the task. The returned ContextAlbum implements the java.util.Map <String,Object> interface to get and set context as appropriate. The returned ContextAlbum also has methods to lock context albums, get information about the schema of the items to be stored in a context album, and get a SchemaHelper to manipulate context album items. How to define and use context in a task is described in the Apex Programmer’s Guide and in the My First Apex Policy guide.

Example:

var bkey = executor.inFields.get("branch_ID");
var cnts = executor.getContextAlbum("BranchCounts");
cnts.lockForWriting(bkey);
cnts.put(bkey, cnts.get(bkey) + 1);
cnts.unlockForWriting(bkey);
Writing APEX Task Selection Logic

The function of Task Selection Logic is to choose which task should be executed for an Apex State as one of the steps in an Apex Policy. Since each state must define a default task, there is no need for Task Selection Logic unless the state uses more than one task. This logic can be specified in a number of ways, exploiting Apex’s plug-in architecture to support a range of logic executors. In Apex, scripted Task Selection Logic can be written in any of these languages: MVEL, JavaScript, Jython, or JRuby.

These languages were chosen because the scripts can be compiled into Java bytecode at runtime and then efficiently executed natively in the JVM. Task Selection Logic can also be written directly in Java, but it needs to be compiled and the resulting classes added to the classpath. There are also a number of other Task Selection Logic types, but these are not supported as yet. This guide will focus on the scripted Task Selection Logic approaches, with MVEL and JavaScript being our favorite languages. In particular, this guide will focus on the Apex aspects of the scripts. However, this guide does not attempt to teach you the scripting languages themselves … that is up to you!

Tip

JVM-based scripting languages For more information on scripting for the Java platform see: https://docs.oracle.com/javase/8/docs/technotes/guides/scripting/prog_guide/index.html

Note

What does Task Selection Logic do? When an Apex state references multiple tasks, there must be a way to dynamically decide which task should be chosen and executed. This can depend on many factors, e.g. the incoming event for the state, shared state or context, external context, etc. This is the function of a state’s Task Selection Logic. Obviously, if there is only one task then Task Selection Logic is not needed. Each state must also select one of its tasks as the default task. If the Task Selection Logic is unable to select an appropriate task, then it should select the default task. Once the task has been selected, the Apex Engine will then execute that task.

First let’s start with some simple Task Selection Logic, drawn from the “My First Apex Policy” example. The Task Selection Logic from the “My First Apex Policy” example is specified in JavaScript here:

Javascript code for the “My First Policy” Task Selection Logic

/*
 * ============LICENSE_START=======================================================
 *  Copyright (C) 2016-2018 Ericsson. All rights reserved.
 *  Modifications Copyright (C) 2020 Nordix Foundation.
 * ================================================================================
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *      http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 *
 * SPDX-License-Identifier: Apache-2.0
 * ============LICENSE_END=========================================================
 */

executor.logger.info("Task Selection Execution: '"+executor.subject.id+
    "'. Input Event: '"+executor.inFields+"'");

branchid = executor.inFields.get("branch_ID");
taskorig = executor.subject.getTaskKey("MorningBoozeCheck");
taskalt = executor.subject.getTaskKey("MorningBoozeCheckAlt1");
taskdef = executor.subject.getDefaultTaskKey();

if(branchid >=0 && branchid <1000){
  taskorig.copyTo(executor.selectedTask);
}
else if (branchid >=1000 && branchid <2000){
  taskalt.copyTo(executor.selectedTask);
}
else{
  taskdef.copyTo(executor.selectedTask);
}

/*
This task selection logic selects task "MorningBoozeCheck" for branches with
0<=branch_ID<1000 and selects task "MorningBoozeCheckAlt1" for branches with
1000<=branch_ID<2000. Otherwise the default task is selected.
In this case the default task is also "MorningBoozeCheck"
*/

true;

The role of the Task Selection Logic in this simple example is to examine the value in one incoming field (branch_ID), then, depending on that field’s value, set the selected task to the appropriate task (MorningBoozeCheck, MorningBoozeCheckAlt1, or the default task).

Another thing to notice is that Task Selection Logic should return a java.lang.Boolean value true if the logic executed correctly. If the logic fails for some reason then false can be returned, but this will cause the policy invoking this task to fail and exit.

Note

How to return a value from Task Selection Logic Some languages explicitly support returning values from the script (e.g. MVEL and JRuby) using an explicit return statement (e.g. return true), other languages do not (e.g. JavaScript and Jython). For languages that do not support the return statement, a special field called returnValue must be created to hold the result of the task logic operation (i.e. assign a java.lang.Boolean value to the returnValue field before completing the task). Also, in MVEL, if there is no explicit return statement then the value of the last executed statement is returned (e.g. the statement a=(1+2) will return the value 3).

Each of the scripting languages used in Apex can import and use standard Java libraries to perform complex tasks. Besides imported classes and normal language features Apex provides some natively available parameters and functions that can be used directly. At run-time these parameters are populated by the Apex execution environment and made natively available to logic scripts each time the logic script is invoked. (These can be accessed using the executor keyword for most languages, or can be accessed directly without the executor keyword in MVEL):

Table 2. The executor Fields / Methods


Name

Type

Java type

Description

inFields

Fields

java.util.Map <String,Object>

All fields in the state’s incoming event. This is implemented as a standard Java (unmodifiable) Map

Example:

executor.logger.debug("Incoming fields: " + executor.inFields.entrySet());
var item_id = executor.inFields["item_ID"];
if (item_id >=1000) { ... }

outFields

Fields

java.util.Map <String,Object>

The outgoing task fields. This is implemented as a standard, initially empty, (modifiable) Java Map. To create a new schema-compliant instance of a field object see the utility method subject.getOutFieldSchemaHelper() below

Example:

executor.outFields["authorised"] = false;

logger

Logger

org.slf4j.ext.XLogger

A helpful logger

Example:

executor.logger.info("Executing task: "
+executor.subject.id);

TRUE/FALSE

boolean

java.lang.Boolean

Two helpful constants. These are useful for setting the correct return value of the task logic

Example:

var returnValue = executor.isTrue;
var returnValueType = Java.type("java.lang.Boolean");
var returnValue = new returnValueType(true);

subject

Task

TaskFacade

This provides some useful information about the task that contains this task logic. This object has some useful fields and methods:

  • AxTask task to get access to the full task definition of the host task

  • String getTaskName() to get the name of the host task

  • String getId() to get the ID of the host task

  • SchemaHelper getInFieldSchemaHelper( String fieldName ) to get a SchemaHelper helper object to manipulate incoming task fields in a schema-aware manner

  • SchemaHelper getOutFieldSchemaHelper( String fieldName ) to get a SchemaHelper helper object to manipulate outgoing task fields in a schema-aware manner, e.g. to instantiate new schema-compliant field objects to populate the executor.outFields outgoing fields map

Example:

executor.logger.info("Task name: " + executor.subject.getTaskName());
executor.logger.info("Task id: " + executor.subject.getId());
executor.logger.info("Task inputs definitions: "
  + "executor.subject.task.getInputFieldSet());
executor.logger.info("Task outputs definitions: "
  + "executor.subject.task.getOutputFieldSet());
executor.outFields["authorised"] = executor.subject
  .getOutFieldSchemaHelper("authorised")
  .createNewInstance("false");

parameters

Fields

java.util.Map <String,String>

All parameters in the current task. This is implemented as a standard Java Map.

Example:

executor.parameters.get("ParameterKey1");

ContextAlbum getContextAlbum(String ctxtAlbumName)

A utility method to retrieve a ContextAlbum for use in the task. This is how you access the context used by the task. The returned ContextAlbum implements the java.util.Map <String,Object> interface to get and set context as appropriate. The returned ContextAlbum also has methods to lock context albums, get information about the schema of the items to be stored in a context album, and get a SchemaHelper to manipulate context album items. How to define and use context in a task is described in the Apex Programmer’s Guide and in the My First Apex Policy guide.

Example:

var bkey = executor.inFields.get("branch_ID");
var cnts = executor.getContextAlbum("BranchCounts");
cnts.lockForWriting(bkey);
cnts.put(bkey, cnts.get(bkey) + 1);
cnts.unlockForWriting(bkey);
Logic Cheat Sheet

Examples given here use Javascript (if not stated otherwise); other execution environments will be similar.

Finish Logic with Success or Error

To finish logic, i.e. return to APEX, with success use the following line close to the end of the logic.

JS Success

true;

To notify a problem, finish with an error.

JS Fail

false;
Logic Logging

Logging can be made easy using a local variable for the logger. Line 1 below does that. Then we start with a trace log with the task (or task logic) identifier followed by the infields.

JS Logging

var logger = executor.logger;
logger.trace("start: " + executor.subject.id);
logger.trace("-- infields: " + executor.inFields);

For larger logging blocks you can use the standard logging API to detect log levels, for instance:

JS Logging Blocks

if(logger.isTraceEnabled()){
  // trace logging block here
}

Note: the logger shown here logs to org.onap.policy.apex.executionlogging. The behaviour of the actual logging can be specified in $APEX_HOME/etc/logback.xml.

If you want to log into the APEX root logger (which is sometimes necessary to report serious logic errors to the top), then reference the required logger factory class and use the root logger as follows.

JS Root Logger

var rootLogger = org.slf4j.LoggerFactory.getLogger(logger.ROOT_LOGGER_NAME);
rootLogger.error("Serious error in logic detected: " + executor.subject.id);
Accessing TaskParameters

TaskParameters available in a Task can be accessed in the logic. The parameters in each task are made available at the executor level. This example assumes a parameter with key ParameterKey1.

JS TaskParameter value

executor.parameters.get("ParameterKey1");

Alternatively, the task parameters can also be accessed from the task object.

JS TaskParameter value using task object

executor.subject.task.getTaskParameters().get("ParameterKey1").getTaskParameterValue();
Local Variable for Infields

It is a good idea to use local variables for infields. This avoids long code lines and eases policy evolution. The following example assumes infields named nodeName and nodeAlias.

JS Infields Local Var

var ifNodeName = executor.inFields["nodeName"];
var ifNodeAlias = executor.inFields["nodeAlias"];
Local Variable for Context Albums

Similar to the infields it is good practice to use local variables for context albums as well. The following example assumes that a task can access a context album albumTopoNodes. The second line gets a particular node from this context album.

JS Context Album Local Var

var albumTopoNodes = executor.getContextAlbum("albumTopoNodes");
var ctxtNode = albumTopoNodes.get(ifNodeName);
Set Outfields in Logic

The task logic needs to set outfields with the content it generates. The exception is outfields that are a direct copy of an infield of the same name; APEX does that automatically.

JS Set Outfields

executor.outFields["report"] = "node ctxt :: added node " + ifNodeName;
Create an instance of an Outfield using Schemas

If an outfield is not an atomic type (string, integer, etc.) but uses a complex schema (with a Java or Avro backend), APEX can help to create new instances. The executor provides a field called subject, which provides a schema helper with an API for this. The complete API of the schema helper is documented here: API Doc: SchemaHelper.

If the backend is Java, then the Java class implementing the schema needs to be imported.

The following example assumes an outfield named situation. The subject method getOutFieldSchemaHelper() is used to create a new instance.

JS Outfield Instance with Schema

var situation = executor.subject.getOutFieldSchemaHelper("situation").createNewInstance();

If the schema backend is Java, the new instance will be as implemented in the Java class. If the schema backend is Avro, the new instance will have all fields from the Avro schema specification, but set to null. So any entry here needs to be done separately. For instance, the situation schema has a field problemID which we set.

JS Outfield Instance with Schema, set

situation.put("problemID", "my-problem");
Create an instance of a Context Album entry using Schemas

Context album entries can be created in a way very similar to the outfields. Here, the schema helper comes from the context album directly. The API of the schema helper is the same as for outfields, see API Doc: SchemaHelper.

If the backend is Java, then the Java class implementing the schema needs to be imported.

The following example creates a new instance of an entry for a context album named albumProblemMap.

JS Context Album Instance with Schema

var albumProblemMap = executor.getContextAlbum("albumProblemMap");
var linkProblem = albumProblemMap.getSchemaHelper().createNewInstance();

This can of course also be done in a single call, without the local variable for the context album.

JS Outfield Instance with Schema, one line

var linkProblem = executor.getContextAlbum("albumProblemMap").getSchemaHelper().createNewInstance();

If the schema backend is Java, the new instance will be as implemented in the Java class. If the schema backend is Avro, the new instance will have all fields from the Avro schema specification, but set to null. So any entry here needs to be done separately (see above in outfields for an example).

Enumerates

When dealing with enumerates (Avro or Java defined), it is sometimes necessary, in some execution environments, to convert them to a string. For example, assume an Avro enumerate schema as:

Avro Enumerate Schema

{
  "type": "enum", "name": "Status", "symbols" : [
    "UP", "DOWN"
  ]
}

Using a switch over a field initialized with this enumerate in Javascript will fail. Instead, use the toString method, for example:

JS Switch over an Enumerate using toString()

var switchTest = executor.inFields["status"];
switch (switchTest.toString()) {
  case "UP":   ...; break;
  case "DOWN": ...; break;
  default:     ...;
}
MVEL Initialize Outfields First!

In MVEL, we observed a problem when accessing (setting) outfields without a prior access to them. So in any MVEL task logic, before setting any outfield, simply do a get (with any string), to load the outfields into the MVEL cache.

MVEL Outfield Initialization

outFields.get("initialize outfields");
Using Java in Scripting Logic

Since APEX executes the logic inside a JVM, most scripting languages provide access to all standard Java classes. Simply add an import for the required class and then use it as in actual Java.

The following example imports java.util.ArrayList into a Javascript logic, and then creates a new list.

JS Import ArrayList

importClass(java.util.ArrayList);

var myList = new ArrayList();
Converting Javascript scripts from Nashorn to Rhino dialects

The Nashorn Javascript engine was removed from Java in the Java 11 release. Java 11 was introduced into the Policy Framework in the Frankfurt release, so from Frankfurt on, APEX Javascript scripts use the Rhino Javascript engine and scripts must be in the Rhino dialect.

There are some minor but important differences between the dialects that users should be aware of so that they can convert their scripts into the Rhino dialect.

Return Values

APEX scripts must always return a value of true indicating that the script executed correctly or false indicating that there was an error in script execution.

Pre Frankfurt

In Nashorn dialect scripts, the user had to create a special variable called returnValue and set the value of that variable to be the return value for the script.

Frankfurt and Later

In Rhino dialect scripts, the return value of the script is the logical result of the last statement. Therefore the last line of the script must evaluate to either true or false.

JS Rhino script last executed line examples

true;

returnValue; // Where returnValue is assigned earlier in the script

someValue == 1; // Where the value of someValue is assigned earlier in the script
Return Statement

The return statement is not supported in the main script run by the Rhino interpreter.

Pre Frankfurt

In Nashorn dialect scripts, the user could return a value of true or false at any point in their script.

JS Nashorn main script returning true and false

var n;

// some code assigns n a value

if (n < 2) {
  return false;
} else {
  return true;
}

Frankfurt and Later

In Rhino dialect scripts, the return statement cannot be used in the main method, but it can still be used in functions. If you want to have a return statement in your code prior to the last statement, encapsulate your code in a function.

JS Rhino script with return statements in a function

someFunction();

function someFunction() {
  var n;

  // some code assigns n a value

  if (n < 2) {
      return false;
  } else {
      return true;
  }
}
Compatibility Script

For Nashorn, the user had to call a compatibility script at the beginning of their Javascript script. This is not required in Rhino.

Pre Frankfurt

In Nashorn dialect scripts, the compatibility script must be loaded.

Nashorn compatibility script loading

load("nashorn:mozilla_compat.js");

Frankfurt and Later

Not required.

Import of Java classes

For Nashorn, the user had to explicitly import all the Java packages and classes they wished to use in their Javascript script. In Rhino, all Java classes on the classpath are available for use.

Pre Frankfurt

In Nashorn dialect scripts, Java classes must be imported.

Importation of Java packages and classes

importPackage(java.text);
importClass(java.text.SimpleDateFormat);

Frankfurt and Later

Not required.

Using Java Classes and Objects as Variables

Setting a Javascript variable to hold a Java class or a Java object is more straightforward in Rhino than it is in Nashorn. The examples below show how to instantiate a Javascript variable as a Java class and how to use that variable to create an instance of the Java class in another Javascript variable in both dialects.

Pre Frankfurt

Create Javascript variables to hold a Java class and instance

var webClientClass = Java.type("org.onap.policy.apex.examples.bbs.WebClient");
var webClientObject = new webClientClass();

Frankfurt and Later

Create Javascript variables to hold a Java class and instance

var webClientClass = org.onap.policy.apex.examples.bbs.WebClient;
var webClientObject = new webClientClass();
Equal Value and Equal Type operator ===

The Equal Value and Equal Type operator === is not supported in Rhino. Developers must use the Equal To operator == instead. To check types, they may need to explicitly find and check the type of the variables they are using.
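As a minimal illustration (the field name and log messages here are hypothetical, not taken from the examples), a Nashorn-era strict comparison can be rewritten for Rhino using the Equal To operator, with an explicit type check added where the type matters:

JS Rhino Equal To with explicit type check

var quantity = executor.inFields["quantity"];

// Nashorn (pre-Frankfurt): if (quantity === 1) { ... }
// Rhino (Frankfurt and later): use the Equal To operator instead
if (quantity == 1) {
  executor.logger.info("single item sale");
}

// Where the type matters, check it explicitly before comparing the value
if (typeof quantity == "number" && quantity == 1) {
  executor.logger.info("single item sale, type checked");
}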

APEX OnapPf Guide

Installation

Build and Install

Refer to the APEX User Manual for details on the build and installation of the APEX component. Information on the requirements and system configuration can also be found there.

Installation Layout

A full installation of APEX comes with the following layout.

$APEX_HOME
    ├───bin                 (1)
    ├───etc                 (2)
    │   ├───editor
    │   ├───hazelcast
    │   ├───infinispan
    │   ├───META-INF
    │   ├───onappf
    │   │   └───config      (3)
    │   └───ssl             (4)
    ├───examples            (5)
    │   ├───config          (6)
    │   ├───docker          (7)
    │   ├───events          (8)
    │   ├───html            (9)
    │   ├───models          (10)
    │   └───scripts         (11)
    ├───lib                 (12)
    │   └───applications    (13)
    └───war                 (14)

1   binaries, mainly scripts (bash and bat) to start the APEX engine and applications
2   configuration files, such as logback (logging) and third-party library configurations
3   configuration file for APEXOnapPf, such as OnapPfConfig.json (initial configuration for APEXOnapPf)
4   ssl-related files such as policy-keystore and policy-truststore
5   example policy models to get started
6   configurations for the examples (with sub-directories for individual examples)
7   Docker files and additional Docker instructions for the examples
8   example events for the examples (with sub-directories for individual examples)
9   HTML files for some examples, e.g. the Decisionmaker example
10  the policy models, generated for each example (with sub-directories for individual examples)
11  additional scripts for the examples (with sub-directories for individual examples)
12  the library folder with all Java JAR files
13  applications, also known as JARs with dependencies (or fat JARs), individually deployable
14  WAR files for web applications

Verify the APEXOnapPf Installation

When APEX is installed and all settings are realized, the installation can be verified.

Verify Installation - run APEXOnapPf

A simple verification of an APEX installation can be done by starting the APEXOnapPf without any configuration. On UNIX (or Cygwin) start the engine using $APEX_HOME/bin/apexOnapPf.sh. On Windows start the engine using %APEX_HOME%\bin\apexOnapPf.bat. The engine will fail to fully start. However, if the output looks similar to the following lines, the APEX installation is working as expected.

Apex [main] INFO o.o.p.a.s.onappf.ApexStarterMain - In ApexStarter with parameters []
Apex [main] ERROR o.o.p.a.s.onappf.ApexStarterMain - start of services-onappf failed
org.onap.policy.apex.services.onappf.exception.ApexStarterException: apex starter configuration file was not specified as an argument
        at org.onap.policy.apex.services.onappf.ApexStarterCommandLineArguments.validateReadableFile(ApexStarterCommandLineArguments.java:278)
        at org.onap.policy.apex.services.onappf.ApexStarterCommandLineArguments.validate(ApexStarterCommandLineArguments.java:165)
        at org.onap.policy.apex.services.onappf.ApexStarterMain.<init>(ApexStarterMain.java:66)
        at org.onap.policy.apex.services.onappf.ApexStarterMain.main(ApexStarterMain.java:165)

To fully verify the installation, run the ApexOnapPf by providing the configuration files.

OnapPfConfig.json is the file which contains the initial configuration to start up the ApexStarter service. The dmaap topics to be used for sending or receiving messages are also specified in this file. Provide this file as an argument when running the ApexOnapPf.

# $APEX_HOME/bin/apexOnapPf.sh -c $APEX_HOME/etc/onappf/config/OnapPfConfig.json (1)
# $APEX_HOME/bin/apexOnapPf.sh -c C:/apex/apex-full-2.0.0-SNAPSHOT/etc/onappf/config/OnapPfConfig.json (2)
>%APEX_HOME%\bin\apexOnapPf.bat -c %APEX_HOME%\etc\onappf\config\OnapPfConfig.json (3)

1   UNIX
2   Cygwin
3   Windows

The APEXOnapPf should start successfully. Assuming the logging levels are not changed (default level is info), the output should look similar to this (last few lines):

In ApexStarter with parameters [-c, C:/apex/etc/onappf/config/OnapPfConfig.json] . . .
Apex [main] INFO o.o.p.c.u.services.ServiceManager - service manager starting set alive
Apex [main] INFO o.o.p.c.u.services.ServiceManager - service manager starting register pdp status context object
Apex [main] INFO o.o.p.c.u.services.ServiceManager - service manager starting topic sinks
Apex [main] INFO o.o.p.c.u.services.ServiceManager - service manager starting Pdp Status publisher
Apex [main] INFO o.o.p.c.u.services.ServiceManager - service manager starting Register pdp update listener
Apex [main] INFO o.o.p.c.u.services.ServiceManager - service manager starting Register pdp state change request dispatcher
Apex [main] INFO o.o.p.c.u.services.ServiceManager - service manager starting Message Dispatcher . . .
Apex [main] INFO o.o.p.c.u.services.ServiceManager - service manager starting Rest Server . . .
Apex [main] INFO o.o.p.c.u.services.ServiceManager - service manager started
Apex [main] INFO o.o.p.a.s.onappf.ApexStarterMain - Started ApexStarter service

The ApexOnapPf service is now running, sending heartbeat messages to dmaap (which will be received by PAP) and listening for messages from PAP on the dmaap topic specified. Based on instructions from PAP, the ApexOnapPf will deploy or undeploy policies on the ApexEngine.

Terminate APEX by simply using CTRL+C in the console.

Running APEXOnapPf in Docker

Running APEX from the ONAP docker repository only requires 2 commands:

  1. Log into the ONAP docker repo

docker login -u docker -p docker nexus3.onap.org:10003
  2. Run the APEX docker image

docker run -p 6969:6969 -p 23324:23324 -it --rm  nexus3.onap.org:10001/onap/policy-apex-pdp:2.1-SNAPSHOT-latest /bin/bash -c "/opt/app/policy/apex-pdp/bin/apexOnapPf.sh -c /opt/app/policy/apex-pdp/etc/onappf/config/OnapPfConfig.json"

To run the ApexOnapPf, the startup script apexOnapPf.sh along with the required configuration file is specified. Also, the ports 6969 (healthcheck) and 23324 (deployment port for the ApexEngine) are exposed.

Build a Docker Image

Alternatively, one can use the Dockerfile defined in the Docker package to build an image.

APEX Dockerfile

#
# Docker file to build an image that runs APEX on Java 11 or better in alpine
#
FROM onap/policy-jre-alpine:2.0.1

LABEL maintainer="Policy Team"

ARG POLICY_LOGS=/var/log/onap/policy/apex-pdp
ENV POLICY_HOME=/opt/app/policy/apex-pdp
ENV POLICY_LOGS=$POLICY_LOGS

RUN apk add --no-cache \
        vim \
        iproute2 \
        iputils \
    && addgroup -S apexuser && adduser -S apexuser -G apexuser \
    && mkdir -p $POLICY_HOME \
    && mkdir -p $POLICY_LOGS \
    && chown -R apexuser:apexuser $POLICY_LOGS \
    && mkdir /packages

COPY /maven/apex-pdp-package-full.tar.gz /packages
RUN tar xvfz /packages/apex-pdp-package-full.tar.gz --directory $POLICY_HOME \
    && rm /packages/apex-pdp-package-full.tar.gz \
    && find /opt/app -type d -perm 755 \
    && find /opt/app -type f -perm 644 \
    && chmod 755 $POLICY_HOME/bin/* \
    && cp -pr $POLICY_HOME/examples /home/apexuser \
    && chown -R apexuser:apexuser /home/apexuser/* $POLICY_HOME \
    && chmod 644 $POLICY_HOME/etc/*

USER apexuser
ENV PATH $POLICY_HOME/bin:$PATH
WORKDIR /home/apexuser

APEXOnapPf Configuration File Explained

The ApexOnapPf is initialized using a configuration file:

  • OnapPfConfig.json

Format of the configuration file (OnapPfConfig.json) explained

The configuration file is a JSON file containing the initial values for configuring the rest server for healthcheck and the pdp itself. The topic infrastructure and the topics to be used for sending or receiving messages are specified in this configuration file. A sample can be found below:

{
    "name":"ApexStarterParameterGroup",
    "restServerParameters": {  (1)
        "host": "0.0.0.0",
        "port": 6969,
        "userName": "...",
        "password": "...",
        "https": true  (2)
    },
    "pdpStatusParameters":{
        "timeIntervalMs": 120000,  (3)
        "pdpType":"apex",  (4)
        "pdpGroup":"defaultGroup",  (5)
        "description":"Pdp Heartbeat",
        "supportedPolicyTypes":[{"name":"onap.policies.controlloop.operational.Apex","version":"1.0.0"}]  (6)
    },
    "topicParameterGroup": {
        "topicSources" : [{  (7)
            "topic" : "POLICY-PDP-PAP",  (8)
            "servers" : [ "message-router" ],  (9)
            "topicCommInfrastructure" : "dmaap"  (10)
        }],
        "topicSinks" : [{  (11)
            "topic" : "POLICY-PDP-PAP",  (12)
            "servers" : [ "message-router" ],  (13)
            "topicCommInfrastructure" : "dmaap"  (14)
        }]
    }
}

1   parameters for setting up the rest server, such as host, port, userName and password
2   https flag; if enabled, https support is enabled in the rest server
3   time interval in which PDP-A has to send heartbeats to PAP, specified in milliseconds
4   type of the pdp
5   the group to which the pdp belongs
6   list of policy types supported by the PDP. A trailing “.*” can be used to specify multiple policy types; for example, “onap.policies.controlloop.operational.apex.*” would match any policy type beginning with “onap.policies.controlloop.operational.apex.”
7   list of topics’ details from which messages are received
8   topic name of the source on which PDP-A listens for messages from PAP
9   list of servers for the source topic
10  the source topic infrastructure, e.g. dmaap, noop, ueb
11  list of topics’ details to which messages are sent
12  topic name of the sink to which PDP-A sends messages
13  list of servers for the sink topic
14  the sink topic infrastructure, e.g. dmaap, noop, ueb

Policy Examples

HowTo: My First Policy

Introduction

Consider a scenario where a supermarket chain called HyperM controls how it sells items in a policy-based manner. Each time an item is processed by HyperM’s point-of-sale (PoS) system an event is generated and published about that item of stock being sold. This event can then be used to update stock levels, etc.

HyperM want to extend this approach to allow some checks to be performed before the sale can be completed. This can be achieved by requesting a policy-controlled decision as each item is processed for sale by each PoS system. The decision process is integrated with HyperM’s other IT systems that manage stock control, sourcing and purchasing, personnel systems, etc.

In this document we will show how APEX and APEX Policies can be used to achieve this, starting with a simple policy and building up to a more complicated policy that demonstrates the features of APEX.

Data Models

Sales Input Event

Each time a PoS system processes a sales item an event with the following format is emitted:

Table 1. Sale Input Event

Event:       SALE_INPUT
Fields:      time, sale_ID, amount, item_ID, quantity, assistant_ID, branch_ID, notes, …​
Description: Event indicating a sale of an item is occurring

In each SALE_INPUT event the sale_ID field is a unique ID generated by the PoS system. A timestamp for the event is stored in the time field. The amount field refers to the value of the item(s) to be sold (in cents). The item_ID field is a unique identifier for each item type, and can be used to retrieve more information about the item from HyperM’s stock control system. The quantity field refers to the quantity of the item to be sold. The assistant_ID field is a unique identifier for the PoS operator, and can be used to retrieve more information about the operator from the HyperM’s personnel system. Since HyperM has many branches the branch_ID identifies the shop. The notes field contains arbitrary notes about the sale.

Sales Decision Event

After a SALE_INPUT event is emitted by the PoS system HyperM’s policy-based controlled sales checking system emits a Sale Authorization Event indicating whether the sale is authorized or denied. The PoS system can then listen for this event before continuing with the sale.

Table 2. Sale Authorisation Event

Event:       SALE_AUTH
Fields:      sale_ID, time, authorised, amount, item_ID, quantity, assistant_ID, branch_ID, notes, message, …​
Description: Event indicating a sale of an item is authorized or denied

In each SALE_AUTH event the sale_ID field is copied from the SALE_INPUT event that triggered the decision request. The SALE_AUTH event is also timestamped using the time field, and a field called authorised is set to true or false depending on whether the sale is authorized or denied. The message field carries an optional message about why a sale was not authorized. The other fields from the SALE_INPUT event are also included for completeness.

Stock Control: Items

HyperM maintains information about each item for sale in a database table called ITEMS.

Table 3. Items Database

Table:       ITEMS
Fields:      item_ID, description, cost_price, barcode, supplier_ID, category, …​
Description: Database table describing each item for sale

The database table ITEMS has a row for each item that HyperM sells. Each item is identified by an item_ID value. The description field stores a description of the item. The cost price of the item is given in cost_price. The barcode of the item is encoded in barcode, while the item supplier is identified by supplier_ID. Items may also be classified into categories using the category field. Useful categories might include: soft drinks, alcoholic drinks, cigarettes, knives, confectionery, bakery, fruit&vegetables, meat, etc..

Personnel System: Assistants

Table 4. Assistants Database

Table:       ASSISTANTS
Fields:      assistant_ID, surname, firstname, middlename, age, grade, phone_number, …​
Description: Database table describing each HyperM sales assistant

The database table ASSISTANTS has a row for each sales assistant employed by HyperM. Each assistant is identified by an assistant_ID value, with their name given in the firstname, middlename and surname fields. The assistant’s age in years is given in age, while their phone number is contained in the phone_number field. The assistant’s grade is encoded in grade. Useful values for grade might include: trainee, operator, supervisor, etc..

Locations: Branches

Table 5. Branches Database

Table:       BRANCHES
Fields:      branch_ID, branch_Name, category, street, city, country, postcode, …​
Description: Database table describing each HyperM branch

HyperM operates a number of branches. Each branch is described in the BRANCHES database table. Each branch is identified by a branch_ID, with a branch name given in branch_Name. The address for the branch is encoded in street, city, country and postcode. The branch category is given in the category field. Useful values for category might include: Small, Large, Super, Hyper, etc..

Policy Step 1

Scenario

For the first version of our policy, let’s start with something simple. Let us assume that there exists some restriction that alcohol products cannot be sold before 11:30am. In this section we will go through the necessary steps to define a policy that can enforce this for HyperM.

  • Alcohol cannot be sold before 11:30am…

New Policy Model

Create a new empty Policy Model MyFirstPolicyModel

Since an organisation like HyperM may have many policies covering many different domains, policies should be grouped into policy sets. In order to edit or deploy a policy, or policy set, the definition of the policy(ies) and all required events, tasks, states, etc., are grouped together into a ‘Policy Model’. An organization might define many Policy Models, each containing a different set of policies.

So the first step is to create a new empty Policy Model called MyFirstPolicyModel. Using the APEX Policy Editor, click on the ‘File’ menu and select ‘New’. Then define our new policy model called MyFirstPolicyModel. Use the ‘Generate UUID’ button to create a new unique ID for the policy model, and fill in a description for the policy model. Press the Submit button to save your changes.

File > New to create a new Policy Model

Create a new Policy Model

Events

Create the input event SALE_INPUT and the output event SALE_AUTH

Using the APEX Policy Editor, click on the ‘Events’ tab. In the ‘Events’ pane, right click and select ‘New’:

Right click to create a new event

Create a new event type called SALE_INPUT. Use the ‘Generate UUID’ button to create a new unique ID for the event type, and fill in a description for the event. Add a namespace, e.g. com.hyperm. We can add hard-coded strings for the Source and Target, e.g. POS and APEX. At this stage we will not add any parameter fields, we will leave this until later. Use the Submit button to create the event.

Fill in the necessary information for the 'SALE_INPUT' event and click 'Submit'

Repeat the same steps for a new event type called SALE_AUTH. Just use APEX as source and POS as target, since this is the output event coming from APEX going to the sales point.

Before we can add parameter fields to an event we must first define APEX Context Item Schemas that can be used by those fields.

To create new item schemas, click on the ‘Context Item Schemas’ tab. In that ‘Context Item Schemas’ pane, right click and select ‘Create new ContextSchema’.

Right click to create a new Item Schema

Create item schemas with the following characteristics, each with its own unique UUID:

Table 1. Item Schemas

Name                Schema Flavour   Schema Definition   Description
timestamp_type      Java             java.lang.Long      A type for time values
sale_ID_type        Java             java.lang.Long      A type for sale_ID values
price_type          Java             java.lang.Long      A type for amount/price values
item_ID_type        Java             java.lang.Long      A type for item_ID values
assistant_ID_type   Java             java.lang.Long      A type for assistant_ID values
quantity_type       Java             java.lang.Integer   A type for quantity values
branch_ID_type      Java             java.lang.Long      A type for branch_ID values
notes_type          Java             java.lang.String    A type for notes values
authorised_type     Java             java.lang.Boolean   A type for authorised values
message_type        Java             java.lang.String    A type for message values

Create a new Item Schema

The item schemas can now be seen on the ‘Context Item Schemas’ tab, and can be updated at any time by right-clicking on the item schemas on the ‘Context Item Schemas’ tab. Now we can go back to the event definitions for SALE_INPUT and SALE_AUTH and add some parameter fields.

Tip

APEX natively supports schema definitions in Java and Avro. Java schema definitions are simply the name of a Java Class. There are some restrictions:

  • the class must be instantiatable, i.e. not a Java interface or abstract class

  • primitive types are not supported, i.e. use java.lang.Integer instead of int, etc.

  • it must be possible to find the class, i.e. the class must be contained in the Java classpath.

Avro schema definitions can be any valid Avro schema. For events using fields defined with Avro schemas, any incoming event containing that field must contain a value that conforms to the Avro schema.

Click on the ‘Events’ tab, then right click the SALE_INPUT row and select ‘Edit Event SALE_INPUT’. To add a new event parameter use the 'Add Event Parameter' button at the bottom of the screen. For the SALE_INPUT event add the following event parameters:

Table 2. Event Parameter Fields for the SALE_INPUT Event

Parameter Name   Parameter Type      Optional
time             timestamp_type      no
sale_ID          sale_ID_type        no
amount           price_type          no
item_ID          item_ID_type        no
quantity         quantity_type       no
assistant_ID     assistant_ID_type   no
branch_ID        branch_ID_type      no
notes            notes_type          yes

Remember to click the ‘Submit’ button at the bottom of the event definition pane.

Tip

Parameter fields can be optional in events. If a parameter is not marked as optional then by default it is mandatory, so it must appear in any input event passed to APEX. If an optional field is not set for an output event then its value will be set to null.

Add new event parameters to an event

Select the SALE_AUTH event and add the following event parameters:

Table 3. Event Parameter Fields for the SALE_AUTH Event

Parameter Name   Parameter Type      Optional
sale_ID          sale_ID_type        no
time             timestamp_type      no
authorised       authorised_type     no
message          message_type        yes
amount           price_type          no
item_ID          item_ID_type        no
assistant_ID     assistant_ID_type   no
quantity         quantity_type       no
branch_ID        branch_ID_type      no
notes            notes_type          yes

Remember to click the ‘Submit’ button at the bottom of the event definition pane.

The events for our policy are now defined.

New Policy

Create a new Policy and add the “No Booze before 11:30” check

APEX policies are defined using a state-machine model. Each policy comprises one or more states that can be individually executed. Where there is more than one state the states are chained together to form a Directed Acyclic Graph (DAG) of states. A state is triggered by passing it a single input (or ‘trigger’) event and once executed each state then emits an output event. For each state the logic for the state is embedded in one or more tasks. Each task contains specific task logic that is executed by the APEX execution environment each time the task is invoked. Where there is more than one task in a state then the state also defines some task selection logic to select an appropriate task each time the state is executed.

Therefore, to create a new policy we must first define one or more tasks.

To create a new Task click on the ‘Tasks’ tab. In the ‘Tasks’ pane, right click and select ‘Create new Task’. Create a new Task called MorningBoozeCheck. Use the ‘Generate UUID’ button to create a new unique ID for the task, and fill in a description for the task.

Right click to create a new task

Tasks are configured with a set of input fields and a set of output fields. To add new input/output fields for a task use the ‘Add Task Input Field’ and ‘Add Task Output Field’ buttons. The list of input and output fields to add for the MorningBoozeCheck task is given below. The input fields are drawn from the parameters in the state’s input event, and the task’s output fields are used to populate the state’s output event. The task’s input and output fields must be a subset of the event parameters defined for the input and output events for any state that uses that task. (You may have noticed that the input and output fields for the MorningBoozeCheck task have the exact same names and reuse the item schemas that we used for the parameters in the SALE_INPUT and SALE_AUTH events respectively).

Table 1. Input fields for MorningBoozeCheck task

Parameter Name   Parameter Type
time             timestamp_type
sale_ID          sale_ID_type
amount           price_type
item_ID          item_ID_type
quantity         quantity_type
assistant_ID     assistant_ID_type
branch_ID        branch_ID_type
notes            notes_type

Table 2. Output fields for MorningBoozeCheck task

Parameter Name   Parameter Type
sale_ID          sale_ID_type
time             timestamp_type
authorised       authorised_type
message          message_type
amount           price_type
item_ID          item_ID_type
assistant_ID     assistant_ID_type
quantity         quantity_type
branch_ID        branch_ID_type
notes            notes_type

Add input and output fields for the task

Each task must include some ‘Task Logic’ that implements the behaviour for the task. Task logic can be defined in a number of different ways using a choice of languages. For this task we will author the logic using the Java-like scripting language called MVEL (https://en.wikipedia.org/wiki/MVEL).

For simplicity, use the code given in the task logic file (Task Logic: MorningBoozeCheck.mvel). Paste the script text into the ‘Task Logic’ box, and use “MVEL” as the ‘Task Logic Type / Flavour’.

This logic assumes that all items with item_ID between 1000 and 2000 contain alcohol, which is not very realistic, but we will see a better approach for this later. It also uses the standard Java time utilities to check if the current time is between 00:00:00 GMT and 11:30:00 GMT. For a detailed guide on how to write your own logic in JavaScript (https://en.wikipedia.org/wiki/JavaScript), MVEL (https://en.wikipedia.org/wiki/MVEL) or one of the other supported languages, please refer to the APEX Programmers Guide.

Add task logic to the task

An alternative version of the same logic is available in JavaScript (Task Logic: MorningBoozeCheck.js). Just use “JAVASCRIPT” as the ‘Task Logic Type / Flavour’ instead.
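The example files themselves are not reproduced in this document. Purely as an illustration of the shape of such logic, here is a minimal JavaScript sketch of the check, assuming the Rhino dialect and the input/output fields defined above; this is not the exact code shipped with the example:

JavaScript sketch of the MorningBoozeCheck logic

executor.logger.info("Task Execution: '" + executor.subject.id + "'");

var timestamp = executor.inFields.get("time");
var itemId    = executor.inFields.get("item_ID");

// Use the standard Java time utilities to get the time of day in GMT
var gmtCal = java.util.Calendar.getInstance(java.util.TimeZone.getTimeZone("GMT"));
gmtCal.setTimeInMillis(timestamp);
var minutesOfDay = gmtCal.get(java.util.Calendar.HOUR_OF_DAY) * 60
    + gmtCal.get(java.util.Calendar.MINUTE);

// Simplistic rule from the scenario: items with 1000 <= item_ID < 2000 are alcohol
var isAlcohol        = itemId >= 1000 && itemId < 2000;
var beforeHalfPast11 = minutesOfDay < (11 * 60 + 30);

// Copy the input fields into the output, then set the decision fields
executor.outFields.put("sale_ID",      executor.inFields.get("sale_ID"));
executor.outFields.put("time",         timestamp);
executor.outFields.put("amount",       executor.inFields.get("amount"));
executor.outFields.put("item_ID",      itemId);
executor.outFields.put("quantity",     executor.inFields.get("quantity"));
executor.outFields.put("assistant_ID", executor.inFields.get("assistant_ID"));
executor.outFields.put("branch_ID",    executor.inFields.get("branch_ID"));
executor.outFields.put("notes",        executor.inFields.get("notes"));

if (isAlcohol && beforeHalfPast11) {
  executor.outFields.put("authorised", false);
  executor.outFields.put("message",
      "Sale not authorised by policy task " + executor.subject.getTaskName()
      + ": alcohol cannot be sold between 00:00:00 GMT and 11:30:00 GMT");
} else {
  executor.outFields.put("authorised", true);
  executor.outFields.put("message",
      "Sale authorised by policy task " + executor.subject.getTaskName());
}

true;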

The task definition is now complete so click the ‘Submit’ button to save the task. The task can now be seen on the ‘Tasks’ tab, and can be updated at any time by right-clicking on the task on the ‘Tasks’ tab. Now that we have created our task, we can create a policy that uses that task.

To create a new Policy click on the ‘Policies’ tab. In the ‘Policies’ pane, right click and select ‘Create new Policy’:

Create a new Policy called MyFirstPolicy. Use the ‘Generate UUID’ button to create a new unique ID for the policy, and fill in a description for the policy. Use ‘FREEFORM’ as the ‘Policy Flavour’.

Each policy must have at least one state. Since this is a ‘freeform’ policy we can add as many states as we wish. Let’s start with one state. Add a new state called BoozeAuthDecide to this MyFirstPolicy policy using the ‘Add new State’ button after filling in the name of our new state.

Create a new policy

Each state must use one input event type. For this new state select the SALE_INPUT event as the input event.

Each policy must define a ‘First State’ and a ‘Policy Trigger Event’. The ‘Policy Trigger Event’ is the input event for the policy as a whole. This event is then passed to the first state in the chain of states in the policy, therefore the ‘Policy Trigger Event’ will be the input event for the first state. Each policy can only have one ‘First State’. For our MyFirstPolicy policy, select BoozeAuthDecide as the ‘First State’. This will automatically select SALE_INPUT as the ‘Policy Trigger Event’ for our policy.

Create a state

In this case we will create a reference to the pre-existing MorningBoozeCheck task that we defined above using the ‘Add New Task’ button. Select the MorningBoozeCheck task, and use the name of the task as the ‘Local Name’ for the task.

In the case where a state references more than one task, a ‘Default Task’ must be selected for the state and some logic (‘Task Selection Logic’) must be specified to select the appropriate task at execution time. Since our new state BoozeAuthDecide only has one task, the default task is automatically selected and no ‘Task Selection Logic’ is required.

Note

In a ‘Policy’ ‘State’ a ‘State Output Mapping’ has 3 roles: 1) Select which ‘State’ should be executed next, 2) Select the type of the state’s ‘Outgoing Event’, and 3) Populate the state’s ‘Outgoing Event’. This is how states are chained together to form a Directed Acyclic Graph (DAG) of states. The final state(s) of a policy are those that do not select any ‘next’ state. Since a ‘State’ can only accept a single type of event, the type of the event emitted by a previous ‘State’ must match the incoming event type of the next ‘State’. This is also how the last state(s) in a policy can emit events of different types. The ‘State Output Mapping’ is also responsible for taking the fields that are output by the task executed in the state and populating the state’s output event before it is emitted.

Each ‘Task’ referenced in a ‘State’ must have a defined ‘Output Mapping’ to take the output of the task, select an ‘Outgoing Event’ type for the state, populate the state’s outgoing event, and then select the next state to be executed (if any).

There are 2 basic types of output mappings:

  1. Direct Output Mappings have a single value for ‘Next State’ and a single value for ‘State Output Event’. The outgoing event for the state is automatically created, any outgoing event parameters that were present in the incoming event are copied into the outgoing event, then any task output fields that have the same name and type as parameters in the outgoing event are automatically copied into the outgoing event.

  2. Logic-Based State Output Mappings / Finalizers have some logic defined that dynamically selects and creates the ‘State Outgoing Event’, manages the population of the outgoing event parameters (perhaps changing or adding to the outputs from the task), and then dynamically selects the next state to be executed (if any).

Each task reference must also have an associated ‘State Output Mapping’, so we need a ‘State Output Mapping’ for the BoozeAuthDecide state to use when the MorningBoozeCheck task is executed. The simplest type of output mapping is a ‘Direct Output Mapping’.

Create a new ‘Direct Output Mapping’ for the state called MorningBoozeCheck_Output_Direct using the ‘Add New Direct State Output Mapping’ button. Select SALE_AUTH as the output event and select None for the next state value. We can then select this output mapping for use when the MorningBoozeCheck task is executed. Since there is only one state, and only one task for that state, this output mapping ensures that the BoozeAuthDecide state is the only state executed and the state (and the policy) can only emit events of type SALE_AUTH. (You may remember that the output fields for the MorningBoozeCheck task have the exact same names and reuse the item schemas that we used for the parameters in the SALE_AUTH event. The MorningBoozeCheck_Output_Direct direct output mapping can now automatically copy the values from the MorningBoozeCheck task directly into outgoing SALE_AUTH events.)

Add a Task and Output Mapping

Click the ‘Submit’ button to complete the definition of our MyFirstPolicy policy. The policy MyFirstPolicy can now be seen in the list of policies on the ‘Policies’ tab, and can be updated at any time by right-clicking on the policy on the ‘Policies’ tab.

The MyFirstPolicyModel, including our MyFirstPolicy policy can now be checked for errors. Click on the ‘Model’ menu and select ‘Validate’. The model should validate without any ‘Warning’ or ‘Error’ messages. If you see any ‘Error’ or ‘Warning’ messages, carefully read the message as a hint to find where you might have made a mistake when defining some aspect of your policy model.

Validate the policy model for error using the 'Model' > 'Validate' menu item

Congratulations, you have now completed your first APEX policy. The policy model containing our new policy can now be exported from the editor and saved. Click on the ‘File’ menu and select ‘Download’ to save the policy model in JSON format. The exported policy model is then available in the directory you selected, for instance $APEX_HOME/examples/models/MyFirstPolicy/1/MyFirstPolicyModel_0.0.1.json. The exported policy can now be loaded into the APEX Policy Engine, or can be re-loaded and edited by the APEX Policy Editor.

Download the completed policy model using the 'File' > 'Download' menu item

Test The Policy

Test Policy Step 1

To start a new APEX Engine you can use the following configuration. In a full APEX installation you can find this configuration in $APEX_HOME/examples/config/MyFirstPolicy/1/MyFirstPolicyConfigStdin2StdoutJsonEvent.json. This configuration expects incoming events to be in JSON format and to be passed into the APEX Engine from stdin, and result events will be printed in JSON format to stdout. This configuration loads the policy model stored in the file ‘MyFirstPolicyModel_0.0.1.json’ as exported from the APEX Editor. Note, you may need to edit this file to provide the full path to wherever you stored the exported policy model file.
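For example, assuming a full APEX installation, the engine can then be started on UNIX with the standard apexEngine.sh start script, passing this configuration file (adjust the path to where you saved it):

# $APEX_HOME/bin/apexEngine.sh -c $APEX_HOME/examples/config/MyFirstPolicy/1/MyFirstPolicyConfigStdin2StdoutJsonEvent.json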

To test the policy, try pasting the following events into the console as the APEX engine executes:

Title

Input Event (JSON)

Output Event (JSON)

comment

{
  "nameSpace": "com.hyperm",
  "name": "SALE_INPUT",
  "version": "0.0.1",
  "time": 1483351989000,
  "sale_ID": 99999991,
  "amount": 299,
  "item_ID": 5123,
  "quantity": 1,
  "assistant_ID": 23,
  "branch_ID": 1,
  "notes": "Special Offer!!"
}
{
  "name": "SALE_AUTH",
  "version": "0.0.1",
  "nameSpace": "com.hyperm",
  "source": "",
  "target": "",
  "amount": 299,
  "assistant_ID": 23,
  "authorised": true,
  "branch_ID": 1,
  "item_ID": 5123,
  "message": "Sale authorised by policy task MorningBoozeCheck for time 10:13:09 GMT",
  "notes": "Special Offer!!",
  "quantity": 1,
  "sale_ID": 99999991,
  "time": 1483351989000
}

Request to buy a non-alcoholic item (item_ID=5123) at 10:13:09 GMT on Monday, 02 January 2017. Sale is authorized.

{
  "nameSpace": "com.hyperm",
  "name": "SALE_INPUT",
  "version": "0.0.1",
  "time": 1483346466000,
  "sale_ID": 99999992,
  "amount": 1249,
  "item_ID": 1012,
  "quantity": 1,
  "assistant_ID": 12,
  "branch_ID": 2
}
{
  "nameSpace": "com.hyperm",
  "name": "SALE_AUTH",
  "version": "0.0.1",
  "source": "",
  "target": "",
  "amount": 1249,
  "assistant_ID": 12,
  "authorised": false,
  "branch_ID": 2,
  "item_ID": 1012,
  "message": "Sale not authorised by policy task MorningBoozeCheck for time 08:41:06 GMT. Alcohol can not be sold between 00:00:00 GMT and 11:30:00 GMT",
  "notes": null,
  "quantity": 1,
  "sale_ID": 99999992,
  "time": 1483346466000
}

Request to buy alcohol item (item_ID=1012) at 08:41:06 on Monday, 02 January 2017. Sale is not authorized.

{
  "nameSpace": "com.hyperm",
  "name": "SALE_INPUT",
  "version": "0.0.1",
  "time": 1482265033000,
  "sale_ID": 99999993,
  "amount": 4799,
  "item_ID": 1943,
  "quantity": 2,
  "assistant_ID": 9,
  "branch_ID": 3
}
{
  "name": "SALE_AUTH",
  "version": "0.0.1",
  "nameSpace": "com.hyperm",
  "source": "",
  "target": "",
  "amount": 4799,
  "assistant_ID": 9,
  "authorised": true,
  "branch_ID": 3,
  "item_ID": 1943,
  "message": "Sale authorised by policy task MorningBoozeCheck for time 20:17:13 GMT",
  "notes": null,
  "quantity": 2,
  "sale_ID": 99999993,
  "time": 1482265033000
}

Request to buy alcohol (item_ID=1943) at 20:17:13 on Tuesday, 20 December 2016. Sale is authorized.

CLI Editor File

Policy 1 in CLI Editor

An equivalent version of the MyFirstPolicyModel policy model can again be generated using the APEX CLI editor. A sample APEX CLI script is shown below:
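The full CLI script is shipped with the APEX examples and is not reproduced here. Purely as an abbreviated, illustrative sketch of the CLI editor command format (one command per model element; the schema, parameter and field commands shown are representative, not complete):

model create name=MyFirstPolicyModel version=0.0.1 description="This is my first Apex Policy Model"

schema create name=timestamp_type version=0.0.1 flavour=Java schema=java.lang.Long
# ... one "schema create" command per item schema ...

event create name=SALE_INPUT version=0.0.1 nameSpace=com.hyperm source=POS target=APEX
event parameter create name=SALE_INPUT parName=time schemaName=timestamp_type
# ... one "event parameter create" per event parameter, and likewise for SALE_AUTH ...

task create name=MorningBoozeCheck version=0.0.1
task inputfield create name=MorningBoozeCheck fieldName=time schemaName=timestamp_type
# ... remaining task input and output fields ...
task logic create name=MorningBoozeCheck logicFlavour=MVEL logic=LS
# ... the MVEL task logic goes here ...
LE

policy create name=MyFirstPolicy version=0.0.1 template=FREEFORM firstState=BoozeAuthDecide
policy state create name=MyFirstPolicy stateName=BoozeAuthDecide triggerName=SALE_INPUT defaultTaskName=MorningBoozeCheck
policy state output create name=MyFirstPolicy stateName=BoozeAuthDecide outputName=MorningBoozeCheck_Output_Direct eventName=SALE_AUTH
policy state taskref create name=MyFirstPolicy stateName=BoozeAuthDecide taskName=MorningBoozeCheck outputType=DIRECT outputName=MorningBoozeCheck_Output_Direct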

Policy Step 2

Scenario

HyperM have just opened a new branch in a different country, but that country has different rules about when alcohol can be sold! In this section we will go through the necessary steps to extend our policy to enforce this for HyperM.

  • In some branches alcohol cannot be sold before 1pm, and not at all on Sundays.

Although there are a number of ways to accomplish this the easiest approach for us is to define another task and then select which task is appropriate at runtime depending on the branch identifier in the incoming event.

Extend Policy Model

Extend the Policy with the new Scenario

To create a new Task click on the ‘Tasks’ tab. In the ‘Tasks’ pane, right click and select ‘Create new Task’:

Create a new Task called MorningBoozeCheckAlt1. Use the ‘Generate UUID’ button to create a new unique ID for the task, and fill in a description for the task. Select the same input and output fields that we used earlier when we defined the MorningBoozeCheck task.

Table 1. Input fields for MorningBoozeCheckAlt1 task

Parameter Name   Parameter Type
time             timestamp_type
sale_ID          sale_ID_type
amount           price_type
item_ID          item_ID_type
quantity         quantity_type
assistant_ID     assistant_ID_type
branch_ID        branch_ID_type
notes            notes_type

Table 2. Output fields for MorningBoozeCheckAlt1 task

Parameter Name   Parameter Type
sale_ID          sale_ID_type
time             timestamp_type
authorised       authorised_type
message          message_type
amount           price_type
item_ID          item_ID_type
assistant_ID     assistant_ID_type
quantity         quantity_type
branch_ID        branch_ID_type
notes            notes_type

This task also requires some ‘Task Logic’ to implement the new behaviour for this task.

For simplicity, use the code given in the task logic file (MorningBoozeCheckAlt1 task logic, MVEL). It again assumes that all items with item_ID between 1000 and 2000 contain alcohol. We again use the standard Java time utilities to check if the current time is between 00:00:00 CET and 13:00:00 CET, or if it is Sunday.

For this task we will again author the logic using the MVEL (https://en.wikipedia.org/wiki/MVEL) scripting language. For a detailed guide on how to write your own logic in JavaScript (https://en.wikipedia.org/wiki/JavaScript), MVEL or one of the other supported languages, please refer to the APEX Programmers Guide.
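The sample MVEL file itself is not reproduced here. Purely as an illustrative sketch (in JavaScript rather than MVEL, assuming the same fields as before; the copying of the remaining output fields is as in the earlier MorningBoozeCheck sketch):

JavaScript sketch of the MorningBoozeCheckAlt1 logic

var timestamp = executor.inFields.get("time");
var itemId    = executor.inFields.get("item_ID");

// Time of day and day of week in the CET time zone
var cetCal = java.util.Calendar.getInstance(java.util.TimeZone.getTimeZone("CET"));
cetCal.setTimeInMillis(timestamp);
var minutesOfDay = cetCal.get(java.util.Calendar.HOUR_OF_DAY) * 60
    + cetCal.get(java.util.Calendar.MINUTE);
var isSunday = cetCal.get(java.util.Calendar.DAY_OF_WEEK) == java.util.Calendar.SUNDAY;

var isAlcohol = itemId >= 1000 && itemId < 2000;   // same simplistic rule as before
var before1pm = minutesOfDay < 13 * 60;

if (isAlcohol && (before1pm || isSunday)) {
  executor.outFields.put("authorised", false);
  executor.outFields.put("message",
      "Sale not authorised by policy task " + executor.subject.getTaskName()
      + ": alcohol cannot be sold before 13:00 CET or on Sunday");
} else {
  executor.outFields.put("authorised", true);
  executor.outFields.put("message",
      "Sale authorised by policy task " + executor.subject.getTaskName());
}

true;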

Create a new alternative task MorningBoozeCheckAlt1

The task definition is now complete so click the ‘Submit’ button to save the task. Now that we have created our task, we can add this task to the single pre-existing state (BoozeAuthDecide) in our policy.

To edit the BoozeAuthDecide state in our policy click on the ‘Policies’ tab. In the ‘Policies’ pane, right click on our MyFirstPolicy policy and select ‘Edit’. Navigate to the BoozeAuthDecide state in the ‘states’ section at the bottom of the policy definition pane.

Right click to edit a policy

To add our new task MorningBoozeCheckAlt1, scroll down to the BoozeAuthDecide state in the ‘States’ section. In the ‘State Tasks’ section for BoozeAuthDecide use the ‘Add new task’ button. Select our new MorningBoozeCheckAlt1 task, and use the name of the task as the ‘Local Name’ for the task. The MorningBoozeCheckAlt1 task can reuse the same MorningBoozeCheck_Output_Direct ‘Direct State Output Mapping’ that we used for the MorningBoozeCheck task. (Recall that the role of the ‘State Output Mapping’ is to select the output event for the state, and select the next state to be executed. These both remain the same as before.)

Since our state has more than one task, we must define some logic to determine which task should be used each time the state is executed. This task selection logic is defined in the state definition. For our BoozeAuthDecide state we want the choice of which task to use to be based on the branch_ID from which the SALE_INPUT event originated. For simplicity’s sake, let us assume that branches with branch_ID between 0 and 999 should use the MorningBoozeCheck task, and branches with branch_ID between 1000 and 1999 should use the MorningBoozeCheckAlt1 task.

This time, for variety, we will author the task selection logic using the JavaScript (https://en.wikipedia.org/wiki/JavaScript) scripting language. The sample task selection logic (BoozeAuthDecide task selection logic, JavaScript) is the script shown earlier in the Task Selection Logic section. Paste the script text into the ‘Task Selection Logic’ box, and use “JAVASCRIPT” as the ‘Task Selection Logic Type / Flavour’. It is necessary to mark one of the tasks as the ‘Default Task’ so that the task selection logic always has a fallback default option in cases where a particular task cannot be selected. In this case the MorningBoozeCheck task can be the default task.

State definition with 2 Tasks and Task Selection Logic

When complete, don’t forget to click the ‘Submit’ button at the bottom of the ‘Policies’ pane for our MyFirstPolicy policy after updating the BoozeAuthDecide state.

Congratulations, you have now completed the second step towards your first APEX policy. The policy model containing our new policy can again be validated and exported from the editor and saved as shown in Step 1.

The exported policy model is then available in the directory you selected, as MyFirstPolicyModel_0.0.1.json. The exported policy can now be loaded into the APEX Policy Engine, or can be re-loaded and edited by the APEX Policy Editor.

Test The Policy

Test Policy Step 2

To start a new APEX Engine you can use the following configuration. In a full APEX installation you can find this configuration in $APEX_HOME/examples/config/MyFirstPolicy/2/MyFirstPolicyConfigStdin2StdoutJsonEvent.json. Note, this has changed from the configuration file in Step 1 to enable the JAVASCRIPT executor for our new ‘Task Selection Logic’.

To test the policy, try pasting the following events into the console as the APEX engine executes. Note that all tests from Step 1 will still work, since none of those events originate from a branch with branch_ID between 1000 and 2000. The ‘Task Selection Logic’ will therefore pick the MorningBoozeCheck task as expected, and will therefore give the same results.

Table 1. Inputs and Outputs when testing My First Policy

Input Event (JSON)

Output Event (JSON)

comment

{
  "nameSpace": "com.hyperm",
  "name": "SALE_INPUT",
  "version": "0.0.1",
  "time": 1483346466000,
  "sale_ID": 99999992,
  "amount": 1249,
  "item_ID": 1012,
  "quantity": 1,
  "assistant_ID": 12,
  "branch_ID": 2
}
{
  "nameSpace": "com.hyperm",
  "name": "SALE_AUTH",
  "version": "0.0.1",
  "source": "",
  "target": "",
  "amount": 1249,
  "assistant_ID": 12,
  "authorised": false,
  "branch_ID": 2,
  "item_ID": 1012,
  "message": "Sale not authorised by policy task MorningBoozeCheck for time 08:41:06 GMT. Alcohol can not be sold between 00:00:00 GMT and 11:30:00 GMT",
  "notes": null,
  "quantity": 1,
  "sale_ID": 99999992,
  "time": 1483346466000
}

Request to buy alcohol item (item_ID=1012) at 08:41:06 GMT on Monday, 02 January 2017. Sale is not authorized. Uses the MorningBoozeCheck task.

{
  "nameSpace": "com.hyperm",
  "name": "SALE_INPUT",
  "version": "0.0.1",
  "time": 1482398073000,
  "sale_ID": 99999981,
  "amount": 299,
  "item_ID": 1047,
  "quantity": 1,
  "assistant_ID": 1212,
  "branch_ID": 1002
}
{
  "nameSpace" : "com.hyperm",
  "name" : "SALE_AUTH",
  "version" : "0.0.1",
  "source" : "",
  "target" : "",
  "sale_ID" : 99999981,
  "amount" : 299,
  "assistant_ID" : 1212,
  "notes" : null,
  "quantity" : 1,
  "branch_ID" : 1002,
  "item_ID" : 1047,
  "authorised" : false,
  "time" : 1482398073000,
  "message" : "Sale not authorised by policy task MorningBoozeCheckAlt1 for time 10:14:33 CET. Alcohol can not be sold between 00:00:00 CET and 13:00:00 CET or on Sunday"
}

Request to buy alcohol (item_ID=1047) at 10:14:33 on Thursday, 22 December 2016. Sale is not authorized. Uses the MorningBoozeCheckAlt1 task.

{
  "nameSpace": "com.hyperm",
  "name": "SALE_INPUT",
  "version": "0.0.1",
  "time": 1482077977000,
  "sale_ID": 99999982,
  "amount": 2199,
  "item_ID": 1443,
  "quantity": 12,
  "assistant_ID": 94,
  "branch_ID": 1003,
  "notes": "Buy 3, get 1 free!!"
}
{
  "nameSpace" : "com.hyperm",
  "name" : "SALE_AUTH",
  "version" : "0.0.1",
  "source" : "",
  "target" : "",
  "sale_ID" : 99999982,
  "amount" : 2199,
  "assistant_ID" : 94,
  "notes" : "Buy 3, get 1 free!!",
  "quantity" : 12,
  "branch_ID" : 1003,
  "item_ID" : 1443,
  "authorised" : false,
  "time" : 1482077977000,
  "message" : "Sale not authorised by policy task MorningBoozeCheckAlt1 for time 17:19:37 CET. Alcohol can not be sold between 00:00:00 CET and 13:00:00 CET or on Sunday"
}

Request to buy alcohol (item_ID=1443) at 17:19:37 on Sunday, 18 December 2016. Sale is not authorized. Uses the MorningBoozeCheckAlt1 task.

{
  "nameSpace": "com.hyperm",
  "name": "SALE_INPUT",
  "version": "0.0.1",
  "time": 1483351989000,
  "sale_ID": 99999983,
  "amount": 699,
  "item_ID": 5321,
  "quantity": 1,
  "assistant_ID": 2323,
  "branch_ID": 1001,
  "notes": ""
}
{
  "nameSpace" : "com.hyperm",
  "name" : "SALE_AUTH",
  "version" : "0.0.1",
  "source" : "",
  "target" : "",
  "sale_ID" : 99999983,
  "amount" : 699,
  "assistant_ID" : 2323,
  "notes" : "",
  "quantity" : 1,
  "branch_ID" : 1001,
  "item_ID" : 5321,
  "authorised" : true,
  "time" : 1483351989000,
  "message" : "Sale authorised by policy task MorningBoozeCheckAlt1 for time 11:13:09 CET"
}

Request to buy non-alcoholic item (item_ID=5321) at 11:13:09 on Monday, 2 January 2017. Sale is authorized. Uses the MorningBoozeCheckAlt1 task.

CLI Editor File

Policy 2 in CLI Editor

An equivalent version of the MyFirstPolicyModel policy model can again be generated using the APEX CLI editor. A sample APEX CLI script is shown below:

Policy-controlled Video Streaming (pcvs) with APEX

Introduction

This module contains several demos for Policy-controlled Video Streaming (PCVS). Each demo defines a policy using Avro schemas and JavaScript (or other scripting languages) for the policy logic. To run the demos, a vanilla Ubuntu server with some extra software packages is required:

  • Mininet as network simulator

  • Floodlight as SDN controller

  • Kafka as messaging system

  • Zookeeper for Kafka configuration

  • APEX for policy control

Install Ubuntu Server and SW

Install Demo

Requirements:

  • Ubuntu server: 1.4 GB

  • Ubuntu with Xubuntu Desktop, git, Firefox: 2.3 GB

  • Ubuntu with all, system updated: 3 GB

  • With ZK, Kafka, VLC, Mininet, Floodlight, Python: 4.4 GB

  • APEX Build (M2 and built): M2 ~ 2 GB, APEX ~3.5 GB

  • APEX install (not build locally): ~ 300 MB

On an Ubuntu OS (install a stable or LTS server first):

# pre for Ubuntu, tools and X
sudo apt-get  -y install --no-install-recommends software-properties-common
sudo apt-get  -y install --no-install-recommends build-essential
sudo apt-get  -y install --no-install-recommends git
sudo aptitude -y install --no-install-recommends xubuntu-desktop
sudo apt-get  -y install --no-install-recommends firefox


# install Java
sudo add-apt-repository ppa:webupd8team/java
sudo apt-get update
sudo apt-get -y install --no-install-recommends oracle-java8-installer
java -version


# reboot system, run system update, then continue

# if VBox additions are needed, install and reboot
(cd /usr/local/share; sudo wget https://www.virtualbox.org/download/testcase/VBoxGuestAdditions_5.2.7-120528.iso)
sudo mount /usr/local/share/VBoxGuestAdditions_5.2.7-120528.iso /media/cdrom
(cd /media/cdrom; sudo ./VBoxLinuxAdditions.run)


# update apt-get DB
sudo apt-get update

# if APEX is built from source, install maven and rpm
sudo apt-get install maven rpm

# install ZooKeeper
sudo apt-get install zookeeperd

# install Kafka
(cd /tmp;wget http://ftp.heanet.ie/mirrors/www.apache.org/dist/kafka/1.0.0/kafka_2.12-1.0.0.tgz --show-progress)
sudo mkdir /opt/Kafka
sudo tar -xvf /tmp/kafka_2.12-1.0.0.tgz -C /opt/Kafka/

# install mininet
cd /usr/local/src
sudo git clone https://github.com/mininet/mininet.git
(cd mininet;util/install.sh -a)

# install floodlight, requires ant
sudo apt-get install ant
cd /usr/local/src
sudo wget --no-check-certificate https://github.com/floodlight/floodlight/archive/master.zip
sudo unzip master.zip
cd floodlight-master
sudo ant
sudo mkdir /var/lib/floodlight
sudo chmod 777 /var/lib/floodlight

# install python pip
sudo apt-get install python-pip

# install kafka-python (need newer version from github)
cd /usr/local/src
sudo git clone https://github.com/dpkp/kafka-python
sudo pip install ./kafka-python

# install vlc
sudo apt-get install vlc

Install APEX either from source or from a distribution package. See the APEX documentation for details. We assume that APEX is installed in /opt/ericsson/apex/apex

Copy the LinkMonitor file to Kafka-Python

sudo cp /opt/ericsson/apex/apex/examples/scripts/pcvs/vpnsla/LinkMonitor.py /usr/local/src/kafka-python

Change the Logback configuration in APEX to enable logic logging

(cd /opt/ericsson/apex/apex/etc; sudo cp logback-logic.xml logback.xml)

Get the Demo Video

sudo mkdir /usr/local/src/videos

Standard definition video (recommended)

(cd /usr/local/src/videos; sudo curl -o big_buck_bunny_480p_surround.avi http://download.blender.org/peach/bigbuckbunny_movies/big_buck_bunny_480p_surround-fix.avi)

Full HD video

(cd /usr/local/src/videos; sudo curl -o bbb_sunflower_1080p_60fps_normal.mp4 http://distribution.bbb3d.renderfarming.net/video/mp4/bbb_sunflower_1080p_60fps_normal.mp4)
VPN SLA Demo

This demo uses a network with several central office and core switches, over which two VPNs run. Customer A has two locations, A1 and A2, with a VPN between them. Customer B has two locations, B1 and B2, with a VPN between them.

VPN SLA Architecture

The architecture above shows the scenario. The components are realized in this demo as follows:

  • CEP / Analytics - a simple Python script taking events from Kafka and sending them to APEX

  • APEX / Policy - the APEX engine running the VPN SLA policy

  • Controller - A vanilla Floodlight controller taking events from the Link Monitor and configuring Mininet

  • Network - A network created using Mininet

The demo requires starting several software components (detailed below). To show actual video streams, we use VLC. If you do not want to show video streams, but only the policy, skip the VLC section.

All shown scripts are available in a full APEX installation in $APEX_HOME/examples/scripts/pcvs/vpnsla.

Start all Software

Create environment variables in a file, say env.sh. Then, in each new Xterm:

  • Source these environment settings, e.g. . ./env.sh

  • Run the commands below as root (sudo per command or sudo -i for interactive mode as shown below)

#!/usr/bin/env bash

export src_dir=/usr/local/src
export APEX_HOME=/opt/ericsson/apex/apex
export APEX_USER=apexuser

In a new Xterm, start Floodlight

sudo -i
. ./env.sh
cd $src_dir/floodlight-master && java -jar target/floodlight.jar

In a new Xterm start Mininet

sudo -i
. ./env.sh
mn -c && python $APEX_HOME/examples/scripts/pcvs/vpnsla/MininetTopology.py

In a new Xterm, start Kafka

sudo -i
. ./env.sh
/opt/Kafka/kafka_2.12-1.0.0/bin/kafka-server-start.sh /opt/Kafka/kafka_2.12-1.0.0/config/server.properties

In a new Xterm, start APEX with the Kafka configuration for this demo

cd $APEX_HOME
./bin/apexApps.sh engine -c examples/config/pcvs/vpnsla/kafka2kafka.json

In a new Xterm, start the Link Monitor. The Link Monitor has a 30-second sleep to slow down the demonstration, so its first action occurs 30 seconds after start, with every subsequent action following at 30-second intervals.

sudo -i
. ./env.sh
cd $src_dir
xterm -hold -e 'python3 $src_dir/kafka-python/LinkMonitor.py' &

Now all software should be started and the demo is running. The Link Monitor will send link-up events, which are picked up by APEX and trigger the policy. Since there is no problem yet, the policy will do nothing.

Create 2 Video Streams with VLC

In the Mininet console, type xterm A1 A2 and xterm B1 B2 to open terminals on these nodes.

A2 and B2 are the receiving nodes. In these terminals, run vlc-wrapper. In each opened VLC window do

  • Click Media → Open Network Stream

  • Give the URL as rtp://@:5004

A1 and B1 are the sending nodes (sending the video stream). In these terminals, run vlc-wrapper. In each opened VLC window do

  • Click Media → Stream

  • Add the video (from /usr/local/src/videos)

  • Click Stream

  • Click Next

  • Change the destination to RTP / MPEG Transport Stream and click Add

  • Change the address to 10.0.0.2 in A1 and to 10.0.0.4 in B1

  • Turn off Active Transcoding (this is important to minimize CPU load)

  • Click Next

  • Click Stream

The video should be streaming across the network from A1 to A2 and from B1 to B2. If the video streams are slow or interrupted, the CPU load is too high. In that case, either try a better machine or use a different (lower-quality) video stream.

Take out L09 and let the Policy do its Magic

Now it is time to take out the link L09. This will be picked up by the Link Monitor, which sends a new event (L09 DOWN) to the policy. The policy will then calculate which customer should be impeded (throttled). This continues until SLAs are violated, at which point a priority calculation kicks in (Customer A is prioritized in this setup).

To initiate this, simply type link s5 s6 down in the Mininet console followed by exit.

If you have the video streams running, you will see one or the other struggling, depending on the policy decision.

Reset the Demo

If you want to reset the demo, simply stop the following processes (in this order):

  • Link Monitor

  • APEX

  • Mininet

  • Floodlight

Then restart them in this order

  • Floodlight

  • Mininet

  • APEX

  • Link Monitor

Monitor the Demo

Floodlight and APEX provide REST interfaces for monitoring.

  • Floodlight: see the Floodlight Docs for details on how to access the monitoring. In a standard installation as used here, pointing a browser to the URL http://localhost:8080/ui/pages/index.html on the same host should work

  • APEX: see the APEX documentation for the Monitoring Client or the Full Client for details on how to monitor APEX.

VPN SLA Policy

The VPN SLA policy is designed as a MEDA policy. The first state (M = Match) takes the trigger event (a link up or down) and checks if this is a change to the known topology. The second state (E = Establish) takes all available information (trigger event, local context) and defines what situation we have. The third state (D = Decide) takes the situation and selects which algorithm is best to process it. This state can select between none (nothing to do), solved (a problem is solved now), sla (compare the current customer SLA situation and select one to impede), and priority (impede non-priority customers). The fourth and final state (A = Act) selects the right action for the taken decision and creates the response event sent to the orchestrator.

We have added three more policies to set the local context: one for adding nodes, one for adding edges (links), and one for adding customers. These policies do not realize any action; they exist only to update the local context. This mechanism is the fastest way to update local context, and it is independent of any context plugin.

The policy uses data defined in Avro, so we have a number of Avro schema definitions.

Context Schemas

The context schemas are for the local context. We model edges and nodes for the topology, customers, and problems with all information on detected problems.

Trigger Schemas

The trigger event provides a status of UP or DOWN. To avoid testing for these strings in the logic, we defined an Avro schema for an enumeration (AVRO Schema Link Status), sketched below. This does not impact the trigger system (it can still send the strings), but it makes the task logic simpler.

Context Logic Nodes

The node context logic simply takes the trigger event (for context) and creates a new node in the local context topology (Logic Node Context); a sketch of the idea is given below.

Context Logic Edges

The edge context logic simply takes the trigger event (for context) and creates a new edge in the local context topology (Logic Edge Context).

Context Logic Customer

The customer context logic simply takes the trigger event (for context) and creates a new customer in the local context topology (Logic Customer Context).

Logic: Match

This is the logic for the match state. It is kept very simple. Besides taking the trigger event, it also creates a timestamp. This timestamp is later used for SLA and downtime calculations, as well as for some performance information about the policy. Sample Logic Policy Match State

Logic: Policy Establish State

This is the logic for the establish state. It is the most complicated logic, since establishing a situation for a decision is the most important part of any policy. First, the policy describes what it finds (the switch block), in terms of 8 normal situations and 1 extreme error case.

If required, it creates local context information for the problem (if it is new) or updates it (if the problem still exists). It also calculates customer SLA downtime and checks for any SLA violations. Finally, it creates a situation object. Sample Logic Policy Establish State

Logic: Policy Decide State

The decide state can select between different algorithms depending on the situation, so it needs a Task Selection Logic (TSL). This TSL selects a task in the current policy execution (i.e. potentially a different one per execution), as sketched below. Sample JS Logic Policy Decide State - TSL

The actual task logics are then none, solved, sla, and priority. Sample task logic is given below:

Logic: Policy Act State

This is the logic for the act state. It simply selects an action and creates the response event for the orchestrator (the output of the policy). Sample Logic Policy Act State

CLI Spec

Complete Policy Definition

The complete policy definition is realized using the APEX CLI Editor. The script below shows the actual policy specification. All logic and schemas are included (as macro file). Sample APEX VPN SLA Policy Specification

Context Events Nodes

The following events create all nodes of the topology.

Context Events Edges

The following events create all edges of the topology.

Context Events Customers

The following events create all customers of the topology.

Trigger Examples

The following events are examples of trigger events

Mininet Topology

The topology is realized using Mininet. This script is used to establish the topology and to realize network configurations. Sample Mininet Topology

APEX Examples Decision Maker

Sample APEX Policy in TOSCA format

An example APEX policy in TOSCA format for the vCPE use case can be found here:

My First Policy

A good starting point is the My First Policy example. It describes a sales problem, to which policy can be applied. The example details the policy background, shows how to use the REST Editor to create a policy, and provides details for running the policies. The documentation can be found:

VPN SLA

The domain Policy-controlled Video Streaming (PCVS) contains a policy for controlling video streams with different strategies. It also provides details for installing an actual testbed with off-the-shelf software (Mininet, Floodlight, Kafka, Zookeeper). The policy model here demonstrates virtually all APEX features: local context and policies controlling it, task selection logic and multiple tasks in a single state, AVRO schemas for context, AVRO schemas for events (trigger and local), and a CLI editor specification of the policy. The documentation can be found:

Decision Maker

The domain Decision Maker shows a very simple policy for decisions. Interesting here is that it creates a Docker image to run the policy and uses the APEX REST applications to update the policy on the fly. It also has local context to remember past decisions, and shows how to use that context to avoid making the same decision twice in a row. The documentation can be found:

Policy Distribution Component

Introduction to Policy Distribution

The main job of the policy distribution component is to receive incoming notifications, download artifacts, decode policies from the downloaded artifacts & forward the decoded policies to all configured policy forwarders.


The current implementation of the distribution component comes with a built-in SDC reception handler for receiving incoming distribution notifications from SDC using the SDC client library. Upon receiving a notification, the corresponding CSAR artifacts are downloaded using the SDC client library. The downloaded CSAR is then given to the configured policy decoder for decoding and generating policies. The generated policies are then forwarded to all configured policy forwarders. The related distribution status is sent to SDC at each step (download/deploy/done) during the entire flow.


The distribution component also comes with built-in REST endpoints for fetching the health check status & statistical data of the running distribution system.


The distribution component is designed using a plugin-based architecture. All the handlers, decoders & forwarders are basically plugins to the running distribution engine. The plugins are configured in the configuration JSON file provided at startup of the distribution engine. Adding a new plugin means simply implementing the related interfaces, adding them to the configuration JSON file & making the classes available on the classpath when starting the distribution engine (see the sketch below). There is no need to edit anything in the distribution core engine. Refer to the distribution user manual for more details about the system and the configuration.

Policy Distribution User Manual

Installation

Requirements

Distribution is 100% written in Java and runs on any platform that supports a JVM, e.g. Windows, Unix, Cygwin.

Installation Requirements
  • Downloaded distribution: JAVA runtime environment (JRE, Java 11, Distribution is tested with the OpenJDK)

  • Building from source: JAVA development kit (JDK, Java 11, Distribution is tested with the OpenJDK)

  • Sufficient rights to install Distribution on the system

  • Installation tools

    • TAR and GZ to extract the TAR.GZ distribution

      • on Windows use for instance 7Zip

    • Docker to run Distribution inside a Docker container

Build (Install from Source) Requirements

Installation from source requires a few development tools

  • GIT to retrieve the source code

  • Java SDK, Java version 8 or later

  • Apache Maven 3 (the Distribution build environment)

Get the Distribution Source Code

The Distribution source code is hosted in ONAP as project distribution. The current stable version is in the master branch. Simply clone the master branch from ONAP using HTTPS.

git clone https://gerrit.onap.org/r/policy/distribution
Build Distribution

The examples in this document assume that the distribution source repositories are cloned to:

  • Unix, Cygwin: /usr/local/src/distribution

  • Windows: C:\dev\distribution

  • Cygwin: /cygdrive/c/dev/distribution

Important

A Build requires ONAP Nexus. Distribution has a dependency on ONAP parent projects, so you might need to adjust your Maven M2 settings. The most current settings can be found in the ONAP oparent repo: Settings.

Important

A Build needs Space. Building distribution requires approximately 1-2 GB of hard disk space: 100 MB for the actual build with full distribution and around 1 GB for the downloaded dependencies.

Important

A Build requires Internet (for the first build). During the build, many Maven dependencies will be downloaded and stored in the configured local Maven repository. The first standard build (and any first specific build) requires Internet access to download those dependencies.

Use Maven for a standard build without any tests.

Unix, Cygwin

Windows

# cd /usr/local/src/distribution
# mvn clean install -DskipTests
>c:
>cd \dev\distribution
>mvn clean install -DskipTests

The build takes 2-3 minutes on a standard development laptop. It should run through without errors, but with a lot of messages from the build process.


When Maven is finished with the build, the final screen should look similar to this (omitting some success lines):

[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO]
[INFO] policy-distribution ................................ SUCCESS [  3.666 s]
[INFO] distribution-model ................................. SUCCESS [ 11.234 s]
[INFO] forwarding ......................................... SUCCESS [  7.611 s]
[INFO] reception .......................................... SUCCESS [  7.072 s]
[INFO] main ............................................... SUCCESS [ 21.017 s]
[INFO] plugins ............................................ SUCCESS [  0.453 s]
[INFO] forwarding-plugins ................................. SUCCESS [01:20 min]
[INFO] reception-plugins .................................. SUCCESS [ 18.545 s]
[INFO] Policy Distribution Packages ....................... SUCCESS [  0.419 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:39 min
[INFO] Finished at: 2018-11-15T13:59:09Z
[INFO] Final Memory: 73M/1207M
[INFO] ------------------------------------------------------------------------

The build will have created all artifacts required for a distribution installation. The following examples show how the target directory should look.

Unix, Cygwin

-rw-r--r-- 1 user 1049089    10616 Oct 31 13:35 checkstyle-checker.xml
-rw-r--r-- 1 user 1049089      609 Oct 31 13:35 checkstyle-header.txt
-rw-r--r-- 1 user 1049089      245 Oct 31 13:35 checkstyle-result.xml
-rw-r--r-- 1 user 1049089       89 Oct 31 13:35 checkstyle-cachefile
drwxr-xr-x 1 user 1049089        0 Oct 31 13:35 maven-archiver/
-rw-r--r-- 1 user 1049089     7171 Oct 31 13:35 policy-distribution-tarball-2.0.1-SNAPSHOT.jar
drwxr-xr-x 1 user 1049089        0 Oct 31 13:35 archive-tmp/
-rw-r--r-- 1 user 1049089 66296012 Oct 31 13:35 policy-distribution-tarball-2.0.1-SNAPSHOT-tarball.tar.gz
drwxr-xr-x 1 user 1049089        0 Nov 12 10:56 test-classes/
drwxr-xr-x 1 user 1049089        0 Nov 20 14:31 classes/

Windows

11/12/2018  10:56 AM    <DIR>          .
11/12/2018  10:56 AM    <DIR>          ..
10/31/2018  01:35 PM    <DIR>          archive-tmp
10/31/2018  01:35 PM                89 checkstyle-cachefile
10/31/2018  01:35 PM            10,616 checkstyle-checker.xml
10/31/2018  01:35 PM               609 checkstyle-header.txt
10/31/2018  01:35 PM               245 checkstyle-result.xml
11/20/2018  02:31 PM    <DIR>          classes
10/31/2018  01:35 PM    <DIR>          maven-archiver
10/31/2018  01:35 PM        66,296,012 policy-distribution-tarball-2.0.1-SNAPSHOT-tarball.tar.gz
10/31/2018  01:35 PM             7,171 policy-distribution-tarball-2.0.1-SNAPSHOT.jar
11/12/2018  10:56 AM    <DIR>          test-classes
Install Distribution

Distribution can be installed in different ways:

  • Windows, Unix, Cygwin: manually from a .tar.gz archive

  • Windows, Unix, Cygwin: build from source using Maven, then install manually

Install Manually from Archive (Windows, 7Zip, GUI)

Download a tar.gz archive and copy the file into the install folder (in this example C:\distribution). Assuming you are using 7Zip, right click on the file and extract the tar archive.


Extract the TAR archive

Then right-click on the newly created TAR file and extract the actual distribution.


Extract the distribution

Inside the new distribution folder you see the main directories: bin, etc, and lib


Once extracted, please rename the created folder to distribution-full-2.0.2-SNAPSHOT. This will keep the directory name in line with the rest of this documentation.

Build from Source
Build and Install Manually (Unix, Windows, Cygwin)

Clone the Distribution GIT repositories into a directory. Go to that directory. Use Maven to build Distribution (all details on building Distribution from source can be found in Distribution HowTo: Build).

Now, take the .tar.gz file and install distribution.

Installation Layout

A full installation of distribution comes with the following layout.

  • bin

  • etc

  • lib

Running Distribution in Docker
Run in ONAP

Running distribution from the ONAP docker repository only requires 2 commands:

  1. Log into the ONAP docker repo

docker login -u docker -p docker nexus3.onap.org:10003
  2. Run the distribution docker image

docker run -it --rm  nexus3.onap.org:10003/onap/policy-distribution:latest
Build a Docker Image

Alternatively, one can use the Dockerfile defined in the Docker package to build an image.

Distribution Configurations Explained

Introduction to Distribution Configuration

A distribution engine can be configured to use various combinations of policy reception handlers, policy decoders and policy forwarders. The system is built using a plugin architecture. Each configuration option is realized by a plugin, which can be loaded and configured when the engine is started. New plugins can be added to the system at any time, though to benefit from a new plugin, an engine will need to be restarted.


The distribution already comes with an SDC reception handler, a file reception handler, an HPA optimization policy decoder, a file-in-CSAR policy decoder, and a policy lifecycle API forwarder.

General Configuration Format

The distribution configuration file is a JSON file containing a few main blocks for different parts of the configuration. Each block then holds the configuration details. The following code shows the main blocks:

{
  "restServerParameters":{
    ... (1)
  },
  "receptionHandlerParameters":{ (2)
    "pluginHandlerParameters":{ (3)
      "policyDecoders":{...}, (4)
      "policyForwarders":{...} (5)
    }
  },
  "receptionHandlerConfigurationParameters":{
    ... (6)
  }
  ,
  "policyForwarderConfigurationParameters":{
    ... (7)
  }
  ,
  "policyDecoderConfigurationParameters":{
    ... (8)
  }
}

(1) rest server configuration
(2) reception handler plugin configurations
(3) plugin handler parameters configuration
(4) policy decoder plugin configuration
(5) policy forwarder plugin configuration
(6) reception handler plugin parameters
(7) policy forwarder plugin parameters
(8) policy decoder plugin parameters

A configuration example

The following example loads the HPA use case & general TOSCA policy related plug-ins.

Notifications are consumed from SDC through the SDC client. The consumed artifact format is CSAR.

Generated policies are forwarded to the policy lifecycle APIs for creation & deployment.

{
    "name":"SDCDistributionGroup",
    "restServerParameters":{
        "host":"0.0.0.0",
        "port":6969,
        "userName":"healthcheck",
        "password":"zb!XztG34"
      },
    "receptionHandlerParameters":{
         "SDCReceptionHandler":{
            "receptionHandlerType":"SDC",
            "receptionHandlerClassName":"org.onap.policy.distribution.reception.handling.sdc.SdcReceptionHandler",
                "receptionHandlerConfigurationName":"sdcConfiguration",
            "pluginHandlerParameters":{
                "policyDecoders":{
                    "ToscaPolicyDecoder":{
                        "decoderType":"ToscaPolicyDecoder",
                        "decoderClassName":"org.onap.policy.distribution.reception.decoding.policy.file.PolicyDecoderFileInCsarToPolicy",
                        "decoderConfigurationName": "toscaPolicyDecoderConfiguration"
                    }
                },
                "policyForwarders":{
                    "LifeCycleApiForwarder":{
                        "forwarderType":"LifeCycleAPI",
                        "forwarderClassName":"org.onap.policy.distribution.forwarding.lifecycle.api.LifecycleApiPolicyForwarder",
                        "forwarderConfigurationName": "lifecycleApiConfiguration"
                    }
                }
            }
        }
    },
    "receptionHandlerConfigurationParameters":{
        "sdcConfiguration":{
            "parameterClassName":"org.onap.policy.distribution.reception.handling.sdc.SdcReceptionHandlerConfigurationParameterGroup",
            "parameters":{
                "asdcAddress": "sdc-be.onap:8443",
                "messageBusAddress": [
                "message-router.onap"
                 ],
                "user": "policy",
                "password": "Kp8bJ4SXszM0WXlhak3eHlcse2gAw84vaoGGmJvUy2U",
                "pollingInterval":20,
                "pollingTimeout":30,
                "consumerId": "policy-id",
                "artifactTypes": [
                "TOSCA_CSAR",
                "HEAT"
                ],
                "consumerGroup": "policy-group",
                "environmentName": "AUTO",
                "keystorePath": "null",
                "keystorePassword": "null",
                "activeserverTlsAuth": false,
                "isFilterinEmptyResources": true,
                "isUseHttpsWithDmaap": true
            }
        }
    },
    "policyDecoderConfigurationParameters":{
        "toscaPolicyDecoderConfiguration":{
            "parameterClassName":"org.onap.policy.distribution.reception.decoding.policy.file.PolicyDecoderFileInCsarToPolicyParameterGroup",
            "parameters":{
                "policyFileName": "tosca_policy",
                "policyTypeFileName": "tosca_policy_type"
            }
        }
    },
    "policyForwarderConfigurationParameters":{
        "lifecycleApiConfiguration": {
            "parameterClassName": "org.onap.policy.distribution.forwarding.lifecycle.api.LifecycleApiForwarderParameters",
            "parameters": {
                "apiParameters": {
                    "hostName": "policy-api",
                    "port": 6969,
                    "userName": "healthcheck",
                    "password": "zb!XztG34"
                },
                "papParameters": {
                    "hostName": "policy-pap",
                    "port": 6969,
                    "userName": "healthcheck",
                    "password": "zb!XztG34"
                },
                "isHttps": true,
                "deployPolicies": true
            }
        }
    }
}

The Distribution Engine

The Distribution engine can be started using the policy-dist.sh script. The script is located in the source code in the distribution/packages/policy-distribution-docker/src/main/docker directory.


On UNIX and Cygwin systems use policy-dist.sh script.


On Windows systems, navigate to the distribution installation directory and run the following command: java -cp "etc;lib\*" org.onap.policy.distribution.main.startstop.Main -c <config-file-path>


The Distribution engine comes with CLI arguments for setting configuration. The configuration file is always required. The option -h prints a help screen.

usage: org.onap.policy.distribution.main.startstop.Main [options...]
options
-c,--config-file <CONFIG_FILE>  the full path to the configuration file to use, the configuration file must be a Json file
                                containing the distribution configuration parameters
-h,--help                       outputs the usage of this command
-v,--version                    outputs the version of distribution system

The Distribution REST End-points

The distribution engine comes with built-in REST based endpoints for fetching health check status & statistical data of running distribution system.

# Example Output from http -a '{user}:{password}' :6969/healthcheck

  HTTP/1.1 200 OK
  Content-Length: XXX
  Content-Type: application/json
  Date: Tue, 17 Apr 2018 10:51:14 GMT
  Server: Jetty(9.3.20.v20170531)
  {
       "code":200,
       "healthy":true,
       "message":"alive",
       "name":"Policy SSD",
       "url":"self"
  }

# Example Output from http -a '{user}:{password}' :6969/statistics

  HTTP/1.1 200 OK
  Content-Length: XXX
  Content-Type: application/json
  Date: Tue, 17 Apr 2018 10:51:14 GMT
  Server: Jetty(9.3.20.v20170531)
  {
       "code":200,
       "distributions":10,
       "distribution_complete_ok":8,
       "distribution_complete_fail":2,
       "downloads":15,
       "downloads_ok"; 10,
       "downloads_error": 5
  }

Using the Monitoring GUI

Here is an example of running the Monitoring GUI on a native Windows computer.

Environment setup

Create and run Docker images from the following tar packages:

docker load -i pdp.tar
docker load -i mariadb.tar
docker load -i api.tar
docker load -i apex.tar
docker load -i pap.tar
docker load -i xacml.tar

Download the latest source from gerrit and create a tar archive with a command such as:

tar -cf example.tar example

Download the latest drools-pdp source from gerrit.

Prepare Eclipse for starting drools-pdp.

Configure the drools-pdp dependencies in Eclipse:

  • create a config folder inside the drools-pdp policy-management module, and copy feature-lifecycle.properties into this folder

    Create the Folder Arc

    lifecycle.pdp.group=${envd:POLICY_PDP_PAP_GROUP:defaultGroup}

    dmaap.source.topics=POLICY-PDP-PAP
    dmaap.sink.topics=POLICY-PDP-PAP

    dmaap.source.topics.POLICY-PDP-PAP.servers=localhost:3904
    dmaap.source.topics.POLICY-PDP-PAP.managed=false

    dmaap.sink.topics.POLICY-PDP-PAP.servers=localhost:3904
    dmaap.sink.topics.POLICY-PDP-PAP.managed=false
    
  • update the run property “classpath” of “drools.system.Main” in Eclipse

    Update run Property

    Lifecycle classpath setting

Prepare PostMan for sending REST requests to components during the demo

Import “demo.postman_collection.json” into PostMan

Import JSON in PostMan


Clean the Docker environment:

# docker rm $(docker ps -aq)

Demo steps

Use docker-compose to start mariadb and message-router. Mariadb must be started in a separate console because it needs several seconds to finish startup, and the other docker startups depend on it.

# docker-compose up -d mariadb message-router

Use docker-compose to start the other components: API, PAP, APEX-PDP, XACML-PDP

# docker-compose up -d pdp xacml pap api

Start “drools.system.Main” in Eclipse.

Verify that the PDPs are registered in the database:

  • start PAP statistics monitoring GUI

    java -jar client/client-monitoring/target/client-monitoring-uber-2.2.0-SNAPSHOT.jar

  • open the monitor in a browser

    http://localhost:18999

set up pap parameter

Pap parameter

input parameters

Set up pap parameter

Fetch PdpLists

Fetch Pdp Lists

With no engine worker started, we can only see the health check result when we click on the instance’s APEX statistics

No engine worker started

XACML statistics

XACML statistics

Use PostMan to send requests to the API to create a policy type, create a policy, and deploy the policy:

  1. API_Create Policy Type
  2. API_Create Policy
  3. Simple Deploy Policy

Now the APEX PDP statistics data includes engine worker statistics, and the monitoring GUI updates automatically (every 2 minutes)

Engine worker started

Use PostMan to send a request to DMAAP, adding one xacml-pdp statistics message manually, and observe that the monitoring GUI updates with the manually added message

xacml-pdp statistics update

Update XACML statistics

Policy Release Notes

Version: 8.0.1

Release Date

2021-08-12 (Honolulu Maintenance Release #1)

Artifacts

Artifacts released:

Repository                  Java Artifact  Docker Image (if applicable)
policy/parent               3.3.2
policy/common               1.8.2
policy/models               2.4.4
policy/api                  2.4.4          onap/policy-api:2.4.4
policy/pap                  2.4.5          onap/policy-pap:2.4.5
policy/drools-pdp           1.8.4          onap/policy-drools:1.8.4
policy/apex-pdp             2.5.4          onap/policy-apex-pdp:2.5.4
policy/xacml-pdp            2.4.5          onap/policy-xacml-pdp:2.4.5
policy/drools-applications  1.8.4          onap/policy-pdpd-cl:1.8.4
policy/distribution         2.5.4          onap/policy-distribution:2.5.4
policy/docker               2.2.1          onap/policy-jdk-alpine:2.2.1, onap/policy-jre-alpine:2.2.1

Bug Fixes and Necessary Enhancements

  • [POLICY-3062] - Update the ENTRYPOINT in APEX-PDP Dockerfile

  • [POLICY-3066] - Stackoverflow error in APEX standalone after changing to onap java image

  • [POLICY-3078] - Support SSL communication in Kafka IO plugin of Apex-PDP

  • [POLICY-3173] - APEX-PDP incorrectly reports successful policy deployment to PAP

  • [POLICY-3202] - PDP-D: no locking feature: service loader not locking the no-lock-manager

  • [POLICY-3227] - Implementation of context album improvements in apex-pdp

  • [POLICY-3230] - Make default PDP-D and PDP-D-APPS work out of the box

  • [POLICY-3248] - PdpHeartbeats are not getting processed by PAP

  • [POLICY-3301] - Apex Avro Event Schemas - Not support for colon ‘:’ character in field names

  • [POLICY-3305] - Ensure XACML PDP application/translator methods are extendable

  • [POLICY-3331] - PAP: should allow for external configuration of groups other than defaultGroup

  • [POLICY-3338] - Upgrade CDS dependency to the latest version

  • [POLICY-3366] - PDP-D: support configuration of overarching DMAAP https flag

  • [POLICY-3450] - PAP should support turning on/off via configuration storing PDP statistics

  • [POLICY-3454] - PDP-D CL APPS: swagger mismatched libraries cause telemetry shell to fail

  • [POLICY-3485] - Limit statistics record count

  • [POLICY-3507] - CDS Operation Policy execution runtime error

  • [POLICY-3516] - Upgrade CDS dependency to the 1.1.5 version

Known Limitations

The APIs provided by xacml-pdp (e.g., healthcheck, statistics, decision) are always active. While PAP controls which policies are deployed to a xacml-pdp, it does not control whether or not the APIs are active. In other words, xacml-pdp will respond to decision requests, regardless of whether PAP has made it ACTIVE or PASSIVE.

Version: 8.0.0

Release Date

2021-04-29 (Honolulu Release)

New features

Artifacts released:

Repository                  Java Artifact  Docker Image (if applicable)
policy/parent               3.3.0
policy/common               1.8.0
policy/models               2.4.2
policy/api                  2.4.2          onap/policy-api:2.4.2
policy/pap                  2.4.2          onap/policy-pap:2.4.2
policy/drools-pdp           1.8.2          onap/policy-drools:1.8.2
policy/apex-pdp             2.5.2          onap/policy-apex-pdp:2.5.2
policy/xacml-pdp            2.4.2          onap/policy-xacml-pdp:2.4.2
policy/drools-applications  1.8.2          onap/policy-pdpd-cl:1.8.2
policy/distribution         2.5.2          onap/policy-distribution:2.5.2
policy/docker               2.2.1          onap/policy-jdk-alpine:2.2.1, onap/policy-jre-alpine:2.2.1

Key Updates

  • Enhanced statistics
    • PDPs provide statistics, retrievable via PAP REST API

  • PDP deployment status
    • Policy deployment API enhanced to reflect actual policy deployment status in PDPs

    • Make PAP component stateless

  • Policy support
    • Upgrade XACML 3.0 code to use new Time Extensions

    • Enhancements for interoperability between Native Policies and other policy types

    • Support for arbitrary policy types on the Drools PDP

    • Improve handling of multiple policies in APEX PDP

    • Update policy-models TOSCA handling with Control Loop Entities

  • Alternative locking mechanisms
    • Support NO locking feature in Drools-PDP

  • Security
    • Remove credentials in code from the Apex JMS plugin

  • Actor enhancements
    • Actors should give better warnings than NPE when data is missing

    • Remove old event-specific actor code

  • PDP functional assignments
    • Make PDP type configurable in drools-pdp

    • Make PDP type configurable in xacml-pdp

  • Performance improvements
    • Support policy updates between PAP and the PDPs, phase 1

  • Maintainability
    • Use ONAP base docker image

    • Remove GPLv3 components from docker containers

    • Move CSITs to Policy repos

    • Deprecate server pool feature in drools-pdp

  • PoCs
    • Merge CLAMP functionality into Policy Framework project

    • TOSCA Defined Control Loop

Known Limitations, Issues and Workarounds

System Limitations

The policy API component requires a fresh new database when migrating to the honolulu release. Therefore, upgrades require a fresh new database installation. Please see the Installing or Upgrading Policy section for appropriate procedures.

Known Vulnerabilities
Workarounds

Security Notes

  • POLICY-3005 - Bump direct dependency versions
    • Upgrade org.onap.dmaap.messagerouter.dmaapclient to 1.1.12

    • Upgrade org.eclipse.persistence to 2.7.8

    • Upgrade org.glassfish.jersey.containers to 2.33

    • Upgrade com.fasterxml.jackson.module to 2.11.3

    • Upgrade com.google.re2j to 1.5

    • Upgrade org.mariadb.jdbc to 2.7.1

    • Upgrade commons-codec to 1.15

    • Upgrade com.thoughtworks.xstream to 1.4.15

    • Upgrade org.apache.httpcomponents:httpclient to 4.5.13

    • Upgrade org.apache.httpcomponents:httpcore to 4.4.14

    • Upgrade org.json to 20201115

    • Upgrade org.projectlombok to 1.18.16

    • Upgrade org.yaml to 1.27

    • Upgrade io.cucumber to 6.9.1

    • Upgrade org.apache.commons:commons-lang3 to 3.11

    • Upgrade commons-io to 2.8.0

  • POLICY-2936 - Upgrade to latest version of CDS API
    • Upgrade io.grpc to 1.35.0

    • Upgrade com.google.protobuf to 3.14.0

References

For more information on the ONAP Honolulu release, please see:

  1. ONAP Home Page

  2. ONAP Documentation

  3. ONAP Release Downloads

  4. ONAP Wiki Page

Quick Links:

Version: 7.0.0

Release Date

2020-12-03 (Guilin Release)

New features

Artifacts released:

Repository                  Java Artifact  Docker Image (if applicable)
policy/parent               3.2.0
policy/common               1.7.1
policy/models               2.3.5
policy/api                  2.3.3          onap/policy-api:2.3.3
policy/pap                  2.3.3          onap/policy-pap:2.3.3
policy/drools-pdp           1.7.4          onap/policy-drools:1.7.4
policy/apex-pdp             2.4.4          onap/policy-apex-pdp:2.4.4
policy/xacml-pdp            2.3.3          onap/policy-xacml-pdp:2.3.3
policy/drools-applications  1.7.5          onap/policy-pdpd-cl:1.7.5
policy/distribution         2.4.3          onap/policy-distribution:2.4.3
policy/docker               2.1.1          onap/policy-jdk-alpine:2.1.1, onap/policy-jre-alpine:2.1.1

Key Updates

  • Kubernetes integration
    • All components return with non-zero exit code in case of application failure

    • All components log to standard out (i.e., k8s logs) by default

    • Continue to write log files inside individual pods, as well

  • Multi-tenancy
    • Basic initial support using the existing features

  • E2E Network Slicing
    • Added ModifyNSSI operation to SO actor

  • Consolidated health check
    • Indicate failure if there aren’t enough PDPs registered

  • Legacy operational policies
    • Removed from all components

  • OOM helm charts refactoring
    • Name standardization

    • Automated certificate generation

  • Actor Model
    • Support various use cases and provide more flexibility to Policy Designers

    • Reintroduced the “usecases” controller into drools-pdp, supporting the use cases under the revised actor architecture

  • Guard Application
    • Support policy filtering

  • Matchable Application - Support for ONAP or 3rd party components to create matchable policy types out of the box

  • Policy Lifecycle & Administration API
    • Query/Delete by policy name & version without policy type

  • Apex-PDP enhancements
    • Support multiple event & response types coming from a single endpoint

    • Standalone installation now supports Tosca-based policies

    • Legacy policy format has been removed

    • Support chaining/handling of gRPC failure responses

  • Policy Distribution
    • HPA decoders & related classes have been removed

  • Policy Engine
    • Deprecated

Known Limitations, Issues and Workarounds

System Limitations

The policy API component requires a fresh new database when migrating to the guilin release. Therefore, upgrades require a fresh new database installation. Please see the Installing or Upgrading Policy section for appropriate procedures.

Known Vulnerabilities
  • POLICY-2463 - In APEX Policy javascript task logic, JSON.stringify causing stackoverflow exceptions

Workarounds
  • POLICY-2463 - Use the stringify method of the execution context

Security Notes

  • POLICY-2878 - Dependency upgrades
    • Upgrade com.fasterxml.jackson to 2.11.1

  • POLICY-2387 - Dependency upgrades
    • Upgrade org.json to 20200518

    • Upgrade com.google.re2j to 1.4

    • Upgrade com.thoughtworks.xstream to 1.4.12

    • Upgrade org.eclipse.persistence to 2.2.1

    • Upgrade org.apache.httpcomponents to 4.5.12

    • Upgrade org.projectlombok to 1.18.12

    • Upgrade org.slf4j to 1.7.30

    • Upgrade org.codehaus.plexus to 3.3.0

    • Upgrade com.h2database to 1.4.200

    • Upgrade io.cucumber to 6.1.2

    • Upgrade org.assertj to 3.16.1

    • Upgrade com.openpojo to 0.8.13

    • Upgrade org.mockito to 3.3.3

    • Upgrade org.awaitility to 4.0.3

    • Upgrade org.onap.aaf.authz to 2.1.21

  • POLICY-2668 - Dependency upgrades
    • Upgrade org.java-websocket to 1.5.1

  • POLICY-2623 - Remove log4j dependency

  • POLICY-1996 - Dependency upgrades
    • Upgrade org.onap.dmaap.messagerouter.dmaapclient to 1.1.11

References

For more information on the ONAP Guilin release, please see:

  1. ONAP Home Page

  2. ONAP Documentation

  3. ONAP Release Downloads

  4. ONAP Wiki Page

Quick Links:

Version: 6.0.1

Release Date

2020-08-21 (Frankfurt Maintenance Release #1)

Artifacts

Artifacts released:

Repository                  Java Artifact  Docker Image (if applicable)
policy/drools-applications  1.6.4          onap/policy-pdpd-cl:1.6.4

Bug Fixes

Security Notes

Fixed Security Issues

  • [POLICY-2678] - policy/engine tomcat upgrade for CVE-2020-11996

Version: 6.0.0

Release Date

2020-06-04 (Frankfurt Release)

New features

Artifacts released:

Repository                  Java Artifact  Docker Image (if applicable)
policy/parent               3.1.3
policy/common               1.6.5
policy/models               2.2.6
policy/api                  2.2.4          onap/policy-api:2.2.4
policy/pap                  2.2.3          onap/policy-pap:2.2.3
policy/drools-pdp           1.6.3          onap/policy-drools:1.6.3
policy/apex-pdp             2.3.2          onap/policy-apex-pdp:2.3.2
policy/xacml-pdp            2.2.2          onap/policy-xacml-pdp:2.2.2
policy/drools-applications  1.6.4          onap/policy-pdpd-cl:1.6.4
policy/engine               1.6.4          onap/policy-pe:1.6.4
policy/distribution         2.3.2          onap/policy-distribution:2.3.2
policy/docker               2.0.1          onap/policy-jdk-alpine:2.0.1, onap/policy-jre-alpine:2.0.1, onap/policy-jdk-debian:2.0.1, onap/policy-jre-debian:2.0.1

Summary

New features include policy update notifications, native policy support, streamlined health check for the Policy Administration Point (PAP), configurable pre-loading/pre-deployment of policies, new APIs (e.g. to create one or more Policies with a single call), new experimental PDP monitoring GUI, and enhancements to all three PDPs: XACML, Drools, APEX.

Common changes in all policy components

  • Upgraded all policy components to Java 11.

  • Logback file can be now loaded using OOM configmap.
    • If needed, the logback file can be loaded as a configmap during the OOM deployment. For this, just put the logback.xml file in the corresponding config directory in the OOM charts.

  • TOSCA changes:
    • “tosca_definitions_version” is now “tosca_simple_yaml_1_1_0”

    • typeVersion→ type_version, int→integer, bool→boolean, String→string, Map→map, List→list

  • SupportedPolicyTypes now removed from pdp status message.
    • All PDPs now send the PdpGroup to which they belong in the registration message.

    • SupportedPolicyTypes are not sent anymore.

  • Native Policy Support
    • Each PDP engine has its own native policy language. A new Policy Type onap.policies.Native was created and supported for each PDP engine to support native policy types.

POLICY-PAP

  • Policy Update Notifications
    • PAP now generates notifications via the DMaaP Message Router when policies are successfully or unsuccessfully deployed (or undeployed) from all relevant PDPs.

  • PAP API to fetch Policy deployment status
    • Clients will be able to poll the PAP API to find out when policies have been successfully or unsuccessfully deployed to the PDPs.

  • Removing supportedPolicyTypes from PdpStatus
    • PDPs are assigned to a PdpGroup based on what group is mentioned in the heartbeat. Earlier this was done based on the supportedPolicyTypes.

  • Support policy types with wild-cards, Preload wildcard supported type in PAP

  • PAP should NOT make a PDP passive if it cannot deploy a policy.
    • If a PDP fails to deploy one or more policies specified in a PDP-UPDATE message, PAP will undeploy those policies that failed to deploy to the PDP. This entails removing the policies from the Pdp Group(s), issuing new PDP-UPDATE requests, and updating the notification tracking data.

    • Also, re-register pdp if not found in the DB during heartbeat processing.

  • Consolidated health check in PAP
    • PAP can report the health check for ALL the policy components now. The PDPs’ health is tracked based on heartbeats, and the other components’ REST APIs are used for healthcheck.

    • “healthCheckRestClientParameters” (REST parameters for API and Distribution healthcheck) are added to the startup config file in PAP.

  • PDP statistics from PAP
    • All PDPs send statistics data as part of the heartbeat. PAP reads this and saves this data to the database, and this statistics data can be accessed from the monitoring GUI.

  • PAP API for Create or Update PdpGroups
    • A new API is now available just for creating/updating PDP Groups. Policies cannot be added/updated during PDP Group create/update operations. There is another API for this. So, if provided in the create/update group request, they are ignored. Supported policy types are defined during PDP Group creation. They cannot be updated once they are created. Refer to this for details: https://github.com/onap/policy-parent/blob/master/docs/pap/pap.rst#id8

  • PAP API to deploy policies to PdpGroups
    • A new API is introduced to deploy policies on specific PDPGroups. Each subgroup includes an “action” property, which is used to indicate that the policies are being added (POST) to the subgroup, deleted (DELETE) from the subgroup, or that the subgroup’s entire set of policies is being replaced (PATCH) by a new set of policies.

POLICY-API

  • A new simplified API to create one or more policies in one call.
    • This simplified API doesn’t require policy type id & policy type version to be part of the URL.

    • The simple URI “policy/api/v1/policies” with a POST input body takes in a ToscaServiceTemplate with the policies in it.

  • List of Preloaded policy types are made configurable
    • Until El Alto, the list of pre-loaded policy types was hardcoded in the code. Now, this is made configurable, and the list can be specified in the startup config file for the API component under “preloadPolicyTypes”. The list is ignored if the DB already contains one or more policy types.

  • Preload default policies for ONAP components
    • The ability to configure the preloading of initial default policies into the system upon startup.

  • A lot of improvements to the API code and validations corresponding to the changes in policy-models.
    • Creating the same policyType/policy repeatedly without any change in the request body will always be successful with a 200 response

    • If there is any change in the request body, then it should be a new version. If a change is posted without a version change, a 406 error response is returned.

  • There are known versioning issues in Policy Types handling.
    • https://jira.onap.org/browse/POLICY-2377 covers the versioning issues in Policy. Basically, multiple versions of a Policy Type cannot be handled in TOSCA. So, in Frankfurt, the latest version of the policy type is examined. This will be further looked into in Guilin.

  • Cascaded GET of PolicyTypes and Policies
    • Fetching/GET PolicyType now returns all of the referenced/parent policyTypes and dataTypes as well.

    • Fetching/GET Policy allows specifying mode now.

    • By default the mode is “BARE”, which returns only the requested Policy in response. If mode is specified as “REFERENCED”, all of the referenced/parent policyTypes and dataTypes are returned as well.

  • The /deployed API is removed from policy/api
    • This run time administration job to see the deployment status of a policy is now possible via PAP.

  • Changes related to design and support of TOSCA Compliant Policy Types for the operational and guard policy models.

POLICY-DISTRIBUTION

  • From Frankfurt release, policy-distribution component uses APIs provided by Policy-API and Policy-PAP for creation of policy types and policies, and deployment of policies.
    • Note: If “deployPolicies” field in the startup config file is true, then only the policies are deployed using PAP endpoint.

  • Policy/engine & apex-pdp dependencies are removed from policy-distribution.

POLICY-APEX-PDP

  • Changed the JavaScript executor from Nashorn to Rhino as part of Java 11 upgrade.
  • APEX supports multiple policy deployment in Frankfurt.
    • Up through El Alto APEX-PDP had the capability to take in only a single ToscaPolicy. When PAP sends a list of Tosca Policies in PdpUpdate, only the first one is taken and only that single Policy is deployed in APEX. This is fixed in Frankfurt. Now, APEX can deploy a list of Tosca Policies altogether into the engine.

    • Note: There shouldn’t be any duplicates in the deployed policies (for e.g. same input/output parameter names, or same event/task names etc).

    • For example, when 3 policies are deployed and one has duplicates, say the same input/task or any such concept is used in the 2nd and 3rd policy, then APEX-PDP ignores the 3rd policy and executes only the 1st and 2nd policies. APEX-PDP also responds back to PAP with a message saying that “only Policy 1 and 2 are deployed. Others failed due to duplicate concept”.

  • Context retainment during policy upgrade.
    • In APEX-PDP, context is referred by the apex concept ‘contextAlbum’. When there is no major version change in the upgraded policy to be deployed, the existing context of the currently running policy is retained. When the upgraded policy starts running, it will have access to this context as well.

    • For example, Policy A v1.1 is currently deployed to APEX. It has a contextAlbum named HeartbeatContext and heartbeats are currently added to the HeartbeatContext based on events coming in to the policy execution. Now, when Policy A v1.2 (with some other changes and the same HeartbeatContext) is deployed, Policy A v1.1 is replaced by Policy A v1.2 in the APEX engine, but the content in HeartbeatContext is retained for Policy A v1.2.

  • APEX-PDP now specifies which PdpGroup it belongs to.
    • Up through El Alto, PAP assigned each PDP to a PDP group based on the supportedPolicyTypes it sends in the heartbeat. But in Frankfurt, each PDP comes up saying which PdpGroup it belongs to, and this is sent to PAP in the heartbeat. PAP then registers the PDP in the PdpGroup specified by the PDP. If no group name is specified like this, then PAP assigns the PDP to defaultGroup by default. SupportedPolicyTypes are not sent to PAP by the PDP now.

    • In APEX-PDP, this can be specified in the startup config file(OnapPfConfig.json). “pdpGroup”: “<groupName>” is added under “pdpStatusParameters” in the config file.

  • APEX-PDP now sends PdpStatistics data in heartbeat.
    • Apex now sends the PdpStatistics data in every heartbeat sent to PAP. PAP saves this data to the database, and this statistics data can be accessed from the monitoring GUI.

  • Removed “content” section from ToscaPolicy properties in APEX.
    • Up through El Alto, APEX specific policy information was placed under properties|content in ToscaPolicy. Avoid placing under “content” and keep the information directly under properties. So, the ToscaPolicy structure will have apex specific policy information in properties|engineServiceParameters, properties|eventInputParameters, properties|eventOutputParameters.

  • Passing parameters from ApexConfig to policy logic.
  • GRPC support for APEX-CDS interaction.

POLICY-XACML-PDP

  • Added an optional Decision API parameter for monitor decisions that returns abbreviated results.
    • Returns only an abbreviated list of policies (e.g. metadata such as the policy id and version) without the actual contents of the policies (e.g. the properties); see the sketch below.
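
    A hedged example of such a decision request; the “abbrev” query parameter and all field values here are illustrative rather than a verbatim API capture:

      POST /policy/pdpx/v1/decision?abbrev=true
      {
        "ONAPName": "DCAE",
        "ONAPComponent": "policy-handler",
        "ONAPInstance": "0",
        "action": "configure",
        "resource": { "policy-id": "onap.scaleout.tca" }
      }

    The response then lists the matching policies with only their metadata (policy id and version) instead of the full policy properties.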

  • XACML PDP now supports PASSIVE_MODE.

  • Added support to return a status and an error if the pdp-x fails to load a policy.

  • Changed the optimization Decision API application to support a “closest matches” algorithm.

  • Changed xacml-pdp to report the PDP group defined in the XacmlPdpParameters config file as part of the heartbeat. Also removed supportedPolicyTypes from the pdpStatus message.

  • Designed the TOSCA policy model for SDNC naming policies and implemented an application that translates it into a working policy available through the Decision API.

  • XACML PDP support for Control Loop Coordination.
    • Added policies for SON and PCI that support each blocking the other, with test cases and appropriate requests.

  • Extended PDP-X capabilities so that it can load and enforce native XACML policies deployed from PAP.

POLICY-DROOLS-PDP

  • Support for PDP-D in offline mode for locked deployments; this is the default ONAP installation.

  • Parameterize maven repository URLs for easier CI/CD integration.

  • Support for Tosca Compliant Operational Policies.

  • Support for TOSCA Compliant Native Policies, allowing creation and deployment of new drools-applications.

  • Validation of Operational and Native Policies against their policy type.

  • Support for a generic Drools-PDP docker image to host any type of application.

  • Experimental Server Pool feature that supports multiple active Drools PDP hosts.

POLICY-DROOLS-APPLICATIONS

  • Removal of DCAE ONSET alarm duplicates (with different request IDs).

  • Support for a new controller (frankfurt) that supports the ONAP use cases under the new actor architecture.

  • Deprecated the “usecases” controller supporting the use cases under the legacy actor architecture.

  • Deleted the unsupported “amsterdam” controller related projects.

Known Limitations, Issues and Workarounds

System Limitations

The policy API component requires a fresh database when migrating to the Frankfurt release; therefore, upgrades require a fresh database installation. Please see the Installing or Upgrading Policy section for the appropriate procedures.

Known Vulnerabilities
  • POLICY-2463 - In APEX Policy javascript task logic, JSON.stringify causing stackoverflow exceptions

  • POLICY-2487 - policy/api hangs in loop if preload policy does not exist

Workarounds
  • POLICY-2463 - Parse the incoming object using JSON.parse() or cast the object to a String

Security Notes

  • POLICY-2221 - Password removal from helm charts

  • POLICY-2064 - Allow overriding of keystore and truststore in policy helm charts

  • POLICY-2381 - Dependency upgrades
    • Upgrade drools 7.33.0

    • Upgrade jquery to 3.4.1 in jquery-ui

    • Upgrade snakeyaml to 1.26

    • Upgrade org.infinispan infinispan-core 10.1.5.Final

    • Upgrade io.netty 4.1.48.Final

    • Exclude org.glassfish.jersey.media jersey-media-jaxb artifact

    • Upgrade com.fasterxml.jackson.core 2.10.0.pr3

    • Upgrade org.jgroups 4.1.5.Final

    • Upgrade commons-codec 20041127.091804

    • Upgrade com.github.ben-manes.caffeine 2.8.0

Version: 5.0.2

Release Date

2020-08-24 (El Alto Maintenance Release #1)

New Features

Artifacts released:

Repository                  Java Artifact  Docker Image (if applicable)
policy/api                  2.1.3          onap/policy-api:2.1.3
policy/pap                  2.1.3          onap/policy-pap:2.1.3
policy/drools-pdp           1.5.3          onap/policy-drools:1.5.3
policy/apex-pdp             2.2.3          onap/policy-apex-pdp:2.2.3
policy/xacml-pdp            2.1.3          onap/policy-xacml-pdp:2.1.3
policy/drools-applications  1.5.4          onap/policy-pdpd-cl:1.5.4
policy/engine               1.5.3          onap/policy-pe:1.5.3
policy/distribution         2.2.2          onap/policy-distribution:2.2.2
policy/docker               1.4.0          onap/policy-common-alpine:1.4.0, onap/policy/base-alpine:1.4.0

Bug Fixes

  • [PORTAL-760] - Access to Policy portal is impossible

  • [POLICY-2107] - policy/distribution license issue in resource needs to be removed

  • [POLICY-2169] - SDC client interface change caused compile error in policy distribution

  • [POLICY-2171] - Upgrade elalto branch models and drools-applications

  • [POLICY-1509] - Investigate Apex org.python.jython-standalone.2.7.1

  • [POLICY-2062] - APEX PDP logs > 4G filled local storage

Security Notes

Fixed Security Issues

Version: 5.0.1

Release Date

2019-10-24 (El Alto Release)

New Features

Artifacts released:

Repository                  Java Artifact  Docker Image (if applicable)
policy/parent               3.0.1
policy/common               1.5.2
policy/models               2.1.4
policy/api                  2.1.2          onap/policy-api:2.1.2
policy/pap                  2.1.2          onap/policy-pap:2.1.2
policy/drools-pdp           1.5.2          onap/policy-drools:1.5.2
policy/apex-pdp             2.2.1          onap/policy-apex-pdp:2.2.1
policy/xacml-pdp            2.1.2          onap/policy-xacml-pdp:2.1.2
policy/drools-applications  1.5.3          onap/policy-pdpd-cl:1.5.3
policy/engine               1.5.2          onap/policy-pe:1.5.2
policy/distribution         2.2.1          onap/policy-distribution:2.2.1
policy/docker               1.4.0          onap/policy-common-alpine:1.4.0, onap/policy/base-alpine:1.4.0

The El Alto release for POLICY delivered the following Epics. For a full list of stories and tasks delivered in the El Alto release, refer to JiraPolicyElAlto.

  • [POLICY-1727] - This epic covers technical debt left over from Dublin

  • POLICY-969 Docker improvement in policy framework modules

  • POLICY-1074 Fix checkstyle warnings in every repository

  • POLICY-1121 RPM build for Apex

  • POLICY-1223 CII Silver Badging Requirements

  • POLICY-1600 Clean up hash code equality checks, cloning and copying in policy-models

  • POLICY-1646 Replace uses of getCanonicalName() with getName()

  • POLICY-1652 Move PapRestServer to policy/common

  • POLICY-1732 Enable maven-checkstyle-plugin in apex-pdp

  • POLICY-1737 Upgrade oParent 2.0.0 - change daily jobs to staging jobs

  • POLICY-1742 Make HTTP return code handling configurable in APEX

  • POLICY-1743 Make URL configurable in REST Requestor and REST Client

  • POLICY-1744 Remove topic.properties and incorporate into overall properties

  • POLICY-1770 PAP REST API for PDPGroup Healthcheck

  • POLICY-1771 Boost policy/api JUnit code coverage

  • POLICY-1772 Boost policy/xacml-pdp JUnit code coverage

  • POLICY-1773 Enhance the policy/xacml-pdp S3P Stability and Performance tests

  • POLICY-1784 Better Handling of “version” field value with clients

  • POLICY-1785 Deploy same policy with a new version simply adds to the list

  • POLICY-1786 Create a simple way to populate the guard database for testing

  • POLICY-1791 Address Sonar issues in new policy repos

  • POLICY-1795 PAP: bounced apex and xacml pdps show deleted instance in pdp status through APIs.

  • POLICY-1800 API|PAP components use different version formats

  • POLICY-1805 Build up stability test for api component to follow S3P requirements

  • POLICY-1806 Build up S3P performance test for api component

  • POLICY-1847 Add control loop coordination as a preloaded policy type

  • POLICY-1871 Change policy/distribution to support ToscaPolicyType & ToscaPolicy

  • POLICY-1881 Upgrade policy/distribution to latest SDC artifacts

  • POLICY-1885 Apex-pdp: Extend CLIEditor to generate policy in ToscaServiceTemplate format

  • POLICY-1898 Move apex-pdp & distribution documents to policy/parent

  • POLICY-1942 Boost policy/apex-pdp JUnit code coverage

  • POLICY-1953 Create addTopic taking BusTopicParams instead of Properties in policy/endpoints

  • Additional items delivered with the release.

  • POLICY-1637 Remove “version” from PdpGroup

  • POLICY-1653 Remove isNullVersion() method

  • POLICY-1966 Fix more sonar issues in policy drools

  • POLICY-1988 Generate El Alto AAF Certificates

  • [POLICY-1823] - This epic covers the work to develop features that will be deployed dark in El Alto.

  • POLICY-1762 Create CDS API model implementation

  • POLICY-1763 Create CDS Actor

  • POLICY-1899 Update optimization xacml application to support more flexible Decision API

  • POLICY-1911 XACML PDP must be able to retrieve Policy Type from API

Bug Fixes

The following bug fixes have been deployed with this release:

  • [POLICY-1671] - policy/engine JUnit tests now take over 30 minutes to run

  • [POLICY-1725] - XACML PDP returns 500 vs 400 for bad syntax JSON

  • [POLICY-1793] - API|MODELS: Retrieving Legacy Operational Policy as a Tosca Policy with wrong version

  • [POLICY-1795] - PAP: bounced apex and xacml pdps show deleted instance in pdp status through APIs.

  • [POLICY-1800] - API|PAP components use different version formats

  • [POLICY-1802] - Apex-pdp: context album is mandatory for policy model to compile

  • [POLICY-1803] - PAP should undeploy policies when subgroup is deleted

  • [POLICY-1807] - Latest version is always returned when using the endpoint to retrieve all versions of a particular policy

  • [POLICY-1808] - API|PAP|PDP-X [new] should publish docker images with the following tag X.Y-SNAPSHOT-latest

  • [POLICY-1810] - API: support “../deployed” REST API (URLs) for legacy policies

  • [POLICY-1811] - The endpoint of retrieving the latest version of TOSCA policy does not return the latest one, especially when there are double-digit versions

  • [POLICY-1818] - APEX does not allow arbitrary Kafka parameters to be specified

  • [POLICY-1838] - Drools-pdp error log is missing data in ErrorDescription field

  • [POLICY-1839] - Policy Model currently needs to be escaped

  • [POLICY-1843] - Decision API not returning monitoring policies when calling api with policy-type

  • [POLICY-1844] - XACML PDP does not update policy statistics

  • [POLICY-1858] - Usecase DRL - named query should not be invoked

  • [POLICY-1859] - Drools rules should not timeout when given timeout=0 - should be treated as infinite

  • [POLICY-1872] - brmsgw fails building a jar - trafficgenerator dependency does not exist

  • [POLICY-2047] - TOSCA Policy Types should be map not a list

  • [POLICY-2060] - ToscaProperties object is missing metadata field

  • [POLICY-2156] - missing field in create VF module request to SO

Security Notes

Fixed Security Issues

Known Security Issues

Known Vulnerabilities in Used Modules

POLICY code has been formally scanned during build time using NexusIQ, and all Critical vulnerabilities have been addressed; items that remain open have been assessed for risk and determined to be false positives. The POLICY open Critical security vulnerabilities and their risk assessment have been documented as part of the project (El Alto Release).

Known Issues

The following known issues will be addressed in a future release:

  • [POLICY-1276] - JRuby interpreter shutdown fails on second and subsequent runs

  • [POLICY-1291] - Maven Error when building Apex documentation in Windows

  • [POLICY-1578] - PAP pushPolicies.sh in startup fails due to race condition in some environments

  • [POLICY-1832] - API|PAP: data race condition seem to appear sometimes when creating and deploying policy

  • [POLICY-2103] - policy/distribution may need to re-synch if SDC gets reinstalled

  • [POLICY-2062] - APEX PDP logs > 4G filled local storage

  • [POLICY-2080] - drools-pdp JUnit fails intermittently in feature-active-standby-management

  • [POLICY-2111] - PDP-D APPS: AAF Cadi conflicts with Aether libraries

  • [POLICY-2158] - PAP loses synchronization with PDPs

  • [POLICY-2159] - PAP console (legacy): cannot edit policies with GUI

Version: 4.0.0

Release Date

2019-06-26 (Dublin Release)

New Features

Artifacts released:

Repository                  Java Artifact  Docker Image (if applicable)
policy/parent               2.1.0
policy/common               1.4.0
policy/models               2.0.2
policy/api                  2.0.1          onap/policy-api:2.0.1
policy/pap                  2.0.1          onap/policy-pap:2.0.1
policy/drools-pdp           1.4.0          onap/policy-drools:1.4.0
policy/apex-pdp             2.1.0          onap/policy-apex-pdp:2.1.0
policy/xacml-pdp            2.1.0          onap/policy-xacml-pdp:2.1.0
policy/drools-applications  1.4.2          onap/policy-pdpd-cl:1.4.2
policy/engine               1.4.1          onap/policy-pe:1.4.1
policy/distribution         2.1.0          onap/policy-distribution:2.1.0
policy/docker               1.4.0          onap/policy-common-alpine:1.4.0, onap/policy/base-alpine:1.4.0

The Dublin release for POLICY delivered the following Epics. For a full list of stories and tasks delivered in the Dublin release, refer to JiraPolicyDublin.

  • [POLICY-1068] - This epic covers the work to clean up, enhance, and fix the Control Loop based code base.
    • POLICY-1195 Separate model code from drools-applications into other repositories

    • POLICY-1367 Spike - Experimentation for management of Drools templates and Operational Policies

    • POLICY-1397 PDP-D: NOOP Endpoints Support to test Operational Policies.

    • POLICY-1459 PDP-D [Control Loop] : Create a Control Loop flavored PDP-D image

  • [POLICY-1069] - This epic covers the work to harden the codebase for the Policy Framework project.
    • POLICY-1007 Remove Jackson from policy framework components

    • POLICY-1202 policy-engine & apex-pdp are using different version of eclipselink

    • POLICY-1250 Fix issues reported by sonar in policy modules

    • POLICY-1368 Remove hibernate from policy repos

    • POLICY-1457 Use Alpine in base docker images

  • [POLICY-1072] - This epic covers the work to support S3P Performance criteria.
    • S3P Performance related items

  • [POLICY-1171] - Enhance CLC Facility
    • POLICY-1173 High-level specification of coordination directives

  • [POLICY-1220] - This epic covers the work to support S3P Security criteria
    • POLICY-1538 Upgrade Elasticsearch to 6.4.x to clear security issue

  • [POLICY-1269] - R4 Dublin - ReBuild Policy Infrastructure
    • POLICY-1270 Policy Lifecycle API RESTful HealthCheck/Statistics Main Entry Point

    • POLICY-1271 PAP RESTful HealthCheck/Statistics Main Entry Point

    • POLICY-1272 Create the S3P JMeter tests for API, PAP, XACML (2nd Gen)

    • POLICY-1273 Policy Type Application Design Requirements

    • POLICY-1436 XACML PDP RESTful HealthCheck/Statistics Main Entry Point

    • POLICY-1440 XACML PDP RESTful Decision API Main Entry Point

    • POLICY-1441 Policy Lifecycle API RESTful Create/Read Main Entry Point for Policy Types

    • POLICY-1442 Policy Lifecycle API RESTful Create/Read Main Entry Point for Concrete Policies

    • POLICY-1443 PAP Dmaap PDP Register/UnRegister Main Entry Point

    • POLICY-1444 PAP Dmaap Policy Deploy/Undeploy Policies Main Entry Point

    • POLICY-1445 XACML PDP upgrade to xacml 2.0.0

    • POLICY-1446 Policy Lifecycle API RESTful Delete Main Entry Point for Policy Types

    • POLICY-1447 Policy Lifecycle API RESTful Delete Main Entry Point for Concrete Policies

    • POLICY-1449 XACML PDP Dmaap Register/UnRegister Functionality

    • POLICY-1451 XACML PDP Dmaap Deploy/UnDeploy Functionality

    • POLICY-1452 Apex PDP Dmaap Register/UnRegister Functionality

    • POLICY-1453 Apex PDP Dmaap Deploy/UnDeploy Functionality

    • POLICY-1454 Drools PDP Dmaap Register/UnRegister Functionality

    • POLICY-1455 Drools PDP Dmaap Deploy/UnDeploy Functionality

    • POLICY-1456 Policy Architecture and Roadmap Documentation

    • POLICY-1458 Create S3P JMeter Tests for Policy API

    • POLICY-1460 Create S3P JMeter Tests for PAP

    • POLICY-1461 Create S3P JMeter Tests for Policy XACML Engine (2nd Generation)

    • POLICY-1462 Create S3P JMeter Tests for Policy SDC Distribution

    • POLICY-1471 Policy Application Designer - Develop Guard and Control Loop Coordination Policy Type application

    • POLICY-1474 Modifications of Control Loop Operational Policy to support new Policy Lifecycle API

    • POLICY-1515 Prototype Policy Lifecycle API Swagger Entry Points

    • POLICY-1516 Prototype the Policy Decision API

    • POLICY-1541 PAP REST API for PDPGroup Query, Statistics & Delete

    • POLICY-1542 PAP REST API for PDPGroup Deployment, State Management & Health Check

  • [POLICY-1399] - This epic covers the work to support model-driven control loop design as defined by the Control Loop Subcommittee
    • Model-driven control loop related items

  • [POLICY-1404] - This epic covers the work to support the CCVPN Use Case for Dublin
    • POLICY-1405 Develop SDNC API for trigger bandwidth

  • [POLICY-1408] - This epic covers the work done with the Casablanca release
    • POLICY-1410 List Policy API

    • POLICY-1413 Dashboard enhancements

    • POLICY-1414 Push Policy and DeletePolicy API enhancement

    • POLICY-1416 Model enhancements to support CLAMP

    • POLICY-1417 Resiliency improvements

    • POLICY-1418 PDP APIs - make ClientAuth optional

    • POLICY-1419 Better multi-role support

    • POLICY-1420 Model enhancement to support embedded JSON

    • POLICY-1421 New audit data for push/delete

    • POLICY-1422 Enhanced encryption

    • POLICY-1423 Save original model file

    • POLICY-1427 Controller Logging Feature

    • POLICY-1489 PDP-D: Nested JSON Event Filtering support with JsonPath

    • POLICY-1499 Mdc Filter Feature

  • [POLICY-1438] - This epic covers the work to support 5G OOF PCI Use Case
    • POLICY-1463 Functional code changes in Policy for OOF SON use case

    • POLICY-1464 Config related aspects for OOF SON use case

  • [POLICY-1450] - This epic covers the work to support the Scale Out Use Case.
    • POLICY-1278 AAI named-queries are being deprecated and should be replaced with custom-queries

    • POLICY-1545 E2E Automation - Parse the newly added model ids from operation policy

  • Additional items delivered with the release.
    • POLICY-1159 Move expectException to policy-common/utils-test

    • POLICY-1176 Work on technical debt introduced by CLC POC

    • POLICY-1266 A&AI Modularity

    • POLICY-1274 Further improvement in PSSD S3P test

    • POLICY-1401 Build onap.policies.Monitoring TOSCA Policy Template

    • POLICY-1465 Support configurable Heap Memory Settings for JVM processes

Bug Fixes

The following bug fixes have been deployed with this release:

  • [POLICY-1241] - Test failure in drools-pdp if JAVA_HOME is not set

  • [POLICY-1289] - Apex only considers 200 response codes as successful result codes

  • [POLICY-1437] - Fix issues in FileSystemReceptionHandler of policy-distribution component

  • [POLICY-1501] - policy-engine JUnit tests are not independent

  • [POLICY-1627] - APEX does not support specification of a partitioner class for Kafka

Security Notes

Fixed Security Issues

  • [OJSI-117] - In default deployment POLICY (nexus) exposes HTTP port 30236 outside of cluster.

  • [OJSI-157] - In default deployment POLICY (policy-api) exposes HTTP port 30240 outside of cluster.

  • [OJSI-118] - In default deployment POLICY (policy-apex-pdp) exposes HTTP port 30237 outside of cluster.

  • [OJSI-184] - In default deployment POLICY (brmsgw) exposes HTTP port 30216 outside of cluster.

Known Security Issues

Known Vulnerabilities in Used Modules

POLICY code has been formally scanned during build time using NexusIQ, and all Critical vulnerabilities have been addressed; items that remain open have been assessed for risk and determined to be false positives. The POLICY open Critical security vulnerabilities and their risk assessment have been documented as part of the project (Dublin Release).

Known Issues

The following known issues will be addressed in a future release:

  • [POLICY-1795] - PAP: bounced apex and xacml pdps show deleted instance in pdp status through APIs.

  • [POLICY-1810] - API: ensure that the REST APIs (URLs) are supported and consistent regardless of the type of policy: operational, guard, tosca-compliant.

  • [POLICY-1277] - policy config takes too long to become retrievable in PDP

  • [POLICY-1378] - add support to append value into policyScope while one policy could be used by several services

  • [POLICY-1650] - Policy UI doesn’t show left menu or any content

  • [POLICY-1671] - policy/engine JUnit tests now take over 30 minutes to run

  • [POLICY-1725] - XACML PDP returns 500 vs 400 for bad syntax JSON

  • [POLICY-1793] - API|MODELS: Retrieving Legacy Operational Policy as a Tosca Policy with wrong version

  • [POLICY-1800] - API|PAP components use different version formats

  • [POLICY-1802] - Apex-pdp: context album is mandatory for policy model to compile

  • [POLICY-1808] - API|PAP|PDP-X [new] should publish docker images with the following tag X.Y-SNAPSHOT-latest

  • [POLICY-1818] - APEX does not allow arbitrary Kafka parameters to be specified

  • [POLICY-1276] - JRuby interpreter shutdown fails on second and subsequent runs

  • [POLICY-1803] - PAP should undeploy policies when subgroup is deleted

  • [POLICY-1291] - Maven Error when building Apex documentation in Windows

  • [POLICY-1872] - brmsgw fails building a jar - trafficgenerator dependency does not exist

Version: 3.0.2

Release Date

2019-03-31 (Casablanca Maintenance Release #2)

The following items were deployed with the Casablanca Maintenance Release:

Bug Fixes

  • [POLICY-1522] - Policy doesn’t send “payload” field to APPC

Security Fixes

  • [POLICY-1538] - Upgrade Elasticsearch to 6.4.x to clear security issue

License Issues

  • [POLICY-1433] - Remove proprietary licenses in PSSD test CSAR

Known Issues

The following known issue will be addressed in a future release.

  • [POLICY-1650] - Policy UI doesn’t show left menu or any content

A workaround for this issue is to bypass the Portal UI when accessing the Policy UI. See the PAP recipes for the specific procedure.

Version: 3.0.1

Release Date

2019-01-31 (Casablanca Maintenance Release)

The following items were deployed with the Casablanca Maintenance Release:

New Features

  • [POLICY-1221] - Policy distribution application to support HTTPS communication

  • [POLICY-1222] - Apex policy PDP to support HTTPS Communication

Bug Fixes

Version: 3.0.0

Release Date

2018-11-30 (Casablanca Release)

New Features

The Casablanca release for POLICY delivered the following Epics. For a full list of stories and tasks delivered in the Casablanca release, refer to JiraPolicyCasablanca.

  • [POLICY-701] - This epic covers the work to integrate Policy into the SDC Service Distribution

The policy team introduced a new application into the framework that provides integration of the Service Distribution Notifications from SDC to Policy.

  • [POLICY-719] - This epic covers the work to build the Policy Lifecycle API

  • [POLICY-726] - This epic covers the work to distribute policy from the PAP to the PDPs into the ONAP platform

  • [POLICY-876] - This epic covers the work to re-build how the PAP organizes the PDPs into groups.

The policy team did some forward-looking spike work towards re-building the Software Architecture.

  • [POLICY-809] - Maintain and implement performance

  • [POLICY-814] - 72 hour stability testing (component and platform)

The policy team made enhancements to the Drools PDP to further support S3P Performance. For the new Policy SDC Distribution application and the newly ingested Apex PDP, the team established S3P performance standards and performed 72-hour stability tests.

  • [POLICY-824] - Maintain and implement security

The policy team established an AAF Root Certificate for HTTPS communication and CADI/AAF integration in the MVP applications. In addition, many Java dependencies were upgraded to clear CLM security issues.

  • [POLICY-840] - Flexible control loop coordination facility.

Work towards a POC for control loop coordination policies was implemented.

  • [POLICY-841] - Covers the work required to support HPA

Enhancements were made to support the HPA use case through the use of the new Policy SDC Service Distribution application.

  • [POLICY-842] - This epic covers the work to support the Auto Scale Out functional requirements

Enhancements were made to support the Scale Out use case, enforcing new guard policies and updated SO and A&AI APIs.

  • [POLICY-851] - This epic covers the work to bring in the Apex PDP code

A new Apex PDP engine was ingested into the platform, and work was done to ensure the code cleared CLM security issues, Sonar issues, and Checkstyle checks.

  • [POLICY-1081] - This epic covers the contribution for the 5G OOF PCI Optimization use case.

Policy templates changes were submitted that supported the 5G OOF PCI optimization use case.

  • [POLICY-1182] - Covers the work to support CCVPN use case

Policy templates changes were submitted that supported the CCVPN use case.

Bug Fixes

The following bug fixes have been deployed with this release:

  • [POLICY-799] - Policy API Validation Does Not Validate Required Parent Attributes in the Model

  • [POLICY-869] - Control Loop Drools Rules should not have exceptions as well as die upon an exception

  • [POLICY-872] - investigate potential race conditions during rules version upgrades during call loads

  • [POLICY-878] - pdp-d: feature-pooling disables policy-controllers preventing processing of onset events

  • [POLICY-909] - get_ZoneDictionaryDataByName class type error

  • [POLICY-920] - Hard-coded path in junit test

  • [POLICY-921] - XACML Junit test cannot find property file

  • [POLICY-1083] - Mismatch in action cases between Policy and APPC

Security Notes

POLICY code has been formally scanned during build time using NexusIQ, and all Critical vulnerabilities have been addressed; items that remain open have been assessed for risk and determined to be false positives. The POLICY open Critical security vulnerabilities and their risk assessment have been documented as part of the project (Casablanca Release).

Known Issues

Version: 2.0.0

Release Date

2018-06-07 (Beijing Release)

New Features

The Beijing release for POLICY delivered the following Epics. For a full list of stories and tasks delivered in the Beijing release, refer to JiraPolicyBeijing.

  • [POLICY-390] - This epic covers the work to harden the Policy platform software base (incl 50% JUnit coverage)
    • POLICY-238 policy/drools-applications: clean up maven structure

    • POLICY-336 Address Technical Debt

    • POLICY-338 Address JUnit Code Coverage

    • POLICY-377 Policy Create API should validate input matches DCAE microservice template

    • POLICY-389 Clean up Jenkins CI/CD processes

    • POLICY-449 Policy API + Console : Common Policy Validation

    • POLICY-568 Integration with org.onap AAF project

    • POLICY-610 Support vDNS scale out for multiple times in Beijing release

  • [POLICY-391] - This epic covers the work to support Release Planning activities
    • POLICY-552 ONAP Licensing Scan - Use Restrictions

  • [POLICY-392] - Platform Maturity Requirements - Performance Level 1
    • POLICY-529 Platform Maturity Performance - Drools PDP

    • POLICY-567 Platform Maturity Performance - PDP-X

  • [POLICY-394] - This epic covers the work required to support a Policy developer environment in which Policy Developers can create and update policy templates/rules separately from the Policy Platform runtime.
    • POLICY-488 pap should not add rules to official template provided in drools applications

  • [POLICY-398] - This epic covers the body of work involved in supporting policy that is platform specific.
    • POLICY-434 need PDP /getConfig to return an indicator of where to find the config data - in config.content versus config field

  • [POLICY-399] - This epic covers the work required to policy enable Hardware Platform Enablement
    • POLICY-622 Integrate OOF Policy Model into Policy Platform

  • [POLICY-512] - This epic covers the work to support Platform Maturity Requirements - Stability Level 1
    • POLICY-525 Platform Maturity Stability - Drools PDP

    • POLICY-526 Platform Maturity Stability - XACML PDP

  • [POLICY-513] - Platform Maturity Requirements - Resiliency Level 2
    • POLICY-527 Platform Maturity Resiliency - Policy Engine GUI and PAP

    • POLICY-528 Platform Maturity Resiliency - Drools PDP

    • POLICY-569 Platform Maturity Resiliency - BRMS Gateway

    • POLICY-585 Platform Maturity Resiliency - XACML PDP

    • POLICY-586 Platform Maturity Resiliency - Planning

    • POLICY-681 Regression Test Use Cases

  • [POLICY-514] - This epic covers the work to support Platform Maturity Requirements - Security Level 1
    • POLICY-523 Platform Maturity Security - CII Badging - Project Website

  • [POLICY-515] - This epic covers the work to support Platform Maturity Requirements - Scalability Level 1
    • POLICY-531 Platform Maturity Scalability - XACML PDP

    • POLICY-532 Platform Maturity Scalability - Drools PDP

    • POLICY-623 Docker image re-design

  • [POLICY-516] - This epic covers the work to support Platform Maturity Requirements - Manageability Level 1
    • POLICY-533 Platform Maturity Manageability L1 - Logging

    • POLICY-534 Platform Maturity Manageability - Instantiation < 1 hour

  • [POLICY-517] - This epic covers the work to support Platform Maturity Requirements - Usability Level 1
    • POLICY-535 Platform Maturity Usability - User Guide

    • POLICY-536 Platform Maturity Usability - Deployment Documentation

    • POLICY-537 Platform Maturity Usability - API Documentation

  • [POLICY-546] - R2 Beijing - Various enhancements requested by clients to the way we handle TOSCA models.

Bug Fixes

The following bug fixes have been deployed with this release:

  • [POLICY-484] - Extend election handler run window and clean up error messages

  • [POLICY-494] - POLICY EELF Audit.log not in ECOMP Standards Compliance

  • [POLICY-501] - Fix issues blocking election handler and add directed interface for opstate

  • [POLICY-509] - Add IntelliJ file to .gitignore

  • [POLICY-510] - Do not enforce hostname validation

  • [POLICY-518] - StateManagement creation of EntityManagers.

  • [POLICY-519] - Correctly initialize the value of allSeemsWell in DroolsPdpsElectionHandler

  • [POLICY-629] - Fixed a bug on editor screen

  • [POLICY-684] - Fix regex for brmsgw dependency handling

  • [POLICY-707] - ONAP-PAP-REST unit tests fail on first build on clean checkout

  • [POLICY-717] - Fix a bug in checking required fields if the object has include function

  • [POLICY-734] - Fix Fortify Header Manipulation Issue

  • [POLICY-743] - Fixed data name since its name was changed on server side

  • [POLICY-753] - Policy Health Check failed with multi-node cluster

  • [POLICY-765] - junit test for guard fails intermittently

Security Notes

POLICY code has been formally scanned during build time using NexusIQ, and all Critical vulnerabilities have been addressed; items that remain open have been assessed for risk and determined to be false positives. The POLICY open Critical security vulnerabilities and their risk assessment have been documented as part of the project.

Known Issues

The following known issues will be addressed in a future release:

  • [POLICY-522] - PAP REST APIs undesired HTTP response body for 500 responses

  • [POLICY-608] - xacml components : remove hardcoded secret key from source code

  • [POLICY-764] - Policy Engine PIP Configuration JUnit Test fails intermittently

  • [POLICY-776] - OOF Policy TOSCA models are not correctly rendered

  • [POLICY-799] - Policy API Validation Does Not Validate Required Parent Attributes in the Model

  • [POLICY-801] - fields mismatch for OOF flavorFeatures between implementation and wiki

  • [POLICY-869] - Control Loop Drools Rules should not have exceptions as well as die upon an exception

  • [POLICY-872] - investigate potential race conditions during rules version upgrades during call loads

Version: 1.0.2

Release Date

2018-01-18 (Amsterdam Maintenance Release)

Bug Fixes

The following fixes were deployed with the Amsterdam Maintenance Release:

  • [POLICY-486] - pdp-x api pushPolicy fails to push latest version

Version: 1.0.1

Release Date

2017-11-16 (Amsterdam Release)

New Features

The Amsterdam release continued evolving the design-driven architecture and functionality of POLICY. The following is a list of Epics delivered with the release. For a full list of stories and tasks delivered in the Amsterdam release, refer to JiraPolicyAmsterdam.

  • [POLICY-31] - Stabilization of Seed Code
    • POLICY-25 Replace any remaining openecomp reference by onap

    • POLICY-32 JUnit test code coverage

    • POLICY-66 PDP-D Feature mechanism enhancements

    • POLICY-67 Rainy Day Decision Policy

    • POLICY-93 Notification API

    • POLICY-158 policy/engine: SQL injection Mitigation

    • POLICY-269 Policy API Support for Rainy Day Decision Policy and Dictionaries

  • [POLICY-33] - This epic covers the body of work involved in deploying the Policy Platform components
    • POLICY-40 MSB Integration

    • POLICY-124 Integration with oparent

    • POLICY-41 OOM Integration

    • POLICY-119 PDP-D: noop sinks

  • [POLICY-34] - This epic covers the work required to support a Policy developer environment in which Policy Developers can create and update policy templates/rules separately from the Policy Platform runtime.
    • POLICY-57 VF-C Actor code development

    • POLICY-43 Amsterdam Use Case Template

    • POLICY-173 Deployment of Operational Policies Documentation

  • [POLICY-35] - This epic covers the body of work involved in supporting policy that is platform specific.
    • POLICY-68 TOSCA Parsing for nested objects for Microservice Policies

  • [POLICY-36] - This epic covers the work required to capture policy during VNF on-boarding.

  • [POLICY-37] - This epic covers the work required to capture, update, extend Policy(s) during Service Design.
    • POLICY-64 CLAMP Configuration and Operation Policies for vFW Use Case

    • POLICY-65 CLAMP Configuration and Operation Policies for vDNS Use Case

    • POLICY-48 CLAMP Configuration and Operation Policies for vCPE Use Case

    • POLICY-63 CLAMP Configuration and Operation Policies for VOLTE Use Case

  • [POLICY-38] - This epic covers the work required to support service distribution by SDC.

  • [POLICY-39] - This epic covers the work required to support the Policy Platform during runtime.
    • POLICY-61 vFW Use Case - Runtime

    • POLICY-62 vDNS Use Case - Runtime

    • POLICY-59 vCPE Use Case - Runtime

    • POLICY-60 VOLTE Use Case - Runtime

    • POLICY-51 Runtime Policy Update Support

    • POLICY-328 vDNS Use Case - Runtime Testing

    • POLICY-324 vFW Use Case - Runtime Testing

    • POLICY-320 VOLTE Use Case - Runtime Testing

    • POLICY-316 vCPE Use Case - Runtime Testing

  • [POLICY-76] - This epic covers the body of work involved in supporting R1 Amsterdam Milestone Release Planning Milestone Tasks.
    • POLICY-77 Functional Test case definition for Control Loops

    • POLICY-387 Deliver the released policy artifacts

Bug Fixes
  • This is technically the first release of POLICY; the previous release was the seed code contribution. As such, the defects fixed in this release were raised during the course of the release. Anything not closed is captured below under Known Issues. For a list of defects fixed in the Amsterdam release, refer to JiraPolicyAmsterdam.

Known Issues
  • The operational policy template has been tested with the vFW, vCPE, vDNS and VOLTE use cases. Additional development may or may not be required for other scenarios.

  • For vLBS Use Case, the following steps are required to setup the service instance:
    • Create a Service Instance via VID.

    • Create a VNF Instance via VID.

    • Preload SDNC with topology data used for the actual VNF instantiation (both base and DNS scaling modules). NOTE: you may want to set “vlb_name_0” in the base VF module data to something unique. This is the vLB server name that DCAE will pass to Policy during the closed loop. If the same name is used multiple times, the Policy name-query to AAI will show multiple entries, one for each occurrence of that vLB VM name in the OpenStack zone. Note that this is not a limitation; typically, server names in a domain are supposed to be unique.

    • Instantiate the base VF module (vLB, vPacketGen, and one vDNS) via VID. NOTE: The name of the VF module MUST start with Vfmodule_. The same name MUST appear in the SDNC preload of the base VF module topology. This naming requirement will be relaxed for the Beijing Release.

    • Run heatbridge from the Robot VM using Vfmodule_ _ as the stack name (it is the actual stack name in OpenStack).

    • Populate AAI with a dummy VF module for vDNS scaling.

Security Issues
  • None at this time

Other
  • None at this time

End of Release Notes