Optimization Framework: Homing and Allocation
OOF-HAS is a policy-driven placement optimizing service (or homing service) that allows ONAP to deploy services automatically across multiple sites and multiple clouds. It enables placement based on a wide variety of policy constraints, including capacity, location, platform capabilities, and other service-specific constraints.
HAS is a distributed resource broker that enables automated, policy-driven, optimized placement of services on a global heterogeneous platform using ONAP. Given a set of service components (based on SO decomposition flows) and requirements for placing these components (driven by policies), HAS finds optimal resources (cloud regions or existing service instances) to home these service components such that all service requirements are met. HAS is architected as an extensible homing service that can accommodate a growing set of homing objectives, policy constraints, data sources, and placement algorithms. It is also service-agnostic by design and can onboard new services with minimal effort. Therefore, HAS naturally extends to a general policy-driven optimizing placement platform for a wider range of services, e.g., DCAE micro-services, ECOMP control loops, server capacity, etc. Finally, HAS provides a traceable mechanism for what-if analysis, which is critical for understanding a homing recommendation and resolving infeasibility scenarios.
OF-HAS is the implementation of the ONAP Homing Service. The formal project name in ONAP is OF-HAS. The informal name for the project is Conductor (inherited from the seed code), which is used interchangeably throughout the project.
Given the description of what needs to be deployed (demands) and the placement requirements (constraints), Conductor determines placement candidates that meet all constraints while optimizing the resource usage of the AIC infrastructure. A customer request may be satisfied by deploying new VMs in AIC (AIC inventory) or by using existing service instances with enough remaining capacity (service inventory).
From a canonical standpoint, Conductor is known as a homing service, in the same way OpenStack Heat is an orchestration service, or Nova is a compute service.
Architecture
Introduction
OOF-HAS is a policy-driven placement optimizing service (or homing service) that allows ONAP to deploy services automatically across multiple sites and multiple clouds. It enables placement based on a wide variety of policy constraints, including capacity, location, platform capabilities, and other service-specific constraints. In the Frankfurt release, it is also used for the E2E Network Slicing use case to select an appropriate existing Network Slice Instance (NSI) / Network Slice Subnet Instances (NSSIs), and/or provide the Slice Profile for creating a new NSSI which shall be part of a new NSI.
HAS is a distributed resource broker that enables automated, policy-driven, optimized placement of services on a global heterogeneous platform using ONAP. Given a set of service components (based on SO decomposition flows) and requirements for placing these components (driven by policies), HAS finds optimal resources (cloud regions or existing service instances) to home these service components such that all service requirements are met. HAS is architected as an extensible homing service that can accommodate a growing set of homing objectives, policy constraints, data sources, and placement algorithms. It is also service-agnostic by design and can onboard new services with minimal effort. Therefore, HAS naturally extends to a general policy-driven optimizing placement platform for a wider range of services, e.g., DCAE micro-services, ECOMP control loops, server capacity, etc. Finally, HAS provides a traceable mechanism for what-if analysis, which is critical for understanding a homing recommendation and resolving infeasibility scenarios.
HAS in Service Instantiation workflows
Below is an illustration of HAS interactions with other ONAP components to enable Policy driven homing. The homing policy constraints have been expanded (and categorized) to highlight the range of constraints that could be provided to HAS for determining the homing solution. The figure also shows how HAS uses a plugin-based approach to allow an extensible set of constraints and data models.

More information on how homing constraints are specified can be found at OOF-HAS Homing Specification Guide, and a sample homing template has been drawn up for residential vCPE Homing Use Case.
HAS Architecture (R2)

Lifecycle of a Homing request in HAS

Use cases
Residential vCPE: https://wiki.onap.org/display/DW/vCPE+Homing+Use+Case
5G RAN: https://wiki.onap.org/display/DW/Homing+5G+RAN+VNFs
E2E Network Slicing: https://wiki.onap.org/display/DW/E2E+Network+Slicing+Use+Case+in+R6+Frankfurt
A sample heuristic greedy algorithm of HAS (using vCPE as an example)

Components
Conductor consists of five services that work together:
``conductor-api``: An HTTP REST API
``conductor-controller``: Validation, translation, and status/results
``conductor-data``: Inventory provider and service controller gateway
``conductor-solver``: Processing and solution calculation
``conductor-reservation``: Reserves the solution recommended by the solver component.
Workflow
- Deployment plans are created, viewed, and deleted via ``conductor-api`` and its REST API.
- Included within each ``conductor-api`` plan request is a Homing Template. Homing Templates describe a set of inventory demands and constraints to be solved against.
- ``conductor-api`` hands off all API requests to ``conductor-controller`` for handling.
- All deployment plans are assigned a unique identifier (UUID-4), which can be used to check for solution status asynchronously. (Conductor does not support callbacks at this time.)
- ``conductor-controller`` ensures templates are well-formed and valid. Errors and remediation are made visible through ``conductor-api``. When running in debug mode, the API will also include a python traceback in the response body, if available.
- ``conductor-controller`` uses ``conductor-data`` to resolve demands against a particular inventory provider (e.g., A&AI).
- ``conductor-controller`` translates the template into a format suitable for solving.
- As each template is translated, ``conductor-solver`` begins working on it.
- ``conductor-solver`` uses ``conductor-data`` to resolve constraints against a particular service controller (e.g., SDN-C).
- ``conductor-solver`` determines the most suitable inventory to recommend.
- ``conductor-reservation`` attempts to reserve the solved solution in SDN-GC.
NOTE: There is no Command Line Interface or Python API Library at this time.
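Since there is no client library, the REST API can be driven directly. The following is a minimal, illustrative sketch (not an official client) using the Python requests library; the base URL, port, and template file name are assumptions for illustration, and the status values follow the state diagram later in this document.

# Minimal illustrative sketch (not an official client): submit a homing
# template to conductor-api and poll for a result. The base URL and the
# template file name are assumptions for illustration only.
import json
import time

import requests

CONDUCTOR_API = "http://localhost:8091/v1"  # assumed conductor-api endpoint

def create_and_wait(template_path, name="sample-plan", timeout=300):
    """POST /v1/plans, then poll GET /v1/plans/{id} until a final state."""
    with open(template_path) as f:
        template = f.read()

    resp = requests.post(f"{CONDUCTOR_API}/plans",
                         json={"name": name, "template": template, "limit": 3})
    resp.raise_for_status()
    plan = resp.json()["plan"]
    plan_id = plan["id"]

    deadline = time.time() + timeout
    # Working states per the state diagram: template, translated, solving, reserving.
    while plan["status"] in ("template", "translated", "solving", "reserving"):
        if time.time() > deadline:
            raise TimeoutError(f"plan {plan_id} is still {plan['status']}")
        time.sleep(5)
        plan = requests.get(f"{CONDUCTOR_API}/plans/{plan_id}").json()["plan"]

    return plan  # final status: solved, done, not found, or error

if __name__ == "__main__":
    result = create_and_wait("homing_template.yaml")
    print(result["status"])
    print(json.dumps(result.get("recommendations", []), indent=2))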
DB Backend
All Conductor services use a DB backend for data storage/persistence and/or as an RPC transport mechanism. The current implementation supports two services that can be used as the backend: MUSIC and ETCD.
Offered APIs
This document describes the Homing API, provided by the Homing and Allocation service (Conductor).
To view API documentation in the interactive swagger UI download the following and paste into the swagger tool here: https://editor.swagger.io
GET /
retrieve versions
Description: retrieve supported versions of the API
Produces: [‘application/json’]
Responses
200 - list of supported versions
400 - bad request
401 - unauthorized request
POST /v1/plans
create a plan
Description: creates a plan from one or more service demands
Consumes: [‘application/json’]
Produces: [‘application/json’]
Parameters
Name | Position | Description | Type
---|---|---|---
demand | body | service demand |
Responses
201 - plan created
400 - bad request
401 - unauthorized request
GET /v1/plans/{plan_id}
retrieve a plan
Description: retrieve a plan
Produces: [‘application/json’]
Parameters
Name | Position | Description | Type
---|---|---|---
plan_id | path | UUID of plan identifier | string
Responses
200 - retrieve a plan
400 - bad request
401 - unauthorized request
500 - Internal Server Error
DELETE /v1/plans/{plan_id}
delete a plan
Description: delete a plan
Produces: [‘application/json’]
Parameters
Name | Position | Description | Type
---|---|---|---
plan_id | path | UUID of plan identifier | string
Responses
204 - deleted a plan
400 - bad request
401 - unauthorized request
State Diagram
----------------------------------------
| |
| /---> solved ---> reserving ---> done
| / /
template -> translated -> solving ------> not found /
| ^ | \ /
| | conditionally | \---> error <----/
| | (see note) | ^
| \---------------/ |
\---------------------------------------/
NOTE: When Conductor's solver service is started in non-concurrent mode (the default), it will reset any plans found waiting and stuck in the solving state back to translated.
{
"name": "PLAN_NAME",
"template": "CONDUCTOR_TEMPLATE",
"limit": 3
}
{
"plan": {
"name": "PLAN_NAME",
"id": "ee1c5269-c7f0-492a-8652-f0ceb15ed3bc",
"transaction_id": "6bca5f2b-ee7e-4637-8b58-1b4b36ed10f9",
"status": "solved",
"message", "Plan PLAN_NAME is solved.",
"links": [
{
"href": "http://homing/v1/plans/ee1c5269-c7f0-492a-8652-f0ceb15ed3bc",
"rel": "self"
}
],
"recommendations": [
{
"DEMAND_NAME_1": {
"inventory_provider": "aai",
"service_resource_id": "4feb0545-69e2-424c-b3c4-b270e5f2a15d",
"candidate": {
"candidate_id": "99befee8-e8c0-425b-8f36-fb7a8098d9a9",
"inventory_type": "service",
"location_type": "aic",
"location_id": "dal01",
"host_id" : "vig20002vm001vig001"
},
"attributes": {OPAQUE-DICT}
},
"DEMAND_NAME_2": {
"inventory_provider": "aai",
"service_resource_id": "578eb063-b24a-4654-ba9e-1e5cf7eb9183",
"candidate": {
"inventory_type": "cloud",
"location_type": "aic",
"location_id": "dal02"
},
"attributes": {OPAQUE-DICT}
}
},
{
"DEMAND_NAME_1": {
"inventory_provider": "aai",
"service_resource_id": "4feb0545-69e2-424c-b3c4-b270e5f2a15d",
"candidate": {
"candidate_id": "99befee8-e8c0-425b-8f36-fb7a8098d9a9",
"inventory_type": "service",
"location_type": "aic",
"location_id": "dal03",
"host_id" : "vig20001vm001vig001"
},
"attributes": {OPAQUE-DICT}
},
"DEMAND_NAME_2": {
"inventory_provider": "aai",
"service_resource_id": "578eb063-b24a-4654-ba9e-1e5cf7eb9183",
"candidate": {
"inventory_type": "cloud",
"location_type": "aic",
"location_id": "dal04"
},
"attributes": {OPAQUE-DICT}
}
},
...
]
}
}
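For illustration, a small sketch of how a client might walk the recommendations structure shown above (each recommendation is a dictionary keyed by demand name). It assumes the documented response shape and is not part of Conductor.

# Sketch: summarize the homing decision from a plan response shaped like the
# example above. Assumes the documented response structure; not part of Conductor.
def summarize_recommendations(plan_response):
    plan = plan_response["plan"]
    if plan["status"] != "solved":
        return []
    summary = []
    for recommendation in plan.get("recommendations", []):
        placement = {}
        for demand_name, decision in recommendation.items():
            candidate = decision["candidate"]
            placement[demand_name] = {
                "inventory_type": candidate["inventory_type"],  # "service" or "cloud"
                "location_id": candidate["location_id"],        # e.g. "dal01"
                "host_id": candidate.get("host_id"),            # present for service candidates
            }
        summary.append(placement)
    return summary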
Show plan details
GET /v1/plans/{plan_id}
Normal response codes: 200
Error response codes: unauthorized (401), itemNotFound (404)
Request parameters
Parameter | Style | Type | Description
---|---|---|---
plan_id | plain | csapi:UUID | The UUID of the plan.
Response Parameters
See the Response Parameters for Create a plan.
Delete a plan
DELETE /v1/plans/{plan_id}
Normal response codes: 204
Error response codes: badRequest (400), unauthorized (401), itemNotFound (404)
Request parameters
Parameter | Style | Type | Description
---|---|---|---
plan_id | plain | csapi:UUID | The UUID of the plan.
This operation does not accept a request body and does not return a response body.
API Errors
In the event of an error with a status other than unauthorized (401), a detailed response body is returned.
Response parameters
Parameter | Style | Type | Description
---|---|---|---
title | plain | xsd:string | Human-readable name.
explanation | plain | xsd:string | Detailed explanation with remediation (if any).
code | plain | xsd:int | HTTP Status Code.
error | plain | xsd:dict | Error dictionary. Keys include message, traceback, and type.
message | plain | xsd:string | Internal error message.
traceback | plain | xsd:string | Python traceback (if available).
type | plain | xsd:string | HTTP Status class name (from python-webob).
Examples
A plan with the name “pl an” is considered a bad request because the name contains a space.
{
"title": "Bad Request",
"explanation": "-> name -> pl an did not pass validation against callable: plan_name_type (must contain only uppercase and lowercase letters, decimal digits, hyphens, periods, underscores, and tildes [RFC 3986, Section 2.3])",
"code": 400,
"error": {
"message": "The server could not comply with the request since it is either malformed or otherwise incorrect.",
"type": "HTTPBadRequest"
}
}
The HTTP COPY method was attempted but is not allowed.
{
"title": "Method Not Allowed",
"explanation": "The COPY method is not allowed.",
"code": 405,
"error": {
"message": "The server could not comply with the request since it is either malformed or otherwise incorrect.",
"type": "HTTPMethodNotAllowed"
}
}
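Clients can surface the documented error fields (title, explanation, code, error) when a call fails. A hedged sketch with a hypothetical helper, not part of Conductor:

# Sketch: raise a readable error using the documented fields when a request fails.
import requests

def post_plan(base_url, payload):
    resp = requests.post(f"{base_url}/v1/plans", json=payload)
    if resp.status_code >= 400:
        try:
            body = resp.json()
        except ValueError:
            resp.raise_for_status()  # no JSON body (e.g., 401); fall back to the default error
        raise RuntimeError(
            "{} ({}): {}".format(body.get("title", "Error"),
                                 body.get("code", resp.status_code),
                                 body.get("explanation", ""))
        )
    return resp.json()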
Consumed APIs
The following are the dependencies for the project based on the scope for the Casablanca Release.
AAI
See ReadTheDocs documentation for Active and Available Inventory component
Multi-Cloud
See ReadTheDocs documentation for Multi-Cloud component
MUSIC
See ReadTheDocs documentation for Multi-site State Coordination Service component
SDNC
See ReadTheDocs documentation for Software Defined Network Controller component
SMS
The Secrets Management Service is a component of the Application Authorization Framework. Disclaimer: as of this writing, the AAF RTD does not include discussion of SMS.
Installation
OOF-HAS OOM Charts
HAS charts are located in the OOM repository
Please refer to the OOM documentation for deploying/undeploying the OOF components via Helm charts in the Kubernetes environment.
Local Installation
HAS components can be deployed in two ways in a local environment for development and testing.
Docker Installation
Building Docker Images
Build the HAS docker images using the maven build from the root of the project
git clone --depth 1 https://gerrit.onap.org/r/optf/has
cd has
mvn clean install
Installing the components and simulators
HAS docker containers can be installed using the shell scripts in the CSIT directory, which include scripts to deploy the startup dependencies (SMS, ETCD) and a few simulators.
export WORKSPACE=$(pwd)/csit
./csit/plans/default/setup.sh
Similarly the installed components can be deleted using the teardown script.
export WORKSPACE=$(pwd)/csit
./csit/plans/default/teardown.sh
Note: The simulator setup can be disabled by commenting out the corresponding commands in the setup script.
Installation from the source
HAS components can be installed directly in a Linux-based environment. This is useful for testing and debugging during development.
Requirements
Conductor is supported on most Linux-based environments, but most of the development and testing has been done on Ubuntu-based machines.
Ensure the following packages are present, as they may not be included by default:
libffi-dev
python3.8
Installing Dependent Components (AAF-SMS, ETCD/MUSIC)
The scripts to install and uninstall these components are present in the CSIT directory.
Note: For setting up SMS, ETCD and MUSIC, Docker must be present on the machine.
For installing/uninstalling AAF-SMS,
cd csit/scripts
# install SMS
source setup-sms.sh
# uninstall SMS
docker stop sms
docker stop vault
docker rm sms
docker rm vault
For installing/uninstalling ETCD
cd csit/scripts
# install etcd
source etcd_Script.sh
# uninstall etcd
source etcd_teardown_script.sh
Installing From Source
IMPORTANT: Perform the steps in this section after optionally configuring and activating a python virtual environment.
Conductor source in ONAP is maintained in https://gerrit.onap.org/r/optf/has.
Clone the git repository, and then install from within the conductor
directory:
git clone --depth 1 https://gerrit.onap.org/r/optf/has
cd has/conductor
pip install --no-cache-dir -e .
Verifying Installation
Each of the five Conductor services may be invoked with the --help option:
conductor-api -- --help
conductor-controller --help
conductor-data --help
conductor-solver --help
conductor-reservation --help
NOTE: The extra -- in the conductor-api command is deliberate. It is used as a separator between the arguments used to start the WSGI server and the arguments passed to the WSGI application.
Running for the First Time
Each Conductor component may be run interactively. In this case, the user it runs as does not particularly matter.
When running interactively, it is suggested to run each command in a separate terminal session and in the following order:
conductor-data --config-file=/etc/conductor/conductor.conf
conductor-controller --config-file=/etc/conductor/conductor.conf
conductor-solver --config-file=/etc/conductor/conductor.conf
conductor-reservation --config-file=/etc/conductor/conductor.conf
conductor-api --port=8091 -- --config-file=/etc/conductor/conductor.conf
Sample API Calls and Homing Templates
A Postman collection illustrating sample requests is available upon request. The collection will also be added in a future revision.
Sample homing templates are also available.
Configuration
Configuration files are located in etc/conductor
relative to the
python environment Conductor is installed in.
To generate a sample configuration file, change to the directory just
above where etc/conductor
is located (e.g., /
for the default
environment, or the virtual environment root directory). Then:
$ oslo-config-generator --config-file=etc/conductor/conductor-config-generator.conf
This will generate etc/conductor/conductor.conf.sample.
Because the configuration directory and files will include credentials, consider removing world permissions:
$ find etc/conductor -type f -exec chmod 640 {} +
$ find etc/conductor -type d -exec chmod 750 {} +
The sample config may then be copied and edited. Be sure to back up any previous conductor.conf if necessary.
$ cd etc/conductor
$ cp -p conductor.conf.sample conductor.conf
conductor.conf is fully annotated with descriptions of all options. Defaults are included, with all options commented out. Conductor will use defaults even if an option is not present in the file. To change an option, simply uncomment it and edit its value.
With the exception of the DEFAULT section, it's best to restart the Conductor services after making any config changes. In some cases, only one particular service actually needs to be restarted. When in doubt, however, it's best to restart all of them.
A few options in particular warrant special attention:
[DEFAULT]
# If set to true, the logging level will be set to DEBUG instead of the default
# INFO level. (boolean value)
# Note: This option can be changed without restarting.
#debug = false
For more verbose logging across all Conductor services, set debug to true.
[db_options]
# db_backend to use
db_backend = etcd
# Use music mock api
music_mock = False
Set db_backend to the DB being deployed (music or etcd). Based on this option, Conductor will decide which client to use to access the backend.
[aai]
# Base URL for A&AI, up to and not including the version, and without a
# trailing slash. (string value)
#server_url = https://controller:8443/aai
# SSL/TLS certificate file in pem format. This certificate must be registered
# with the A&AI endpoint. (string value)
#certificate_file = certificate.pem
# Private Certificate Key file in pem format. (string value)
#certificate_key_file = certificate_key.pem
# Certificate Authority Bundle file in pem format. Must contain the appropriate
# trust chain for the Certificate file. (string value)
#certificate_authority_bundle_file = certificate_authority_bundle.pem
Set server_url to the A&AI server URL, up to but not including the version, omitting any trailing slash. Conductor supports A&AI API v9 at a minimum.
Set the certificate-prefixed keys to the appropriate SSL/TLS-related files.
IMPORTANT: The A&AI server may have a mismatched host/domain name
and SSL/TLS certificate. In such cases, certificate verification will
fail. To mitigate this, certificate_authority_bundle_file
may be set
to an empty value. While Conductor normally requires a CA Bundle
(otherwise why bother using SSL/TLS), this requirement has been
temporarily relaxed so that development and testing may continue.
[messaging_server]
# Log debug messages. Default value is False. (boolean value)
#debug = false
When the DEFAULT section's debug option is true, set this section's debug option to true to enable detailed Conductor-side RPC-over-Music debug messages.
Be aware, it is voluminous. "You have been warned." :)
[music_api]
# List of hostnames (round-robin access) (list value)
#hostnames = localhost
# Log debug messages. Default value is False. (boolean value)
#debug = false
Set hostnames to match wherever the Music REST API is being hosted (wherever Apache Tomcat and MUSIC.war are located).
When the DEFAULT section's debug option is true, set this section's debug option to true to enable detailed Conductor-side MUSIC API debug messages.
The previous comment around the volume of log lines applies even more so here. (Srsly. We're not kidding.)
IMPORTANT: Conductor does not presently use Music’s atomic consistency features due to concern around lock creation/acquisition. Instead, Conductor uses eventual consistency. For this reason, consistency issues may occur when using Music in a multi-server, High Availability configuration.
[sdnc]
# Base URL for SDN-C. (string value)
#server_url = https://controller:8443/restconf
# Basic Authentication Username (string value)
#username = <None>
# Basic Authentication Password (string value)
#password = <None>
Set server_url to the SDN-C server URL, omitting any trailing slash.
Set username and password to the appropriate values as directed by SDN-C.
Installation - Advanced Options
Running conductor-api Under apache2 httpd and mod_wsgi
conductor-api may be run as-is for development and test purposes. When used in a production environment, it is recommended that conductor-api run under a multithreaded httpd service supporting WSGI, tuned as appropriate.
Configuration instructions for apache2 httpd and nginx are included herein. Respective package requirements are:
Sample configuration files have been provided in the repository.
These instructions presume a conductor user exists. See the Service Scripts section for details.
First, set up a few directories:
$ sudo mkdir -p /var/www/conductor
$ sudo mkdir /var/log/apache2/conductor
To install, place the Conductor WSGI application file in /var/www/conductor.
Set the owner/group of both directories/files to conductor:
$ sudo chown -R conductor:conductor /var/log/apache2/conductor /var/www/conductor
Next, place the Conductor apache2 httpd site config file in /etc/apache2/sites-available.
Set the owner/group to root:
$ sudo chown -R root:root /etc/apache2/sites-available/conductor.conf
If Conductor was installed in a python virtual environment, append python-home=VENV to WSGIDaemonProcess, where VENV is the python virtual environment root directory.
IMPORTANT: Before proceeding, disable the conductor-api sysvinit and upstart services, as the REST API will now be handled by apache2 httpd. Otherwise there will be a port conflict, and you will be sad.
Enable the Conductor site, ensure the configuration syntax is valid, and gracefully restart apache2 httpd.
$ sudo a2ensite conductor
$ sudo apachectl -t
Syntax OK
$ sudo apachectl graceful
To disable the Conductor site, run sudo a2dissite conductor, then gracefully restart once again. Optionally, re-enable the conductor-api sysvinit and upstart services.
Running conductor-api Under nginx and uWSGI
Sample configuration files have been provided in the repository.
These instructions presume a conductor user exists. See the Service Scripts section for details.
To install, place the Conductor nginx config files and WSGI application file in /etc/nginx (taking care to back up any prior configuration files). It may be desirable to incorporate Conductor's nginx.conf into the existing config.
Rename app.wsgi to conductor.wsgi:
$ cd /etc/nginx
$ sudo mv app.wsgi conductor.wsgi
In nginx.conf, set CONDUCTOR_API_FQDN to the server name.
IMPORTANT: Before proceeding, disable the conductor-api sysvinit and upstart services, as the REST API will now be handled by nginx. Otherwise there will be a port conflict, and you will be sad.
Restart nginx:
$ sudo service nginx restart
Then, run conductor-api under nginx using uWSGI:
$ sudo uwsgi -s /tmp/uwsgi.sock --chmod-socket=777 --wsgi-file /etc/nginx/conductor.wsgi --callable application --set port=8091
To use a python virtual environment, add --venv VENV to the uwsgi command, where VENV is the python virtual environment root directory.
Networking
All conductor services require line-of-sight access to all Music/ETCD servers/ports.
The conductor-api service uses TCP port 8091.
Security
conductor-api is accessed via HTTP. SSL/TLS certificates and AuthN/AuthZ (e.g., AAF) are supported at this time in the Kubernetes environment.
Conductor makes use of plugins that act as gateways to inventory providers and service controllers. At present, two plugins are supported out-of-the-box: A&AI and SDN-C, respectively.
A&AI requires two-way SSL/TLS. Certificates must be registered and whitelisted with A&AI. SDN-C uses HTTP Basic Authentication. Consult with each respective service for official information on how to obtain access.
Storage
For a cloud environment in particular, it may be desirable to use a separate block storage device (e.g., an OpenStack Cinder volume) for logs, configuration, and other data persistence. In this way, it becomes a trivial matter to replace the entire VM if necessary, followed by reinstallation of the app and any supplemental configuration. Take this into consideration when setting various Conductor config options.
Logging
HAS uses a single logger, oslo, across all the components. The logging format is compliant with the EELF recommendations, including having the following logs: error, audit, metric, application.
Log statements follow the format below (values default to preset values when missing):
Timestamp|RequestId|ServiceInstanceId|ThreadId|Virtual Server Name|ServiceName|InstanceUUID|Log Level|Alarm Severity Level|Server IP Address|HOST NAME|Remote IP Address|Class name|Timer|Detailed Message
The logger util module can be found at:
<>/has/conductor/conductor/common/utils/conductor_logging_util.py
Log File Rotation
Sample logrotate.d configuration files have been provided in the repository.
To install, place all Conductor logrotate files in /etc/logrotate.d.
Set file ownership and permissions:
$ sudo chown root:root /etc/logrotate.d/conductor*
$ sudo chmod 644 /etc/logrotate.d/conductor*
logrotate.d automatically recognizes new files at the next log rotation opportunity and does not require restarting.
Homing Specification Guide
This document describes the Homing Template format, used by the Homing service. It is a work in progress and subject to frequent revision.
Template Structure
Homing templates are defined in YAML and follow the structure outlined below.
homing_template_version: 2017-10-10
parameters:
PARAMETER_DICT
locations:
LOCATION_DICT
demands:
DEMAND_DICT
constraints:
CONSTRAINT_DICT
reservations:
RESERVATION_DICT
optimization:
OPTIMIZATION
- homing_template_version: This key with value 2017-10-10 (or a later date) indicates that the YAML document is a Homing template of the specified version.
- parameters: This section allows for specifying input parameters that have to be provided when instantiating the homing template. Typically, this section is used for providing runtime parameters (like SLA thresholds), which in turn are used in the existing homing policies. The section is optional and can be omitted when no input is required.
- locations: This section contains the declaration of geographic locations. This section is optional and can be omitted when no input is required.
- demands: This section contains the declaration of demands. This section with at least one demand should be defined in any Homing template, or the template would not really do anything when being instantiated.
- constraints: This section contains the declaration of constraints. The section is optional and can be omitted when no input is required.
- reservations: This section contains the declaration of required reservations. This section is optional and can be omitted when reservations are not required.
- optimization: This section allows the declaration of an optimization. This section is optional and can be omitted when no input is required.
Homing Template Version
The value of homing_template_version tells HAS not only the format of the template but also the features that will be validated and supported. Only one value, 2017-10-10, is supported in the initial release of HAS.
homing_template_version: 2017-10-10
Parameters
The parameters section allows for specifying input parameters that have to be provided when instantiating the template. Such parameters are typically used for providing runtime inputs (like SLA thresholds), which in turn are used in the existing homing policies. This also helps build reusable homing constraints, where these parameters can be embedded at design time and their corresponding values supplied at runtime.
Each parameter is specified with the name followed by its value. Values can be strings, lists, or dictionaries.
Example
In this example, provider_name is a string and service_info is a dictionary containing both a string and a list (keyed by base_url and nod_config, respectively).
parameters:
provider_name: multicloud
service_info:
base_url: http://serviceprovider.sdngc.com/
nod_config:
- http://nod/config_a.yaml
- http://nod/config_b.yaml
- http://nod/config_c.yaml
- http://nod/config_d.yaml
A parameter can be referenced in place of any value. See the Intrinsic Functions section for more details.
Locations
One or more locations may be declared. A location may be referenced by one or more constraints. Locations may be defined in any of the following ways:
Coordinate
A geographic coordinate expressed as a latitude and longitude.
Key | Value
---|---
latitude | Latitude of the location.
longitude | Longitude of the location.
Host Name
An opaque host name that can be translated to a coordinate via an inventory provider (e.g., A&AI).
Key | Value
---|---
host_name | Host name identifying a location.
CLLI
Common Language Location Identification (CLLI) code (https://en.wikipedia.org/wiki/CLLI_code).
Key | Value
---|---
clli_code | 8 character CLLI.
Questions
Do we need functions that can convert one of these to the other? E.g., CLLI Codes to a latitude/longitude
Placemark
An address expressed in geographic region-agnostic terms (referred to as a placemark).
This is an example as of the Frankfurt release. Support for this schema is deferred to a subsequent release.
Key | Value
---|---
country | The abbreviated country name associated with the placemark.
postal_code | The postal code associated with the placemark.
administrative_area | The state or province associated with the placemark.
sub_administrative_area | Additional administrative area information for the placemark.
locality | The city associated with the placemark.
sub_locality | Additional city-level information for the placemark.
thoroughfare | The street address associated with the placemark.
sub_thoroughfare | Additional street-level information for the placemark.
Note: A geocoder could be used to convert placemarks to a latitude/longitude.
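As an illustration of the note above (and only as an illustration; geopy is not a HAS dependency), a placemark could be geocoded to a coordinate like this:

# Illustration only (not part of HAS): convert a placemark-style address into
# a latitude/longitude using the geopy library, as the note above suggests.
from geopy.geocoders import Nominatim

def placemark_to_coordinate(placemark):
    geolocator = Nominatim(user_agent="homing-doc-example")  # hypothetical agent name
    address = ", ".join(str(placemark[k]) for k in
                        ("sub_thoroughfare", "thoroughfare", "locality",
                         "administrative_area", "postal_code") if k in placemark)
    location = geolocator.geocode(address)
    if location is None:
        return None
    return {"latitude": location.latitude, "longitude": location.longitude}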
Examples
The following examples illustrate a location expressed in coordinate, host_name, CLLI, and placemark, respectively.
locations:
location_using_coordinates:
latitude: 32.897480
longitude: -97.040443
host_location_using_host_name:
host_name: USESTCDLLSTX55ANZ123
location_using_clli:
clli_code: DLLSTX55
location_using_placemark:
sub_thoroughfare: 1
thoroughfare: ATT Way
locality: Bedminster
administrative_area: NJ
postal_code: 07921-2694
Demands
A demand can be satisfied by using candidates drawn from inventories. Each demand is uniquely named. Inventory is considered to be opaque and can represent anything from which candidates can be drawn.
A demand’s resource requirements are determined by asking an inventory provider for one or more sets of inventory candidates against which the demand will be made. An explicit set of candidates may also be declared, for example, if the only candidates for a demand are predetermined.
Demand criteria is dependent upon the inventory provider in use.
Provider-agnostic Schema
Key |
Value |
---|---|
|
A HAS-supported inventory provider. |
|
The reserved words |
|
A list of key-value pairs, that is used to select inventory candidates that match all the specified attributes. The key should be a uniquely identifiable attribute at the inventory provider. |
|
A list of key-value pairs, that will be added to the candidate’s attribute directly from template. |
|
If |
|
If |
|
The default cost of an inventory candidate, expressed as currency. This must be specified if the inventory provider may not always return a cost. |
|
A list of one or more candidates from which a solution will be explored. Must be a valid candidate as described in the candidate schema. |
|
A list of one or more candidates that should be excluded from the search space. Must be a valid candidate as described in the candidate schema. |
|
The current placement for the demand. Must be a valid candidate as described in the candidate schema. |
Note
The demand attributes in the template come from either policy or from a northbound request scope.
Examples
The following example helps understand a demand specification using Active & Available Inventory (A&AI), the inventory provider-of-record for ONAP.
Inventory Provider Criteria
Key |
Value |
---|---|
|
Examples: |
|
The reserved words |
|
A list of key-value pairs to match against inventory when drawing candidates. |
|
A list of key-value pairs, that will be added to the candidate’s attribute directly from template. |
|
Examples may include |
|
Must be a valid service id.
Examples may include |
|
The default cost of an inventory candidate, expressed as a unitless number. |
|
A list of one or more valid candidates. See Candidate Schema for details. |
|
A list of one or more valid candidates. See Candidate Schema for details. |
|
A single valid candidate, representing the current placement for the demand. See candidate schema for details. |
Candidate Schema
The following is the schema for a valid candidate:
- candidate_id uniquely identifies a candidate. Currently, it is either a Service Instance ID or Cloud Region ID.
- candidate_type identifies the type of the candidate. Currently, it is either cloud or service.
- inventory_type is defined as described in Inventory Provider Criteria (above).
- inventory_provider identifies the inventory from which the candidate was drawn.
- host_id is an ID of a specific host (used only when referring to service/existing inventory).
- cost is expressed as a unitless number.
- location_id is always a location ID of the specified location type (e.g., for a type of cloud this will be a Cloud Region ID).
- location_type is an inventory provider supported location type.
- latitude is a valid latitude corresponding to the location_id.
- longitude is a valid longitude corresponding to the location_id.
- city (Optional) city corresponding to the location_id.
- state (Optional) state corresponding to the location_id.
- country (Optional) country corresponding to the location_id.
- region (Optional) geographic region corresponding to the location_id.
- complex_name (Optional) name of the complex corresponding to the location_id.
- cloud_owner (Optional) refers to the cloud owner (e.g., azure, aws, att, etc.).
- cloud_region_version (Optional) is an inventory provider supported version of the cloud region.
- physical_location_id (Optional) is an inventory provider supported CLLI code corresponding to the cloud region.
Examples
Service Candidate
{
"candidate_id": "1ac71fb8-ad43-4e16-9459-c3f372b8236d",
"candidate_type": "service",
"inventory_type": "service",
"inventory_provider": "aai",
"host_id": "vnf_123456",
"cost": "100",
"location_id": "DLLSTX9A",
"location_type": "azure",
"latitude": "32.897480",
"longitude": "-97.040443",
"city": "Dallas",
"state": "TX",
"country": "USA",
"region": "US",
"complex_name": "dalls_one",
"cloud_owner": "att-aic",
"cloud_region_version": "1.1",
"physical_location_id": "DLLSTX9A"
}
Cloud Candidate
{
"candidate_id": "NYCNY55",
"candidate_type": "cloud",
"inventory_type": "cloud",
"inventory_provider": "aai",
"cost": "100",
"location_id": "NYCNY55",
"location_type": "azure",
"latitude": "40.7128",
"longitude": "-74.0060",
"city": "New York",
"state": "NY",
"country": "USA",
"region": "US",
"complex_name": "ny_one",
"cloud_owner": "att-aic",
"cloud_region_version": "1.1",
"physical_location_id": "NYCNY55",
"flavors": {
"flavor":[
{
"flavor-id":"9cf8220b-4d96-4c30-a426-2e9382f3fff2",
"flavor-name":"flavor-numa-cpu-topology-instruction-set",
"flavor-vcpus":64,
"flavor-ram":65536,
"flavor-disk":1048576,
"flavor-ephemeral":128,
"flavor-swap":"0",
"flavor-is-public":false,
"flavor-selflink":"pXtX",
"flavor-disabled":false,
"hpa-capabilities":{
"hpa-capability":[
{
"hpa-capability-id":"01a4bfe1-1993-4fda-bd1c-ef333b4f76a9",
"hpa-feature":"cpuInstructionSetExtensions",
"hpa-version":"v1",
"architecture":"Intel64",
"resource-version":"1521306560982",
"hpa-feature-attributes":[
{
"hpa-attribute-key":"instructionSetExtensions",
"hpa-attribute-value":"{\"value\":{['AAA', 'BBB', 'CCC', 'DDD']}}",
"resource-version":"1521306560989"
}
]
},
{
"hpa-capability-id":"167ad6a2-7d9c-4bf2-9a1b-30e5311b8c66",
"hpa-feature":"numa",
"hpa-version":"v1",
"architecture":"generic",
"resource-version":"1521306561020",
"hpa-feature-attributes":[
{
"hpa-attribute-key":"numaCpu-1",
"hpa-attribute-value":"{\"value\":4}",
"resource-version":"1521306561060"
},
{
"hpa-attribute-key":"numaNodes",
"hpa-attribute-value":"{\"value\":2}",
"resource-version":"1521306561088"
},
{
"hpa-attribute-key":"numaCpu-0",
"hpa-attribute-value":"{\"value\":2}",
"resource-version":"1521306561028"
},
{
"hpa-attribute-key":"numaMem-0",
"hpa-attribute-value":"{\"value\":2, \"unit\":\"GB\" }",
"resource-version":"1521306561044"
},
{
"hpa-attribute-key":"numaMem-1",
"hpa-attribute-value":"{\"value\":4, \"unit\":\"GB\" }",
"resource-version":"1521306561074"
}
]
},
{
"hpa-capability-id":"13ec6d4d-7fee-48d8-9e4a-c598feb101ed",
"hpa-feature":"basicCapabilities",
"hpa-version":"v1",
"architecture":"generic",
"resource-version":"1521306560909",
"hpa-feature-attributes":[
{
"hpa-attribute-key":"numVirtualCpu",
"hpa-attribute-value":"{\"value\":64}",
"resource-version":"1521306560932"
},
{
"hpa-attribute-key":"virtualMemSize",
"hpa-attribute-value":"{\"value\":65536, \"unit\":\"MB\" }",
"resource-version":"1521306560954"
}
]
},
{
"hpa-capability-id":"8fa22e64-41b4-471f-96ad-6c4708635e4c",
"hpa-feature":"cpuTopology",
"hpa-version":"v1",
"architecture":"generic",
"resource-version":"1521306561109",
"hpa-feature-attributes":[
{
"hpa-attribute-key":"numCpuCores",
"hpa-attribute-value":"{\"value\":8}",
"resource-version":"1521306561114"
},
{
"hpa-attribute-key":"numCpuThreads",
"hpa-attribute-value":"{\"value\":8}",
"resource-version":"1521306561138"
},
{
"hpa-attribute-key":"numCpuSockets",
"hpa-attribute-value":"{\"value\":6}",
"resource-version":"1521306561126"
}
]
}
]
},
"resource-version":"1521306560203"
},
{
"flavor-id":"f5aa2b2e-3206-41b6-80d5-cf041b098c43",
"flavor-name":"flavor-cpu-pinning-ovsdpdk-instruction-set",
"flavor-vcpus":32,
"flavor-ram":131072,
"flavor-disk":2097152,
"flavor-ephemeral":128,
"flavor-swap":"0",
"flavor-is-public":false,
"flavor-selflink":"pXtX",
"flavor-disabled":false,
"hpa-capabilities":{
"hpa-capability":[
{
"hpa-capability-id":"4d04f4d8-e257-4442-8417-19a525e56096",
"hpa-feature":"cpuInstructionSetExtensions",
"hpa-version":"v1",
"architecture":"generic",
"resource-version":"1521306561223",
"hpa-feature-attributes":[
{
"hpa-attribute-key":"instructionSetExtensions",
"hpa-attribute-value":"{\"value\":{['A11', 'B22']}}",
"resource-version":"1521306561228"
}
]
},
{
"hpa-capability-id":"8d36a8fe-bfee-446a-bbcb-881ee66c8f78",
"hpa-feature":"ovsDpdk",
"hpa-version":"v1",
"architecture":"generic",
"resource-version":"1521306561170",
"hpa-feature-attributes":[
{
"hpa-attribute-key":"dataProcessingAccelerationLibrary",
"hpa-attribute-value":"{\"value\":\"v18.02\"}",
"resource-version":"1521306561175"
}
]
},
{
"hpa-capability-id":"c140c945-1532-4908-86c9-d7f71416f1dd",
"hpa-feature":"cpuPinning",
"hpa-version":"v1",
"architecture":"generic",
"resource-version":"1521306561191",
"hpa-feature-attributes":[
{
"hpa-attribute-key":"logicalCpuPinningPolicy",
"hpa-attribute-value":"{\"value\":\"dedicated\"}",
"resource-version":"1521306561196"
},
{
"hpa-attribute-key":"logicalCpuThreadPinningPolicy",
"hpa-attribute-value":"{value:\"prefer\"}",
"resource-version":"1521306561206"
}
]
},
{
"hpa-capability-id":"4565615b-1077-4bb5-a340-c5be48db2aaa",
"hpa-feature":"basicCapabilities",
"hpa-version":"v1",
"architecture":"generic",
"resource-version":"1521306561244",
"hpa-feature-attributes":[
{
"hpa-attribute-key":"numVirtualCpu",
"hpa-attribute-value":"{\"value\":32}",
"resource-version":"1521306561259"
},
{
"hpa-attribute-key":"virtualMemSize",
"hpa-attribute-value":"{\"value\":131072, \"unit\":\"MB\" }",
"resource-version":"1521306561248"
}
]
}
]
},
"resource-version":"1521306561164"
}
]
}
}
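To show how the flavor/HPA structure above can be consumed, here is a hedged sketch that selects flavors meeting a minimum vCPU count. It simply walks the JSON shown above and is not the HAS matching logic.

# Sketch: walk the flavor/HPA structure of a cloud candidate (as in the example
# above) and keep flavors whose basicCapabilities meet a minimum vCPU count.
import json

def flavors_with_min_vcpus(candidate, min_vcpus):
    matches = []
    for flavor in candidate.get("flavors", {}).get("flavor", []):
        for capability in flavor.get("hpa-capabilities", {}).get("hpa-capability", []):
            if capability.get("hpa-feature") != "basicCapabilities":
                continue
            for attr in capability.get("hpa-feature-attributes", []):
                if attr.get("hpa-attribute-key") == "numVirtualCpu":
                    # Attribute values are JSON-encoded strings, e.g. '{"value":64}'
                    value = json.loads(attr["hpa-attribute-value"])["value"]
                    if value >= min_vcpus:
                        matches.append(flavor["flavor-name"])
    return matches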
VF Module Candidate
{
"candidate_id": "d187d743-5932-4fb9-a42d-db0a5be5ba7e",
"city": "example-city-val-27150",
"cloud_owner": "CloudOwner",
"cloud_region_version": "1",
"complex_name": "clli1",
"cost": 1.0,
"country": "example-country-val-94173",
"existing_placement": "false",
"host_id": "vFW-PKG-MC",
"inventory_provider": "aai",
"inventory_type": "vfmodule",
"ipv4-oam-address": "oam_network_zb4J",
"ipv6-oam-address": "",
"latitude": "example-latitude-val-89101",
"location_id": "RegionOne",
"location_type": "att_aic",
"longitude": "32.89948",
"nf-id": "fcbff633-47cc-4f38-a98d-4ba8285bd8b6",
"nf-name": "vFW-PKG-MC",
"nf-type": "vnf",
"passthrough_attributes": {
"td-role": "anchor"
},
"physical_location_id": "clli1",
"port_key": "vlan_port",
"region": "example-region-val-13893",
"service_instance_id": "3e8d118c-10ca-4b4b-b3db-089b5e9e6a1c",
"service_resource_id": "vPGN-XX",
"sriov_automation": "false",
"state": "example-state-val-59487",
"uniqueness": "false",
"vf-module-id": "d187d743-5932-4fb9-a42d-db0a5be5ba7e",
"vf-module-name": "vnf-pkg-r1-t2-mc",
"vim-id": "CloudOwner_RegionOne",
"vlan_key": "vlan_key",
"vnf-type": "5G_EVE_Demo/5G_EVE_PKG 0",
"vservers": [
{
"l-interfaces": [
{
"interface-id": "4b333af1-90d6-42ae-8389-d440e6ff0e93",
"interface-name": "vnf-pkg-r1-t2-mc-vpg_private_2_port-mf7lu55usq7i",
"ipv4-addresses": [
"10.100.100.2"
],
"ipv6-addresses": [],
"macaddr": "fa:16:3e:c4:07:7f",
"network-id": "59763a33-3296-4dc8-9ee6-2bdcd63322fc",
"network-name": ""
},
{
"interface-id": "85dd57e9-6e3a-48d0-a784-4598d627e798",
"interface-name": "vnf-pkg-r1-t2-mc-vpg_private_1_port-734xxixicw6r",
"ipv4-addresses": [
"10.0.110.2"
],
"ipv6-addresses": [],
"macaddr": "fa:16:3e:b5:86:38",
"network-id": "cdb4bc25-2412-4b77-bbd5-791a02f8776d",
"network-name": ""
},
{
"interface-id": "edaff25a-878e-4706-ad52-4e3d51cf6a82",
"interface-name": "vnf-pkg-r1-t2-mc-vpg_private_0_port-e5qdm3p5ijhe",
"ipv4-addresses": [
"192.168.10.200"
],
"ipv6-addresses": [],
"macaddr": "fa:16:3e:ff:d8:6f",
"network-id": "932ac514-639a-45b2-b1a3-4c5bb708b5c1",
"network-name": ""
}
],
"vserver-id": "00bddefc-126e-4e4f-a18d-99b94d8d9a30",
"vserver-name": "zdfw1fwl01pgn01"
}
]
}
NSSI Candidate
{
"candidate_id": "1a636c4d-5e76-427e-bfd6-241a947224b0",
"candidate_type": "nssi",
"conn_density": 0,
"cost": 1.0,
"domain": "cn",
"e2e_latency": 0,
"exp_data_rate": 0,
"exp_data_rate_dl": 100,
"exp_data_rate_ul": 100,
"instance_name": "nssi_test_0211",
"inventory_provider": "aai",
"inventory_type": "nssi",
"jitter": 0,
"latency": 20,
"max_number_of_ues": 0,
"nsi_id": "4115d3c8-dd59-45d6-b09d-e756dee9b518",
"nsi_model_invariant_id": "39b10fe6-efcc-40bc-8184-c38414b80771",
"nsi_model_version_id": "8b664b11-6646-4776-9f59-5c3de46da2d6",
"nsi_name": "nsi_test_0211",
"payload_size": 0,
"reliability": 99.99,
"resource_sharing_level": "0",
"survival_time": 0,
"traffic_density": 0,
"ue_mobility_level": "stationary",
"uniqueness": "true"
}
Examples
The following examples illustrate two demands:
- vGMuxInfra: A vGMuxInfra service, drawing candidates of type service from the inventory. Only candidates that match the customer_id and orchestration-status will be included in the search space.
- vG: A vG, drawing candidates of type service and cloud from the inventory. Only candidates that match the customer_id and provisioning-status will be included in the search space.
demands:
vGMuxInfra:
- inventory_provider: aai
inventory_type: service
attributes:
equipment_type: vG_Mux
customer_id: some_company
orchestration-status: Activated
model-id: 174e371e-f514-4913-a93d-ed7e7f8fbdca
model-version: 2.0
vG:
- inventory_provider: aai
inventory_type: service
attributes:
equipment_type: vG
customer_id: some_company
provisioning-status: provisioned
- inventory_provider: aai
inventory_type: cloud
Note
Cost could be used to specify the cost of choosing a specific candidate. For example, choosing an existing VNF instance can be less costly than creating a new instance.
Constraints
A Constraint is used to eliminate inventory candidates from one or more demands that do not meet the requirements specified by the constraint. Since reusability is one of the cornerstones of HAS, constraints are designed to be service-agnostic and are parameterized so that they can be reused across a wide range of services. Further, HAS is designed with a plug-in architecture that facilitates easy addition of new constraint types.
Constraints are denoted by a constraints key. Each constraint is uniquely named and set to a dictionary containing a constraint type, a list of demands to apply the constraint to, and a dictionary of constraint properties.
Considerations while using multiple constraints:
- Constraints should be treated as an unordered list; no assumptions should be made about the order in which constraints are evaluated for any given demand.
- All constraints are effectively AND-ed together. Constructs such as "Constraint X OR Y" are unsupported.
- Constraints are reducing in nature and do not increase the available candidates at any point during constraint evaluation. A short sketch of this filtering behavior is shown below.
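A minimal sketch of the reducing, AND-ed behavior described above (illustrative only; not the HAS solver):

# Sketch of the reducing/AND-ed behavior: each constraint can only remove
# candidates from a demand's search space, never add to it.
def apply_constraints(candidates, constraints):
    """candidates: list of candidate dicts; constraints: list of predicates."""
    remaining = list(candidates)
    for constraint in constraints:           # order must not matter
        remaining = [c for c in remaining if constraint(c)]
        if not remaining:                    # no candidates left -> demand is infeasible
            break
    return remaining

# Example: two constraints AND-ed together.
# viable = apply_constraints(all_candidates,
#                            [lambda c: c["country"] == "USA",
#                             lambda c: float(c["cost"]) <= 100])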
Schema
Key | Value
---|---
CONSTRAINT_NAME | Key is a unique name.
type | The type of constraint. See Constraint Types for a list of currently supported values.
demands | One or more previously declared demands. If only one demand is specified, it may appear without list markers ([]).
properties | Properties particular to the specified constraint type. Use if required by the constraint.
constraints:
CONSTRAINT_NAME_1:
type: CONSTRAINT_TYPE
demands: DEMAND_NAME | [DEMAND_NAME_1, DEMAND_NAME_2, ...]
properties: PROPERTY_DICT
CONSTRAINT_NAME_2:
type: CONSTRAINT_TYPE
demands: DEMAND_NAME | [DEMAND_NAME_1, DEMAND_NAME_2, ...]
properties: PROPERTY_DICT
...
Constraint Types
Type |
Description |
---|---|
|
Constraint that matches the specified list of Attributes. |
|
Geographic distance constraint between each pair of a list of demands. |
|
Geographic distance constraint between each of a list of demands and a specific location. |
|
Constraint that ensures available capacity in an existing service instance for an incoming demand. |
|
Constraint that enforces two or more demands are satisfied using candidates from a pre-established group in the inventory. |
|
Constraint that ensures available capacity in an existing cloud region for an incoming demand. |
|
Constraint that enforces co-location/diversity at the granularities of clouds/regions/availability-zones. |
|
Constraint that recommends cloud region with an optimal flavor based on required HPA capabilities for an incoming demand. |
|
Constraint that checks if the incoming demand fits the VIM instance. |
|
License availability constraint. |
|
Network constraint between each pair of a list of demands. |
|
Network constraint between each of a list of demands and a specific location/address. |
|
Constraint that checks if an attribute is within the threshold. |
Note: Constraint types marked "Deferred" will not be supported in the current release of HAS.
Threshold Values
Constraint property values representing a threshold may be an integer or floating point number, optionally prefixed with a comparison operator (=, <, >, <=, or >=; the default is =) and optionally suffixed with a unit.
Whitespace may appear between the comparison operator and the value, and between the value and the units. When a range of values is specified (e.g., 10-20 km), the comparison operator is omitted. (A small parsing sketch follows the units table below.)
Each property is documented with a default unit. The following units are supported:
Unit |
Values |
Default |
---|---|---|
Currency |
|
|
Time |
|
|
Distance |
|
|
Throughput |
|
|
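A small sketch of parsing a threshold expression per the rules above (ranges such as 10-20 km are not covered; illustrative only, not the HAS parser):

# Sketch: parse a threshold expression such as "< 250 km" or "90" following
# the rules above (optional comparison operator, default "=", optional unit).
import re

THRESHOLD_RE = re.compile(
    r"^\s*(?P<op><=|>=|=|<|>)?\s*(?P<value>-?\d+(\.\d+)?)\s*(?P<unit>[A-Za-z]+)?\s*$"
)

def parse_threshold(expression, default_unit=None):
    match = THRESHOLD_RE.match(expression)
    if not match:
        raise ValueError(f"not a threshold: {expression!r}")
    return {
        "operator": match.group("op") or "=",
        "value": float(match.group("value")),
        "unit": match.group("unit") or default_unit,
    }

# parse_threshold("< 250 km")  -> {'operator': '<', 'value': 250.0, 'unit': 'km'}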
Attribute
Constrain one or more demands by one or more attributes, expressed as properties. Attributes are mapped to the inventory provider specified properties, referenced by the demands. For example, properties could be hardware capabilities provided by the platform (flavor, CPU-Pinning, NUMA), features supported by the services, etc.
Schema
Property | Value
---|---
evaluate | Opaque dictionary of attribute name and value pairs. Values must be strings or numbers. Encoded and sent to the service provider via a plugin.
Note: Attribute values are not detected/parsed as thresholds by the Homing framework. Such interpretations and evaluations are inventory provider-specific and delegated to the corresponding plugin.
constraints:
sriov_nj:
type: attribute
demands: [my_vnf_demand, my_other_vnf_demand]
properties:
evaluate:
cloud_version: 1.1
flavor: SRIOV
subdivision: US-TX
vcpu_pinning: True
numa_topology: numa_spanning
Proposal: Evaluation Operators
To assist in evaluating attributes, the following operators and notation are proposed:
Operator |
Name |
Operand |
---|---|---|
|
|
Any object (string, number, list, dict) |
|
|
|
|
|
A number (strings are converted to float) |
|
|
|
|
|
|
|
|
|
|
|
A list of objects (string, number, list, dict) |
|
|
|
|
|
A regular expression pattern |
Example usage:
constraints:
sriov_nj:
type: attribute
demands: [my_vnf_demand, my_other_vnf_demand]
properties:
evaluate:
cloud_version: {gt: 1.0}
flavor: {regex: /^SRIOV$/i}
subdivision: {any: [US-TX, US-NY, US-CA]}
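Since these operators are a proposal, the following is only an illustrative sketch of how the gt, any, and regex checks used above could be evaluated; it is not the HAS implementation.

# Sketch: evaluate a proposed operator expression against a candidate attribute.
import re

def evaluate(spec, actual):
    if not isinstance(spec, dict):          # plain value -> equality check
        return spec == actual
    op, operand = next(iter(spec.items()))
    if op == "gt":
        return float(actual) > float(operand)
    if op == "any":
        return actual in operand
    if op == "regex":
        # e.g. "/^SRIOV$/i" -> pattern "^SRIOV$" with the ignore-case flag
        body = str(operand)
        flags = 0
        if body.startswith("/"):
            body, _, modifiers = body[1:].rpartition("/")
            if "i" in modifiers:
                flags = re.IGNORECASE
        return re.search(body, str(actual), flags) is not None
    raise ValueError(f"unsupported operator: {op}")

# evaluate({"gt": 1.0}, "1.1")                    -> True
# evaluate({"any": ["US-TX", "US-NY"]}, "US-TX")  -> True
# evaluate({"regex": "/^SRIOV$/i"}, "sriov")      -> True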
Distance Between Demands
Constrain each pairwise combination of two or more demands by distance requirements.
Schema
Name | Value
---|---
distance | Distance between demands, measured by the geographic path.
The constraint is applied between each pairwise combination of demands. For this reason, at least two demands must be specified, implicitly or explicitly.
constraints:
distance_vnf1_vnf2:
type: distance_between_demands
demands: [my_vnf_demand, my_other_vnf_demand]
properties:
distance: < 250 km
Distance To Location
Constrain one or more demands by distance requirements relative to a specific location.
Schema
Property | Value
---|---
distance | Distance between demands, measured by the geographic path.
location | A previously declared location.
The constraint is applied between each demand and the referenced location, not across all pairwise combinations of Demands.
constraints:
distance_vnf1_loc:
type: distance_to_location
demands: [my_vnf_demand, my_other_vnf_demand, another_vnf_demand]
properties:
distance: < 250 km
location: LOCATION_ID
Instance Fit
Constrain each demand by its service requirements.
Requirements are sent as a request to a service controller. Service controllers are defined by plugins in Homing (e.g., sdn-c).
A service controller plugin knows how to communicate with a particular endpoint (via HTTP/REST, DMaaP, etc.), obtain necessary information, and make a decision. The endpoint and credentials can be configured through plugin settings.
Schema
Property | Description
---|---
controller | Name of a service controller.
request | Opaque dictionary of key/value pairs. Values must be strings or numbers. Encoded and sent to the service provider via a plugin.
constraints:
check_for_availability:
type: instance_fit
demands: [my_vnf_demand, my_other_vnf_demand]
properties:
controller: sdn-c
request: REQUEST_DICT
Region Fit
Constrain each demand’s inventory candidates based on inventory provider membership.
Requirements are sent as a request to a service controller. Service controllers are defined by plugins in Homing (e.g., sdn-c).
A service controller plugin knows how to communicate with a particular endpoint (via HTTP/REST, DMaaP, etc.), obtain necessary information, and make a decision. The endpoint and credentials can be configured through plugin settings.
Schema
Property | Description
---|---
controller | Name of a service controller.
request | Opaque dictionary of key/value pairs. Values must be strings or numbers. Encoded and sent to the service provider via a plugin.
constraints:
check_for_membership:
type: region_fit
demands: [my_vnf_demand, my_other_vnf_demand]
properties:
controller: sdn-c
request: REQUEST_DICT
Zone
Constrain two or more demands such that each is located in the same or different zone category.
Zone categories are inventory provider-defined, based on the demands being constrained.
Schema
Property | Value
---|---
qualifier | Zone qualifier. One of same or different.
category | Zone category, e.g. disaster or region (see Notes below for the candidate categories).
For example, to place two demands in different disaster zones:
constraints:
vnf_diversity:
type: zone
demands: [my_vnf_demand, my_other_vnf_demand]
properties:
qualifier: different
category: disaster
Or, to place two demands in the same region:
constraints:
vnf_affinity:
type: zone
demands: [my_vnf_demand, my_other_vnf_demand]
properties:
qualifier: same
category: region
Notes
These categories could be any of the following: disaster_zone, region, complex, time_zone, and maintenance_zone. Really, we are talking affinity/anti-affinity at the level of DCs, but these terms may cause confusion with affinity/anti-affinity in OpenStack.
HPA & Cloud Agnostic Intent
Constrain each demand’s inventory candidates based on cloud regions’ hardware platform capabilities (HPA) and also intent support. Note that currently the HPA and cloud agnostic constraints use the same schema.
Requirements are mapped to inventory provider-specified properties referenced by the demands. For example, properties could be hardware capabilities provided by the platform through flavors or the cloud region (e.g., CPU pinning, NUMA), features supported by the services, etc.
Schema
Property | Value
---|---
evaluate | List of id, type, directives and flavorProperties of each VM of the VNF demand.

Property for evaluation | Value
---|---
id | Name of VFC
type | Type of VFC, e.g. vnfc (HEAT/SO request) or tocsa.nodes.nfv.Vdu.Compute (TOSCA/VF-C request)
directives | Directives for one VFC. Now we only have flavor directives inside. Each VFC must have one directive
flavorProperties | Flavor properties for one VFC. Contains detailed HPA requirements

Property for directives | Value
---|---
type | Type of directive
attributes | Attributes inside directive

Property for attributes | Value
---|---
attribute_name | Attribute name/label
attribute_value | Attribute value
Note: Each VFC must have one directive with type ‘flavor_directives’ to hold the flavors. The attribute_name is the place to put the flavor label, and the attribute_value is initially left blank. After finding a suitable flavor, OOF will merge the flavor name into the attribute_value inside the flavor directives. Also, all the directives coming from one VFC inside the same request will be merged together in directives, as they use the same structure as ‘directives’.
constraints:
hpa_constraint:
type: hpa
demands: [my_vnf_demand, my_other_vnf_demand]
properties:
evaluate:
- [ List of {id: {vdu Name},
type: {type of VF },
directives: DIRECTIVES LIST,
flavorProperties: HPACapability DICT} ]
HPACapability DICT :
hpa-feature: basicCapabilities
hpa-version: v1
architecture: generic
directives:
- DIRECTIVES LIST
hpa-feature-attributes:
- HPAFEATUREATTRIBUTES LIST
DIRECTIVES LIST:
type: String
attributes:
- ATTRIBUTES LIST
ATTRIBUTES LIST:
attribute_name: String,
attribute_value: String
HPAFEATUREATTRIBUTES LIST:
hpa-attribute-key: String
hpa-attribute-value: String
operator: One of OPERATOR
unit: String
OPERATOR : ['=', '<', '>', '<=', '>=', 'ALL']
Example
Example for a HEAT request (SO)
Note: Where "attributes": [{"attribute_name": "oof_returned_flavor_label_for_vgw_1", ...}] is used, the admin needs to ensure that this value is the same as the corresponding flavor parameter in the HOT.
{
"hpa_constraint":{
"type":"hpa",
"demands":[
"vG"
],
"properties":{
"evaluate":[
{
"id": "vgw_0",
"type": "vnfc",
"directives": [
{
"type":"flavor_directives",
"attributes":[
{
"attribute_name":" oof_returned_flavor_label_for_vgw_0 ",
"attribute_value": "<Blank>"
}
]
}
],
"flavorProperties":[
{
"hpa-feature":"basicCapabilities",
"hpa-version":"v1",
"architecture":"generic",
"mandatory": "True",
"directives": [],
"hpa-feature-attributes":[
{
"hpa-attribute-key":"numVirtualCpu",
"hpa-attribute-value":"32",
"operator":"="
}
]
},
{
"hpa-feature":"basicCapabilities",
"hpa-version":"v1",
"architecture":"generic",
"mandatory": "True",
"directives": [],
"hpa-feature-attributes":[
{
"hpa-attribute-key":"virtualMemSize",
"hpa-attribute-value":"64",
"operator":"=",
"unit":"GB"
}
]
},
{
"hpa-feature":"ovsDpdk",
"hpa-version":"v1",
"architecture":"generic",
"mandatory": "False",
"score": "10",
"directives": [],
"hpa-feature-attributes":[
{
"hpa-attribute-key":"dataProcessingAccelerationLibrary",
"hpa-attribute-value":"v18.02",
"operator":"="
}
]
},
{
"hpa-feature": "qosIntentCapabilities",
"mandatory": "True",
"architecture": "generic",
"hpa-version": "v1",
"directives": [],
"hpa-feature-attributes": [
{
"hpa-attribute-key":"Infrastructure Resource Isolation for VNF",
"hpa-attribute-value": "Burstable QoS",
"operator": "=",
"unit": ""
},
{ "hpa-attribute-key":"Burstable QoS Oversubscription Percentage",
"hpa-attribute-value": "25",
"operator": "=",
"unit": ""
}
]
}
]
},
{
"id": "vgw_1",
"type": "vnfc",
"directives": [
{
"type":"flavor_directives",
"attributes":[
{
"attribute_name":" oof_returned_flavor_label_for_vgw_1 ",
"attribute_value": "<Blank>"
}
]
}
],
"flavorProperties":[
{
"hpa-feature":"basicCapabilities",
"hpa-version":"v1",
"architecture":"generic",
"mandatory": "False",
"score": "5",
"directives": [],
"hpa-feature-attributes":[
{
"hpa-attribute-key":"numVirtualCpu",
"hpa-attribute-value":"8",
"operator":">="
}
]
},
{
"hpa-feature":"basicCapabilities",
"hpa-version":"v1",
"architecture":"generic",
"mandatory": "False",
"score": "5",
"directives": [],
"hpa-feature-attributes":[
{
"hpa-attribute-key":"virtualMemSize",
"hpa-attribute-value":"16",
"operator":">=",
"unit":"GB"
}
]
},
{
"hpa-feature":"sriovNICNetwork",
"hpa-version":"v1",
"architecture":"generic",
"mandatory": "True",
"directives": [
{
"type": "sriovNICNetwork_directives",
"attributes": [
{ "attribute_name": "oof_returned_vnic_type_for_vgw_1",
"attribute_value": "direct"
},
{ "attribute_name": "oof_returned_provider_network_for_vgw_1",
"attribute_value": "physnet2"
}
]
}
],
"hpa-feature-attributes":[
{
"hpa-attribute-key":"pciVendorId",
"hpa-attribute-value":"8086",
"operator":"=",
"unit":""
},
{
"hpa-attribute-key":"pciDeviceId",
"hpa-attribute-value":"0443",
"operator":"=",
"unit":""
},
{
"hpa-attribute-key":"pciCount",
"hpa-attribute-value":"1",
"operator":"=",
"unit":""
},
{
"hpa-attribute-key":"physicalNetwork",
"hpa-attribute-value":"physnet2",
"operator":"=",
"unit":""
}
]
}
]
}
]
}
}
}
Example for a pure TOSCA request (VF-C)
{
"hpa_constraint":{
"type":"hpa",
"demands":[
"vG"
],
"properties":{
"evaluate":[
{
"id": "vgw_0",
"type": "tocsa.nodes.nfv.Vdu.Compute",
"directives": [
{
"type":"flavor_directives",
"attributes":[
{
"attribute_name":" flavor_name ",
"attribute_value": "<Blank>"
}
]
}
],
"flavorProperties":[
{
"hpa-feature":"basicCapabilities",
"hpa-version":"v1",
"architecture":"generic",
"mandatory": "True",
"directives": [],
"hpa-feature-attributes":[
{
"hpa-attribute-key":"numVirtualCpu",
"hpa-attribute-value":"32",
"operator":"="
}
]
},
{
"hpa-feature":"basicCapabilities",
"hpa-version":"v1",
"architecture":"generic",
"mandatory": "True",
"directives": [],
"hpa-feature-attributes":[
{
"hpa-attribute-key":"virtualMemSize",
"hpa-attribute-value":"64",
"operator":"=",
"unit":"GB"
}
]
},
{
"hpa-feature":"ovsDpdk",
"hpa-version":"v1",
"architecture":"generic",
"mandatory": "False",
"score": "10",
"directives": [],
"hpa-feature-attributes":[
{
"hpa-attribute-key":"dataProcessingAccelerationLibrary",
"hpa-attribute-value":"v18.02",
"operator":"="
}
]
},
{
"hpa-feature": "qosIntentCapabilities",
"mandatory": "True",
"architecture": "generic",
"hpa-version": "v1",
"directives": [],
"hpa-feature-attributes": [
{
"hpa-attribute-key":"Infrastructure Resource Isolation for VNF",
"hpa-attribute-value": "Burstable QoS",
"operator": "=",
"unit": ""
},
{ "hpa-attribute-key":"Burstable QoS Oversubscription Percentage",
"hpa-attribute-value": "25",
"operator": "=",
"unit": ""
}
]
}
]
},
{
"id": "vgw_1",
"type": "tosca.nodes.nfv.Vdu.Compute",
"directives": [
{
"type":"flavor_directives",
"attributes":[
{
"attribute_name":" flavor_name ",
"attribute_value": "<Blank>"
}
]
}
],
"flavorProperties":[
{
"hpa-feature":"basicCapabilities",
"hpa-version":"v1",
"architecture":"generic",
"mandatory": "False",
"score": "5",
"directives": [],
"hpa-feature-attributes":[
{
"hpa-attribute-key":"numVirtualCpu",
"hpa-attribute-value":"8",
"operator":">="
}
]
},
{
"hpa-feature":"basicCapabilities",
"hpa-version":"v1",
"architecture":"generic",
"mandatory": "False",
"score": "5",
"directives": [],
"hpa-feature-attributes":[
{
"hpa-attribute-key":"virtualMemSize",
"hpa-attribute-value":"16",
"operator":">=",
"unit":"GB"
}
]
},
{
"hpa-feature":"sriovNICNetwork",
"hpa-version":"v1",
"architecture":"generic",
"mandatory": "True",
"directives": [],
"hpa-feature-attributes":[
{
"hpa-attribute-key":"pciVendorId",
"hpa-attribute-value":"8086",
"operator":"=",
"unit":""
},
{
"hpa-attribute-key":"pciDeviceId",
"hpa-attribute-value":"0443",
"operator":"=",
"unit":""
},
{
"hpa-attribute-key":"pciCount",
"hpa-attribute-value":"1",
"operator":"=",
"unit":""
}
]
}
]
}
]
}
}
}
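To make the flavorProperties matching concrete, the sketch below (illustrative only, not the HAS matching engine) checks a single hpa-feature-attribute requirement against a value advertised by a cloud region flavor, using the OPERATOR set from the schema:

# Matching one hpa-feature-attribute requirement against an offered value.
def attribute_matches(required_value, operator, offered_value):
    if operator == "ALL":
        # an 'ALL' requirement lists values that must all be present in the offer
        return set(required_value) <= set(offered_value)
    if operator == "=":
        # equality works for both numeric and string-valued attributes
        return str(offered_value) == str(required_value)
    req, off = float(required_value), float(offered_value)
    return {"<": off < req, ">": off > req, "<=": off <= req, ">=": off >= req}[operator]

# vgw_1 asks for numVirtualCpu >= 8; a flavor offering 16 vCPUs matches.
print(attribute_matches("8", ">=", "16"))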
VIM Fit
Constrain each demand's inventory candidates based on a check of the available capacity at the VIM instances.
Requirements are sent as an opaque request object understood by the VIM controllers or MultiCloud. Each controller is defined and implemented as a plugin in Conductor.
A vim controller plugin knows how to communicate with a particular endpoint (via HTTP/REST, DMaaP, etc.), obtain necessary information, and make a decision. The endpoint and credentials can be configured through plugin settings.
Schema
Property | Value
---|---
controller | Name of a VIM controller (e.g., multicloud).
request | Opaque dictionary of key/value pairs. Values must be strings or numbers. Encoded and sent to the VIM controller via a plugin.
constraints:
check_cloud_capacity:
type: vim_fit
demands: [my_vnf_demand, my_other_vnf_demand]
properties:
controller: multicloud
request: REQUEST_DICT
Notes
For the current release, the REQUEST_DICT follows the format below, as defined by the policy for vim_fit. Because REQUEST_DICT is an opaque request object defined through policy, it is not restricted to this format; in the current release MultiCloud supports check_vim_capacity using the following grammar:
{
  "request": {
    "vCPU": 10,
    "Memory": {
      "quantity": {"get_param": "REQUIRED_MEM"},
      "unit": "GB"
    },
    "Storage": {
      "quantity": {"get_param": "REQUIRED_DISK"},
      "unit": "GB"
    }
  }
}
Inventory Group
Constrain demands such that inventory items are grouped across two demands.
This constraint has no properties.
constraints:
my_group:
type: inventory_group
demands: [demand_1, demand_2]
Note: Only pair-wise groups are supported at this time. The list must have only two demands.
License
Constrain demands according to license availability.
Support for this constraint is deferred to a later release.
Schema
Property | Value
---|---
id | Unique license identifier.
key | Opaque license key, particular to the license identifier.
constraints:
my_software:
type: license
demands: [demand_1, demand_2, ...]
properties:
id: SOFTWARE_ID
key: LICENSE_KEY
Network Between Demands
Constrain each pairwise combination of two or more demands by network requirements.
Support for this constraint is deferred to a later release.
Schema
Property | Value
---|---
bandwidth | Desired network bandwidth.
distance | Desired distance between demands, measured by the network path.
latency | Desired network latency.

Any combination of bandwidth, distance, or latency must be specified. If none of these properties are used, it is treated as a malformed request.
The constraint is applied between each pairwise combination of demands. For this reason, at least two demands must be specified, implicitly or explicitly.
constraints:
network_requirements:
type: network_between_demands
demands: [my_vnf_demand, my_other_vnf_demand]
properties:
bandwidth: >= 1000 Mbps
distance: < 250 km
latency: < 50 ms
Network To Location
Constrain one or more demands by network requirements relative to a specific location.
Support for this constraint is deferred to a later release.
Schema
Property | Value
---|---
bandwidth | Desired network bandwidth.
distance | Desired distance between demands, measured by the network path.
latency | Desired network latency.
location | A previously declared location.

Any combination of bandwidth, distance, or latency must be specified. If none of these properties are used, it is treated as a malformed request.
The constraint is applied between each demand and the referenced location, not across all pairwise combinations of Demands.
constraints:
my_access_network_constraint:
type: network_to_location
demands: [my_vnf_demand, my_other_vnf_demand]
properties:
bandwidth: >= 1000 Mbps
distance: < 250 km
latency: < 50 ms
location: LOCATION_ID
Capabilities
Constrain each demand by its cluster capability requirements. For example, as described by an OpenStack Heat template and operational environment.
Support for this constraint is deferred to a later release.
Schema
Property | Value
---|---
specification | Indicates the kind of specification being provided in the properties. Must be heat.
template | For specifications of type heat, the stack template (for example a URL, as in the example below).
environment (Optional) | For specifications of type heat, the stack environment (for example a URL, as in the example below).
constraints:
check_for_fit:
type: capability
demands: [my_vnf_demand, my_other_vnf_demand]
properties:
specification: heat
template: http://repository/my/stack_template
environment: http://repository/my/stack_environment
Threshold
Constrain each demand by an attribute which is within a certain threshold.
Schema
Property | Value
---|---
evaluate | List of attributes and their thresholds.

Property for evaluation | Value
---|---
attribute | Attribute of a candidate.
threshold | Threshold value.
operator | Condition to check (supported values include lte and gte, as used in the examples below).
unit | Attribute's unit of measurement.
urllc_threshold:
type: threshold
demands: ['URLLC']
properties:
evaluate:
- attribute: latency
operator: lte
threshold: 50
unit: ms
- attribute: reliability
operator: gte
threshold: 99.99
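The evaluation above can be read as a per-candidate filter. The sketch below (plain Python, illustrative only; candidate fields are hypothetical) shows that check for the lte and gte operators used in the examples:

# A candidate passes the threshold constraint only if every evaluated
# attribute satisfies its operator against the configured threshold.
def passes_thresholds(candidate, evaluations):
    for ev in evaluations:
        value = candidate[ev["attribute"]]
        if ev["operator"] == "lte" and not value <= ev["threshold"]:
            return False
        if ev["operator"] == "gte" and not value >= ev["threshold"]:
            return False
    return True

nssi_candidate = {"latency": 20, "reliability": 99.999}
print(passes_thresholds(nssi_candidate, [
    {"attribute": "latency", "operator": "lte", "threshold": 50, "unit": "ms"},
    {"attribute": "reliability", "operator": "gte", "threshold": 99.99},
]))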
Note: The constraint support status described here is as of the Frankfurt release.
Reservations
A Reservation allows reservation of resources associated with a candidate that satisfies one or more demands.
Similar to the instance_fit constraint, requirements are sent as a request to a service controller that handles the reservation. Service controllers are defined by plugins in Homing (e.g., sdn-c).
The service controller plugin knows how to make a reservation (and initiate rollback on a failure) with a particular endpoint (via HTTP/REST, DMaaP, etc.) of the service controller. The endpoint and credentials can be configured through plugin settings.
Schema
Property | Description
---|---
controller | Name of a service controller.
request | Opaque dictionary of key/value pairs. Values must be strings or numbers. Encoded and sent to the service provider via a plugin.
resource_reservation:
type: instance_reservation
demands: [my_vnf_demand, my_other_vnf_demand]
properties:
controller: sdn-c
request: REQUEST_DICT
Optimizations
An Optimization allows specification of an objective function, which aims to maximize or minimize a certain value that varies based on the choice of candidates for one or more demands that are part of the objective function. For example, an objective function may be to find the cloud region closest to a customer for homing a demand.
Optimization Components
Optimization definitions can be broken down into three components:
Component | Key | Value
---|---|---
Goal | minimize | A single Operand (usually an Operator).
Operator | sum, product | Two or more Operands (Numbers, Operators, Functions).
Function | distance_between | A two-element list consisting of a location and a demand.
Example
Given a customer location cl, two demands vG1 and vG2, and weights w1 and w2, the optimization criteria can be expressed as:
minimize(w1 * distance_between(cl, vG1) + w2 * distance_between(cl, vG2))
This can be read as: "Minimize the sum of the weighted distances from cl to vG1 and from cl to vG2."
Such optimizations may be expressed in a template as follows:
parameters:
w1: 10
w2: 20
optimization:
minimize:
sum:
- product:
- {get_param: w1}
- {distance_between: [cl, vG1]}
- product:
- {get_param: w2}
- {distance_between: [cl, vG2]}
Or without the weights as:
optimization:
minimize:
sum:
- {distance_between: [cl, vG1]}
- {distance_between: [cl, vG2]}
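For a single assignment of candidates, the objective above is just a weighted sum of distances. The sketch below evaluates it in plain Python, approximating distance_between with the haversine great-circle distance (the candidate coordinates are hypothetical and this is not HAS's solver):

from math import asin, cos, radians, sin, sqrt

def distance_between(loc_a, loc_b):
    # great-circle distance in km between two {latitude, longitude} points
    lat1, lon1, lat2, lon2 = map(radians, (loc_a["latitude"], loc_a["longitude"],
                                           loc_b["latitude"], loc_b["longitude"]))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(h))

def objective(cl, vG1, vG2, w1=10, w2=20):
    return w1 * distance_between(cl, vG1) + w2 * distance_between(cl, vG2)

customer = {"latitude": 32.89748, "longitude": -97.040443}
cand_vG1 = {"latitude": 32.77, "longitude": -96.79}
cand_vG2 = {"latitude": 29.76, "longitude": -95.36}
print(objective(customer, cand_vG1, cand_vG2))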
Template Restriction
While the template format supports any number of arrangements of numbers, operators, and functions, HAS’s solver presently expects a very specific arrangement.
- Optimizations must conform to a single goal of minimize, followed by a sum operator.
- The sum can consist of two distance_between function calls, or two product operators.
- If a product operator is present, it must contain at least a distance_between function call, plus one optional number to be used for weighting.
- Numbers may be referenced via get_param.
- The objective function has to be written in the sum-of-product format. In a future release, HAS may convert product-of-sum into sum-of-product automatically.
The first two examples in this section illustrate both of these use cases.
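A template pre-check for this restriction could look like the following sketch (illustrative only; HAS performs its own validation):

# Rough structural check: the objective must be minimize -> sum, whose operands
# are either distance_between calls or product operators wrapping exactly one
# distance_between plus an optional weight.
def is_supported_objective(optimization):
    sum_args = optimization.get("minimize", {}).get("sum")
    if not isinstance(sum_args, list):
        return False
    for operand in sum_args:
        if "distance_between" in operand:
            continue
        factors = operand.get("product", [])
        if sum(1 for f in factors if isinstance(f, dict) and "distance_between" in f) != 1:
            return False
    return True

print(is_supported_objective({"minimize": {"sum": [
    {"product": [{"get_param": "w1"}, {"distance_between": ["cl", "vG1"]}]},
    {"product": [{"get_param": "w2"}, {"distance_between": ["cl", "vG2"]}]},
]}}))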
Inline Operations
If desired, operations can be rewritten inline. For example, the two product operations from the previous example can also be expressed as:
parameters:
w1: 10
w2: 20
optimization:
minimize:
sum:
- {product: [{get_param: w1}, {distance_between: [cl, vG1]}]}
- {product: [{get_param: w2}, {distance_between: [cl, vG2]}]}
In turn, even the sum operation can be rewritten inline; however, there is a point of diminishing returns in terms of readability!
Notes
We do not support more than one dimension in the optimization (e.g., minimizing both distance and cost). Supporting multiple dimensions would require a function to normalize the units across dimensions.
Intrinsic Functions
Homing provides a set of intrinsic functions that can be used inside templates to perform specific tasks. The following section describes the role and syntax of the intrinsic functions.
Functions are written as a dictionary with one key/value pair. The key is the function name. The value is a list of arguments. If only one argument is provided, a string may be used instead.
a_property: {FUNCTION_NAME: [ARGUMENT_LIST]}
a_property: {FUNCTION_NAME: ARGUMENT_STRING}
Note: These functions can only be used within “properties” sections.
get_file
The get_file function inserts the content of a file into the template. It is generally used as a file inclusion mechanism for files containing templates from other services (e.g., Heat).
The syntax of the get_file function is:
{get_file: <content key>}
The content key is used to look up the files dictionary that is provided in the REST API call. The Homing client command is get_file aware and populates the files dictionary with the actual content of fetched paths and URLs. The Homing client command supports relative paths and transforms these to the absolute URLs required by the Homing API.
Note: The get_file argument must be a static path or URL and not rely on intrinsic functions like get_param. The Homing client does not process intrinsic functions; they are only processed by the Homing engine.
The example below demonstrates the get_file function usage with both relative and absolute URLs:
constraints:
check_for_fit:
type: capability
demands: [my_vnf_demand, my_other_vnf_demand]
properties:
template: {get_file: stack_template.yaml}
environment: {get_file: http://hostname/environment.yaml}
The files dictionary generated by the Homing client during instantiation of the plan would contain the following keys. Each value would be that file's contents.
file:///path/to/stack_template.yaml
http://hostname/environment.yaml
Note
If Homing will only be accessed over DMaaP, files will need to be embedded using the Homing API request format. This will be a consideration when DMaaP integration happens.
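The following sketch (illustrative only; the helper name is hypothetical and actual HTTP fetching is left to the caller) shows how a client could build such a files dictionary, turning relative paths into absolute file:// URLs and reading their content:

import os

def build_files_dict(file_refs, fetch_url=None):
    # file_refs: the values passed to get_file in the template
    files = {}
    for ref in file_refs:
        if ref.startswith(("http://", "https://")):
            key = ref
            content = fetch_url(ref) if fetch_url else ""  # HTTP fetch left to the caller
        else:
            key = "file://" + os.path.abspath(ref)
            with open(ref) as f:
                content = f.read()
        files[key] = content
    return files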
get_param
The get_param function references an input parameter of a template. It resolves to the value provided for this input parameter at runtime.
The syntax of the get_param function is:
{get_param: <parameter name>}
{get_param: [<parameter name>, <key/index1> (optional), <key/index2> (optional), ...]}
parameter name is the name of the parameter to be resolved. If the parameter returns a complex data structure such as a list or a dict, subsequent keys or indices can be specified; these additional parameters are used to navigate the data structure to return the desired value. Indices are zero-based.
The following example demonstrates how the get_param function is used:
parameters:
software_id: SOFTWARE_ID
license_key: LICENSE_KEY
service_info:
provider: dmaap:///full.topic.name
costs: [10, 20, 30, 40, 50, 60, 70, 80, 90, 100]
constraints:
my_software:
type: license
demands: [demand_1, demand_2, ...]
properties:
id: {get_param: software_id}
key: {get_param: license_key}
check_for_availability:
type: service
demands: [my_vnf_demand, my_other_vnf_demand]
properties:
provider_url: {get_param: [service_info, provider]}
request: REQUEST_DICT
cost: {get_param: [service_info, costs, 4]}
In this example, properties would be set as follows:
Key | Value
---|---
id | SOFTWARE_ID
key | LICENSE_KEY
provider_url | dmaap:///full.topic.name
cost | 50
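The lookup behind these values is straightforward; the sketch below (plain Python, illustrative only) resolves a get_param argument against the template parameters, navigating nested dicts and lists with zero-based indices:

def resolve_get_param(argument, parameters):
    # a bare string names a parameter; a list adds keys/indices to navigate into it
    if isinstance(argument, str):
        return parameters[argument]
    value = parameters[argument[0]]
    for key in argument[1:]:
        value = value[key]
    return value

params = {
    "software_id": "SOFTWARE_ID",
    "service_info": {"provider": "dmaap:///full.topic.name",
                     "costs": [10, 20, 30, 40, 50, 60, 70, 80, 90, 100]},
}
print(resolve_get_param("software_id", params))                 # SOFTWARE_ID
print(resolve_get_param(["service_info", "provider"], params))  # dmaap:///full.topic.name
print(resolve_get_param(["service_info", "costs", 4], params))  # 50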
Contact
Shankar Narayanan shankarpnsn@gmail.com
Example Conductor Templates
Example 1
{
"name": "yyy-yyy-yyyy",
"files": {},
"timeout": 600,
"limit": 1,
"num_solutions": 10,
"template": {
"homing_template_version": "2018-02-01",
"parameters": {
"service_name": "",
"service_id": "d61b2543-5914-4b8f-8e81-81e38575b8ec",
"customer_lat": 32.89748,
"customer_long": -97.040443
},
"locations": {
"customer_loc": {
"latitude": {
"get_param": "customer_lat"
},
"longitude": {
"get_param": "customer_long"
}
}
},
"demands": {
"vGMuxInfra": [
{
"inventory_provider": "aai",
"inventory_type": "service",
"service_type": "vGMuxInfra-xx",
"attributes": {
"customer-id": "",
"orchestration-status": "",
"model-invariant-id": "b3dc6465-942c-42af-8464-2bf85b6e504b",
"model-version-id": "ba3b8981-9a9c-4945-92aa-486234ec321f",
"service-type": "vGMuxInfra-xx",
"equipment-role": "",
"global-customer-id": "SDN-ETHERNET-INTERNET"
}
}
],
"vG": [
{
"inventory_provider": "aai",
"inventory_type": "cloud",
"service_type": "71d563e8-e714-4393-8f99-cc480144a05e"
}
]
},
"constraints": {
"affinity_vCPE": {
"type": "zone",
"demands": [
"vGMuxInfra",
"vG"
],
"properties": {
"category": "complex",
"qualifier": "same"
}
}
},
"optimization": {
"minimize": {
"sum": [
{
"product": [
"1",
{
"distance_between": [
"customer_loc",
"vGMuxInfra"
]
}
]
},
{
"product": [
"1",
{
"distance_between": [
"customer_loc",
"vG"
]
}
]
}
]
}
}
}
}
The example template is for the placement of vG and vGMuxInfra. It has an affinity constraint which specifies that both VNFs must be in the same complex. The optimization here is to minimize the sum of the distances of the VNFs from the customer location.
Example 2
{
"files": {},
"limit": 1,
"num_solutions": 10,
"name": "a2e3e0cc-3a97-44fc-8a08-1b86143fbdd3",
"template": {
"constraints": {
"affinity_vCPE": {
"demands": [
"vgMuxAR",
"vGW"
],
"properties": {
"category": "complex",
"qualifier": "same"
},
"type": "zone"
},
"distance-vGMuxAR": {
"demands": [
"vgMuxAR"
],
"properties": {
"distance": "< 500 km",
"location": "customer_loc"
},
"type": "distance_to_location"
},
"distance-vGW": {
"demands": [
"vGW"
],
"properties": {
"distance": "< 1500 km",
"location": "customer_loc"
},
"type": "distance_to_location"
}
},
"demands": {
"vGW": [
{
"attributes": {
"model-invariant-id": "782c87a6-b712-47d1-9c5b-1ea2cd9a2dd5",
"model-version-id": "9877dbbe-8ada-40a2-8adb-f6f26f1ad9ab"
},
"inventory_provider": "aai",
"inventory_type": "cloud",
"service_type": "c3e0e82b-3367-48ce-ab00-27dc2e91a34a"
}
],
"vgMuxAR": [
{
"attributes": {
"global-customer-id": "SDN-ETHERNET-INTERNET",
"model-invariant-id": "565d5b75-11b8-41be-9991-ee03a0049159",
"model-version-id": "61414c6c-6082-4e03-9824-bf53c3582b78"
},
"inventory_provider": "aai",
"inventory_type": "service",
"service_type": "46b29078-8442-4ea3-bea6-9199a7d514d4"
}
]
},
"homing_template_version": "2017-10-10",
"locations": {
"customer_loc": {
"latitude": {
"get_param": "customer_lat"
},
"longitude": {
"get_param": "customer_long"
}
}
},
"optimization": {
"minimize": {
"sum": [
{
"product": [
"1",
{
"distance_between": [
"customer_loc",
"vgMuxAR"
]
}
]
},
{
"product": [
"1",
{
"distance_between": [
"customer_loc",
"vGW"
]
}
]
}
]
}
},
"parameters": {
"customer_lat": 32.89748,
"customer_long": 97.040443,
"service_id": "0dbb9d5f-27d9-429b-bc36-293e9fab7731",
"service_name": ""
}
},
"timeout": 600
}
This is similar to the first example, except that it has additional distance constraints which specify that the vgMuxAR must be within 500 km and the vGW within 1500 km of the customer location.
Example 3
{
"files": {},
"limit": 10,
"name": "urllc_sample",
"num_solution": "10",
"template": {
"constraints": {
"URLLC_core_Threshold": {
"demands": [
"URLLC_core"
],
"properties": {
"evaluate": [
{
"attribute": "latency",
"operator": "lte",
"threshold": 30,
"unit": "ms"
}
]
},
"type": "threshold"
},
"URLLC_ran_Threshold": {
"demands": [
"URLLC_ran"
],
"properties": {
"evaluate": [
{
"attribute": "latency",
"operator": "lte",
"threshold": 30,
"unit": "ms"
}
]
},
"type": "threshold"
}
},
"demands": {
"URLLC_core": [
{
"filtering_attributes": {
"model-invariant-id": "21d57d4b-52ad-4d3c-a798-248b5bb9124a",
"model-version-id": "bfba363e-e39c-4bd9-a9d5-1371c28f4d22",
"orchestration-status": "active",
"service-role": "nssi"
},
"inventory_provider": "aai",
"inventory_type": "nssi",
"region": "RegionOne",
"unique": "true"
}
],
"URLLC_ran": [
{
"filtering_attributes": {
"model-invariant-id": "aa2d56ea-773d-11ea-bc55-0242ac130003",
"model-version-id": "d6296806-773d-11ea-bc55-0242ac130003",
"orchestration-status": "active",
"service-role": "nssi"
},
"inventory_provider": "aai",
"inventory_type": "nssi",
"region": "RegionOne",
"unique": "true"
}
]
},
"homing_template_version": "2018-02-01"
},
"timeout": 1200
}
This template is for selecting NSSI instances for the Network Slicing use case. The demands here are the slice subnets, and the threshold constraints specify that the latency of the subnets must be less than a particular threshold.
Example 4
{
"name":"urllc_sample",
"files":{
},
"limit":10,
"num_solution":"1",
"timeout":1200,
"template":{
"homing_template_version":"2020-08-13",
"demands":{
"nst_demand":[
{
"inventory_provider":"aai",
"inventory_type":"nst",
"unique":"true",
"region":"RegionOne",
"filtering_attributes":{
"model-role":"nst"
}
}
]
},
"constraints":{
"nst_Threshold":{
"type":"threshold",
"demands":[
"nst_demand"
],
"properties":{
"evaluate":[
{
"attribute":"latency",
"operator":"lte",
"threshold":30,
"unit":"ms"
}
]
}
}
},
"optimization":{
"goal": "minimize",
"operation_function": {
"operator": "sum",
"operands": [{
"function": "attribute",
"params": {
"demand": "nst_demand",
"attribute": "latency"
}
}]
}
}
}
}
This template is for selecting NST templates for the Network Slicing use case. The demand here is the slice template, and the threshold constraint specifies that the latency of the template must be less than a particular threshold.
Contact
Shankar Narayanan shankarpnsn@gmail.com
Release Notes
Abstract
This document provides the release notes for the Jakarta release.
Summary
Release Data
OOF Project |
---|---
Docker images | optf-has 2.3.0
Release designation | 10.0.0 jakarta
Release date | 02/06/2022 (TBD)
New features
Enhancements to support capacity based NSI/NSSI Selection for the Slicing usecase
Bug Fixes
OPTFRA-1064 - Fix bug in fetching capacity attributes from DCAE
Known Limitations, Issues and Workarounds
System Limitations
Known Vulnerabilities
Workarounds
Security Notes
References
For more information on the ONAP Jakarta release, please see:
Quick Links: - OOF project page - Passing Badge information for OOF
Abstract
This document provides the release notes for the Istanbul release.
Summary
Release Data
OOF Project |
---|---
Docker images |
Release designation | 9.0.0 istanbul
Release date | 28/10/2021 (TBD)
New features
Migration from MUSIC to ETCD for backend DB
Bug Fixes
OPTFRA-968 Fix AAI plugin to fetch service/slice profile associated with NSI/NSSI
OPTFRA-853 Remove unwanted gplv3 components from docker image
OPTFRA-971 Fix issues in OOF-CPS interface
Known Limitations, Issues and Workarounds
System Limitations
Known Vulnerabilities
Workarounds
Security Notes
References
For more information on the ONAP Istanbul release, please see:
- Quick Links:
Abstract
This document provides the release notes for the Honolulu release.
Summary
Release Data
OOF Project |
---|---
Docker images |
Release designation | 8.0.0 honolulu
Release date | 04/08/2021 (TBD)
New features
Support for NST selection feature with AAI and SDC interface
Enhancement in Slice profile generation - Deriving TA list from coverage Area
Bug Fixes
OPTFRA-907 Fix AAI plugin to fetch service/slice profile associated with NSI/NSSI
OPTFRA-924 Replace pycryptodome with pycrytodomex, since it is not well maintained
Known Limitations, Issues and Workarounds
System Limitations
Known Vulnerabilities
Workarounds
Security Notes
References
For more information on the ONAP Honolulu release, please see:
- Quick Links:
Abstract
This document provides the release notes for the Guilin release.
Summary
Release Data
OOF Project |
---|---
Docker images |
Release designation | 7.0.0 guilin
Release date | 2020-11-19 (TBD)
New features
Support for Generic objective functions
Candidate schema refactoring
New candidate types - NSI, Slice profiles
Functionality added in AAI plugin to support NSI candidates
Bug Fixes
OPTFRA-854 HAS to support multiple inventory provider for a demand
OPTFRA-839 Remove python 2.7 from HAS docker image
Known Limitations, Issues and Workarounds
System Limitations
Known Vulnerabilities
Workarounds
Security Notes
References
For more information on the ONAP Guilin release, please see:
- Quick Links:
Abstract
This document provides the release notes for the Frankfurt release.
Summary
Release Data
OOF Project |
---|---
Docker images |
Release designation | 6.0.0 frankfurt
Release date | 2020-05-07 (TBD)
New features
Passthrough attributes have been added to the placement request.
The HAS container now runs as a non-root user.
The HAS component has been upgraded to Python 3.8.
A new inventory type, NSSI, has been added.
Functionality has been added to the AAI plugin to get NSSI candidates from AAI.
A new constraint named threshold has been added to the solver.
Bug Fixes
OPTFRA-734 Nginx failing to start as non-root user.
OPTFRA-733 AAF authentication fails while handling API requests.
OPTFRA-746 Add NSI id to NSSI candidate.
OPTFRA-747 Music api not using server url in https mode.
OPTFRA-728 HPA CSIT test failures.
OPTFRA-726 Nginx needs to run as root.
OPTFRA-630 Sonar failing jobs.
Known Limitations, Issues and Workarounds
System Limitations
Known Vulnerabilities
Workarounds
Security Notes
Fixed Security issues
All HAS containers were running as root user which is fixed in this release under OPTFRA-711.
References
For more information on the ONAP Frankfurt release, please see:
- Quick Links:
Version: 5.0.1
- Release Date
2019-09-30 (El Alto Release)
The El Alto release is the fourth release for ONAP Optimization Framework (OOF).
Artifacts released:
optf-has:1.3.3
New Features
No new features were added in this release. However, the HAS-MUSIC interface was enhanced on the HAS side to enable HTTPS-based communication. Since MUSIC wasn't ready to expose HTTPS in El Alto, using HTTPS was made an optional flag through configuration.
[OPTFRA-330] security: HTTPS support for HAS-MUSIC interface
- Platform Maturity Level 1
~56.2%+ unit test coverage
Bug Fixes
The El Alto release for OOF fixed the following Bugs.
[OPTFRA-579] Json error in homing solution
[OPTFRA-521] oof-has-api exposes plain text HTTP endpoint using port 30275
[OPTFRA-409] Template example : purpose to be explained
Known Issues
Security Notes
Fixed Security Issues
[OJSI-137] In default deployment OPTFRA (oof-has-api) exposes HTTP port 30275 outside of cluster. This issue has also been described in "[OPTFRA-521] oof-has-api exposes plain text HTTP endpoint using port 30275".
Known Security Issues
Known Vulnerabilities in Used Modules
Upgrade Notes
Deprecation Notes
Other
Version: 4.0.0
- Release Date
2019-06-06 (Dublin Release)
New Features
A summary of features includes:
Extend OOF to support traffic distribution optimization
Implement encryption for HAS internal and external communication
- Platform Maturity Level 1
~56.2%+ unit test coverage
The Dublin release for OOF delivered the following Epics.
[OPTFRA-424] Extend OOF to support traffic distribution optimization
[OPTFRA-422] Move OOF projects’ CSIT to run on OOM
[OPTFRA-270] This epic captures stories related to maintaining current S3P levels of the project as new functional requirements are supported
- Bug Fixes
OPTFRA-515 Pod oof-has-controller is in CrashLoopBackOff after ONAP deployment
OPTFRA-513 OOF-HAS pods fail to come up in ONAP deployment
OPTFRA-492 HAS API pod failure
OPTFRA-487 OOF HAS CSIT failing with HTTPS changes
OPTFRA-475 Remove Casablanca jobs in preparation for Dublin branch
OPTFRA-467 Remove aai simulator code from HAS solver
OPTFRA-465 Fix data code smells
OPTFRA-461 Enable HTTPS and TLS for HAS API
OPTFRA-452 Remove misleading reservation logic
OPTFRA-449 Create OOM based CSIT for HAS
OPTFRA-448 Multiple Sonar Issues
OPTFRA-445 Modify HAS Data component to support new A&AI requests required by Distribute Traffic functionality
OPTFRA-444 Implement Distribute Traffic API exposure in HAS
OPTFRA-412 Got ‘NoneType’ error when there’s no flavor info inside vim
OPTFRA-411 latency_country_rules_loader.py - Remove the unused local variable “ctx”.
OPTFRA-302 Enhance coverage of existing HAS code to 55%
Known Issues
These are all issues with fix version: Dublin Release and status: open, in-progress, reopened
OPTFRA-494 HAS request ‘limit’ argument is ignored.
Security Issues
Fixed Security Issues
Known Security Issues
[OJSI-137] In default deployment OPTFRA (oof-has-api) exposes HTTP port 30275 outside of cluster.
Known Vulnerabilities in Used Modules
OPTFRA code has been formally scanned during build time using NexusIQ and no Critical vulnerability was found.
- Quick Links:
Upgrade Notes To upgrade, run docker container or install from source, See Distribution page
Deprecation Notes No features deprecated in this release
Other None
Version: 3.0.1
- Release Date
2019-01-31 (Casablanca Maintenance Release)
The following items were deployed with the Casablanca Maintenance Release:
New Features
None.
Bug Fixes
[OPTFRA-401] - Need flavor id while launching vm.
Version: 3.0.0
- Release Date
2018-11-30 (R3 Casablanca Release)
New Features
A summary of features includes:
- Security enhancements, including integration with AAF to implement access controls on
OSDF and HAS northbound interfaces
Integration with SMS
- Platform Maturity Level 1
~50%+ unit test coverage
- Hardware Platform Awareness Enhancements
Added support for SRIOV-NIC and directives to assist the orchestrator
Select the best candidate across all cloud region based on HPA score.
HPA metrics using prometheus
The Casablanca release for OOF delivered the following Epics.
OPTFRA-106 - OOF Functional Testing Related User Stories and Tasks
OPTFRA-266 - Integrate OOF with Certificate and Secret Management Service (CSM)
OPTFRA-267 - OOF - HPA Enhancements
OPTFRA-269 - This epic covers the work to get the OOF development platform ready for Casablanca development
OPTFRA-270 - This epic captures stories related to maintaining current S3P levels of the project as new functional requirements are supported
OPTFRA-271 - This epic spans the work to progress further from the current security level
OPTFRA-272 - This epic spans the work to progress further from the current Performance level
OPTFRA-273 - This epic spans the work to progress further from the current Manageability level
OPTFRA-274 - This epic spans the work to progress further from the current Usability level
OPTFRA-275 - This epic spans the stories to improve deployability of services
OPTFRA-276 - Implementing a POC for 5G SON Optimization
OPTFRA-298 - Should be able to orchestrate Cross Domain and Cross Layer VPN
Bug Fixes
OPTFRA-205 - Generated conductor.conf missing configurations
OPTFRA-210 - Onboarding to Music error
OPTFRA-211 - Error solution for HPA
OPTFRA-249 - OOF does not return serviceResourceId in homing solution
OPTFRA-259 - Fix intermittent failure of HAS CSIT job
OPTFRA-264 - oof-has-zookeeper image pull error
OPTFRA-305 - Analyze OOM health check failure
OPTFRA-306 - OOF-Homing fails health check in HEAT deployment
OPTFRA-321 - Fix osdf functional tests script to fix builder failures
OPTFRA-323 - Cannot resolve multiple policies with the same ‘hpa-feature’ name
OPTFRA-325 - spelling mistake
OPTFRA-326 - hyperlink links are missing
OPTFRA-335 - Making flavors an optional field in HAS candidate object
OPTFRA-336 - OOM oof deployment failure on missing image - optf-osdf:1.2.0
OPTFRA-338 - Create authentication key for OOF-VFC integration
OPTFRA-341 - Cannot support multiple candidates for one feature in one flavor
OPTFRA-344 - Fix broken HPA CSIT test
OPTFRA-354 - Generalize the logic to process Optimization policy
OPTFRA-358 - Tox fails with the AttributeError: ‘module’ object has no attribute ‘MUSIC_API’
OPTFRA-359 - Create index on plans table for HAS
OPTFRA-362 - AAF Authentication CSIT issues
OPTFRA-365 - Fix Jenkins jobs for CMSO
OPTFRA-366 - HAS CSIT issues
OPTFRA-370 - Update the version of the OSDF and HAS images
OPTFRA-374 - ‘ModelCustomizationName’ should be optional for the request
OPTFRA-375 - SO-OSDF request is failing without modelCustomizationName value
OPTFRA-384 - Generate and Validate Policy for vFW testing
OPTFRA-385 - resourceModelName is sent in place of resourceModuleName
OPTFRA-388 - Fix OOF to handle sdnr/configdb api changes
OPTFRA-395 - CMSO - Fix security violations and increment version
Known Issues
These are all issues with fix version: Casablanca Release and status: open, in-progress, reopened
OPTFRA-401 - Need flavor id while launching vm
OPTFRA-398 - Add documentation for OOF-VFC interaction
OPTFRA-393 - CMSO Implement code coverage
OPTFRA-383 - OOF 7 of 8 pods are not starting in a clean master 20181029
OPTFRA-368 - Remove Beijing repositories from CLM jenkins
OPTFRA-337 - Document new transitions in HAS states
OPTFRA-331 - Role-based access controls to OOF
OPTFRA-329 - role based access control for OSDF-Policy interface
OPTFRA-316 - Clean up hard-coded references to south bound dependencies
OPTFRA-314 - Create user stories for documenting new APIs defined for OOF
OPTFRA-304 - Code cleaning
OPTFRA-300 - Fix Heat deployment scripts for OOF
OPTFRA-298 - Should be able to orchestrate Cross Domain and Cross Layer VPN
OPTFRA-297 - OOF Should support Cross Domain and Cross Layer VPN
OPTFRA-296 - Support SON (PCI) optimization using OSDF
OPTFRA-293 - Implement encryption for all OSDF internal and external communication
OPTFRA-292 - Implement encryption for all HAS internal and external communication
OPTFRA-279 - Policy-based capacity check enhancements
OPTFRA-276 - Implementing a POC for 5G SON Optimization
OPTFRA-274 - This epic spans the work to progress further from the current Usability level
OPTFRA-273 - This epic spans the work to progress further from the current Manageability level
OPTFRA-272 - This epic spans the work to progress further from the current Performance level
OPTFRA-271 - This epic spans the work to progress further from the current security level
OPTFRA-270 - This epic captures stories related to maintaining current S3P levels of the project as new functional requirements are supported
OPTFRA-269 - This epic covers the work to get the OOF development platform ready for Casablanca development
OPTFRA-268 - OOF - project specific enhancements
OPTFRA-266 - Integrate OOF with Certificate and Secret Management Service (CSM)
OPTFRA-262 - ReadTheDoc - update for R3
OPTFRA-260 - Testing vCPE flows with multiple clouds
OPTFRA-240 - Driving Superior Isolation for Tiered Services using Resource Reservation – Optimization Policies for Residential vCPE
OPTFRA-223 - On boarding and testing AAF certificates for OSDF
Security Issues
OPTFRA code has been formally scanned during build time using NexusIQ and no Critical vulnerability was found.
- Quick Links:
Upgrade Notes To upgrade, run docker container or install from source, See Distribution page
Deprecation Notes No features deprecated in this release
Other None
Version: 2.0.0
- Release Date
2018-06-07 (Beijing Release)
New Features
The ONAP Optimization Framework (OOF) is new in Beijing. A summary of features includes:
- Baseline HAS functionality
support for VCPE use case
support for HPA (Hardware Platform Awareness)
Integration with OOF OSDF, SO, Policy, AAI, and Multi-Cloud
- Platform Maturity Level 1
~50%+ unit test coverage
The Beijing release for OOF delivered the following Epics.
[OPTFRA-2] - On-boarding and Stabilization of the OOF seed code
[OPTFRA-6] - Integrate OOF with other ONAP components
[OPTFRA-7] - Integration with R2 Use Cases [HPA, Change Management, Scaling]
[OPTFRA-20] - OOF Adapters for Retrieving and Resolving Policies
[OPTFRA-21] - OOF Packaging
[OPTFRA-28] - OOF Adapters for Beijing Release (Policy, SDC, A&AI, Multi Cloud, etc.)
[OPTFRA-29] - Policies and Specifications for Initial Applications [Change Management, HPA]
[OPTFRA-32] - Platform Maturity Requirements for Beijing release
[OPTFRA-33] - OOF Support for HPA
[OPTFRA-105] - All Documentation Related User Stories and Tasks
Bug Fixes
None. Initial release R2 Beijing. No previous versions
Known Issues
[OPTFRA-179] - Error solution for HPA
[OPTFRA-205] - Onboarding to Music error
[OPTFRA-210] - Generated conductor.conf missing configurations
[OPTFRA-211] - Remove Extraneous Flavor Information from cloud-region cache
Security Issues
OPTFRA code has been formally scanned during build time using NexusIQ and no Critical vulnerability was found.
- Quick Links:
Upgrade Notes None. Initial release R2 Beijing. No previous versions
Deprecation Notes None. Initial release R2 Beijing. No previous versions
Other None
Upgrade Strategy
HAS can be upgraded in place (remove and replace) or using a blue-green strategy.
There is no database migration required.
Supporting Facts
HAS only stores the info and status of the incoming homing requests. It leverages MUSIC APIs for storing this information. It also leverages MUSIC for communication among the HAS components. So, redeploying HAS will not impact the data stored in MUSIC.