Music Developer Documentation¶
Installation¶
Single VM/Site install¶
Local Installation¶
Prerequisites
If you are using a VM, make sure it has at least 8 GB of RAM (it may work with 4 GB, but 2 GB causes issues).
Instructions
Create the MUSIC install directory /opt/app/music.
Open /etc/hosts as sudo and add the name of the VM alongside localhost in the line for 127.0.0.1, e.g. 127.0.0.1 localhost music-1. Some of the apt-get installations seem to require this.
Ensure you have OpenJDK 8 on your machine.
Download Apache Cassandra 3.0, install it into /opt/app/music, and follow the instructions at http://cassandra.apache.org/doc/latest/getting_started/installing.html up to and including Step
By the end of this you should have Cassandra working.
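As an illustration, here is a minimal download-and-extract sketch, assuming a tarball install into /opt/app/music; the exact 3.0.x version and mirror URL are placeholders and should match the official Cassandra download page:
# Sketch only: adjust the version to the 3.0.x release you downloaded.
CASSANDRA_VERSION=3.0.<x>          # placeholder version
cd /opt/app/music
# The tarball is assumed to have been downloaded from the Apache Cassandra site.
tar -xzf apache-cassandra-${CASSANDRA_VERSION}-bin.tar.gz
cd apache-cassandra-${CASSANDRA_VERSION}
bin/cassandra                      # start Cassandra (runs in the background by default)
bin/nodetool status                # verify the node reports Up/Normal (UN)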
Create a music.properties file and place it in /opt/app/music/etc/. Here is a sample of the file:
music.properties:
my.id=0
all.ids=0
my.public.ip=localhost
all.public.ips=localhost
#######################################
# Optional current values are defaults
#######################################
# If using docker this would point to the specific docker name.
#cassandra.host=localhost
#music.ip=localhost
#debug=true
#music.rest.ip=localhost
#lock.lease.period=6000
# Cassandra Login - Do not use cassandra/cassandra
cassandra.user=cassandra1
cassandra.password=cassandra1
# AAF Endpoint
#aaf.endpoint.url=<aaf url>
Make a /opt/app/music/logs directory; a MUSIC directory containing the MUSIC logs will be created there after MUSIC starts.
Build the MUSIC.war and place it in the Tomcat webapps directory.
For Authentication/AAF setup, see Authentication.
Start Tomcat and you should now have MUSIC running (a deployment sketch follows below).
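A minimal deployment sketch for the steps above; the Tomcat location ($CATALINA_HOME) and the path to the built MUSIC.war are assumptions and should match your environment:
# Sketch: create the MUSIC directories (adjust ownership to the user running Tomcat/MUSIC).
sudo mkdir -p /opt/app/music/etc /opt/app/music/logs
# Place the configuration file created above.
sudo cp music.properties /opt/app/music/etc/
# Deploy the war built from the MUSIC source (see Build Music) and start Tomcat.
cp MUSIC.war "$CATALINA_HOME/webapps/"
"$CATALINA_HOME/bin/startup.sh"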
Extra Cassandra information for Authentication:
To create the first user in Cassandra:
Edit the conf/cassandra.yaml file:
authenticator: PasswordAuthenticator
authorizer: CassandraAuthorizer
Restart Cassandra
Log in to cqlsh with the default credentials:
cqlsh -u cassandra -p cassandra
To change the default user, create a new user with the following command (a scripted sketch follows at the end of this subsection):
CREATE USER new_user WITH PASSWORD 'new_password' SUPERUSER;
Change the password for the default user 'cassandra' so that no one will be able to log in with it:
ALTER USER cassandra WITH PASSWORD 'SomeLongRandomStringNoonewillthinkof';
Provide the new user credentials to MUSIC. Update the music.properties file and uncomment or add the following:
cassandra.user=<new_user>
cassandra.password=<new_password>
To access a keyspace through cqlsh, log in with the credentials that are passed to MUSIC when creating the keyspace.
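If you prefer to script the two user statements above, here is a minimal sketch, run from the Cassandra bin directory (new_user and new_password are placeholders as above):
# Sketch: create the new superuser and lock out the default account in one go.
./cqlsh -u cassandra -p cassandra <<'EOF'
CREATE USER new_user WITH PASSWORD 'new_password' SUPERUSER;
ALTER USER cassandra WITH PASSWORD 'SomeLongRandomStringNoonewillthinkof';
EOF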
Continue with Authentication
Multi-site or Local Cluster¶
Follow the instructions for the local MUSIC installation on all the machines/VMs/hosts (each referred to as a node) on which you want MUSIC installed. However, Cassandra and Zookeeper need to be configured as multi-node installations (instructions below) before running them.
Cassandra:¶
In the cassandra.yaml file, which is present in the cassa_install/conf directory on each node, set the following parameters:
cassandra.yaml:
cluster_name: 'name of cluster'
#...
num_tokens: 256
#...
seed_provider:
    - class_name: org.apache.cassandra.locator.SimpleSeedProvider
      parameters:
          - seeds: "<public ip of first seed>, <public ip of second seed>, etc"
#...
listen_address: <private ip of VM>
#...
broadcast_address: <public ip of VM>
#...
endpoint_snitch: GossipingPropertyFileSnitch
#...
rpc_address: <private ip>
#...
phi_convict_threshold: 12
In the cassandra-rackdc.properties file, assign data center and rack names if required (this is for a multi-data-center install).
Once this is done on all the nodes, you can run Cassandra on each node from the Cassandra bin folder with this command:
./cassandra
From the Cassandra bin folder, run the following to check the state of the cluster:
./nodetool status
To access Cassandra on any of the nodes, run the following and then perform CQL queries:
./cqlsh <private ip>
Extra Cassandra information for Authentication:¶
To create the first user in Cassandra:
Edit the conf/cassandra.yaml file:
authenticator: PasswordAuthenticator
authorizer: CassandraAuthorizer
Restart Cassandra
Log in to cqlsh with the default credentials:
cqlsh -u cassandra -p cassandra
To change the default user, create a new user with the following command:
CREATE USER new_user WITH PASSWORD 'new_password' SUPERUSER;
Change the password for the default user 'cassandra' so that no one will be able to log in with it:
ALTER USER cassandra WITH PASSWORD 'SomeLongRandomStringNoonewillthinkof';
Provide the new user credentials to MUSIC. Update the music.properties file and uncomment or add the following:
cassandra.user=<new_user>
cassandra.password=<new_password>
To access a keyspace through cqlsh, log in with the credentials that are passed to MUSIC when creating the keyspace.
MUSIC: Create a music.properties file and place it in /opt/app/music/etc on each node. Here is a sample of the file:
music.properties:
my.id=0
all.ids=0
my.public.ip=localhost
all.public.ips=localhost
#######################################
# Optional current values are defaults
#######################################
# If using docker this would point to the specific docker name.
#zookeeper.host=localhost
#cassandra.host=localhost
#music.ip=localhost
#debug=true
#music.rest.ip=localhost
#lock.lease.period=6000
# Cassandra Login - Do not use cassandra/cassandra
cassandra.user=cassandra1
cassandra.password=cassandra1
# AAF Endpoint
#aaf.endpoint.url=<aaf url>
Build the MUSIC.war (see Build Music) and place it within the webapps folder of the Tomcat installation.
Start Tomcat and you should now have MUSIC running.
For logging, create a /opt/app/music/logs directory. When MUSIC/Tomcat starts, a MUSIC directory with various logs will be created there.
Build Music¶
Code can be downloaded from the MUSIC Gerrit repository. To build, you will need to ensure your Maven settings use the ONAP settings.xml (see Workspace and Development Tools).
Once you have done that, run the following:
# If you installed settings.xml in your ~/.m2 folder
mvn clean package
# If you placed the settings.xml elsewhere:
mvn clean package -s /path/to/settings.xml
After the build completes, you will find MUSIC.war in the ./target folder.
There is a folder called postman that contains a Postman collection for testing with Postman.
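As an end-to-end illustration, here is a minimal sketch assuming the standard ONAP Gerrit clone URL for the music repository:
# Sketch: fetch the MUSIC source and build the war.
git clone "https://gerrit.onap.org/r/music"
cd music
mvn clean package -s /path/to/settings.xml   # omit -s if settings.xml is in ~/.m2
ls target/MUSIC.war                          # copy this into Tomcat's webapps folder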
Continue with Authentication
Setup for Developing MUSIC¶
Authentication¶
Steps to test AAF:
MUSIC has been enhanced to support both applications that are already authenticated using AAF and applications that are not authenticated using AAF.
If an application is already using AAF, it should have the required namespace, userId and password.
Non-AAF applications (AID) work just like AAF applications, except that the namespace is an app name and MUSIC, rather than AAF, manages the user.
All the required parameters should be sent as headers.
Changes in Cassandra: the admin needs to create the following keyspace and table.
In the Cassandra bin directory, run ./cqlsh, log in to the database, and then run the statements below.
If you want to save the statements in a file, you can instead run ./cqlsh -f <file.cql> (see the sketch after the CQL below).
Single-Site Install¶
//Create Admin Keyspace
CREATE KEYSPACE admin
    WITH REPLICATION = {
        'class' : 'SimpleStrategy',
        'replication_factor': 1
    }
    AND DURABLE_WRITES = true;
CREATE TABLE admin.keyspace_master (
    uuid uuid,
    keyspace_name text,
    application_name text,
    is_api boolean,
    password text,
    username text,
    is_aaf boolean,
    PRIMARY KEY (uuid)
);
Multi-Site Install¶
//Create Admin Keyspace
CREATE KEYSPACE admin
    WITH REPLICATION = {
        'class' : 'NetworkTopologyStrategy',
        'DC1' : 2
    }
    AND DURABLE_WRITES = true;
CREATE TABLE admin.keyspace_master (
    uuid uuid,
    keyspace_name text,
    application_name text,
    is_api boolean,
    password text,
    username text,
    is_aaf boolean,
    PRIMARY KEY (uuid)
);
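As mentioned above, the statements can instead be applied from a file; here is a minimal sketch from the Cassandra bin directory, assuming a hypothetical file name admin_setup.cql containing the keyspace and table definitions for your install type:
# Sketch: apply the admin keyspace/table definitions from a file.
# The -u/-p options are only needed if Cassandra authentication is enabled.
./cqlsh -u <new_user> -p <new_password> -f admin_setup.cql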
Headers¶
For AAF applications, all three headers (ns, userId and password) are mandatory.
For non-AAF applications, if aid is not provided, MUSIC creates a new random unique UUID and returns it to the caller.
The caller application then needs to save the UUID and pass it in order to further modify/access the keyspace.
Required Headers
AAF Authentication¶
Key : Value : Description
ns : org.onap.aaf : AAF namespace
userId : username : User ID
password : password : Password of the user
AID Authentication Non-AAF¶
Key : Value : Description
ns : App Name : App name
userId : username : Username for this user (required during Create Keyspace only)
password : password : Password for this user (required during Create Keyspace only)
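For illustration only, a curl sketch showing how these headers might be passed on a MUSIC REST call; the host, port and <resource> path are placeholders, not a documented endpoint:
# Sketch: substitute a real MUSIC resource path and host/port for your deployment.
curl -X POST "http://<music-host>:8080/MUSIC/rest/v2/<resource>" \
     -H "Content-Type: application/json" \
     -H "ns: <namespace or app name>" \
     -H "userId: <username>" \
     -H "password: <password>" \
     -d '{}'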
Onboarding API¶
Add Application¶
POST URL: /MUSIC/rest/v2/admin/onboardAppWithMusic with JSON as follows:
{
"appname": "<the Namespace for aaf or the Identifier for the specific app using AID access",
"userId" : "<userid>",
"isAAF" : true/false,
"password" : ""
}
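A curl sketch for the onboarding call above; the host and port are assumptions for a local Tomcat deployment:
# Sketch: onboard an application with MUSIC (adjust host/port to your deployment).
curl -X POST "http://localhost:8080/MUSIC/rest/v2/admin/onboardAppWithMusic" \
     -H "Content-Type: application/json" \
     -d '{
           "appname": "<namespace or app identifier>",
           "userId": "<userid>",
           "isAAF": false,
           "password": "<password>"
         }'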
Get Application¶
POST URL: /MUSIC/rest/v2/admin/search with JSON as follows:
{
"appname": "<the Namespace for aaf or the Identifier for the specific app using AID access",
"isAAF" : true/false,
"aid" : "Unique ID for this user"
}
Edit Application¶
PUT URL: /MUSIC/rest/v2/admin/onboardAppWithMusic with JSON as follows:
{
"aid" : "Unique ID for this user",
"appname": "<the Namespace for aaf or the Identifier for the specific app using AID access",
"userId" : "<userid>",
"isAAF" : true/false,
"password" : ""
}
Delete Application¶
DELETE URL: /MUSIC/rest/v2/admin/onboardAppWithMusic with JSON as follows:
{
"aid" : "Unique ID for this app"
}
MUSIC is to be installed in a single directory on a VM.
The main MUSIC directory should be:
/opt/app/music
# These also need to be set up
/opt/app/music/etc
/opt/app/music/logs
When installing, Cassandra should also be installed here:
/opt/app/music/apache-cassandra-n.n.n
You could also create links from the install directories to a common name, i.e.:
ln -s /opt/app/music/apache-cassandra-n.n.n cassandra
Cassandra has data directories:
# For Cassandra it should be (this is the default):
/opt/app/music/cassandra/data
Continue by selecting the link to the setup you are doing.
Release Notes¶
Initial Release for Frankfurt
Version: 3.2.40¶
Release Date: 2020-05-20
New Features
MUSIC now runs on a Spring Boot server instead of a standalone Tomcat server
HTTPS support for clients through AAF certificates
A background lock clean up daemon will periodically check the status of current locks, cleaning up ‘stale’ references. Clients should see a performance boost if they were previously dealing with many stale locks.
Improved error messaging to the user, allowing clients to better debug their applications
Continued adherence to ONAP S3P requirements
Bug Fixes
Known Issues: N/A
Security Notes
MUSIC code has been formally scanned during build time using NexusIQ, and all Critical vulnerabilities have been addressed; items that remain open have been assessed for risk and determined to be false positives. The MUSIC open Critical security vulnerabilities and their risk assessment have been documented as part of the project.
Upgrade Notes
N/A
Deprecation Notes
N/A
Other
N/A
End of Release Notes
Architecture¶
Project Description¶
To achieve five 9s of availability on three 9s or lower software and infrastructure in a cost-effective manner, ONAP components need to work in a reliable, active-active manner across multiple sites (platform-maturity resiliency level 3). A fundamental aspect of this is state management across geo-distributed sites in a reliable, scalable, highly available and efficient manner. This is an important and challenging problem because of three fundamental reasons:
Current solutions for state-management of ONAP components like MariaDB clustering, that work very effectively within a site, may not scale across geo-distributed sites (e.g., Beijing, Amsterdam and Irvine) or allow partitioned operation (thereby compromising availability). This is mainly because WAN latencies are much higher across sites and frequent network partitions can occur.
ONAP components often have a diverse range of requirements in terms of state replication. While some components need to synchronously manage state across replicas, others may tolerate asynchronous replication. This diversity needs to be leveraged to provide better performance and higher availability across sites.
ONAP components often need to partition state across different replicas, perform consistent operations on them and ensure that, on failover, the new owner has access to the latest state. The distributed protocols to achieve such consistent ownership are complex and replete with corner cases, especially in the face of network partitions. Currently, each component is building its own handcrafted solution, which is wasteful and, worse, can be erroneous.
In this project, we identify common state management concerns across ONAP components and provide a multi-site state coordination/management service (MUSIC) with a rich suite of recipes that each ONAP component can simply configure and use for their state-management needs.
Functionality¶
At its core, MUSIC provides a scalable sharded eventually-consistent data-store (Cassandra) wherein the access to the keys can be protected using a locking service (built on Zookeeper) that is tightly coupled with the data-store. ONAP components can use the MUSIC API directly to store and access their state across geo-distributed sites. This API enables ONAP components to achieve fine-grained flexible consistency on their state.
MUSIC also provides a rich set of recipes (mdbc, prom, musicCAS, musicQ) that ONAP components can use to build multi-site active-active/active-passive/federated state-management solutions:
mdbc: The most crucial recipe is a multi-site database cache (mdbc) that enables ONAP components that maintain state in a SQL database to obtain the benefits of MUSIC without compromising their need to use transactional SQL DBs. These ONAP components can rely on existing db clustering techniques like MariaDB for transactionality and complex querying within a site. mdbc intercepts each of these read/write calls to the db cluster and mirrors this state to other geo-distributed sites through MUSIC, either synchronously or asynchronously (configurable at a table level). For example, components like the ONAP Service Orchestrator that use MariaDB to maintain state can directly use this recipe with no change to their SQL code.
prom: MUSIC provides a recipe for policy-driven ownership management (prom) of state for ONAP components to (1) partition state across replicas during both initial placement and during failures based on their individual policies (2) ensure correct transfer of state ownership across replicas during site failures and network partitions (3) ensure that the new owner has access to the most recent version of state (if needed).
musicCAS: The distributed compare-and-set is a powerful primitive that allows ONAP components to update shared state in an atomic manner. This is currently used by ONAP HAS (homing service), which is structured as a job scheduler with multiple workers trying to pick up client-submitted jobs while ensuring that only one of them actually performs each job.
musicQ: Implementing a geo-distributed queue is a hard problem with many performance implications when data belonging to a queue is sharded across nodes. MUSIC provides a queue API that carefully structures the data to ensure good performance. ONAP HAS (mentioned above) uses this as its job queue.
Scope¶
MUSIC is a shared service with recipes that individual ONAP components and micro-services can use for state replication, consistency management and state ownership across geo-distributed sites. MUSIC makes sure that the right service data is available at the right place and at the right time to enable federated active-active operation of ONAP. For example, we envisage the use of MUSIC for multi-site state management in SO (to store Camunda state across sites), <SDN-C, AppC> (to store ODL-related state across sites), A&AI (to store its graph data) and most other ONAP components that need to manage state across sites.
Out of Scope¶
While MUSIC is an optional solution for state-management of ONAP components across sites, OOM will continue to manage component level and platform level deployment, scalability, redundancy, resiliency, self-healing and high availability on top of Kubernetes across sites for ONAP.
Usage¶
MUSIC and its recipes export a REST API, apart from mdbc, which is implemented as a JDBC driver to enable seamless integration with SQL-based ONAP components.
Architecture¶
The figures below show how MUSIC can be used in a general context and also provide a specific example of its potential usage in ONAP SO.

Logging¶
Log files are produced in /opt/app/music/logs/MUSIC/ (music.log, error.log, debug.log). Log files are in EELF format.
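To follow the main application log while MUSIC is running, a minimal sketch assuming the default location above:
# Follow the MUSIC application log (EELF format).
tail -f /opt/app/music/logs/MUSIC/music.log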
Where to Access Information¶
Error / Warning Messages¶
Configuration¶
See the following pages for Configuration Information: