Running and Interfacing with Apache Unomi 1.4 on Ubuntu
This section is a quick tutorial showing how to install Apache Unomi 1.4 on Ubuntu and interface with it, demonstrating Unomi's core features along the way.
Install Java 8
Unomi 1.4 requires Java 8. Use this command to install this specific version:
apt install openjdk-8-jdk
Set your JAVA_HOME by editing /etc/environment:
vi /etc/environment
and add these two lines below what is already there:
JAVA_HOME="/usr/lib/jvm/java-8-openjdk-amd64/"
PATH=$JAVA_HOME/bin:$PATH
The value of JAVA_HOME may vary. You can review the output of the apt install command to see where Java was installed. Now reload the environment:
source /etc/environment
Installing ElasticSearch 5.6.3
Unomi 1.4 requires ElasticSearch version 5.6.3. Use these commands to install this specific version:
apt-get update && apt-get -y install apt-transport-https curl wget
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-5.6.3.deb
dpkg -i elasticsearch-5.6.3.deb
Now, edit the ElasticSearch configuration:
vi /etc/elasticsearch/elasticsearch.yml
Uncomment and edit the cluster.name line to:
cluster.name: contextElasticSearch
Now start and check the status of ElasticSearch to confirm it is running:
service elasticsearch start
service elasticsearch status
Installing Unomi 1.4
You can install a binary distribution from any of the Apache mirrors. Download and extract the files, then run Unomi using Karaf:
wget http://apache.mirrors.pair.com/incubator/unomi/1.4.0/unomi-1.4.0-bin.tar.gz
tar -xzf unomi-1.4.0-bin.tar.gz
After it is extracted, move it into /opt/unomi:
mkdir /opt/unomi
mv unomi-1.4.0-incubating/* /opt/unomi
Start Unomi
Next, start Unomi from the terminal:
/opt/unomi/bin/karaf
In the Karaf terminal, run unomi:start:
karaf@root()> unomi:start
Installing Unomi as a Service
You can install Unomi as a service using Karaf's Service Wrapper.
From the Karaf command line:
karaf@root()> feature:install wrapper
karaf@root()> wrapper:install
The output from the wrapper:install command will include instructions for finishing the installation and starting/stopping Karaf.
Interfacing with Unomi
Below are some Python scripts that demonstrate how to interface with Unomi.
You can check some endpoints in a web browser; the default username and password are both `karaf`:
https://localhost:9443/cxs/cluster
http://localhost:8181/context.js?sessionId=1234
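These checks can also be scripted. Below is a minimal sketch using Python's requests library, assuming the default ports, paths, and karaf/karaf credentials shown above (verify=False is needed because port 9443 serves a self-signed certificate):

```python
import requests

AUTH = ("karaf", "karaf")  # default Karaf credentials

def check_cluster():
    """GET the cluster endpoint over HTTPS (self-signed certificate)."""
    return requests.get("https://localhost:9443/cxs/cluster",
                        auth=AUTH, verify=False)

def context_js_url(session_id, base="http://localhost:8181"):
    """Build the context.js URL for a given session ID."""
    return f"{base}/context.js?sessionId={session_id}"

# With Unomi running:
#   print(check_cluster().status_code)
#   print(requests.get(context_js_url("1234")).status_code)
```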
Create a New Profile
Run the Python code to create a new profile (use Python 3):
import requests

"""
Make a request to Unomi to create a profile with ID = 10
"""
r = requests.post('http://localhost:8181/cxs/profiles/',
                  auth=('karaf', 'karaf'),
                  json={
                      "itemId": "10",
                      "itemType": "profile",
                      "version": None,
                      "properties": {
                          "firstName": "John",
                          "lastName": "Smith"
                      },
                      "systemProperties": {},
                      "segments": [],
                      "scores": {},
                      "mergedWith": None,
                      "consents": {}
                  })
print(r)
print(r.content)
This creates a profile with ID 10. You can view this profile with the GET /cxs/profile/{profile_id} endpoint in the browser:
http://localhost:8181/cxs/profile/10
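The same lookup can be scripted. A minimal sketch using requests, assuming Unomi is running locally with the default credentials:

```python
import requests

def profile_url(profile_id, base="http://localhost:8181"):
    """Build the GET /cxs/profile/{profile_id} URL."""
    return f"{base}/cxs/profile/{profile_id}"

def fetch_profile(profile_id):
    """GET a profile from a running Unomi instance."""
    return requests.get(profile_url(profile_id), auth=("karaf", "karaf"))

# With Unomi running:
#   r = fetch_profile("10")
#   print(r.status_code, r.json())
```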
Create a New Profile and Session
Run the Python code to create a new profile and session (use Python 3):
from requests import post
from datetime import datetime

profile_id = "10"

profile = {
    "itemId": profile_id,
    "itemType": "profile",
    "version": None,
    "properties": {
        "firstName": "John",
        "lastName": "Smith"
    },
    "systemProperties": {},
    "segments": [],
    "scores": {},
    "mergedWith": None,
    "consents": {}
}

session = {
    "itemId": "101",
    "itemType": "session",
    "scope": None,
    "version": 1,
    "profileId": profile_id,
    "profile": profile,
    "properties": {},
    "systemProperties": {},
    "timeStamp": datetime.utcnow().strftime("%Y-%m-%dT%H:%M:%SZ")
}

# Create or update the profile
r = post('http://localhost:8181/cxs/profiles/',
         auth=('karaf', 'karaf'),
         json=profile)
print(r)
print(r.content)

# Create the session
r = post('http://localhost:8181/cxs/profiles/sessions/101',
         auth=('karaf', 'karaf'),
         json=session)
print(r)
print(r.content)
This creates a session with ID 101 and a profile with ID 10. You can view the profile's sessions with the GET /cxs/profiles/{profile_id}/sessions endpoint in the browser:
http://localhost:8181/cxs/profiles/10/sessions/
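The session lookup can also be done from Python. A small sketch assuming the server and IDs used above:

```python
import requests

def sessions_url(profile_id, base="http://localhost:8181"):
    """Build the GET /cxs/profiles/{profile_id}/sessions URL."""
    return f"{base}/cxs/profiles/{profile_id}/sessions/"

def fetch_sessions(profile_id):
    """List the sessions attached to a profile (requires a running Unomi)."""
    return requests.get(sessions_url(profile_id), auth=("karaf", "karaf"))

# With Unomi running:
#   r = fetch_sessions("10")
#   print(r.status_code, r.json())
```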
Create a New Rule
Run the Python code to create a new rule (use Python 3):
import requests

"""
Make a request to Unomi to create a rule that marks profiles as "eligible = yes"
when annualIncome < 12000
"""
r = requests.post('http://localhost:8181/cxs/rules/',
                  auth=('karaf', 'karaf'),
                  json={
                      "metadata": {
                          "id": "eligibilityRule",
                          "name": "Example eligibility rule",
                          "description": "Profile annualIncome < 12000"
                      },
                      "condition": {
                          "parameterValues": {
                              "subConditions": [
                                  {
                                      "parameterValues": {
                                          "propertyName": "properties.annualIncome",
                                          "comparisonOperator": "lessThan",
                                          "propertyValueInt": 12000
                                      },
                                      "type": "profilePropertyCondition"
                                  },
                                  {
                                      "type": "profileUpdatedEventCondition",
                                      "parameterValues": {}
                                  }
                              ],
                              "operator": "and"
                          },
                          "type": "booleanCondition"
                      },
                      "actions": [
                          {
                              "parameterValues": {
                                  "setPropertyName": "properties.eligibility",
                                  "setPropertyValue": "yes"
                              },
                              "type": "setPropertyAction"
                          }
                      ]
                  })
print("Rule Response Code:", r)
print("Rule Response Content:", r.content)
"""
Make a request to Unomi to create a profile with annualIncome < 12000
"""
r = requests.post('http://localhost:8181/cxs/profiles/',
auth=('karaf','karaf'),
json ={
"itemId":"10",
"itemType":"profile",
"version":None,
"properties": {
"firstName": "John",
"lastName": "Smith",
"annualIncome": 10000
},
"systemProperties":{},
"segments":[],
"scores":{},
"mergedWith":None,
"consents":{}
})
print("Profile Response Code:", r)
print("Profile Response Content:", r.content)
This creates a rule with ID eligibilityRule and a profile with ID 10. You can view the rule with the GET /cxs/rules/{rule_id} endpoint in the browser:
http://localhost:8181/cxs/rules/eligibilityRule/
and you can view the profile which has been marked as eligible = "yes":
http://localhost:8181/cxs/profile/10
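The rule's effect can be verified from Python as well. A sketch assuming the rule and profile created above, with a small helper to inspect the returned JSON:

```python
import requests

def is_eligible(profile):
    """True once the rule above has set properties.eligibility to "yes"."""
    return profile.get("properties", {}).get("eligibility") == "yes"

def fetch_and_check(profile_id):
    """Fetch a profile from a running Unomi and report whether the rule fired."""
    r = requests.get(f"http://localhost:8181/cxs/profile/{profile_id}",
                     auth=("karaf", "karaf"))
    return is_eligible(r.json())

# With Unomi running:
#   print(fetch_and_check("10"))
```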
Running Unomi 1.3 using Docker
This section is a quick tutorial showing how to run Apache Unomi 1.3 in a Docker container.
Install Docker and Docker Compose
Before you get started, you will need to install Docker and Docker Compose on your machine; installation instructions are in the official Docker documentation. Docker for Mac and Docker Toolbox already include Docker Compose.
About the Required Images
Unomi requires ElasticSearch, so this setup uses an Elasticsearch image provided by Elasticsearch B.V. and a Unomi Docker image maintained by the community.
Create a Docker Compose Configuration
Create a new directory and add a docker-compose.yaml file. Then copy the code below into that file.
version: '2.2'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:5.6.3
    volumes: # Persist ES data in separate "esdata" volume
      - esdata1:/usr/share/elasticsearch/data
    environment:
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - discovery.type=single-node
      - xpack.security.enabled=false
      - cluster.name=contextElasticSearch
    ports: # Expose Elasticsearch ports
      - "9300:9300"
      - "9200:9200"
  unomi:
    image: mikeghen/unomi:1.3
    container_name: unomi
    environment:
      - ELASTICSEARCH_HOST=elasticsearch
      - ELASTICSEARCH_PORT=9300
    ports:
      - 8181:8181
      - 9443:9443
    links:
      - elasticsearch
    depends_on:
      - elasticsearch
volumes: # Define separate volume for Elasticsearch data
  esdata1:
    driver: local
The configuration above creates a single-node ElasticSearch container with persistent storage. It also creates a single Unomi container and links it to the ElasticSearch container so Unomi can reach it.
You can find the code for the Unomi image here: https://github.com/mikeghen/unomi-docker
Create the Environment with Docker Compose
To start everything, run this command from the directory containing the docker-compose.yaml file:
docker-compose up
Check Unomi Services are Running
You will need to wait a few minutes for ElasticSearch and Unomi to start up. Check that services are running locally by opening this URL in a browser:
http://localhost:8181/cxs
This checks Unomi. Once Unomi finishes starting up, it will respond with "Available RESTful services" and a list of services.
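If you want to wait for startup programmatically instead of refreshing the browser, a simple polling sketch in Python (the URL and marker string are the ones described above):

```python
import time
import requests

def unomi_ready(body):
    """Unomi's /cxs root lists its services once startup has finished."""
    return "Available RESTful services" in body

def wait_for_unomi(url="http://localhost:8181/cxs", interval=10):
    """Poll the /cxs root until Unomi responds and is fully started."""
    while True:
        try:
            if unomi_ready(requests.get(url).text):
                return
        except requests.ConnectionError:
            pass  # container still starting
        time.sleep(interval)
```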
You can check that ElasticSearch is running with this curl command:
curl http://localhost:9200/_cat/health?format=json
This will come back with a "yellow" status, which is expected since we're running only a single ElasticSearch node.
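The same health check can be done from Python. A sketch assuming the local ES container above; note that _cat/health?format=json returns a one-element JSON array:

```python
import requests

def cluster_status(health):
    """Extract the status field from a _cat/health?format=json response."""
    return health[0]["status"]

def check_elasticsearch(base="http://localhost:9200"):
    """Fetch cluster health from a running ElasticSearch node."""
    r = requests.get(f"{base}/_cat/health?format=json")
    return cluster_status(r.json())

# With ElasticSearch running:
#   print(check_elasticsearch())  # "yellow" is normal on a single node
```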