Reference
UniPipe Service Broker translates OSB API communication into committed files in a git repo. To automate CRUD operations on Service Instances and Bindings, you can build a CI/CD pipeline that acts upon commits to this git repo.
Below is the reference documentation for ways to interact with UniPipe Service Broker.
Git commit messages by UniPipe service broker always start with `OSB API:`.
UniPipe service broker writes the following git commit messages, depending on the event:
- Service Instance creation commits:
OSB API: Created Service instance {ServiceInstanceId}
- Service Instance deletion commits:
OSB API: Marked Service instance {ServiceInstanceId} as deleted.
- Service Binding creation commits:
OSB API: Created Service binding {ServiceBindingID}
- Service Binding deletion commits:
OSB API: Marked Service binding {ServiceBindingID} as deleted.
Reading the git commits of UniPipe service broker is useful for detecting when a service instance or service binding has been (de)provisioned.
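For example, a CI job could classify the latest commit into a pipeline action by matching the subject line. This is a minimal sketch; the `classify` helper and the action names are made up for illustration:

```shell
#!/bin/sh
# Classify a UniPipe commit message into a pipeline action (illustrative).
classify() {
  case "$1" in
    "OSB API: Created Service instance"*) echo "provision" ;;
    "OSB API: Marked Service instance"*)  echo "deprovision" ;;
    "OSB API: Created Service binding"*)  echo "bind" ;;
    "OSB API: Marked Service binding"*)   echo "unbind" ;;
    *)                                    echo "ignore" ;;
  esac
}

# In a real pipeline, feed it the latest commit subject of the instance repo:
#   classify "$(git log -1 --pretty=%s)"
classify "OSB API: Created Service instance 8f1a-example"  # prints "provision"
```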
The repository that stores the state of instances, bindings and the catalog is called the instance git repository. UniPipe service broker expects the following structure:
catalog.yml                  # contains all info about the services and plans this service broker provides
instances/
  <instance-id>/
    instance.yml             # contains all service instance info written by the UniPipe service broker
    status.yml               # contains the current status, updated by the pipeline
    g-metrics-%.yml          # optional: gauge metric files
    s-metrics-%.yml          # optional: sampling counter metric files
    p-metrics-%.yml          # optional: periodic counter metric files
    bindings/
      <binding-id>/
        binding.yml          # contains all binding info written by the UniPipe service broker
        status.yml           # contains the current status, updated by the pipeline
        credentials.yml      # optional: credentials for this binding, returned to the service marketplace/platform
Use unipipe cli to parse an instance repository:
unipipe list --help
The list of provided services and their plans is defined in the catalog.yml file. The following is an example catalog file:
services:
  - id: "d40133dd-8373-4c25-8014-fde98f38a728"
    name: "example-osb"
    description: "This service spins up a host with OpenStack and Cloud Foundry CLI installed."
    bindable: true
    tags:
      - "example"
    plans:
      - id: "a13edcdf-eb54-44d3-8902-8f24d5acb07e"
        name: "S"
        description: "A small host with OpenStack and CloudFoundry CLI installed"
        free: true
        bindable: true
      - id: "b387b010-c002-4eab-8902-3851694ef7ba"
        name: "M"
        description: "A medium host with OpenStack and CloudFoundry CLI installed"
        free: true
        bindable: true
Use unipipe cli to generate a more elaborate example file:
unipipe generate catalog
The YAML structure of instance.yml is based on the OSB spec (see expected_instance.yml for an example). You can use all information provided in this file in your CI/CD pipeline. The most essential properties, used in all service brokers, are `planId` and `deleted`. The `deleted` property indicates to the pipeline that this instance shall be deleted.
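In shell terms, a pipeline step could branch on this flag roughly like so. This is a sketch using grep on a sample file; for real pipelines, `unipipe transform` or a proper YAML parser is more robust:

```shell
#!/bin/sh
# Sketch: choose between provisioning and teardown based on instance.yml.
# The sample file below is illustrative.
cat > instance.yml <<'EOF'
planId: "a13edcdf-eb54-44d3-8902-8f24d5acb07e"
deleted: true
EOF

if grep -q '^deleted: true' instance.yml; then
  echo "destroy"   # run your teardown automation here
else
  echo "apply"     # run your provisioning automation here
fi
```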
Use unipipe cli to transform instance.yml files into file formats that match your needs:
unipipe transform --help
The binding.yml file is basically the same as instance.yml, but on the binding level. For an example file, see expected_binding.yml.
Credentials are only supported for service bindings. If you want to return outputs from your service bindings, simply create a credentials.yml file inside the binding's folder and put your credentials in it. UniPipe service broker will then automatically serve these credentials. Following the YAML standard, the credential format is `key: value`. These credentials can also be created with the unipipe cli: both the `unipipe browse` and `unipipe update` commands can be used to set binding credentials. For an example file, see expected_credentials.yml.
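For example, a pipeline step could write the file directly. This is a sketch; the instance/binding ids and credential keys are placeholders:

```shell
#!/bin/sh
# Sketch: write binding credentials so UniPipe service broker can serve them.
# Paths and key names below are placeholders.
binding_dir="instances/my-instance/bindings/my-binding"
mkdir -p "$binding_dir"
cat > "$binding_dir/credentials.yml" <<'EOF'
username: demo-user
password: s3cret-example
EOF
```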
This file contains the current status information and looks like this:
status: "in progress"
description: "Provisioning service instance"
The pipeline has to update this file. When the instance or binding has been processed successfully, the status must be updated to `succeeded`. In case of a failure, it must be set to `failed`. While the pipeline is still working, it may update the description to give the user more information about the progress of their request.
Use unipipe cli to write status.yml files:
unipipe update --help
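Writing the file directly from a pipeline step could look like this. This is a sketch; the `write_status` helper and the instance path are made up for illustration:

```shell
#!/bin/sh
# Sketch: report the pipeline outcome back into the instance repository.
# write_status <instance-dir> <status> <description>  (illustrative helper)
write_status() {
  cat > "$1/status.yml" <<EOF
status: "$2"
description: "$3"
EOF
}

mkdir -p instances/my-instance
if true; then   # stand-in for the real provisioning step
  write_status instances/my-instance succeeded "Service instance is ready"
else
  write_status instances/my-instance failed "Provisioning failed"
fi
```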
This file contains the exposed Gauge metrics for the service instance. The % part of the filename can be replaced with any other string value (e.g. g-metrics-myresource-2022-01-01.yml). If more than one Gauge metrics file is available, unipipe automatically merges the datasets on the exposed service definition endpoint. The filenames must start with g-metrics and end with .yml. After the file is created, it can be accessed with the following link pattern: https://your-unipipe-osb-endpoint.com/metrics/gauges/$service-definition-id (replace the $service-definition-id parameter with the Service Definition Id of your Service Instance).
serviceInstanceId: SERVICE_INSTANCE_ID
resource: RESOURCE_NAME
values: # EXAMPLE VALUES
  - writtenAt: '2022-01-07T12:34:33.616Z'
    observedAt: '2022-01-07T12:00:00Z'
    value: 7
  - writtenAt: '2022-01-07T12:34:33.616Z'
    observedAt: '2022-01-07T13:00:00Z'
    value: 1
  - writtenAt: '2022-01-07T12:34:33.616Z'
    observedAt: '2022-01-07T14:00:00Z'
    value: 5
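A pipeline step could accumulate datapoints in such a file like this. This is a sketch; the file name, resource name and values are placeholders:

```shell
#!/bin/sh
# Sketch: append a gauge datapoint for a service instance (placeholders).
mkdir -p instances/my-instance
file="instances/my-instance/g-metrics-cpu.yml"

# Write the YAML header on first use.
if [ ! -f "$file" ]; then
  printf 'serviceInstanceId: my-instance\nresource: cpu\nvalues:\n' > "$file"
fi

# Append one observation to the values list.
cat >> "$file" <<'EOF'
  - writtenAt: '2022-01-07T12:34:33.616Z'
    observedAt: '2022-01-07T12:00:00Z'
    value: 7
EOF
```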
This file contains the exposed PeriodicCounter metrics for the service instance. The % part of the filename can be replaced with any other string value (e.g. p-metrics-myresource-2022-01-01.yml). If more than one PeriodicCounter metrics file is available, unipipe automatically merges the datasets on the exposed service definition endpoint. The filenames must start with p-metrics and end with .yml. After the file is created, it can be accessed with the following link pattern: https://your-unipipe-osb-endpoint.com/metrics/periodicCounters/$service-definition-id (replace the $service-definition-id parameter with the Service Definition Id of your Service Instance).
serviceInstanceId: SERVICE_INSTANCE_ID
resource: RESOURCE_NAME
values: # EXAMPLE VALUES
  - writtenAt: '2022-01-07T00:00:00.000Z'
    periodStart: '2022-01-07T14:00:00Z'
    periodEnd: '2022-01-07T15:00:00Z'
    countedValue: 1
  - writtenAt: '2022-01-07T00:00:00.000Z'
    periodStart: '2022-01-07T15:00:00Z'
    periodEnd: '2022-01-07T16:00:00Z'
    countedValue: 2
This file contains the exposed SamplingCounter metrics for the service instance. The % part of the filename can be replaced with any other string value (e.g. s-metrics-myresource-2022-01-01.yml). If more than one SamplingCounter metrics file is available, unipipe automatically merges the datasets on the exposed service definition endpoint. The filenames must start with s-metrics and end with .yml. After the file is created, it can be accessed with the following link pattern: https://your-unipipe-osb-endpoint.com/metrics/samplingCounters/$service-definition-id (replace the $service-definition-id parameter with the Service Definition Id of your Service Instance).
serviceInstanceId: SERVICE_INSTANCE_ID
resource: RESOURCE_NAME
values: # EXAMPLE VALUES
  - writtenAt: '2022-01-07T12:34:33.616Z'
    observedAt: '2022-01-07T12:00:00Z'
    value: 2
  - writtenAt: '2022-01-07T12:34:33.616Z'
    observedAt: '2022-01-07T13:00:00Z'
    value: 2
  - writtenAt: '2022-01-07T12:34:33.616Z'
    observedAt: '2022-01-07T14:00:00Z'
    value: 5
UniPipe service broker reads the following environment variables.
- GIT_REMOTE: The remote Git repository to push the repo to
- GIT_REMOTE_BRANCH: The branch of the remote Git repository to use
- GIT_LOCAL_PATH: The path where the local Git repo shall be created/used. Defaults to tmp/git
- GIT_SSH_KEY: If you want to use SSH, this is the PEM-encoded SSH key to be used for accessing the remote repo. Line breaks must be replaced with spaces.
- GIT_USERNAME: If you use HTTPS to access the git repo, define the HTTPS username here
- GIT_PASSWORD: If you use HTTPS to access the git repo, define the HTTPS password here
- APP_BASIC_AUTH_USERNAME: The service broker API itself is secured via HTTP Basic Auth. Define the username for this here.
- APP_BASIC_AUTH_PASSWORD: Define the basic auth password for requests against the API
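For example, a minimal HTTPS-based configuration could look like this. This is a sketch; all values are placeholders, and how you pass the variables depends on your deployment (e.g. container environment or systemd unit):

```shell
#!/bin/sh
# Sketch: minimal environment for UniPipe service broker over HTTPS.
# All values are placeholders.
export GIT_REMOTE="https://github.com/example-org/instance-repository.git"
export GIT_REMOTE_BRANCH="main"
export GIT_LOCAL_PATH="tmp/git"
export GIT_USERNAME="ci-bot"
export GIT_PASSWORD="replace-with-a-personal-access-token"
export APP_BASIC_AUTH_USERNAME="broker"
export APP_BASIC_AUTH_PASSWORD="replace-with-a-strong-password"
```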
We use an ECDSA key in the example below. You could also use an RSA key instead.
The expected format for the GIT_SSH_KEY
variable for an ECDSA key looks like this:
GIT_SSH_KEY=-----BEGIN EC PRIVATE KEY----- MIGkAgEBBD....6TcH0pY/T/Yw= -----END EC PRIVATE KEY-----
There is a space between `-----BEGIN EC PRIVATE KEY-----` and the key, as well as between the key and `-----END EC PRIVATE KEY-----`. If you
omit these spaces, unipipe will not be able to read the private key. Please also
note that the SSH key is PEM encoded and therefore starts with
`-----BEGIN EC PRIVATE KEY-----`.
OpenSSH encoded keys starting with
`-----BEGIN OPENSSH PRIVATE KEY-----` must be converted to PEM before being
used. There is an open issue about supporting OpenSSH formatted keys.
Here is how to generate a valid SSH key manually.
Bash:
ssh-keygen -t ecdsa -b 384 -m pem -f unipipe-ssh -P ""
tr '\n' ' ' < unipipe-ssh
Powershell:
ssh-keygen -t ecdsa -b 384 -m pem -f unipipe-ssh -P ""
((Get-Content unipipe-ssh) -join " ")
Unipipe OSB can be used to expose your metric datapoints directly from your Git repository. The latest information about the implementation with meshStack can be found in meshcloud's Metrics-based Metering documentation.
The following explains how you can expose metrics data with your UniPipe OSB.
When you provision a new service instance, UniPipe creates a new folder under the instances folder with the service instance id as the folder name. Provide your metric files inside these instance folders to expose your metrics (see the sections above for each metric type: Gauges, PeriodicCounters, SamplingCounters). If you don't provide any metric files, UniPipe OSB returns an empty JSON array [] as the response.
You can use the following Dhall configurations to generate the metric data files.
-- MetricsProvider.dhall
let Gauge =
      { writtenAt : Text
      , observedAt : Text
      , value : Natural
      }

let SamplingCounter =
      { writtenAt : Text
      , observedAt : Text
      , value : Natural
      }

let PeriodicCounter =
      { writtenAt : Text
      , periodStart : Text
      , periodEnd : Text
      , countedValue : Natural
      }

let MetricsProvider =
      < Gauge : Gauge
      | Sampling : SamplingCounter
      | Periodic : PeriodicCounter
      >

in  { Type = MetricsProvider, Gauge, SamplingCounter, PeriodicCounter }
-- MetricsData.dhall
let MetricsType = (./MetricsProvider.dhall).Type

let MetricsData =
      { serviceInstanceId : Text
      , resource : Text
      , values : List MetricsType
      }

in  MetricsData
-- main.dhall
let MetricsType = (./MetricsProvider.dhall).Type

let MetricsData = ./MetricsData.dhall

let output
    : MetricsData
    = { serviceInstanceId = "test-instance-id"
      , resource = "test-resource-name"
      , values =
          [ MetricsType.Gauge
              { writtenAt = "2022-01-07T12:00:00Z"
              , observedAt = "2022-01-07T12:00:00Z"
              , value = 5
              }
          , MetricsType.Gauge
              { writtenAt = "2022-01-07T13:00:00Z"
              , observedAt = "2022-01-07T13:00:00Z"
              , value = 4
              }
          ]
      }

in  output
After the Dhall configuration files are created in the same folder, you can use the following command to generate the metrics file. For the example above, the output should be written to a g-metrics.yml file, because MetricsType.Gauge values are used, and you should put the generated file into the correct service instance folder under the instances folder.
dhall-to-yaml <<< ./main.dhall > g-metrics.yml
Made with ❤️ by meshcloud