Xalgorithms/interlibr-v1

Core IoR Compute Platform. This is the Interlibr we worked on until Spring 2020, when the project took a new direction. This repository is preserved as a resource that lets us reference things as they were then.

Summary

This is Interlibr - a document-oriented map/reduce platform for the Internet of Rules. The platform runs on Kubernetes in production and on Docker Compose for development. It uses design concepts from the SMACK (Spark, Mesos, Akka, Cassandra, Kafka) collection of frameworks. The platform is designed to retain, discover and execute rules against documents (hierarchical key-value maps, i.e. JSON documents).
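
For illustration only, a "document" in this sense is any nested key-value structure expressed as JSON; the shape and field names below are invented, not taken from the repository:

{
  "invoice": {
    "id": "abc-123",
    "items": [
      { "sku": "x1", "price": { "value": 10.0, "currency": "CAD" } }
    ]
  }
}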

Components

The platform is made up of a number of independent services and support libraries:

| Component | Category | Language | Core Technologies | Description |
|-----------|----------|----------|-------------------|-------------|
| Execute | service | Scala | Akka, Kafka | Pulls rule execution events from Kafka and executes the rule |
| Events | service | JavaScript | Kafka, WebSockets | Notifies external applications of events within the platform |
| Schedule | service | Scala | Play, Akka, Kafka | Accepts actions to schedule operations on the platform |
| Query | service | Ruby | Sinatra | Provides synchronous information about the platform and what is happening on it |
| Jobs | Spark jobs | Scala | Kafka, Spark | Collection of Spark jobs initiated by Kafka events |
| GitHub Revisions | service | Ruby | Sinatra | Integration of published rule packages with the platform (via GitHub) |
| Rule Parser | lib | Ruby | Parslet | PEG-based parser of the Xalgo DSL |
| Rule Interpreter (Reference) | lib | Scala | Play | Interpreter of the JSON / AST generated by the parser |
| Storage | lib | Scala | | Common storage library for Cassandra and Mongo; used by the Scala services |

Running the platform

Development

The team uses Docker Compose to instantiate local compositions of containers (in ops/docker-compose) for testing. Several configurations have been designed for specific cases:

  • compose.core: This script loads a composition containing the core external services used by our services: Kafka, Cassandra, MongoDB, etc. It is intended for development work on one or more of our core services (schedule, execute, etc). In this composition, Kafka advertises itself as localhost, allowing services under development to contact the brokers at localhost. In all other compositions, Kafka advertises itself as "kafka", so it can only be reached from within the composition.

  • compose.core.revisions: This is similar to compose.core, except that the revisions service is added to the composition. Features often affect either revisions or the compute-related services (schedule, execute, query, events), so using this composition saves running a few services locally.

  • compose.core.services: This is compose.core with all of the Interlibr services running; essentially the entire platform except for the Spark jobs. This composition is taxing for some machines and is recommended only for development-class workstations. It is typically used for end-to-end testing.

  • compose.core.services.publish: This is the same as compose.core.services except that, instead of the latest development containers, it runs the published versions from Docker Hub that would run in production. This composition is typically used for demonstrations. Since the production platform runs on Google Container Engine (Kubernetes), there are differences between this composition and a real production instance; running a production instance of Interlibr from this composition is therefore not recommended.

To run any of these compositions, invoke the corresponding script from a shell with the up or down option:

$ ./compose.core.sh up
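
A typical development session against the core composition, using the ports shown in the docker ps listing below, might look like this:

$ ./compose.core.sh up
# ... develop and test against Kafka at localhost:9092, Cassandra at
# localhost:9042, Mongo at localhost:27017 ...
$ ./compose.core.sh down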

Use docker ps or read the configuration files to learn which localhost ports each service is running on. For example:

CONTAINER ID        IMAGE                             COMMAND                  CREATED             STATUS              PORTS                                                       NAMES
daeeeb08487e        confluentinc/cp-kafka:5.0.0       "/etc/confluent/dock…"   15 seconds ago      Up 13 seconds       0.0.0.0:9092->9092/tcp                                      docker-compose_kafka_1
22f5ed1a6532        cassandra:3.11                    "docker-entrypoint.s…"   17 seconds ago      Up 14 seconds       7000-7001/tcp, 7199/tcp, 9160/tcp, 0.0.0.0:9042->9042/tcp   docker-compose_cassandra_1
ba59087b5dc0        confluentinc/cp-zookeeper:5.0.0   "/etc/confluent/dock…"   17 seconds ago      Up 14 seconds       2888/tcp, 0.0.0.0:2181->2181/tcp, 3888/tcp                  docker-compose_zookeeper_1
829b61928019        bitnami/redis:4.0-debian-9        "/entrypoint.sh /run…"   17 seconds ago      Up 14 seconds       0.0.0.0:6379->6379/tcp                                      docker-compose_redis_1
a2fa9a670283        mongo:3.6                         "docker-entrypoint.s…"   17 seconds ago      Up 14 seconds       0.0.0.0:27017->27017/tcp                                    docker-compose_mongo_1

Most of the CLI tools (in cli/) have default options that assume you are running one of these compositions, so you can omit host:port configuration for simple testing.

End-to-end testing

The platform CLI tool has commands for performing end-to-end testing. These commands work against a file and directory layout called a test-run. A test-run contains rules, along with small files that describe the expected output when some aspect of Interlibr is executed against those rules.

Support for executing test-runs is part of the test command included in the Interlibr CLI. To execute a test against a test-run, from the cli directory:

$ bundle exec xa test <name> <path> <profile>

Besides the test <name> (described below), each test takes two arguments:

  • <path>: a path to a test-run directory. Our basic end-to-end tests are kept in test-runs/.

  • <profile>: a reference to a JSON configuration file in the profiles/ directory. This file contains common configuration options that may be used during a test (for example, the URL for a service).
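
As a sketch, a profile might carry service URLs like the following; the key names and ports here are assumptions for illustration, not taken from the repository:

{
  "schedule_url": "http://localhost:9292",
  "events_url": "http://localhost:4200"
}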

There are several tests available to run; the name of one of them should appear as the <name> argument:

  • exec: This test uploads all of the rules (Xalgo or tables) to the revisions service, makes a request to the schedule service to execute the rule against a configured execution context, waits for a result from the events service and verifies the results against the configured expectations. If no expectations are configured for the test-run, the final contents of the execution context are printed to the screen.

  • effective: This test also uploads all rules and listens for results from the events service. It makes a request to the schedule service that submits to the Effective Spark job in validation mode. For this test to work correctly, the ValidateEffectiveRules Spark job must be running within the local Spark master. Instructions for running this job appear in the README for that project.

  • applicable: This test also uploads all rules and listens for results from the events service. It makes a request to the schedule service that submits to the Applicable Spark job in validation mode. For this test to work correctly, the ValidateApplicableRules Spark job must be running within the local Spark master. Instructions for running this job appear in the README for that project.
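
Putting it together, a hypothetical invocation of the exec test might look like this; the test-run and profile names are invented for illustration:

$ cd cli
# "basic" and "local" are hypothetical names; whether <profile> is a bare
# name or a file path may differ - check the profiles/ directory.
$ bundle exec xa test exec test-runs/basic local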
