Sparky is a flexible and minimalist continuous integration server and distributed tasks runner written in Raku.
Sparky features:
- Defining job scheduling times in crontab style
- Triggering jobs using external APIs and custom logic
- Job scenarios are pure Raku code with additional support for the Sparrow6 automation framework
- Support for plugins written in different programming languages
- Everything is kept in an SCM repository - easy to port, maintain and track changes
- Jobs run in one of 3 flavors - 1) on localhost 2) on remote machines via ssh 3) on docker instances
- Nice web UI to run jobs and read reports
- Can be run in a peer-to-peer network fashion with distributed tasks support
Sparky workflow in 4 lines:
$ nohup sparkyd & # run Sparky daemon to trigger jobs
$ nohup cro run & # run Sparky CI UI to see job statuses and reports
$ nano ~/.sparky/projects/my-project/sparrowfile # write a job scenario
$ firefox 127.0.0.1:4000 # run jobs and get reports
To install Sparky:
$ sudo apt-get install sqlite3
$ git clone https://github.com/melezhik/sparky.git
$ cd sparky && zef install .
Sparky requires a database to operate.
Run the database initialization script to populate the database schema:
$ raku db-init.raku
Sparky comprises several components:
- Jobs scheduler
- Jobs definitions
- Jobs workers (including remote jobs)
- Jobs UI
- CLI
To run the Sparky jobs scheduler (aka daemon), run in a console:
$ sparkyd
Scheduler logic:
- The Sparky daemon traverses sub-directories found in the project root directory.
- For every directory found, it initiates a job run process by invoking a Sparky worker (sparky-runner.raku).
- The Sparky root directory default location is ~/.sparky/projects.
- Once all the sub-directories are passed, the Sparky daemon sleeps for $timeout seconds.
- The timeout option allows you to balance the load on your system.
- You can change the timeout by applying the --timeout parameter when running the Sparky daemon:
$ sparkyd --timeout=600 # sleep 10 minutes
- You can also set the timeout by using the SPARKY_TIMEOUT environment variable:
$ SPARKY_TIMEOUT=30 sparkyd ...
Running the job scheduler in daemonized mode:
$ nohup sparkyd &
To install sparkyd as a systemd unit:
$ nano utils/install-sparkyd-systemd.raku # change working directory and user
$ sparrowdo --sparrowfile=utils/install-sparkyd-systemd.raku --no_sudo --localhost
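If you prefer to write the unit by hand, a minimal sketch could look like this (the user, working directory and binary path below are assumptions, adjust them to your setup):
[Unit]
Description=Sparky job scheduler
After=network.target

[Service]
# assumed user and install location
User=sparky
WorkingDirectory=/home/sparky
ExecStart=/usr/local/bin/sparkyd
Restart=always

[Install]
WantedBy=multi-user.target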
Sparky has a simple web UI that allows you to trigger jobs and read reports.
To run Sparky UI web application:
$ cro run
To install Sparky CI web app as a systemd unit:
$ nano utils/install-sparky-web-systemd.raku # change working directory, user and root directory
$ sparrowdo --sparrowfile=utils/install-sparky-web-systemd.raku --no_sudo --localhost
By default the Sparky UI application listens on host 0.0.0.0, port 4000. To override these settings, set SPARKY_HOST and SPARKY_TCP_PORT in the ~/sparky.yaml configuration file:
SPARKY_HOST: 127.0.0.1
SPARKY_TCP_PORT: 5000
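With these example settings the UI binds to the loopback interface only, so it is reachable just from the local machine:
$ firefox 127.0.0.1:5000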
A Sparky job needs a directory located in the Sparky root directory:
$ mkdir ~/.sparky/projects/teddy-bear-app
To create a job scenario, create a file named sparrowfile located in the job directory.
Sparky uses pure Raku for the job syntax, for example:
$ nano ~/.sparky/projects/hello-world/sparrowfile
#!raku
say "hello Sparky!";
To allow the job to be executed by the scheduler, one needs to create sparky.yaml, a YAML-based
job definition; the minimal form would be:
$ nano ~/.sparky/projects/hello-world/sparky.yaml
allow_manual_run: true
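Putting the pieces together, the same minimal job can be created straight from the shell (a sketch equivalent to the editor steps above):
$ mkdir -p ~/.sparky/projects/hello-world
$ echo 'say "hello Sparky!";' > ~/.sparky/projects/hello-world/sparrowfile
$ echo 'allow_manual_run: true' > ~/.sparky/projects/hello-world/sparky.yaml
On its next pass the scheduler picks the job up; with allow_manual_run enabled it can also be started from the web UI.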
To extend the core functions, Sparky is fully integrated with the Sparrow automation framework.
Here is an example of a job that uses Sparrow plugins to build a typical Raku project:
$ nano ~/.sparky/projects/raku-build/sparrowfile
directory "project";
git-scm 'https://github.com/melezhik/rakudist-teddy-bear.git', %(
to => "project",
);
zef "{%*ENV<PWD>}/project", %(
depsonly => True
);
zef 'TAP::Harness App::Prove6';
bash 'prove6 -l', %(
debug => True,
cwd => "{%*ENV<PWD>}/project/"
);
A repository of Sparrow plugins is available at https://sparrowhub.io.
Sparky uses Sparrowdo to launch jobs in three fashions:
- on localhost (the same machine where Sparky is installed, the default)
- on a remote host via ssh
- in a docker container on localhost or a remote machine
/--------------------\ [ localhost ]
| Sparky on localhost| --> sparrowdo client --> job (sparrow) --> [ container ]
\--------------------/ [ ssh host ]
By default job scenarios get executed on the same machine you run Sparky on.
To run jobs on a remote host, set the sparrowdo section in the sparky.yaml file:
$ nano ~/.sparky/projects/teddy-bear-app/sparky.yaml
sparrowdo:
host: '192.168.0.1'
ssh_private_key: /path/to/ssh_private/key.pem
ssh_user: sparky
no_index_update: true
sync: /tmp/repo
Follow the sparrowdo cli documentation for an explanation of the sparrowdo configuration section.
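Jobs can also run in docker containers. A minimal sketch, assuming the sparrowdo section accepts a docker parameter naming a container on the target machine (check the sparrowdo documentation for the exact option):
sparrowdo:
  docker: teddy-bear-app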
The Sparrowdo client bootstrap might take some time.
To disable the bootstrap, use the bootstrap: false option.
This is useful if the Sparrowdo client is already installed on the target host.
sparrowdo:
bootstrap: false
To remove old job builds, set the keep_builds parameter in sparky.yaml:
$ nano ~/.sparky/projects/teddy-bear-app/sparky.yaml
Set the number of builds to keep:
keep_builds: 10
This makes Sparky remove old builds, keeping only the last keep_builds builds.
To run Sparky jobs periodically, set a crontab entry in the sparky.yaml file.
For example, to run a job every hour at minute 30, 50 or 55:
$ nano ~/.sparky/projects/teddy-bear-app/sparky.yaml
crontab: "30,50,55 * * * *"
Follow the Time::Crontab documentation for the crontab entry format.
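The format is standard crontab notation; for instance, to run a job once a day at 02:00:
crontab: "0 2 * * *"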
To trigger a job manually from the web UI, use allow_manual_run:
$ nano ~/.sparky/projects/teddy-bear-app/sparky.yaml
allow_manual_run: true
To trigger Sparky jobs on SCM changes, define an scm section in the sparky.yaml file:
scm:
url: $SCM_URL
branch: $SCM_BRANCH
Where:
- url - git URL
- branch - git branch (optional, default value is master)
For example:
scm:
url: https://github.com/melezhik/rakudist-teddy-bear.git
branch: master
Once a job is triggered, the respective SCM data is available via the tags()<SCM_*> function:
directory "scm";
say "current commit is: {tags()<SCM_SHA>}";
git-scm tags()<SCM_URL>, %(
to => "scm",
branch => tags<SCM_BRANCH>
);
bash "ls -l {%*ENV<PWD>}/scm";
To set default values for SCM_URL and SCM_BRANCH, use sparrowdo tags in sparky.yaml:
sparrowdo:
tags: SCM_URL=https://github.com/melezhik/rakudist-teddy-bear.git,SCM_BRANCH=master
This is useful when triggering a job manually.
The flapper protection mechanism excludes from scheduling SCM URLs that keep timing out on git connections for a certain amount of time; this protects the sparkyd worker from stalling.
To disable the flapper protection mechanism, set the SPARKY_FLAPPERS_OFF environment variable
or adjust the ~/sparky.yaml configuration file:
worker:
flappers_off: true
To prevent a Sparky job from being executed, use the disabled option:
$ nano ~/.sparky/projects/teddy-bear-app/sparky.yaml
disabled: true
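For reference, the options covered above can be combined in a single sparky.yaml; a sketch that uses only settings shown in this document:
allow_manual_run: true
crontab: "30 * * * *"
keep_builds: 10
scm:
  url: https://github.com/melezhik/rakudist-teddy-bear.git
  branch: master
sparrowdo:
  host: '192.168.0.1'
  ssh_private_key: /path/to/ssh_private/key.pem
  ssh_user: sparky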
The following are advanced topics covering some cool Sparky features.
The Sparky UI DSL allows you to programmatically describe UIs for Sparky jobs and pass user input into a scenario as variables.
Read more at docs/ui.md
Downstream jobs run after a main job has finished.
Read more at docs/downstream.md
Sparky triggering protocol allows to trigger jobs automatically by creating files in special format.
Read more at docs/stp.md
The Job API allows you to orchestrate multiple Sparky jobs.
Read more at docs/job_api.md
Sparky plugins are a way to extend Sparky jobs by writing reusable plugins as Raku modules.
Read more at docs/plugins.md
The Sparky HTTP API allows you to execute Sparky jobs remotely over HTTP.
Read more at docs/api.md
The Sparky web server comes with two authentication protocols; choose the proper one depending on your requirements.
Read more at docs/auth.md
Sparky ACL allows you to create access control lists to manage role-based access to Sparky resources.
Read more at docs/acl.md
Sparky keeps its data in a database; by default it uses SQLite. The following databases are supported:
- SQLite
- MySQL/MariaDB
- PostgreSQL
Read more at docs/database.md
The Sparky web server may run over TLS. To enable this, add a couple of parameters to the ~/sparky.yaml
configuration file:
SPARKY_USE_TLS: true
tls:
private-key-file: '/home/user/.sparky/certs/www.example.com.key'
certificate-file: '/home/user/.sparky/certs/www.example.com.cert'
SPARKY_USE_TLS enables SSL mode, and the tls section holds paths to the SSL certificate (key and certificate parts).
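For local experiments you can generate a self-signed certificate with openssl (browsers will warn about it); the file names here match the configuration above:
$ openssl req -x509 -newkey rsa:4096 -nodes -days 365 \
    -subj "/CN=www.example.com" \
    -keyout www.example.com.key -out www.example.com.cert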
The Sparky CLI allows you to trigger jobs from a terminal.
Read more at docs/cli.md
Use environment variables to tune Sparky configuration.
Read more at docs/env.md
A glossary of useful terms.
Read more at docs/glossary.md
Sparky uses Bulma as the CSS framework for its web UI.
Examples of various Sparky jobs can be found in the examples/ folder.
- Cro - Raku Web Framework
- Sparky-docker - Run Sparky as a Docker container
Alexey Melezhik