diff --git a/README.md b/README.md
index 8625282..e93c2a3 100644
--- a/README.md
+++ b/README.md
@@ -1,46 +1,89 @@
-# Flake Pilot
-1. [Introduction](#introduction)
-2. [Installation](#installation)
-3. [Quick Start OCI containers](#oci)
- 1. [Use Cases](#usecases)
-4. [Quick Start FireCracker VMs](#fire)
- 1. [Use FireCracker VM image from components](#components)
- 2. [Networking](#networking)
-5. [Application Setup](#setup)
-6. [How To Build Your Own App Images](#images)
+1. [Installation](#installation)
+2. [`flake-ctl`](#flake-ctl)
+ 1. [Runtimes](#runtimes)
+ 2. [`build`](#build)
+3. [Pilots](#pilots)
+4. [`flake-studio`](#flake-studio)
+5. [Quickstart OCI/podman](#oci)
+6. [Quickstart FireCracker](#fire)
-## Introduction
-flake-pilot is a software to register, provision and launch applications
-that are actually provided inside of a runtime environment like an
-OCI container or a FireCracker VM. There are two main components:
+`flake-pilot` is a software suite that lets you provision, modify, package, and launch applications that run inside an isolated environment (such as an OCI container or a FireCracker VM) yet can be invoked directly like any other command line utility.
-1. The launchers
+This is accomplished by creating symlinks to one of the lightweight `pilot` programs, which launch the corresponding runtime with all the arguments needed to mimic the application's native behavior as closely as possible.
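+
+Conceptually, calling a flake behaves roughly like the following hand-written invocation. This is a simplified sketch only; the real pilot derives the image name and runtime options from the flake configuration:
+```bash
+# /usr/bin/foo is a symlink to /usr/bin/podman-pilot; calling it is
+# conceptually similar to running (image name taken from the flake config):
+podman run --rm -ti <container-image> /usr/bin/foo "$@"
+```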
- The launcher binary. Each application that was registered as a
- flake is redirected to a launcher binary. As of today
- support for the ```podman``` and ```firecracker``` engines are
- implemented leading to the respective ```podman-pilot``` and
- ```firecracker-pilot``` launcher binaries.
+# Installation
-2. The flake registration tool
+Manual compilation and installation can be done as follows:
- ```flake-ctl``` is the management utility to list, register,
- remove, and-more... flake applications on your host.
+```bash
+make build && make install
+```
+# Flake-ctl
+`flake-ctl` is the central command used to manage installed flakes. See `src/flake-ctl/README.md` for more detailed information.
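+
+For example, to get an overview of all flakes registered on the host:
+```bash
+# List all registered flakes
+flake-ctl list
+```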
-## Installation
+## Runtimes
+Each supported runtime provides its own management utility named `flake-ctl-<runtime>`. Each of these utilities must provide at least the following commands:
+- `register`, which creates a new local flake
+- `export`, which exports all data needed to run a flake on another machine (e.g. the archived OCI container)
-flake-pilot components are written in rust and available as packages
-here: https://build.opensuse.org/package/show/home:marcus.schaefer:delta_containers/flake-pilot
+Beyond that, the utilities may provide any number of additional subcommands.
-Manual compilation and installation can be done as follows:
+### Example
+```bash
+flake-ctl podman register ubuntu --container my_container --app /usr/bin/foo
+```
+This will create a local flake. Afterwards, running `/usr/bin/foo` on the host will launch `my_container` using podman and run `/usr/bin/foo` inside the container, forwarding any output back to the caller.
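+
+Calling the flake is then no different from calling a native binary, for example (assuming the hypothetical `foo` understands `--help`):
+```bash
+# Runs inside my_container; output is forwarded to the host terminal
+/usr/bin/foo --help
+```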
+## Build
+The `build` command is used to create packages that can be used to install
+flakes on a system directly via a regular package manager. Right now the `build` command is only supported for OCI-based flakes.
+
+There are separate binaries for each supported package manager. The `flake-ctl-build` command defers the actual build process to the builder for the native package manager (run `flake-ctl build which` to see which builder will be used).
+
+### Example
```bash
-make build && make install
+# running on ubuntu
+flake-ctl build which # dpkg-buildpackage;flake-ctl-build-dpkg
+flake-ctl build --from-oci=my_container --target_app=/usr/bin/foo
```
+When you run this command, a wizard will prompt you for any further details needed to create the package. All of these parameters can also be supplied non-interactively (see the sketch after this list) via:
+- command line (e.g. `--version`)
+- environment variable (e.g. `PKG_FLAKE_VERSION`)
+- inside a config file
+ - `./.flakes/package/options.yaml`
+ - `~/.flakes/package/options.yaml`
+
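+For instance, a fully non-interactive build can be triggered by passing everything up front (a sketch using only the flags and variables mentioned in this README):
+```bash
+# Version supplied via environment variable, everything else via flags
+PKG_FLAKE_VERSION=1.0.0 flake-ctl build \
+    --from-oci=my_container --target_app=/usr/bin/foo --name=foobar
+```
+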
+After the process has finished, all produced files are placed in your current working directory (or in the location specified with `--output`). The following would be produced by running the builder with `--name=foobar` and `--version=1.0.0`:
+```
+foobar_1.0.0_all.deb
+foobar_1.0.0_amd64.buildinfo
+foobar_1.0.0_amd64.changes
+foobar_1.0.0.dsc
+foobar_1.0.0.tar.gz
+```
+You can now install `foobar_1.0.0_all.deb` on another system using `dpkg`.
+
+The package requires `podman-pilot`, which in turn requires `podman`. If these packages are not installed on the target system, you will be prompted to run `apt install --fix-broken`. Alternatively, you can install them manually.
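+
+On the target machine this typically looks like the following (assuming a Debian/Ubuntu system):
+```bash
+sudo dpkg -i foobar_1.0.0_all.deb
+# if podman-pilot/podman are not installed yet, resolve the dependencies with:
+sudo apt install --fix-broken
+```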
-## Quick Start OCI containers
+
+# Pilots
+Currently there are two pilots:
+- **podman-pilot** for OCI containers using podman.
+- **firecracker-pilot** for FireCracker micro VMs.
+
+Pilots will not function if launched directly; instead they need to be called via
+a symlink. On startup, the pilot uses the symlink's name to retrieve a configuration (stored in `/usr/share/flakes` by default). This configuration contains all information needed to run the flake, including an image identifier and a list of command line parameters passed to the runtime environment.
+
+These parameters can be tweaked to create a more seamless experience, for example by mounting directories from the host machine into the container to enable direct file access.
+
+The default configuration is optimized for one-shot applications that only communicate over stdin/stdout/stderr.
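+
+To make this concrete, a flake registered as `/usr/bin/foo` looks roughly like this on disk (an illustrative sketch):
+```bash
+ls -l /usr/bin/foo        # /usr/bin/foo -> /usr/bin/podman-pilot
+ls /usr/share/flakes/     # foo.yaml  foo.d/
+```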
+# Flake-studio
+
+# Quick Start OCI containers
As a start let's register an application named ```aws``` which is
connected to the ```aws-cli``` container provided by Amazon on
@@ -73,41 +116,8 @@ connected to the ```aws-cli``` container provided by Amazon on
aws ec2 help
```
-### Use Cases
-
-Apart from this very simple example you can do a lot more. The main
-idea for flake-pilot was not only to launch container based apps but
-also allow to run a provision step prior calling the application.
-This concept then allows for use cases like:
-
-* delta containers used together with a base container such that
- only small delta containers gets pulled to the registry used with
- a base that exists only once.
-
-* include arbitrary data without harming the host integrity e.g custom
- binaries, proprietary software not following package guidelines and
- standards, e.g automotive industry processes which we will not be
- able to change in this live ;)
-* layering of several containers, e.g deltas on top of a base. Building
- up a solution stack e.g base + python + python-app.
-
-* provisioning app dependencies from the host instead of providing them
- in the container, e.g a delta container providing the app using a base
- container but take the certificates or other sensitive information
- from the host; three way dependency model.
-
-Actually all of the above use cases are immaterial if a proper packaging,
-release and maintenance of the application is possible. However, I have
-learned the world is not an ideal place and there might be a spot for
-this project to be useful, supporting users with "special" needs and
-adding an adaptive feature to the OS.
-
-For demo purposes and to showcase the mentioned use cases, some
-example images were created. See [How To Build Your Own App Images](#images)
-for further details
-
-## Quick Start FireCracker VMs
+# Quick Start FireCracker VMs
Using containers to isolate applications from the host system is a common
approach. The limitation comes on the level of the kernel. Each container
@@ -161,207 +171,3 @@ Start an application as virtual machine (VM) instance as follows:
be prevented or requires a customized kernel build to be suppressed.
As all messages are fetched from the serial console there is also
no differentiation between **stdout** and **stderr** anymore.
-
-### Use FireCracker VM image from components
-
-In the quickstart for FireCracker a special image type called ```kis-image```
-was used. This image type is specific to the KIWI appliance builder and
-it provides the required components to boot up a FireCracker VM in one
-archive. However, it's also possible to pull a FireCracker VM image from
-its single components. Mandatory components are the kernel image and the
-rootfs image, whereas the initrd is optional. The FireCracker project
-itself provides its images in single components and you can use them
-as follows:
-
-1. Pull a firecracker compatible VM
-
- ```bash
- flake-ctl firecracker pull --name firecore \
- --rootfs https://s3.amazonaws.com/spec.ccfc.min/ci-artifacts/disks/x86_64/ubuntu-18.04.ext4 \
- --kernel https://s3.amazonaws.com/spec.ccfc.min/img/quickstart_guide/x86_64/kernels/vmlinux.bin
- ```
-
-2. Register the ```fireshell``` application
-
- ```bash
- flake-ctl firecracker register \
- --app /usr/bin/fireshell --target /bin/bash --vm firecore --no-net
- ```
-
-3. Launch the application
-
- To run ```fireshell``` just call for example:
-
- ```bash
- fireshell -c "'ls -l'"
- ```
-
-### Networking
-
-As of today firecracker supports networking only through TUN/TAP devices.
-As a consequence it is a user responsibility to setup the routing on the
-host from the TUN/TAP device to the outside world. There are many possible
-solutions available and the following describes a simple static IP and NAT
-based setup.
-
-The proposed example works within the following requirements:
-
-* initrd_path must be set in the flake configuration
-* The used initrd has to provide support for systemd-(networkd, resolved)
- and must have been created by dracut such that the passed
- boot_args in the flake setup will become effective
-
-1. Enable IP forwarding
-
- ```bash
- sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
- ```
-
-2. Setup NAT on the outgoing interface
-
- Network Address Translation(NAT) is an easy way to route traffic
- to the outside world even when it originates from another network.
- All traffic looks like if it would come from the outgoing interface
- though. In this example we assume ```eth0``` to be the outgoing
- interface:
-
- ```bash
- sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
- sudo iptables -A FORWARD -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
- ```
-
-3. Setup network configuration in the flake setup
-
- The flake configuration for the registered ```mybash``` app from
- above can be found at:
-
- ```bash
- vi /usr/share/flakes/mybash.yaml
- ```
-
- The default network setup is based on DHCP because this is
- the only generic setting that flake-ctl offers at the moment.
- The setup offered for networking provides the setting
- ```ip=dhcp```. Change this setting to the following:
-
- ```yaml
- vm:
- runtime:
- firecracker:
- boot_args:
- - ip=172.16.0.2::172.16.0.1:255.255.255.0::eth0:off
- - rd.route=172.16.0.1/24::eth0
- - nameserver=8.8.8.8
- ```
-
- In this example the DHCP based setup changes to a static
- IP: 172.16.0.2 using 172.16.0.1 as its gateway and Google
- to perform name resolution. Please note: The name of the
- network interface in the guest is always ```eth0```. For
- further information about network setup options refer
- to ```man dracut.cmdline``` and lookup the section
- about ```ip=```
-
-4. Create a tap device matching the app registration. In the above example
- the app ```/usr/bin/mybash``` was registered. The firecracker pilot
- configures the VM instance to pass trafic on the tap device name
- ```tap-mybash```. If the application is called with an identifier like
- ```mybash @id```, the tap device name ```tap-mybash@id``` is used.
-
- ```bash
- sudo ip tuntap add tap-mybash mode tap
- ```
-
- **_NOTE:_** If the tap device does not exist, firecracker-pilot will
- create it for you. However, this might be too late in case of e.g a
- DHCP setup which requires the routing of the tap device to be present
- before the actual network setup inside of the guest takes place.
- If firecracker-pilot creates the tap device it will also be
- removed if the instance shuts down.
-
-5. Connect the tap device to the outgoing interface
-
- Select a subnet range for the tap and bring it up
-
- **_NOTE:_** The settings here must match with the flake configuration !
-
- ```bash
- ip addr add 172.16.0.1/24 dev tap-mybash
- ip link set tap-mybash up
- ```
-
- Forward tap to the outgoing interface
-
- ```bash
- sudo iptables -A FORWARD -i tap-mybash -o eth0 -j ACCEPT
- ```
-
-6. Start the application
-
- ```bash
- mybash
-
- $ ip a
- $ ping www.google.de
- ```
-
- **_NOTE:_** The tap device cannot be shared across multiple instances.
- Each instance needs its own tap device. Thus the steps 3,4 and 5 needs
- to be repeated for each instance.
-
-## Application Setup
-
-After the registration of an application they can be listed via
-
-```bash
-flake-ctl list
-```
-
-Each application provides a configuration below ```/usr/share/flakes/```.
-The term ```flake``` is a short name that we came up with to provide
-a generic name for an application running inside of an isolated environment.
-For our above registered ```aws``` flake the config file structure
-looks like the following:
-
-```
-/usr/share/flakes/
-├── aws.d
-└── aws.yaml
-```
-
-Please consult the manual pages for detailed information
-about the contents of the flake setup.
-
-https://github.com/Elektrobit/flake-pilot/tree/master/doc
-
-## How To Build Your Own App Images
-
-Building images as container- or VM images can be done in different ways.
-One option is to use the **Open Build Service** which is able to build
-software packages and images and therefore allows to maintain the
-complete application stack.
-
-For demo purposes and to showcase the mentioned
-some example images were created and could be considered as a simple
-```flake store```. Please find them here:
-
-* https://build.opensuse.org/project/show/home:marcus.schaefer:delta_containers
-
-Feel free to browse through the project and have some fun testing. There
-is a short description in each application build how to use them.
-
-**_NOTE:_** All images are build using the
-[KIWI](https://github.com/OSInside/kiwi) appliance builder which is
-supported by the Open Build Service backend and allows to build all the
-images in a maintainable way. KIWI uses an image description format
-to describe the image in a declarative way. Reading the above
-examples should give you an idea how things fits together. In case
-of questions regarding KIWI and the image builds please don't hesitate
-to get in contact with us.
-
-Flake pilot is a project in its early stages and the result of
-a fun conversation over beer on a conference. Feedback
-is very much welcome.
-
-Remember to have fun :)
-
diff --git a/doc/flake-ctl-build.rst b/doc/flake-ctl-build.rst
index 920b3a6..268697a 100644
--- a/doc/flake-ctl-build.rst
+++ b/doc/flake-ctl-build.rst
@@ -11,39 +11,72 @@ SYNOPSIS
.. code:: bash
- Usage: flake-ctl-build-dpkg [OPTIONS] [COMMAND_ARGS] [PACKAGE_OPTIONS] -- [TRAILING..]
-
- Commands:
- flake Package an existing flake
- flake_name Name of the pre-existing flake that should be packaged
- image Package an existing image as a flake
- pilot The type of pilot to use for the flake
- image_name The name of the pre-existing image to package (syntax depends on pilot)
+ Usage: flake-ctl-build [OPTIONS] -- [TRAILING..]
+
+ Options:
+ -a, --app
+ Name of the app on the host
+ -c, --from-oci
+ Build from the given oci container
+ -t, --from-tar
+ Build from tarball. NOTE: file should have extension ".tar.gz"
+ -o, --output