Reference
Falcon is a Rust API for creating network topologies composed of VMs on illumos. Virtual machines are created using Propolis. Networks between virtual machines are created using a combination of simnet and vnic virtual network devices.
Falcon keeps all of the state needed to manage a topology in a local folder called `.falcon` that is created when a topology is launched and deleted when the topology is destroyed. There is no global state that Falcon explicitly tracks. There is, however, implicit global state, such as network interfaces created by Falcon and Propolis processes. Because of this, Falcon topologies running on the same host must have distinct names to avoid network interface name collisions.
This wiki is a reference for the feature set Falcon exposes for managing and interacting with network topologies. For a quick start introduction, see the quick start section of the README.
When a topology is built, a CLI program is produced for managing that topology. The name of the program executable depends on your cargo configuration. In this wiki, we'll refer to the generated topology management executable as `$topo`.
To attach to the serial console of a node called `violin`:

$topo serial violin

This will attach your current console's stdin/stdout to the serial console of the `violin` VM. Initially there will be no output; tap the enter key a few times to reveal the VM's terminal prompt. To leave a serial session, use `ctl-q`.
Falcon supports mounting files from the host into guest VMs. Currently, mounts are read-only; writes from the guest back to the host are not supported.
let violin = d.node("violin", "helios-2.0", 2, gb(2));
d.mount("./cargo-bay", "/opt/cargo-bay", violin)?;
This example will create a P9 filesystem device on the `violin` VM that provides access to the host's local folder `cargo-bay` with the p9fs tag `/opt/cargo-bay` in the guest.
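For context, a mount is declared alongside the rest of the topology in your topology program. The sketch below assumes the crate layout from the quick start; the `libfalcon::cli::run`, `libfalcon::unit::gb`, and `tokio` entry-point details are assumptions here, so adjust them to match your project.

```rust
use libfalcon::{cli::run, error::Error, unit::gb, Runner};

#[tokio::main]
async fn main() -> Result<(), Error> {
    // one deployment runner; topology names must be distinct per host
    let mut d = Runner::new("solo");

    // a single node backed by the helios-2.0 base image: 2 cores, 2 GB of memory
    let violin = d.node("violin", "helios-2.0", 2, gb(2));

    // expose the host's ./cargo-bay folder to the guest over p9fs
    d.mount("./cargo-bay", "/opt/cargo-bay", violin)?;

    // hand control to the generated $topo CLI (launch, destroy, serial, ...)
    run(&mut d).await
}
```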
Helios does not yet have p9fs filesystem support. However, a Helios guest can access these files as follows.
mkdir /opt/cargo-bay
cd /opt/cargo-bay
p9kp pull
This runs a user-space program that pulls the mounted files into the `/opt/cargo-bay` location. This is not an active guest mount, so if the files on the host change, another `p9kp pull` must be run.
To mount the filesystem in Linux:
mount -t 9p -o ro /opt/cargo-bay /opt/cargo-bay
Starting and stopping virtual machines amounts to starting and stopping the associated hypervisor instance. Falcon provides two commands, `hyperstart` and `hyperstop`, that start and stop VMs. These commands can also be used to recover from unexpected events such as a power loss on your workstation. `hyperstop` is also a good way to stop a topology if you don't want it taking up CPU and memory resources but want to pick back up later right where you left off.

Both commands take a VM name by default, but also come with an `--all` switch to act over an entire topology.
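For example, using the `violin` node from earlier:

```
$topo hyperstop violin     # stop a single VM
$topo hyperstop --all      # stop every VM in the topology
$topo hyperstart --all     # start everything back up later
```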
Falcon comes with two base images:

- `helios-2.0`
- `debian-11.0`

These are pre-built images that are installed by the `setup-base-images.sh` script. If you need an entirely new base image that is not derived from either of these, you can make your own base image. If you want to save the state of a VM in one of your active topologies as a new base image, you can snapshot that image.
The Helios base image that comes with Falcon is created using the helios-engvm image creation machinery. The source for that image is the JSON specifications with "masaka" in the title, located in the image directory of the helios-engvm repository.
You can create and upload a new Helios image for Falcon as follows.
export VERSION=2.5 #change this to whatever version you are building
git clone git@github.com:oxidecomputer/helios-engvm
cd helios-engvm
git checkout masaka
cd image
source falcon-env.sh
./falcon-bits.sh
gmake setup
./strap.sh -f
./image.sh
cp /rpool/images/output/helios-propolis-ttya-falcon.raw /tmp/helios-${VERSION}_0.raw
xz -T 0 /tmp/helios-${VERSION}_0.raw
Then upload `/tmp/helios-${VERSION}_0.raw.xz` to the falcon S3 bucket, and make sure you set public read access.
The base Debian-11 machine that comes with Falcon is an unmodified copy of a Debian "nocloud" cloud image. This is a small UEFI bootable Debian cloud image with a password-less root login by default.
The Falcon CLI comes with a `snapshot` command that can be used to create a new base image from an existing node. The following will snapshot the VM image associated with the node `violin` to a new base image called `helios-dev`:
$topo snapshot violin helios-dev
It is recommended to hyperstop a node before imaging it. Snapshotting an underlying ZFS volume from a running node may produce unexpected results.
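Putting that together, a typical snapshot sequence for `violin` looks like:

```
$topo hyperstop violin
$topo snapshot violin helios-dev
$topo hyperstart violin
```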
VM images are simply ZFS volumes. Base images are stored as `/rpool/falcon/img/<name>`, each of which has a corresponding snapshot `/rpool/falcon/img/<name>@base`. When a VM is launched, its base image snapshot is cloned to `rpool/images/falcon/topo/<topology_name>/<node_name>`. When a topology is destroyed, `rpool/images/falcon/topo/<topology_name>` is recursively destroyed.
When a VM snapshot is taken, a ZFS snapshot is first made of the node's base image clone. That snapshot is then cloned to a new base image with the user-provided name. The new base clone is then promoted to decouple it from the active VM image. Finally, a base snapshot is created for the new base image.
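In ZFS terms, the sequence is roughly the following, shown for a hypothetical topology named `solo` with a node `violin` being snapshotted to a new base image `helios-dev` (the exact invocations inside Falcon may differ):

```
# snapshot the node's active image clone
zfs snapshot rpool/images/falcon/topo/solo/violin@helios-dev

# clone that snapshot into a new base image with the user-provided name
zfs clone rpool/images/falcon/topo/solo/violin@helios-dev rpool/falcon/img/helios-dev

# promote the new base image so it no longer depends on the active VM image
zfs promote rpool/falcon/img/helios-dev

# create the base snapshot that future VMs will be cloned from
zfs snapshot rpool/falcon/img/helios-dev@base
```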
In addition to creating links between nodes, you can also create a link that attaches directly to a link on the host system. This is done through the `ext_link` method of a deployment runner. The code below will create an interface on the node `violin` that is connected to the interface `igb0` on the host system.
let mut d = Runner::new("solo");
let violin = d.node("violin", "helios-2.0", 4, gb(4));
d.ext_link("igb0", violin);
You can create multiple external links per node if needed. Interfaces appear on guests in the order they are created in the topology code.
NOTE: this was for Tofino Simulator integration and is no longer really used, as that simulator cannot handle real traffic. Use the SoftNPU setup below instead.
Sidecar is a disaggregated switch. The data plane is a Tofino-based platform that is connected over PCIe to a compute sled in the Oxide rack. We'll refer to this PCIe-connected compute sled as the Sidecar driver. The Tofino has a "CPU port" over which packets are exchanged with its PCIe-connected host. These packets are encapsulated with a Sidecar header in place of a normal Ethernet header. In a Falcon-based development environment, the Tofino data plane is a Linux VM with the Intel Tofino simulator running on it. There is a single link that connects the CPU port of the simulator VM to its driver VM. However, there is a special annotation for this link to get it to present properly on the driver VM.
let sidecar = d.node("sidecar", "sidecar-1", 8, gb(8));
let scdriver = d.node("scdriver", "helios-2.0", 4, gb(4));
d.sidecar_link(sidecar, scdriver, 10);
The `sidecar_link` is in most ways just like any other link; the differences are:

- The final argument is the radix of the Sidecar switch.
- Order matters: the first argument is always the sidecar node and the second argument is always the driver node.
The above example will create a special Sidecar emulation device called `sidemux` on the `scdriver` VM that presents 10 `virtio-net` devices. When Sidecar-encapsulated packets hit the `sidemux` device, it looks at which port the packet is destined for in the Sidecar header, strips the Sidecar header, and sends a regular Ethernet packet to the destination port. In the opposite direction, when packets egress from the driver VM to the Sidecar VM, the `sidemux` device encapsulates them with a Sidecar header indicating the appropriate source port.
Propolis server now supports a virtual switch ASIC called SoftNPU. You can create a VM with a SoftNPU ASIC in it by using the `softnpu_link` method on a deployment runner as follows.
// Define one switch (`scrimlet`) and two gimlets (`gc1`, `gc2`).
node!(d, scrimlet, "helios-2.0", 4, gb(4));
node!(d, gc1, "helios-2.0", 4, gb(4));
node!(d, gc2, "helios-2.0", 4, gb(4));
// Connect each gimlet to the scrimlet.
d.softnpu_link(scrimlet, gc1, None, None);
d.softnpu_link(scrimlet, gc2, None, None);
The `None` values above indicate that we don't care about the MAC addresses on either side of the link. If you do care about this, you can specify MAC addresses here.
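For example, to pin a MAC address on the first link (a sketch only: the `Option<String>` form and the example address are assumptions here, not something this page defines):

```rust
// hypothetical MAC address on one side of the scrimlet <-> gc1 link
d.softnpu_link(scrimlet, gc1, Some("a8:40:25:00:00:01".to_string()), None);
d.softnpu_link(scrimlet, gc2, None, None);
```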
The environment variable `RUST_LOG` may be used to set the logging level of Falcon itself. This logging is what is printed to stdout when running topology commands. For example, to launch a topology with debug-level logging:
$ RUST_LOG=debug pfexec $topo launch
The log files for the Propolis server(s) are found under the `.falcon` directory. The stdout/stderr streams are mapped to the files `.falcon/${node}.out` and `.falcon/${node}.err`, respectively.
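For example, to follow the output of the Propolis server backing the `violin` node while debugging:

```
tail -f .falcon/violin.out .falcon/violin.err
```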