This tutorial demonstrates how to write and run a Nerpa program. Nerpa is a programming framework that helps simplify the management of a programmable network.
A Nerpa program consists of three sub-programs, each corresponding to a plane of an enterprise network.
- The management plane sets high-level policy for the network devices. We use Open vSwitch Database (OVSDB) for the management plane. An OVSDB schema initializes the management plane. The admin user can insert, modify, or delete rows in the database to represent changes in high-level configuration.
- The control plane configures the data plane based on the declared state of the network. We use a Differential Datalog (DDlog) program for the control plane. A DDlog program consists of rules that compute a set of output relations based on input relations. These rules are evaluated incrementally: given a set of changes to the input relations, DDlog produces a set of changes to the output relations. Those output relations are converted to match-action contents of P4 tables and then written to the switch.
- The data plane processes packets that pass through the system. We program this using a P4 program. A P4 program specifies how data plane devices, like switches and routers, process packets.
DDlog input and output relations are generated from the OVSDB schema and P4 program. They are imported by the DDlog program that is compiled and used as the control plane. This facilitates codesign and tighter integration of the control and data planes.
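To make this concrete, here is a minimal, hypothetical DDlog sketch of how generated relations and a control-plane rule fit together. The relation and field names below are illustrative placeholders, not the relations actually generated for any particular program.
// Hypothetical sketch: names are placeholders.
// An input relation of the kind ovsdb2ddlog derives from an OVSDB table.
input relation ConfigRow(id: bigint, setting: string)
// An output relation of the kind p4info2ddlog derives from a P4 match-action table.
output relation P4TableEntry(key: bigint, arg: string)
// A control-plane rule: each configuration row yields one table entry.
// Because DDlog is incremental, inserting or deleting a ConfigRow fact
// produces a corresponding insertion or deletion in P4TableEntry.
P4TableEntry(id, setting) :- ConfigRow(id, setting).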
The Nerpa controller synchronizes state between the planes. It does the following:
- Listens for changes from the OVSDB management plane; converts them into DDlog input relations; and sends them to the control plane
- Receives DDlog output relations from the control plane; converts them into P4 entries; and sends them to the data plane
- Listens for digest notifications from the P4 data plane; converts them into DDlog input relations; and sends them to the control plane
Instructions to build dependencies are found in the README. To verify, confirm that the following environment variables are set:
- $NERPA_DEPS, set to the directory containing the behavioral-model directory.
- $DDLOG_HOME, set to the DDlog installation directory. Make sure $PATH includes the DDlog binary.
In this tutorial, we'll create a new Nerpa program called tutorial. This program will implement VLAN assignment, which assigns ports to VLANs. A port is represented by its ID, VLAN mode, tag, trunks, and priority. In the DDlog program, the input relations represent ports and the output relations represent the assigned VLANs.
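As a rough preview of the data model, the sketch below shows how ports and VLAN assignments could be expressed in DDlog. The AccessPort and NoPriorityTag constructors appear again when we insert rows later in the tutorial; the field widths, the PriorityTag variant, and the PortVlan output relation are assumptions, not the exact declarations in tutorial.dl.
// Illustrative sketch; the real tutorial.dl declares its own types and relations.
typedef vlan_mode_t = AccessPort{tag: bit<12>}
                    | TrunkPort{trunks: Set<bit<12>>}
typedef priority_tagging_t = NoPriorityTag
                           | PriorityTag
// Each management-plane port row becomes one input fact.
input relation Port(id: bit<9>, vlan: vlan_mode_t, priority_tagging: priority_tagging_t)
// The control plane assigns each access port to its VLAN.
output relation PortVlan(port: bit<9>, vlan: bit<12>)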
Below, we instantiate the system diagram above for the tutorial example.
We start by creating a new program. Note that all commands should be run from the top-level nerpa directory. Run the creation script:
./scripts/create-new-nerpa.sh tutorial
Alternately, you can execute the script instructions manually one-by-one.
This should create a Nerpa program in nerpa_controlplane/tutorial, composed of the following files:
- tutorial.dl: DDlog program implementing the control plane, which will run on a centralized controller
- tutorial.p4: P4 program implementing the data plane, which will run on a software switch
- tutorial.ovsschema: OVSDB schema specifying the management plane and the structure of clients used to talk to the switch
- commands.txt: commands sent to the P4 switch's command-line interface and used to initialize the control plane
- init-ovsdb.sh: transactions for OVSDB's initial contents. These can be commands using ovsdb-tool or ovsdb-client, with transactions formatted as JSON as per RFC 7047.
Accordingly, before moving forward, make sure that the following directory structure exists:
nerpa_controlplane/tutorial
| +-- commands.txt
| +-- init-ovsdb.sh
| +-- tutorial.dl
| +-- tutorial.ovsschema
| +-- tutorial.p4
These files should have the following contents.
- commands.txt should be empty.
- init-ovsdb.sh should be empty.
- tutorial.dl should only contain the following comments:
// Uncomment the following imports after running p4info2ddlog and generating relations from the P4 program and OVSDB schema.
// import tutorial_dp as tutorial_dp
// import Tutorial_mp as tutorial_mp
- tutorial.ovsschema should contain this schema:
{
    "name": "tutorial",
    "tables": {
        "Client": {
            "columns": {
                "target": {"type": "string"},
                "device_id": {"type": "integer"},
                "role_id": {"type": "integer"},
                "is_primary": {"type": "boolean"}
            },
            "isRoot": false
        }
    },
    "version": "1.0.0"
}
The Client table represents a client for communicating with a P4-enabled switch over P4Runtime. Changes to this table are used to add and remove switches from the network. This allows fine-grained control of individual devices while managing multiple devices (e.g., sending packets to/from a specific switch, or altering table entries on a specific switch).
- tutorial.p4 should be an empty P4 program.
We will first program the management plane by designing the OVSDB schema. We start here because the schema captures the application's goal and is the simplest part of the program. Writing it down carefully ensures that you understand the problem at hand.
To do this, copy the contents of tutorial.ovsschema into nerpa_controlplane/tutorial/tutorial.ovsschema.
We pass the OVSDB schema as input to the ovsdb2ddlog
tool, which generates DDlog relations from the schema. This helps us directly read changes from OVSDB and convert them to inputs for the running DDlog program.
To do this, run the following command. This, and all other commands in this tutorial, should be run from the top-level nerpa directory:
ovsdb2ddlog --schema-file=nerpa_controlplane/tutorial/tutorial.ovsschema --output-file=nerpa_controlplane/tutorial/Tutorial_mp.dl
Compare the output file with Tutorial_mp.dl to verify its contents.
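The generated relations mirror the schema's tables and columns. For example, the Client table from the schema above should appear roughly as the input relation sketched below. This is a hedged approximation: the real Tutorial_mp.dl contains additional generated machinery (such as UUID columns and output relations) that is omitted here, and the example fact's target address is hypothetical.
// Rough sketch of a relation generated from the Client table; see Tutorial_mp.dl for the exact form.
input relation Client(
    target: string,
    device_id: bigint,
    role_id: bigint,
    is_primary: bool
)
// A hypothetical fact: a primary client for device 0 at an illustrative P4Runtime address.
// Client("localhost:50051", 0, 0, true).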
To program the data plane, we write the P4 program. tutorial.p4
specifies how packets with a VLAN header should be processed. P4 is a low-level language with many restrictions, and the DDlog program must cater to those restrictions.
Copy the contents of tutorial.p4 into nerpa_controlplane/tutorial/tutorial.p4.
Compile the P4 program, and generate P4Runtime files:
cd nerpa_controlplane/tutorial
p4c --target bmv2 --arch v1model --p4runtime-files tutorial.p4info.bin,tutorial.p4info.txt tutorial.p4
cd ../..
Use the p4info2ddlog
tool to generate relations and helper functions from the compiled P4 program. Generating relations ensures that DDlog output relations can be converted to P4 match-action tables, and that P4 digests can be converted to DDlog input relations. Helper functions facilitate this type conversion within the Nerpa codebase.
cd p4info2ddlog
cargo run ../nerpa_controlplane/tutorial tutorial ../dp2ddlog
cd ..
Compare the output file with tutorial_dp.dl to verify its contents.
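The generated relations correspond to the P4 program's tables and digests. For instance, a match-action table that matches on an ingress port and an optional VLAN ID might surface in DDlog roughly as sketched below; the names, widths, and action variants here are hypothetical, so check the generated tutorial_dp.dl for the real declarations.
// Hypothetical sketch of a relation generated for a P4 match-action table.
typedef input_vlan_action_t = SetVlan{vid: bit<12>}
                            | UseTaggedVlan
output relation InputVlan(
    port: bit<9>,                  // exact match key
    has_vlan: bool,                // exact match key
    vid: Option<bit<12>>,          // ternary match key; None means wildcard
    priority: bit<32>,             // entry priority
    action: input_vlan_action_t    // action and its parameters
)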
To program the control plane, we write the DDlog program that sits in between OVSDB and the P4 switch. Because we have generated the input and output relations, we know what the inputs and outputs to the control plane look like. The DDlog program connects these and implements the control plane's actions, by computing output changes from the input changes.
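As a hedged illustration of what such a rule can look like, the sketch below reuses the illustrative Port and InputVlan relations from the earlier sketches; the real rules in tutorial.dl also handle trunk ports, priority tagging, and the exact generated relation names.
// Sketch only, assuming the illustrative declarations sketched earlier:
//   input relation Port(id, vlan, priority_tagging)                  -- from the OVSDB schema
//   output relation InputVlan(port, has_vlan, vid, priority, action) -- from the P4 table
// Each access port gets an entry that assigns its VLAN tag to untagged packets.
InputVlan(id, false, None, 1, SetVlan{tag}) :- Port(id, AccessPort{tag}, _).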
Copy the contents of tutorial.dl into nerpa_controlplane/tutorial/tutorial.dl.
Compile the DDlog program, and build the generated crate. This generates the Rust code for the tutorial
DDlog program.
cd nerpa_controlplane/tutorial
ddlog -i tutorial.dl
cd tutorial_ddlog
cargo build
cd ../../..
Now that each sub-program exists, we can build the Nerpa tutorial program end-to-end:
./scripts/build-nerpa.sh nerpa_controlplane/tutorial tutorial
The build script executes the following steps. You will notice that Steps 1 to 5 replicate commands you have already run in the tutorial.
1. Check that the Nerpa dependencies were built correctly, specifically that $DDLOG_HOME is set, that $PATH includes the DDlog binary, and that the Nerpa dependencies directory $NERPA_DEPS exists.
2. Generate the DDlog relations from the OVSDB schema using ovsdb2ddlog. This is more fully described in the management plane section.
3. Compile the P4 program using p4c. This is more fully described in the data plane section. Compilation also creates a P4info file.
4. Generate DDlog relations from the P4info file using p4info2ddlog. This is also described in the data plane section.
5. Compile the DDlog program and build the generated DDlog crate. This is described in the control plane section.
6. Build ovsdb_client, the OVSDB client library crate. Because this client crate depends on the DDlog crate, we must first generate the Cargo.toml to ensure that it correctly imports the DDlog dependencies. This library crate can read changes from a running OVSDB and convert those changes to input relations. nerpa_controller uses this library to listen for changes from OVSDB and process those input relations in the control plane DDlog program.
7. Build nerpa_controller, the controller crate. As mentioned at the beginning, this crate synchronizes state between the planes. It listens for new inputs from the management and data planes; passes inputs to the control plane program; and writes any computed output changes to the data plane.
Run the Nerpa program, starting all pieces of software.
./scripts/run-nerpa.sh nerpa_controlplane/tutorial tutorial
The run script executes the following steps.
- Check that the Nerpa dependencies were built as expected. Specifically, it confirms that the environment variable $NERPA_DEPS points to a directory that contains the behavioral-model subdirectory.
- Run simple_switch_grpc, the virtual switch. If using veth devices, it tears down any existing interfaces and sets them up again.
- Initially configure the switch by passing the provided commands.txt to sswitch_CLI.py. This is a Python wrapper around the simple switch command-line interface.
- Start a new OVSDB server. Before this, we first stop any currently running ovsdb-server. We then use ovsdb-tool to create a new database, defined by the schema in tutorial.ovsschema. Finally, we start the server.
- Run nerpa_controller, the Nerpa controller crate. This long-running program synchronizes state between the planes. It begins by starting the DDlog program, the control plane. It then reads inputs from the management and data planes and sends them to the control plane; computes the outputs corresponding to those inputs; and writes outputs to the data plane.
After executing the run script in a Terminal window, you should see log lines indicating that bmv2, ovsdb-server, and nerpa_controller are running.
These log lines indicate that bmv2 started as expected:
Server listening on 0.0.0.0:50051
[11:23:11.949] [bmv2] [I] [thread 23575] Starting Thrift server on port 9090
[11:23:11.949] [bmv2] [I] [thread 23575] Thrift server was started
Obtaining JSON from switch...
[11:23:13.696] [bmv2] [T] [thread 23657] bm_get_config
Done
Control utility for runtime P4 table manipulation
These show that ovsdb-server started and is logging:
Stopping OVSDB...
Creating database...
Starting OVSDB...
2022-04-05T18:23:13Z|00001|vlog|INFO|opened log file nerpa/ovsdb-server.log
Finally, these show that nerpa_controller started, connected to OVSDB, and is properly monitoring the necessary columns:
Finished dev [unoptimized + debuginfo] target(s) in 0.54s
Running `target/debug/nerpa-controller --ddlog-record=replay.txt nerpa_controlplane/tutorial tutorial`
... Many lines setting debug entries in the switch ...
[11:23:14.501] [bmv2] [D] [thread 23677] simple_switch target has been notified of a config swap
2022-04-05T18:23:14Z|00001|reconnect|INFO|unix:db.sock: connecting...
2022-04-05T18:23:24Z|00002|reconnect|INFO|unix:db.sock: connected
Monitoring the following OVSDB columns: {"Port":[{"columns":["id","priority_tagging","tag","trunks","vlan_mode"]}]}
At this point, you should have executed the build and run scripts, and three pieces of software should be running: OVSDB, the bmv2 software switch, and the nerpa_controller binary.
Open a new Terminal window, and confirm that these three pieces of software are all running:
ps -ef | grep ovsdb-server
ps -ef | grep simple_switch_grpc
ps -ef | grep nerpa_controller
Now, we will alter the network configuration. We do this by inserting rows into the OVSDB management plane. In the new Terminal window that you used to check the running processes, execute the following commands:
ovsdb-client -v transact tcp:127.0.0.1:6640 '["tutorial", {"op": "insert", "table": "Port", "row": {"id": 0, "vlan_mode": "access", "tag": 1, "trunks": 0, "priority_tagging": "no"}}]'
ovsdb-client -v transact tcp:127.0.0.1:6640 '["tutorial", {"op": "insert", "table": "Port", "row": {"id": 1, "vlan_mode": "access", "tag": 1, "trunks": 0, "priority_tagging": "no"}}]'
ovsdb-client -v transact tcp:127.0.0.1:6640 '["tutorial", {"op": "insert", "table": "Port", "row": {"id": 2, "vlan_mode": "access", "tag": 1, "trunks": 0, "priority_tagging": "no"}}]'
ovsdb-client -v transact tcp:127.0.0.1:6640 '["tutorial", {"op": "insert", "table": "Port", "row": {"id": 3, "vlan_mode": "access", "tag": 1, "trunks": 0, "priority_tagging": "no"}}]'
These rows are equivalent to the following DDlog facts:
Port(0, AccessPort{1}, NoPriorityTag).
Port(1, AccessPort{1}, NoPriorityTag).
Port(2, AccessPort{1}, NoPriorityTag).
Port(3, AccessPort{1}, NoPriorityTag).
After you execute each command, you will see corresponding log lines in the original Terminal window. These indicate that the P4 table entry is being set.
For example:
[11:49:34.760] [bmv2] [D] [thread 24107] Entry 0 added to table 'TutorialIngress.InputVlan'
[11:49:34.760] [bmv2] [D] [thread 24107] Dumping entry 0
Match key:
* port : EXACT 0000
* has_vlan : EXACT 00
* vid : TERNARY 0000 &&& 0fff
Priority: 2147483646
Action entry: TutorialIngress.SetVlan - 1,
Behind the scenes, the OVSDB client processed this new input from OVSDB, converted it to an input relation, and sent it to the running nerpa_controller. The controller then used the running DDlog control plane program to compute the output relation. It converted the output into a P4Runtime table entry and pushed that entry to the switch. That final step is what the log lines above show.
Note that you could also have copy-pasted those four lines into nerpa_controlplane/tutorial/init-ovsdb.sh before calling run-nerpa.sh. Then those rows would have been inserted into OVSDB on start-up. Feel free to stop the currently running script and try this!
Congratulations! You have successfully built, run, and tested VLAN assignment within the Nerpa programming framework.