QuantoCore
QuantoCore is the name given to the general-purpose ML library for creating and rewriting with string graphs and !-graphs. It also refers to the Controller, a command-line tool that provides access to all of the core's functionality via a JSON protocol. While the old controller has been retained for backwards compatibility with the old Quantomatic GUI, the new controller and protocol are what we discuss here.
Notable changes from the old controller are that everything is done in JSON (no more escape sequences), the controller functionality is implemented in modules, and calls to the core are processed and handled asynchronously.
The controller is a thin wrapper for a table of modules. Requests look like this:
{
  "request_id": #,
  "controller": ...,
  "module": ...,
  "function": ...,
  "input": JSON
}
For every message, the dispatch chain goes: ControllerRegistry => Controller => Module => function. By default, the controller registry has one controller per GRAPHICAL_THEORY defined in the core. However, there is nothing in principle wrong with having more.
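To make the chain concrete, here is a minimal client-side model of the dispatch in Scala. It is purely illustrative: the actual dispatch is ML code inside the core, and all names and types below are assumptions of the sketch.

```scala
// Illustrative model of the dispatch chain; not the core's actual implementation.
object DispatchSketch {
  type Json       = String                  // stand-in for a real JSON type
  type ProtocolFn = Json => Json            // protocol functions map JSON to JSON
  type Module     = Map[String, ProtocolFn] // function name   -> function
  type Controller = Map[String, Module]     // module name     -> module
  type Registry   = Map[String, Controller] // controller name -> controller (one per theory)

  def dispatch(reg: Registry, controller: String, module: String,
               function: String, input: Json): Option[Json] =
    for {
      ctrl <- reg.get(controller) // ControllerRegistry => Controller
      mod  <- ctrl.get(module)    // Controller => Module
      fn   <- mod.get(function)   // Module => function
    } yield fn(input)
}
```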
Responses look like this:
{
  "request_id": #,
  "success": true,
  "output": JSON
}
or
{
  "request_id": #,
  "success": false,
  "output": {
    "message": STR,
    "code": INT
  }
}
The code -1 is reserved for protocol and IO errors. These will always result in the core terminating. Currently, all other errors are reported as code 0 (and do not cause termination), but this may change at some point.
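For illustration, a client might sort responses into these cases. This is a Scala sketch under assumed client-side types (CoreSuccess/CoreFailure are not part of the protocol); it only mirrors the success flag and error-code conventions described above.

```scala
// Hypothetical client-side view of a parsed response.
object ResponseHandling {
  sealed trait CoreReply
  case class CoreSuccess(requestId: Int, output: String) extends CoreReply
  case class CoreFailure(requestId: Int, message: String, code: Int) extends CoreReply

  def handle(reply: CoreReply): Unit = reply match {
    case CoreSuccess(id, out) =>
      println(s"request $id succeeded: $out")
    case CoreFailure(id, msg, -1) =>
      // code -1 is a protocol/IO error: the core will have terminated,
      // so the client should restart the process before doing anything else
      println(s"fatal core error on request $id: $msg")
    case CoreFailure(id, msg, code) =>
      // currently everything else comes back as code 0 and the core keeps running
      println(s"request $id failed with code $code: $msg")
  }
}
```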
Functions themselves are just maps from json to json. If they return normally, the first response format is used. If they raise a user_exn, the second response is used. Modules functorise over GRAPHICAL_THEORY as well as any modules they depend on.
Modules keep their own state in synchronised refs (called vars), expose functions (typically state accessors used by themselves or other modules), and register protocol functions. This method of storing state seems to work fine, provided the functor for each module is only applied once per controller:
(* Foo and Bar share refs, but Baz's refs
* are distinct from Foo and Bar *)
structure Foo = Fun(X)
structure Bar = Foo
structure Baz = Fun(X)
Function definitions are made such that bindings can be automatically generated by ML code. Central to this is the concept of a protocol type, or ptype. A ptype is one of a short list of type symbols which the client bindings may want to handle in a particular way. Currently, ptypes are defined as:
datatype ptype =
list_t of ptype |
string_t | int_t | json_t |
graphname_t | vertexname_t | edgename_t | bboxname_t | rulename_t |
graphdesc_t | ruledesc_t
For instance, if a function takes a graphname_t as an argument, the Java binding will actually take in a graph object and then call getCoreName.
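As a sketch of how a binding might use that convention (the Graph trait and helpers below are hypothetical, not the actual generated code): the user-facing binding accepts a graph object, but only its core-side name is placed in the request.

```scala
// Hypothetical Scala-side mirror of the graphname_t convention.
object PtypeSketch {
  trait Graph { def getCoreName: String }

  // what a generated binding would put in the "input" object for each ptype
  def graphNameArg(g: Graph): String = g.getCoreName // graphname_t: send only the name
  def stringArg(s: String): String   = s             // string_t
  def intArg(i: Int): Int            = i             // int_t
}
```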
New protocol functions are defined like this:
(* test named args *)
val ftab = ftab |> register
  {
    name = "concat",
    doc = "Concatenates the given arguments",
    input = N ["arg1" -: string_t, "arg2" -: string_t],
    output = S string_t
  }
  (fn x => (
    let
      val s1 = arg_str x "arg1"
      val s2 = arg_str x "arg2"
    in Json.String (s1 ^ s2)
    end
  ))
It's a bit verbose, but if we try to keep modules at fewer than 30 protocol functions apiece, it should keep things pretty readable.
Register takes an fdesc followed by a function from json to json. The input and output fields can either be singletons (marked by S) or a collection of named arguments (marked by N). The benefit of this is that the core now knows enough to generate documentation or code bindings on the fly. So, the last stage of the build process would be something like:
% quanto-core --XXX-bindings [dir]
Adding languages is just a matter of writing some scaffolding and ML functions to convert fdesc entries to source code.
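For example, a generated Scala binding for the concat function above might end up looking roughly like this. Core.request and the controller/module names are placeholders, not what the generator actually emits.

```scala
// Hypothetical shape of a generated binding; not the real output of the generator.
trait Core {
  // sends one request and blocks until its response arrives
  def request(controller: String, module: String, function: String,
              input: Map[String, String]): String
}

object GeneratedBindings {
  def concat(core: Core, arg1: String, arg2: String): String =
    core.request("some_theory", "some_module", "concat", // controller/module names assumed
                 Map("arg1" -> arg1, "arg2" -> arg2))
}
```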
In the core, parallelisation is done using Makarius Wenzel's Futures structures. There are two main threads: one for processing requests and one for flushing responses.
The request thread listens on stdin (or some other instream). Once a full JSON object is received, it forks a new future and passes it the parsed JSON. That future does its processing until, at some point, it (atomically) pushes its output onto a global output buffer. It should also maintain a table of active jobs (indexed by request ID) so that the client can issue commands to cancel running jobs.
The response thread gets woken up whenever the output buffer changes, converts all JSON objects to strings, and sends them down stdout (or some other outstream).
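The core itself is ML, but the shape of the two threads can be sketched in Scala roughly as follows. The buffer type, the line-based output, and the helper names are assumptions of the sketch, not the core's actual implementation.

```scala
// Rough analogue of the core's threading model: worker futures push finished
// responses onto a shared buffer; one response thread flushes them to stdout.
import java.util.concurrent.LinkedBlockingQueue
import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global

object ThreadingSketch {
  val outputBuffer = new LinkedBlockingQueue[String]()

  // response thread: blocks until output is available, then writes it out
  val responder = new Thread(() => while (true) println(outputBuffer.take()))
  responder.setDaemon(true)

  // request side: each fully parsed request is handled in its own future,
  // which atomically enqueues its serialised response when it finishes
  def handleRequest(requestJson: String, process: String => String): Future[Unit] =
    Future { outputBuffer.put(process(requestJson)) }

  def main(args: Array[String]): Unit = {
    responder.start()
    handleRequest("""{"request_id": 1}""", req => s"""{"request_id": 1, "echo": $req}""")
    Thread.sleep(100) // give the toy future time to finish before the JVM exits
  }
}
```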
The client also needs to behave asynchronously. It looks like the best way to do this in Scala is using Akka Actors, which abstract away from threads using an Erlang-style message-passing paradigm. Basically, objects are treated as agents, and the only way to interact with their internal state is by sending them messages.
Actors related to the core are:
- CoreState(initActor) - holds the core executable and spawns child actors for managing IO and the lifecycle of the core state. Its constructor takes an actor reference to a special actor called the initializer, which will (re)initialise the expected internal state of the core whenever it is started or restarted.
- CoreRequester - takes CoreRequest messages and writes them to the core executable as JSON.
- CoreResponder - continuously reads from stdout of the core process and fires a CoreResponse any time a complete JSON response can be parsed.
The CoreState runs in three different modes, depending on what state it is in. It starts in the state down until it receives a message to start. At this point, it moves to state init and fires up the core process and the requester/responder actors. It then listens for a "ready" message coming from the core (via CoreResponder). When it receives this, it runs the provided initializer actor. This actor can communicate with the CoreRequester and CoreResponder to perform some initialization, then sends a message to CoreState to signal that it is done. Once this happens, CoreState transitions to the ready state. Only then will it pass requests on to the CoreRequester. In the other two states, it will simply respond immediately with a message indicating that the core is down.
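A sketch of that state machine using Akka's classic actors might look like the following. The message names, and passing the requester/initializer in via the constructor rather than spawning them as children, are simplifications of what is described above, not the real client code.

```scala
// Illustrative sketch of the CoreState lifecycle using context.become.
import akka.actor.{Actor, ActorRef}

case object StartCore                 // external request to bring the core up
case object CoreReady                 // from CoreResponder: core printed its ready message
case object RunInit                   // to the initializer: start (re)initialising the core
case object InitDone                  // from the initializer: initialisation finished
case class  CoreRequest(json: String) // a request to pass on to CoreRequester
case object CoreIsDown                // reply used while the core cannot take requests

class CoreState(initializer: ActorRef, requester: ActorRef) extends Actor {
  def receive: Receive = down

  def down: Receive = {
    case StartCore =>
      // the real actor would also fork the core process and spawn the
      // CoreRequester/CoreResponder child actors at this point
      context.become(init)
    case _: CoreRequest => sender() ! CoreIsDown
  }

  def init: Receive = {
    case CoreReady      => initializer ! RunInit   // let the initializer talk to the core
    case InitDone       => context.become(ready)
    case _: CoreRequest => sender() ! CoreIsDown
  }

  def ready: Receive = {
    case r: CoreRequest => requester forward r // only now are requests passed on
  }
}
```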