This issue is a placeholder that groups together a set of issues around how the server should support plugin development. This is for #25537 and will build on the work in #25639.
We'll want the server to be able to pull plugins from a local directory on the server it's running on. This could also be a deployment mechanism we'd prefer in the future (rather than saving it as a string in the catalog), but I'm not sure.
The server should be able to start up with a pointer to this directory, which is where it'll look for plugins. Then via an API and CLI, the user can trigger plugins to run against test input and then look at the output (and validate against saved outputs). Each plugin type will have its own endpoint.
Reminder: the four types are

- wal - on WAL flush, the rows are submitted to the plugin
- snapshot - on snapshot of the WAL and persistence of Parquet files, the Parquet file metadata is submitted to the plugin
- schedule - runs on a given schedule (like cron)
- request - binds to `/api/v3/plugins/<name>` and sends the request headers and body to the plugin
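For illustration only, a single plugin file might expose a hook per trigger type along these lines; the function names and signatures below are placeholders, not a defined plugin API:

```python
# my_plugin.py -- hypothetical sketch only; the hook names and signatures are
# placeholders, not a committed plugin API.

def on_wal_flush(rows):
    """Receives the rows from a WAL flush."""
    for row in rows:
        ...  # inspect or transform rows, optionally producing write-backs

def on_snapshot(parquet_meta):
    """Receives Parquet file metadata after a WAL snapshot is persisted."""
    ...

def on_schedule():
    """Runs on the configured (cron-like) schedule."""
    ...

def on_request(headers, body):
    """Handles a request bound to /api/v3/plugins/<name>."""
    return {"status": "ok"}
```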
The endpoints will be `/api/v3/plugin_test/<wal | snapshot | schedule | request>`. When any of these requests is run, the plugin will be reloaded from whatever is on the local disk.
Any database write backs from one of these plugins will be either returned in the API response or written out to a file in the plugin directory (depending on the type).
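As a rough sketch of how a test run might be triggered over HTTP (the host, port, and request body fields here are assumptions, not a finalized API):

```python
import requests

# Hypothetical call to the schedule test endpoint; field names and the
# response shape are assumptions based on the description above.
resp = requests.post(
    "http://localhost:8181/api/v3/plugin_test/schedule",
    json={"plugin_name": "my_plugin"},
)
resp.raise_for_status()

# Any write-backs the plugin would have made are expected back in the
# response body rather than being applied to the database.
print(resp.json())
```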
WAL
The request to the wal plugin test will have the following arguments:
- plugin name (this should match a file name in the server-configured dir, `<name>.py`)
- test input source (one of):
  - query (included in the request and then executed on the server, with result rows yielded to the plugin)
  - last WAL contents (whatever the most recent WAL file had in it)
  - LP (string of line protocol submitted in the request)
  - input file name (looks in the plugin directory `<plugin name>_tests/<filename>`)
- save input file name (optional; if using one of the first three input options, this saves the input in the plugin directory)
- save output file name (optional; will save to `<plugin name>_tests/<filename>`)
- validate output file name (optional; if provided, will look in `<plugin name>_tests/<filename>` and validate the output against that)
There should be a CLI to run this against the API. When the request is run, any write-backs will be returned in the response, and the CLI should display them at the terminal.
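Pulling those arguments together, a WAL test request body might look roughly like this (all field names are placeholders for illustration):

```python
# Hypothetical body for POST /api/v3/plugin_test/wal; field names are
# placeholders derived from the argument list above.
wal_test_request = {
    "plugin_name": "my_plugin",               # matches my_plugin.py in the plugin dir
    # exactly one input source: "input_query", "input_last_wal", "input_lp",
    # or "input_file" (read from <plugin name>_tests/<filename>)
    "input_lp": "cpu,host=a usage=0.5",
    "save_input_file": "case1.lp",            # optional
    "save_output_file": "case1_out.json",     # optional
    "validate_output_file": "case1_out.json", # optional
}
```

The CLI would presumably accept the same options as flags and print the returned write-backs at the terminal.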
Snapshot
The request to the snapshot plugin test will have the following arguments:
- plugin name (this should match a file name in the server-configured dir, `<name>.py`)
- test input source (one of):
  - last PersistedSnapshot contents (whatever the most recent snapshot file had in it)
  - JSON array of PluginParquetMeta objects
  - input file name (looks in the plugin directory `<plugin name>_tests/<filename>`)
- save input file name (optional; if using one of the first two input options, this saves the input in the plugin directory)
- save output file name (optional; will save to `<plugin name>_tests/<filename>`)
- validate output file name (optional; if provided, will look in `<plugin name>_tests/<filename>` and validate the output against that)
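A snapshot test request body would presumably follow the same pattern (again, field names are placeholders):

```python
# Hypothetical body for POST /api/v3/plugin_test/snapshot; field names are
# placeholders derived from the argument list above.
snapshot_test_request = {
    "plugin_name": "my_plugin",
    # exactly one input source: "input_last_snapshot", "input_parquet_meta"
    # (a JSON array of PluginParquetMeta objects), or "input_file"
    "input_file": "snapshot_case1.json",
    "save_input_file": "snapshot_case1.json",           # optional
    "save_output_file": "snapshot_case1_out.json",      # optional
    "validate_output_file": "snapshot_case1_out.json",  # optional
}
```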
Schedule
The request to the schedule plugin test runs the plugin a single time and returns in the response whatever write-backs would have been made. It has these arguments:
- plugin name (this should match a file name in the server-configured dir, `<name>.py`)
- save output file name (optional; will save to `<plugin name>_tests/<filename>`)
- validate output file name (optional; if provided, will look in `<plugin name>_tests/<filename>` and validate the output against that)
Since schedule plugins may query the database, we might have to figure out a good way to mock those queries out so that a test can run and the plugin's queries get yielded mock test data. Maybe some iteration on this will help figure out the best way. Let's let the development of some of these plugins drive what's needed to make testing them clean.
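One possible shape for that mocking, purely as a sketch and not a committed design: the test harness could hand the plugin a query function that yields canned rows instead of touching the real database.

```python
# Sketch only: a way the test harness might feed mock query results to a
# schedule plugin. The injection mechanism and names are assumptions.

def make_mock_query(fixtures):
    """Return a query function that yields canned rows keyed by query text."""
    def query(sql):
        yield from fixtures.get(sql, [])
    return query

# The plugin under test would be given this in place of a real query handle.
mock_query = make_mock_query({
    "SELECT host, usage FROM cpu": [{"host": "a", "usage": 0.5}],
})
```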
Request
The request plugin test will have the following arguments:
- plugin name (this should match a file name in the server-configured dir, `<name>.py`)
- request headers (one of):
  - map of headers
  - headers file name (file with a JSON map in it)
- request data file (a file located in `<plugin name>_tests/<filename>`)
- save output file name (optional; will save to `<plugin name>_tests/<filename>`)
- save db write-back file name (optional)
- validate output file name (optional; if provided, will look in `<plugin name>_tests/<filename>` and validate the output against that)
- validate db write-back file name (optional)
The output of the request plugin will be returned on the API request. When used, the CLI should show the DB write-back output from the file (if specified) and show the response.
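For completeness, a request plugin test body covering the arguments above might look like this (field names are placeholders):

```python
# Hypothetical body for POST /api/v3/plugin_test/request; field names are
# placeholders derived from the argument list above.
request_test_request = {
    "plugin_name": "my_plugin",
    # headers: either an inline map or a file name containing a JSON map
    "request_headers": {"Content-Type": "application/json"},
    # "request_headers_file": "headers.json",
    "request_data_file": "request_body.json",
    "save_output_file": "request_out.json",             # optional
    "save_db_writeback_file": "request_writes.lp",      # optional
    "validate_output_file": "request_out.json",         # optional
    "validate_db_writeback_file": "request_writes.lp",  # optional
}
```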