It is a fat JAR (available here) with Karate Core and extensions to connect to other systems:
- karate-connect-<version>-standalone.jar
The extensions require some tools. Three Docker images are also available with everything installed:
- lectratech/karate-connect:<version>-minimal (only the standalone JAR in a JRE image)
- lectratech/karate-connect:<version> (Python packages & standard kubectl installation)
- lectratech/karate-connect:<version>-aks (Python packages & Azure Kubernetes Service kubectl installation)
A set of Kotlin classes, Karate features, and JavaScript functions, callable from your Karate project:
* def result1 = <extension>.<value>
* def result2 = <extension>.<feature>.<function>(args)
* json cliConfig = snowflake.cliConfigFromEnv
* def rabbitmqClient = rabbitmq.topology.createClient({"host": "...", "port": ..., ...})
Requirements
- JDK 21+
- Kotlin
- Gradle
- Docker and Docker Compose
- Python 3.x & pip
Snowflake requirements
- Define a src/test/resources/snowflake/snowflake.properties with your Snowflake information.
  Example: src/test/resources/snowflake/snowflake.template.properties
  Note: privateKeyBase64 is a base64 one-line encoded private key: cat my-private-key.pem | base64 -w0
- Your Snowflake role has to be allowed to create/drop schemas/stages/tables and create/execute/drop tasks.
Commands
- source .envrc to install Python packages in a virtual environment (or direnv allow if you prefer the great direnv tool)
- ./gradlew build to build the fat JAR with all tests
- ./gradlew build -DtestExtensions=…​ to build the fat JAR with only Karate tests on the given extensions (useful if you do not have a Snowflake account)
- docker compose build to build the 3 Docker images locally:
  - karate-connect:minimal
  - karate-connect
  - karate-connect:aks
docker run --rm \
-v <features_path>:/features \
-v <reports_path>:/target/karate-reports \
-e KARATE_EXTENSIONS=<ext1>,<ext2>... \
lectratech/karate-connect:<version> <karate_args>
Note: KARATE_EXTENSIONS, reports_path, and karate_args are optional.
docker run --rm \
... \
-v <my-specific-karate-config.js>:/karate-config.js \
lectratech/karate-connect:<version> <karate_args>
karate-config.js
function fn() {
const myFunction = (input) => input.toUpperCase();
return {
myValue: "foo",
myFunction: myFunction
};
}
Some common functions added to the Karate DSL
* string res = base.random.uuid() # ex: '8cd07583-cf24-4373-ad58-f1c9303501c5'
* def millis = base.time.currentTimeMillis() # ex: 1738851217499
* string now = base.time.offsetDateTimeNow() # ex: '2025-01-01T15:10:00.629772630+01:00'
* string str = base.json.toString({foo:"bar"}) # str='{"foo":"bar"}'
* string str = base.json.readLines("file.json") # str='{"id":"1c4b..."}\n{"id":"2a02..."}' with file.json = '{"id":"#(base.random.uuid())"}\n{"id":"#(base.random.uuid())"}'
* def bool = base.assert.withEpsilon(0.1234, 0.12, epsilon) # bool=true if epsilon=1E-2, false if epsilon=1E-3
Note: This extension is loaded by default.
RabbitMQ topology creation & message publication/consumption
* json rabbitmqConfigFromEnv = rabbitmq.configFromEnv # environment variables RABBITMQ_HOST, RABBITMQ_PORT, RABBITMQ_VIRTUAL_HOST, RABBITMQ_USERNAME, RABBITMQ_PASSWORD, RABBITMQ_SSL
* json rabbitmqConfigFromValue = { host: "localhost", port:5672, virtualHost:"default", username:"guest", password:"guest", ssl:false }
* json rabbitmqConfigFromJsonFile = read("my-rabbitmq-config.json")
* def rabbitmqClient = rabbitmq.topology.createClient(rabbitmqConfig)
Note: It will be closed automatically when no longer in use.
Tip: It should be created only once! Declare it in your karate-config.js.
karate-config.js with your rabbitmqClient
function fn() {
const rabbitmqConfig = ...;
const rabbitmqClient = karate.callSingle("classpath:rabbitmq/topology.feature@createClient", rabbitmqConfig).result;
return {
"rabbitmqClient": rabbitmqClient
};
}
These operations should not normally be performed by Karate. Nevertheless, it is possible if you need to.
* json exchangeConfig = ({ rabbitmqClient, name: "<myexchange>", type: "direct|topic|fanout|headers", durable: true(default)|false, autoDelete: true|false(default) })
* json result = rabbitmq.topology.exchange(exchangeConfig)
* match result.status == "OK"
* json queueConfig = ({ rabbitmqClient, name: "<myqueue>", type: "classic|quorum|stream", durable: true(default)|false, exclusive: true|false(default), autoDelete: true|false(default) })
* json result = rabbitmq.topology.queue(queueConfig)
* match result.status == "OK"
* json bindingConfig = ({ rabbitmqClient, exchangeName: "<myexchange>", queueName: "<myqueue>", routingKey: "<my.routing.key>" })
* json result = rabbitmq.topology.bind(bindingConfig)
* match result.status == "OK"
* json publishConfig = ({ rabbitmqClient, exchangeName: "<myexchange>", routingKey: "<my.routing.key>" })
* json headers = { header1: "foo", header2: "bar", ... }
* json properties = ({ headers, contentType: "application/json", ... })
* json message = ({ body: "...", properties })
* json result = rabbitmq.message.publish({...publishConfig, message})
* match result.status == "OK"
name | type | default value
---|---|---
contentType | string | "application/json"
contentEncoding | string | "UTF-8"
deliveryMode | number | null
priority | number | null
correlationId | string | "<uuid>"
replyTo | string | null
expiration | string | null
messageId | string | "<uuid>"
timestamp | number | nb milliseconds since January 1, 1970, 00:00:00 GMT, until now
type | string | null
userId | string | null
appId | string | null
clusterId | string | null
headers | map<string,string> | empty map
* json consumeConfig = ({ rabbitmqClient, queueName: "<myqueue>", timeoutSeconds: <nbSeconds>(default 60), minNbMessages: <nbNeededMessages>(default 1) })
* json result = rabbitmq.message.consume(consumeConfig)
* match result.status == "OK"
* json publishAndConsumeConfig = ({ rabbitmqClient, exchangeName: "<myexchange>", routingKey: "<my.routing.key>", timeoutSeconds: <nbSeconds>(default 60) })
* json message = ... # input
* json result = rabbitmq.message.publishAndConsume({...publishAndConsumeConfig, message})
* match result.status == "OK"
* match result.data.body == ... # output body
* match result.data.properties == ... # output properties
Snowflake CLI / Snowflake REST API calls
Note: For the fat JAR usage, you have to install the Snowflake CLI.
* json cliConfigFromEnv = snowflake.cliConfigFromEnv # environment variables SNOWFLAKE_ACCOUNT, SNOWFLAKE_USER, SNOWFLAKE_PRIVATE_KEY_PATH, PRIVATE_KEY_PASSPHRASE
* json cliConfigFromValue = { account: "xxx.yyy.azure", user: "<MY_USER>", privateKeyPath: "<.../file.pem>", privateKeyPassphrase: "..." }
* json cliConfigFromJsonFile = read("my-cli-config.json")
* json snowflakeConfigFromEnv = snowflake.snowflakeConfigFromEnv # environment variables SNOWFLAKE_ROLE, SNOWFLAKE_WAREHOUSE, SNOWFLAKE_DATABASE, SNOWFLAKE_SCHEMA
* json snowflakeConfigFromValue = { role: "<MY_ROLE>", warehouse: "<MY_WH>", database: "<MY_DB>", schema: "<MY_SCHEMA>" }
* json snowflakeConfigFromJsonFile = read("my-snowflake-config.json")
* string jwt = snowflake.cli.generateJwt(cliConfig)
* match jwt == '#regex .+\\..+\\..+'
Tip: The JWT should be created once (or once per scenario). Declare it in your karate-config.js.
karate-config.js with your jwt
function fn() {
const cliConfig = ...;
const jwt = karate.callSingle("classpath:snowflake/cli.feature@generateJwt", cliConfig).result; // once
// const jwt = karate.call("classpath:snowflake/cli.feature@generateJwt", cliConfig).result; // once per scenario
return {
"jwt": jwt,
"cliConfig": cliConfig,
...
};
}
* string fileAbsolutePath = karate.toAbsolutePath("<relativePath>/<file>.csv")
* string tableName = "<MY_TABLE>"
* json result = snowflake.cli.putCsvIntoTable({ fileAbsolutePath, tableName, cliConfig, snowflakeConfig })
* match result.status == "OK"
* string fileAbsolutePath = karate.toAbsolutePath("<relativePath>/<file>.json")
* string tableName = "<MY_TABLE>"
* json result = snowflake.cli.putJsonIntoTable({ fileAbsolutePath, tableName, cliConfig, snowflakeConfig })
* match result.status == "OK"
* string statement = "SELECT FOO, BAR FROM..."
* json result = snowflake.cli.runSql({ statement, cliConfig, snowflakeConfig })
* match result.status == "OK"
* match result.output == [ { "FOO": 1, "BAR": "bar1" }, { "FOO": 2, "BAR": "bar2" }, ... ]
Note: Limitations for SQL statements through the CLI have not yet been analyzed.
* json restConfig = ({ jwt, cliConfig, snowflakeConfig })
* string statement = "SELECT FOO, BAR FROM..."
* json result = snowflake.rest.runSql({ ...restConfig, statement})
* match result.status == "OK"
* match (result.data.length) == 1
* match result.data[0].FOO == 1
* match result.data[0].BAR == "bar1"
* json restConfig = ({ jwt, cliConfig, snowflakeConfig })
* json result = snowflake.rest.cloneSchema({...restConfig, schemaToClone: "<MY_SOURCE_SCHEMA>", schemaToCreate: "<MY_TARGET_SCHEMA>"})
* match result.status == "OK"
* json restConfig = ({ jwt, cliConfig, snowflakeConfig })
* json result = snowflake.rest.dropSchema({...restConfig, schemaToDrop: "<MY_SCHEMA>"})
* match result.status == "OK"
Kafka Connect usage
* string table = "<MY_TABLE>"
# Single row
* json result = snowflake.rest.insertRowIntoStagingTable({...restConfigLocal, table, recordMetadata: {...}, recordValue: {...}})
* match result.status == "OK"
# Single row from files
* json result = snowflake.rest.insertRowIntoStagingTable({...restConfigLocal, table, recordMetadataFile: "<file-metadata-path>", recordValue: "<file-value-path>"})
* match result.status == "OK"
# Many rows
* json result = snowflake.rest.insertRowsIntoStagingTable({...restConfigLocal, table, records: [ {recordMetadata: {...}, recordValue: {...}}, ... ]})
* match result.status == "OK"
* string taskName = "<MY_TASK>"
* json restConfig = ({ jwt, cliConfig, snowflakeConfig })
* json result = snowflake.rest.runSql({...restConfig, statement: "EXECUTE TASK "+taskName})
* match result.status == "OK"
* json result = snowflake.rest.checkTaskStatus({...restConfig, taskName})
* match result.status == "OK"
Note: checkTaskStatus will use the retry strategy to wait for the task completion.
* string taskName = "<MY_TASK>"
* json restConfig = ({ jwt, cliConfig, snowflakeConfig })
* json result = snowflake.rest.cloneAndExecuteTask({...restConfig, taskName})
* match result.status == "OK"
Note: cloneAndExecuteTask will execute a temporary copy of the task taskName (without the parent task) and will wait for its completion.
Kubectl calls
* string namespace = "<my-namespace>"
* string cronJobName = "<my-cronjob-name>"
* string jobName = "<my-created-job-name>"
* def timeoutSeconds = 60 # (default)
* json result = kubernetes.cronJob.runJob({namespace, cronJobName, jobName, timeoutSeconds})
* match result.status == "OK"
* karate.log(result.message)
* string namespace = "<my-namespace>"
* string cronJobName = "<my-cronjob-name>"
* string jobName = "<my-created-job-name>"
* def timeoutSeconds = 60 # (default)
* json env = { "MY_ENV1": "MY_VALUE1", "MY_ENV2": "MY_VALUE2" }
* json result = kubernetes.cronJob.runJobWithEnv({namespace, cronJobName, jobName, timeoutSeconds, env})
* match result.status == "OK"
* karate.log(result.message)
TODO: tests
Dbt calls
Note: For the fat JAR usage, you have to install dbt.
# nominal case
* json result = dbt.cli.run()
* match result.status == "OK"
* karate.log(result.output)
# with optional parameters
* string select = "my_model"
* string profilesDir = "/path/to/.dbt"
* string projectDir = "/path/to/dbtProject"
* string extra = "..."
* json result = dbt.cli.run({select, profilesDir, projectDir, extra})
* match result.status == "OK"
* karate.log(result.output)
TODO: tests
The code is licensed under Apache License, Version 2.0.
The documentation and logo are licensed under Creative Commons Attribution-ShareAlike 4.0 International Public License.