Pre-Release Fixes & Refactorings #23

Merged
merged 38 commits on May 16, 2023
Changes from all commits

Commits (38)
43d52ce
chore: cleanup env vars
moritzkirstein May 10, 2023
969e4ba
deps: bump lib version to 2.7
moritzkirstein May 10, 2023
4a47b7c
feat: add LogLevel to Nautilus
moritzkirstein May 10, 2023
96e1196
feat: add ddo validation to publish flow
moritzkirstein May 10, 2023
9322852
docs: update README example nodeUri
moritzkirstein May 10, 2023
b1afe32
tests: major test overhaul (WIP)
moritzkirstein May 10, 2023
8228459
refactor: rename files
moritzkirstein May 11, 2023
f366371
fix: optional consumerParameters
moritzkirstein May 11, 2023
38f44b5
refactor: move LogLevel to before()
moritzkirstein May 11, 2023
6ac1b23
docs: example.env
moritzkirstein May 11, 2023
b94464d
deps: revert microbundle to 0.14.2
moritzkirstein May 11, 2023
9e1b9fe
fix: optional metadata.algorithm
moritzkirstein May 11, 2023
9a58a1b
feat: update readme
Abrom8 May 11, 2023
9b92a85
fix: access order
moritzkirstein May 11, 2023
a25be98
fix: compute order
moritzkirstein May 11, 2023
fa7fd91
test: fix test fixture for algo service
moritzkirstein May 11, 2023
c1d1da6
test: fix algorithm container fixture checksum
moritzkirstein May 11, 2023
7670b84
fix: catch potential undefined initializedData in compute order
moritzkirstein May 11, 2023
1afcf66
test: add compute-flow
moritzkirstein May 11, 2023
72aed2f
chore: cleanup imports
moritzkirstein May 11, 2023
009512f
chore: add filetype exports
moritzkirstein May 11, 2023
6663e19
feat: validate provider before publishing
moritzkirstein May 11, 2023
940b716
feat: validate files before publishing
moritzkirstein May 11, 2023
cd2b69e
refactor: remove metadata check after addition of aqua validation
moritzkirstein May 11, 2023
44b2cd6
test: refactor to unit test folder
moritzkirstein May 11, 2023
d054e70
test: refactor tests, add publishing integration tests
moritzkirstein May 11, 2023
238ed98
feat: add initializeProvider and serviceId for access requests
moritzkirstein May 11, 2023
254abb4
refactor: update nft name & symbol
moritzkirstein May 11, 2023
46b15df
test: remove .only
moritzkirstein May 11, 2023
9db05b1
feat: add compute status and results to nautilus class
moritzkirstein May 11, 2023
84b55f7
chore: remove unused chainConfig.json
moritzkirstein May 11, 2023
595d285
chore: remove only & skip from tests
moritzkirstein May 11, 2023
bde7156
refactor: update getComputeEnvironment
moritzkirstein May 12, 2023
53ab841
chore: rename function
moritzkirstein May 12, 2023
3322bb3
feat: add additional hints to readme
Abrom8 May 12, 2023
1c042c1
docs: add compute status & result to README
moritzkirstein May 12, 2023
03b9219
docs: improve readme
Abrom8 May 15, 2023
0dcf242
docs: add install nautilus to readme
Abrom8 May 15, 2023
170 changes: 143 additions & 27 deletions README.md
@@ -4,25 +4,46 @@ A typescript library helping to navigate the OCEAN. It enables configurable auto

## Table of Contents

- [⚙️ Configuring a new Nautilus instance](#configuring-a-new-nautilus-instance)
- [🌐 Automated Publishing](#automated-publishing)
  - [Services](#services)
  - [Consumer Parameters](#consumer-parameters)
  - [Pricing](#pricing)
  - [Owner and optional configs](#owner-and-optional-configs)
- [🤖 Automated Compute Jobs](#automated-compute-jobs)
- [🔐 Automated Access](#automated-access)
- [📚 API Documentation](#api-documentation)
- [🏛️ License](#api-documentation)
- [Nautilus](#nautilus)
  - [Table of Contents](#table-of-contents)
  - [Configuring a new Nautilus instance](#configuring-a-new-nautilus-instance)
  - [Automated Publishing](#automated-publishing)
    - [Services](#services)
    - [Consumer Parameters](#consumer-parameters)
    - [Pricing](#pricing)
    - [Owner and optional configs](#owner-and-optional-configs)
  - [Automated Compute Jobs](#automated-compute-jobs)
    - [Basic Config](#basic-config)
    - [Optional Settings](#optional-settings)
    - [Start compute job](#start-compute-job)
    - [Get compute job status](#get-compute-job-status)
    - [Get compute job results](#get-compute-job-results)
  - [Automated Access](#automated-access)
  - [API Documentation](#api-documentation)
  - [License](#license)

## Configuring a new Nautilus instance

Setting up a new `Nautilus` instance to perform automated tasks, such as publishing and consuming assets, is straightforward.

First make sure to setup the `Web3` instance to use:
Install Nautilus:

```shell
npm install @deltadao/nautilus
```

Make sure you have [web3](https://www.npmjs.com/package/web3) installed:

```shell
npm install web3
```

Set up the `Web3` instance to use:

```ts
const web3 = new Web3('https://rpc.genx.minimal-gaia-x.eu') // can be replaced with any Ocean Protocol supported network
import Web3 from 'web3'

const web3 = new Web3('https://matic-mumbai.chainstacklabs.com') // can be replaced with any Ocean Protocol supported network
```

Next, add the account you want to use for the automations:
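
A minimal sketch of this step, assuming the private key is read from an environment variable (see `example.env`) and that the library exposes a `Nautilus.create(web3)` factory (both are assumptions here), could look like this:

```ts
import { Nautilus } from '@deltadao/nautilus'

// Continuing with the `web3` instance created above.
// PRIVATE_KEY is assumed to be provided via the environment (see example.env).
const account = web3.eth.accounts.privateKeyToAccount(process.env.PRIVATE_KEY as string)
web3.eth.accounts.wallet.add(account)

// Create the Nautilus instance used for all further automations.
// The exact factory signature is an assumption.
const nautilus = await Nautilus.create(web3)
```
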
@@ -71,7 +92,7 @@ You can use the `AssetBuilder` class to build an asset and publish it with the `
Let's start by creating the builder and specifying the account that will be the owner/publisher of the new asset:

```ts
import { AssetBuilder } from '@delta-dao/nautilus'
import { AssetBuilder } from '@deltadao/nautilus'

const assetBuilder = new AssetBuilder()
```
@@ -82,7 +103,7 @@ With this we can now continue to set up the metadata information for the asset:
assetBuilder
  .setType('dataset') // 'dataset' or 'algorithm'
  .setName('My New Asset')
  .setDescription('A publish asset building test on GEN-X') // supports markdown
  .setDescription('This is a publish asset building test using Nautilus') // supports markdown
  .setAuthor('testAuthor')
  .setLicense('MIT') // SPDX license identifier
```
@@ -97,7 +118,8 @@ const algoMetadata = {
    entrypoint: 'node $ALGO',
    image: 'node',
    tag: 'latest',
    checksum: '026026d98942438e4df232b3e8cd7ca32416b385918977ce5ec0c6333618c423'
    checksum:
      'sha256:026026d98942438e4df232b3e8cd7ca32416b385918977ce5ec0c6333618c423'
  }
}
```
@@ -137,6 +159,8 @@ const service = serviceBuilder
assetBuilder.addService(service)
```

> **_NOTE:_** If you want to publish an algorithm or dataset for computation, make sure to use `ServiceBuilder(ServiceTypes.COMPUTE, ...)`.

The code above will build a new `access` service, serving a `url` type file that is available at `https://link.to/my/asset`. The service will be accessible via the Ocean Provider hosted at `https://ocean.provider.to/use`.

The supported `ServiceTypes` are `ACCESS` and `COMPUTE`.
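
For a compute-capable service, a builder could be created as sketched below; the positional constructor arguments and the `FileTypes.URL` value are assumptions based on the note above, and the rest of the chain follows the access-service example:

```ts
import { ServiceBuilder, ServiceTypes, FileTypes } from '@deltadao/nautilus'

// Assumed constructor form: service type first, file type second.
const computeServiceBuilder = new ServiceBuilder(ServiceTypes.COMPUTE, FileTypes.URL)
```
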
@@ -196,12 +220,13 @@ assetBuilder.setPricing({
  type: 'fixed', // 'fixed' or 'free'
  // freCreationParams can be omitted for 'free' pricing schemas
  freCreationParams: {
    fixedRateAddress: '0x...',
    baseTokenAddress: '0x...',
    baseTokenDecimals: 18,
    datatokenDecimals: 18,
    fixedRateAddress: '0x25e1926E3d57eC0651e89C654AB0FA182C6D5CF7', // Fixed Rate Contract address on Mumbai network
    baseTokenAddress: '0xd8992Ed72C445c35Cb4A2be468568Ed1079357c8', // OCEAN token contract address on Mumbai network
    baseTokenDecimals: 18, // adjusted to OCEAN token
    datatokenDecimals: 18, // adjusted to OCEAN token
    fixedRate: '1', // PRICE
    marketFee: '0'
    marketFee: '0',
    marketFeeCollector: '0x0000000000000000000000000000000000000000'
  }
})

@@ -257,6 +282,8 @@ If all went well, you should be able to browse the asset on any OceanMarket conn
The `Nautilus` instance we created in the setup step provides access to a `compute()` function that we can use to start new compute jobs.
This includes all potentially necessary orders for the required datatokens, as well as the signed request towards the Ocean Provider to start the compute job itself.

### Basic Config

The following values are required to start a new compute job:

```ts
@@ -274,11 +301,7 @@ const computeConfig = {
}
```
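
As a rough sketch (the DIDs below are placeholders, and any shape of `dataset` and `algorithm` beyond a `did` field is an assumption), the basic config boils down to referencing the dataset and algorithm you want to run:

```ts
// Placeholder DIDs, replace with the assets you want to use.
const dataset = {
  did: 'did:op:123abc' // dataset offering a `compute` service
}

const algorithm = {
  did: 'did:op:456def' // algorithm allowed to run on that dataset
}

const computeConfig = {
  dataset,
  algorithm
}
```
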

To start the new compute job simply call the compute function:

```ts
const computeJob = await nautilus.compute(computeConfig)
```
### Optional Settings

In addition, you can specify some optional properties if needed.
Both the dataset and the algorithm support custom `userdata` that can be passed to their respective services. For algorithms you can also specify an `algocustomdata` property.
@@ -316,15 +339,108 @@ const algorithm = {
}
```
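
As an illustration, a configuration with these optional fields could look like the sketch below; the DIDs and the individual keys inside `userdata` and `algocustomdata` are placeholders, not part of the library's API:

```ts
const dataset = {
  did: 'did:op:123abc', // placeholder DID
  userdata: {
    surveyId: 'my-survey' // example user data forwarded to the dataset service
  }
}

const algorithm = {
  did: 'did:op:456def', // placeholder DID
  userdata: {
    language: 'en' // example user data forwarded to the algorithm service
  },
  algocustomdata: {
    iterations: 10 // example custom input consumed by the algorithm itself
  }
}
```
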

When you are happy with the configuration you can use the `Nautilus` instance just as before to start the new compute job:
### Start compute job

When you are happy with the configuration, you can use the `Nautilus` instance to start the new compute job:

```ts
const computeJob = await nautilus.compute({
  dataset,
  algorithm
})

const jobId = computeJob[0].jobId // make sure to save your jobId to retrieve results later
```

### Get compute job status

Once you have started a compute job, you can get status reports via the `.getComputeStatus()` function.
For this you need the `jobId` as well as the `providerUri` endpoint that is used for orchestrating and accessing the compute job.

In most cases `providerUri` will be the `serviceEndpoint` of the `compute` service of the dataset that was computed on.

```ts
const computeJob = await nautilus.getComputeStatus({
  jobId, // use your previously saved jobId
  providerUri: 'https://v4.provider.oceanprotocol.com/' // default Ocean Provider (serviceEndpoint)
})
```

Example compute job status:

```json
{
  "agreementId": "0x1234abcd",
  "jobId": "9876",
  "owner": "0x1234",
  "status": 70,
  "statusText": "Job finished",
  "dateCreated": "1683849268.012345",
  "dateFinished": "1683849268.012345",
  "results": [
    {
      "filename": "results.txt",
      "filesize": 1234,
      "type": "output"
    },
    {
      "filename": "algorithm.log",
      "filesize": 1234,
      "type": "algorithmLog"
    },
    {
      "filename": "configure.log",
      "filesize": 1234,
      "type": "configrationLog"
    },
    {
      "filename": "publish.log",
      "filesize": 0,
      "type": "publishLog"
    }
  ],
  "stopreq": 0,
  "removed": 0,
  "algoDID": "did:op:algo-did",
  "inputDID": ["did:op:dataset-did"]
}
```

### Get compute job results

If a compute job has finished and results are available, you can access them using Nautilus.
Once again you will need the `jobId` as well as the `providerUri`, as specified in the previous section on compute status.

```ts
const computeJob = await nautilus.getComputeResult({
  jobId, // use your previously saved jobId
  providerUri: 'https://v4.provider.oceanprotocol.com/' // default Ocean Provider (serviceEndpoint)
})
```

In addition, you can specify a `resultIndex` to access a specific result file you are interested in. If you do not specify a `resultIndex`, the first result file is used by default. For example, you could use this to get the algorithm log of a specific compute job:

```ts
const jobStatus = await nautilus.getComputeStatus({
  jobId,
  providerUri
})

if (jobStatus.status === 70) {
  const resultIndexAlgorithmLog = jobStatus.results?.findIndex(
    (result) => result.type === 'algorithmLog'
  )
  const computeJob = await nautilus.getComputeResult({
    jobId,
    providerUri,
    resultIndex:
      resultIndexAlgorithmLog > -1 ? resultIndexAlgorithmLog : undefined
  })
}
```

For more information on compute job status and result requests, please refer to the [Ocean Provider API documentation](https://docs.oceanprotocol.com/api-references/provider-rest-api#status-and-result).

## Automated Access

To access assets, or more specifically their respective services, we can use the `access()` function provided by the `Nautilus` instance we created in the setup step.
@@ -356,7 +472,7 @@ const accessUrl = await nautilus.access(accessConfig)
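
A sketch of such an access request might look like the following; the `assetDid` property name and the DID value are assumptions and placeholders:

```ts
const accessConfig = {
  assetDid: 'did:op:123abc' // placeholder DID of the asset you want to access
}

// Orders the service if necessary and resolves to a download URL for the asset.
const accessUrl = await nautilus.access(accessConfig)
```
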
## API Documentation

If you want to learn more about Nautilus, we provide more detailed documentation of the library, including typedoc API documentation:
https://deltadao.github.io/nautilus
https://deltadao.github.io/nautilus/docs/api/

## License

7 changes: 2 additions & 5 deletions example.env
@@ -1,5 +1,2 @@
SUBGRAPH_URI=https://subgraph.v4.genx.minimal-gaia-x.eu
AQUARIUS_URI=https://aquarius.v4.delta-dao.com
RPC_URI=https://rpc.genx.minimal-gaia-x.eu
PRIVATE_KEY='xxx'
CHAIN_CONFIG_FILEPATH="./chainConfig.json"
PRIVATE_KEY_TESTS_1='0x1234...'
PRIVATE_KEY_TESTS_2='0x1234...'