Commit 8c2416c

docs: removed link extensions

paulobressan committed Aug 11, 2023
1 parent 2f0ceb8 commit 8c2416c

Showing 10 changed files with 28 additions and 28 deletions.
14 changes: 7 additions & 7 deletions docs/pages/v2/advanced.mdx
@@ -2,10 +2,10 @@

This section provides detailed information on some of the advanced features available in Oura:

-- [Stateful Cursor](advanced/stateful_cursor.md): provides a mechanism to persist the "position" of the processing pipeline to make it resilient to restarts.
-- [Rollback Buffer](advanced/rollback_buffer.md): provides a way to mitigate the impact of chain rollbacks in downstream stages.
-- [Pipeline Metrics](advanced/pipeline_metrics.md): allows operators to track the progress and performance of long-running Oura sessions.
-- [Mapper Options](advanced/mapper_options.md): a set of "expensive" event mapping procedures that require an explicit opt-in to be activated.
-- [Intersect Options](advanced/intersect_options.md): advanced options for instructing Oura from which point in the chain to start reading.
-- [Custom Network](advanced/custom_network.md): instructions on how to configure Oura for connecting to a custom network.
-- [Retry Policy](advanced/retry_policy.md): instructions on how to configure retry policies for different operations.
+- [Stateful Cursor](advanced/stateful_cursor): provides a mechanism to persist the "position" of the processing pipeline to make it resilient to restarts.
+- [Rollback Buffer](advanced/rollback_buffer): provides a way to mitigate the impact of chain rollbacks in downstream stages.
+- [Pipeline Metrics](advanced/pipeline_metrics): allows operators to track the progress and performance of long-running Oura sessions.
+- [Mapper Options](advanced/mapper_options): a set of "expensive" event mapping procedures that require an explicit opt-in to be activated.
+- [Intersect Options](advanced/intersect_options): advanced options for instructing Oura from which point in the chain to start reading.
+- [Custom Network](advanced/custom_network): instructions on how to configure Oura for connecting to a custom network.
+- [Retry Policy](advanced/retry_policy): instructions on how to configure retry policies for different operations.
10 changes: 5 additions & 5 deletions docs/pages/v2/filters.mdx
@@ -6,10 +6,10 @@ _Filters_ are intermediate steps in the pipeline that process events as they tra

These are the existing filters that are included as part of the main _Oura_ codebase:

-- [ParseCbor](filters/parse_cbor.mdx): a filter that maps cbor transactions to a data structure.
-- [SplitBlock](filters/split_block.mdx): a filter that decodes the cbor block and extracts each transaction as an event in the CborTx format.
-- [Deno](filters/deno.mdx): a filter that allows JS code to be implemented as a stage within the pipeline.
-- [DSL](filters/dsl.mdx): a filter that can select which events to block and which to let pass.
-- [Legacy V1](filters/legacy_v1.mdx): a filter that transforms the block data to the Oura V1 data structure.
+- [ParseCbor](filters/parse_cbor): a filter that maps cbor transactions to a data structure.
+- [SplitBlock](filters/split_block): a filter that decodes the cbor block and extracts each transaction as an event in the CborTx format.
+- [Deno](filters/deno): a filter that allows JS code to be implemented as a stage within the pipeline.
+- [DSL](filters/dsl): a filter that can select which events to block and which to let pass.
+- [Legacy V1](filters/legacy_v1): a filter that transforms the block data to the Oura V1 data structure.

New filters are being developed; this documentation will be updated to reflect the growing list. Contributions and feature requests are welcome in our [Github Repo](https://github.com/txpipe/oura).
2 changes: 1 addition & 1 deletion docs/pages/v2/filters/parse_cbor.mdx
@@ -2,7 +2,7 @@

The `parse_cbor` filter aims to map raw CBOR transactions to a structured transaction object.

-However, the filter only acts when the record received by the stage is CborTx, that is, a transaction in CBOR format that was previously extracted from a block by another stage; otherwise, parse_cbor ignores the record and passes it on to the next stage. When the record is CborTx, parse_cbor decodes the CBOR and maps it to a structure, so the next stage receives a ParsedTx record. If no filter is enabled, the stages receive records in CborBlock format, so when the parse_cbor filter is enabled in `daemon.toml`, the [split_block](split_block.mdx) filter must also be enabled for the stage to receive the CborTx format.
+However, the filter only acts when the record received by the stage is CborTx, that is, a transaction in CBOR format that was previously extracted from a block by another stage; otherwise, parse_cbor ignores the record and passes it on to the next stage. When the record is CborTx, parse_cbor decodes the CBOR and maps it to a structure, so the next stage receives a ParsedTx record. If no filter is enabled, the stages receive records in CborBlock format, so when the parse_cbor filter is enabled in `daemon.toml`, the [split_block](split_block) filter must also be enabled for the stage to receive the CborTx format.

## Configuration

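A minimal `daemon.toml` sketch of the flow described above, chaining both filters so parse_cbor receives CborTx records; the stage `type` identifiers and the peer address are illustrative assumptions, not taken from this commit:

```toml
# Hypothetical pipeline sketch: type identifiers and peer address are assumptions.
[source]
type = "N2N"
peers = ["relays-new.cardano-mainnet.iohk.io:3001"]

# split_block turns each CborBlock record into one CborTx record per transaction...
[[filters]]
type = "SplitBlock"

# ...so that parse_cbor can decode each CborTx into a ParsedTx structure.
[[filters]]
type = "ParseCbor"

[sink]
type = "Stdout"
```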
2 changes: 1 addition & 1 deletion docs/pages/v2/index.mdx
@@ -37,4 +37,4 @@ _Oura_ running in `daemon` mode can be configured to use custom filters to pinpo
If the available out-of-the-box features don't satisfy your particular use-case, _Oura_ can be used as a library in your Rust project to set up tailor-made pipelines. Each component (sources, filters, sinks, etc) in _Oura_ aims at being self-contained and reusable. For example, custom filters and sinks can be built while reusing the existing sources.

## (Experimental) Windows Support
-_Oura_ Windows support is currently __experimental__; the Windows build supports only the [Node-to-Node](./sources/n2n.md) source with the TCP socket bearer.
+_Oura_ Windows support is currently __experimental__; the Windows build supports only the [Node-to-Node](./sources/n2n) source with the TCP socket bearer.
8 changes: 4 additions & 4 deletions docs/pages/v2/installation.mdx
@@ -2,7 +2,7 @@

Depending on your needs, _Oura_ provides different installation options:

-- [Binary Release](installation/binary_release.mdx): to use one of our pre-compiled binary releases for the supported platforms.
-- [From Source](installation/from_source.mdx): to compile a binary from source code using Rust's toolchain.
-- [Docker](installation/docker.mdx): to run the tool from a pre-built docker image.
-- [Kubernetes](installation/kubernetes.mdx): to deploy _Oura_ as a resource within a Kubernetes cluster.
+- [Binary Release](installation/binary_release): to use one of our pre-compiled binary releases for the supported platforms.
+- [From Source](installation/from_source): to compile a binary from source code using Rust's toolchain.
+- [Docker](installation/docker): to run the tool from a pre-built docker image.
+- [Kubernetes](installation/kubernetes): to deploy _Oura_ as a resource within a Kubernetes cluster.
8 changes: 4 additions & 4 deletions docs/pages/v2/sinks/aws_sqs.mdx
@@ -4,9 +4,9 @@ A sink that sends each event as a message to an AWS SQS queue. Each event is jso

The sink will process each incoming event in sequence and submit the corresponding `SendMessage` request to the SQS API. Once the queue acknowledges reception of the message, the sink will advance and continue with the following event.

-The sink supports both FIFO and Standard queues; the sink configuration determines which logic to apply. In the case of FIFO, it is necessary to enable `content-based deduplication`, and the group id is determined by an explicit configuration value or `oura-sink` by default.
+The sink supports both FIFO and Standard queues; the sink configuration determines which logic to apply. In the case of FIFO, it is necessary to enable `content-based deduplication`, and the group id is determined by an explicit configuration value or `oura-sink` by default.

-Authentication against AWS is built into the SDK library and follows the common chain of providers (env vars, ~/.aws, etc).
+Authentication against AWS is built into the SDK library and follows the common chain of providers (env vars, ~/.aws, etc).

## Configuration

@@ -43,10 +43,10 @@ Oura processes messages maintaining the sequence of the blocks and respecting th

Connecting Oura with a FIFO queue would provide the consumer with the guarantee that the events received follow the same order as they appeared in the blockchain. This might be useful, for example, in scenarios where the processing of an event requires a reference of a previous state of the chain.

-Please note that rollback events might happen upstream, at the blockchain level; these need to be handled by the consumer to unwind any side-effects of processing the newly orphaned blocks. This problem can be mitigated by using Oura's [rollback buffer](../advanced/rollback_buffer.md) feature.
+Please note that rollback events might happen upstream, at the blockchain level; these need to be handled by the consumer to unwind any side-effects of processing the newly orphaned blocks. This problem can be mitigated by using Oura's [rollback buffer](../advanced/rollback_buffer) feature.

If each event can be processed in isolation, if the process is idempotent, or if the order doesn't affect the outcome, the recommended approach is to use a Standard queue, which provides "at least once" processing guarantees, relaxing the constraints and improving the overall performance.

## Payload Size Limitation

-The AWS SQS service has a 256kb payload size limit. This is more than enough for individual events, but it might be too little for pipelines where the `include_cbor_hex` option is enabled. If the goal of your pipeline is to access the raw CBOR content, we recommend taking a look at the [AWS S3 Sink](./aws_s3.md), which provides a direct way of storing CBOR blocks in an S3 bucket.
+The AWS SQS service has a 256kb payload size limit. This is more than enough for individual events, but it might be too little for pipelines where the `include_cbor_hex` option is enabled. If the goal of your pipeline is to access the raw CBOR content, we recommend taking a look at the [AWS S3 Sink](./aws_s3), which provides a direct way of storing CBOR blocks in an S3 bucket.
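To ground the FIFO discussion, here is a sketch of what the sink block in `daemon.toml` might look like; the `type` identifier and field names are assumptions rather than something this commit confirms:

```toml
# Hypothetical FIFO setup: type identifier and field names are assumptions.
[sink]
type = "AwsSqs"
region = "us-west-2"
queue_url = "https://sqs.us-west-2.amazonaws.com/123456789012/my-queue.fifo"
# All messages share one group id, preserving strict ordering; "oura-sink" is the documented default.
group_id = "oura-sink"
```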
2 changes: 1 addition & 1 deletion docs/pages/v2/sinks/elasticsearch.mdx
@@ -37,6 +37,6 @@ We currently only implement _basic_ auth, other mechanisms will be implemented a

In services and API calls, _idempotency_ refers to a property of the system where the execution of multiple "equivalent" requests has the same effect as a single request. In other words, "idempotent" calls can be triggered multiple times without problem.

-In our Elasticsearch sink, when the `idempotency` flag is enabled, each document sent to the index will specify a particular content-based ID: the [fingerprint](../filters/fingerprint.md) of the event. If Oura restarts without having a cursor, or if the same block is processed for any reason, repeated events will present the same ID; Elasticsearch will reject them and Oura will continue with the following event. This mechanism provides a strong guarantee that our index won't contain duplicate data.
+In our Elasticsearch sink, when the `idempotency` flag is enabled, each document sent to the index will specify a particular content-based ID: the [fingerprint](../filters/fingerprint) of the event. If Oura restarts without having a cursor, or if the same block is processed for any reason, repeated events will present the same ID; Elasticsearch will reject them and Oura will continue with the following event. This mechanism provides a strong guarantee that our index won't contain duplicate data.

If the flag is disabled, each document will be generated using a random ID, ensuring that it will be indexed regardless.
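For orientation, enabling the flag in `daemon.toml` could look like the following sketch; the `type` identifier and the other field names are assumptions:

```toml
# Hypothetical Elasticsearch sink: type and field names are assumptions.
[sink]
type = "ElasticSearch"
url = "https://localhost:9200"
index = "oura-events"
# Content-based document IDs (event fingerprints) let Elasticsearch reject duplicates.
idempotency = true
```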
6 changes: 3 additions & 3 deletions docs/pages/v2/sources.mdx
@@ -6,8 +6,8 @@ Sources represent the "origin" of the events processed by _Oura_. Any compatible

These are the currently available sources included as part of the main _Oura_ codebase:

-- [N2N](sources/n2n.mdx): an Ouroboros agent that connects to a Cardano node using node-2-node protocols.
-- [N2C](sources/n2c.mdx): an Ouroboros agent that connects to a Cardano node using node-2-client protocols.
-- [UtxoRPC](sources/utxorpc.mdx): a source that uses gRPC to fetch and receive blocks from a node such as Dolos.
+- [N2N](sources/n2n): an Ouroboros agent that connects to a Cardano node using node-2-node protocols.
+- [N2C](sources/n2c): an Ouroboros agent that connects to a Cardano node using node-2-client protocols.
+- [UtxoRPC](sources/utxorpc): a source that uses gRPC to fetch and receive blocks from a node such as Dolos.

New sources are being developed; this documentation will be updated to reflect the growing list. Contributions and feature requests are welcome in our [Github Repo](https://github.com/txpipe/oura).
2 changes: 1 addition & 1 deletion docs/pages/v2/usage.mdx
@@ -2,4 +2,4 @@

_Oura_ provides one execution mode:

-- [Daemon](usage/daemon.mdx): a fully-configurable pipeline that runs in the background. Sources, filters and sinks can be combined to fulfil particular use-cases.
+- [Daemon](usage/daemon): a fully-configurable pipeline that runs in the background. Sources, filters and sinks can be combined to fulfil particular use-cases.
2 changes: 1 addition & 1 deletion docs/pages/v2/usage/daemon.mdx
@@ -64,7 +64,7 @@ This section specifies the origin of the data. The special `type` field must alw

### The `intersect` section

-Advanced options for instructing Oura from which point in the chain to start reading. You can read more in [intersect advanced](../advanced/intersect_options.mdx). A sketch follows below.
+Advanced options for instructing Oura from which point in the chain to start reading. You can read more in [intersect advanced](../advanced/intersect_options). A sketch follows below.
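For context, an intersect block might look like the following sketch, assuming variants along the lines of `Origin`, `Tip` and `Point` as described on the intersect options page; the slot/hash pair shown is the commonly cited Byron-to-Shelley boundary point, used here purely as a placeholder:

```toml
# Hypothetical starting point: variant name and values are placeholders, not from this commit.
[intersect]
type = "Point"
value = [4492799, "f8084c61b6a238acec985b59310b6ecec49c0ab8352249afd7268da5cff2a457"]
```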

### The `filters` section

