From 3ec9a4fe81dfb97dfa82bb0fdb9882c2882be7f8 Mon Sep 17 00:00:00 2001
From: Patrik Nordwall
Date: Mon, 25 Nov 2024 09:17:35 +0100
Subject: [PATCH] docs: Clarify atLeastOnceFlow filter

---
 docs/src/main/paradox/dynamodb.md | 3 +++
 docs/src/main/paradox/flow.md     | 3 ++-
 docs/src/main/paradox/r2dbc.md    | 5 ++++-
 3 files changed, 9 insertions(+), 2 deletions(-)

diff --git a/docs/src/main/paradox/dynamodb.md b/docs/src/main/paradox/dynamodb.md
index f31e111f9..ae2ad556b 100644
--- a/docs/src/main/paradox/dynamodb.md
+++ b/docs/src/main/paradox/dynamodb.md
@@ -301,6 +301,9 @@ A good alternative for advanced state management is to implement the handler as
 An Akka Streams `FlowWithContext` can be used instead of a handler for processing the envelopes,
 which is described in @ref:[Processing with Akka Streams](flow.md).
 
+In addition to the caveats described there, a `DynamoDBProjection.atLeastOnceFlow` must not filter out envelopes. Always
+emit a `Done` element for each completed envelope, even if application processing was skipped for the envelope.
+
 ### Handler lifecycle
 
 You can override the `start` and `stop` methods of the @apidoc[Handler] or @apidoc[DynamoDBTransactHandler] to
diff --git a/docs/src/main/paradox/flow.md b/docs/src/main/paradox/flow.md
index 7981ff544..63bd07c81 100644
--- a/docs/src/main/paradox/flow.md
+++ b/docs/src/main/paradox/flow.md
@@ -20,7 +20,8 @@ from previously stored offset some envelopes may be processed more than once.
 There are a few caveats to be aware of:
 
 * If the flow filters out envelopes the corresponding offset will not be stored, and such an envelope
-  will be processed again if the projection is restarted and no later offset was stored.
+  will be processed again if the projection is restarted and no later offset was stored. Instead of filtering, it
+  is better to skip the processing but still emit the `Done` element.
 * The flow should not duplicate emitted envelopes (`mapConcat`) with the same offset, because then only
   the first offset may be stored, and when the projection is restarted that offset is considered completed
   even though some of the duplicated envelopes were never processed.

diff --git a/docs/src/main/paradox/r2dbc.md b/docs/src/main/paradox/r2dbc.md
index b6566e7aa..f20edc6e0 100644
--- a/docs/src/main/paradox/r2dbc.md
+++ b/docs/src/main/paradox/r2dbc.md
@@ -256,6 +256,9 @@ A good alternative for advanced state management is to implement the handler as
 An Akka Streams `FlowWithContext` can be used instead of a handler for processing the envelopes,
 which is described in @ref:[Processing with Akka Streams](flow.md).
 
+In addition to the caveats described there, an `R2dbcProjection.atLeastOnceFlow` must not filter out envelopes. Always
+emit a `Done` element for each completed envelope, even if application processing was skipped for the envelope.
+
 ### Handler lifecycle
 
 You can override the `start` and `stop` methods of the `R2dbcHandler` to implement initialization
@@ -319,4 +322,4 @@ Scala
 : @@snip [Example.scala](/akka-projection-r2dbc/src/test/scala/docs/home/projection/R2dbcProjectionDocExample.scala){#customConnectionFactory}
 
 Java
-: @@snip [Example.java](/akka-projection-r2dbc/src/test/java/jdocs/home/projection/R2dbcProjectionDocExample.java){#customConnectionFactory}
\ No newline at end of file
+: @@snip [Example.java](/akka-projection-r2dbc/src/test/java/jdocs/home/projection/R2dbcProjectionDocExample.java){#customConnectionFactory}
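
The pattern the patch documents — skip the processing for an uninteresting envelope but still emit `Done` so its offset is stored — can be sketched as a `FlowWithContext`. This is an illustrative sketch, not part of the patch; `interestingEvent` and `processEnvelope` are hypothetical application functions, and the `String` event type is an assumption.

```scala
import akka.Done
import akka.NotUsed
import akka.persistence.query.typed.EventEnvelope
import akka.projection.ProjectionContext
import akka.stream.scaladsl.FlowWithContext

import scala.concurrent.Future

// Hypothetical application logic.
def interestingEvent(envelope: EventEnvelope[String]): Boolean = ???
def processEnvelope(envelope: EventEnvelope[String]): Future[Done] = ???

// Instead of filtering out uninteresting envelopes, skip the processing
// step but still emit a Done element, so every offset gets stored.
val flow: FlowWithContext[EventEnvelope[String], ProjectionContext, Done, ProjectionContext, NotUsed] =
  FlowWithContext[EventEnvelope[String], ProjectionContext]
    .mapAsync(1) { envelope =>
      if (interestingEvent(envelope))
        processEnvelope(envelope)
      else
        Future.successful(Done) // skipped, but the offset will still be stored
    }
```

Because exactly one `Done` is emitted per envelope, the flow neither filters (which would drop offsets) nor duplicates (which could mark unprocessed envelopes as completed).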