diff --git a/CODE_OF_CONDUCT.md b/CODE_OF_CONDUCT.md
index f8076bc629..b9c30c3292 100644
--- a/CODE_OF_CONDUCT.md
+++ b/CODE_OF_CONDUCT.md
@@ -1,49 +1 @@
-# Contributor Covenant Code of Conduct
-
-## Our Pledge
-
-In the interest of fostering an open and welcoming environment, we as contributors and maintainers pledge to making participation in our project and our community a harassment-free experience for everyone, regardless of age, body size, disability, ethnicity, sex characteristics, gender identity and expression, level of experience education, socio-economic status, nationality, personal appearance, race, religion, or sexual identity and orientation.
-
-## Our Standards
-
-Examples of behavior that contributes to creating a positive environment
-include:
-
-* Using welcoming and inclusive language
-* Being respectful of differing viewpoints and experiences
-* Gracefully accepting constructive criticism
-* Focusing on what is best for the community
-* Showing empathy towards other community members
-
-Examples of unacceptable behavior by participants include:
-
-* The use of sexualized language or imagery and unwelcome sexual attention or advances
-* Trolling, insulting/derogatory comments, and personal or political attacks
-* Public or private harassment
-* Publishing others' private information, such as a physical or electronic address, without explicit permission
-* Other conduct which could reasonably be considered inappropriate in a professional setting
-
-## Our Responsibilities
-
-Project maintainers are responsible for clarifying the standards of acceptable behavior and are expected to take appropriate and fair corrective action in response to any instances of unacceptable behavior.
-
-Project maintainers have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, or to ban temporarily or permanently any contributor for other behaviors that they deem inappropriate, threatening, offensive, or harmful.
-
-## Scope
-
-This Code of Conduct applies both within project spaces and in public spaces when an individual is representing the project or its community. Examples of representing a project or community include using an official project e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event. Representation of a project may be further defined and clarified by project maintainers.
-
-## Enforcement
-
-Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting the project team at netflixoss@netflix.com. All complaints will be reviewed and investigated and will result in a response that is deemed necessary and appropriate to the circumstances. The project team is obligated to maintain confidentiality with regard to the reporter of an incident. Further details of specific enforcement policies may be posted separately.
-
-Project maintainers who do not follow or enforce the Code of Conduct in good faith may face temporary or permanent repercussions as determined by other members of the project's leadership.
-
-## Attribution
-
-This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4, available at https://www.contributor-covenant.org/version/1/4/code-of-conduct.html
-
-[homepage]: https://www.contributor-covenant.org
-
-For answers to common questions about this code of conduct, see
-https://www.contributor-covenant.org/faq
+[Code of Conduct](docs/docs/resources/code-of-conduct.md)
\ No newline at end of file
diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
index 2fb6797762..925a60572c 100644
--- a/CONTRIBUTING.md
+++ b/CONTRIBUTING.md
@@ -1,72 +1 @@
-Thanks for your interest in Conductor!
-This guide helps to find the most efficient way to contribute, ask questions, and report issues.
-
-Code of conduct
------
-
-Please review our [code of conduct](CODE_OF_CONDUCT.md).
-
-I have a question!
------
-
-We have a dedicated [discussion forum](https://github.com/Netflix/conductor/discussions) for asking "how to" questions and to discuss ideas. The discussion forum is a great place to start if you're considering creating a feature request or work on a Pull Request.
-*Please do not create issues to ask questions.*
-
-I want to contribute!
-------
-
-We welcome Pull Requests and already had many outstanding community contributions!
-Creating and reviewing Pull Requests take considerable time. This section helps you set up for a smooth Pull Request experience.
-
-The stable branch is [main](https://github.com/Netflix/conductor/tree/main).
-
-Please create pull requests for your contributions against [main](https://github.com/Netflix/conductor/tree/main) only.
-
-It's a great idea to discuss the new feature you're considering on the [discussion forum](https://github.com/Netflix/conductor/discussions) before writing any code. There are often different ways you can implement a feature. Getting some discussion about different options helps shape the best solution. When starting directly with a Pull Request, there is the risk of having to make considerable changes. Sometimes that is the best approach, though! Showing an idea with code can be very helpful; be aware that it might be throw-away work. Some of our best Pull Requests came out of multiple competing implementations, which helped shape it to perfection.
-
-Also, consider that not every feature is a good fit for Conductor. A few things to consider are:
-
-* Is it increasing complexity for the user, or might it be confusing?
-* Does it, in any way, break backward compatibility (this is seldom acceptable)
-* Does it require new dependencies (this is rarely acceptable for core modules)
-* Should the feature be opt-in or enabled by default. For integration with a new Queuing recipe or persistence module, a separate module which can be optionally enabled is the right choice.
-* Should the feature be implemented in the main Conductor repository, or would it be better to set up a separate repository? Especially for integration with other systems, a separate repository is often the right choice because the life-cycle of it will be different.
-
-Of course, for more minor bug fixes and improvements, the process can be more light-weight.
-
-We'll try to be responsive to Pull Requests. Do keep in mind that because of the inherently distributed nature of open source projects, responses to a PR might take some time because of time zones, weekends, and other things we may be working on.
-
-I want to report an issue
------
-
-If you found a bug, it is much appreciated if you create an issue. Please include clear instructions on how to reproduce the issue, or even better, include a test case on a branch. Make sure to come up with a descriptive title for the issue because this helps while organizing issues.
-
-I have a great idea for a new feature
-----
-Many features in Conductor have come from ideas from the community. If you think something is missing or certain use cases could be supported better, let us know! You can do so by opening a discussion on the [discussion forum](https://github.com/Netflix/conductor/discussions). Provide as much relevant context to why and when the feature would be helpful. Providing context is especially important for "Support XYZ" issues since we might not be familiar with what "XYZ" is and why it's useful. If you have an idea of how to implement the feature, include that as well.
-
-Once we have decided on a direction, it's time to summarize the idea by creating a new issue.
-
-## Code Style
-We use [spotless](https://github.com/diffplug/spotless) to enforce consistent code style for the project, so make sure to run `gradlew spotlessApply` to fix any violations after code changes.
-
-## License
-
-By contributing your code, you agree to license your contribution under the terms of the APLv2: https://github.com/Netflix/conductor/blob/master/LICENSE
-
-All files are released with the Apache 2.0 license, and the following license header will be automatically added to your new file if none present:
-
-```
-/**
- * Copyright $YEAR Netflix, Inc.
- *
- * Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
- * the License. You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
- * an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
- * specific language governing permissions and limitations under the License.
- */
-```
+[Contributing Guide](docs/docs/resources/contributing.md)
\ No newline at end of file
diff --git a/RELATED.md b/RELATED.md
index 6e433a056a..b7adabea3f 100644
--- a/RELATED.md
+++ b/RELATED.md
@@ -1,74 +1 @@
-# Community projects related to Conductor
-
-## Client SDKs
-
-Further, all of the (non-Java) SDKs have a new GitHub home: the Conductor SDK repository is your new source for Conductor SDKs:
-
-* [Golang](https://github.com/conductor-sdk/conductor-go)
-* [Python](https://github.com/conductor-sdk/conductor-python)
-* [C#](https://github.com/conductor-sdk/conductor-csharp)
-* [Clojure](https://github.com/conductor-sdk/conductor-clojure)
-
-All contributions on the above client sdks can be made on [Conductor SDK](https://github.com/conductor-sdk) repository.
-
-## Microservices operations
-
-* https://github.com/flaviostutz/schellar - Schellar is a scheduler tool for instantiating Conductor workflows from time to time, mostly like a cron job, but with transport of input/output variables between calls.
-
-* https://github.com/flaviostutz/backtor - Backtor is a backup scheduler tool that uses Conductor workers to handle backup operations and decide when to expire backups (ex.: keep backup 3 days, 2 weeks, 2 months, 1 semester)
-
-* https://github.com/cquon/conductor-tools - Conductor CLI for launching workflows, polling tasks, listing running tasks etc
-
-
-## Conductor deployment
-
-* https://github.com/flaviostutz/conductor-server - Docker container for running Conductor with Prometheus metrics plugin installed and some tweaks to ease provisioning of workflows from json files embedded to the container
-
-* https://github.com/flaviostutz/conductor-ui - Docker container for running Conductor UI so that you can easily scale UI independently
-
-* https://github.com/flaviostutz/elasticblast - "Elasticsearch to Bleve" bridge tailored for running Conductor on top of Bleve indexer. The footprint of Elasticsearch may cost too much for small deployments on Cloud environment.
-
-* https://github.com/mohelsaka/conductor-prometheus-metrics - Conductor plugin for exposing Prometheus metrics over path '/metrics'
-
-## OAuth2.0 Security Configuration
-Forked Repository - [Conductor (Secure)](https://github.com/maheshyaddanapudi/conductor/tree/oauth2)
-
-[OAuth2.0 Role Based Security!](https://github.com/maheshyaddanapudi/conductor/blob/oauth2/SECURITY.md) - Spring Security with easy configuration to secure the Conductor server APIs.
-
-Docker image published to [Docker Hub](https://hub.docker.com/repository/docker/conductorboot/server)
-
-## Conductor Worker utilities
-
-* https://github.com/ggrcha/conductor-go-client - Conductor Golang client for writing Workers in Golang
-
-* https://github.com/courosh12/conductor-dotnet-client - Conductor DOTNET client for writing Workers in DOTNET
- * https://github.com/TwoUnderscorez/serilog-sinks-conductor-task-log - Serilog sink for sending worker log events to Netflix Conductor
-
-* https://github.com/davidwadden/conductor-workers - Various ready made Conductor workers for common operations on some platforms (ex.: Jira, Github, Concourse)
-
-## Conductor Web UI
-
-* https://github.com/maheshyaddanapudi/conductor-ng-ui - Angular based - Conductor Workflow Management UI
-
-## Conductor Persistence
-
-### Mongo Persistence
-
-* https://github.com/maheshyaddanapudi/conductor/tree/mongo_persistence - With option to use Mongo Database as persistence unit.
- * Mongo Persistence / Option to use Mongo Database as persistence unit.
- * Docker Compose example with MongoDB Container.
-
-### Oracle Persistence
-
-* https://github.com/maheshyaddanapudi/conductor/tree/oracle_persistence - With option to use Oracle Database as persistence unit.
- * Oracle Persistence / Option to use Oracle Database as persistence unit : version > 12.2 - Tested well with 19C
- * Docker Compose example with Oracle Container.
-
-## Schedule Conductor Workflow
-* https://github.com/jas34/scheduledwf - It solves the following problem statements:
- * At times there are use cases in which we need to run some tasks/jobs only at a scheduled time.
- * In microservice architecture maintaining schedulers in various microservices is a pain.
- * We should have a central dedicate service that can do scheduling for us and provide a trigger to a microservices at expected time.
-* It offers an additional module `io.github.jas34.scheduledwf.config.ScheduledWfServerModule` built on the existing core
-of conductor and does not require deployment of any additional service.
-For more details refer: [Schedule Conductor Workflows](https://jas34.github.io/scheduledwf) and [Capability In Conductor To Schedule Workflows](https://github.com/Netflix/conductor/discussions/2256)
+[Related Projects](docs/docs/resources/related.md)
diff --git a/docs/custom_theme/main.html b/docs/custom_theme/main.html
new file mode 100644
index 0000000000..26f55a0f51
--- /dev/null
+++ b/docs/custom_theme/main.html
@@ -0,0 +1,7 @@
+{% extends "base.html" %}
+
+
+ {%- block site_name %}
+
+
+ {%- endblock %}
\ No newline at end of file
diff --git a/docs/docs/apispec.md b/docs/docs/apispec.md
index c00142043d..94df57fcaf 100644
--- a/docs/docs/apispec.md
+++ b/docs/docs/apispec.md
@@ -1,21 +1,23 @@
+# API Specification
+
## Task & Workflow Metadata
| Endpoint | Description | Input |
|------------------------------------------|:---------------------------------|-------------------------------------------------------------|
| `GET /metadata/taskdefs` | Get all the task definitions | n/a |
| `GET /metadata/taskdefs/{taskType}` | Retrieve task definition | Task Name |
-| `POST /metadata/taskdefs` | Register new task definitions | List of [Task Definitions](../configuration/taskdef) |
-| `PUT /metadata/taskdefs` | Update a task definition | A [Task Definition](../configuration/taskdef) |
+| `POST /metadata/taskdefs` | Register new task definitions | List of [Task Definitions](/configuration/taskdef.html) |
+| `PUT /metadata/taskdefs` | Update a task definition | A [Task Definition](/configuration/taskdef.html) |
| `DELETE /metadata/taskdefs/{taskType}` | Delete a task definition | Task Name |
|||
| `GET /metadata/workflow` | Get all the workflow definitions | n/a |
-| `POST /metadata/workflow` | Register new workflow | [Workflow Definition](../configuration/workflowdef) |
-| `PUT /metadata/workflow` | Register/Update new workflows | List of [Workflow Definition](../configuration/workflowdef) |
+| `POST /metadata/workflow` | Register new workflow | [Workflow Definition](/configuration/workflowdef.html) |
+| `PUT /metadata/workflow` | Register/Update new workflows | List of [Workflow Definition](/configuration/workflowdef.html) |
| `GET /metadata/workflow/{name}?version=` | Get the workflow definitions | workflow name, version (optional) |
|||
## Start A Workflow
### With Input only
-See [Start Workflow Request](../gettingstarted/startworkflow/#start-workflow-request).
+See [Start Workflow Request](/gettingstarted/startworkflow.html).
#### Output
Id of the workflow (GUID)
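+
+For example, a minimal sketch of starting a workflow with `curl` (the base URL, workflow name, and input payload are illustrative; adjust them for your deployment):
+
+```shell
+# Start a workflow; the response body is the new workflow's id (GUID)
+curl -X POST "http://localhost:8080/workflow/encode_and_deploy" \
+  -H "Content-Type: application/json" \
+  -d '{ "fileLocation": "s3://example-bucket/file.mp4" }'
+```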
@@ -142,8 +144,9 @@ Optionally updating task's input and output as specified in the payload.
| `GET /tasks/queue/sizes?taskType=&taskType=&taskType` | Return the size of pending tasks for given task types |
|||
-## Polling and Update Task
-These are critical endpoints used to poll for task and updating the task result by worker.
+## Polling, Ack and Update Task
+These are critical endpoints. Workers use them to poll for tasks, acknowledge a task after polling it, and finally update the task result.
+
| Endpoint | Description |
|---------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
@@ -151,6 +154,7 @@ These are critical endpoints used to poll for task and updating the task result
| `GET /tasks/poll/batch/{taskType}?count=&timeout=&workerid=&domain` | Poll for a task in a batch specified by `count`. This is a long poll and the connection will wait until `timeout` or if there is at-least 1 item available, whichever comes first.`workerid` identifies the worker that polled for the job and `domain` allows the poller to poll for a task in a specific domain |
| `POST /tasks` | Update the result of task execution. See the schema below. |
+
### Schema for updating Task Result
```json
{
@@ -165,3 +169,5 @@ These are critical endpoints used to poll for task and updating the task result
}
```
+!!!Info "Acknowledging tasks after poll"
+ If the worker fails to ack the task after polling, the task is re-queued and put back in queue and is made available during subsequent poll.
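+
+A minimal sketch of the poll-then-update cycle with `curl` (the base URL, task type, worker id, and output fields are illustrative):
+
+```shell
+# Poll for a single task of type "encode_task" as worker "worker-1"
+curl "http://localhost:8080/tasks/poll/encode_task?workerid=worker-1"
+
+# Report the result; taskId and workflowInstanceId come from the poll response
+curl -X POST "http://localhost:8080/tasks" \
+  -H "Content-Type: application/json" \
+  -d '{
+        "workflowInstanceId": "<workflowId from poll>",
+        "taskId": "<taskId from poll>",
+        "status": "COMPLETED",
+        "outputData": { "encodeResult": "success" }
+      }'
+```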
diff --git a/docs/docs/architecture.md b/docs/docs/architecture.md
deleted file mode 100644
index 3440557681..0000000000
--- a/docs/docs/architecture.md
+++ /dev/null
@@ -1,136 +0,0 @@
-## High Level Architecture
-
-![Architecture diagram](img/conductor-architecture.png)
-
-The API and storage layers are pluggable and provide ability to work with different backends and queue service providers.
-
-## Installing and Running
-
-!!! hint "Running in production"
- For a detailed configuration guide on installing and running Conductor server in production visit [Conductor Server](../server) documentation.
-
-### Running In-Memory Server
-
-Follow the steps below to quickly bring up a local Conductor instance backed by an in-memory database with a simple kitchen sink workflow that demonstrate all the capabilities of Conductor.
-
-!!!warning:
- In-Memory server is meant for a quick demonstration purposes and does not store any data on disk. All the data is lost once the server dies.
-
-#### Checkout the source from GitHub
-
-```
-git clone git@github.com:Netflix/conductor.git
-```
-
-#### Start Local Server
-
-
-> **NOTE for Mac users**: If you are using a new Mac with an Apple Silicon Chip, you must make a small change to ```conductor/grpc/build.gradle``` - adding "osx-x86_64" to two lines:
-```
-protobuf {
- protoc {
- artifact = "com.google.protobuf:protoc:${revProtoBuf}:osx-x86_64"
- }
- plugins {
- grpc {
- artifact = "io.grpc:protoc-gen-grpc-java:${revGrpc}:osx-x86_64"
- }
- }
-...
-}
-```
-
-
-
-The server is in the directory `conductor/server`. To start it, execute the following command in the root of the project.
-
-```shell
-./gradlew bootRun
-# wait for the server to come online
-```
-Swagger APIs can be accessed at [http://localhost:8080/swagger-ui.html](http://localhost:8080/swagger-ui.html)
-
-#### Start UI Server
-
-The UI Server is in the directory `conductor/ui`.
-
-To run it you need to have [Node](https://nodejs.org) 14 (or greater) and [Yarn](https://yarnpkg.com/) installed.
-
-In a terminal other than the one running the Conductor server:
-
-```shell
-cd ui
-yarn install
-yarn run start
-```
-
-If you get an error message `ReferenceError: primordials is not defined`, you need to use an earlier version of Node (pre-12). See [this issue for more details](https://github.com/Netflix/conductor/issues/1232).
-
-#### Or Start all the services using [docker-compose](https://github.com/Netflix/conductor/blob/master/docker/docker-compose.yaml)
-- Using compose (with Dynomite):
- ```shell
- docker-compose -f docker-compose.yaml -f docker-compose-dynomite.yaml up
- ```
-- Using compose (with Postgres):
- ```shell
- docker-compose -f docker-compose.yaml -f docker-compose-postgres.yaml up
- ```
-
-Assuming that you started Conductor locally (directly, or with Docker), launch the UI at [http://localhost:5000/](http://localhost:5000/).
-
-!!! Note
- The server will load a sample kitchensink workflow definition by default. See [here](../labs/kitchensink) for details.
-
-## Runtime Model
-Conductor follows RPC based communication model where workers are running on a separate machine from the server. Workers communicate with server over HTTP based endpoints and employs polling model for managing work queues.
-
-![Runtime Model of Conductor](img/overview.png)
-
-**Notes**
-
-* Workers are remote systems that communicate over HTTP with the conductor servers.
-* Task Queues are used to schedule tasks for workers. We use [dyno-queues][1] internally but it can easily be swapped with SQS or similar pub-sub mechanism.
-* conductor-redis-persistence module uses [Dynomite][2] for storing the state and metadata along with [Elasticsearch][3] for indexing backend.
-* See section under extending backend for implementing support for different databases for storage and indexing.
-
-[1]: https://github.com/Netflix/dyno-queues
-[2]: https://github.com/Netflix/dynomite
-[3]: https://www.elastic.co
-
-## High Level Steps
-**Steps required for a new workflow to be registered and get executed:**
-
-1. Define task definitions used by the workflow.
-2. Create the workflow definition
-3. Create task worker(s) that polls for scheduled tasks at regular interval
-
-The [Beginner lab](../labs/beginner) has a good walk-through of steps 1-3.
-
-**Trigger Workflow Execution**
-
-```
-POST /workflow/{name}
-{
- ... //json payload as workflow input
-}
-```
-
-**Polling for a task**
-
-```
-GET /tasks/poll/batch/{taskType}
-```
-
-**Update task status**
-
-```
-POST /tasks
-{
- "outputData": {
- "encodeResult":"success",
- "location": "http://cdn.example.com/file/location.png"
- //any task specific output
- },
- "status": "COMPLETED"
-}
-```
diff --git a/docs/docs/architecture/overview.md b/docs/docs/architecture/overview.md
new file mode 100644
index 0000000000..81bd00e38b
--- /dev/null
+++ b/docs/docs/architecture/overview.md
@@ -0,0 +1,21 @@
+# Overview
+
+![Architecture diagram](/img/conductor-architecture.png)
+
+The API and storage layers are pluggable and provide the ability to work with different backends and queue service providers.
+
+## Runtime Model
+Conductor follows an RPC-based communication model where workers run on machines separate from the server. Workers communicate with the server over HTTP-based endpoints and employ a polling model for managing work queues.
+
+![Runtime Model of Conductor](/img/overview.png)
+
+**Notes**
+
+* Workers are remote systems that communicate over HTTP with the Conductor servers.
+* Task Queues are used to schedule tasks for workers. We use [dyno-queues][1] internally, but it can easily be swapped with SQS or a similar pub-sub mechanism.
+* The conductor-redis-persistence module uses [Dynomite][2] for storing state and metadata, along with [Elasticsearch][3] as the indexing backend.
+* See the section on extending the backend for implementing support for different storage and indexing databases.
+
+[1]: https://github.com/Netflix/dyno-queues
+[2]: https://github.com/Netflix/dynomite
+[3]: https://www.elastic.co
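+
+As a sketch of this polling model, a worker can be as simple as a loop that polls for a task, does the work, and posts the result back. The base URL and task type below are assumptions, and `jq` is assumed to be installed:
+
+```shell
+# Minimal illustrative worker loop: poll, "execute", report
+while true; do
+  TASK=$(curl -s "http://localhost:8080/tasks/poll/encode_task?workerid=worker-1")
+  if [ -n "$TASK" ] && [ "$TASK" != "null" ]; then
+    TASK_ID=$(echo "$TASK" | jq -r '.taskId')
+    WORKFLOW_ID=$(echo "$TASK" | jq -r '.workflowInstanceId')
+    # ... perform the actual task work here ...
+    curl -s -X POST "http://localhost:8080/tasks" \
+      -H "Content-Type: application/json" \
+      -d "{\"taskId\":\"$TASK_ID\",\"workflowInstanceId\":\"$WORKFLOW_ID\",\"status\":\"COMPLETED\",\"outputData\":{}}"
+  fi
+  sleep 1
+done
+```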
diff --git a/docs/docs/tasklifecycle.md b/docs/docs/architecture/tasklifecycle.md
similarity index 94%
rename from docs/docs/tasklifecycle.md
rename to docs/docs/architecture/tasklifecycle.md
index 699418052d..57f99512e3 100644
--- a/docs/docs/tasklifecycle.md
+++ b/docs/docs/architecture/tasklifecycle.md
@@ -1,14 +1,14 @@
## Task state transitions
The figure below depicts the state transitions that a task can go through within a workflow execution.
-![Task_States](img/task_states.png)
+![Task_States](/img/task_states.png)
## Retries and Failure Scenarios
### Task failure and retries
Retries for failed task executions of each task can be configured independently. retryCount, retryDelaySeconds and retryLogic can be used to configure the retry mechanism.
-![Task Failure](img/TaskFailure.png)
+![Task Failure](/img/TaskFailure.png)
1. Worker (W1) polls for task T1 from the Conductor server and receives the task.
2. Upon processing this task, the worker determines that the task execution is a failure and reports this to the server with FAILED status after 10 seconds.
@@ -18,7 +18,7 @@ Retries for failed task executions of each task can be configured independently.
### Timeout seconds
Timeout is the maximum amount of time that the task must reach a terminal state in, else the task will be marked as TIMED_OUT.
-![Task Timeout](img/TimeoutSeconds.png)
+![Task Timeout](/img/TimeoutSeconds.png)
**0 seconds** -> Worker polls for task T1 from the Conductor server and receives the task. T1 is put into IN_PROGRESS status by the server.
Worker starts processing the task but is unable to process the task at this time. Worker updates the server with T1 set to IN_PROGRESS status and a callback of 9 seconds.
@@ -36,7 +36,7 @@ Server puts T1 back in the queue but makes it invisible and the worker continues
### Response timeout seconds
Response timeout is the time within which the worker must respond to the server with an update for the task, else the task will be marked as TIMED_OUT.
-![Response Timeout](img/ResponseTimeoutSeconds.png)
+![Response Timeout](/img/ResponseTimeoutSeconds.png)
**0 seconds** -> Worker polls for the task T1 from the Conductor server and receives the task. T1 is put into IN_PROGRESS status by the server.
diff --git a/docs/docs/configuration/eventhandlers.md b/docs/docs/configuration/eventhandlers.md
index e86bffcea8..8417d67c09 100644
--- a/docs/docs/configuration/eventhandlers.md
+++ b/docs/docs/configuration/eventhandlers.md
@@ -1,4 +1,4 @@
-## Introduction
+# Event Handlers
Eventing in Conductor provides for loose coupling between workflows and support for producing and consuming events from external systems.
This includes:
@@ -11,7 +11,7 @@ Conductor provides SUB_WORKFLOW task that can be used to embed a workflow inside
## Event Task
Event task provides ability to publish an event (message) to either Conductor or an external eventing system like SQS. Event tasks are useful for creating event based dependencies for workflows and tasks.
-See [Event Task](../../reference-docs/event-task) for documentation.
+See [Event Task](/reference-docs/event-task.html) for documentation.
## Event Handler
Event handlers are listeners registered that executes an action when a matching event occurs. The supported actions are:
@@ -108,7 +108,7 @@ Given the following payload in the message:
"expandInlineJSON": true
}
```
-Input for starting a workflow and output when completing / failing task follows the same [expressions](/configuration/workflowdef/#wiring-inputs-and-outputs) used for wiring workflow inputs.
+Input for starting a workflow and output when completing / failing task follows the same [expressions](/configuration/workflowdef.html#wiring-inputs-and-outputs) used for wiring workflow inputs.
!!!info "Expanding stringified JSON elements in payload"
`expandInlineJSON` property, when set to true will expand the inlined stringified JSON elements in the payload to JSON documents and replace the string value with JSON document.
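+
+As a hedged sketch, registering an event handler that starts a workflow when a matching message arrives might look like the following (the queue name, workflow name, and input expression are illustrative):
+
+```shell
+# Register an event handler via the event handler API
+curl -X POST "http://localhost:8080/event" \
+  -H "Content-Type: application/json" \
+  -d '{
+        "name": "on_file_uploaded",
+        "event": "sqs:file_upload_queue",
+        "active": true,
+        "actions": [
+          {
+            "action": "start_workflow",
+            "start_workflow": {
+              "name": "encode_and_deploy",
+              "input": { "fileLocation": "${fileLocation}" }
+            }
+          }
+        ]
+      }'
+```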
diff --git a/docs/docs/configuration/isolationgroups.md b/docs/docs/configuration/isolationgroups.md
index 8aecdac225..1d2320261b 100644
--- a/docs/docs/configuration/isolationgroups.md
+++ b/docs/docs/configuration/isolationgroups.md
@@ -1,15 +1,16 @@
-#### Isolation Group Id
+# Isolation Groups
Consider an HTTP task where the latency of an API is high; the task queue piles up, affecting the execution of other HTTP tasks which have low latency.
We can isolate the execution of such tasks to have predictable performance using `isolationgroupId`, a property of task definition.
-When we set isolationGroupId, the executor(SystemTaskWorkerCoordinator) will allocate an isolated queue and an isolated thread pool for execution of those tasks.
+When we set isolationGroupId, the executor `SystemTaskWorkerCoordinator` will allocate an isolated queue and an isolated thread pool for execution of those tasks.
If no `isolationgroupId` is specified in task definition, then fallback is default behaviour where the executor executes the task in shared thread-pool for all tasks.
-Example taskdef
+## Example
+**Task Definition**
```json
{
"name": "encode_task",
@@ -35,7 +36,7 @@ Example taskdef
"isolationgroupId": "myIsolationGroupId"
}
```
-Example Workflow task
+**Workflow Definition**
```json
{
"name": "encode_and_deploy",
@@ -73,7 +74,7 @@ The property `workflow.isolated.system.task.worker.thread.count` sets the threa
isolationGroupId is currently supported only in HTTP and kafka Task.
-#### Execution Name Space
+### Execution Name Space
`executionNameSpace` A property of taskdef can be used to provide JVM isolation to task execution and scale executor deployments horizontally.
@@ -112,10 +113,7 @@ If the property is not set, the executor executes tasks without any executionNam
}
```
-
-
-
-Example Workflow task
+#### Example Workflow task
```json
{
diff --git a/docs/docs/configuration/sysoperator.md b/docs/docs/configuration/sysoperator.md
index d85bfb2111..181d72fd3d 100644
--- a/docs/docs/configuration/sysoperator.md
+++ b/docs/docs/configuration/sysoperator.md
@@ -1,4 +1,4 @@
-# Operators
+# System Operators
Operators are built-in primitives in Conductor that allow you to define the control flow in the workflow.
Operators are similar to programming constructs such as for loops, decisions, etc.
@@ -9,12 +9,12 @@ Conductor supports the following programming language constructs:
| Language Construct | Conductor Operator |
|----------------------------------|-------------------------------------------------------------|
-| Do/While or Loops | [Do While Task](../../reference-docs/do-while-task) |
-| Dynamic Fork | [Dynamic Fork Task](../../reference-docs/dynamic-fork-task) |
-| Fork / Parallel execution | [Fork Task](../../reference-docs/fork-task) |
-| Join | [Join Task](../../reference-docs/join-task) |
-| Sub Process / Sub-Flow | [Sub Workflow Task](../../reference-docs/sub-workflow-task) |
-| Switch//Decision/if..then...else | [Switch Task](../../reference-docs/switch-task) |
-| Terminate | [Terminate Task](../../reference-docs/terminate-task) |
-| Variables | [Variable Task](../../reference-docs/set-variable-task) |
-| Wait | [Wait Task](../../reference-docs/wait-task) |
+| Do/While or Loops | [Do While Task](/reference-docs/do-while-task.html) |
+| Dynamic Fork | [Dynamic Fork Task](/reference-docs/dynamic-fork-task.html) |
+| Fork / Parallel execution | [Fork Task](/reference-docs/fork-task.html) |
+| Join | [Join Task](/reference-docs/join-task.html) |
+| Sub Process / Sub-Flow | [Sub Workflow Task](/reference-docs/sub-workflow-task.html) |
+| Switch//Decision/if..then...else | [Switch Task](/reference-docs/switch-task.html) |
+| Terminate | [Terminate Task](/reference-docs/terminate-task.html) |
+| Variables | [Variable Task](/reference-docs/set-variable-task.html) |
+| Wait | [Wait Task](/reference-docs/wait-task.html) |
diff --git a/docs/docs/configuration/systask.md b/docs/docs/configuration/systask.md
index 5a155be713..4b2f964a63 100644
--- a/docs/docs/configuration/systask.md
+++ b/docs/docs/configuration/systask.md
@@ -1,8 +1,4 @@
----
-sidebar_position: 1
----
-
-# System Task
+# System Tasks
System Tasks (Workers) are built-in tasks that are general purpose and re-usable. They run on the Conductor servers.
Such tasks allow you to get started without having to write custom workers.
@@ -11,14 +7,13 @@ Such tasks allow you to get started without having to write custom workers.
Conductor has the following set of system tasks available.
-
| Task | Description | Use Case |
|-----------------------|--------------------------------------------------------|------------------------------------------------------------------------------------|
-| Event Publishing | [Event Task](../../reference-docs/event-task) | External eventing system integration. e.g. amqp, sqs, nats |
-| HTTP | [HTTP Task](../../reference-docs/http-task) | Invoke any HTTP(S) endpoints |
-| Inline Code Execution | [Inline Task](../../reference-docs/inline-task) | Execute arbitrary lightweight javascript code |
-| JQ Transform | [JQ Task](../../reference-docs/json-jq-transform-task) | Use JQ to transform task input/output |
-| Kafka Publish | [Kafka Task](../../reference-docs/kafka-publish-task) | Publish messages to Kafka |
+| Event Publishing | [Event Task](/reference-docs/event-task.html) | External eventing system integration. e.g. amqp, sqs, nats |
+| HTTP | [HTTP Task](/reference-docs/http-task.html) | Invoke any HTTP(S) endpoints |
+| Inline Code Execution | [Inline Task](/reference-docs/inline-task.html) | Execute arbitrary lightweight javascript code |
+| JQ Transform | [JQ Task](/reference-docs/json-jq-transform-task.html) | Use JQ to transform task input/output |
+| Kafka Publish | [Kafka Task](/reference-docs/kafka-publish-task.html) | Publish messages to Kafka |
| Name | Description |
|--------------------------|-------------------------------------------------------------------------------------------------------------------------------------------|
diff --git a/docs/docs/configuration/taskdef.md b/docs/docs/configuration/taskdef.md
index af066343bf..075912ff47 100644
--- a/docs/docs/configuration/taskdef.md
+++ b/docs/docs/configuration/taskdef.md
@@ -1,4 +1,4 @@
-## Task Definition
+# Task Definition
Tasks are the building blocks of workflow in Conductor. A task can be an operator, system task or custom code written in any programming language.
diff --git a/docs/docs/configuration/taskdomains.md b/docs/docs/configuration/taskdomains.md
index c16647c20c..2d3f3d7da8 100644
--- a/docs/docs/configuration/taskdomains.md
+++ b/docs/docs/configuration/taskdomains.md
@@ -1,4 +1,4 @@
-## Task Domains
+# Task Domains
Task domains help support task development. The idea is that the same "task definition" can be implemented in different "domains". A domain is an arbitrary name that the developer controls. So when the workflow is started, the caller can specify, out of all the tasks in the workflow, which tasks need to run in a specific domain; this domain is then used to poll for the task on the client side to execute it.
As an example, if a workflow (WF1) has 3 tasks T1, T2, T3 and is deployed and working fine, that means there are T2 workers polling and executing. If you modify T2 and run it locally, there is no guarantee that your modified T2 worker will get the task you are looking for, as it comes from the general T2 queue. The "Task Domain" feature solves this problem by splitting the T2 queue by domains, so when the app polls for task T2 in a specific domain, it gets the correct task.
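+
+For example, a sketch of starting WF1 with task T2 pinned to a `mydev` domain through the start-workflow request (the base URL is an assumption):
+
+```shell
+# Start WF1, routing only T2 to the "mydev" domain; other tasks use the default queue
+curl -X POST "http://localhost:8080/workflow" \
+  -H "Content-Type: application/json" \
+  -d '{
+        "name": "WF1",
+        "input": {},
+        "taskToDomain": { "T2": "mydev" }
+      }'
+```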
diff --git a/docs/docs/configuration/workerdef.md b/docs/docs/configuration/workerdef.md
index 8f3af163ae..34347a2ad8 100644
--- a/docs/docs/configuration/workerdef.md
+++ b/docs/docs/configuration/workerdef.md
@@ -1,4 +1,5 @@
-## Worker
+# Worker Definition
+
A worker is responsible for executing a task. Operator and System tasks are handled by the Conductor server, while user
defined tasks needs to have a worker created that awaits the work to be scheduled by the server for it to be executed.
Workers can be implemented in any language, and Conductor provides support for Java, Golang and Python worker framework that provides features such as
diff --git a/docs/docs/configuration/workflowdef.md b/docs/docs/configuration/workflowdef.md
index 9ccca3fd11..2a4a50c098 100644
--- a/docs/docs/configuration/workflowdef.md
+++ b/docs/docs/configuration/workflowdef.md
@@ -1,4 +1,4 @@
-# Workflows
+# Workflow Definition
## What are Workflows?
@@ -26,7 +26,7 @@ execution in a reliable & scalable manner.
Let's start with a basic workflow and understand what are the different aspects of it. In particular, we will talk about
two stages of a workflow, *defining* a workflow and *executing* a workflow
-### *Simple Workflow Example*
+### Simple Workflow Example
Assume your business logic is to simply to get some shipping information and then do the shipping. You start by
logically partitioning them into two tasks:
@@ -70,7 +70,7 @@ First we would build these two task definitions. Let's assume that ```shipping i
"failureWorkflow": "shipping_issues",
"restartable": true,
"workflowStatusListenerEnabled": true,
- "ownerEmail": "devrel@orkes.io",
+ "ownerEmail": "conductor@example.com",
"timeoutPolicy": "ALERT_ONLY",
"timeoutSeconds": 0,
"variables": {},
@@ -111,9 +111,9 @@ The mail_a_box workflow has 2 tasks:
| description | Description of the task | optional |
| optional | true or false. When set to true - workflow continues even if the task fails. The status of the task is reflected as `COMPLETED_WITH_ERRORS` | Defaults to `false` |
| inputParameters | JSON template that defines the input given to the task | See [Wiring Inputs and Outputs](#wiring-inputs-and-outputs) for details |
-| domain | See [Task Domains](/conductor/configuration/taskdomains) for more information. | optional |
+| domain | See [Task Domains](/configuration/taskdomains.html) for more information. | optional |
-In addition to these parameters, System Tasks have their own parameters. Checkout [System Tasks](/conductor/configuration/systask/) for more information.
+In addition to these parameters, System Tasks have their own parameters. Check out [System Tasks](/configuration/systask.html) for more information.
## Wiring Inputs and Outputs
@@ -226,4 +226,4 @@ And `url` would be `https://some_url:7004` if no `url` was provided as input to
## Workflow notifications
-Conductor can be configured to publish notifications to external systems upon completion/termination of workflows. See [extending conductor](../../extend#workflow-status-listener) for details.
+Conductor can be configured to publish notifications to external systems upon completion/termination of workflows. See [extending conductor](/extend.html#workflow-status-listener) for details.
diff --git a/docs/docs/css/custom.css b/docs/docs/css/custom.css
index e41a2e9b55..3e4728cd5d 100644
--- a/docs/docs/css/custom.css
+++ b/docs/docs/css/custom.css
@@ -1,20 +1,313 @@
-.hljs {
- font-size: 13px;
- background-color: #f3f6f6;
+:root {
+ /*--main-text-color: #212121;*/
+ --brand-blue: #1976d2;
+ --brand-dark-blue: #242A36;
+ --caption-color: #4f4f4f;
+ --brand-lt-blue: #f0f5fb;
+ --brand-gray: rgb(118, 118, 118);
+ --brand-lt-gray: rgb(203,204,207);
+ --brand-red: #e50914;
}
-.hljs-attribute {
- color: #000000
+body {
+ color: var(--brand-dark-blue);
+ font-family: "Roboto", sans-serif;
+ font-weight: 400;
}
-.hljs-string {
- color: green
+
+body::before {
+ background: none;
+ display: none;
}
-code {
- color: green;
- font-size: 13px
+body > .container {
+ padding-top: 30px;
}
-.wy-side-nav-search {
- margin-bottom: 0;
+
+.bg-primary {
+ background: #fff !important;
+}
+
+/* Navbar */
+.navbar {
+ box-shadow: 0 4px 8px 0 rgb(0 0 0 / 10%), 0 0 2px 0 rgb(0 0 0 / 10%);
+ padding-left: 30px;
+ padding-right: 30px;
+ height: 80px;
+}
+.navbar-brand {
+ background-image: url(/img/logo.svg);
+ background-size: cover;
+ color: transparent !important;
+ padding: 0;
+ text-shadow: none;
+ margin-top: -6px;
+ height: 37px;
+ width: 175px;
+}
+.navbar-nav {
+ margin-left: 50px;
+}
+.navbar-nav > .navitem, .navbar-nav > .dropdown {
+ margin-left: 30px;
+}
+.navbar-nav > li .nav-link{
+ font-size: 15px;
+}
+
+.navbar-nav .nav-link {
+ color: #242A36 !important;
+ font-family: "Inter";
+ font-weight: 700;
+}
+
+.navbar-nav.ml-auto > li:first-child {
+ display: none;
+}
+.navbar-nav.ml-auto .nav-link{
+ font-size: 0px;
+}
+.navbar-nav.ml-auto .nav-link .fa{
+ font-size: 30px;
+}
+.navbar-nav .dropdown-item {
+ color: var(--brand-dark-blue);
+ font-family: "Inter";
+ font-weight: 500;
+ font-size: 14px;
+ background-color: transparent;
+}
+.navbar-nav .dropdown-menu > li:hover {
+ background-color: var(--brand-blue);
+}
+.navbar-nav .dropdown-menu > li:hover > .dropdown-item {
+ color: #fff;
+}
+.navbar-nav .dropdown-submenu:hover > .dropdown-item {
+ background-color: var(--brand-blue);
+}
+
+
+.navbar-nav .dropdown-menu li {
+ margin: 0px;
+ padding-top: 5px;
+ padding-bottom: 5px;
+}
+.navbar-nav .dropdown-item.active {
+ background-color: transparent;
+}
+
+.brand-darkblue {
+ background: #242A36 !important;
+}
+
+.brand-gray {
+ background: rgb(245,245,245);
+}
+.brand-blue {
+ background: #1976D2;
+}
+.brand-white {
+ background: #fff;
+}
+.logo {
+ height: 444px;
+}
+
+/* Fonts */
+h1, h2, h3, h4, h5, h6 {
+ color: var(--brand-dark-blue);
+ margin-bottom: 20px;
+}
+h1:first-child {
+ margin-top: 0;
+}
+
+h1 {
+ font-family: "Inter", sans-serif;
+ font-size: 32px;
+ font-weight: 700;
+ margin-top: 50px;
+}
+
+h2 {
+ font-family: "Inter", sans-serif;
+ font-size: 24px;
+ font-weight: 700;
+ margin-top: 40px;
+}
+
+h3 {
+ font-family: "Roboto", sans-serif;
+ font-size: 20px;
+ font-weight: 500;
+ margin-top: 30px;
+}
+
+h4 {
+ font-family: "Roboto", sans-serif;
+ font-size: 18px;
+ font-weight: 400;
+ margin-top: 20px;
+}
+
+.main li {
+ margin-bottom: 15px;
+}
+
+
+.btn {
+ font-family: "Roboto", sans-serif;
+ font-size: 14px;
+}
+.btn-primary {
+ background: #1976D2;
+ border: none;
+}
+
+.hero {
+ padding-top: 100px;
+ padding-bottom: 100px;
+}
+
+.hero .heading {
+ font-size: 56px;
+ font-weight: 900;
+ line-height: 68px;
+}
+
+.hero .btn {
+ font-size: 16px;
+ padding: 10px 20px;
+}
+
+.hero .illustration {
+ margin-left: 35px;
+}
+
+
+.bullets .heading, .module .heading {
+ font-family: "Inter", sans-serif;
+ font-size: 26px;
+ font-weight: 700;
+}
+.bullets .row {
+ margin-bottom: 60px;
+}
+.bullets .caption {
+ padding-top: 10px;
+ padding-right: 30px;
+}
+.icon {
+ height: 25px;
+ margin-right: 5px;
+ vertical-align: -3px;
+}
+
+.caption {
+ font-weight: 400;
+ font-size: 17px;
+ line-height: 24px;
+ color: var(--caption-color);
+}
+
+.module {
+ margin-top: 80px;
+ margin-bottom: 80px;
+ padding-top: 50px;
+ padding-bottom: 50px;
+}
+
+.module .caption {
+ padding-top: 10px;
+ padding-right: 80px;
+}
+.module .screenshot {
+ width: 600px;
+ height: 337px;
+ box-shadow:inset 0 1px 0 rgba(255,255,255,.6), 0 22px 70px 4px rgba(0,0,0,0.56), 0 0 0 1px rgba(0, 0, 0, 0.0);
+ border-radius: 5px;
+ background-size: cover;
+}
+
+/* Footer */
+footer {
+ margin: 0px;
+ padding: 0px !important;
+ text-align: left;
+ font-weight: 400;
+}
+.footer {
+ background-color: var(--brand-dark-blue);
+ padding: 50px 0px;
+ color: #fff;
+ font-size: 14px;
+ margin-top: 50px;
+}
+.footer a {
+ color: var(--brand-lt-gray);
+}
+.footer .subhead {
+ font-weight: 700;
+ color: #fff;
+ font-size: 15px;
+ margin-bottom: 10px;
+}
+.footer .red {
+ color: var(--brand-red);
+}
+.footer .fr {
+ text-align: right;
+}
+
+/* TOC menu */
+.toc ul {
+ list-style: none;
+ padding: 0px;
+}
+.toc > ul > li li {
+ padding-left: 15px;
+ font-weight: 400;
+ font-size: 14px;
+}
+.toc > ul > li {
+ font-size: 15px;
+ font-weight: 500;
+}
+.toc .toc-link {
+ margin-bottom: 5px;
+ display: block;
+ color: var(--brand-dark-blue);
+}
+.toc .toc-link.active {
+ font-weight: 700;
+}
+
+/* Homepage Overrides */
+.homepage > .container {
+ max-width: none;
+}
+.homepage .toc {
+ display: none;
+}
+
+/* Comparison block */
+.compare {
+ background-color: var(--brand-lt-blue);
+ padding-top: 80px;
+ padding-bottom: 80px;
+ margin: 0px -15px;
+}
+.compare .heading {
+ margin-bottom: 30px;
+ margin-top: 0px;
+}
+.compare .bubble {
+ background: #fff;
+ border-radius: 10px;
+ padding: 30px;
+ height: 100%;
+}
+
+.compare .caption {
+ font-size: 15px;
+ line-height: 22px;
}
-body {
- font-family: "Arial","proxima-nova","Helvetica Neue","Arial","sans-serif";
-}
\ No newline at end of file
diff --git a/docs/docs/extend.md b/docs/docs/extend.md
index 3b49c33185..82895f6677 100644
--- a/docs/docs/extend.md
+++ b/docs/docs/extend.md
@@ -1,3 +1,5 @@
+# Extending Conductor
+
## Backend
Conductor provides a pluggable backend. The current implementation uses Dynomite.
diff --git a/docs/docs/externalpayloadstorage.md b/docs/docs/externalpayloadstorage.md
index 3d9c62dd34..097f0b8fca 100644
--- a/docs/docs/externalpayloadstorage.md
+++ b/docs/docs/externalpayloadstorage.md
@@ -1,3 +1,5 @@
+# External Payload Storage
+
!!!warning
    The external payload storage is currently only implemented to be used by the Java client. Client libraries in other languages need to be modified to enable this.
Contributions are welcomed.
diff --git a/docs/docs/faq.md b/docs/docs/faq.md
index 787b4f6623..d72a39a98b 100644
--- a/docs/docs/faq.md
+++ b/docs/docs/faq.md
@@ -20,14 +20,14 @@ Ensure all the tasks are registered via `/metadata/taskdefs` APIs. Add any miss
### Where does my worker run? How does conductor run my tasks?
Conductor does not run the workers. When a task is scheduled, it is put into the queue maintained by Conductor. Workers are required to poll for tasks using `/tasks/poll` API at periodic interval, execute the business logic for the task and report back the results using `POST /tasks` API call.
-Conductor, however will run [system tasks](../configuration/systask/) on the Conductor server.
+Conductor, however will run [system tasks](/configuration/systask.html) on the Conductor server.
### How can I schedule workflows to run at a specific time?
Netflix Conductor itself does not provide any scheduling mechanism. But there is a community project [_Schedule Conductor Workflows_](https://github.com/jas34/scheduledwf) which provides workflow scheduling capability as a pluggable module as well as workflow server.
Other way is you can use any of the available scheduling systems to make REST calls to Conductor to start a workflow. Alternatively, publish a message to a supported eventing system like SQS to trigger a workflow.
-More details about [eventing](../configuration/eventhandlers/).
+More details about [eventing](/configuration/eventhandlers.html).
### How do I setup Dynomite cluster?
@@ -65,11 +65,11 @@ When a workflow fails, you can configure a "failure workflow" to run using the``
You can also use the Workflow Status Listener:
-* Set the workflowStatusListenerEnabled field in your workflow definition to true which enables [notifications](https://netflix.github.io/conductor/configuration/workflowdef/#workflow-notifications).
-* Add a custom implementation of the Workflow Status Listener. Refer [this](https://netflix.github.io/conductor/extend/#workflow-status-listener).
-* This notification can be implemented in such a way as to either send a notification to an external system or to send an event on the conductor queue to complete/fail another task in another workflow as described [here](https://netflix.github.io/conductor/configuration/eventhandlers/#event-handler).
+* Set the workflowStatusListenerEnabled field in your workflow definition to true which enables [notifications](/configuration/workflowdef.html#workflow-notifications).
+* Add a custom implementation of the Workflow Status Listener. Refer [this](/extend.html#workflow-status-listener).
+* This notification can be implemented in such a way as to either send a notification to an external system or to send an event on the conductor queue to complete/fail another task in another workflow as described [here](/configuration/eventhandlers.html).
-Refer to this [documentation](../configuration/workflowdef/#workflow-notifications) to extend conductor to send out events/notifications upon workflow completion/failure.
+Refer to this [documentation](/configuration/workflowdef.html#workflow-notifications) to extend conductor to send out events/notifications upon workflow completion/failure.
diff --git a/docs/docs/gettingstarted/basicconcepts.md b/docs/docs/gettingstarted/basicconcepts.md
index c60a96c29c..6592c04239 100644
--- a/docs/docs/gettingstarted/basicconcepts.md
+++ b/docs/docs/gettingstarted/basicconcepts.md
@@ -1,3 +1,5 @@
+# Basic Concepts
+
## Definitions (aka Metadata or Blueprints)
Conductor definitions are like class definitions in OOP paradigm, or templates. You define this once, and use for each workflow execution. Definitions to Executions have 1:N relationship.
@@ -5,13 +7,13 @@ Conductor definitions are like class definitions in OOP paradigm, or templates.
Tasks are the building blocks of Workflow. There must be at least one task in a Workflow.
Tasks can be categorized into two types:
- * [System tasks](../../configuration/systask) - executed by Conductor server.
- * [Worker tasks](../../configuration/workerdef) - executed by your own workers.
+ * [System tasks](/configuration/systask.html) - executed by Conductor server.
+ * [Worker tasks](/configuration/workerdef.html) - executed by your own workers.
## Workflow
A Workflow is the container of your process flow. It could include several different types of Tasks, Sub-Workflows, inputs and outputs connected to each other, to effectively achieve the desired result. The tasks are either control tasks (fork, conditional etc) or application tasks (e.g. encode a file) that are executed on a remote machine.
-[Detailed description](../../configuration/workflowdef)
+[Detailed description](/configuration/workflowdef.html)
## Task Definition
Task definitions help define Task level parameters like inputs and outputs, timeouts, retries etc.
@@ -19,12 +21,12 @@ Task definitions help define Task level parameters like inputs and outputs, time
* All tasks need to be registered before they can be used by active workflows.
* A task can be re-used within multiple workflows.
-[Detailed description](../../configuration/taskdef)
+[Detailed description](/configuration/taskdef.html)
## System Tasks
System tasks are executed within the JVM of the Conductor server and managed by Conductor for its execution and scalability.
-See [Systems tasks](../../configuration/systask) for list of available Task types, and instructions for using them.
+See [Systems tasks](/configuration/systask.html) for list of available Task types, and instructions for using them.
!!! Note
Conductor provides an API to create user defined tasks that are executed in the same JVM as the engine. See [WorkflowSystemTask](https://github.com/Netflix/conductor/blob/main/core/src/main/java/com/netflix/conductor/core/execution/tasks/WorkflowSystemTask.java) interface for details.
diff --git a/docs/docs/gettingstarted/client.md b/docs/docs/gettingstarted/client.md
index 4ef4dce736..fd7f537bd7 100644
--- a/docs/docs/gettingstarted/client.md
+++ b/docs/docs/gettingstarted/client.md
@@ -1,3 +1,4 @@
+# Using the Client
Conductor tasks that are executed by remote workers communicate over HTTP endpoints/gRPC to poll for the task and update the status of the execution.
## Client APIs
diff --git a/docs/docs/gettingstarted/docker.md b/docs/docs/gettingstarted/docker.md
new file mode 100644
index 0000000000..c84b331fd4
--- /dev/null
+++ b/docs/docs/gettingstarted/docker.md
@@ -0,0 +1,148 @@
+
+# Running via Docker Compose
+
+In this article we will explore how to set up Netflix Conductor on your local machine using Docker Compose.
+Docker Compose will bring up the following:
+1. Conductor API Server
+2. Conductor UI
+3. Elasticsearch for searching workflows
+
+## Prerequisites
+1. Docker: [https://docs.docker.com/get-docker/](https://docs.docker.com/get-docker/)
+2. A host with enough CPU and RAM to run multiple Docker containers (at least 16 GB RAM is recommended)
+
+## Steps
+
+#### 1. Clone the Conductor Code
+
+```shell
+$ git clone https://github.com/Netflix/conductor.git
+```
+
+#### 2. Build the Docker Compose
+
+```shell
+$ cd conductor
+conductor $ cd docker
+docker $ docker-compose build
+```
+#### Note: Conductor supplies multiple Docker Compose templates that can be used with different configurations:
+
+| File | Containers |
+|--------------------------------|-----------------------------------------------------------------------------------------|
+| docker-compose.yaml | 1. In Memory Conductor Server 2. Elasticsearch 3. UI |
+| docker-compose-dynomite.yaml | 1. In Memory Conductor Server 2. Elasticsearch 3. UI 4. Dynomite Redis for persistence |
+| docker-compose-postgres.yaml | 1. In Memory Conductor Server 2. Elasticsearch 3. UI 4. Postgres persistence |
+| docker-compose-prometheus.yaml | Brings up Prometheus server |
+
+#### 3. Run Docker Compose
+
+```shell
+docker $ docker-compose up
+```
+
+Once up and running, you will see the following in your Docker dashboard:
+
+1. Elasticsearch
+2. Conductor UI
+3. Conductor Server
+
+You can access all three in your browser to verify that they are running correctly:
+
+Conductor Server URL: [http://localhost:8080/swagger-ui/index.html?configUrl=/api-docs/swagger-config](http://localhost:8080/swagger-ui/index.html?configUrl=/api-docs/swagger-config)
+
+
+
+Conductor UI URL: [http://localhost:5000/](http://localhost:5000/)
+
+
+
+
+### Exiting Compose
+Pressing `Ctrl+c` will exit Docker Compose.
+
+To ensure the containers are stopped, execute: `docker-compose down`.
+
+## Standalone Server Image
+To build and run the server image, without using `docker-compose`, from the `docker` directory execute:
+```
+docker build -t conductor:server -f server/Dockerfile ../
+docker run -p 8080:8080 -d --name conductor_server conductor:server
+```
+This builds the image `conductor:server` and runs it in a container named `conductor_server`. The API should now be accessible at `localhost:8080`.
+
+To 'login' to the running container, use the command:
+```
+docker exec -it conductor_server /bin/sh
+```
+
+## Standalone UI Image
+From the `docker` directory,
+```
+docker build -t conductor:ui -f ui/Dockerfile ../
+docker run -p 5000:5000 -d --name conductor_ui conductor:ui
+```
+This builds the image `conductor:ui` and runs it in a container named `conductor_ui`. The UI should now be accessible at `localhost:5000`.
+
+### Note
+* In order for the UI to do anything useful, the Conductor Server must already be running on port 8080, either in a Docker container (see above) or directly in the local JRE.
+* Additionally, significant parts of the UI will not be functional without Elasticsearch being available. Using the `docker-compose` approach alleviates these considerations.
+
+## Monitoring with Prometheus
+
+Start Prometheus with:
+`docker-compose -f docker-compose-prometheus.yaml up -d`
+
+Go to [http://127.0.0.1:9090](http://127.0.0.1:9090).
+
+
+## Potential problem when using Docker Images
+
+#### Not enough memory
+
+ 1. You will need at least 16 GB of memory to run everything. You can modify the Docker Compose files to skip
+ Elasticsearch if you cannot spare that much memory.
+ 2. To disable Elasticsearch using Docker Compose, follow the steps listed here: **TODO LINK**
+
+#### Elasticsearch fails to come up in arm64 based CPU machines
+
+ 1. As of this writing, Conductor relies on the 6.8.x version of Elasticsearch. This version doesn't have an
+ arm64-based Docker image. You will need to use Elasticsearch 7.x, which requires a bit of customization to get up
+ and running.
+
+#### Elasticsearch remains in Yellow health
+
+ 1. When you run Elasticsearch, sometimes its health remains in a Yellow state. By default, the Conductor server requires
+ a Green state to run when indexing is enabled. To work around this, you can use the following property:
+ `conductor.elasticsearch.clusterHealthColor=yellow` Reference: [Issue 2262](https://github.com/Netflix/conductor/issues/2262)
+
+
+
+#### Elasticsearch timeout
+A standalone (single node) Elasticsearch instance has a yellow status, which will cause a timeout for the Conductor server (green is required).
+Spin up a cluster (more than one node) to prevent the timeout, or use the config option `conductor.elasticsearch.clusterHealthColor=yellow`.
+
+See issue: https://github.com/Netflix/conductor/issues/2262
+
+#### Changes in config-*.properties do not take effect
+The config is copied into the image during the Docker build. You have to rebuild the image or, better, mount a volume over it so new changes are reflected.
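+
+For example, a sketch of mounting a local properties file into the server container (the container-side path is an assumption; check the server Dockerfile and startup script for the actual config location):
+
+```shell
+# Run the server with a custom properties file mounted over the baked-in config
+docker run -p 8080:8080 -d --name conductor_server \
+  -v "$(pwd)/your_config.properties:/app/config/config.properties" \
+  conductor:server
+```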
+
+#### To troubleshoot a failed startup
+Check the server log, which is located at `/app/logs` (the default directory in the Dockerfile).
+
+#### Unable to access the conductor:server API on port 8080
+It may take some time for the Conductor server to start. Please check the server log for potential errors.
+
+#### Elasticsearch
+Elasticsearch is optional, but be aware that disabling it will make most of the Conductor UI non-functional.
+
+##### How to enable Elasticsearch
+* Set `workflow.indexing.enabled=true` in your `config-*.properties` file
+* Add the Elasticsearch-related config,
+  e.g.: `conductor.elasticsearch.url=http://es:9200`
+
+##### How to disable Elasticsearch
+* Set `workflow.indexing.enabled=false` in your `config-*.properties` file
+* Comment out all the Elasticsearch-related config,
+  e.g.: `conductor.elasticsearch.url=http://es:9200`
+
diff --git a/docs/docs/gettingstarted/intro.md b/docs/docs/gettingstarted/intro.md
new file mode 100644
index 0000000000..789a7db4b1
--- /dev/null
+++ b/docs/docs/gettingstarted/intro.md
@@ -0,0 +1,28 @@
+# Why Conductor?
+## Conductor was built to help Netflix orchestrate microservices-based process flows with the following features:
+
+* A distributed server ecosystem, which stores workflow state information efficiently.
+* Allow creation of process / business flows in which each individual task can be implemented by the same / different microservices.
+* A DAG (Directed Acyclic Graph) based workflow definition.
+* Workflow definitions are decoupled from the service implementations.
+* Provide visibility and traceability into these process flows.
+* Simple interface to connect workers, which execute the tasks in workflows.
+* Workers are language agnostic, allowing each microservice to be written in the language most suited for the service.
+* Full operational control over workflows with the ability to pause, resume, restart, retry and terminate.
+* Allow greater reuse of existing microservices providing an easier path for onboarding.
+* User interface to visualize, replay and search the process flows.
+* Ability to scale to millions of concurrently running process flows.
+* Backed by a queuing service abstracted from the clients.
+* Be able to operate on HTTP or other transports e.g. gRPC.
+* Event handlers to control workflows via external actions.
+* Client implementations in Java, Python and other languages.
+* Various configurable properties with sensible defaults to fine tune workflow and task executions like rate limiting, concurrent execution limits etc.
+
+## Why not peer to peer choreography?
+
+With peer-to-peer task choreography, we found it harder to scale with growing business needs and complexities.
+The pub/sub model worked for the simplest of flows, but quickly highlighted some of the issues associated with the approach:
+
+* Process flows are “embedded” within the code of multiple applications.
+* Often, there is tight coupling and assumptions around input/output, SLAs etc., making it harder to adapt to changing needs.
+* There is almost no way to systematically answer “How far along are we with process X?”
diff --git a/docs/docs/server.md b/docs/docs/gettingstarted/local.md
similarity index 89%
rename from docs/docs/server.md
rename to docs/docs/gettingstarted/local.md
index bb1bd1bdb5..ef3943762e 100644
--- a/docs/docs/server.md
+++ b/docs/docs/gettingstarted/local.md
@@ -45,6 +45,12 @@ protobuf {
...
}
```
+You may also need to install Rosetta:
+
+```bash
+softwareupdate --install-rosetta
+```
+
```shell
$ cd conductor
@@ -55,7 +61,7 @@ server $ ../gradlew bootRun
Navigate to the swagger API docs:
[http://localhost:8080/swagger-ui/index.html?configUrl=/api-docs/swagger-config](http://localhost:8080/swagger-ui/index.html?configUrl=/api-docs/swagger-config)
-![Conductor Swagger](img/tutorial/swagger.png)
+
## Build and Run UI
@@ -72,9 +78,9 @@ ui $ yarn run start
Launch UI [http://localhost:5000](http://localhost:5000)
-![Conductor Server Home Page](img/tutorial/conductorUI.png)
+
## Summary
 1. All the data is stored in memory, so any workflows created or executed will be wiped out once the server is terminated.
 2. Indexing is disabled, so search functionality in the UI will not work and will return an empty result set.
-3. See how to install Conductor using [Docker](running-locally-docker.md) with persistence and indexing.
\ No newline at end of file
+3. See how to install Conductor using [Docker](docker.md) with persistence and indexing.
\ No newline at end of file
diff --git a/docs/docs/gettingstarted/startworkflow.md b/docs/docs/gettingstarted/startworkflow.md
index d438378a29..2b285c0a75 100644
--- a/docs/docs/gettingstarted/startworkflow.md
+++ b/docs/docs/gettingstarted/startworkflow.md
@@ -1,16 +1,16 @@
-## Start Workflow Request
-
-When starting a Workflow execution with a registered definition, Workflow accepts following parameters:
+# Starting a Workflow
+## Start Workflow Endpoint
+When starting a Workflow execution with a registered definition, `/workflow` accepts the following parameters:
| Field | Description | Notes |
|:--------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------------|
| name | Name of the Workflow. MUST be registered with Conductor before starting workflow | |
| version | Workflow version | defaults to latest available version |
-| input | JSON object with key value params, that can be used by downstream tasks | See [Wiring Inputs and Outputs](../../configuration/workflowdef/#wiring-inputs-and-outputs) for details |
+| input | JSON object with key value params, that can be used by downstream tasks | See [Wiring Inputs and Outputs](/configuration/workflowdef.html#wiring-inputs-and-outputs) for details |
| correlationId | Unique Id that correlates multiple Workflow executions | optional |
-| taskToDomain | See [Task Domains](../../configuration/taskdomains/#task-domains) for more information. | optional |
-| workflowDef | An adhoc [Workflow Definition](../../configuration/workflowdef) to run, without registering. See [Dynamic Workflows](#dynamic-workflows). | optional |
-| externalInputPayloadStoragePath | This is taken care of by Java client. See [External Payload Storage](../../externalpayloadstorage/) for more info. | optional |
+| taskToDomain | See [Task Domains](/configuration/taskdomains.html) for more information. | optional |
+| workflowDef | An adhoc [Workflow Definition](/configuration/workflowdef.html) to run, without registering. See [Dynamic Workflows](#dynamic-workflows). | optional |
+| externalInputPayloadStoragePath | This is taken care of by Java client. See [External Payload Storage](/externalpayloadstorage.html) for more info. | optional |
| priority | Priority level for the tasks within this workflow execution. Possible values are between 0 - 99. | optional |
**Example:**
diff --git a/docs/docs/gettingstarted/steps.md b/docs/docs/gettingstarted/steps.md
new file mode 100644
index 0000000000..695aa62779
--- /dev/null
+++ b/docs/docs/gettingstarted/steps.md
@@ -0,0 +1,36 @@
+
+# High Level Steps
+Steps required to register and execute a new workflow:
+
+1. Define the task definitions used by the workflow.
+2. Create the workflow definition.
+3. Create task worker(s) that poll for scheduled tasks at regular intervals.
+
+### Trigger Workflow Execution
+
+```
+POST /workflow/{name}
+{
+ ... //json payload as workflow input
+}
+```
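+
+For example, with curl against a local server (the workflow name and input are illustrative; note the `/api` prefix on a default installation):
+
+```
+curl -X POST 'http://localhost:8080/api/workflow/encode_and_deploy' \
+  -H 'Content-Type: application/json' \
+  -d '{"fileLocation": "s3://example-bucket/video.mp4"}'
+```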
+
+### Polling for a task
+
+```
+GET /tasks/poll/batch/{taskType}
+```
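+
+For example (the task type and worker id are illustrative; `count` and `timeout` are optional batch-poll parameters):
+
+```
+curl 'http://localhost:8080/api/tasks/poll/batch/encode_task?workerid=worker-1&count=5&timeout=100'
+```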
+
+### Update task status
+
+```
+POST /tasks
+{
+ "outputData": {
+ "encodeResult":"success",
+ "location": "http://cdn.example.com/file/location.png"
+ //any task specific output
+ },
+ "status": "COMPLETED"
+}
+```
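+
+For example (the ids are placeholders for the values returned when the task was polled):
+
+```
+curl -X POST 'http://localhost:8080/api/tasks' \
+  -H 'Content-Type: application/json' \
+  -d '{
+    "workflowInstanceId": "<workflow id>",
+    "taskId": "<task id>",
+    "status": "COMPLETED",
+    "outputData": {"encodeResult": "success"}
+  }'
+```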
diff --git a/docs/docs/how-tos/Tasks/creating-tasks.md b/docs/docs/how-tos/Tasks/creating-tasks.md
index 82a44013ac..54ae74e91b 100644
--- a/docs/docs/how-tos/Tasks/creating-tasks.md
+++ b/docs/docs/how-tos/Tasks/creating-tasks.md
@@ -1,14 +1,8 @@
----
-sidebar_position: 1
----
-
# Creating Task Definitions
Tasks can be created using the tasks metadata API
-```http request
-POST /api/metadata/taskdefs
-```
+`POST /api/metadata/taskdefs`
This API takes an array of new task definitions.
@@ -36,6 +30,6 @@ fetch("http://localhost:8080/api/metadata/taskdefs", {
## Best Practices
1. You can update a set of tasks together in this API
-2. Task configurations are important attributes that controls the behavior of this task in a Workflow. Refer to [Task Configurations](/content/docs/how-tos/task-configurations) for all the options and details'
+2. Task configurations are important attributes that control the behavior of this task in a Workflow. Refer to [Task Configurations](/configuration/taskdef.html) for all the options and details.
3. You can also use the Conductor Swagger UI to update the tasks
diff --git a/docs/docs/how-tos/Tasks/dynamic-vs-switch-tasks.md b/docs/docs/how-tos/Tasks/dynamic-vs-switch-tasks.md
index f9e1e4e6d0..e81327d494 100644
--- a/docs/docs/how-tos/Tasks/dynamic-vs-switch-tasks.md
+++ b/docs/docs/how-tos/Tasks/dynamic-vs-switch-tasks.md
@@ -6,11 +6,11 @@ sidebar_position: 1
Learn more about
-1. [Dynamic Tasks](../reference-docs/dynamic-task)
-2. [Switch Tasks](../reference-docs/switch-task)
+1. [Dynamic Tasks](/reference-docs/dynamic-task.html)
+2. [Switch Tasks](/reference-docs/switch-task.html)
 Dynamic Tasks are useful in situations when you need to run a task whose task type is determined at runtime instead
-of during the configuration. It is similar to the [SWITCH](../reference-docs/switch-task) use case but with `DYNAMIC`
+of during the configuration. It is similar to the [SWITCH](/reference-docs/switch-task.html) use case but with `DYNAMIC`
we won't need to preconfigure all case options in the workflow definition itself. Instead, we can mark the task
as `DYNAMIC` and determine which underlying task does it run during the workflow execution itself.
diff --git a/docs/docs/how-tos/Tasks/extending-system-tasks.md b/docs/docs/how-tos/Tasks/extending-system-tasks.md
new file mode 100644
index 0000000000..5661d040c2
--- /dev/null
+++ b/docs/docs/how-tos/Tasks/extending-system-tasks.md
@@ -0,0 +1,98 @@
+# Extending System Tasks
+
+[System tasks](/configuration/systask.html) allow Conductor to run simple tasks on the server, removing the need to build (and deploy) workers for basic tasks. This lets you automate more mundane tasks without building specific microservices for them.
+
+However, sometimes it is necessary to add additional parameters to a System Task to get the desired behavior.
+
+## Example HTTP Task
+
+```json
+{
+ "name": "get_weather_90210",
+ "version": 1,
+ "tasks": [
+ {
+ "name": "get_weather_90210",
+ "taskReferenceName": "get_weather_90210",
+ "inputParameters": {
+ "http_request": {
+ "uri": "https://weatherdbi.herokuapp.com/data/weather/90210",
+ "method": "GET",
+ "connectionTimeOut": 1300,
+ "readTimeOut": 1300
+ }
+ },
+ "type": "HTTP",
+ "decisionCases": {},
+ "defaultCase": [],
+ "forkTasks": [],
+ "startDelay": 0,
+ "joinOn": [],
+ "optional": false,
+ "defaultExclusiveJoinTask": [],
+ "asyncComplete": false,
+ "loopOver": []
+ }
+ ],
+ "inputParameters": [],
+ "outputParameters": {
+    "data": "${get_weather_90210.output.response.body.currentConditions.comment}"
+ },
+ "schemaVersion": 2,
+ "restartable": true,
+ "workflowStatusListenerEnabled": false,
+ "ownerEmail": "conductor@example.com",
+ "timeoutPolicy": "ALERT_ONLY",
+ "timeoutSeconds": 0,
+ "variables": {},
+ "inputTemplate": {}
+}
+
+```
+
+This very simple workflow has a single HTTP Task inside. No parameters need to be passed, and when run, the HTTP task will return the weather in Beverly Hills, CA (Zip code = 90210).
+
+> This API has a very slow response time. In the HTTP task, the connection is set to time out after 1300ms, which is *too short* for this API, resulting in a timeout. This API *will* work if we allowed for a longer timeout, but in order to demonstrate adding retries to the HTTP Task, we will artificially force the API call to fail.
+
+When this workflow is run - it fails, as expected.
+
+Now, sometimes an API call might fail due to an issue on the remote server, and retrying the call will result in a response. With many Conductor tasks, the ```retryCount```, ```retryDelaySeconds``` and ```retryLogic``` fields can be applied to retry the task (with the desired parameters).
+
+By default, the [HTTP Task](/reference-docs/http-task.html) does not have ```retryCount```, ```retryDelaySeconds``` or ```retryLogic``` built in. Attempting to add these parameters to an HTTP Task results in an error.
+
+## The Solution
+
+We can create a task definition with the same name that carries the desired parameters. Define the following task (note that the ```name``` is identical to the one in the workflow):
+
+```json
+{
+
+ "createdBy": "",
+ "name": "get_weather_90210",
+ "description": "editing HTTP task",
+ "retryCount": 3,
+ "timeoutSeconds": 5,
+ "inputKeys": [],
+ "outputKeys": [],
+ "timeoutPolicy": "TIME_OUT_WF",
+ "retryLogic": "FIXED",
+ "retryDelaySeconds": 5,
+ "responseTimeoutSeconds": 5,
+ "inputTemplate": {},
+ "rateLimitPerFrequency": 0,
+ "rateLimitFrequencyInSeconds": 1
+}
+
+```
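+
+If you are following along, this definition can be registered through the metadata API (a sketch; the endpoint accepts an array of task definitions, and an `ownerEmail` is added here because recent Conductor versions require one):
+
+```shell
+curl -X POST 'http://localhost:8080/api/metadata/taskdefs' \
+  -H 'Content-Type: application/json' \
+  -d '[{"name": "get_weather_90210", "description": "editing HTTP task",
+        "retryCount": 3, "retryLogic": "FIXED", "retryDelaySeconds": 5,
+        "timeoutSeconds": 5, "responseTimeoutSeconds": 5,
+        "timeoutPolicy": "TIME_OUT_WF", "ownerEmail": "conductor@example.com"}]'
+```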
+
+We've added the three parameters: ```retryCount: 3, retryDelaySeconds: 5, retryLogic: FIXED```
+
+The ```get_weather_90210``` task will now run 4 times (it will fail once, and then retry 3 times), with a ```FIXED``` 5 second delay between attempts.
+
+Re-running the task (and looking at the timeline view) shows that this is what occurs. There are 4 attempts, with a 5 second delay between them.
+
+If we change the ```retryLogic``` to EXPONENTIAL_BACKOFF, the delay between attempts grows exponentially:
+
+1. 5*2^0 = 5 seconds
+2. 5*2^1 = 10 seconds
+3. 5*2^2 = 20 seconds
diff --git a/docs/docs/how-tos/Tasks/task-configurations.md b/docs/docs/how-tos/Tasks/task-configurations.md
index 5177833685..1fef6c7095 100644
--- a/docs/docs/how-tos/Tasks/task-configurations.md
+++ b/docs/docs/how-tos/Tasks/task-configurations.md
@@ -4,7 +4,7 @@ sidebar_position: 1
# Task Configurations
-Refer to [Task Definitions](../getting-started/concepts/tasks-and-workers#task-definitions) for details on how to configure task definitions
+Refer to [Task Definitions](/configuration/taskdef.html) for details on how to configure task definitions
### Example
@@ -31,5 +31,5 @@ Here is a task template payload with commonly used fields:
### Best Practices
-1. Refer to [Task Timeouts](./task-timeouts) for additional information on how the various timeout settings work
-2. Refer to [Monitoring Task Queues](./monitoring-task-queues) on how to monitor task queues
+1. Refer to [Task Timeouts](/how-tos/Tasks/task-timeouts.html) for additional information on how the various timeout settings work
+2. Refer to [Monitoring Task Queues](/how-tos/Tasks/monitoring-task-queues.html) on how to monitor task queues
diff --git a/docs/docs/how-tos/Tasks/updating-tasks.md b/docs/docs/how-tos/Tasks/updating-tasks.md
index b786d809a5..4978e80b37 100644
--- a/docs/docs/how-tos/Tasks/updating-tasks.md
+++ b/docs/docs/how-tos/Tasks/updating-tasks.md
@@ -39,4 +39,4 @@ fetch("http://localhost:8080/api/metadata/taskdefs", {
## Best Practices
1. You can also use the Conductor Swagger UI to update the tasks
-2. Task configurations are important attributes that controls the behavior of this task in a Workflow. Refer to [Task Configurations](task-configurations) for all the options and details'
+2. Task configurations are important attributes that control the behavior of this task in a Workflow. Refer to [Task Configurations](/how-tos/Tasks/task-configurations.html) for all the options and details.
diff --git a/docs/docs/how-tos/Workflows/create-workflow.md b/docs/docs/how-tos/Workflows/create-workflow.md
deleted file mode 100644
index 1b5368a58b..0000000000
--- a/docs/docs/how-tos/Workflows/create-workflow.md
+++ /dev/null
@@ -1,11 +0,0 @@
----
-sidebar_position: 1
----
-
-# Creating Workflows
-
-TODO
-
-## Summary
-
-TODO
diff --git a/docs/docs/how-tos/Workflows/handling-errors.md b/docs/docs/how-tos/Workflows/handling-errors.md
index 7f8d4e20a6..b58fe1e91e 100644
--- a/docs/docs/how-tos/Workflows/handling-errors.md
+++ b/docs/docs/how-tos/Workflows/handling-errors.md
@@ -1,20 +1,59 @@
----
-sidebar_position: 1
----
-
# Handling Errors
When a workflow fails, there are 2 ways to handle the exception.
-## ```failureWorkflow```
+## Set ```failureWorkflow``` in Workflow Definition
+
+In your main workflow definition, you can configure another workflow to run upon failure by adding the `failureWorkflow` parameter to the definition, e.g. `"failureWorkflow": "notify_on_failure"` (the name is illustrative).
+
+The failure workflow is started with the failed workflow's id and failure reason as input. Below is a sketch of such a workflow (the webhook URI is illustrative) that posts a notification via an HTTP task:
+
+```json
+{
+  "name": "notify_on_failure",
+  "description": "Posts a notification when the main workflow fails",
+  "version": 1,
+  "tasks": [
+    {
+      "name": "notify_on_failure",
+      "taskReferenceName": "notify_on_failure_ref",
+      "inputParameters": {
+        "http_request": {
+          "uri": "https://hooks.example.com/notify",
+          "method": "POST",
+          "body": {
+            "text": "workflow: ${workflow.input.workflowId} failed. ${workflow.input.reason}"
+          },
+          "connectionTimeOut": 5000,
+          "readTimeOut": 5000
+        }
+      },
+      "type": "HTTP",
+      "retryCount": 3
+    }
+  ],
+  "restartable": true,
+  "workflowStatusListenerEnabled": false,
+  "ownerEmail": "conductor@example.com",
+  "timeoutPolicy": "ALERT_ONLY",
+  "timeoutSeconds": 0,
+  "schemaVersion": 2
+}
+```
+
+## Set ```workflowStatusListenerEnabled```
+
+When this is enabled, status-change notifications become possible: by building a custom implementation of the Workflow Status Listener, a notification can be sent to an external service when the workflow completes or fails. [More details.](https://github.com/Netflix/conductor/issues/1017#issuecomment-468869173)
\ No newline at end of file
diff --git a/docs/docs/how-tos/Workflows/searching-workflows.md b/docs/docs/how-tos/Workflows/searching-workflows.md
index 21c4be56d5..be1f82c562 100644
--- a/docs/docs/how-tos/Workflows/searching-workflows.md
+++ b/docs/docs/how-tos/Workflows/searching-workflows.md
@@ -11,8 +11,8 @@ In this article we will learn how to search through workflow executions via the
1. Conductor app and UI installed and running in an environment. If required we can look at the following options to get
an environment up and running.
- 1. [Build and Run Conductor Locally](/content/docs/getting-started/install/running-locally)
- 2. [Running via Docker Compose](/content/docs/getting-started/install/running-locally-docker)
+ 1. [Build and Run Conductor Locally](/gettingstarted/local.html)
+ 2. [Running via Docker Compose](/gettingstarted/docker.html)
## UI Workflows View
diff --git a/docs/docs/how-tos/Workflows/starting-workflows.md b/docs/docs/how-tos/Workflows/starting-workflows.md
index 776f57405c..e35cc60046 100644
--- a/docs/docs/how-tos/Workflows/starting-workflows.md
+++ b/docs/docs/how-tos/Workflows/starting-workflows.md
@@ -1,8 +1,4 @@
----
-sidebar_position: 1
----
-
-# Starting Workflow Executions
+# Starting Workflows
Workflow executions can be started by using the following API:
diff --git a/docs/docs/how-tos/Workflows/updating-workflows.md b/docs/docs/how-tos/Workflows/updating-workflows.md
index 77d4b1f162..beb67a4043 100644
--- a/docs/docs/how-tos/Workflows/updating-workflows.md
+++ b/docs/docs/how-tos/Workflows/updating-workflows.md
@@ -1,8 +1,4 @@
----
-sidebar_position: 1
----
-
-# Updating Workflow Definitions
+# Updating Workflows
Workflows can be created or updated using the workflow metadata API
diff --git a/docs/docs/how-tos/Workflows/versioning-workflows.md b/docs/docs/how-tos/Workflows/versioning-workflows.md
new file mode 100644
index 0000000000..e44485176f
--- /dev/null
+++ b/docs/docs/how-tos/Workflows/versioning-workflows.md
@@ -0,0 +1,61 @@
+---
+sidebar_position: 1
+---
+
+# Versioning Workflows
+
+Every workflow has a version number (this number **must** be an integer).
+
+Versioning allows you to run different versions of the same workflow simultaneously.
+
+
+## Summary
+
+> Use Case: A new version of your core workflow will add a capability that is required for *veryImportantCustomer*. However, *otherVeryImportantCustomer* will not be ready to implement this code for another 6 months.
+
+
+## Version 1
+
+```json
+{
+ "name": "Core_workflow",
+ "description": "Very_important_business",
+ "version": 1,
+ "tasks": [
+ {
+
+ }
+ ],
+ "outputParameters": {
+ }
+}
+```
+
+## Version 2
+
+```json
+{
+ "name": "Core_workflow",
+ "description": "Very_important_business",
+ "version": 2,
+ "tasks": [
+ {
+
+ }
+ ],
+ "outputParameters": {
+ }
+}
+```
+
+### Version 2 launch
+Initially, both customers are on version 1 of the workflow.
+
+* *veryImportantCustomer* may begin transitioning traffic onto version 2. Any tasks that remain unfinished on version 1 *stay* on version 1.
+* *otherVeryImportantCustomer* remains on version 1.
+
+
+### 6 months later
+
+* All *veryImportantCustomer* workflows are on version 2.
+* *otherVeryImportantCustomer* may begin transitioning traffic onto version 2. Any tasks that remain unfinished on version 1 *stay* on version 1.
\ No newline at end of file
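+
+A sketch of pinning traffic to a version explicitly via the `version` query parameter of the start-workflow API (the workflow name matches the definitions above):
+
+```shell
+# veryImportantCustomer traffic moves to version 2
+curl -X POST 'http://localhost:8080/api/workflow/Core_workflow?version=2' \
+  -H 'Content-Type: application/json' -d '{}'
+
+# otherVeryImportantCustomer stays on version 1
+curl -X POST 'http://localhost:8080/api/workflow/Core_workflow?version=1' \
+  -H 'Content-Type: application/json' -d '{}'
+```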
diff --git a/docs/docs/how-tos/Workflows/view-workflow-executions.md b/docs/docs/how-tos/Workflows/view-workflow-executions.md
index 9eea307fda..445fc9b291 100644
--- a/docs/docs/how-tos/Workflows/view-workflow-executions.md
+++ b/docs/docs/how-tos/Workflows/view-workflow-executions.md
@@ -11,12 +11,12 @@ In this article we will learn how to view workflow executions via the UI.
1. Conductor app and UI installed and running in an environment. If required we can look at the following options to get
an environment up and running.
- 1. [Build and Run Conductor Locally](/content/docs/getting-started/install/running-locally)
- 2. [Running via Docker Compose](/content/docs/getting-started/install/running-locally-docker)
+ 1. [Build and Run Conductor Locally](/gettingstarted/local.html)
+ 2. [Running via Docker Compose](/gettingstarted/docker.html)
### Viewing a Workflow Execution
-Refer to [Searching Workflows](/content/docs/how-tos/searching-workflows) to filter and find an execution you want to
+Refer to [Searching Workflows](/how-tos/Workflows/searching-workflows.html) to filter and find an execution you want to
view. Click on the workflow id hyperlink to open the Workflow Execution Details page.
The following tabs are available to view the details of the Workflow Execution
diff --git a/docs/docs/how-tos/build-a-nodejs-task-worker.md b/docs/docs/how-tos/build-a-nodejs-task-worker.md
deleted file mode 100644
index e9d66be150..0000000000
--- a/docs/docs/how-tos/build-a-nodejs-task-worker.md
+++ /dev/null
@@ -1,11 +0,0 @@
----
-sidebar_position: 1
----
-
-# Build a Node.js Task Worker
-
-TODO
-
-## Summary
-
-TODO
diff --git a/docs/docs/how-tos/conductor-configurations.md b/docs/docs/how-tos/conductor-configurations.md
deleted file mode 100644
index 313ceca3f7..0000000000
--- a/docs/docs/how-tos/conductor-configurations.md
+++ /dev/null
@@ -1,11 +0,0 @@
----
-sidebar_position: 1
----
-
-# Conductor Configurations
-
-TODO
-
-## Summary
-
-TODO
diff --git a/docs/docs/how-tos/configuring-metrics.md b/docs/docs/how-tos/configuring-metrics.md
deleted file mode 100644
index 9accb72ba0..0000000000
--- a/docs/docs/how-tos/configuring-metrics.md
+++ /dev/null
@@ -1,11 +0,0 @@
----
-sidebar_position: 1
----
-
-# Configuring Metrics
-
-TODO
-
-## Summary
-
-TODO
diff --git a/docs/docs/how-tos/golang-sdk.md b/docs/docs/how-tos/golang-sdk.md
deleted file mode 100644
index d26f46cd98..0000000000
--- a/docs/docs/how-tos/golang-sdk.md
+++ /dev/null
@@ -1,11 +0,0 @@
----
-sidebar_position: 1
----
-
-# Golang SDK
-
-TODO
-
-## Summary
-
-TODO
diff --git a/docs/docs/how-tos/idempotent-tasks.md b/docs/docs/how-tos/idempotent-tasks.md
deleted file mode 100644
index 2c80869996..0000000000
--- a/docs/docs/how-tos/idempotent-tasks.md
+++ /dev/null
@@ -1,11 +0,0 @@
----
-sidebar_position: 1
----
-
-# Idempotency
-
-TODO
-
-## Summary
-
-TODO
diff --git a/docs/docs/how-tos/java-sdk.md b/docs/docs/how-tos/java-sdk.md
deleted file mode 100644
index b4295797db..0000000000
--- a/docs/docs/how-tos/java-sdk.md
+++ /dev/null
@@ -1,11 +0,0 @@
----
-sidebar_position: 1
----
-
-# Java SDK
-
-TODO
-
-## Summary
-
-TODO
diff --git a/docs/docs/how-tos/nodejs-sdk.md b/docs/docs/how-tos/nodejs-sdk.md
deleted file mode 100644
index 3ab27f7d7f..0000000000
--- a/docs/docs/how-tos/nodejs-sdk.md
+++ /dev/null
@@ -1,11 +0,0 @@
----
-sidebar_position: 1
----
-
-# Nodejs SDK
-
-TODO
-
-## Summary
-
-TODO
diff --git a/docs/docs/how-tos/python-sdk.md b/docs/docs/how-tos/python-sdk.md
index bc26d3fbb4..c245dee3e0 100644
--- a/docs/docs/how-tos/python-sdk.md
+++ b/docs/docs/how-tos/python-sdk.md
@@ -4,8 +4,192 @@ sidebar_position: 1
# Python SDK
-TODO
+Software Development Kit for Netflix Conductor, written in and providing support for Python.
-## Summary
+The code for the Python SDK is available on [GitHub](https://github.com/conductor-sdk/conductor-python). Please feel free to file PRs, issues, etc. there.
+
+## Quick Guide
+
+1. Create a virtual environment
+
+ $ virtualenv conductor
+ $ source conductor/bin/activate
+ $ python3 -m pip list
+ Package Version
+ ---------- -------
+ pip 22.0.3
+ setuptools 60.6.0
+ wheel 0.37.1
+
+2. Install the latest version of `conductor-python` from PyPI
+
+ $ python3 -m pip install conductor-python
+ Collecting conductor-python
+ Collecting certifi>=14.05.14
+ Collecting urllib3>=1.15.1
+ Requirement already satisfied: setuptools>=21.0.0 in ./conductor/lib/python3.8/site-packages (from conductor-python) (60.6.0)
+ Collecting six>=1.10
+ Installing collected packages: certifi, urllib3, six, conductor-python
+ Successfully installed certifi-2021.10.8 conductor-python-1.0.7 six-1.16.0 urllib3-1.26.8
+
+3. Create a worker capable of executing a `Task`. Example:
+
+ from conductor.client.worker.worker_interface import WorkerInterface
+
+ class SimplePythonWorker(WorkerInterface):
+ def execute(self, task):
+ task_result = self.get_task_result_from_task(task)
+ task_result.add_output_data('key', 'value')
+ task_result.status = 'COMPLETED'
+ return task_result
+
+
+    * The `add_output_data` call is the most relevant part: it stores information in a dictionary, which will be sent within the `TaskResult` as your execution response to Conductor
+
+4. Create a main method that starts polling for tasks to execute with your workers. Example:
+
+ from conductor.client.automator.task_handler import TaskHandler
+ from conductor.client.configuration.configuration import Configuration
+ from conductor.client.worker.sample.faulty_execution_worker import FaultyExecutionWorker
+ from conductor.client.worker.sample.simple_python_worker import SimplePythonWorker
+
+
+ def main():
+ configuration = Configuration(debug=True)
+ task_definition_name = 'python_example_task'
+ workers = [
+ SimplePythonWorker(task_definition_name),
+ FaultyExecutionWorker(task_definition_name)
+ ]
+ with TaskHandler(workers, configuration) as task_handler:
+ task_handler.start()
+
+
+ if __name__ == '__main__':
+ main()
+
+ * This example contains two workers, each with a different execution method, capable of running the same `task_definition_name`
+
+5. Now that you have implemented the example, you can start the Conductor server locally:
+ 1. Clone [Netflix Conductor repository](https://github.com/Netflix/conductor):
+
+ $ git clone https://github.com/Netflix/conductor.git
+ $ cd conductor/
+
+ 2. Start the Conductor server:
+
+ /conductor$ ./gradlew bootRun
+
+ 3. Start Conductor UI:
+
+ /conductor$ cd ui/
+ /conductor/ui$ yarn install
+ /conductor/ui$ yarn run start
+
+ You should be able to access:
+ * Conductor API:
+ * http://localhost:8080/swagger-ui/index.html
+ * Conductor UI:
+ * http://localhost:5000
+
+6. Create a `Task` within `Conductor`. Example:
+
+ $ curl -X 'POST' \
+ 'http://localhost:8080/api/metadata/taskdefs' \
+ -H 'accept: */*' \
+ -H 'Content-Type: application/json' \
+ -d '[
+ {
+ "name": "python_task_example",
+ "description": "Python task example",
+ "retryCount": 3,
+ "retryLogic": "FIXED",
+ "retryDelaySeconds": 10,
+ "timeoutSeconds": 300,
+ "timeoutPolicy": "TIME_OUT_WF",
+ "responseTimeoutSeconds": 180,
+ "ownerEmail": "example@example.com"
+ }
+ ]'
+
+7. Create a `Workflow` within `Conductor`. Example:
+
+ $ curl -X 'POST' \
+ 'http://localhost:8080/api/metadata/workflow' \
+ -H 'accept: */*' \
+ -H 'Content-Type: application/json' \
+ -d '{
+ "createTime": 1634021619147,
+ "updateTime": 1630694890267,
+ "name": "workflow_with_python_task_example",
+ "description": "Workflow with Python Task example",
+ "version": 1,
+ "tasks": [
+ {
+ "name": "python_task_example",
+ "taskReferenceName": "python_task_example_ref_1",
+ "inputParameters": {},
+ "type": "SIMPLE"
+ }
+ ],
+ "inputParameters": [],
+ "outputParameters": {
+ "workerOutput": "${python_task_example_ref_1.output}"
+ },
+ "schemaVersion": 2,
+ "restartable": true,
+ "ownerEmail": "example@example.com",
+ "timeoutPolicy": "ALERT_ONLY",
+ "timeoutSeconds": 0
+ }'
+
+8. Start a new workflow:
+
+ $ curl -X 'POST' \
+ 'http://localhost:8080/api/workflow/workflow_with_python_task_example' \
+ -H 'accept: text/plain' \
+ -H 'Content-Type: application/json' \
+ -d '{}'
+
+   You should receive a *Workflow ID* in the *Response body*.
+   * *Workflow ID* example: `8ff0bc06-4413-4c94-b27a-b3210412a914`
+
+   You should now be able to see its execution in the UI.
+   * Example: `http://localhost:5000/execution/8ff0bc06-4413-4c94-b27a-b3210412a914`
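+
+   You can also check the execution status from the API (a sketch; `includeTasks` is optional and defaults to true):
+
+       $ curl 'http://localhost:8080/api/workflow/8ff0bc06-4413-4c94-b27a-b3210412a914?includeTasks=false'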
+
+9. Run your Python file with the `main` method to start polling and executing tasks
+
+### Unit Tests
+
+#### Simple validation
+
+```shell
+/conductor-python/src$ python3 -m unittest -v
+test_execute_task (tst.automator.test_task_runner.TestTaskRunner) ... ok
+test_execute_task_with_faulty_execution_worker (tst.automator.test_task_runner.TestTaskRunner) ... ok
+test_execute_task_with_invalid_task (tst.automator.test_task_runner.TestTaskRunner) ... ok
+
+----------------------------------------------------------------------
+Ran 3 tests in 0.001s
+
+OK
+```
+
+#### Run with code coverage
+
+```shell
+/conductor-python/src$ python3 -m coverage run --source=conductor/ -m unittest
+```
+
+Report:
+
+```shell
+/conductor-python/src$ python3 -m coverage report
+```
+
+Visual coverage results:
+
+```shell
+/conductor-python/src$ python3 -m coverage html
+```
-TODO
diff --git a/docs/docs/how-tos/retry-configurations.md b/docs/docs/how-tos/retry-configurations.md
deleted file mode 100644
index 2ed5b93ce5..0000000000
--- a/docs/docs/how-tos/retry-configurations.md
+++ /dev/null
@@ -1,11 +0,0 @@
----
-sidebar_position: 1
----
-
-# Retry Configurations
-
-TODO
-
-## Summary
-
-TODO
diff --git a/docs/docs/how-tos/scaling-the-system.md b/docs/docs/how-tos/scaling-the-system.md
deleted file mode 100644
index a15275c8a8..0000000000
--- a/docs/docs/how-tos/scaling-the-system.md
+++ /dev/null
@@ -1,11 +0,0 @@
----
-sidebar_position: 1
----
-
-# Scaling the System
-
-TODO
-
-## Summary
-
-TODO
diff --git a/docs/docs/how-tos/timeouts.md b/docs/docs/how-tos/timeouts.md
deleted file mode 100644
index 6c455a434f..0000000000
--- a/docs/docs/how-tos/timeouts.md
+++ /dev/null
@@ -1,11 +0,0 @@
----
-sidebar_position: 1
----
-
-# Timeouts
-
-TODO
-
-## Summary
-
-TODO
diff --git a/docs/docs/how-tos/versioning-workflows.md b/docs/docs/how-tos/versioning-workflows.md
deleted file mode 100644
index c41f3f2f2a..0000000000
--- a/docs/docs/how-tos/versioning-workflows.md
+++ /dev/null
@@ -1,11 +0,0 @@
----
-sidebar_position: 1
----
-
-# Versioning Workflows
-
-TODO
-
-## Summary
-
-TODO
diff --git a/docs/docs/img/conductor-vector-x.png b/docs/docs/img/conductor-vector-x.png
deleted file mode 100644
index 5c05ecbd9d..0000000000
Binary files a/docs/docs/img/conductor-vector-x.png and /dev/null differ
diff --git a/docs/docs/img/conductor-vector.pdf b/docs/docs/img/conductor-vector.pdf
deleted file mode 100644
index 0c27ed6b7b..0000000000
Binary files a/docs/docs/img/conductor-vector.pdf and /dev/null differ
diff --git a/docs/docs/img/corner-logo.png b/docs/docs/img/corner-logo.png
deleted file mode 100644
index d0e8275cd3..0000000000
Binary files a/docs/docs/img/corner-logo.png and /dev/null differ
diff --git a/docs/docs/img/corner-logo2-oss.png b/docs/docs/img/corner-logo2-oss.png
deleted file mode 100644
index a1250561d8..0000000000
Binary files a/docs/docs/img/corner-logo2-oss.png and /dev/null differ
diff --git a/docs/docs/img/corner-logo2.png b/docs/docs/img/corner-logo2.png
deleted file mode 100644
index 4e95168de9..0000000000
Binary files a/docs/docs/img/corner-logo2.png and /dev/null differ
diff --git a/docs/docs/img/dag_workflow.png b/docs/docs/img/dag_workflow.png
new file mode 100644
index 0000000000..5e231e62f0
Binary files /dev/null and b/docs/docs/img/dag_workflow.png differ
diff --git a/docs/docs/img/dag_workflow2.png b/docs/docs/img/dag_workflow2.png
new file mode 100644
index 0000000000..fd547b209b
Binary files /dev/null and b/docs/docs/img/dag_workflow2.png differ
diff --git a/docs/docs/img/directed_graph.png b/docs/docs/img/directed_graph.png
new file mode 100644
index 0000000000..103189a675
Binary files /dev/null and b/docs/docs/img/directed_graph.png differ
diff --git a/docs/docs/img/favicon.svg b/docs/docs/img/favicon.svg
new file mode 100644
index 0000000000..1cd90c0ca4
--- /dev/null
+++ b/docs/docs/img/favicon.svg
@@ -0,0 +1,52 @@
+
+
diff --git a/docs/docs/img/icons/brackets.svg b/docs/docs/img/icons/brackets.svg
new file mode 100644
index 0000000000..606a48db30
--- /dev/null
+++ b/docs/docs/img/icons/brackets.svg
@@ -0,0 +1,3 @@
+
diff --git a/docs/docs/img/icons/modular.svg b/docs/docs/img/icons/modular.svg
new file mode 100644
index 0000000000..e8e3934961
--- /dev/null
+++ b/docs/docs/img/icons/modular.svg
@@ -0,0 +1,50 @@
+
+
diff --git a/docs/docs/img/icons/network.svg b/docs/docs/img/icons/network.svg
new file mode 100644
index 0000000000..7360cb36e6
--- /dev/null
+++ b/docs/docs/img/icons/network.svg
@@ -0,0 +1 @@
+
\ No newline at end of file
diff --git a/docs/docs/img/icons/osi.svg b/docs/docs/img/icons/osi.svg
new file mode 100644
index 0000000000..3b14c8b030
--- /dev/null
+++ b/docs/docs/img/icons/osi.svg
@@ -0,0 +1,38 @@
+
+
diff --git a/docs/docs/img/icons/server.svg b/docs/docs/img/icons/server.svg
new file mode 100644
index 0000000000..b480e75992
--- /dev/null
+++ b/docs/docs/img/icons/server.svg
@@ -0,0 +1 @@
+
\ No newline at end of file
diff --git a/docs/docs/img/icons/shield.svg b/docs/docs/img/icons/shield.svg
new file mode 100644
index 0000000000..4cb8af5e45
--- /dev/null
+++ b/docs/docs/img/icons/shield.svg
@@ -0,0 +1 @@
+
\ No newline at end of file
diff --git a/docs/docs/img/icons/wrench.svg b/docs/docs/img/icons/wrench.svg
new file mode 100644
index 0000000000..42d6543004
--- /dev/null
+++ b/docs/docs/img/icons/wrench.svg
@@ -0,0 +1 @@
+
\ No newline at end of file
diff --git a/docs/docs/img/logo.png b/docs/docs/img/logo.png
new file mode 100644
index 0000000000..132d52cde4
Binary files /dev/null and b/docs/docs/img/logo.png differ
diff --git a/docs/docs/img/logo.svg b/docs/docs/img/logo.svg
new file mode 100644
index 0000000000..57feb5b0fb
--- /dev/null
+++ b/docs/docs/img/logo.svg
@@ -0,0 +1,93 @@
+
+
diff --git a/docs/docs/img/logo_dark_background.png b/docs/docs/img/logo_dark_background.png
new file mode 100644
index 0000000000..013020f12b
Binary files /dev/null and b/docs/docs/img/logo_dark_background.png differ
diff --git a/docs/docs/img/corner-logo-oss.png b/docs/docs/img/netflix-oss.png
similarity index 100%
rename from docs/docs/img/corner-logo-oss.png
rename to docs/docs/img/netflix-oss.png
diff --git a/docs/docs/img/netflix.png b/docs/docs/img/netflix.png
new file mode 100755
index 0000000000..151775b3b1
Binary files /dev/null and b/docs/docs/img/netflix.png differ
diff --git a/docs/docs/img/pirate_graph.gif b/docs/docs/img/pirate_graph.gif
new file mode 100644
index 0000000000..41cbec8cb6
Binary files /dev/null and b/docs/docs/img/pirate_graph.gif differ
diff --git a/docs/docs/img/regular_graph.png b/docs/docs/img/regular_graph.png
new file mode 100644
index 0000000000..4d48b504a2
Binary files /dev/null and b/docs/docs/img/regular_graph.png differ
diff --git a/docs/docs/img/timeline.png b/docs/docs/img/timeline.png
new file mode 100644
index 0000000000..ed092b5a01
Binary files /dev/null and b/docs/docs/img/timeline.png differ
diff --git a/docs/docs/img/workflow.svg b/docs/docs/img/workflow.svg
new file mode 100644
index 0000000000..e00e8928dc
--- /dev/null
+++ b/docs/docs/img/workflow.svg
@@ -0,0 +1,615 @@
+
+
+
+
diff --git a/docs/docs/index.md b/docs/docs/index.md
index 361aa076c2..d7cf5081d8 100644
--- a/docs/docs/index.md
+++ b/docs/docs/index.md
@@ -1,35 +1,153 @@
-
Conductor is a _Workflow Orchestration engine_ that runs in the cloud.
+
+
+
+
+ Open Source
+
+
+ Apache-2.0 license for commercial and non-commercial use. Freedom to deploy, modify and contribute back.
+
+
+
+
+ Modular
+
+
+ A fully abstracted backend enables you to choose your own database persistence layer and queueing service.
+
+
+
+
+ Proven
+
+
+ Enterprise-ready, Java Spring-based platform that has been battle-tested in production systems at Netflix and elsewhere.
+
+
+
+
+
+
+
+
+ Control
+
+
+ Powerful flow control constructs including Decisions, Dynamic Fork-Joins and Subworkflows. Variables and templates are supported.
+
+
+
+
+ Polyglot
+
+
+ Client libraries in multiple languages allow workers to be implemented in Java, Node JS, Python and C#.
+
+
+
+
+ Scalable
+
+
+ Distributed architecture for both orchestrator and workers scalable from a single workflow to millions of concurrent processes.
+
+
+
+
-## Motivation
+
+
+
+
+ Developer Experience
+
+
+
+
Discover and visualize the process flows from the bundled UI
+
Integrated interface to create, refine and validate workflows
+
JSON based workflow definition DSL
+
Full featured API for custom automation
+
+
+
+
+
+
+
+
-## Conductor was built to help Netflix orchestrate microservices based process flows with the following features:
+
+
+
+
+ Observability
+
+
+
+
Understand, debug and iterate on task and workflow executions.
+
Fine grain operational control over workflows with the ability to pause, resume, restart, retry and terminate
+
+
+
+
+
+
+
+
-* A distributed server ecosystem, which stores workflow state information efficiently.
-* Allow creation of process / business flows in which each individual task can be implemented by the same / different microservices.
-* A DAG (Directed Acyclic Graph) based workflow definition.
-* Workflow definitions are decoupled from the service implementations.
-* Provide visibility and traceability into these process flows.
-* Simple interface to connect workers, which execute the tasks in workflows.
-* Workers are language agnostic, allowing each microservice to be written in the language most suited for the service.
-* Full operational control over workflows with the ability to pause, resume, restart, retry and terminate.
-* Allow greater reuse of existing microservices providing an easier path for onboarding.
-* User interface to visualize, replay and search the process flows.
-* Ability to scale to millions of concurrently running process flows.
-* Backed by a queuing service abstracted from the clients.
-* Be able to operate on HTTP or other transports e.g. gRPC.
-* Event handlers to control workflows via external actions.
-* Client implementations in Java, Python and other languages.
-* Various configurable properties with sensible defaults to fine tune workflow and task executions like rate limiting, concurrent execution limits etc.
-## Why not peer to peer choreography?
-
-With peer to peer task choreography, we found it was harder to scale with growing business needs and complexities.
-Pub/sub model worked for simplest of the flows, but quickly highlighted some of the issues associated with the approach:
-
-* Process flows are “embedded” within the code of multiple application.
-* Often, there is tight coupling and assumptions around input/output, SLAs etc, making it harder to adapt to changing needs.
-* Almost no way to systematically answer “How much are we done with process X”?
+
+
+
+
+
Why Conductor?
+
+
+
+
+
+
+ Service Orchestration
+
+
+
Workflow definitions are decoupled from task implementations. This allows the creation of process flows in which each individual task can be implemented
+ by an encapsulated microservice.
+
Desiging a workflow orchestrator that is resilient and horizontally scalable is not a simple problem. At Netflix we have developed a solution in Conductor.
+
+
+
+
+
+
+ Service Choreography
+
+
+ Process flows are implicitly defined across multiple service implementations, often with
+ tight peer-to-peer coupling between services. Multiple event buses and complex
+ pub/sub models limit observability around process progress and capacity.
+
+
+
+
+
+
diff --git a/docs/docs/labs/beginner.md b/docs/docs/labs/beginner.md
index a53282f777..3953cd525f 100644
--- a/docs/docs/labs/beginner.md
+++ b/docs/docs/labs/beginner.md
@@ -1,16 +1,16 @@
+# Beginner Lab
## Hands on mode
Please feel free to follow along using any of these resources:
-- Using cURL.
-- Postman or similar REST client.
+- Using cURL
+- Postman or similar REST client
## Creating a Workflow
Let's create a simple workflow that adds Netflix Idents to videos. We'll be mocking the adding Idents part and focusing on actually executing this process flow.
!!!info "What are Netflix Idents?"
- Netflix Idents are those 4 second videos with Netflix logo, which appears at the beginning and end of shows.
- Learn more about them [here](https://partnerhelp.netflixstudios.com/hc/en-us/articles/115004750187-Master-QC-Identifying-and-Implementing-the-Netflix-Ident-). You might have also noticed they're different for Animation and several other genres.
+    Netflix Idents are those 4-second videos with the Netflix logo that appear at the beginning and end of shows. You might have also noticed they're different for Animation and several other genres.
!!!warning "Disclaimer"
Obviously, this is not how Netflix adds Idents. Those Workflows are indeed very complex. But, it should give you an idea about how Conductor can be used to implement similar features.
@@ -22,12 +22,17 @@ The workflow in this lab will look like this:
This workflow contains the following:
* Worker Task `verify_if_idents_are_added` to verify if Idents are already added.
-* [Decision Task](../configuration/systask#decision) that takes output from the previous task, and decides whether to schedule the `add_idents` task.
+
+* [Switch Task](/reference-docs/switch-task.html) that takes output from the previous task, and decides whether to schedule the `add_idents` task.
+
* `add_idents` task which is another worker Task.
### Creating Task definitions
-Let's create the [task definition](../configuration/taskdef) for `verify_if_idents_are_added` in JSON. This task will be a *SIMPLE* task which is supposed to be executed by an Idents microservice. We'll be mocking the Idents microservice part.
+
+Let's create the [task definition](/configuration/taskdef.html) for `verify_if_idents_are_added` in JSON. This task will be a *SIMPLE* task which is supposed to be executed by an Idents microservice. We'll be mocking the Idents microservice part.
+
+
**Note** that at this point, we don't have to specify whether it is a System task or Worker task. We are only specifying the required configurations for the task, like number of times it should be retried, timeouts etc. We shall start by using `name` parameter for task name.
```json
@@ -61,7 +66,7 @@ i.e. if the task doesn't finish execution within this time limit after transitio
}
```
-And a [responseTimeout](/tasklifecycle/#response-timeout-seconds) of 180 seconds.
+And a [responseTimeout](/architecture/tasklifecycle.html#response-timeout-seconds) of 180 seconds.
```json
{
@@ -75,7 +80,9 @@ And a [responseTimeout](/tasklifecycle/#response-timeout-seconds) of 180 seconds
}
```
-We can define several other fields defined [here](../configuration/taskdef), but this is a good place to start with.
+
+We can define several other fields defined [here](/configuration/taskdef.html), but this is a good place to start with.
+
Similarly, create another task definition: `add_idents`.
@@ -93,7 +100,7 @@ Similarly, create another task definition: `add_idents`.
Send a `POST` request to `/metadata/taskdefs` endpoint to register these tasks. You can use Swagger, Postman, CURL or similar tools.
-!!!info "Why is the Decision Task not registered?"
+!!!info "Why is the Switch Task not registered?"
System Tasks that are part of control flow do not need to be registered. However, some system tasks where the retries, rate limiting and other mechanisms are required, like `HTTP` Task, are to be registered though.
!!! Important
@@ -131,7 +138,7 @@ curl -X POST \
### Creating Workflow Definition
-Creating Workflow definition is almost similar. We shall use the Task definitions created above. Note that same Task definitions can be used in multiple workflows, or for multipe times in same Workflow (that's where `taskReferenceName` is useful).
+Creating the Workflow definition is quite similar. We shall use the Task definitions created above. Note that the same Task definition can be used in multiple workflows, or multiple times in the same Workflow (that's where `taskReferenceName` is useful).
A workflow without any tasks looks like this:
```json
@@ -170,13 +177,21 @@ Add the first task that this workflow has to execute. All the tasks must be adde
Notice how we were using `${workflow.input.contentId}` to pass inputs to this task. Conductor can wire inputs between workflow and tasks, and between tasks.
i.e The task `verify_if_idents_are_added` is wired to accept inputs from the workflow input using JSONPath expression `${workflow.input.param}`.
-Learn more about wiring inputs and outputs [here](../configuration/workflowdef#wiring-inputs-and-outputs).
-Let's define `decisionCases` now. Checkout the Decision task structure [here](../configuration/systask#decision).
+Learn more about wiring inputs and outputs [here](/configuration/workflowdef.html#wiring-inputs-and-outputs).
+
+Let's define `decisionCases` now.
+
+
+>Note: in earlier versions of this tutorial, the "decision" task was used. This has been deprecated.
+
+Check out the Switch task structure [here](/reference-docs/switch-task.html).
+
+A Switch task is specified by the `evaluatorType`, `expression` (the expression that defines the Switch) and `decisionCases`, which lists all the branches of the Switch task.
-A Decision task is specified by `type:"DECISION"`, `caseValueParam` and `decisionCases` which lists all the branches of Decision task. This is similar to a `switch..case` but written in Conductor JSON DSL.
+In this case, we'll use `"evaluatorType": "value-param"`, meaning that we'll just use the input value to make the decision. Alternatively, there is an `"evaluatorType": "javascript"` that can be used for more complicated evaluations.
-Adding the decision task:
+Adding the switch task (without any decision cases):
```json
{
"name": "add_netflix_identation",
@@ -193,13 +208,14 @@ Adding the decision task:
"type": "SIMPLE"
},
{
- "name": "decide_task",
+ "name": "switch_task",
"taskReferenceName": "is_idents_added",
"inputParameters": {
"case_value_param": "${ident_verification.output.is_idents_added}"
},
- "type": "DECISION",
- "caseValueParam": "case_value_param",
+ "type": "SWITCH",
+ "evaluatorType": "value-param",
+ "expression": "case_value_param",
"decisionCases": {
}
@@ -208,7 +224,7 @@ Adding the decision task:
}
```
-Each decision branch could have multiple tasks, so it has to be defined as an array.
+Each switch case can contain multiple tasks, so it has to be defined as an array.
```json
{
"name": "add_netflix_identation",
@@ -225,13 +241,14 @@ Each decision branch could have multiple tasks, so it has to be defined as an ar
"type": "SIMPLE"
},
{
- "name": "decide_task",
+ "name": "switch_task",
"taskReferenceName": "is_idents_added",
"inputParameters": {
"case_value_param": "${ident_verification.output.is_idents_added}"
},
- "type": "DECISION",
- "caseValueParam": "case_value_param",
+ "type": "SWITCH",
+ "evaluatorType": "value-param",
+ "expression": "case_value_param",
"decisionCases": {
"false": [
{
@@ -262,7 +279,6 @@ curl -X POST \
"description": "Adds Netflix Identation to video files.",
"version": 2,
"schemaVersion": 2,
- "ownerEmail": "type your email here",
"tasks": [
{
"name": "verify_if_idents_are_added",
@@ -273,13 +289,14 @@ curl -X POST \
"type": "SIMPLE"
},
{
- "name": "decide_task",
+ "name": "switch_task",
"taskReferenceName": "is_idents_added",
"inputParameters": {
"case_value_param": "${ident_verification.output.is_idents_added}"
},
- "type": "DECISION",
- "caseValueParam": "case_value_param",
+ "type": "SWITCH",
+ "evaluatorType": "value-param",
+ "expression": "case_value_param",
"decisionCases": {
"false": [
{
@@ -343,7 +360,7 @@ Feel free to explore the various functionalities that the UI exposes. To elabora
Now that `verify_if_idents_are_added` task is in `SCHEDULED` state, it is the worker's turn to fetch the task, execute it and update Conductor with final status of the task.
-Ideally, the workers implementing the [Client](../gettingstarted/client#worker) interface would do this process, executing the tasks on real microservices. But, let's mock this part.
+Ideally, the workers implementing the [Client](/gettingstarted/client.html#worker) interface would do this process, executing the tasks on real microservices. But, let's mock this part.
Send a `GET` request to `/poll` endpoint with your task type.
diff --git a/docs/docs/labs/eventhandlers.md b/docs/docs/labs/eventhandlers.md
index afe224d334..304795af49 100644
--- a/docs/docs/labs/eventhandlers.md
+++ b/docs/docs/labs/eventhandlers.md
@@ -1,3 +1,4 @@
+# Events and Event Handlers
## About
In this Lab, we shall:
@@ -9,8 +10,8 @@ In this Lab, we shall:
Conductor Supports Eventing with two Interfaces:
-* [Event Task](../../configuration/systask#event)
-* [Event Handlers](../../configuration/eventhandlers#event-handler)
+* [Event Task](/configuration/systask.html#event)
+* [Event Handlers](/configuration/eventhandlers.html#event-handler)
We shall create a simple cyclic workflow similar to this:
@@ -114,7 +115,7 @@ Event Handler can perform a list of actions defined in `actions` array parameter
}
```
-Let's define `start_workflow` action. We shall pass the name of workflow we would like to start. The `start_workflow` parameter can use any of the values from the general [Start Workflow Request](../../gettingstarted/startworkflow/). Here we are passing in the workflowId, so that the Complete Task Event Handler can use it.
+Let's define the `start_workflow` action. We shall pass the name of the workflow we would like to start. The `start_workflow` parameter can use any of the values from the general [Start Workflow Request](/gettingstarted/startworkflow.html). Here we are passing in the workflowId, so that the Complete Task Event Handler can use it.
```json
{
diff --git a/docs/docs/labs/img/bgnr_complete_workflow.png b/docs/docs/labs/img/bgnr_complete_workflow.png
index 7cfbb29ec7..0cf16491b1 100644
Binary files a/docs/docs/labs/img/bgnr_complete_workflow.png and b/docs/docs/labs/img/bgnr_complete_workflow.png differ
diff --git a/docs/docs/labs/img/bgnr_state_scheduled.png b/docs/docs/labs/img/bgnr_state_scheduled.png
index e5fe88eea6..4559b77166 100644
Binary files a/docs/docs/labs/img/bgnr_state_scheduled.png and b/docs/docs/labs/img/bgnr_state_scheduled.png differ
diff --git a/docs/docs/labs/img/bgnr_systask_state.png b/docs/docs/labs/img/bgnr_systask_state.png
index 035eb24dc5..977cc56de3 100644
Binary files a/docs/docs/labs/img/bgnr_systask_state.png and b/docs/docs/labs/img/bgnr_systask_state.png differ
diff --git a/docs/docs/labs/kitchensink.md b/docs/docs/labs/kitchensink.md
index 9762ebf050..d3f7f403fa 100644
--- a/docs/docs/labs/kitchensink.md
+++ b/docs/docs/labs/kitchensink.md
@@ -1,6 +1,7 @@
+# Kitchen Sink
An example kitchensink workflow that demonstrates the usage of all the schema constructs.
-###Definition
+### Definition
```json
{
@@ -162,10 +163,10 @@ An example kitchensink workflow that demonstrates the usage of all the schema co
}
```
### Visual Flow
-![img](../img/kitchensink.png)
+![img](/img/kitchensink.png)
### Running Kitchensink Workflow
-1. Start the server as documented [here](/server). Use ```-DloadSample=true``` java system property when launching the server. This will create a kitchensink workflow, related task definitions and kick off an instance of kitchensink workflow.
+1. Start the server as documented [here](/gettingstarted/docker.html). Use ```-DloadSample=true``` java system property when launching the server. This will create a kitchensink workflow, related task definitions and kick off an instance of kitchensink workflow.
2. Once the workflow has started, the first task remains in the ```SCHEDULED``` state. This is because no workers are currently polling for the task.
3. We will use the REST endpoints directly to poll for tasks and updating the status.
@@ -187,7 +188,7 @@ The response is a text string identifying the workflow instance id.
curl http://localhost:8080/api/tasks/poll/task_1
```
- The response should look something like:
+The response should look something like:
```json
{
@@ -253,5 +254,6 @@ curl -H 'Content-Type:application/json' -H 'Accept:application/json' -X POST htt
}
}'
```
+
This will mark the task_1 as completed and schedule ```task_5``` as the next task.
Repeat the same process for the subsequently scheduled tasks until the completion.
diff --git a/docs/docs/labs/running-first-worker.md b/docs/docs/labs/running-first-worker.md
deleted file mode 100644
index 3cdb16495a..0000000000
--- a/docs/docs/labs/running-first-worker.md
+++ /dev/null
@@ -1,243 +0,0 @@
-# Running First Worker
-
-In this article we will explore how you can get your first worker task running.
-
-We are hosting the code used in this article in the following location. You can clone and use it as a reference
-locally.
-
-#### https://github.com/orkes-io/orkesworkers
-
-In the previous article, you used an `HTTP` task run your first simple workflow. Now it's time to explore how to run a
-custom worker that you will implement yourself.
-
-After completing the steps in this article, you will:
-
-1. Learn about a SIMPLE worker type which runs your custom code
-2. Learn about how a custom worker task runs from your environment and connects to Conductor
-
-Worker tasks are implemented by your application(s) and runs in a separate environment from Conductor. The worker tasks
-can be implemented in any language. These tasks talk to Conductor server via REST/gRPC to poll for tasks and update its
-status after execution. In our example we will be implementing a Java based worker by leveraging the official Java
-Client SDK.
-
-Worker tasks are identified by task type `SIMPLE` in the workflow JSON definition.
-
-### Step 1 - Register the Worker Task
-
-First let's create task definition for "simple_worker". Send a POST request to `/metadata/taskdefs` API endpoint on your
-conductor server to register these tasks.
-
-```json
-[
- {
- "name": "simple_worker",
- "retryCount": 3,
- "retryLogic": "FIXED",
- "retryDelaySeconds": 10,
- "timeoutSeconds": 300,
- "timeoutPolicy": "TIME_OUT_WF",
- "responseTimeoutSeconds": 180,
- "ownerEmail": "example@gmail.com"
- }
-]
-```
-
-Here is the curl command to do that
-
-```shell
-curl 'http://localhost:8080/api/metadata/taskdefs' \
- -H 'accept: */*' \
- -H 'Referer: ' \
- -H 'Content-Type: application/json' \
- --data-raw '[{"name":"simple_worker","retryCount":3,"retryLogic":"FIXED","retryDelaySeconds":10,"timeoutSeconds":300,"timeoutPolicy":"TIME_OUT_WF","responseTimeoutSeconds":180,"ownerEmail":"example@gmail.com"}]'
-```
-
-You can also use the Conductor Swagger API UI to make this call.
-
-Here is an overview of the task fields that we just created
-
-1. `"name"` : Name of your worker. This should be unique.
-2. `"retryCount"` : The number of times Conductor should retry your worker task in the event of an unexpected failure
-3. `"retryLogic"` : `FIXED` - The retry logic - options are `FIXED` and `EXPONENTIAL_BACKOFF`
-4. `"retryDelaySeconds"` : Time to wait before retries
-5. `"timeoutSeconds"` : Time in seconds, after which the task is marked as `TIMED_OUT` if not completed after
- transitioning to `IN_PROGRESS` status for the first time
-6. `"timeoutPolicy"` : `TIME_OUT_WF` - Task's timeout policy. Options can be
- found [here](https://netflix.github.io/conductor/configuration/taskdef/#timeout-policy)
-7. `"responseTimeoutSeconds"` : Must be greater than 0 and less than timeoutSeconds. The task is rescheduled if not
- updated with a status after this time (heartbeat mechanism). Useful when the worker polls for the task but fails to
- complete due to errors/network failure. Defaults to 3600
-8. `"ownerEmail"` : **Mandatory** metadata to manage who created or owns this worker definition in a shared conductor
- environment.
-
-More details on the fields used and the remaining fields in the task definition can be
-found [here](https://netflix.github.io/conductor/configuration/taskdef/#task-definition).
-
-### Step 2 - Create a Workflow definition
-
-Creating a workflow definition is similar to creating a task definition. In our workflow, we will use the task we
-defined earlier. Note that the same task definition can be used in multiple workflows, or multiple times in the same
-workflow (that's where `taskReferenceName` is useful).
-
-```json
-{
- "createTime": 1634021619147,
- "updateTime": 1630694890267,
- "name": "first_sample_workflow_with_worker",
- "description": "First Sample Workflow With Worker",
- "version": 1,
- "tasks": [
- {
- "name": "simple_worker",
- "taskReferenceName": "simple_worker_ref_1",
- "inputParameters": {},
- "type": "SIMPLE"
- }
- ],
- "inputParameters": [],
- "outputParameters": {
- "currentTimeOnServer": "${simple_worker_ref_1.output.currentTimeOnServer}",
- "message": "${simple_worker_ref_1.output.message}"
- },
- "schemaVersion": 2,
- "restartable": true,
- "ownerEmail": "example@email.com",
- "timeoutPolicy": "ALERT_ONLY",
- "timeoutSeconds": 0
-}
-```
-
-Notice that the workflow definition contains a single worker task, referencing the task definition we created
-earlier. The task is of type `SIMPLE`.
-
-To create this workflow in your Conductor server using curl, use the following:
-
-```shell
-curl 'http://localhost:8080/api/metadata/workflow' \
-  -H 'accept: */*' \
-  -H 'Referer: ' \
-  -H 'Content-Type: application/json' \
-  --data-raw '{"createTime":1634021619147,"updateTime":1630694890267,"name":"first_sample_workflow_with_worker","description":"First Sample Workflow With Worker","version":1,"tasks":[{"name":"simple_worker","taskReferenceName":"simple_worker_ref_1","inputParameters":{},"type":"SIMPLE"}],"inputParameters":[],"outputParameters":{"currentTimeOnServer":"${simple_worker_ref_1.output.currentTimeOnServer}","message":"${simple_worker_ref_1.output.message}"},"schemaVersion":2,"restartable":true,"ownerEmail":"example@email.com","timeoutPolicy":"ALERT_ONLY","timeoutSeconds":0}'
-```
-
-### Step 3 - Starting the Workflow
-
-This workflow doesn't need any inputs, so we can issue a curl command to start it:
-
-```shell
-curl 'http://localhost:8080/api/workflow/first_sample_workflow_with_worker' \
- -H 'accept: text/plain' \
- -H 'Referer: ' \
- -H 'Content-Type: application/json' \
- --data-raw '{}'
-```
-
-The API path contains the workflow name `first_sample_workflow_with_worker`, and since our workflow doesn't need any
-inputs, we specify `{}` as the request body.
-
-A successful POST request should return a workflow id, which you can use to find the execution in the UI by navigating to `http://localhost:5000/execution/`.
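-
-If you prefer to start the workflow from Java instead of curl, here is a minimal sketch using the Java client SDK
-(assuming the same local server; the class name is illustrative):
-
-```java
-import java.util.HashMap;
-
-import com.netflix.conductor.client.http.WorkflowClient;
-import com.netflix.conductor.common.metadata.workflow.StartWorkflowRequest;
-
-public class StartWorkflowExample {
-
-    public static void main(String[] args) {
-        // Point the client at the Conductor server's API root
-        WorkflowClient workflowClient = new WorkflowClient();
-        workflowClient.setRootURI("http://localhost:8080/api/");
-
-        StartWorkflowRequest request = new StartWorkflowRequest();
-        request.setName("first_sample_workflow_with_worker");
-        request.setVersion(1);
-        request.setInput(new HashMap<>()); // this workflow takes no inputs
-
-        String workflowId = workflowClient.startWorkflow(request);
-        System.out.println("Started workflow: " + workflowId);
-    }
-}
-```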
-
-*Note: You can also run this using the Swagger UI instead of curl.*
-
-### Step 4 - Poll for Worker Task
-
-To get your worker's taskId, first navigate to `http://localhost:5000/execution/`. Next, click on `simple_worker_ref_1` in the UI. This will open a summary pane with the `Task Execution ID`.
-
-If you look up the task using the URL `http://localhost:8080/api/tasks/{taskId}`, you will notice that it is in the `SCHEDULED` state. This is
-because we haven't implemented the worker yet. Let's walk through the steps to implement the worker.
-
-#### Prerequisite
-
-1. Set up a Java project locally. You can also use an existing Java project if you already have one.
-
-#### Adding worker implementation
-
-In your project, add the following dependencies. Here is how to do this in Gradle:
-
-```groovy
-implementation("com.netflix.conductor:conductor-client:${versions.conductor}") {
-    exclude group: 'com.github.vmg.protogen', module: 'protogen-annotations'
-}
-
-implementation("com.netflix.conductor:conductor-common:${versions.conductor}") {
-    exclude group: 'com.github.vmg.protogen', module: 'protogen-annotations'
-}
-```
-
-[See full example on GitHub](https://github.com/orkes-io/orkesworkers/blob/main/build.gradle)
-
-You can do this with Maven as well; just use the equivalent syntax. We will need the following two libraries, which are
-available in the Maven repository; you can use a later version if required.
-
-1. `com.netflix.conductor:conductor-client:3.0.6`
-2. `com.netflix.conductor:conductor-common:3.0.6`
-
-Now "simple_worker" task is in `SCHEDULED` state, it is worker's turn to fetch the task, execute it and update Conductor
-with final status of the task.
-
-A class needs to be created that implements the `Worker` interface and defines its methods.
-
-**Note**: Make sure the method `getTaskDefName` returns the same string as the task name we defined in step
-1 (`simple_worker`).
-
-```js reference
-https://github.com/orkes-io/orkesworkers/blob/main/src/main/java/io/orkes/samples/workers/SimpleWorker.java#L11-L30
-```
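-
-If you can't open the link, here is a minimal sketch of what such a worker can look like (the actual sample in the
-repository may differ slightly):
-
-```java
-import com.netflix.conductor.client.worker.Worker;
-import com.netflix.conductor.common.metadata.tasks.Task;
-import com.netflix.conductor.common.metadata.tasks.TaskResult;
-
-public class SimpleWorker implements Worker {
-
-    @Override
-    public String getTaskDefName() {
-        // Must match the task name registered in Step 1
-        return "simple_worker";
-    }
-
-    @Override
-    public TaskResult execute(Task task) {
-        TaskResult result = new TaskResult(task);
-        // These outputs are referenced by the workflow's outputParameters
-        result.addOutputData("currentTimeOnServer", System.currentTimeMillis());
-        result.addOutputData("message", "Hello World!");
-        result.setStatus(TaskResult.Status.COMPLETED);
-        return result;
-    }
-}
-```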
-
-Awesome - you have implemented your first worker in code!
-
-#### Connecting, Polling and Executing your Worker
-
-In your main method, or wherever your application starts, you will need to configure and initialize a class
-called `TaskRunnerConfigurer`. This is the step that makes your code connect to a Conductor server.
-
-Here we have used a Spring Boot based Java application, and we are showing you how to create a bean for this class.
-
-```js reference
-https://github.com/orkes-io/orkesworkers/blob/main/src/main/java/io/orkes/samples/OrkesWorkersApplication.java#L18-L45
-```
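-
-For reference, here is a minimal, non-Spring sketch of the same wiring (the thread count and worker list are
-illustrative; the bean in the repository may differ):
-
-```java
-import java.util.List;
-
-import com.netflix.conductor.client.automator.TaskRunnerConfigurer;
-import com.netflix.conductor.client.http.TaskClient;
-import com.netflix.conductor.client.worker.Worker;
-
-public class WorkerMain {
-
-    public static void main(String[] args) {
-        // Point the client at your Conductor server's API root
-        TaskClient taskClient = new TaskClient();
-        taskClient.setRootURI("http://localhost:8080/api/");
-
-        List<Worker> workers = List.of(new SimpleWorker());
-
-        // Starts polling threads that fetch SIMPLE tasks and execute the workers
-        TaskRunnerConfigurer configurer =
-                new TaskRunnerConfigurer.Builder(taskClient, workers)
-                        .withThreadCount(2)
-                        .build();
-        configurer.init();
-    }
-}
-```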
-
-This is where you define your Conductor server URL:
-
-```java
-env.getProperty("conductor.server.url")
-```
-
-We have defined this in a property file, as shown below. You can also hard-code this.
-
-```javascript reference
-https://github.com/orkes-io/orkesworkers/blob/main/src/main/resources/application.properties#L1-L1
-```
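-
-The referenced property is likely a single line similar to the following (the exact value depends on where your
-Conductor server runs):
-
-```properties
-conductor.server.url=http://localhost:8080/api/
-```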
-
-That's it. You are all set. Run your Java application to start running your worker.
-
-After running your application, it will be able to poll for and run your worker task. Let's go back and load up the
-previously created workflow in the UI.
-
-![Conductor UI - Workflow Run](../img/tutorial/successfulWorkerExecution.png)
-
-In your worker you had this implementation:
-
-```java
-result.addOutputData("currentTimeOnServer", currentTimeOnServer);
-result.addOutputData("message", "Hello World!");
-```
-
-As you can see in the screenshot above, the worker has added these outputs to the workflow run. Play around with this:
-change the outputs, re-run, and see how it works.
-
-## Summary
-
-In this blog post, we learned how to run a sample workflow in our Conductor installation with a custom worker.
-Concepts we touched on:
-
-1. Adding Task (worker) definition
-2. Adding Workflow definition with a custom `SIMPLE` task
-3. Running Conductor client using the Java SDK
-
-Thank you for reading, and we hope you found this helpful. Please feel free to reach out to us with any questions; we
-are happy to help in any way we can.
-
-
-
-
diff --git a/docs/docs/labs/running-first-workflow.md b/docs/docs/labs/running-first-workflow.md
index 79c15e8452..1abdc3787b 100644
--- a/docs/docs/labs/running-first-workflow.md
+++ b/docs/docs/labs/running-first-workflow.md
@@ -1,16 +1,16 @@
-# Running First Workflow
+# A First Workflow
In this article we will explore how we can run a really simple workflow that runs without deploying any new microservice.
Conductor can orchestrate HTTP services out of the box without implementing any code. We will use that to create and run the first workflow.
-See [System Task](../concepts/system-tasks) for the list of such built-in tasks.
+See [System Task](/configuration/systask.html) for the list of such built-in tasks.
Using system tasks is a great way to run a lot of our code in production.
To bring up a local instance of Conductor follow one of the recommended steps:
-1. [Running Locally - From Code](../server)
-2. [Running Locally - Docker Compose](../running-locally-docker)
+1. [Running Locally - From Code](/gettingstarted/local.html)
+2. [Running Locally - Docker Compose](/gettingstarted/docker.html)
---
@@ -105,17 +105,17 @@ To configure the workflow, head over to the swagger API of conductor server and
If the link doesn’t open the right Swagger section, we can navigate to Metadata-Resource
→ `POST /api/metadata/workflow`
-![Swagger UI - Metadata - Workflow](../img/tutorial/metadataWorkflowPost.png)
+![Swagger UI - Metadata - Workflow](/img/tutorial/metadataWorkflowPost.png)
Paste the workflow payload into the Swagger API and hit Execute.
Now if we head over to the UI, we can see this workflow definition created:
-![Conductor UI - Workflow Definition](../img/tutorial/uiWorkflowDefinition.png)
+![Conductor UI - Workflow Definition](/img/tutorial/uiWorkflowDefinition.png)
If we click through we can see a visual representation of the workflow:
-![Conductor UI - Workflow Definition - Visual Flow](../img/tutorial/uiWorkflowDefinitionVisual.png)
+![Conductor UI - Workflow Definition - Visual Flow](/img/tutorial/uiWorkflowDefinitionVisual.png)
## 2. Running our First Workflow
@@ -123,7 +123,7 @@ Let’s run this workflow. To do that we can use the swagger API under the workf
[http://localhost:8080/swagger-ui/index.html?configUrl=/api-docs/swagger-config#/workflow-resource/startWorkflow_1](http://localhost:8080/swagger-ui/index.html?configUrl=/api-docs/swagger-config#/workflow-resource/startWorkflow_1)
-![Swagger UI - Metadata - Workflow - Run](../img/tutorial/metadataWorkflowRun.png)
+![Swagger UI - Metadata - Workflow - Run](/img/tutorial/metadataWorkflowRun.png)
Hit **Execute**!
@@ -131,7 +131,7 @@ Conductor will return a workflow id. We will need to use this id to load this up
search enabled we wouldn't need to copy this. If we don't have search enabled (using Elasticsearch) copy it from the
Swagger UI.
-![Swagger UI - Metadata - Workflow - Run](../img/tutorial/workflowRunIdCopy.png)
+![Swagger UI - Metadata - Workflow - Run](/img/tutorial/workflowRunIdCopy.png)
Ok, we should see this run and complete soon. Let’s go to the UI to see what happened.
@@ -144,7 +144,7 @@ http://localhost:5000/execution/
Replace `` with our workflow id from the previous step. We should see a screen like below. Click on the
different tabs to see all inputs and outputs and task list etc. Explore away!
-![Conductor UI - Workflow Run](../img/tutorial/workflowLoaded.png)
+![Conductor UI - Workflow Run](/img/tutorial/workflowLoaded.png)
## Summary
diff --git a/docs/docs/metrics/client.md b/docs/docs/metrics/client.md
index dd6d132f31..1e4bb731be 100644
--- a/docs/docs/metrics/client.md
+++ b/docs/docs/metrics/client.md
@@ -1,3 +1,5 @@
+# Client Metrics
+
When using the Java client, the following metrics are published:
| Name | Purpose | Tags |
diff --git a/docs/docs/metrics/server.md b/docs/docs/metrics/server.md
index b26444466d..e22d08e2cb 100644
--- a/docs/docs/metrics/server.md
+++ b/docs/docs/metrics/server.md
@@ -1,4 +1,4 @@
-## Publishing metrics
+# Server Metrics
Conductor uses [spectator](https://github.com/Netflix/spectator) to collect the metrics.
diff --git a/docs/docs/reference-docs/annotation-processor.md b/docs/docs/reference-docs/annotation-processor.md
new file mode 100644
index 0000000000..7ebfe84cfc
--- /dev/null
+++ b/docs/docs/reference-docs/annotation-processor.md
@@ -0,0 +1,33 @@
+# Annotation Processor
+
+- Original Author: Vicent Martí - https://github.com/vmg
+- Original Repo: https://github.com/vmg/protogen
+
+This module is strictly for code-generation tasks during builds, based on annotations.
+Currently it supports `protogen`.
+
+### Usage
+
+See the example below.
+
+### Example
+
+This is an actual example of this module as implemented in `common/build.gradle`:
+
+```groovy
+task protogen(dependsOn: jar, type: JavaExec) {
+ classpath configurations.annotationsProcessorCodegen
+ main = 'com.netflix.conductor.annotationsprocessor.protogen.ProtoGenTask'
+ args(
+ "conductor.proto",
+ "com.netflix.conductor.proto",
+ "github.com/netflix/conductor/client/gogrpc/conductor/model",
+ "${rootDir}/grpc/src/main/proto",
+ "${rootDir}/grpc/src/main/java/com/netflix/conductor/grpc",
+ "com.netflix.conductor.grpc",
+ jar.archivePath,
+ "com.netflix.conductor.common",
+ )
+}
+```
+
diff --git a/docs/docs/how-tos/archival-of-workflows.md b/docs/docs/reference-docs/archival-of-workflows.md
similarity index 91%
rename from docs/docs/how-tos/archival-of-workflows.md
rename to docs/docs/reference-docs/archival-of-workflows.md
index 859073b7c1..064d48e81b 100644
--- a/docs/docs/how-tos/archival-of-workflows.md
+++ b/docs/docs/reference-docs/archival-of-workflows.md
@@ -1,9 +1,3 @@
----
-sidebar_position: 1
-id: archival-of-workflows
-title: Archival of Workflows
----
-
# Archival Of Workflows
Conductor has support for archiving workflow upon termination or completion. Enabling this will delete the workflow from the configured database, but leave the associated data in Elasticsearch so it is still searchable.
diff --git a/docs/docs/reference-docs/azureblob-storage.md b/docs/docs/reference-docs/azureblob-storage.md
new file mode 100644
index 0000000000..47a370a0af
--- /dev/null
+++ b/docs/docs/reference-docs/azureblob-storage.md
@@ -0,0 +1,44 @@
+# Azure Blob Storage
+
+The [AzureBlob storage](https://github.com/Netflix/conductor/tree/main/azureblob-storage) module uses Azure Blob Storage to store and retrieve workflow/task input/output payloads that
+exceed the thresholds defined in the properties named `conductor.[workflow|task].[input|output].payload.threshold.kb`.
+
+**Warning:** The Azure Java SDK uses libraries already present inside `conductor`, such as `jackson` and `netty`.
+You may encounter deprecation issues or conflicts, and need to adapt the code if the module is not maintained along with `conductor`.
+It has only been tested with **v12.2.0**.
+
+## Configuration
+
+### Usage
+
+See the [External Payload Storage](https://netflix.github.io/conductor/externalpayloadstorage/#azure-blob-storage) documentation.
+
+### Example
+
+```properties
+conductor.additional.modules=com.netflix.conductor.azureblob.AzureBlobModule
+es.set.netty.runtime.available.processors=false
+
+workflow.external.payload.storage=AZURE_BLOB
+workflow.external.payload.storage.azure_blob.connection_string=DefaultEndpointsProtocol=http;AccountName=devstoreaccount1;AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;BlobEndpoint=http://127.0.0.1:10000/devstoreaccount1;EndpointSuffix=localhost
+workflow.external.payload.storage.azure_blob.signedurlexpirationseconds=360
+```
+
+## Testing
+
+You can use [Azurite](https://github.com/Azure/Azurite) to simulate Azure Storage locally.
+
+### Troubleshooting
+
+* When using **es5 persistence** you will receive a `java.lang.IllegalStateException` because the Netty lib will call `setAvailableProcessors` twice. To resolve this issue you need to set the following system property:
+
+```
+es.set.netty.runtime.available.processors=false
+```
+
+If you want to change the default HTTP client of the Azure SDK, you can use `okhttp` instead of `netty`.
+To do that, add the following [dependency](https://github.com/Azure/azure-sdk-for-java/tree/master/sdk/storage/azure-storage-blob#default-http-client):
+
+```
+com.azure:azure-core-http-okhttp:${compatible version}
+```
diff --git a/docs/docs/reference-docs/directed-acyclic-graph.md b/docs/docs/reference-docs/directed-acyclic-graph.md
new file mode 100644
index 0000000000..5c707eccf5
--- /dev/null
+++ b/docs/docs/reference-docs/directed-acyclic-graph.md
@@ -0,0 +1,54 @@
+# Directed Acyclic Graph (DAG)
+## What is a Directed Acyclic Graph (DAG)?
+Conductor workflows are directed acyclic graphs (DAGs). But what exactly is a DAG?
+
+To understand a DAG, we'll walk through each term (but not in order):
+
+### Graph
+
+A graph is "a collection of vertices (or points) and edges (or lines) that indicate connections between the vertices."
+
+By this definition, this is a graph - just not quite the kind we mean in the context of DAGs:
+
+
+
+But in the context of workflows, we're thinking of a graph more like this:
+
+
+
+Imagine each vertex as a microservice, with the lines showing how the microservices are connected. However, this graph is not a directed graph, as there is no direction given to each connection.
+
+### Directed
+
+A directed graph means that there is a direction to each connection. For example, this graph is directed:
+
+
+
+Each arrow has a direction: point "N" can proceed directly to "B", but "B" cannot proceed to "N" in the opposite direction.
+
+### Acyclic
+
+Acyclic means without circular or cyclic paths. In the directed example above, A -> B -> D -> A is a cycle.
+
+So a Directed Acyclic Graph is a set of vertices whose connections are directed and contain no loops. A DAG can only "move forward" and cannot redo a step (or series of steps).
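+
+This property is easy to check programmatically. Here is a small, self-contained sketch (not part of Conductor;
+purely illustrative) that detects whether a directed graph is acyclic using Kahn's topological sort:
+
+```java
+import java.util.*;
+
+public class DagCheck {
+
+    // Returns true if the directed graph has no cycle, i.e. it is a DAG.
+    static boolean isDag(Map<String, List<String>> edges) {
+        Map<String, Integer> inDegree = new HashMap<>();
+        edges.forEach((from, tos) -> {
+            inDegree.putIfAbsent(from, 0);
+            for (String to : tos) inDegree.merge(to, 1, Integer::sum);
+        });
+
+        Deque<String> ready = new ArrayDeque<>();
+        inDegree.forEach((v, d) -> { if (d == 0) ready.add(v); });
+
+        int visited = 0;
+        while (!ready.isEmpty()) {
+            String v = ready.poll();
+            visited++;
+            for (String to : edges.getOrDefault(v, List.of())) {
+                if (inDegree.merge(to, -1, Integer::sum) == 0) ready.add(to);
+            }
+        }
+        // Any vertex never freed from its incoming edges sits on a cycle
+        return visited == inDegree.size();
+    }
+
+    public static void main(String[] args) {
+        Map<String, List<String>> graph = new HashMap<>();
+        graph.put("A", List.of("B"));
+        graph.put("B", List.of("D"));
+        graph.put("D", List.of("A")); // D -> A closes the A -> B -> D -> A cycle
+        System.out.println(isDag(graph)); // prints false
+    }
+}
+```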
+
+Since a Conductor workflow is a series of vertices that can connect in only a specific direction and cannot loop, a Conductor workflow is thus a directed acyclic graph:
+
+
+
+### Can a workflow have loops and still be a DAG?
+
+Yes. For example, Conductor workflows have Do-While loops:
+
+
+
+This is still a DAG, because the loop is just shorthand for running the tasks inside the loop over and over again. For example, if the 2nd loop in the above image is run 3 times, the workflow path will be:
+
+1. zero_offset_fix_1
+2. post_to_orbit_ref_1
+3. zero_offset_fix_2
+4. post_to_orbit_ref_2
+5. zero_offset_fix_3
+6. post_to_orbit_ref_3
+
+The path is directed forward, and the loop just makes it easier to define the workflow.
diff --git a/docs/docs/reference-docs/do-while-task.md b/docs/docs/reference-docs/do-while-task.md
index 12d3849b2c..af4372002b 100644
--- a/docs/docs/reference-docs/do-while-task.md
+++ b/docs/docs/reference-docs/do-while-task.md
@@ -2,21 +2,21 @@
sidebar_position: 1
---
-# Do While
+# Do-While
```json
"type" : "DO_WHILE"
```
-### Introduction
+## Introduction
Sequentially execute a list of tasks as long as a condition is true.
The list of tasks is executed first, before the condition is checked (even for the first iteration).
When scheduled, each task of this loop will see its `taskReferenceName` concatenated with __i, with i being the iteration number, starting at 1. Warning: a taskReferenceName containing arithmetic operators must not be used.
-Each task output is stored as part of the DO_WHILE task, indexed by the iteration value (see example below), allowing the condition to reference the output of a task for a specific iteration (eg. $.LoopTask['iteration']['first_task'])
+Each task output is stored as part of the DO_WHILE task, indexed by the iteration value (see example below), allowing the condition to reference the output of a task for a specific iteration (eg. $.LoopTask['iteration']['first_task'])
The DO_WHILE task is set to `FAILED` as soon as one of the loopOver tasks fails. In case of retry, the iteration starts from 1.
-#### Limitations
+### Limitations
- Domain or isolation group execution is unsupported.
- Since loopOver tasks are executed within the scope of the parent DO_WHILE task, branching outside of the DO_WHILE task is not respected.
- Nested DO_WHILE tasks are not supported. However, DO_WHILE supports SUB_WORKFLOW as a loopOver task, so similar functionality can be achieved.
@@ -25,16 +25,16 @@ Branching inside loopOver task is supported.
-### Configuration
+## Configuration
-**Parameters:**
+### Input Parameters:
| name | type | description |
|---------------|------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| loopCondition | String | Condition to be evaluated after every iteration. This is a Javascript expression, evaluated using the Nashorn engine. If an exception occurs during evaluation, the DO_WHILE task is set to FAILED_WITH_TERMINAL_ERROR. |
| loopOver | List[Task] | List of tasks that needs to be executed as long as the condition is true. |
-**Outputs:**
+### Output Parameters
| name | type | description |
|-----------|------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
@@ -42,7 +42,7 @@ Branching inside loopOver task is supported.
| `i` | Map[String, Any] | Iteration number as a string, mapped to the task reference names and their output. |
| * | Any | Any state can be stored here if the `loopCondition` does so. For example `storage` will exist if `loopCondition` is `if ($.LoopTask['iteration'] <= 10) {$.LoopTask.storage = 3; true } else {false}` |
-### Examples
+## Examples
The following definition:
```json
@@ -135,3 +135,62 @@ will produce the following execution, assuming 3 executions occurred (alongside
}
}
```
+
+## Example using iteration key
+
+Sometimes you may want to use the iteration value/counter in the tasks inside the loop. In this example, an API call is made to GitHub (to the Netflix Conductor repository), and each iteration of the loop increases the pagination.
+
+The Loop ```taskReferenceName``` is "get_all_stars_loop_ref".
+
+In the ```loopCondition``` the term ```$.get_all_stars_loop_ref['iteration']``` is used.
+
+In tasks embedded in the loop, ```${get_all_stars_loop_ref.output.iteration}``` is used. In this case, it is used to define which page of results the API should return.
+
+```json
+{
+ "name": "get_all_stars",
+ "taskReferenceName": "get_all_stars_loop_ref",
+ "inputParameters": {
+ "stargazers": "4000"
+ },
+ "type": "DO_WHILE",
+ "decisionCases": {},
+ "defaultCase": [],
+ "forkTasks": [],
+ "startDelay": 0,
+ "joinOn": [],
+ "optional": false,
+ "defaultExclusiveJoinTask": [],
+ "asyncComplete": false,
+ "loopCondition": "if ($.get_all_stars_loop_ref['iteration'] < Math.ceil($.stargazers/100)) { true; } else { false; }",
+ "loopOver": [
+ {
+ "name": "100_stargazers",
+ "taskReferenceName": "hundred_stargazers_ref",
+ "inputParameters": {
+ "counter": "${get_all_stars_loop_ref.output.iteration}",
+ "http_request": {
+          "uri": "https://api.github.com/repos/netflix/conductor/stargazers?page=${get_all_stars_loop_ref.output.iteration}&per_page=100",
+ "method": "GET",
+ "headers": {
+ "Authorization": "token ${workflow.input.gh_token}",
+ "Accept": "application/vnd.github.v3.star+json"
+ }
+ }
+ },
+ "type": "HTTP",
+ "decisionCases": {},
+ "defaultCase": [],
+ "forkTasks": [],
+ "startDelay": 0,
+ "joinOn": [],
+ "optional": false,
+ "defaultExclusiveJoinTask": [],
+ "asyncComplete": false,
+ "loopOver": [],
+ "retryCount": 3
+ }
+ ]
+ }
+
+```
diff --git a/docs/docs/reference-docs/dynamic-fork-task.md b/docs/docs/reference-docs/dynamic-fork-task.md
index 0eb1630aa0..e8f8721f6d 100644
--- a/docs/docs/reference-docs/dynamic-fork-task.md
+++ b/docs/docs/reference-docs/dynamic-fork-task.md
@@ -1,7 +1,3 @@
----
-sidebar_position: 1
----
-
# Dynamic Fork
```json
"type" : "FORK_JOIN_DYNAMIC"
@@ -9,58 +5,97 @@ sidebar_position: 1
## Introduction
-Dynamic Forks are an extension of the [Fork](../fork-task) operation in conductor.
-
-In a regular fork operation (`FORK_JOIN` task), the size of the fork is defined at the time of workflow definition.
+A fork operation in Conductor lets you run a specified list of other tasks or sub-workflows in parallel after the fork
+task. A fork task is followed by a join operation that waits on the forked tasks or sub-workflows to finish. The `JOIN`
+task also collects the outputs from each of the forked tasks or sub-workflows.
-For dynamic forks the list of tasks is provided at runtime using the task's input.
+In a regular fork operation (`FORK_JOIN` task), the list of tasks or sub-workflows to be forked and run in
+parallel is already known at workflow definition creation time. However, there are cases when that list can
+only be determined at run-time, and that is when the dynamic fork operation (`FORK_JOIN_DYNAMIC` task) is needed.
-There are four things that are needed to configure a `FORK_JOIN_DYNAMIC` task:
+There are three things needed to configure a `FORK_JOIN_DYNAMIC` task:
1. A list of tasks or sub-workflows that need to be forked and run in parallel.
2. A list of inputs to each of these forked tasks or sub-workflows
-3. A task prior to the `FORK_JOIN_DYNAMIC` tasks outputs 1 and 2 above that can be wired in as in input to the `FORK_JOIN_DYNAMIC` tasks.
-4. A ```join``` task to accept the results of the dynamic forks. This join will wait for ALL the forked branches to complete before completing.
+3. A task, prior to the `FORK_JOIN_DYNAMIC` task, that outputs 1 and 2 above so they can be wired in as inputs to
+   the `FORK_JOIN_DYNAMIC` task
## Use Cases
-A `FORK_JOIN_DYNAMIC` is useful when a set of tasks or sub-workflows need to be executed and the number of tasks or
-sub-workflows are determined at run time.
-
-> Note: Unlike ```FORK```, which can execute parallel flows with each fork executing a series of tasks in sequence, ```FORK_JOIN_DYNAMIC``` is limited to only one task per fork. However, forked task can be a Sub Workflow, allowing for more complex execution flows.
+A `FORK_JOIN_DYNAMIC` is useful when a set of tasks or sub-workflows needs to be executed and the number of tasks or
+sub-workflows is determined at run time. For example, let's say we have a task that resizes an image, and we need to create a
+workflow that will resize an image into multiple sizes. In this case, a task can be created prior to
+the `FORK_JOIN_DYNAMIC` task that will prepare the input that needs to be passed into the `FORK_JOIN_DYNAMIC` task. The
+single image resize task does one job. The `FORK_JOIN_DYNAMIC` and the following `JOIN` will manage the multiple
+invocations of the single image resize task. Here, the responsibilities are clearly broken out: the single image resize
+task does the core job, and `FORK_JOIN_DYNAMIC` manages the orchestration and fault-tolerance aspects.
## Configuration
-### Input Configuration
-| Attribute | Description |
-|--------------------------------|--------------------------------------------------------------------------------------------------------------------------------------|
-| name | Task Name. A unique name that is descriptive of the task function |
-| taskReferenceName | Task Reference Name. A unique reference to this task. There can be multiple references of a task within the same workflow definition |
-| type | `FORK_JOIN_DYNAMIC` |
-| inputParameters | The input parameters that will be supplied to this task. |
-| dynamicForkTasksParam | This is a JSON array of tasks or sub-workflow objects that needs to be forked and run in parallel |
-| dynamicForkTasksInputParamName | A JSON map, where the keys are task or sub-workflow names, and the values are its corresponding inputParameters |
+Here is an example of a `FORK_JOIN_DYNAMIC` task followed by a `JOIN` task:
+
+```json
+{
+ "inputParameters": {
+ "dynamicTasks": "${fooBarTask.output.dynamicTasksJSON}",
+ "dynamicTasksInput": "${fooBarTask.output.dynamicTasksInputJSON}"
+ },
+ "type": "FORK_JOIN_DYNAMIC",
+ "dynamicForkTasksParam": "dynamicTasks",
+ "dynamicForkTasksInputParamName": "dynamicTasksInput"
+},
+{
+"name": "image_multiple_convert_resize_join",
+"taskReferenceName": "image_multiple_convert_resize_join_ref",
+"type": "JOIN"
+}
+```
+
+Dissecting the example above, let's look at the three things that need to be configured for
+the `FORK_JOIN_DYNAMIC` task:
-### Example
+
+* `dynamicForkTasksParam`: a JSON array of task or sub-workflow objects that specifies the list of tasks or
+  sub-workflows that need to be forked and run in parallel.
+* `dynamicForkTasksInputParamName`: a JSON map where each key is the reference name of a forked task or
+  sub-workflow and each value is the corresponding input parameters for it.
+* `fooBarTask`: a task defined prior to the `FORK_JOIN_DYNAMIC` task in the workflow definition. This task must
+  output 1 and 2 above (`dynamicTasks` and `dynamicTasksInput`) so they can be wired into the `inputParameters`
+  of the `FORK_JOIN_DYNAMIC` task. A sketch of such a preparatory task follows below.
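+
+To make this concrete, here is a rough sketch of how such a preparatory task could build these two outputs as a Java
+SIMPLE worker (the worker name `fooBarTask` comes from the example above; the file formats, sizes, and input keys are
+illustrative):
+
+```java
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+import com.netflix.conductor.client.worker.Worker;
+import com.netflix.conductor.common.metadata.tasks.Task;
+import com.netflix.conductor.common.metadata.tasks.TaskResult;
+
+public class FooBarTaskWorker implements Worker {
+
+    @Override
+    public String getTaskDefName() {
+        return "fooBarTask";
+    }
+
+    @Override
+    public TaskResult execute(Task task) {
+        List<Map<String, Object>> dynamicTasks = new ArrayList<>();
+        Map<String, Object> dynamicTasksInput = new HashMap<>();
+
+        int i = 0;
+        for (String size : List.of("300x300", "200x200")) {
+            String ref = "image_convert_resize_png_" + size + "_" + i++;
+            // One entry per forked task: name, unique reference name, type
+            dynamicTasks.add(Map.of(
+                    "name", "image_convert_resize",
+                    "taskReferenceName", ref,
+                    "type", "SIMPLE"));
+            // Inputs are keyed by the same taskReferenceName
+            dynamicTasksInput.put(ref, Map.of("fileFormat", "png", "size", size));
+        }
+
+        TaskResult result = new TaskResult(task);
+        result.addOutputData("dynamicTasksJSON", dynamicTasks);
+        result.addOutputData("dynamicTasksInputJSON", dynamicTasksInput);
+        result.setStatus(TaskResult.Status.COMPLETED);
+        return result;
+    }
+}
+```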
-Let's say we have a task that resizes an image, and we need to create a
-workflow that will resize an image into multiple sizes. In this case, a task can be created prior to
+## Input Configuration
+
+
+| Attribute | Description |
+| ----------- | ----------- |
+| name | Task Name. A unique name that is descriptive of the task function |
+| taskReferenceName | Task Reference Name. A unique reference to this task. There can be multiple references of a task within the same workflow definition |
+| type | Task Type. In this case, `FORK_JOIN_DYNAMIC` |
+| inputParameters | The input parameters that will be supplied to this task. |
+| dynamicForkTasksParam | This is a JSON array of tasks or sub-workflow objects that needs to be forked and run in parallel (Note: This has a different format for ```SUB_WORKFLOW``` compared to ```SIMPLE``` tasks.) |
+| dynamicForkTasksInputParamName | A JSON map, where the keys are task or sub-workflow names, and the values are its corresponding inputParameters |
+
+
+## Example
+
+Let's say we have a task that resizes an image, and we need to create a workflow that will resize an image into multiple sizes. In this case, a task can be created prior to
the `FORK_JOIN_DYNAMIC` task that will prepare the input that needs to be passed into the `FORK_JOIN_DYNAMIC` task. These will be:
-* ```dynamicForkTasksParam``` the JSON array of tasks/subworkflows to be run in parallel.
-* ```dynamicForkTasksInputParamName``` a JSON map of input parameters for each task. The keys will be the tasks/subworkflows, and the values will be the input parameters for the tasks.
+* ```dynamicForkTasksParam``` the JSON array of tasks/subworkflows to be run in parallel. Each JSON object will have:
+ * A unique ```taskReferenceName```.
+ * The name of the Task/Subworkflow to be called (note - the location of this key:value is different for a subworkflow).
+ * The type of the task (This is optional for SIMPLE tasks).
+* ```dynamicForkTasksInputParamName``` a JSON map of input parameters for each task. The keys will be the unique ```taskReferenceName``` defined in the first JSON array, and the values will be the specific input parameters for the task/subworkflow.
-The
-single image resize task does one job. The `FORK_JOIN_DYNAMIC` and the following `JOIN` will manage the multiple
-invokes of the single image resize task. Here, the responsibilities are clearly broken out, where the single image resize
-task does the core job and `FORK_JOIN_DYNAMIC` manages the orchestration and fault tolerance aspects.
+The ```image_resize``` task works to resize just one image. The `FORK_JOIN_DYNAMIC` and the following `JOIN` will manage the multiple invocations of the single ```image_resize``` task. The responsibilities are clearly broken out, where the individual ```image_resize```
+tasks do the core job and `FORK_JOIN_DYNAMIC` manages the orchestration and fault tolerance aspects of handling multiple invocations of the task.
-### The workflow
+## The workflow
-Here is an example of a `FORK_JOIN_DYNAMIC` task followed by a `JOIN` task:
+Here is an example of a `FORK_JOIN_DYNAMIC` task followed by a `JOIN` task. The fork is named and given a taskReferenceName, but all of the input parameters are JSON variables that we will discuss next:
```json
-{
+{
+ "name": "image_multiple_convert_resize_fork",
+ "taskReferenceName": "image_multiple_convert_resize_fork_ref",
"inputParameters": {
"dynamicTasks": "${fooBarTask.output.dynamicTasksJSON}",
"dynamicTasksInput": "${fooBarTask.output.dynamicTasksInputJSON}"
@@ -78,7 +113,7 @@ Here is an example of a `FORK_JOIN_DYNAMIC` task followed by a `JOIN` task:
This appears in the UI as follows:
-![](../img/dynamic-task-diagram.png)
+![diagram of dynamic fork](/img/dynamic-task-diagram.png)
Let's assume this data is sent to the workflow:
@@ -92,43 +127,88 @@ Let's assume this data is sent to the workflow:
"height":300},
{"width":200,
"height":200}
-
-
],
"maintainAspectRatio": "true"
-
}
```
With 2 file formats and 2 sizes in the input, we'll be creating 4 images total. The first task will generate the tasks and the parameters for these tasks:
-* `dynamicForkTasksParam` This is a JSON array of task or sub-workflow objects that specifies the list of tasks or
-sub-workflows that needs to be forked and run in parallel. This will have the form:
+* `dynamicForkTasksParam` This is a JSON array of task or sub-workflow objects that specifies the list of tasks or sub-workflows that need to be forked and run in parallel. This JSON varies depending on the type of task.
+
+
+### ```dynamicForkTasksParam``` Simple task
+In this case, our fork is running a SIMPLE task: ```image_convert_resize```:
```
{ "dynamicTasks": [
- 0: {
+ {
"name": :"image_convert_resize",
"taskReferenceName": "image_convert_resize_png_300x300_0",
...
},
- 1: {
+ {
"name": :"image_convert_resize",
"taskReferenceName": "image_convert_resize_png_200x200_1",
...
},
- 2: {
+ {
"name": :"image_convert_resize",
"taskReferenceName": "image_convert_resize_jpg_300x300_2",
...
},
- 3: {
+ {
"name": :"image_convert_resize",
"taskReferenceName": "image_convert_resize_jpg_200x200_3",
...
}
]}
```
+### ```dynamicForkTasksParam``` SubWorkflow task
+In this case, our Dynamic fork is running a SUB_WORKFLOW task: ```image_convert_resize_subworkflow```
+
+```
+{ "dynamicTasks": [
+ {
+ "subWorkflowParam" : {
+      "name": "image_convert_resize_subworkflow",
+ "version": "1"
+ },
+ "type" : "SUB_WORKFLOW",
+ "taskReferenceName": "image_convert_resize_subworkflow_png_300x300_0",
+ ...
+ },
+ {
+ "subWorkflowParam" : {
+      "name": "image_convert_resize_subworkflow",
+ "version": "1"
+ },
+ "type" : "SUB_WORKFLOW",
+ "taskReferenceName": "image_convert_resize_subworkflow_png_200x200_1",
+ ...
+ },
+ {
+ "subWorkflowParam" : {
+      "name": "image_convert_resize_subworkflow",
+ "version": "1"
+ },
+ "type" : "SUB_WORKFLOW",
+ "taskReferenceName": "image_convert_resize_subworkflow_jpg_300x300_2",
+ ...
+ },
+ {
+ "subWorkflowParam" : {
+      "name": "image_convert_resize_subworkflow",
+ "version": "1"
+ },
+ "type" : "SUB_WORKFLOW",
+ "taskReferenceName": "image_convert_resize_subworkflow_jpg_200x200_3",
+ ...
+ }
+]}
+```
+
+
* `dynamicForkTasksInputParamName` This is a JSON map of task or
sub-workflow objects and all the input parameters that these tasks will need to run.
@@ -167,5 +247,4 @@ sub-workflow objects and all the input parameters that these tasks will need to
### The Join
-The [JOIN](../../reference-docs/join-task) task will run after all of the dynamic tasks, collecting the output for all of the tasks.
-
+The [JOIN](/reference-docs/join-task.html) task will run after all of the dynamic tasks, collecting the output for all of the tasks.
\ No newline at end of file
diff --git a/docs/docs/reference-docs/dynamic-task.md b/docs/docs/reference-docs/dynamic-task.md
index 6cf86107f4..ffe26b8178 100644
--- a/docs/docs/reference-docs/dynamic-task.md
+++ b/docs/docs/reference-docs/dynamic-task.md
@@ -1,6 +1,3 @@
----
-sidebar_position: 1
----
# Dynamic
```json
"type" : "DYNAMIC"
@@ -148,10 +145,6 @@ If the input value is provided while running the workflow it can be accessed by
We can see in the below example that on the basis of Post Code the shipping service is being
decided.
-```js reference
-https://github.com/orkes-io/orkesworkers/blob/main/src/main/java/io/orkes/samples/workers/ShippingInfoWorker.java#L10-L36
-```
-
Based on given set of inputs i.e. Post Code starts with '9' hence, `ship_via_fedex` is executed -
![Conductor UI - Workflow Run](/img/tutorial/ShippingWorkflowRunning.png)
diff --git a/docs/docs/reference-docs/dynamic.md b/docs/docs/reference-docs/dynamic.md
deleted file mode 100644
index 7699cced27..0000000000
--- a/docs/docs/reference-docs/dynamic.md
+++ /dev/null
@@ -1,24 +0,0 @@
-## Dynamic Task
-
-Dynamic Tasks allow you to execute a registered task dynamically at run-time. It accepts the task name to execute in inputParameters.
-
-**Parameters:**
-
-| name | description |
-|----------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
-| dynamicTaskNameParam | Name of the parameter from the task input whose value is used to schedule the task. e.g. if the value of the parameter is ABC, the next task scheduled is of type 'ABC'. |
-
-**Example**
-``` json
-{
- "name": "user_task",
- "taskReferenceName": "t1",
- "inputParameters": {
- "files": "${workflow.input.files}",
- "taskToExecute": "${workflow.input.user_supplied_task}"
- },
- "type": "DYNAMIC",
- "dynamicTaskNameParam": "taskToExecute"
-}
-```
-If the workflow is started with input parameter user_supplied_task's value as __user_task_2__, Conductor will schedule __user_task_2__ when scheduling this dynamic task.
diff --git a/docs/docs/reference-docs/exclusive-join-task.md b/docs/docs/reference-docs/exclusive-join-task.md
deleted file mode 100644
index 3a9cd71b31..0000000000
--- a/docs/docs/reference-docs/exclusive-join-task.md
+++ /dev/null
@@ -1,11 +0,0 @@
----
-sidebar_position: 1
----
-
-# Exclusive Join Task
-
-TODO
-
-## Summary
-
-TODO
diff --git a/docs/docs/reference-docs/fork-task.md b/docs/docs/reference-docs/fork-task.md
index 80bf97c79a..5e87bfccae 100644
--- a/docs/docs/reference-docs/fork-task.md
+++ b/docs/docs/reference-docs/fork-task.md
@@ -1,7 +1,3 @@
----
-sidebar_position: 1
----
-
# Fork
```json
"type" : "FORK_JOIN"
@@ -62,7 +58,7 @@ Imagine a workflow that sends 3 notifications: email, SMS and HTTP. Since none o
The diagram will appear as:
-![fork diagram](../img/fork-task-diagram.png)
+![fork diagram](/img/fork-task-diagram.png)
Here's the JSON definition for the workflow:
@@ -141,4 +137,4 @@ references that were being `joinOn`. The corresponding values are the outputs of
}
```
-See [JOIN](../../reference-docs/join-task) for more details ni the JOIN aspect of the FORK.
+See [JOIN](/reference-docs/join-task.html) for more details on the JOIN aspect of the FORK.
diff --git a/docs/docs/reference-docs/http-task.md b/docs/docs/reference-docs/http-task.md
index a2c1dbf8cc..35e558a0de 100644
--- a/docs/docs/reference-docs/http-task.md
+++ b/docs/docs/reference-docs/http-task.md
@@ -140,5 +140,5 @@ Following is the example of HTTP task with `DELETE` method.
1. Why are my HTTP tasks not getting picked up?
1. We might have too many HTTP tasks in the queue. There is a concept called Isolation Groups that you can rely on
- for prioritizing certain HTTP tasks over others. Read more here: [Isolation Groups](https://netflix.github.io/conductor/configuration/isolationgroups/)
+ for prioritizing certain HTTP tasks over others. Read more here: [Isolation Groups](/configuration/isolationgroups.html)
diff --git a/docs/docs/reference-docs/join-task.md b/docs/docs/reference-docs/join-task.md
index 6fa7522173..9a76d68d64 100644
--- a/docs/docs/reference-docs/join-task.md
+++ b/docs/docs/reference-docs/join-task.md
@@ -15,7 +15,7 @@ a `FORK_JOIN_DYNAMIC` task, it implicitly waits for all of the dynamically forke
### Use Cases
-[FORK_JOIN](../../reference-docs/fork-task) and [FORK_JOIN_DYNAMIC](../../reference-docs/dynamic-fork-task) task are used to execute a collection of other tasks or sub workflows in parallel. In
+[FORK_JOIN](/reference-docs/fork-task.html) and [FORK_JOIN_DYNAMIC](/reference-docs/dynamic-fork-task.html) task are used to execute a collection of other tasks or sub workflows in parallel. In
such cases, there is a need for these forked tasks to complete before moving to the next stage in the workflow.
### Configuration
diff --git a/docs/docs/reference-docs/json-jq-transform-task.md b/docs/docs/reference-docs/json-jq-transform-task.md
index 0fdf983f5e..794676a610 100644
--- a/docs/docs/reference-docs/json-jq-transform-task.md
+++ b/docs/docs/reference-docs/json-jq-transform-task.md
@@ -5,7 +5,7 @@ sidebar_position: 1
# JSON JQ Transform Task
```json
-"type" : "JSON_JQ_TRANSFORM_TASK"
+"type" : "JSON_JQ_TRANSFORM"
```
### Introduction
@@ -45,6 +45,7 @@ the output of one task to the input of another.
### Example
+
Here is an example of a _`JSON_JQ_TRANSFORM`_ task. The `inputParameters` attribute is expected to have a value object
that has the following
@@ -107,3 +108,83 @@ attribute along with a string message will be returned if there was an error pro
]
}
```
+
+## Example JQ transforms
+
+### Cleaning up a JSON response
+
+An HTTP task makes an API call to GitHub to request a list of "stargazers" (users who have starred a repository). The API response (for just one user) looks like:
+
+
+Snippet of ```${hundred_stargazers_ref.output}```
+
+``` JSON
+
+"body":[
+ {
+ "starred_at":"2016-12-14T19:55:46Z",
+ "user":{
+ "login":"lzehrung",
+ "id":924226,
+ "node_id":"MDQ6VXNlcjkyNDIyNg==",
+ "avatar_url":"https://avatars.githubusercontent.com/u/924226?v=4",
+ "gravatar_id":"",
+ "url":"https://api.github.com/users/lzehrung",
+ "html_url":"https://github.com/lzehrung",
+ "followers_url":"https://api.github.com/users/lzehrung/followers",
+ "following_url":"https://api.github.com/users/lzehrung/following{/other_user}",
+ "gists_url":"https://api.github.com/users/lzehrung/gists{/gist_id}",
+ "starred_url":"https://api.github.com/users/lzehrung/starred{/owner}{/repo}",
+ "subscriptions_url":"https://api.github.com/users/lzehrung/subscriptions",
+ "organizations_url":"https://api.github.com/users/lzehrung/orgs",
+ "repos_url":"https://api.github.com/users/lzehrung/repos",
+ "events_url":"https://api.github.com/users/lzehrung/events{/privacy}",
+ "received_events_url":"https://api.github.com/users/lzehrung/received_events",
+ "type":"User",
+ "site_admin":false
+ }
+}
+]
+
+```
+
+We only need the ```starred_at``` and ```login``` parameters for users who starred the repository AFTER a given date (provided as an input to the workflow ```${workflow.input.cutoff_date}```). We'll use the JQ Transform to simplify the output:
+
+```JSON
+{
+ "name": "jq_cleanup_stars",
+ "taskReferenceName": "jq_cleanup_stars_ref",
+ "inputParameters": {
+ "starlist": "${hundred_stargazers_ref.output.response.body}",
+ "queryExpression": "[.starlist[] | select (.starred_at > \"${workflow.input.cutoff_date}\") |{occurred_at:.starred_at, member: {github: .user.login}}]"
+ },
+ "type": "JSON_JQ_TRANSFORM",
+ "decisionCases": {},
+ "defaultCase": [],
+ "forkTasks": [],
+ "startDelay": 0,
+ "joinOn": [],
+ "optional": false,
+ "defaultExclusiveJoinTask": [],
+ "asyncComplete": false,
+ "loopOver": []
+ }
+```
+
+The JSON is stored in ```starlist```. The ```queryExpression``` reads in the JSON, selects only entries where the ```starred_at``` value meets the date criteria, and generates output JSON of the form:
+
+```JSON
+{
+ "occurred_at": "date from JSON",
+ "member":{
+ "github" : "github Login from JSON"
+ }
+}
+```
+
+The entire expression is wrapped in [] to indicate that the response should be an array.
+
diff --git a/docs/docs/how-tos/redis.md b/docs/docs/reference-docs/redis.md
similarity index 98%
rename from docs/docs/how-tos/redis.md
rename to docs/docs/reference-docs/redis.md
index c0fb43997f..ee96c6c087 100644
--- a/docs/docs/how-tos/redis.md
+++ b/docs/docs/reference-docs/redis.md
@@ -1,4 +1,4 @@
-# Redis Configuration
+# Redis
By default conductor runs with an in-memory Redis mock. However, you
can change the configuration by setting the properties `conductor.db.type` and `conductor.redis.hosts`.
diff --git a/docs/docs/reference-docs/sample-layout.md b/docs/docs/reference-docs/sample-layout.md
deleted file mode 100644
index 55a1041b38..0000000000
--- a/docs/docs/reference-docs/sample-layout.md
+++ /dev/null
@@ -1,39 +0,0 @@
----
-sidebar_position: 1
----
-# Dynamic Task
-
-## What is a Dynamic Task?
-
-TODO: What is a Dynamic task? How does it work?
-
-## Common Use Cases
-
-TODO: List out some common use cases
-
-## Configuration / Properties
-
-### Inputs
-
-TODO: Talk about inputs for the task
-
-### Output
-
-TODO: Talk about output of the task, what to expect
-
-
-## Examples
-
-TODO: Example 1
-
-TODO: Example 2
-
-## FAQs
-
-TODO: Gotchas and other nuances
-
-1. Question 1
- 1. Answer
-
-1. Question 2
- 1. Answer
diff --git a/docs/docs/reference-docs/set-variable-task.md b/docs/docs/reference-docs/set-variable-task.md
index 32f3b50a15..c9c8b55e60 100644
--- a/docs/docs/reference-docs/set-variable-task.md
+++ b/docs/docs/reference-docs/set-variable-task.md
@@ -41,7 +41,7 @@ Following is the workflow definition with `SET_VARIABLE` task.
"taskReferenceName": "Set_Name",
"type": "SET_VARIABLE",
"inputParameters": {
- "name": "Orkes"
+ "name": "Foo"
}
},
{
@@ -61,5 +61,5 @@ Following is the workflow definition with `SET_VARIABLE` task.
```
In the above example, it can be seen that the task `Set_Name` is a Set Variable Task and
-the variable `name` is set to `Orkes` and later in the workflow it is referenced by
+the variable `name` is set to `Foo` and later in the workflow it is referenced by
`"${workflow.variables.name}"` in another task.
diff --git a/docs/docs/reference-docs/start-workflow-task.md b/docs/docs/reference-docs/start-workflow-task.md
index 94501a6e7b..53db54886a 100644
--- a/docs/docs/reference-docs/start-workflow-task.md
+++ b/docs/docs/reference-docs/start-workflow-task.md
@@ -26,7 +26,7 @@ Start Workflow task is defined directly inside the workflow with type `START_WOR
| name | type | description |
|---------------|------------------|---------------------------------------------------------------------------------------------------------------------|
-| startWorkflow | Map[String, Any] | The value of this parameter is [Start Workflow Request](../../gettingstarted/startworkflow#start-workflow-request). |
+| startWorkflow | Map[String, Any] | The value of this parameter is [Start Workflow Request](/gettingstarted/startworkflow.html#start-workflow-request). |
#### Output
diff --git a/docs/docs/reference-docs/sub-workflow-task.md b/docs/docs/reference-docs/sub-workflow-task.md
index fc7a987b30..53a0be416e 100644
--- a/docs/docs/reference-docs/sub-workflow-task.md
+++ b/docs/docs/reference-docs/sub-workflow-task.md
@@ -31,8 +31,8 @@ Sub Workflow task is defined directly inside the workflow with type `SUB_WORKFLO
|--------------------|-------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| name | String | Name of the workflow to execute |
| version | Integer | Version of the workflow to execute |
-| taskToDomain | Map[String, String] | Allows scheduling the sub workflow's tasks per given mappings. See [Task Domains](../../configuration/taskdomains) for instructions to configure taskDomains. |
-| workflowDefinition | [WorkflowDefinition](../../configuration/workflowdef) | Allows starting a subworkflow with a dynamic workflow definition. |
+| taskToDomain | Map[String, String] | Allows scheduling the sub workflow's tasks per given mappings. See [Task Domains](/configuration/taskdomains.html) for instructions to configure taskDomains. |
+| workflowDefinition | [WorkflowDefinition](/configuration/workflowdef.html) | Allows starting a subworkflow with a dynamic workflow definition. |
#### Output
@@ -46,10 +46,12 @@ Sub Workflow task is defined directly inside the workflow with type `SUB_WORKFLO
Imagine we have a workflow that has a fork in it. In the example below, we input one image, but using a fork to create 2 images simultaneously:
-![workflow with fork](../img/workflow_fork.png)
+
+![workflow with fork](/img/workflow_fork.png)
The left fork will create a JPG, and the right fork a WEBP image. Maintaining this workflow might be difficult, as changes made to one side of the fork do not automatically propagate the other. Rather than using 2 tasks, we can define a ```image_convert_resize``` workflow that we can call for both forks as a sub-workflow:
+
```json
{{
@@ -133,7 +135,7 @@ The left fork will create a JPG, and the right fork a WEBP image. Maintaining th
"schemaVersion": 2,
"restartable": true,
"workflowStatusListenerEnabled": true,
- "ownerEmail": "devrel@orkes.io",
+ "ownerEmail": "conductor@example.com",
"timeoutPolicy": "ALERT_ONLY",
"timeoutSeconds": 0,
"variables": {},
@@ -142,11 +144,13 @@ The left fork will create a JPG, and the right fork a WEBP image. Maintaining th
```
Now our diagram will appear as:
-![workflow with 2 subworkflows](../img/subworkflow_diagram.png)
+![workflow with 2 subworkflows](/img/subworkflow_diagram.png)
+
The inputs to both sides of the workflow are identical before and after - but we've abstracted the tasks into the sub-workflow. Any change to the sub-workflow will automatically occur in both sides of the fork.
+
Looking at the subworkflow (the WEBP version):
```
diff --git a/docs/docs/reference-docs/switch-task.md b/docs/docs/reference-docs/switch-task.md
index d2cfd7ff01..6cb78d8089 100644
--- a/docs/docs/reference-docs/switch-task.md
+++ b/docs/docs/reference-docs/switch-task.md
@@ -85,7 +85,7 @@ is used to determine the switch-case. The evaluator type is `value-param` and th
the name of an input parameter. If the value of `switch_case_value` is `fedex` then the decision case `ship_via_fedex`is
executed as shown below.
-![Conductor UI - Workflow Run](../img/Switch_Fedex.png)
+![Conductor UI - Workflow Run](/img/Switch_Fedex.png)
In a similar way - if the input was `ups`, then `ship_via_ups` will be executed. If none of the cases match then the
default option is executed.
diff --git a/docs/docs/reference-docs/terminate-task.md b/docs/docs/reference-docs/terminate-task.md
index 5d3461f48c..0412f801aa 100644
--- a/docs/docs/reference-docs/terminate-task.md
+++ b/docs/docs/reference-docs/terminate-task.md
@@ -53,7 +53,7 @@ Terminate task is defined directly inside the workflow with type
### Examples
-Let's consider the same example we had in [Switch Task](../switch-task).
+Let's consider the same example we had in [Switch Task](/reference-docs/switch-task.html).
Suppose that in a workflow, we have to decide which shipping service provider to use
based on the input provided when running the workflow.
@@ -83,7 +83,7 @@ Here is a snippet that shows the defalt switch case terminating the workflow:
Workflow gets created as shown in the diagram.
-![Conductor UI - Workflow Diagram](../img/Terminate_Task.png)
+![Conductor UI - Workflow Diagram](/img/Terminate_Task.png)
### Best Practices
diff --git a/docs/docs/resources/code-of-conduct.md b/docs/docs/resources/code-of-conduct.md
new file mode 100644
index 0000000000..f8076bc629
--- /dev/null
+++ b/docs/docs/resources/code-of-conduct.md
@@ -0,0 +1,49 @@
+# Contributor Covenant Code of Conduct
+
+## Our Pledge
+
+In the interest of fostering an open and welcoming environment, we as contributors and maintainers pledge to making participation in our project and our community a harassment-free experience for everyone, regardless of age, body size, disability, ethnicity, sex characteristics, gender identity and expression, level of experience education, socio-economic status, nationality, personal appearance, race, religion, or sexual identity and orientation.
+
+## Our Standards
+
+Examples of behavior that contributes to creating a positive environment
+include:
+
+* Using welcoming and inclusive language
+* Being respectful of differing viewpoints and experiences
+* Gracefully accepting constructive criticism
+* Focusing on what is best for the community
+* Showing empathy towards other community members
+
+Examples of unacceptable behavior by participants include:
+
+* The use of sexualized language or imagery and unwelcome sexual attention or advances
+* Trolling, insulting/derogatory comments, and personal or political attacks
+* Public or private harassment
+* Publishing others' private information, such as a physical or electronic address, without explicit permission
+* Other conduct which could reasonably be considered inappropriate in a professional setting
+
+## Our Responsibilities
+
+Project maintainers are responsible for clarifying the standards of acceptable behavior and are expected to take appropriate and fair corrective action in response to any instances of unacceptable behavior.
+
+Project maintainers have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, or to ban temporarily or permanently any contributor for other behaviors that they deem inappropriate, threatening, offensive, or harmful.
+
+## Scope
+
+This Code of Conduct applies both within project spaces and in public spaces when an individual is representing the project or its community. Examples of representing a project or community include using an official project e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event. Representation of a project may be further defined and clarified by project maintainers.
+
+## Enforcement
+
+Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting the project team at netflixoss@netflix.com. All complaints will be reviewed and investigated and will result in a response that is deemed necessary and appropriate to the circumstances. The project team is obligated to maintain confidentiality with regard to the reporter of an incident. Further details of specific enforcement policies may be posted separately.
+
+Project maintainers who do not follow or enforce the Code of Conduct in good faith may face temporary or permanent repercussions as determined by other members of the project's leadership.
+
+## Attribution
+
+This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4, available at https://www.contributor-covenant.org/version/1/4/code-of-conduct.html
+
+[homepage]: https://www.contributor-covenant.org
+
+For answers to common questions about this code of conduct, see
+https://www.contributor-covenant.org/faq
diff --git a/docs/docs/resources/contributing.md b/docs/docs/resources/contributing.md
new file mode 100644
index 0000000000..7d84a32adf
--- /dev/null
+++ b/docs/docs/resources/contributing.md
@@ -0,0 +1,73 @@
+# Contributing
+Thanks for your interest in Conductor!
+This guide helps you find the most efficient way to contribute, ask questions, and report issues.
+
+Code of conduct
+-----
+
+Please review our [code of conduct](code-of-conduct.md).
+
+I have a question!
+-----
+
+We have a dedicated [discussion forum](https://github.com/Netflix/conductor/discussions) for asking "how to" questions and discussing ideas. The discussion forum is a great place to start if you're considering creating a feature request or working on a Pull Request.
+*Please do not create issues to ask questions.*
+
+I want to contribute!
+------
+
+We welcome Pull Requests and have already had many outstanding community contributions!
+Creating and reviewing Pull Requests takes considerable time. This section helps you set up for a smooth Pull Request experience.
+
+The stable branch is [main](https://github.com/Netflix/conductor/tree/main).
+
+Please create pull requests for your contributions against [main](https://github.com/Netflix/conductor/tree/main) only.
+
+It's a great idea to discuss the new feature you're considering on the [discussion forum](https://github.com/Netflix/conductor/discussions) before writing any code. There are often different ways you can implement a feature. Getting some discussion about different options helps shape the best solution. When starting directly with a Pull Request, there is the risk of having to make considerable changes. Sometimes that is the best approach, though! Showing an idea with code can be very helpful; be aware that it might be throw-away work. Some of our best Pull Requests came out of multiple competing implementations, which helped shape it to perfection.
+
+Also, consider that not every feature is a good fit for Conductor. A few things to consider are:
+
+* Is it increasing complexity for the user, or might it be confusing?
+* Does it, in any way, break backward compatibility (this is seldom acceptable)?
+* Does it require new dependencies (this is rarely acceptable for core modules)?
+* Should the feature be opt-in or enabled by default? For integration with a new queuing recipe or persistence module, a separate module which can be optionally enabled is the right choice.
+* Should the feature be implemented in the main Conductor repository, or would it be better to set up a separate repository? Especially for integration with other systems, a separate repository is often the right choice because the life-cycle of it will be different.
+
+Of course, for smaller bug fixes and improvements, the process can be more lightweight.
+
+We'll try to be responsive to Pull Requests. Do keep in mind that because of the inherently distributed nature of open source projects, responses to a PR might take some time because of time zones, weekends, and other things we may be working on.
+
+I want to report an issue
+-----
+
+If you find a bug, we much appreciate you creating an issue. Please include clear instructions on how to reproduce the issue, or even better, include a test case on a branch. Make sure to give the issue a descriptive title, because this helps when organizing issues.
+
+I have a great idea for a new feature
+----
+Many features in Conductor have come from ideas from the community. If you think something is missing or certain use cases could be supported better, let us know! You can do so by opening a discussion on the [discussion forum](https://github.com/Netflix/conductor/discussions). Provide as much relevant context as to why and when the feature would be helpful. Providing context is especially important for "Support XYZ" issues, since we might not be familiar with what "XYZ" is and why it's useful. If you have an idea of how to implement the feature, include that as well.
+
+Once we have decided on a direction, it's time to summarize the idea by creating a new issue.
+
+## Code Style
+We use [spotless](https://github.com/diffplug/spotless) to enforce a consistent code style for the project, so make sure to run `gradlew spotlessApply` to fix any violations after making code changes.
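+
+For example, from the repository root (use `gradlew.bat` instead of `./gradlew` on Windows):
+
+```shell
+# check for formatting violations without modifying any files
+./gradlew spotlessCheck
+
+# apply the formatter and fix violations in place
+./gradlew spotlessApply
+```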
+
+## License
+
+By contributing your code, you agree to license your contribution under the terms of the APLv2: https://github.com/Netflix/conductor/blob/master/LICENSE
+
+All files are released with the Apache 2.0 license, and the following license header will be automatically added to your new file if none is present:
+
+```
+/**
+ * Copyright $YEAR Netflix, Inc.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
+ * an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations under the License.
+ */
+```
diff --git a/docs/docs/license.md b/docs/docs/resources/license.md
similarity index 98%
rename from docs/docs/license.md
rename to docs/docs/resources/license.md
index 1c070fea09..518de40643 100644
--- a/docs/docs/license.md
+++ b/docs/docs/resources/license.md
@@ -1,3 +1,5 @@
+# License
+
Copyright 2022 Netflix, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
diff --git a/docs/docs/resources/related.md b/docs/docs/resources/related.md
new file mode 100644
index 0000000000..9a293f1520
--- /dev/null
+++ b/docs/docs/resources/related.md
@@ -0,0 +1,74 @@
+# Community projects related to Conductor
+
+## Client SDKs
+
+All of the non-Java SDKs have a new GitHub home: the [Conductor SDK](https://github.com/conductor-sdk) organization is now the source for Conductor SDKs:
+
+* [Golang](https://github.com/conductor-sdk/conductor-go)
+* [Python](https://github.com/conductor-sdk/conductor-python)
+* [C#](https://github.com/conductor-sdk/conductor-csharp)
+* [Clojure](https://github.com/conductor-sdk/conductor-clojure)
+
+Contributions to the above client SDKs should be made in the [Conductor SDK](https://github.com/conductor-sdk) repositories.
+
+## Microservices operations
+
+* https://github.com/flaviostutz/schellar - Schellar is a scheduler tool for instantiating Conductor workflows from time to time, much like a cron job, but with transport of input/output variables between calls.
+
+* https://github.com/flaviostutz/backtor - Backtor is a backup scheduler tool that uses Conductor workers to handle backup operations and decide when to expire backups (e.g. keep a backup for 3 days, 2 weeks, 2 months, or 6 months)
+
+* https://github.com/cquon/conductor-tools - Conductor CLI for launching workflows, polling tasks, listing running tasks, etc.
+
+
+## Conductor deployment
+
+* https://github.com/flaviostutz/conductor-server - Docker container for running Conductor with the Prometheus metrics plugin installed and some tweaks that ease provisioning workflows from JSON files embedded in the container
+
+* https://github.com/flaviostutz/conductor-ui - Docker container for running the Conductor UI so that you can easily scale the UI independently
+
+* https://github.com/flaviostutz/elasticblast - "Elasticsearch to Bleve" bridge tailored for running Conductor on top of the Bleve indexer. The footprint of Elasticsearch may cost too much for small deployments in cloud environments.
+
+* https://github.com/mohelsaka/conductor-prometheus-metrics - Conductor plugin for exposing Prometheus metrics at the path `/metrics`
+
+## OAuth2.0 Security Configuration
+Forked repository - [Conductor (Secure)](https://github.com/maheshyaddanapudi/conductor/tree/oauth2)
+
+[OAuth2.0 Role-Based Security](https://github.com/maheshyaddanapudi/conductor/blob/oauth2/SECURITY.md) - Spring Security with easy configuration to secure the Conductor server APIs.
+
+A Docker image is published to [Docker Hub](https://hub.docker.com/repository/docker/conductorboot/server).
+
+## Conductor Worker utilities
+
+* https://github.com/ggrcha/conductor-go-client - Conductor Golang client for writing workers in Golang
+
+* https://github.com/courosh12/conductor-dotnet-client - Conductor .NET client for writing workers in .NET
+  * https://github.com/TwoUnderscorez/serilog-sinks-conductor-task-log - Serilog sink for sending worker log events to Netflix Conductor
+
+* https://github.com/davidwadden/conductor-workers - Various ready-made Conductor workers for common operations on some platforms (e.g. Jira, GitHub, Concourse)
+
+## Conductor Web UI
+
+* https://github.com/maheshyaddanapudi/conductor-ng-ui - Angular-based Conductor workflow management UI
+
+## Conductor Persistence
+
+### Mongo Persistence
+
+* https://github.com/maheshyaddanapudi/conductor/tree/mongo_persistence - With the option to use MongoDB as the persistence unit.
+  * Includes a Docker Compose example with a MongoDB container.
+
+### Oracle Persistence
+
+* https://github.com/maheshyaddanapudi/conductor/tree/oracle_persistence - With the option to use Oracle Database (version > 12.2; tested with 19c) as the persistence unit.
+  * Includes a Docker Compose example with an Oracle container.
+
+## Schedule Conductor Workflow
+* https://github.com/jas34/scheduledwf - It addresses the following problems:
+  * Some use cases require tasks/jobs to run only at a scheduled time.
+  * In a microservice architecture, maintaining schedulers in various microservices is a pain.
+  * A central, dedicated service should handle scheduling and trigger microservices at the expected time.
+* It offers an additional module `io.github.jas34.scheduledwf.config.ScheduledWfServerModule` built on the existing core of Conductor and does not require the deployment of any additional service.
+For more details, see [Schedule Conductor Workflows](https://jas34.github.io/scheduledwf) and [Capability In Conductor To Schedule Workflows](https://github.com/Netflix/conductor/discussions/2256)
\ No newline at end of file
diff --git a/docs/docs/running-locally-docker.md b/docs/docs/running-locally-docker.md
deleted file mode 100644
index f55e585a39..0000000000
--- a/docs/docs/running-locally-docker.md
+++ /dev/null
@@ -1,74 +0,0 @@
-
-# Running via Docker Compose
-
-In this article we will explore how you can set up Netflix Conductor on your local machine using Docker compose.
-The docker compose will bring up the following:
-1. Conductor API Server
-2. Conductor UI
-3. Elasticsearch for searching workflows
-
-## Prerequisites
-1. Docker: [https://docs.docker.com/get-docker/](https://docs.docker.com/get-docker/)
-2. Recommended host with CPU and RAM to be able to run multiple docker containers (at-least 16GB RAM)
-
-## Steps
-
-#### 1. Clone the Conductor Code
-
-```shell
-$ git clone https://github.com/Netflix/conductor.git
-```
-
-#### 2. Build the Docker Compose
-
-```shell
-$ cd conductor
-conductor $ cd docker
-docker $ docker-compose build
-```
-#### Note: Conductor supplies multiple docker compose templates that can be used with different configurations:
-
-| File | Containers |
-|--------------------------------|-----------------------------------------------------------------------------------------|
-| docker-compose.yaml | 1. In Memory Conductor Server 2. Elasticsearch 3. UI |
-| docker-compose-dynomite.yaml | 1. In Memory Conductor Server 2. Elasticsearch 3. UI 4. Dynomite Redis for persistence |
-| docker-compose-postgres.yaml | 1. In Memory Conductor Server 2. Elasticsearch 3. UI 4. Postgres persistence |
-| docker-compose-prometheus.yaml | Brings up Prometheus server |
-
-#### 3. Run Docker Compose
-
-```shell
-docker $ docker-compose up
-```
-
-Once up and running, you will see the following in your Docker dashboard:
-
-1. Elasticsearch
-2. Conductor UI
-3. Conductor Server
-
-You can access all three on your browser to verify that it is running correctly:
-
-Conductor Server URL: [http://localhost:8080/swagger-ui/index.html?configUrl=/api-docs/swagger-config](http://localhost:8080/swagger-ui/index.html?configUrl=/api-docs/swagger-config)
-
-![Conductor Server Home Page](img/tutorial/swagger.png)
-
-Conductor UI URL: [http://localhost:5000/](http://localhost:5000/)
-
-![Conductor Server Home Page](img/tutorial/conductorUIHome.png)
-
-### Potential problems
-
-1. Not enough memory
- 1. You will need at least 16 GB of memory to run everything. You can modify the docker compose to skip using
- Elasticsearch if you have no option to run this with your memory options.
- 2. To disable Elasticsearch using Docker Compose - follow the steps listed here: **TODO LINK**
-2. Elasticsearch fails to come up in arm64 based CPU machines
- 1. As of writing this article, Conductor relies on 6.8.x version of Elasticsearch. This version doesn't have an
- arm64 based Docker image. You will need to use Elasticsearch 7.x which requires a bit of customization to get up
- and running
-3. Elasticsearch remains in Yellow health
- 1. When you run Elasticsearch, sometimes the health remains in Yellow state. Conductor server by default requires
- Green state to run when indexing is enabled. To work around this, you can use the following property:
- `conductor.elasticsearch.clusteHealthColor=yellow` Reference: [Issue 2262](https://github.com/Netflix/conductor/issues/2262)
-
diff --git a/docs/docs/technicaldetails.md b/docs/docs/technicaldetails.md
index 54ece799cd..b226303305 100644
--- a/docs/docs/technicaldetails.md
+++ b/docs/docs/technicaldetails.md
@@ -1,3 +1,5 @@
+# Technical Details
+
### gRPC Framework
As part of this addition, all of the modules and bootstrap code within them were refactored to leverage providers, which facilitated moving the Jetty server into a separate module and the conformance to Guice guidelines and best practices.
This feature constitutes a server-side gRPC implementation along with protobuf RPC schemas for the workflow, metadata and task APIs that can be run concurrently with the Jersey-based HTTP/REST server. The protobuf models for all the types are exposed through the API. gRPC java clients for the workflow, metadata and task APIs are also available for use. Another valuable addition is an idiomatic Go gRPC client implementation for the worker API.
@@ -11,7 +13,7 @@ All the datastore operations that are used during the critical execution path of
### External Payload Storage
The implementation of this feature is such that the externalization of payloads is fully transparent and automated to the user. Conductor operators can configure the usage of this feature and is completely abstracted and hidden from the user, thereby allowing the operators full control over the barrier limits. Currently, only AWS S3 is supported as a storage system, however, as with all other Conductor components, this is pluggable and can be extended to enable any other object store to be used as an external payload storage system.
-The externalization of payloads is enforced using two kinds of [barriers](../externalpayloadstorage). Soft barriers are used when the payload size is warranted enough to be stored as part of workflow execution. These payloads will be stored in external storage and used during execution. Hard barriers are enforced to safeguard against voluminous data, and such payloads are rejected and the workflow execution is failed.
+The externalization of payloads is enforced using two kinds of [barriers](/externalpayloadstorage.html). Soft barriers apply when the payload is large enough to warrant external storage: such payloads are stored in external storage and fetched for use during execution. Hard barriers safeguard against voluminous data: payloads that exceed them are rejected, and the workflow execution is failed.
The payload size is evaluated in the client before being sent over the wire to the server. If the payload size exceeds the configured soft limit, the client makes a request to the server for the location at which the payload is to be stored. In this case where S3 is being used, the server returns a signed url for the location and the client uploads the payload using this signed url. The relative path to the payload object is then stored in the workflow/task metadata. The server can then download this payload from this path and use as needed during execution. This allows the server to control access to the S3 bucket, thereby making the user applications where the worker processes are run completely agnostic of the permissions needed to access this location.
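+
+The client-side flow can be sketched roughly as follows. This is an illustration only: the types, method names, and limits below are hypothetical stand-ins, not Conductor's actual client classes or configuration properties.
+
+```java
+// Illustrative sketch of the client-side externalization flow.
+// All names here (ConductorApi, UploadLocation, the limits) are hypothetical;
+// Conductor's real client classes and configured barrier limits differ.
+import java.net.URI;
+import java.net.http.HttpClient;
+import java.net.http.HttpRequest;
+import java.net.http.HttpResponse;
+
+public class ExternalPayloadSketch {
+
+    record UploadLocation(String signedUrl, String relativePath) {}
+
+    interface ConductorApi {
+        // Hypothetical server call that returns a pre-signed S3 URL
+        // plus the relative path at which the payload will live.
+        UploadLocation requestUploadLocation(long payloadSize);
+    }
+
+    static final long SOFT_LIMIT_BYTES = 1024 * 1024;       // example soft barrier
+    static final long HARD_LIMIT_BYTES = 10L * 1024 * 1024; // example hard barrier
+
+    /** Returns null if the payload can travel inline, else the external storage path. */
+    static String externalizeIfNeeded(ConductorApi api, byte[] payload) throws Exception {
+        if (payload.length > HARD_LIMIT_BYTES) {
+            // Hard barrier: the payload is rejected and the workflow execution fails.
+            throw new IllegalArgumentException("payload exceeds the hard limit");
+        }
+        if (payload.length <= SOFT_LIMIT_BYTES) {
+            return null; // small enough to be sent over the wire as-is
+        }
+        // Soft barrier: ask the server for a storage location. The server signs
+        // the URL, so the worker needs no S3 permissions of its own.
+        UploadLocation location = api.requestUploadLocation(payload.length);
+        HttpRequest put = HttpRequest.newBuilder(URI.create(location.signedUrl()))
+                .PUT(HttpRequest.BodyPublishers.ofByteArray(payload))
+                .build();
+        HttpClient.newHttpClient().send(put, HttpResponse.BodyHandlers.discarding());
+        // Only the relative path is persisted in the workflow/task metadata.
+        return location.relativePath();
+    }
+}
+```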
diff --git a/docs/docs/theme/base.html b/docs/docs/theme/base.html
deleted file mode 100644
index 9456fa9454..0000000000
--- a/docs/docs/theme/base.html
+++ /dev/null
@@ -1,134 +0,0 @@
- {%- block site_meta %}
- {% if page and page.is_homepage %}{% endif %}
- {% if config.site_author %}{% endif %}
- {% if config.site_favicon %}
- {% else %}{% endif %}
- {%- endblock %}
-
- {%- block htmltitle %}
- {% if page and page.title and not page.is_hompage %}{{ page.title }} - {% endif %}{{ config.site_name }}
- {%- endblock %}
-
- {%- block styles %}
- {%- for path in extra_css %}
- {%- endfor %}
- {%- endblock %}
-
- {%- block libs %}
- {% if page %}
- {% endif %}
- {%- endblock %}
-
- {%- block extrahead %} {% endblock %}
-
- {%- block analytics %}
- {% if config.google_analytics %}
- {% endif %}
- {%- endblock %}
-
- {# SIDE NAV, TOGGLES ON MOBILE #}
-
- {# MOBILE NAV, TRIGGLES SIDE NAV ON TOGGLE #}
-
- {# PAGE CONTENT #}