Commit
tutorial docs section (#156)
ukclivecox authored Apr 21, 2022
1 parent a39dfa4 commit d65eb00
Showing 8 changed files with 42 additions and 3 deletions.
5 changes: 5 additions & 0 deletions docs/source/contents/inference-artifacts/index.md
@@ -10,6 +10,11 @@ To run your model inside Seldon you must supply an inference artifact that can b
- Tag
- Server Docs
- Example
* - Alibi-Detect
- MLServer
- `alibi-detect`
- [docs](https://docs.seldon.io/projects/alibi-detect/en/stable/)
- TBC
* - DALI
- Triton
- `dali`
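For context on the new row: the tag column (`alibi-detect` here) is what a `Model` lists under `requirements` so the scheduler can place it on a compatible server. A minimal sketch, assuming a hypothetical name and `storageUri`:

```yaml
apiVersion: mlops.seldon.io/v1alpha1
kind: Model
metadata:
  name: income-drift            # hypothetical resource name
spec:
  storageUri: "gs://my-bucket/income-drift"  # hypothetical artifact location
  requirements:
  - alibi-detect                # matches the tag in the table above
```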
10 changes: 10 additions & 0 deletions docs/source/contents/tutorials/index.md
@@ -0,0 +1,10 @@
# Tutorials

* [An introduction to building inference pipelines and using Seldon](./workflow.md)

```{toctree}
:maxdepth: 1
:hidden:
workflow.md
```
23 changes: 23 additions & 0 deletions docs/source/contents/tutorials/workflow.md
@@ -0,0 +1,23 @@
# Seldon Workflow Introduction

Seldon inference is built from atomic Model components. Models, as [shown here](../inference-artifacts/index), cover a wide range of artifact types, including:

* Core machine learning models, e.g. a Tensorflow model.
* Feature transformations that might be built with custom python code.
* Drift detectors.
* Outlier detectors.
* Adversarial detectors.

A typical workflow for a production machine learning setup might be as follows:

1. You create a Tensorflow model for your core application use case.
1. You test this model in isolation to validate it.
1. You create an SKLearn feature transformation component that runs before your model to convert the input into the correct form. You also create Drift and Outlier detectors using Seldon's open source Alibi-Detect library and test these in isolation.
1. You join these components together into a Pipeline for the final production setup, as sketched after the diagram below.


These steps are shown in the diagram below:

![Workflow](./workflow.png)
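As a rough sketch of step 4, the components might be joined with a `Pipeline` resource along these lines (names are hypothetical and assume each component is already deployed as a `Model` of the same name):

```yaml
apiVersion: mlops.seldon.io/v1alpha1
kind: Pipeline
metadata:
  name: production-pipeline     # hypothetical name
spec:
  steps:
  - name: preprocess            # SKLearn feature transformation
  - name: core-model            # core Tensorflow model
    inputs:
    - preprocess
  - name: drift-detector        # Alibi-Detect drift detector on the transformed features
    inputs:
    - preprocess
  - name: outlier-detector      # Alibi-Detect outlier detector
    inputs:
    - preprocess
  output:
    steps:
    - core-model
```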


Binary file added docs/source/contents/tutorials/workflow.png
1 change: 1 addition & 0 deletions docs/source/index.md
@@ -7,6 +7,7 @@
contents/about/index
contents/getting-started/index
contents/tutorials/index
contents/inference-artifacts/index
contents/kubernetes/index
contents/examples/index
2 changes: 1 addition & 1 deletion operator/config/serverconfigs/mlserver.yaml
@@ -31,7 +31,7 @@ spec:
name: agent
env:
- name: SELDON_SERVER_CAPABILITIES
value: "lightgbm,mlflow,python,sklearn,spark-mlib,xgboost"
value: "alibi-detect,lightgbm,mlflow,python,sklearn,spark-mlib,xgboost"
- name: SELDON_OVERCOMMIT_PERCENTAGE
value: "10"
- name: SELDON_SERVER_HTTP_PORT
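This capability list is what model `requirements` are matched against: the scheduler only places a model on a server replica whose capabilities cover every entry in the model's `requirements`. A hypothetical sketch of advertising an additional custom runtime (`my-custom-runtime` is made up for illustration):

```yaml
env:
- name: SELDON_SERVER_CAPABILITIES
  # extend the comma-separated list; a Model declaring
  # `requirements: ["my-custom-runtime"]` could then be scheduled here
  value: "alibi-detect,lightgbm,mlflow,python,sklearn,spark-mlib,xgboost,my-custom-runtime"
```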
2 changes: 1 addition & 1 deletion scheduler/Makefile
@@ -471,7 +471,7 @@ start-agent-local-mlserver:
--server-type mlserver \
--log-level debug \
--config-path ${PWD}/config \
--replica-config '{"inferenceSvc":"0.0.0.0","inferenceHttpPort":8080,"inferenceGrpcPort":8081,"memoryBytes":1000000,"capabilities":["lightgbm","python","sklearn","xgboost"],"overCommitPercentage":20}'
--replica-config '{"inferenceSvc":"0.0.0.0","inferenceHttpPort":8080,"inferenceGrpcPort":8081,"memoryBytes":1000000,"capabilities":["alibi-detect","lightgbm","python","sklearn","xgboost"],"overCommitPercentage":20}'


.PHONY: start-agent-local-triton
2 changes: 1 addition & 1 deletion scheduler/all-base.yaml
@@ -32,7 +32,7 @@ services:
- SELDON_SCHEDULER_PORT=${SCHEDULER_AGENT_PORT}
- MEMORY_REQUEST=${AGENT_MEMORY_REQUEST}
- SELDON_SERVER_TYPE=mlserver
- SELDON_SERVER_CAPABILITIES=lightgbm,mlflow,python,sklearn,spark-mlib,xgboost
- SELDON_SERVER_CAPABILITIES=alibi-detect,lightgbm,mlflow,python,sklearn,spark-mlib,xgboost

agent-triton:
build:
