- ML model inference serving template using FastAPI and Docker.
- Fast JSON serialization using `ujson` or `orjson`.
- Token Auth.
- Exception Handling.
- Model Versioning with APIRoute.
- Message Broker to support Request Batching.
```shell
# Run the API locally
cd fast-ml-api
bash run.sh

# Exercise the API with the test script
python tests/api_test.py

# Alternatively, start the server directly with uvicorn
uvicorn project_api.main:app
# (OR) deploy instead (see deploy.sh)
bash deploy.sh

# Press Ctrl+C to stop the server
```