autopilot-serverless-inference
Introduction

Amazon SageMaker Autopilot currently deploys generated models to real-time inference endpoints by default. In this repository, we show how to deploy Autopilot models trained in the ENSEMBLING and HYPERPARAMETER OPTIMIZATION (HPO) training modes to serverless endpoints instead.

The notebook in this folder implements the solution described in this blog post.

Dataset

In this example, we use the UCI Bank Marketing dataset to predict whether a client will subscribe to a term deposit offered by the bank. This is a binary classification problem.

Solution Overview

In the first part of the notebook, we launch two Autopilot jobs: one with the training mode set to ENSEMBLING and the other to HYPERPARAMETER OPTIMIZATION (HPO).
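A minimal sketch of how the two jobs could be launched with boto3. The bucket name, role ARN, S3 paths, and job names below are placeholders; the API's name for HPO mode is `HYPERPARAMETER_TUNING`.

```python
def build_automl_request(job_name, mode, bucket, role_arn):
    """Build a CreateAutoMLJob request for a given training mode.

    mode is 'ENSEMBLING' or 'HYPERPARAMETER_TUNING' (the API value for HPO).
    """
    return {
        "AutoMLJobName": job_name,
        "InputDataConfig": [{
            "DataSource": {"S3DataSource": {
                "S3DataType": "S3Prefix",
                "S3Uri": f"s3://{bucket}/input/train.csv",  # placeholder path
            }},
            "TargetAttributeName": "y",  # term-deposit label column
        }],
        "OutputDataConfig": {"S3OutputPath": f"s3://{bucket}/output"},
        "ProblemType": "BinaryClassification",
        "AutoMLJobObjective": {"MetricName": "F1"},
        "AutoMLJobConfig": {"Mode": mode},
        "RoleArn": role_arn,
    }

if __name__ == "__main__":
    import boto3  # imported here so the request builder stays dependency-free

    sm = boto3.client("sagemaker")
    for name, mode in [("bank-ensembling", "ENSEMBLING"),
                       ("bank-hpo", "HYPERPARAMETER_TUNING")]:
        sm.create_auto_ml_job(**build_automl_request(
            name, mode, "my-bucket",
            "arn:aws:iam::123456789012:role/SageMakerRole"))  # placeholder role
```

The request builder is kept as a plain function so both jobs differ only in the `Mode` field.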

Autopilot ensembling model to serverless endpoint

Autopilot generates a single model in ENSEMBLING training mode. We deploy this model to a serverless endpoint and then send an inference request with test data to it.

Deploying Autopilot Ensembling Models to Serverless Endpoints

Autopilot HPO models to serverless endpoints

In the second part of the notebook, we extract the three inference containers generated by Autopilot in HPO training mode, deploy each container to its own serverless endpoint, and send inference requests to the three endpoints in sequence.

Deploying Autopilot HPO Models to Serverless Endpoints
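Because a serverless endpoint hosts a single container, the three HPO inference containers are invoked one after another, with each endpoint's response becoming the next endpoint's request. A sketch of that chaining, with hypothetical endpoint names:

```python
def chain_inference(payload, endpoint_names, invoke):
    """Pass a payload through a sequence of endpoints in order.

    `invoke(endpoint_name, payload)` returns the endpoint's response body;
    each response feeds the next endpoint, mirroring the order of the HPO
    candidate's inference containers (feature transform -> algorithm ->
    inverse label transform).
    """
    for name in endpoint_names:
        payload = invoke(name, payload)
    return payload

if __name__ == "__main__":
    import boto3

    runtime = boto3.client("sagemaker-runtime")

    def invoke(name, body):
        resp = runtime.invoke_endpoint(
            EndpointName=name, ContentType="text/csv", Body=body)
        return resp["Body"].read()

    test_row = "57,services,married"  # truncated placeholder CSV row
    # Hypothetical endpoint names, one per HPO inference container.
    result = chain_inference(
        test_row,
        ["hpo-transform-ep", "hpo-algo-ep", "hpo-inverse-ep"],
        invoke)
```

Separating the chaining logic from the AWS call makes the sequence easy to verify without live endpoints.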


Additional References

Security

See CONTRIBUTING for more information.

License

This project is licensed under the MIT License.