Deploy a machine learning model by creating a Flask API that serves it. The API is deployed to Heroku with Docker.

challenge-api-deployment

Introduction:

This project is a BeCode Bouman group challenge spanning 5 days (deadline: 08/12/2020).

The project was done by Davy Nimbona (team leader), Manasa Devinoolu, Christophe Schellinck and Selma Esen.

Goal:

In this project we created a model to predict selling prices for new properties based on our dataset. The real estate data was scraped, cleaned and analyzed in our previous projects. The goal of this project is to create an API that takes a new property as input and returns a predicted price as output. The API is wrapped in a Docker image and deployed to Heroku.

The input:

{
    "area": int,
    "property-type": "APARTMENT" | "HOUSE" | "OTHERS",
    "rooms-number": int,
    "zip-code": int,
    "garden": Optional[bool],
    "equipped-kitchen": Optional[bool],
    "furnished": Opional[bool],
    "terrace": Optional[bool],
    "facades-number": Optional[int]
}

Area, property-type, rooms-number and zip-code are required (mandatory) in order to run the application.

The output:

If the input data is valid:

{
    "prediction": [float]
}

If the data is wrong or missing:

{
    "error": Optional[str]
}
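The input/output contract above suggests validation logic roughly like the following sketch. This is an illustration, not the repository's actual code: the function name, error messages and checks are assumptions based on the contract.

```python
# Hypothetical validation sketch; field names come from the input spec above,
# everything else (function name, messages) is an assumption.
REQUIRED = ("area", "property-type", "rooms-number", "zip-code")
PROPERTY_TYPES = {"APARTMENT", "HOUSE", "OTHERS"}

def validate(payload):
    """Return an {"error": ...} dict for invalid payloads, or None if valid."""
    for field in REQUIRED:
        if field not in payload:
            return {"error": f"missing required field: {field}"}
    if payload["property-type"] not in PROPERTY_TYPES:
        return {"error": "property-type must be APARTMENT, HOUSE or OTHERS"}
    return None

good = {"area": 120, "property-type": "HOUSE", "rooms-number": 3, "zip-code": 1000}
bad = {"area": 120, "property-type": "HOUSE"}
print(validate(good))  # None
print(validate(bad))   # {'error': 'missing required field: rooms-number'}
```

On success the API would then attach the model's output under the `"prediction"` key; on failure it returns the `"error"` dict directly.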

File structure:

.
├── ...
├── docker                    
│   ├── Dockerfile                           
├── pipeline                    
│   ├── model
│       ├── model.py
│       ├── model.pkl
│       ├── ready_to_model_df.csv
│   ├── predict
│       ├── prediction.py
│   ├── preprocessing 
│       ├── cleaning_data.py
│       │── test-dataframe.csv
├── Procfile
├── app.py
├── requirements.txt
├── README.md

Details:

The model.py file makes use of linear regression and dumps the regressor into a pickle file.

The prediction.py file loads the pickle file, feeds it the input data, and returns the predicted price for the given inputs.

The cleaning_data.py file handles cleaning and preprocessing: it prepares the data to feed the model and make predictions, and returns error messages if the user provides incorrect input.
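As a toy illustration of the pickle round-trip between the model file and prediction.py, here is a self-contained sketch. The hand-rolled linear model, its coefficients and the single `area` feature are stand-ins for the real regressor; only the dump/load/predict flow mirrors the description above.

```python
# Toy illustration of the pickle round-trip; the model class and numbers
# are assumptions, not the repository's actual regressor.
import pickle

class ToyLinearModel:
    def __init__(self, intercept, coef_per_m2):
        self.intercept = intercept
        self.coef_per_m2 = coef_per_m2

    def predict(self, rows):
        # price = intercept + coefficient * area, for each input row
        return [self.intercept + self.coef_per_m2 * r["area"] for r in rows]

# model side: fit (here: hard-coded) and dump the regressor
with open("model.pkl", "wb") as f:
    pickle.dump(ToyLinearModel(intercept=50_000, coef_per_m2=1_500), f)

# prediction side: load the pickle and predict a price
with open("model.pkl", "rb") as f:
    regressor = pickle.load(f)

print(regressor.predict([{"area": 100}]))  # [200000]
```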

Run

You can access the application via this link.

  • Home: "/"
  • Predict page: "/predict"
    • GET: Returns the data format you need to input
    • POST: Returns the predicted price or an error message in case of invalid input
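The routes above can be sketched as a minimal app.py. This is a hedged sketch, not the repository's actual handlers: the dummy price, error format and handler bodies are assumptions.

```python
# Minimal Flask sketch of the routes listed above; handler bodies are
# assumptions, and a real app would call the pickled model instead of
# returning a dummy price.
from flask import Flask, jsonify, request

app = Flask(__name__)

REQUIRED = ("area", "property-type", "rooms-number", "zip-code")

@app.route("/")
def home():
    return "alive"

@app.route("/predict", methods=["GET", "POST"])
def predict():
    if request.method == "GET":
        # Describe the expected input format
        return jsonify({"area": "int",
                        "property-type": "APARTMENT | HOUSE | OTHERS",
                        "rooms-number": "int",
                        "zip-code": "int"})
    payload = request.get_json(silent=True) or {}
    missing = [f for f in REQUIRED if f not in payload]
    if missing:
        return jsonify({"error": f"missing fields: {missing}"}), 400
    # Placeholder: the real app would run the model on the payload here
    return jsonify({"prediction": [123456.0]})
```

Running it locally (e.g. with `flask run`), a GET to /predict returns the input schema and a POST returns either the prediction or a 400 error.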

To run it on your local machine:

  • Clone the project
  • Install the requirements for this project by running:
pip install -r requirements.txt
  • Then run:
python app.py

Docker

Image creation:

docker build -f docker/Dockerfile . -t image_name:tag_name

Run the container:

docker run -it image_name:tag_name
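The repository's actual Dockerfile is not shown here; a plausible sketch for a Flask app deployed to Heroku might look like the following. The base image, gunicorn entry point and paths are all assumptions.

```dockerfile
# Hypothetical sketch of docker/Dockerfile; base image, server and paths
# are assumptions, not the repository's actual configuration.
FROM python:3.8-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
# Heroku injects $PORT at runtime; bind to it rather than a fixed port
CMD gunicorn --bind 0.0.0.0:$PORT app:app
```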

Heroku

You can find the link here.

  • Access the welcome page at /welcome.

  • Access the price prediction page at /predict (use Postman).
