A predictive engine API deployment with AWS and serverless in minutes.

Christian Schulz
3 min read · Feb 23, 2019
Photo by Mika Baumeister on Unsplash

Introduction

I spent a couple of weeks working with AWS SageMaker versus AWS Lambda and came to the conclusion that AWS SageMaker has some drawbacks:

  • Every model gets its own endpoint backed by a running instance. (You can bring your own Docker container if you prefer, but this is not really handy.)
  • The deployment process demands a lot of configuration.
  • You pay for the endpoint instance as long as it is running; it makes no difference if the endpoint sits idle 20 hours a day.

On the other hand, you can deploy a machine learning model quickly with AWS Lambda, API Gateway, and the Serverless Framework, and you keep the freedom to do whatever you need as long as Lambda supports it. Layer support in AWS Lambda and the Serverless Framework makes it even easier.
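As an illustration, here is a minimal sketch of what such a Lambda handler could look like: it loads a pickled scikit-learn model and answers requests coming in through API Gateway. The file name model.pkl, the handler name predict, and the request format are assumptions for this sketch, not taken from the article.

import json
import pickle

# Load the serialized model once per container (outside the handler),
# so warm invocations skip the deserialization cost
with open("model.pkl", "rb") as f:
    model = pickle.load(f)

def predict(event, context):
    # With the Lambda proxy integration, API Gateway passes the request body as a JSON string
    body = json.loads(event["body"])
    features = [body["features"]]

    prediction = model.predict(features)

    return {
        "statusCode": 200,
        "body": json.dumps({"prediction": int(prediction[0])}),
    }

With the Serverless Framework, a handler like this would be wired to an HTTP endpoint in serverless.yml, and heavy dependencies such as scikit-learn would typically be packaged into a Lambda Layer.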

Deployment in 4 steps

  1. First of all, let’s build a classification model and serialize it.
import pandas as pd
import numpy as np
import pickle
from sklearn.linear_model import LogisticRegression
from sklearn.datasets import load_wine…
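The snippet is cut off above; continuing from those imports, a minimal sketch of how the training and serialization step might look (the dataset, model settings, and file name are assumptions based on the imports):

# Load the wine dataset bundled with scikit-learn
data = load_wine()
X, y = data.data, data.target

# Fit a simple logistic regression classifier
model = LogisticRegression(max_iter=5000)
model.fit(X, y)

# Serialize the trained model so it can be shipped with the Lambda function
with open("model.pkl", "wb") as f:
    pickle.dump(model, f)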
