Building and deploying an NLP model to AWS Lambda and Vercel

Steven Tey
Dec 5, 2020 · 4 min read

In this three-part series, we will teach you everything you need to build and deploy your own chatbot. This is the second part, covering how we deployed our NLP model to AWS Lambda and our frontend to Vercel.

We use a Jupyter Python 3 notebook as a collaborative coding environment for this project, plus standalone scripts for the web app development and deployment. All the code for this series is available in this GitHub repository.

Before we explain our building process, try out our fully deployed chatbot — maybe it can recommend your next movie :)

Deploying the NLP Model

“No machine learning model is valuable, unless it’s deployed to production.” — Luigi Patruno

Indeed — you can create the most advanced machine learning model known to mankind, but without actually deploying it to the web, it will only ever be accessible on your localhost.

And this was the main problem we faced when building out our chatbot. Here’s a tutorial on how we eventually deployed it successfully to the cloud.

Choosing the Right Cloud Provider

Since the IMDB dataset we were using was over 500 MB in size, we needed a cloud platform powerful enough to host our function.

Initially, we chose Google Cloud, but we quickly ran into a couple of complications with the CLI. This prompted us to switch over to AWS, more specifically AWS Lambda. The plan was to deploy the model as a Flask API on AWS Lambda and call it from the frontend.

Building the API Endpoint

In a model.py file, we started out by importing all the necessary libraries.
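A minimal sketch of what that import block looks like, assuming Flask with Flask-CORS for the API layer, boto3 for S3 access, and pandas plus scikit-learn for the recommendation logic (the scikit-learn pieces in particular are assumptions, not a record of our exact imports):

# model.py (sketch): library choices beyond Flask and Flask-CORS are assumptions
import boto3                       # S3 client for fetching the IMDB dataset
import pandas as pd                # tabular data handling
from flask import Flask, request, jsonify
from flask_cors import cross_origin
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

app = Flask(__name__)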

We then uploaded the dataset to an AWS S3 bucket and connected it to the model.py file.
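A sketch of that hookup using boto3; the bucket and key names here are placeholders:

# Hypothetical S3 hookup (bucket and key names are placeholders)
s3 = boto3.client("s3")
obj = s3.get_object(Bucket="moviebot-data", Key="imdb_dataset.csv")
df = pd.read_csv(obj["Body"])      # stream the CSV straight into a DataFrame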

Lastly, we added the rest of the model code that Pedro and Ahmed came up with and wrapped it all together with a `make_recommendation()` and a `get_recommendations()` function.
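The recommendation logic itself lives in the repository; a minimal sketch of how those two functions might be wired into the Flask app (the route path and query parameter are our own placeholders):

def get_recommendations(title):
    # Placeholder for the similarity logic Pedro and Ahmed wrote, e.g.
    # vectorize plot descriptions with TF-IDF and rank by cosine similarity
    return []

@app.route("/recommend", methods=["GET"])
@cross_origin()                    # allow calls from the Vercel-hosted frontend
def make_recommendation():
    title = request.args.get("title", "")
    return jsonify(get_recommendations(title))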

Here, we’re using `@cross_origin()` because we will be calling the API from the frontend, which will be hosted on a different origin (more on that later). Therefore, it is crucial to enable cross-origin resource sharing (CORS), which can be done using the Flask-CORS library.

Deploying to AWS Lambda

Here comes the fun part — deploying to AWS. To do that, we used Zappa — a Python framework that makes it super easy to build serverless Python applications using AWS Lambda + API Gateway.

Following this guide, we started out by installing Zappa and initializing a Zappa project from the command line (Terminal on a Mac):

$ pip install zappa
$ zappa init

We then configured our zappa_settings.json file.
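A sketch of what such a settings file looks like for this kind of deployment; the region, bucket name, runtime, and memory value are placeholders, but every key is a standard Zappa setting:

{
    "production": {
        "app_function": "model.app",
        "aws_region": "us-east-1",
        "profile_name": "default",
        "project_name": "moviebot",
        "runtime": "python3.8",
        "s3_bucket": "your-zappa-deploy-bucket",
        "memory_size": 1024
    }
}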

After making sure that all the necessary libraries are declared in the `requirements.txt` file using `pip freeze > requirements.txt`, it’s now time to deploy to the cloud!

Deploying with Zappa is super simple; all we needed to do was input the following in the command line:

zappa deploy production
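(For subsequent code changes, `zappa update production` pushes a fresh package without re-provisioning the API Gateway routes.)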

One problem we faced here was that the Lambda function kept running out of memory during deployment, so we had to manually check our AWS console to make sure that the allocated memory was sufficient.
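For reference, the allocated memory is controlled by the memory_size key in zappa_settings.json (as in the sketch above), so raising it there and redeploying is one way past this.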

Building the Frontend Interface

The frontend is an index.html file served by a small Flask web app, which we deployed to Vercel; the full frontend code is in the GitHub repository.

Since this is more of a machine learning project, we won't dive into too much detail on the frontend development side of things.

The main chunk of HTML sets up our movie bot's web interface, and a short JavaScript/jQuery snippet handles the chat interaction.

We then set up a simple index.py file to perform the GET requests to our AWS Lambda function.
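A minimal sketch of that glue file; the API Gateway URL, route names, and template path are placeholders:

# index.py (sketch): the API URL below is a placeholder for the endpoint
# that Zappa printed after `zappa deploy production`
import requests
from flask import Flask, jsonify, render_template, request

app = Flask(__name__)

API_URL = "https://<api-id>.execute-api.us-east-1.amazonaws.com/production/recommend"

@app.route("/")
def home():
    return render_template("index.html")   # assumes index.html sits in templates/

@app.route("/api/recommend")
def recommend():
    title = request.args.get("title", "")
    resp = requests.get(API_URL, params={"title": title})
    return jsonify(resp.json())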

Deploying the Frontend Web App

We chose to deploy our frontend web app on Vercel because of its simplicity and speed.

First, we made sure that all the necessary libraries were declared in the `requirements.txt` file using `pip freeze > requirements.txt`. Then, we created a vercel.json file.
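A vercel.json along these lines (a sketch, not necessarily our exact file) tells Vercel to build index.py with its Python runtime and route every request to it:

{
    "builds": [
        { "src": "index.py", "use": "@vercel/python" }
    ],
    "routes": [
        { "src": "/(.*)", "dest": "index.py" }
    ]
}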

Next, we had to install the Vercel CLI with the following command:

npm i -g vercel

We then set up our development environment with the following two commands in Terminal:

export FLASK_APP=index.py   
export FLASK_ENV=development

Now, we can check if everything looks good locally with the command `flask run`.

Looks great! It’s time to set up the production environment on Vercel for our app — we did that with the help of the handy Vercel CLI by running the command `vercel` in Terminal:

? Set up and deploy “~/Desktop/username/moviebot-vercel”? [Y/n] y
? Which scope do you want to deploy to? username
? Link to existing project? [y/N] n
? What’s your project’s name? venv
? In which directory is your code located? ./
> Upload [====================] 98% 0.0s
No framework detected. Default Project Settings:
- Build Command: `npm run vercel-build` or `npm run build`
- Output Directory: `public` if it exists, or `.`
- Development Command: None
? Want to override the settings? [y/N] n

Finally, we ran the production command to deploy our app to Vercel:

vercel --prod

That’s it! Our app is now live at the following URL: https://nlp-moviebot.vercel.app/

