Serverless Django


Using Zappa and AWS Lambda to deploy serverless Django apps

Serverless architecture has been one of the hot points of discussion concerning software development and deployment in recent years. This tutorial explains the concept of implementing serverless architecture in a Django app using Zappa and Amazon Web Services (AWS) Lambda.

Zappa requirements

To follow along with this tutorial, it is assumed you have the following:

  • AWS Lambda IAM credentials (follow this guide)
  • Some experience with Django
  • A Python development environment with Pipenv and Django setup

What does it mean to go serverless?

To go serverless simply means that you no longer need to manually maintain your own servers. Instead, you subscribe to a platform such as AWS Lambda that manages the workaround infrastructure for you. A bit of a misnomer, being serverless does not mean there are no servers, but rather that the management of servers, operating systems, and other related infrastructure are handled for you.

AWS Lambda

AWS Lambda is a popular function-as-a-service (FaaS) platform that does virtually all of the server heavy-lifting for you. As a bonus, you only pay for the time your functions are actually running.


What is Zappa?

Zappa is a DevOps toolbox designed to ease the workload developers face when deploying and managing serverless web applications compatible with the Web Server Gateway Interface (WSGI) on AWS Lambda and the AWS API Gateway. If you are familiar with using Laravel Vapor for managing Laravel applications, you’ll notice that Zappa serves a similar function for Python web frameworks like Django and Flask.

While Zappa has many functions as a deployment tool, here are a few of its most notable advantages:

  • Package your projects into Lambda-ready zip files and upload them to Amazon S3
  • Set up necessary AWS IAM roles and permissions
  • Deploy your application to various stages (dev, staging, prod)
  • Automatically configure your project’s API Gateway routes, methods, and integration responses
  • Turn your project’s API Gateway requests into valid WSGI, and return API Gateway-compatible HTTP responses

Next, we will walk through how to set up Zappa and AWS Lambda in a Django app.

Setting up our Django project with Zappa

Zappa supports Python 3.6, 3.7, and 3.8. Before we can set up our Django project, verify that you have a supported version of Python by running:

$ python3 --version

If the command errors out or reports an unsupported version, install (or downgrade to) one of the supported Python versions before continuing.

One issue I experienced was an error when running Django 2.2: an SQLite version conflict that surfaces when Zappa runs. To avoid this, you may need to use Django 2.1.9.
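To see which SQLite library your local Python build links against (a quick way to gauge whether you might hit the same conflict — note that the Lambda runtime may link an older library than your machine), you can query the stdlib sqlite3 module:

```python
import sqlite3

# Version of the SQLite C library this Python build is linked against.
# Django 2.2 raised the minimum supported SQLite version to 3.8.3.
print("SQLite library version:", sqlite3.sqlite_version)
print("As a tuple:", sqlite3.sqlite_version_info)
```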

Scaffold a Django 2.1.9 project with Zappa installed, as shown below:

mkdir djangoprojects && cd djangoprojects # create and navigate into a directory called djangoprojects
pipenv install --python 3.7 # set up a Pipenv environment pinned to Python 3.7
pipenv install django~=2.1.9 # install Django 2.1.9
pip3 install zappa # install Zappa with pip3 (I ran into an issue installing with pipenv, but pip3 worked just fine)
django-admin startproject zappatest # create the project
cd zappatest # navigate into the zappatest folder
pipenv shell # activate the pipenv shell
python3 manage.py runserver # serve the project locally

When the install is successful, the output should look like this:

[Image: django project with zappa]

Setting up AWS credentials

To set up AWS access keys locally on your computer, open up your AWS console, create an IAM user with Administrator access, and from the credentials section grab the access key ID as well as the secret access key.

Next, navigate to your home directory and create a .aws folder. Inside it, create a file called credentials and add your AWS access keys in this format:

cd ~ # navigate to your home directory
mkdir .aws # create a .aws folder
cd .aws # navigate into the created .aws folder
touch credentials # create a file named credentials

Open up the credentials file in a text editor of your choice (I used nano) and add the following:

[default]
aws_access_key_id = your_aws_access_key_id
aws_secret_access_key = your_aws_secret_access_key
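Since the credentials file is plain INI syntax, you can sanity-check that it parses correctly with Python's standard configparser; here is a small sketch using the placeholder values above:

```python
import configparser

# The AWS credentials file is INI-formatted, so configparser can read it.
# The values below are the same placeholders as in the file above.
sample = """\
[default]
aws_access_key_id = your_aws_access_key_id
aws_secret_access_key = your_aws_secret_access_key
"""

parser = configparser.ConfigParser()
parser.read_string(sample)
profile = parser["default"]
print(profile["aws_access_key_id"])        # -> your_aws_access_key_id
print("aws_secret_access_key" in profile)  # -> True
```

To check your real file, swap read_string for parser.read(os.path.expanduser("~/.aws/credentials")).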

Before saving and exiting, do not forget to replace your_aws_access_key_id and your_aws_secret_access_key with the actual values of the key provided in the AWS console.

Integrating Zappa for deployment

Once you’re ready to set up Zappa on your project, initialize the zappa_settings.json file by running zappa init.

When you do this, you will be asked a few questions, including whether you want your application to be deployed globally. I recommend declining since this is only a demo project. For the rest of the prompts, select the default options.

At the end of the configuration process, your zappa_settings.json file should look like this:

{
    "dev": {
        "django_settings": "zappatest.settings",
        "profile_name": "default",
        "project_name": "zappatest",
        "runtime": "python3.7",
        "s3_bucket": "zappa-bqof1ad4l"
    }
}

Finally, you’ll need to specify which region you want your application deployed in. To do this, open up zappa_settings.json and add your chosen aws_region to the dev object, for example:

{
    "dev": {
        ...
        "profile_name": "default",
        "aws_region": "us-east-2",
        ...
    }
}
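Because zappa_settings.json is ordinary JSON, you can programmatically check that each stage defines the keys you care about before deploying. A small validation sketch (the required-key list here is an assumption; adjust it to your project):

```python
import json

# Validate a zappa_settings.json-style document: every stage ("dev",
# "staging", ...) should define the keys we rely on.
settings_text = """
{
  "dev": {
    "django_settings": "zappatest.settings",
    "profile_name": "default",
    "project_name": "zappatest",
    "runtime": "python3.7",
    "aws_region": "us-east-2",
    "s3_bucket": "zappa-bqof1ad4l"
  }
}
"""

required = {"django_settings", "s3_bucket", "aws_region"}
settings = json.loads(settings_text)
for stage, config in settings.items():
    missing = required - config.keys()
    print(stage, "missing:", sorted(missing))  # -> dev missing: []
```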

Django, Zappa, AWS … blast off 🚀

To deploy your application to AWS Lambda in dev mode, run:

$ zappa deploy dev

Note that when you visit your application’s URL at this stage, you will get a DisallowedHost error message because Django does not recognize the URL where the app is being served from:

[Image: deploy zappa aws lambda]

To fix this, add the host to the ALLOWED_HOSTS array in zappatest/settings.py as shown below:

...
ALLOWED_HOSTS = ['127.0.0.1', '<your-api-gateway-host>.execute-api.us-east-2.amazonaws.com']
...

Next, update the remote deployment by running:

$ zappa update dev

You should now see the standard 404 Django page:

If you were working on a project such as a simple API, this should be enough to get you started.

For more complex projects, if you visit the /admin/ route to access the django-admin interface, you will see the following result:

This is because our deployed project has not been configured to handle static files. We will discuss this configuration in the next section.

Handling static files

Create bucket

First, create an S3 bucket with a unique name (you’ll need to remember this name for later):

Allow access from other hosts

In the “permissions” tab for your bucket, navigate to the CORS rules settings and add the following configuration to allow access from other hosts:

[
    {
        "AllowedHeaders": ["*"],
        "AllowedMethods": ["GET"],
        "AllowedOrigins": ["*"],
        "MaxAgeSeconds": 3000
    }
]

Install the django-s3-storage package

Open up the terminal in your project’s root folder once more and install the django-s3-storage package by running:

$ pip3 install django-s3-storage

Add Django S3 to your installed apps

Open up zappatest/settings.py and include django_s3_storage as such:

INSTALLED_APPS = [
    ...
    'django_s3_storage',
]

Configure Django S3 Storage

Place the following block of code anywhere in your settings.py, and then replace ‘zappatest-static-files’ with whatever name you gave your bucket:

S3_BUCKET_NAME = "zappatest-static-files"
STATICFILES_STORAGE = "django_s3_storage.storage.StaticS3Storage"
AWS_S3_BUCKET_NAME_STATIC = S3_BUCKET_NAME
# serve the static files directly from the specified s3 bucket
AWS_S3_CUSTOM_DOMAIN = '%s.s3.amazonaws.com' % S3_BUCKET_NAME
STATIC_URL = "https://%s/" % AWS_S3_CUSTOM_DOMAIN
# if you have configured a custom domain for your static files, use:
# AWS_S3_PUBLIC_URL_STATIC = ""

Push static files to bucket

Next, update the changes and push the static files to the bucket by running:

$ zappa update dev
$ zappa manage dev "collectstatic --noinput"

Render page

Finally, open up the admin page once more and your page should render correctly:

[Image: successful zappa aws django page render]


Conclusion

In this article, we explored serverless architecture in a Django app using Zappa and Amazon Web Services (AWS) Lambda.

We started by getting our Django project up and running locally with pipenv and pip3. Then, we set up our Zappa configurations and deployed to AWS Lambda in dev mode. Finally, we added support for static files with AWS S3 to make sure our web app looks and functions the way we want it to.

While we covered a lot in this article, there is still much to learn about serverless Django. To continue your education, I recommend that you check out the official Zappa documentation on the Python Package Index (PyPI) website, as well as the AWS Lambda docs.


Deploying Serverless Django

We will see how to deploy a Django application onto AWS Lambda using Zappa and use AWS Aurora-Serverless as the DB.

AWS Lambda is a serverless computing platform by Amazon that is completely event-driven and automatically manages the computing resources. It scales automatically depending on the requests the application receives.

Zappa is a Python framework used for deploying Python applications onto AWS Lambda. Zappa handles all of the configuration and deployment for us automatically.

And Aurora Serverless is an on-demand, auto-scaling relational database system by Amazon AWS (presently compatible only with MySQL). It automatically starts up and shuts down the DB depending on demand.

Install and Configure the Environment

Configure AWS Credentials

First, before using AWS, make sure you have a valid AWS account and your AWS environment variables (access keys) at hand.

Then, create a .aws folder in your home directory.

Now, create a file called credentials and store the aws_access_key_id and aws_secret_access_key in it. To find these access credentials:

  • Go to IAM dashboard in AWS console
  • Click on Users
  • Click on your User name
  • Then, go to Security credentials tab
  • Go down to Access keys
  • Note down the Access key ID. The Secret access key is only visible when you create a new user or a new access key, so note down both the access_key_id and secret_access_key at user-creation time, or create a new access key so that you get both values.

Go to Django app

After setting up the AWS credentials file, let us go to the Django project; here we use Pollsapi as the Django project. Now go inside the pollsapi app in this repo.

Create a virtual env for the project and install the project's requirements into it.

Install & Configure Zappa

Next, install Zappa:

pip install zappa

After installing Zappa, let us initialise it by running zappa init,

which will ask us for the following:

  • Name of environment - default ‘dev’
  • S3 bucket for deployments. If the bucket does not exist, zappa will create it for us. Zappa uses this bucket to hold the zappa package temporarily while it is being transferred to AWS lambda, which is then deleted after deployment.

    (It's better to create an S3 bucket yourself, which we will later also use to host the static files of our application)

  • Project’s settings (which will pick up ‘pollsapi.settings’)

Zappa will automatically find the correct Django settings file and the python runtime version

After accepting the info, a zappa_settings.json file gets created, which looks like:
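A typical generated file looks something like this (the bucket name and runtime are placeholders and will differ for your setup):

```json
{
    "dev": {
        "django_settings": "pollsapi.settings",
        "profile_name": "default",
        "project_name": "pollsapi",
        "runtime": "python3.7",
        "s3_bucket": "your-deployment-bucket"
    }
}
```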

Now, before deploying, we have to mention the aws_region (where we want to deploy the Django app). Make sure that you have the S3 bucket and the Lambda function in the same region.

Now let us deploy the app by running zappa deploy dev,

which will show us

Now, when we click on the link we will see this

So, we will add the host to our ALLOWED_HOSTS in settings.py.
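For example (the API Gateway hostname below is a placeholder; use the host printed in your own zappa deploy output):

```python
# pollsapi/settings.py -- the API Gateway host is a placeholder;
# replace it with the host from your own `zappa deploy` output.
ALLOWED_HOSTS = [
    '127.0.0.1',
    'abc123xyz.execute-api.us-east-1.amazonaws.com',
]
```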

After this, we have to update zappa by running zappa update dev,

and after updating the app when we refresh the page we see,

The static files are not available!

Serving Static Files

For serving static files we use S3 bucket(which we have created earlier).

We have to enable CORS for the S3 bucket, which enables browsers to fetch resources/files from different origins. Go to the S3 bucket's properties, then to Permissions, open the CORS configuration, and add a rule allowing GET requests.
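For example, a minimal rule set that allows GET requests from any origin (newer S3 consoles take JSON like the following; older consoles take the equivalent XML):

```json
[
    {
        "AllowedHeaders": ["*"],
        "AllowedMethods": ["GET"],
        "AllowedOrigins": ["*"],
        "MaxAgeSeconds": 3000
    }
]
```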

Configure Django for S3

Install the django-s3-storage package and also add it to the requirements file.

Now update the settings.py file to add ‘django_s3_storage’ to INSTALLED_APPS

and also add these lines at the bottom
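The lines to add mirror the usual django-s3-storage static-files setup; a sketch, with the bucket name as a placeholder for the bucket created earlier:

```python
# settings.py -- the bucket name is a placeholder for the bucket created earlier.
S3_BUCKET_NAME = "pollsapi-static-files"
STATICFILES_STORAGE = "django_s3_storage.storage.StaticS3Storage"
AWS_S3_BUCKET_NAME_STATIC = S3_BUCKET_NAME

# serve the static files directly from the specified S3 bucket
AWS_S3_CUSTOM_DOMAIN = "%s.s3.amazonaws.com" % S3_BUCKET_NAME
STATIC_URL = "https://%s/" % AWS_S3_CUSTOM_DOMAIN
```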

Push the static files to the cloud

we can push the static files by running

python manage.py collectstatic

and then update the deployment with

zappa update dev

and after updating zappa, let us check by refreshing the page

Setup Django

Setup Serverless MySQL Database

Let us create an Aurora Serverless MySQL database.

Go to AWS console and go to RDS and create a new Database

select Amazon Aurora and choose the edition which is Aurora serverless and click next

Select the Serverless radio button.

And in DB cluster identifier enter MyClusterName

Set the Master username and password and remember them for later use. And click Next.

In next page, Configure advanced settings , in Capacity setting section, select the Minimum & Maximum Aurora capacity units.

And in Network & Security section, under Virtual Private Cloud (VPC) list, select Create new VPC. Under Subnet group list, select Create new DB Subnet Group. Under VPC security groups list, select Create new VPC security group.

And Click Create database

Now our Serverless Database is created, click on the db-cluster name to see the details

We will use the VPC, Subnet Ids and the security-group later.

Connect Django to MySQL DB

Now our MySQL db is created, we have to link it to our app.

We use a MySQL client library (such as mysqlclient) to connect Django to the MySQL database server,

and add it to the requirements file.

Now we need to update the DATABASES setting in the settings.py file.
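A typical MySQL DATABASES block for Django looks like this (all values are placeholders; take the host and credentials from the Aurora cluster details page):

```python
# settings.py -- all values are placeholders; take HOST, USER, and PASSWORD
# from the Aurora cluster details page.
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': 'pollsdb',
        'USER': 'your_master_username',
        'PASSWORD': 'your_master_password',
        'HOST': 'your-cluster.cluster-abc123.us-east-1.rds.amazonaws.com',
        'PORT': '3306',
    }
}
```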

Configure Zappa Settings for RDS

Now go to Lambda Management console and click on functions and click on our lambda function(pollsapi)

Then we will go to the configuration page, Under the Network section, in Virtual Private Cloud (VPC)

select the same VPC as in Aurora DB

As Aurora Serverless DB clusters do not have publicly accessible endpoints, our MyClusterName RDS can only be accessed from within the same VPC.

Then in Subnets select all the subnets as in Aurora DB

and for Security groups select a different security group than the one on Aurora DB.

Update Security Group Endpoint

Now we have to update the security group Inbound endpoint.

In the RDS console, go to databases section and click on our DB name, which will take us to

Now click on the security group and we will be taken to the Security Group page

Go to Inbound tab in the bottom and click on the edit button

Here click on Add Rule and enter Type as MYSQL/Aurora & in Source enter the Security Group Id of the Lambda function and save it.

Setup the Database

Now let us create a management command in our polls app

Now let us update zappa by running zappa update dev

And create the database using the management command

which will show us

We have to migrate now, with zappa manage dev migrate

Now let us create the admin user

Now let us check by logging in the admin page


We can check the Lambda logs by running zappa tail

This is part 1 of Serverless Deployments for Django,

Check out part 2 Deploying Serverless Django with Zeit and RDS Postgres

Check out part 3 Deploying completely serverless Django with Apex Up and Aurora Serverless

Check out part 4 Deploying Django in AWS Fargate and using Aurora Serverless as the database



Deploy a REST API using Serverless, Django and Python

I started using Django seriously 2 years ago, and I think it's an exceptional framework.

In addition to its core strength, Django has a vast list of add-ons and supporting libraries. One of those gems is called the Django Rest Framework (or DRF for short), a library that gives developers an easy-to-use, out-of-the-box REST functionality that plugs seamlessly with Django’s ORM functionality.

But what if you want to do this serverless-ly? In this post, I'll talk about deploying serverless Django apps with the Serverless Framework!

Django: the SQL beast

Django is powerful, but it’s also heavily dependent on a SQL database like MySQL or PostgreSQL. No matter how hard I tried, I couldn’t find any Django DB engine that is able to work on top of AWS DynamoDB.

The solution I'm suggesting here is built from 2 components:

  1. Using RDS. RDS is a managed SQL service, but not completely serverless; you pay for idle, and it does not scale automatically.

  2. Using a VPC. When using RDS, this is a necessary step for security. When adding VPC into the mix, your Lambda must also run inside the VPC, which leads to slow starts and a complicated configuration.

But, all that is too complicated for my demo. I wanted something quick and dirty.

Using SQLite

SQLite here I come!

Ok, so SQLite is actually not that dirty. It's the right tool for constrained environments (like mobile), or when you don't need to save a lot of data and you want to keep everything in memory.

A globally shared configuration might be a good use case. Have a look at the following diagram:


  • You have a lambda function that requires configuration in order to function, the configuration is saved in a SQLite DB located in S3 bucket.
  • The Lambda pulls the SQLite on startup and does its magic.
  • On the other end, you have a management console that does something similar, it pulls the SQLite DB, changes it and puts it back
  • Pay attention that only a single writer is allowed here, otherwise things will get out of sync.
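As an illustration (not the demo's actual code), the pull/modify/push cycle described above can be sketched with the standard library alone; here a local directory stands in for the S3 bucket, and in a real deployment you would swap the shutil.copy calls for boto3's download_file/upload_file:

```python
import os
import shutil
import sqlite3
import tempfile

# Sketch of the single-writer pull/modify/push cycle. A local "remote"
# directory stands in for the S3 bucket.
remote = tempfile.mkdtemp()
workdir = tempfile.mkdtemp()
remote_db = os.path.join(remote, "config.sqlite3")
local_db = os.path.join(workdir, "config.sqlite3")

# Seed the "remote" DB once (the management console's job).
conn = sqlite3.connect(remote_db)
with conn:
    conn.execute("CREATE TABLE config (key TEXT PRIMARY KEY, value TEXT)")
    conn.execute("INSERT INTO config VALUES ('greeting', 'hello')")
conn.close()

# 1. Pull: the Lambda copies the DB down on cold start.
shutil.copy(remote_db, local_db)

# 2. Read/modify locally.
conn = sqlite3.connect(local_db)
with conn:
    conn.execute("UPDATE config SET value = 'hi' WHERE key = 'greeting'")
    value = conn.execute(
        "SELECT value FROM config WHERE key = 'greeting'").fetchone()[0]
conn.close()
print(value)  # -> hi

# 3. Push: only the single writer copies the DB back.
shutil.copy(local_db, remote_db)
```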

How long will it take us to develop this? No time at all: we can use this one from Zappa. Let's call it Serverless SQLite, or 'SSQL' for short.

Let's get this thing started

Let’s define what we're building here:

  • It's going to be a Django app with the appropriate Django admin for our models

  • You should be able to log into the admin and add or change configuration.

  • The user should be able to call a REST API created by DRF to read configuration details, something very similar to this Python rest API.

You can find all the code for the demo here.

I'm assuming you already know how to create a Django app, so we’ll skip the boring stuff and concentrate on the extra steps required to set up this app.

WSGI configuration

It’s something small, but that’s what’s doing the magic. In serverless.yml, the WSGI configuration points to the app that Django exposes.

SSQL configuration

Under settings.py, a configuration was added which loads the SSQL DB driver:

But when testing locally, I do not want to connect to any S3 bucket. It slows down the operation. Therefore, we'll make a check to verify whether we are running a Lambda environment or not. If not, then we'll load the regular SQLite driver:
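One common way to make that check (an assumption for illustration, not necessarily the repo's exact code) is to probe for the environment variables the Lambda runtime sets:

```python
import os

def running_in_lambda():
    # The AWS Lambda runtime sets AWS_LAMBDA_FUNCTION_NAME (among other
    # AWS_* variables); locally it is normally absent.
    return "AWS_LAMBDA_FUNCTION_NAME" in os.environ

if running_in_lambda():
    print("use the S3-backed SQLite engine")
else:
    print("use the regular local SQLite engine")
```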

I prefer not to run , because Django already has wonderful management CLI support. Instead, I like to run .

As part of its configuration, SSQL requires a bucket name. You can create the bucket manually and set the name in the settings, but note that by default the Lambda function is granted read and write permissions on all S3 buckets. You should scope the policy to your S3 bucket ARN instead.

WhiteNoise configuration

WhiteNoise allows our web app to serve its own static files, without relying on nginx, Amazon S3 or any other external service.

We’ll use this library to serve our static admin files. I’m not going to go over all the configuration details here, but feel free to follow them on your own. Make sure the static files are part of the Lambda package.

A tale of a missing SO

While trying to make it work, I encountered a strange error: Unable to import module ‘app’: No module named ‘_sqlite3’. After some digging, I found out that the Lambda environment does not contain the _sqlite3 shared library (.so) that Python's sqlite3 module requires. 😲

Luckily, Zappa has provided a compiled SO which is packaged as part of the deployment script.

Deployment script

Let's review the steps:

  • Collect all static files ✔️

  • Migrate our remote DB before code deployment ✔️

  • Create a default admin user with password ✔️

  • Add to the mix ✔️

  • ✔️

You have a deploy script located under folder.

So how do I prepare my environment locally?

  1. Create a virtual env for your python project

  2. Run DB migration: python manage.py migrate

  3. Create a super user for the management console: python manage.py createsuperuser

  4. Run the server locally: python manage.py runserver

  5. Go to http://127.0.0.1:8000/admin/ and log in to the management console; add a configuration

  6. Call the configuration endpoint and see if you get the configuration back


We covered how to use Django with the Serverless Framework, using SQLite as our SQL database, which was served from a S3 bucket.

I hope you enjoyed the journey! You are more than welcome to ask questions below, and/or fork the repository.


Tutorial: Katie McLaughlin - Deploying Django on Serverless Infrastructure

Deploying a Django project on AWS Lambda using Serverless (Part 1)

As a follow-up to a post where we looked at the most common questions about Django in the cloud, I'd now like to help you deploy your Django app on Amazon Web Services and make you more independent from DevOps and CloudOps engineers. There are many options for doing that, but I'd like to show one of them, and I hope that in the end you will be able to deploy your Django app on AWS Lambda using Serverless.

I was motivated by Daniil Bratchenko's article Don’t Let Software Vendors Dictate Your Business Processes to start writing this blog post.

It is so hard to find Software that will fit all your business processes as all companies are unique. This is why many companies have decided to set up dedicated teams building Software for their specific business processes and needs. From my personal point of view, Django App on AWS Lambda using Serverless is a good solution for cases like that.

Also, you can use this approach for prototyping your projects running them at their early stage.

There are a few advantages and disadvantages of using this approach.

Advantages of using AWS Lambdas:

  • cost (AWS Lambda is cheaper compared to AWS EC2);
  • simplicity in running and maintaining;
  • scalability;
  • quick deployment.

The disadvantages:

  • AWS Lambda requires some extra time (a cold start) before running your app;
  • size limit for deployment package;
  • API Gateway limitation (30-sec timeout, 6 Mb response body size);
  • it might cost more than AWS EC2 if there are too many requests.

Prepare AWS infrastructure

Probably, you are aware of a variety of AWS services required for web applications. In order to deploy a Django project on AWS Lambdas you should prepare your AWS infrastructure.
There is a list of AWS services I use for my Django project:

  1. Lambdas to run our wsgi application
  2. API Gateway to handle HTTP requests and send them to Lambdas
  3. S3 buckets for Lambda deployments and storing static files
  4. CloudFront distribution for serving static files from S3 bucket
  5. RDS for a database (I use Postgres)
  6. VPC with subnets
  7. EC2 for Security Groups
  8. IAM for roles and policies
  9. CloudWatch for logs

AWS Lambdas and API Gateway will be created automatically by Serverless. I will try to walk you through the process of creating all the necessary AWS resources in my following blog posts.

Create a Django project

The django-admin startproject command allows us to create a simple Django project; in addition to that, there are some great Cookiecutter projects that can help you start your project easily (for example, Cookiecutter Django). I use the default CLI command in this example.

Configure requirements

There are many options to store your project requirements, for example a plain requirements file or a Pipfile. You can choose any one of these options; I'm using a plain requirements file here.

  • create file in a root directory of the project
  • add the following libraries to file:
  • create and activate virtual environments

Choose your preferred tool for managing virtual environments (like conda, pyenv, virtualenv, etc.)

Create Django app

  • create app using Django command
  • create file in folder with the following lines:
  • create folder in the root directory of the project
  • add an image file to folder, for example

  • update

Configure environment variables:

  • create file in the root directory of the project
  • configure the following variables:

Create configuration for local development and production

  • update in folder with the following lines:
  • create and files inside folder on the same level as
  • add the following lines to :
  • add the following lines to :
  • update file in folder with the following lines:
  • update file in folder with the following lines:
  • update with the following lines:
  • create a folder inside
  • create file inside folder with the following lines:

Run Django project locally

  • set environment variable with a path to Django local configuration file
  • create a superuser in the database

Then provide a username, user email, password, and confirm the password

Create serverless configuration

  • install serverless plugins
  • create serverless.yaml file with the following configuration:
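The exact configuration isn't shown here, but a minimal serverless.yaml for a Django project using the serverless-wsgi and serverless-python-requirements plugins might look like this (service name, Python runtime, region, and the WSGI module path are placeholders for your own values):

```yaml
service: django-serverless

provider:
  name: aws
  runtime: python3.8
  region: us-east-1
  stage: dev

plugins:
  - serverless-wsgi
  - serverless-python-requirements

custom:
  wsgi:
    # Dotted path to your project's WSGI application object
    app: yourproject.wsgi.application

functions:
  app:
    # serverless-wsgi generates this handler module at packaging time
    handler: wsgi_handler.handler
    events:
      - http: ANY /
      - http: 'ANY /{proxy+}'
```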

Use Docker for deploying your Django project to AWS Lambda using Serverless

  • run Amazon Linux 2 docker image:
  • install the necessary Unix dependencies:
  • install node.js version 14:
  • create python and pip aliases:
  • update pip and setuptools:
  • move to project directory
  • install requirements inside docker container:
  • set environment variable with a path to django production configuration file
  • create a superuser in the database

Then provide a username, user email, password, and confirm the password

  • collect static files to AWS S3 bucket

If you get from you should add to environment variables :

  • install serverless packages from package.json
  • deploy your Django project to AWS Lambda using Serverless

Your response will look like that:

Now, your Django Project will be available at this URL:



Here is a link to a GitHub repository with the code shown in this blog post.

If you want to learn more about Django projects on AWS Lambdas, follow me on Twitter (@vadim_khodak). I plan to write a post showing how to create all the necessary AWS resources for this Django project, how to add a React.js client to the Django project, and more.


Django serverless


As web developers, we have barely scratched the surface of what’s possible with Amazon Web Services (AWS) Lambda and its cousins at Google and Azure. As deployment options for web apps have multiplied in the past few years, AWS Lambda continues to stand out as a powerful, simple way to run a web app with minimal maintenance and cost — if there is low traffic.

If you’re new to AWS Lambda — but not new to web development — it can be helpful to think of a Lambda function as the combination of:

  • a packaged server program, 
  • environment settings,
  • a message queue, 
  • an auto-scaling controller, 
  • and a firewall.

I’m impressed by the people who decided to combine all of these ideas as a single unit, price it by the millisecond, then give away a reasonable number of compute-seconds for free. It’s an interesting 21st century invention. 

Now that you understand the basics of Lambda, I’ll explain a solution to host a Wagtail app using Lambda at its core. Wagtail is a framework used for building Django apps focused on content management. I’ll also explore how to deploy and manage the app using Terraform, an infrastructure configuration management tool. AWS Lambda alone doesn’t expose a website or provide a database; you’ll need Terraform (or a similar tool) to deploy the rest and connect the pieces together in a repeatable and testable way.

There is already a demo of deploying Wagtail to AWS Lambda, but that tutorial creates a non-production-ready site. I’d like to achieve a more flexible and secure deployment using Terraform instead of Zappa. By the end, we’ll be able to create a web site that runs locally, deploy it as a Lambda function behind AWS API Gateway, and keep the state in AWS RDS Aurora and S3.

We’ll divide this process into nine sections:

  1. Create a Wagtail Site Locally
  2. Create the Production Django Settings
  3. Create the Lambda Entry Point Module
  4. Generate the Zip File Using Docker
  5. Run Terraform
  6. Publish the Static Resources
  7. Set Up the Site
  8. Evaluation
  9. Conclusion

All the code created for this blog is publicly available on GitHub at:

I recommend either cloning that repository or starting from scratch and following the next steps in detail until you start the Terraform deployment. Note: The Terraform steps are too complex to include here in full detail.

1. Create a Wagtail Site Locally

Using Ubuntu 20.04 or similar, create a folder called wagtail_lambda_demo. Create and activate a Python virtual environment (venv) inside it, like this:

mkdir wagtail_lambda_demo
cd wagtail_lambda_demo
python3 -m venv venv
. venv/bin/activate

Install the wagtail library in that environment using pip install wagtail.

Wagtail isn’t a complete content management system (CMS) on its own; it only becomes interesting once you have added your own code. For this blog, I followed Wagtail’s tutorial, “Your first Wagtail site” — starting after the command — resulting in a simple blog site hosted locally. You can follow the Wagtail tutorial without worrying about the security of local passwords and secrets because the local site and the Lambda site will run different databases. I recommend following Wagtail’s tutorial; it teaches a lot about both Django and Wagtail that will help further in this process.


An interesting feature of Wagtail sites is the bird icon in the bottom right corner. It opens a menu for editing the page, as well as other administrative functions.


The bird menu is a clue that solidifies the purpose of Wagtail: while Django allows arbitrary database models, Wagtail is focused on editable, hierarchical, publishable web pages. There’s a lot of demand for web apps that start with that narrow focus, but there’s also a high demand for apps with a different focus, so it’s great that Django and Wagtail are kept distinct.

Before moving on to step 2, ensure your app works locally.

 2. Create the Production Django Settings

Replace your project's settings/production.py with the following Python code:

from .base import *
import os
import urllib.parse

DEBUG = False
SECRET_KEY = os.environ['DJANGO_SECRET_KEY']

DATABASES = {
    'default': {
        'ENGINE': os.environ['DJANGO_DB_ENGINE'],
        'NAME': os.environ['DJANGO_DB_NAME'],
        'USER': os.environ['DJANGO_DB_USER'],
        'PASSWORD': os.environ['DJANGO_DB_PASSWORD'],
        'HOST': os.environ['DJANGO_DB_HOST'],
        'PORT': os.environ['DJANGO_DB_PORT'],
    }
}

ALLOWED_HOSTS = []
for spec in os.environ['ALLOWED_HOSTS'].split():
    if '://' in spec:
        host = urllib.parse.urlsplit(spec).hostname
        ALLOWED_HOSTS.append(host)
    else:
        ALLOWED_HOSTS.append(spec)

STATIC_URL = os.environ['STATIC_URL']

# The static context processor provides STATIC_URL to templates
TEMPLATES[0]['OPTIONS']['context_processors'].append(
    'django.template.context_processors.static')

DEFAULT_FROM_EMAIL = os.environ['DEFAULT_FROM_EMAIL']
EMAIL_BACKEND = 'django.core.mail.backends.smtp.EmailBackend'
EMAIL_HOST = os.environ['EMAIL_HOST']
EMAIL_HOST_USER = os.environ.get('EMAIL_HOST_USER', '')
EMAIL_HOST_PASSWORD = os.environ.get('EMAIL_HOST_PASSWORD', '')
EMAIL_PORT = int(os.environ.get('EMAIL_PORT', 587))
EMAIL_USE_TLS = True

As shown above, the development and production settings differ in the following ways:

  • Production settings come from environment variables, following the 12-factor app methodology;
  • The is different;
  • The production site uses a different database;
  • The development site allows an arbitrary header, but the production site locks it down for security;
  • In production, a separate service hosts the static assets for speed and cost savings;
  • The development site doesn’t send emails, but the production site does.
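The ALLOWED_HOSTS handling in the production settings accepts either bare hostnames or full URLs. Extracted as a standalone sketch (the helper name is mine, not part of the settings module):

```python
import urllib.parse

def parse_allowed_hosts(spec_string):
    """Accept bare hostnames or full URLs, as the production settings loop does."""
    hosts = []
    for spec in spec_string.split():
        if '://' in spec:
            # Full URL: keep only the hostname part
            hosts.append(urllib.parse.urlsplit(spec).hostname)
        else:
            hosts.append(spec)
    return hosts

print(parse_allowed_hosts('example.com https://www.example.com/'))
# → ['example.com', 'www.example.com']
```

This lets a single environment variable carry both plain hosts and the site's canonical URL.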

The majority of the environment variable values will be provided by settings on the Lambda function, while a few will be provided by a secret stored in AWS Secrets Manager.

3. Create the Lambda Entry Point Module

Create a Python file for the Lambda entry points with the following content:

# The lambda_venv_path module is generated by `lambda.dockerfile`.
import lambda_venv_path  # noqa


def hello(event, context):
    """Entry point for minimal testing"""
    if event.get('install_secrets'):
        install_secrets()
    return {
        'message': 'Hello from the Wagtail Lambda Demo',
        'event': event,
        'context': repr(context),
    }


def install_secrets():
    """Add the secrets from the secret named by ENV_SECRET_ID to os.environ"""
    import os
    secret_id = os.environ.get('ENV_SECRET_ID')
    if not secret_id:
        return
    import boto3
    import json
    session = boto3.session.Session()
    client = session.client('secretsmanager')
    response = client.get_secret_value(SecretId=secret_id)
    overlay = json.loads(response['SecretString'])
    os.environ.update(overlay)


def manage(event, context):
    """Entry point for running a management command.

    Supported formats:
    - "migrate"
    - ["migrate"]
    - {"command": ["migrate"]}
    """
    if isinstance(event, dict):
        command = event['command']
    else:
        command = event
    if isinstance(command, str):
        command = command.split()
    install_secrets()
    from django.core.wsgi import get_wsgi_application
    get_wsgi_application()  # Initialize Django
    from django.core import management
    return management.call_command(*command)


_real_handler = None


def lambda_handler(event, context):
    """Entry point for web requests"""
    global _real_handler
    if _real_handler is None:
        install_secrets()
        from apig_wsgi import make_lambda_handler
        from django.core.wsgi import get_wsgi_application
        application = get_wsgi_application()
        _real_handler = make_lambda_handler(application)
    return _real_handler(event, context)

This module provides three entry points for three different Lambda functions. Note: Don’t try to run it; it’s designed to run in AWS Lambda only. Here’s what the module makes available to AWS Lambda:

  • The lambda_venv_path module will be generated automatically. It needs to be imported first because it alters sys.path to make the other libraries importable.
  • The hello function is a Lambda entry point used for minimal testing. It’s useful for verifying that AWS can successfully import the module and optionally read the environment secrets.
  • The install_secrets function:
    • gets the name of an AWS secret from the environment,
    • reads the value, 
    • and imports the value into the environment. 
      • The secret value includes the Django secret key and the database password.
  • The manage function provides access to Django management commands. Once everything is set up, you’ll be able to invoke management commands using the Test tab of the AWS Lambda console.
  • The lambda_handler function is a Lambda entry point that adapts events from the AWS API Gateway to the Django WSGI interface. It creates the handler once per process and stores it in a module global called _real_handler.
At this stage, the Python code is ready for packaging and you can continue on to step 4.
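The event-normalization logic in manage() is worth a closer look: it accepts three event shapes and reduces them all to a list of command arguments. A standalone sketch of just that logic:

```python
def normalize_command(event):
    """Mirror of the event normalization in manage(): accepts
    "migrate", ["migrate"], or {"command": ["migrate"]}."""
    if isinstance(event, dict):
        command = event['command']
    else:
        command = event
    if isinstance(command, str):
        # A bare string may carry arguments, e.g. "migrate --fake"
        command = command.split()
    return command

print(normalize_command('migrate --fake'))
# → ['migrate', '--fake']
```

All three shapes end up as the argument list passed to Django's management.call_command.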

4. Generate the Zip File Using Docker

Next you’ll need to generate a zip file for AWS Lambda. Docker is a great way to produce that zip file because Docker lets you build in an environment that’s very similar to the Lambda environment. The app won’t use Docker in production; Docker is only used for building the zip file.

Create a file called lambda.dockerfile next to the Lambda entry point module:

FROM amazonlinux:2.0.20210219.0 AS build-stage

RUN yum upgrade -y
RUN yum install -y gcc gcc-c++ make freetype-devel yum-utils findutils openssl-devel git zip

ARG PYTHON_VERSION_WITH_DOT=3.8
ARG PYTHON_VERSION_WITHOUT_DOT=38
RUN amazon-linux-extras install -y python${PYTHON_VERSION_WITH_DOT} && \
    yum install -y python${PYTHON_VERSION_WITHOUT_DOT}-devel

ARG INSTBASE=/var/task
WORKDIR ${INSTBASE}
RUN python${PYTHON_VERSION_WITH_DOT} -m venv venv

COPY requirements.txt .
RUN venv/bin/pip install \
    -r requirements.txt \
    psycopg2-binary \
    apig-wsgi

# Create lambda_venv_path.py, which mimics the venv's sys.path entries.
RUN INSTBASE=${INSTBASE} venv/bin/python -c \
    'import os; import sys; instbase = os.environ["INSTBASE"]; print("import sys; sys.path[:0] = %s" % [p for p in sys.path if p.startswith(instbase)])' \
    > ${INSTBASE}/lambda_venv_path.py

COPY blog blog
COPY home home
COPY search search
COPY mysite mysite
COPY static static

# Remove artifacts that won't be used.
# If lib64 is a symlink, remove it.
RUN rm -rf venv/bin venv/share venv/include && \
    (if test -h venv/lib64 ; then rm -f venv/lib64 ; fi)

# NOTE: the zip file name here is an assumption; match whatever your Makefile expects.
RUN zip -r9q /tmp/app.zip *

# Generate a filesystem image with just the zip file as the output.
FROM scratch AS export-stage
COPY --from=build-stage /tmp/app.zip /

This dockerfile expresses a lot in only a few lines. Most of it is straightforward if you’re familiar with dockerfiles. Here are a few notes on what’s going on: 

  • The amazonlinux base image mirrors the environment AWS Lambda runs in.
  • The amazon-linux-extras install and yum install commands install Python 3.8, the preferred version of Python on AWS Lambda at the time of writing. When AWS makes a new version of Python available in Lambda, you’ll need to update the version here too.
  • AWS Lambda runs a container with the zip file unpacked in the /var/task folder, so this dockerfile creates a virtual environment and installs everything under /var/task.
  • This dockerfile generates a tiny module called lambda_venv_path. AWS Lambda doesn’t use the full Python virtual environment created by this dockerfile, so the generated module mimics the virtual environment by adding the sys.path entries that would normally be present there.
  • The Lambda entry point module imports lambda_venv_path before anything else, making it possible for AWS to load the library code as if everything were running in the virtual environment.
    • Note: the generated module is usually very simple and predictable. In your own projects, you might want to simplify and use a static module instead of a generated one.
  • Each Django application folder needs to be copied into the image. The example dockerfile copies the blog, home, search, and mysite apps prepared by the Wagtail tutorial, along with the generated static folder. If you change the set of apps to install, update the COPY command list.
  • This is a two-stage dockerfile:
    • The first stage is called build-stage; it starts from an Amazon Linux image, runs the build steps, and produces the zip file.
    • The second stage is called export-stage; it starts from an empty image (a virtual filesystem containing no files) and grabs a copy of the zip file from the build stage.
    • The docker build -o command outputs the full contents of the export stage, which consists of just the zip file.
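To make the path manipulation concrete, here's what the inline Python generator in the dockerfile emits. The site-packages path below is illustrative; the real one depends on the Python version and install base:

```python
# Reproduce the dockerfile's generator expression outside Docker.
# sys_path here stands in for the real sys.path inside the build venv.
instbase = '/var/task'
sys_path = [
    '/usr/lib/python3.8',                                # system path: filtered out
    '/var/task/venv/lib/python3.8/site-packages',        # venv path: kept
]
generated = "import sys; sys.path[:0] = %s" % [
    p for p in sys_path if p.startswith(instbase)
]
print(generated)
# → import sys; sys.path[:0] = ['/var/task/venv/lib/python3.8/site-packages']
```

The one-line module written to /var/task simply prepends the venv's library folders to sys.path at import time.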

Once lambda.dockerfile exists, run the following command to build the zip file:

DOCKER_BUILDKIT=1 docker build -o out -f lambda.dockerfile .

If successful, that command produces the zip file in the out folder. You can open the zip file to ensure it contains installed versions of Django, Wagtail, your code, and all the necessary libraries. It even includes compiled shared library code as .so files. The zip file is now ready to use as the code for Lambda functions.

To make the project easy to build, I recommend you create a Makefile similar to the following:

default: out/app.zip

# NOTE: the zip, settings, and S3 prefix names below are assumptions; adjust
# them to match your project.
out/app.zip: lambda.dockerfile requirements.txt mysite/settings/production.py static/staticfiles.json
	mkdir -p out && \
	DOCKER_BUILDKIT=1 docker build -o out -f lambda.dockerfile .

static/staticfiles.json:
	rm -rf static && \
	../venv/bin/python manage.py collectstatic --no-input

upload-static: static/staticfiles.json
	aws s3 sync static "s3://$(shell cd tf; terraform output -raw static_bucket)/static" \
		--exclude staticfiles.json --delete

.PHONY: default upload-static

Use the standard make command to run the Makefile.

5. Run Terraform

We are almost ready to deploy, but first let’s talk about Terraform. Terraform is a great tool for deploying code and keeping cloud infrastructure maintained. Terraform is designed to be portable: if I write the configuration correctly, the configuration I create for my AWS account can be applied to other AWS accounts without changes. Terraform has the following responsibilities:

  • keep track of the IDs of resources it creates,
  • take care of dependencies between resources,
  • and keep the resources it manages in sync with the configuration. 

Unfortunately, the Terraform code created for this article is too much to include inline. You can find it on GitHub at:

If you created the previous code from scratch, you can now copy the tf folder into your own project. The tf folder should be a subfolder of the folder that contains your Wagtail and Django files and folders, including manage.py and lambda.dockerfile.

Next, you’ll need to:

There are a few variables you can set; to do so, see If you want to change any variables from the default, create a file called and format it like this:

default_from_email = "[email protected]"
vpc_cidr_block = ""

From the tf folder, run Terraform to deploy:

terraform init && terraform apply

Unfortunately, Terraform cannot complete successfully without some manual help. Terraform creates many resources, including the RDS database cluster that will host the data. However, because the created cluster is in a virtual private cloud (VPC) subnet, there isn’t a simple way for Terraform to connect to it and create the application role and database —  even though Terraform knows the location of the cluster and its master password. 

To fix that, once Terraform creates the RDS cluster, do the following in the AWS console:

  1. Locate the secret in AWS Secrets Manager that holds the cluster’s master credentials. Copy the ARN of that secret. It might be too long to fit on one line, so make sure you copy all of it.
    Note: the secret ARN is not very sensitive on its own; it’s only the ID of a sensitive value.
  2. Locate the RDS cluster. From the Actions menu, choose Query. A dialog will pop up.
  3. Choose "Connect with a Secrets Manager ARN" in the database username selector.
  4. Paste the secret ARN you copied earlier into the field that appears.
  5. Enter postgres as the name of the database and click the Connect to database button.
  6. Locate the secret in AWS Secrets Manager that holds the application database password. Click the Retrieve secret value button. Copy the value of the password field.
  7. In the database query window, enter the following statements, replacing PASSWORD with the password you copied in the previous step:
create role appuser with password 'PASSWORD' login inherit;
create database appdb owner appuser;

Click the Run button. Once it succeeds, the database will be ready to accept connections from the Lambda functions. You may need to run terraform apply more than once, as some AWS resources are eventually consistent.

When Terraform completes successfully, it generates a few outputs, including:

  • the URL of the Wagtail site (you can customize the URL later), and
  • the initial superuser password (you should change it after logging in).

6. Publish the Static Resources

Once Terraform completes successfully, in the folder containing the Makefile, type:

make upload-static

The upload-static target performs the following steps:

  • runs the Django collectstatic management command to populate the static folder,
  • consults Terraform to identify the S3 bucket that should contain the static files, and
  • uses aws s3 sync to efficiently upload the static files.

Once that step is complete, the static files are available through Amazon CloudFront.

7. Set Up the Site

In the AWS console, visit the Lambda service. Terraform added 3 new functions:

  • Use the hello function as a quick test to verify that AWS can call the Python code without calling Django. Use the Test tab to call it with an empty JSON object ({}) as the event.
  • Use the hello function again to verify the Python code can read the secret containing environment passwords. Use the Test tab to call it with {"install_secrets": true} as the event.
  • Use the manage function to complete installation.
    • Call it with "migrate" as the event (it’s JSON, so include the double quotes) to run the Django migrations on the database. If it times out, try running it again.
    • Call it with the superuser-creation management command as the event to create the initial superuser.

The lambda_handler function provides the main service used by API Gateway.
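If you prefer invoking the manage function from your workstation instead of the console, the event body can be built and sent with boto3. The function name below is a placeholder; use whatever name Terraform created in your account:

```python
import json

def build_manage_payload(command):
    """Build the {"command": [...]} event body accepted by manage()."""
    return json.dumps({'command': list(command)}).encode()

# Hypothetical invocation (requires AWS credentials; function name is assumed):
# import boto3
# client = boto3.client('lambda')
# client.invoke(FunctionName='wagtail-demo-manage',
#               Payload=build_manage_payload(['migrate']))

print(build_manage_payload(['migrate']))
# → b'{"command": ["migrate"]}'
```

The dict form is the safest of the three event shapes because it survives arguments containing spaces.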

Now you’re ready to view the site. Get the and from the command; they are shown near the end of the output. Visit the URL specified by . The page will likely be blank, as there’s no content yet, but the title of the page — shown in the browser title — should be . Add to the URL and log in as user with the . Add some text content and verify your content is published live. Congratulations!

8. Evaluation

In the process of writing this blog, I learned a lot about Wagtail, AWS and Django production settings. The goal was to learn how close I could get to publishing a production Django site using only serverless patterns with minimal costs. The project was a success: the demo site works, it can scale fluidly to thousands of concurrent users — all built on AWS resources that are easy to maintain and change — and I learned about AWS features and limitations.

 My total AWS bill for this project — even with all the different approaches I tried, as well as a few mistakes — was $5.39 USD:

  • RDS: $4.65 USD
  • VPC: $0.72 USD
  • S3: $0.02 USD

All other services I used stayed within the AWS free tier. The RDS cost was higher than I intended because I initially forgot to configure the database to scale to zero instances. Once I set that up correctly, the RDS costs grew very slowly and only increased on days I used the site.

The VPC cost is the only cost that draws my attention. The VPC cost consists only of VPC endpoint hours. AWS charges $0.01 per hour per endpoint, and it was necessary to create 2 endpoints in order to run RDS in a way that lets me scale to zero database instances, while keeping with recommended security practices — especially storing secrets in Secrets Manager. Two pennies per hour doesn’t sound like much, but that adds up to $14.40 USD in 30 days.

$14.40 USD is significantly more than the price of a small, always-on virtual machine (VM) that does the same thing as I built, but the VM would have no scaling ability or redundancy. Therefore, the minimal cost of building something that depends on RDS in serverless mode seems to be around $15 USD/month, even if no one uses the service. I conclude that if I want to provide a service to customers based on AWS, I can’t offer a single-tenant scalable solution for less than $15 USD per month. Knowing the minimal costs is helpful for identifying the kinds of business models that can be created on AWS. 

One additional takeaway from this project is how the VPC concept works at AWS. A VPC is like a private network. I came to understand how well VPC components are isolated, making them behave much like physical network components. In particular, if you want a Lambda function in a VPC to be able to connect to the Internet, you need more VPC components than you would need with Elastic Compute Cloud (EC2).

EC2 instances in a VPC can be assigned ephemeral public IP addresses, making them easy to wire up to the Internet. To get the same thing with a Lambda function in a VPC, you need a few more AWS resources: you need a private subnet — preferably 2 or more — that routes to a NAT Gateway with its own Elastic IP address, which forwards packets to a public subnet connected to an Internet Gateway. No resources can be removed from that stack. The NAT Gateway assigns a public IP address to packets, but can’t connect to the Internet on its own; the Internet Gateway connects a subnet to the Internet, but can’t assign a public IP address to packets on its own. The correct VPC structure for Lambda is less complicated than the official AWS documentation makes it seem.

9. Conclusion

A few technical issues still remain in this project:

  • The service relies on API Gateway — which has a fixed 30 second timeout — even though the Lambda functions API Gateway calls are allowed to run as long as 15 minutes. 
    • In online forums, AWS has repeatedly refused to raise the limit.
    • This means the site appears to time out on the first request because it’s waiting for database instances to spin up, which takes about 35 seconds. The request does succeed, but the user doesn’t get to see the result. 
    • A possible solution is to switch from API Gateway to AWS Elastic Load Balancer (ELB) — which allows up to an hour timeout — but ELB has a fixed hourly cost.
  • If you try to upload an image, the image won’t be stored in a permanent place. Wiring up media storage on S3 is left as an exercise for the reader.
  • I tried to use a Terraform provider that would set up the database using the RDS Data API to avoid the VPC database configuration issue, but that provider isn’t finished, especially its “role” resource.
  • It would’ve been nice to avoid the VPC costs, but Aurora Serverless only runs on a VPC.
  • When a Lambda function tries to reach an AWS service from a VPC, and there’s no endpoint in that VPC for that AWS service, the function hangs until Lambda kills it.
    • Timeouts exist in the code, but they don’t seem to work when running in Lambda. This makes it difficult to understand why AWS isn’t working until you are aware of the broken timeouts.
  • Because the SES (Simple Email Service) VPC endpoint accepts only SMTP connections (not SES API connections), email libraries that use the SES API do not work from inside an AWS VPC.
    • My solution was to fall back to the default Django SMTP connector — possibly losing some functionality — but it did work as expected.

There are a few alternatives I would be interested in trying/researching further:

  • Wagtail can run on MySQL instead of Postgres; Aurora Serverless supports MySQL.
    • The next preview release of Aurora Serverless is currently MySQL only, which suggests AWS may have a preference for MySQL.
  • I’d consider replacing API Gateway with Elastic Load Balancer, fixing the 30 second timeout issue, despite ELB’s significant fixed costs.
  • Is there an alternative to VPC Endpoints that’s free with low usage? Is it possible to make an API Gateway (or something else) listen to an IP address in a VPC and use that to proxy to AWS services?
  • I would like to try building a similar stack on Azure and GCP.

I hope this blog inspires you to build Django apps and connect with us at Six Feet Up! We love solving complex problems, learning how to build things better and networking with fellow do-gooders. There is so much left to do.


Deploying Django RESTful APIs as Serverless Applications with Zappa

TL;DR: In this article, you will see how to build and deploy a serverless Django API with Zappa.


Serverless technology was developed from the idea of allowing developers to build and run applications without server management. Servers, of course, still exist, but developers and administrators don't need to manage them. This concept was heralded by AWS when it open-sourced its Serverless Application Model (SAM) framework. It then went on to release Chalice, a Flask-like Python framework for building serverless applications. You can learn about building Chalice applications and APIs in this article.

With all the goodness that we are presented with, deploying serverless applications can still be difficult when using frameworks like Django, and debugging in production environments can be hard too. Hence, there is a need for a way to deploy painlessly that also provides a means to debug and monitor logs in production.

Zappa is an open-source Python tool created by Rich Jones for developers to create, deploy, and manage serverless Python software on API Gateway and AWS Lambda infrastructure. Zappa allows you to carry out deployments with speed and ease in the most cost-efficient way possible.

In this article, you will learn how to develop serverless Django RESTful APIs, use Zappa to deploy them to AWS, and add authentication with Auth0.

Features and Benefits of Zappa

Some of the benefits provided by Zappa include:

  • Automated deployment: With a single command, you can package and deploy your Python project as an AWS serverless app. You can also update and destroy your app with a single command in the different staging environments. You can even deploy your application in different AWS regions.
  • No server maintenance: in compliance with the serverless ecosystem, Zappa takes away the maintenance of your servers from your list of tasks.
  • Out-of-the-box Scalability: Zappa enables quick infrastructure scaling.
  • Great Usability: Zappa provides a single settings file where you can do all your configuration as well as environment variables. It can be stored in JSON or YAML format.
  • SSL certificates: Zappa provides a dedicated certify command which grants you access to SSL certificates from different providers such as Let's Encrypt and AWS

Using Zappa with Django

To use Zappa for your Django projects, ensure you have met the following prerequisites:

  • Have a machine running Python 3.6, 3.7, or 3.8
  • Can create a virtual environment with either venv or virtualenv
  • Have an AWS account.
  • Configured AWS credentials
  • Have basic experience with Python and the Django framework

If you don't have any of the specified Python versions installed, you can download Python here. You can use Venv, Virtualenv, or Pyenv to create virtual environments. If you do not have an AWS account yet, you can sign up for free. You may follow these instructions to configure your AWS credentials.

Building a Django RESTful API

Companies need an employee management system to keep details about each employee. It could take the form of a web application where new staff can be added or removed and staff details can be viewed.

An employee would have attributes like:

  • name
  • email
  • age
  • salary
  • location

Hence, in this section, you will create an API that could do the following actions:

  • List all employees
  • Show the details of an employee
  • Add a new employee
  • Edit the details of an employee
  • Delete an employee

The endpoints could then be as follows (the paths shown use a conventional REST layout):

  • List all employees: GET /employees/
  • Show the details of an employee: GET /employees/<id>/
  • Add a new employee: POST /employees/
  • Edit the details of an employee: PUT /employees/<id>/
  • Delete an employee: DELETE /employees/<id>/

Installing Dependencies

Create a folder where the files of this project will reside and give it any name you like. Then, navigate into the new folder:

Create a new virtual environment and activate it:

Use pip to install the needed dependencies on the CLI as shown below:

We used the pip package manager that comes pre-installed with Python to install the following packages:

  • Django: version 2.1.9 of the Django package. Installing this earlier version helps avoid an SQLite version conflict that could arise when deploying with Zappa
  • djangorestframework: version 3.10.0 of the Django REST framework package for creating APIs
  • zappa: the Zappa package for deployment

Scaffolding the Project

Now, you will create the Django project and application. Use the django-admin startproject command to create a new Django project:

Then, create a new app inside the parent directory with manage.py startapp. This app will contain the API code:

Next, open the settings.py file of the Django project and add rest_framework and your new app to the list of installed apps.

Creating the Model, Database Migrations and Serializer

Here, you will create a model for the employee API that will determine which employee details are saved in the database. Go to the models.py file inside the application directory and add the following code:

Next, you need to create migrations to seed the model into the database:

Then, you will create a serializer that will allow Django views to return an appropriate response to users' requests. Create a new serializers.py file in the application directory and add the following code:

Creating Views and URLs

You need to make views that will handle the logic of the HTTP actions when users make requests to the API endpoints. The views will also interact with models via the serializer.

Go to the views.py file of the application directory and add the following code:

The code above has two views. The first one lists all employees and also allows the creation of a new employee. The other view allows the retrieval, update, and deletion of a particular employee; it operates on a single model instance.

Next, create URLs for the two views. Navigate to the urls.py file in the project sub-directory and add the following code:

In the code above, you added paths to the two views created earlier. The second URL accepts an integer parameter, pk, which is the primary key. This allows a single model instance to be fetched when requests are made at the endpoint.

Now, the API has been successfully created. Next, Zappa will be used to deploy the API to AWS Lambda.

Testing Locally

Use the python manage.py runserver command in your terminal to start the built-in Django server so you can access the API in your browser.

Navigate to http://127.0.0.1:8000/ in your browser. You should see a page that shows a list of accessible endpoints like this:

API running in local environment

If you navigate to an endpoint such as /employees/, you will see the Django REST framework browsable API:

Browsable API running in local environment

You have confirmed that the API works well in your local environment.

Deploying with Zappa

Before you deploy the Django API with Zappa, you have to initialize Zappa in the project:

zappa init

When you run zappa init, you should get command-line output that looks like the following:

Follow the prompts to name your staging environment and private S3 bucket where the project files will be stored.

  • It will ask whether you want to deploy globally or not.
  • Then, it will detect your application type as Django.
  • It will also locate your Django settings module.
  • Finally, it will create a zappa_settings.json file in your project directory.

By the end of the prompts, you should get an output like this:

The zappa_settings.json file is crucial for your deployment because it contains the deployment settings.

The file may, however, contain the information for other regions if you choose global deployment while initializing Zappa.
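For reference, a zappa_settings.json generated by zappa init looks roughly like the sketch below. The project name, settings module, region, and bucket name are placeholders for the values zappa init chose for your project:

```json
{
    "dev": {
        "aws_region": "us-east-1",
        "django_settings": "employee_project.settings",
        "profile_name": "default",
        "project_name": "employee-project",
        "runtime": "python3.8",
        "s3_bucket": "zappa-abc123xyz"
    }
}
```

Each top-level key ("dev" here) is a deployment stage; you can add "staging" or "production" entries alongside it.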

You will deploy your project with the zappa deploy <stage-name> command, where the stage name could be dev or any other stage name you used when initializing Zappa:

Upon successful deployment, you should get a URL where you can access your API on the internet. It should look like this:

Copy the URL generated for your application and add it to the list of ALLOWED_HOSTS inside the settings.py file of the project:

Then, update the deployment:

Keep in mind that you need to update the deployment with zappa update whenever you make changes in your project.

Serving Static Files

At this point, you may need to serve static files so that default Django styles can be active in the deployed stage.

Creating an S3 bucket

  • Go to the Amazon S3 console and select Create bucket
  • Give your bucket a unique name. The name must start with a lowercase character or number. It must not contain an uppercase character. The length should be between 3 and 63 characters.

Note that you can't change a bucket name after creating it.

  • Select the AWS region where you want your bucket to be hosted.
  • Under the Bucket settings for Block Public Access, make sure you uncheck Block all public access
  • Note the name of the S3 bucket that you created

Go to the Permissions tab of your S3 bucket and navigate to the Cross-origin resource sharing (CORS) section. Click the Edit button and add the following configuration:
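The original configuration was not preserved, so here is a hedged, permissive example in the JSON format the S3 console expects; tighten AllowedOrigins for production use:

```json
[
    {
        "AllowedHeaders": ["*"],
        "AllowedMethods": ["GET", "HEAD"],
        "AllowedOrigins": ["*"],
        "ExposeHeaders": []
    }
]
```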

Configuring Django Settings for Handling Static Files

Install a storage library for Django to work with S3:

Next, open the settings.py file and add the following:
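The original settings fragment was lost; here is a hedged sketch assuming the django-storages package (add 'storages' to INSTALLED_APPS as well). The bucket name is a placeholder for the one you created:

```python
# Placeholder bucket name; replace with your own.
AWS_STORAGE_BUCKET_NAME = 'your-static-bucket'
AWS_S3_CUSTOM_DOMAIN = AWS_STORAGE_BUCKET_NAME + '.s3.amazonaws.com'

# Route collectstatic output and STATIC_URL through S3.
STATICFILES_STORAGE = 'storages.backends.s3boto3.S3Boto3Storage'
STATIC_URL = 'https://' + AWS_S3_CUSTOM_DOMAIN + '/static/'
```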

Now, update the deployment with the static files.

The browsable API should render with its styles appropriately like below:

Browsable API

Adding Authentication with Auth0

You need an Auth0 account to use Auth0 APIs for authenticating your Django APIs with Auth0. You can create an Auth0 account if you do not have one.

Click the Create API button on your Auth0 dashboard to create an API. Then, go to the Permissions tab of the newly created API to define a scope allowing read access.

Now, install the libraries needed for authentication:

Next, open the settings.py file of the project and add the remote-user backend alongside the default model backend in the list of authentication backends.

The code above allows Django to link the Django users database with your Auth0 users database.

Next, update the middleware in the file as thus:

The middleware connects the user in the Auth0 Access Token to the user in the Django authentication system, while the authentication middleware handles authentication. Make sure you add the RemoteUserMiddleware after Django's AuthenticationMiddleware to avoid an ImproperlyConfigured error.

Next, navigate to the application directory and create a new utilities file (for example, utils.py). Then, add the following:
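A hedged sketch of that function, following the Auth0 Django quickstart pattern. The django import is shown commented out so the mapping logic itself can run standalone:

```python
# In a real project:
# from django.contrib.auth import authenticate

def jwt_get_username_from_payload_handler(payload):
    """Map the token's 'sub' claim to a Django-style username."""
    username = payload.get('sub').replace('|', '.')
    # authenticate(remote_user=username)  # creates the remote user in Django
    return username

print(jwt_get_username_from_payload_handler({'sub': 'auth0|12345'}))
# → auth0.12345
```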

The code above consists of a function that accepts the decoded payload of the Access Token. It maps the sub field from the Access Token to the username variable. The authenticate function imported from django.contrib.auth creates a remote user in the Django authentication system, and a User object is returned for the username.

Next, navigate back to the settings.py file of the project and add the Django REST framework settings, including JWT authentication, like this:
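A hedged sketch of those settings, modeled on the Auth0 Django quickstart (require authentication by default and accept JWTs):

```python
# DRF settings: deny anonymous access by default, authenticate via JWT.
REST_FRAMEWORK = {
    'DEFAULT_PERMISSION_CLASSES': (
        'rest_framework.permissions.IsAuthenticated',
    ),
    'DEFAULT_AUTHENTICATION_CLASSES': (
        'rest_framework_jwt.authentication.JSONWebTokenAuthentication',
        'rest_framework.authentication.SessionAuthentication',
    ),
}
```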

In the code above, we have defined settings for permissions, i.e., what can be accessed after authentication. We also defined the type of authentication that we want as JWT.

Next, set the JWT_AUTH variable for the JWT authentication library in Django.

Add the following code to your settings.py file. Make sure you replace the API identifier value with your Auth0 API identifier and the domain value with your Auth0 domain.
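A hedged sketch of the JWT_AUTH settings, again following the Auth0 Django quickstart; the dotted handler paths assume the utilities file lives in an app named employee_api, so adjust them to your layout:

```python
AUTH0_DOMAIN = 'YOUR_AUTH0_DOMAIN'      # e.g. your-tenant.auth0.com
API_IDENTIFIER = 'YOUR_API_IDENTIFIER'  # the identifier of your Auth0 API

JWT_AUTH = {
    # Dotted paths below are assumptions; point them at your own utils module.
    'JWT_PAYLOAD_GET_USERNAME_HANDLER':
        'employee_api.utils.jwt_get_username_from_payload_handler',
    'JWT_DECODE_HANDLER': 'employee_api.utils.jwt_decode_token',
    'JWT_ALGORITHM': 'RS256',
    'JWT_AUDIENCE': API_IDENTIFIER,
    'JWT_ISSUER': 'https://' + AUTH0_DOMAIN + '/',
    'JWT_AUTH_HEADER_PREFIX': 'Bearer',
}
```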

Next, navigate to the utilities file and add a function that gets the JSON Web Key Set (JWKS) from your Auth0 tenant to verify and decode Access Tokens.

Next, create two methods in the views.py file of the application to check the scopes granted in the Access Token:
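A hedged sketch of the two helpers. In a real view they would take the request object; here they operate on raw values so the logic can run standalone, and the function names are assumptions:

```python
def get_token_auth_header(auth_header_value):
    """Pull the bearer token out of an Authorization header value."""
    parts = auth_header_value.split()
    if len(parts) == 2 and parts[0].lower() == 'bearer':
        return parts[1]
    return None


def token_has_scope(decoded_token, required_scope):
    """Check whether the decoded Access Token grants the required scope.

    Auth0 puts granted scopes in a space-separated 'scope' claim.
    """
    return required_scope in decoded_token.get('scope', '').split()
```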

In the code above, the first method gets the Access Token from the authorization header, while the second checks whether the obtained Access Token contains the scope required to allow access to specific parts of the application.

Now, you can protect endpoints by adding authentication and scope-checking decorators to methods that need authentication and permission scopes, respectively.

Now, your application has Auth0 authentication enabled. Update the deployment for the authentication to be active:


This tutorial has taken you through the process of building Django RESTful APIs and deploying them as serverless applications on AWS Lambda with Zappa. You also learned how to configure authentication using Auth0. Now, you can go on and use the knowledge gained in your projects building APIs and deploying serverless Python applications.

Let us have your suggestions and questions in the comments section below. Thanks.



Serverless Django with Zappa

Whether you are a solo developer working on your next startup idea or a team of folks working with Django, this course shows you the way to simplify your operations overhead while saving money. The open source Zappa utility paves the way to migrate your Django application away from old-school servers and onto the AWS Lambda serverless platform. It's almost like having your own private Infrastructure team!

No need to watch for OS security patches or tweak Apache settings -- just focus on delivering value to your users. All this with minimal changes to your Django application. Leverage the power of AWS to rapidly respond to spikes in user traffic, while saving money by only paying for the web requests actually needed.

For folks ready for serverless Django, this course covers:

  • Which Django apps are best to migrate and how to prepare them
  • Configuring a generic and portable development environment easily set up by new developers in minutes
  • Deploying and updating your code to AWS Lambda
  • Best practices in serving Django static files
  • Leveraging AWS services to secure your site with HTTPS and your own custom domain name
  • Learning how to configure a secure network environment in AWS VPC
  • Connecting your Django application to a hosted RDS database

All this while leveraging the AWS Free Tier. While there are some lessons that may have additional cost to complete (e.g. bring your own domain name), the vast majority of this course will be using the AWS Free Tier and won't cost anything.

It's taught by Edgar Román. I was introduced to Django in 2009 and I consider it my go-to web framework. With continuous support from a vibrant community in addition to an incredibly rich ecosystem, I would be hard pressed to find another framework that matches up.

I've been using Zappa to host my Django applications for several years now, and I have found no other solution that is as powerful and cost-effective as serverless Django hosting on AWS -- especially for the solo developer working to get a product to market.

I hope you enjoy this course and harness the power of serverless Django with Zappa in your projects!

Course Content

6 modules · 28 lessons · 2h 24m total

Your Instructors


Frequently Asked Questions

This course is for folks who are already familiar with Python and Django, but want to learn more about migrating to the AWS Lambda serverless platform.

Maybe. Most, but not all, Django applications are well suited to be serverless, especially web sites that get moderate traffic (or less) and whose usage fluctuates over time.

You will need an Amazon Web Services account (or sign up for one) and, optionally, access to a custom domain name for your site. During the course, any overhead costs will be identified along with options for workarounds.
