Python Hosting · 2025

Python & Django Cloud Hosting: Production Deployment Guide

Updated April 2025 · 9 min read

Python 3.12, Gunicorn, migrations, and environment variables — managed Python hosting with git push.



Python has become one of the dominant languages for web development, data APIs, and AI-powered applications. Django, FastAPI, and Flask each serve different use cases — but all three share the same deployment challenges. This guide covers how to take Python applications from development to production on modern cloud infrastructure.

Why Python Deployments Go Wrong

Python's flexibility is both its strength and its deployment risk. A few common failure modes:

Dependency hell: requirements.txt pinned incorrectly, pip install pulling different versions in production than development, or a package requiring a system library that isn't installed on the server.

Virtual environment mismanagement: Running Python without an isolated virtualenv means system packages bleed into your application dependencies and vice versa.

Environment variable leakage: Django's SECRET_KEY, database URLs, and API keys getting committed to repositories or not set in production at all.

Development server in production: Running Django with python manage.py runserver instead of a production server like Gunicorn or uWSGI — still happens more than it should.

Static files not collected: python manage.py collectstatic not running in the deployment pipeline, resulting in a Django app with no CSS or JavaScript.

Container-based cloud hosting eliminates most of these issues by giving Python applications a consistent, isolated environment with dependencies bundled into the container.

Preparing Your Django Application for Production

Settings Structure

Don't use a single settings.py for all environments. Use environment-specific settings:

# settings/base.py — shared settings
# settings/development.py — local dev overrides
# settings/production.py — production settings

# settings/production.py
from .base import *
import os

DEBUG = False
ALLOWED_HOSTS = [os.environ.get('APP_DOMAIN', '')]

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': os.environ['DB_NAME'],
        'USER': os.environ['DB_USER'],
        'PASSWORD': os.environ['DB_PASSWORD'],
        'HOST': os.environ['DB_HOST'],
        'PORT': os.environ.get('DB_PORT', '5432'),
        'CONN_MAX_AGE': 60,  # Keep connections open for 60s (persistent connections, not true pooling)
    }
}

SECRET_KEY = os.environ['DJANGO_SECRET_KEY']

# Static files (served via WhiteNoise or CDN)
STATIC_ROOT = '/app/staticfiles'
STATICFILES_STORAGE = 'whitenoise.storage.CompressedManifestStaticFilesStorage'

# Security
SECURE_SSL_REDIRECT = True
SESSION_COOKIE_SECURE = True
CSRF_COOKIE_SECURE = True
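SECRET_KEY must come from the environment, never from the repository. If you need to mint one, the standard library is enough — Django also ships django.core.management.utils.get_random_secret_key, which uses a similar alphabet. A minimal sketch:

```python
import secrets
import string

# Illustrative only: generate a 50-character value for DJANGO_SECRET_KEY
alphabet = string.ascii_letters + string.digits + "!@#$%^&*(-_=+)"
secret_key = "".join(secrets.choice(alphabet) for _ in range(50))
print(secret_key)
```

Set the printed value once in your platform's environment variable panel and rotate it only deliberately — changing it invalidates existing sessions and signed cookies.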

Requirements File Best Practices

# requirements.txt — pin exact versions for reproducible builds
Django==5.0.3
gunicorn==21.2.0
psycopg2-binary==2.9.9
redis==5.0.3
whitenoise==6.6.0
django-environ==0.11.2

Always generate requirements.txt with pip freeze from a clean virtualenv to capture exact versions. The psycopg2-binary package ships a prebuilt PostgreSQL driver, so no compiler or libpq headers are needed in the build environment.

The Gunicorn Production Server

Never use Django's dev server in production. Use Gunicorn:

# start command for cloud platform
gunicorn myproject.wsgi:application \
  --bind 0.0.0.0:$PORT \
  --workers 2 \
  --threads 4 \
  --timeout 120 \
  --log-level info \
  --access-logfile - \
  --error-logfile -

Worker count formula: (2 × CPU cores) + 1. On a container with 1 CPU core, 3 workers is the standard starting point.

The $PORT in --bind 0.0.0.0:$PORT is expanded from the environment — essential on cloud platforms that inject the port dynamically.
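The same flags can live in a config file instead of a long start command. A sketch of a gunicorn.conf.py (the filename and WEB_CONCURRENCY override are illustrative choices; start with gunicorn -c gunicorn.conf.py myproject.wsgi:application):

```python
# gunicorn.conf.py — config-file equivalent of the CLI flags above
import multiprocessing
import os

bind = f"0.0.0.0:{os.environ.get('PORT', '8000')}"
# (2 x CPU cores) + 1, overridable without a redeploy via WEB_CONCURRENCY
workers = int(os.environ.get("WEB_CONCURRENCY", multiprocessing.cpu_count() * 2 + 1))
threads = 4
timeout = 120
loglevel = "info"
accesslog = "-"   # access log to stdout
errorlog = "-"    # error log to stderr
```

Keeping the tuning in a versioned file makes worker changes reviewable instead of buried in a platform settings field.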

Health Check Endpoint

# urls.py
from django.db import connection
from django.http import JsonResponse
from django.urls import path

def health_check(request):
    try:
        connection.ensure_connection()
        db_ok = True
    except Exception:
        db_ok = False

    return JsonResponse({
        'status': 'healthy' if db_ok else 'degraded',
        'database': 'connected' if db_ok else 'disconnected',
    }, status=200 if db_ok else 503)

urlpatterns = [
    path('health/', health_check),
    # ... your routes
]

Including a database connectivity check in your health endpoint ensures the platform knows when your app can't reach its database — not just when Gunicorn is running.

FastAPI Production Setup

FastAPI is increasingly popular for API services and ML model serving:

# main.py
import os

import uvicorn
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class InputSchema(BaseModel):
    features: list[float]

@app.get("/health")
def health():
    return {"status": "ok"}

@app.post("/api/v1/predict")  # POST, not GET — the request carries a body
async def predict(data: InputSchema):
    result = model.predict(data.features)  # `model` assumed loaded at startup
    return {"prediction": result}

if __name__ == "__main__":
    port = int(os.environ.get("PORT", 8000))
    uvicorn.run("main:app", host="0.0.0.0", port=port, workers=4)

Start command: uvicorn main:app --host 0.0.0.0 --port $PORT --workers 4

FastAPI with async endpoints is significantly more efficient than Django's synchronous views for I/O-heavy APIs: while one request awaits a database or network response, the event loop keeps serving others, so the same container resources sustain far more concurrent connections.
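The mechanism behind that advantage is the event loop: awaiting I/O yields control instead of blocking a worker. A framework-free sketch of the effect, with asyncio.sleep standing in for database or HTTP latency:

```python
import asyncio
import time

async def fake_io_call(delay: float) -> str:
    # Stand-in for a DB or HTTP call; awaiting yields the event loop
    await asyncio.sleep(delay)
    return "done"

async def handle_burst() -> float:
    start = time.perf_counter()
    # Ten 0.1-second "requests" run concurrently, not back-to-back
    await asyncio.gather(*(fake_io_call(0.1) for _ in range(10)))
    return time.perf_counter() - start

elapsed = asyncio.run(handle_burst())
# Wall time stays close to one call's latency rather than ten times it
```

A synchronous worker would need ten threads (or one full second) for the same burst; the async service does it on one thread.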

Environment Variables for Python Applications

# Django
DJANGO_SECRET_KEY=your-50-char-secret-key-here
DJANGO_SETTINGS_MODULE=myproject.settings.production
DEBUG=False
APP_DOMAIN=myapp.com

# Database
DATABASE_URL=postgresql://user:password@internal-host:5432/mydb
# Or individual variables if using separate settings
DB_HOST=internal-db-host
DB_PORT=5432
DB_NAME=myapp_production
DB_USER=myapp_user
DB_PASSWORD=secure-password-here

# Email
EMAIL_HOST=smtp.sendgrid.net
EMAIL_HOST_USER=apikey
EMAIL_HOST_PASSWORD=SG.your-sendgrid-key

# Third-party
STRIPE_SECRET_KEY=sk_live_xxx
AWS_ACCESS_KEY_ID=AKIA...
AWS_SECRET_ACCESS_KEY=xxx

When your database runs on the same cloud platform as your Django app, use the internal hostname for DB_HOST. This routes traffic over the private network, eliminating external latency and bandwidth costs.
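If you prefer the single DATABASE_URL form, something must split it back into Django's individual settings — django-environ, pinned earlier, does exactly this. A minimal stdlib sketch of what that parsing involves (the function name is illustrative):

```python
from urllib.parse import urlparse

def parse_database_url(url: str) -> dict:
    # Minimal parser for postgresql://user:password@host:port/dbname
    parts = urlparse(url)
    return {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": parts.path.lstrip("/"),
        "USER": parts.username,
        "PASSWORD": parts.password,
        "HOST": parts.hostname,
        "PORT": str(parts.port or 5432),
    }

cfg = parse_database_url("postgresql://user:password@internal-host:5432/mydb")
```

In real projects use django-environ's env.db() rather than hand-rolling this; the sketch only shows what the one-line setting expands into.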

Database Migrations in the Deployment Pipeline

Django migrations need to run before the new code serves traffic:

# pre_deployment_command (configured in platform settings)
python manage.py migrate --no-input && python manage.py collectstatic --no-input

This runs before the new container replaces the old one. Schema changes arrive before the code that depends on them.

If a migration fails, the deployment stops and the old container continues serving traffic. No broken state reaches production.

Celery Background Tasks

For Django applications using Celery:

# celery.py
import os
from celery import Celery

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'myproject.settings.production')
app = Celery('myproject')
app.config_from_object('django.conf:settings', namespace='CELERY')
app.autodiscover_tasks()

# Start command for Celery worker (separate service)
celery -A myproject worker --loglevel=info --concurrency=4

Deploy your Celery worker as a separate service on the same platform, connected to the same Redis instance (deployed on the same platform too). All three services — Django app, Celery worker, Redis — communicate over the internal network.

Static Files with WhiteNoise

# settings/production.py
MIDDLEWARE = [
    'django.middleware.security.SecurityMiddleware',
    'whitenoise.middleware.WhiteNoiseMiddleware',  # Add this second
    # ... rest of middleware
]

STATIC_ROOT = BASE_DIR / 'staticfiles'
STATICFILES_STORAGE = 'whitenoise.storage.CompressedManifestStaticFilesStorage'

WhiteNoise serves compressed static files directly from Gunicorn without needing a separate static file server. For high-traffic applications, a CDN in front is better, but WhiteNoise is the correct production starting point. (Note: STATICFILES_STORAGE is deprecated since Django 4.2 in favor of the STORAGES setting; the pinned 5.0.x still accepts the old name.)

Django REST Framework API Deployment

# settings/production.py — DRF production configuration
REST_FRAMEWORK = {
    'DEFAULT_RENDERER_CLASSES': [
        'rest_framework.renderers.JSONRenderer',
        # Remove BrowsableAPIRenderer in production
    ],
    'DEFAULT_AUTHENTICATION_CLASSES': [
        'rest_framework_simplejwt.authentication.JWTAuthentication',
    ],
    'DEFAULT_THROTTLE_CLASSES': [
        'rest_framework.throttling.AnonRateThrottle',
        'rest_framework.throttling.UserRateThrottle',
    ],
    'DEFAULT_THROTTLE_RATES': {
        'anon': '100/hour',
        'user': '1000/hour',
    },
}

Removing the browsable API renderer in production saves memory and reduces information disclosure.

Monitoring Python Applications in Production

What to watch:

Memory usage: Python processes with memory leaks show a gradual upward trend over time. Your cloud platform should graph memory consumption per container so you can spot this pattern before it causes OOM crashes.

Response time: Gunicorn's access log (writing to stdout with --access-logfile -) streams to your platform's log viewer. Filter for slow requests to identify performance bottlenecks.

Celery queue depth: If using Celery, monitor the Redis queue length. A growing queue that never drains indicates your workers can't keep up.

Error rate: Unhandled exceptions in Django go to stderr. Your cloud platform captures this and surfaces it in log views. Set up alerts on error keywords.
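When the memory graph suggests a leak, the standard library's tracemalloc can point at the allocation site from inside the process. A diagnostic sketch — enable it temporarily, not as a permanent production setting, since tracing adds overhead:

```python
import tracemalloc

tracemalloc.start()

# Exercise the suspect code path; this allocation stands in for a leak
suspect = [bytes(1024) for _ in range(1000)]

current, peak = tracemalloc.get_traced_memory()
snapshot = tracemalloc.take_snapshot()
top_stat = snapshot.statistics("lineno")[0]  # largest allocation site
print(f"current={current} peak={peak} top={top_stat}")
tracemalloc.stop()
```

Comparing two snapshots taken minutes apart (snapshot.compare_to) shows which lines are still growing — usually the leak.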

Scaling Python Applications

Python's GIL limits CPU parallelism within a single process, but there are several scaling strategies:

Vertical scaling: Increase container CPU/memory allocation — typically a plan upgrade. The right starting point for most Django applications is 1 CPU core, 512MB to 1GB RAM.

Worker scaling: More Gunicorn workers for CPU-bound work, more threads per worker for I/O-bound work. If you omit --workers from the start command, Gunicorn reads the WEB_CONCURRENCY environment variable, so you can adjust worker count without redeploying: WEB_CONCURRENCY=4.

Async workers: For high-concurrency I/O workloads, switch to Uvicorn workers: gunicorn myproject.asgi:application -k uvicorn.workers.UvicornWorker.

Autoscale alerts: Set up platform alerts for sustained high CPU/memory usage so you know when to scale up before performance degrades.
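The threads-for-I/O guidance follows directly from the GIL: threads cannot overlap Python bytecode execution, but they do overlap while blocked on I/O, because blocking calls release the GIL. A quick illustration with time.sleep standing in for a socket wait:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def io_bound_request(_: int) -> None:
    time.sleep(0.1)  # blocking waits (like sockets) release the GIL

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=8) as pool:
    list(pool.map(io_bound_request, range(8)))
elapsed = time.perf_counter() - start
# Eight 0.1s waits overlap across threads: near 0.1s total, not 0.8s
```

Swap the sleep for a CPU-bound loop and the speedup vanishes — which is why CPU-bound work needs more workers (processes), not more threads.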

The Bottom Line

Python in production in 2025 means containers, environment variables, Gunicorn/Uvicorn, and automated deployments from Git. The tools are mature and well-documented. The patterns are established.

What remains variable is the quality of the platform you deploy onto. Get the infrastructure right — isolated containers, co-located databases, automated SSL, proper resource limits — and Python deployment becomes as reliable as any other stack.
