django nginx gunicorn = 504 timeout

A 504 can be caused by Gunicorn's worker timeout; you need to start Gunicorn with the --timeout argument, for example:

gunicorn --access-logfile - --workers 3 --timeout 300 --bind unix:/home/ubuntu/myproject/myproject.sock myproject.wsgi:application
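
When nginx proxies to that socket, the 504 can also be produced by nginx's own proxy timeouts, so they usually need to be raised to match. A minimal sketch of the relevant location block, assuming the same 300-second budget and the socket path from the command above:

location / {
   proxy_connect_timeout 300;
   proxy_send_timeout 300;
   proxy_read_timeout 300;
   proxy_pass http://unix:/home/ubuntu/myproject/myproject.sock;
}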

Suggestion : 2

I have some requests on my website that take longer than 30 seconds. I changed all Gunicorn timeouts to 3 seconds, but I still don't see the timeout take effect. The timeouts work well when I run the website directly from Gunicorn (--bind 0.0.0.0:8000); the problem only occurs with nginx in front of Gunicorn.

Gunicorn systemd service file :

[Unit]
Description=gunicorn daemon
After=network.target

[Service]
User=user
Group=www-data
WorkingDirectory=/home/user/Lav/project
ExecStart=/home/user/LavEnv/project/bin/gunicorn --access-logfile - --workers 3 --keep-alive 3 --timeout 3 --graceful-timeout 3 --bind unix:/home/user/Lav/project/project.socket project.wsgi:application

[Install]
WantedBy=multi-user.target
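
Note that edits to this unit file only take effect after systemd reloads it and the service is restarted. Assuming the unit is installed as gunicorn.service (an assumption, not stated in the question), something like the following, leaving the journal open to see whether Gunicorn ever reports a worker timeout for a slow request:

sudo systemctl daemon-reload
sudo systemctl restart gunicorn
# watch the service log while issuing a slow request
sudo journalctl -u gunicorn -f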

Nginx Config File :

server {
   listen 80;
   server_name 178.63.217.47;

   location = /favicon.ico { access_log off; log_not_found off; }
   location /static/ {
      root project_root;
   }
   location /media/ {
      root project_root;
   }
   location /files/ {
      root project_root;
   }

   location / {
      proxy_set_header Host $http_host;
      proxy_set_header X-Real-IP $remote_addr;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header X-Forwarded-Proto $scheme;
      proxy_connect_timeout 300;
      proxy_send_timeout 300;
      proxy_read_timeout 300;
      include uwsgi_params;
      proxy_pass http://unix:/project_root/StoreManager.socket;
   }
}
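
After changing those proxy timeouts, nginx also has to be reloaded before they apply; a quick way to validate the file and apply the change, assuming nginx runs as a system service:

sudo nginx -t
sudo systemctl reload nginx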

Suggestion : 3

Some of my requests need more than one minute to process in my Django project. I have already figured out how to increase the request timeout to two minutes in both Gunicorn and nginx, and that worked on my local system. But when the project is deployed to a droplet it doesn't work: I get a 504 Gateway Timeout response whenever a request takes longer than one minute to process. I don't know what goes wrong on the server, and I have read a lot of articles and posts about Docker, nginx, Django, and gateway timeouts. The case where Gunicorn itself is timing out is covered here: https://www.datadoghq.com/blog/nginx-502-bad-gateway-errors-gunicorn/
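
Before digging into the configuration below, it can help to confirm which hop returns the 504 by timing one slow request against the published nginx port while watching the container logs. A rough sketch, in which the droplet IP and the endpoint path are placeholders and port 8001 comes from the compose file below:

# time one slow request through nginx (published on port 8001 in docker-compose-prod.yml)
curl -o /dev/null -s -w "HTTP %{http_code} after %{time_total}s\n" http://<droplet-ip>:8001/slow-endpoint/

# watch the nginx and web (gunicorn) logs while that request runs
docker-compose -f docker-compose-prod.yml logs -f nginx web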

nginx.conf

upstream keev {
   server web:8000;
}

server {

   listen 80;

   location / {
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header Host $host;
      proxy_redirect off;

      proxy_connect_timeout 120;
      proxy_send_timeout 120;
      proxy_read_timeout 120;
      send_timeout 120;
      client_body_timeout 120;

      proxy_pass http://keev;
   }
   location /staticfiles/ {
      alias /home/app/web/staticfiles/;
   }

   location /mediafiles/ {
      alias /home/app/web/mediafiles/;
   }
   proxy_connect_timeout 120;
   proxy_send_timeout 120;
   proxy_read_timeout 120;
   send_timeout 120;
   client_body_timeout 120;
}
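
Since nginx runs in its own container in this setup, it is worth confirming that the running container actually serves this file; dumping the effective configuration from the container is one way to check (the service and file names match the compose file below):

docker-compose -f docker-compose-prod.yml exec nginx nginx -T | grep -i timeout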

docker-compose-prod.yml

version: '3.7'

services:

  # Redis
  redis:
    image: redis:5.0-alpine

  db:
    image: postgres:12-alpine
    env_file:
      - .env

  web:
    image: "${WEB_IMAGE}"
    expose:
      - "8000"
    volumes:
      - ./app:/app
      - static_volume:/home/app/web/staticfiles
      - media_volume:/home/app/web/mediafiles
    command: sh -c "python manage.py wait_for_db &&
      python manage.py migrate &&
      python manage.py collectstatic --no-input --clear &&
      gunicorn keev.wsgi:application --workers 8 --threads 8 -t 120 --bind 0.0.0.0:8000"
    env_file:
      - .env
    depends_on:
      - db
      - redis

  nginx:
    image: "${NGINX_IMAGE}"
    ports:
      - "8001:80"
    volumes:
      - static_volume:/home/app/web/staticfiles
      - media_volume:/home/app/web/mediafiles
    depends_on:
      - web
    links:
      - web
    restart: on-failure

volumes:
  static_volume:
  media_volume:
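
For any of the timeout changes above to reach the droplet, the images have to be re-pulled (or rebuilt) and the containers recreated, for example:

docker-compose -f docker-compose-prod.yml pull
docker-compose -f docker-compose-prod.yml up -d --force-recreate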

Dockerfile

FROM python:3.7-alpine

# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# some packages install needed
...
...
...
COPY ./requirements.txt /requirements.txt
RUN pip install --no-cache-dir -r requirements.txt

# Setup directory structure
RUN mkdir -p /app
ENV APP_HOME=/app

RUN mkdir APP_HOME
RUN mkdir $APP_HOME/staticfiles
RUN mkdir $APP_HOME/mediafiles
WORKDIR $APP_HOME

COPY /app $APP_HOME
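
A hedged example of building and tagging the image that the compose file's ${WEB_IMAGE} variable points at; the tag itself comes from your .env or CI environment, so it is an assumption here:

# run from the directory containing this Dockerfile; $WEB_IMAGE must be set in the shell
docker build -t "$WEB_IMAGE" .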

Suggestion : 4

https://medium.com/@paragsharma.py/504-gateway-timeout-django-gunicorn-nginx-4570deaf0922

A 504 can be caused by Gunicorn's worker timeout; you need to start Gunicorn with the --timeout argument, like:

gunicorn --access-logfile - --workers 3 --timeout 300 --bind unix:/home/ubuntu/myproject/myproject.sock myproject.wsgi:application

Suggestion : 5

The server is running fine and requests are being answered reasonably quickly. However, when I start directing traffic to this setup from my old server, pages start giving 504 Gateway Timeout errors. All the requests do is fetch data from the database and render it with django-rest-framework, and looking at the MySQL process list there don't seem to be any stuck queries, which is odd.

First, you could try how your backend (Django/Gunicorn) performs without nginx in front; ab (Apache Benchmark) is a simple tool for this task. If the backend is the bottleneck and you are using nginx anyway, some caching could be an option (this depends on your application). If your API exposes data that changes often, you can set the cache time really short (1 to 10 seconds); with, say, 100 requests per second, only one of them has to hit the backend and the others get the cached response, as sketched after the ab example below.

You can run ab directly on the server, or from any other machine if your port 8081 is not firewalled (-c 50 means 50 concurrent connections, -n 500 means 500 requests in total):

ab -c 50 -n 500 http://localhost/path-xyz/
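
For the caching idea above, a minimal sketch of a short-lived nginx proxy cache; the zone name api_cache, the 5-second validity, and the backend address 127.0.0.1:8000 are placeholders rather than values from the question:

# in the http {} block: on-disk cache location and shared-memory key zone
proxy_cache_path /var/cache/nginx/api levels=1:2 keys_zone=api_cache:10m max_size=100m;

# in the server {} block, on the location that proxies to the backend
location /api/ {
   proxy_cache api_cache;
   proxy_cache_valid 200 5s;   # serve cached 200 responses for up to 5 seconds
   proxy_pass http://127.0.0.1:8000;
}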