EzDevInfo.com

puma

Puma: a modern, concurrent web server for Ruby, built for concurrency.

My app keeps creating database connections, how do I trace the reason?

I have a Ruby on Rails application running on Heroku. I keep getting these messages in the log:

2015-05-05T16:11:14Z app[postgres.27102]: [AQUA] connection received: host=xx.xxx.xx.26 port=60278
2015-05-05T16:11:14Z app[postgres.27102]: [AQUA] connection authorized: user=postgres database=somedb
2015-05-05T16:11:14Z app[postgres.27103]: [AQUA] connection received: host=xx.xxx.xx.26 port=60291
2015-05-05T16:11:14Z app[postgres.27103]: [AQUA] connection authorized: user=postgres database=postgres
2015-05-05T16:11:18Z app[postgres.27104]: [AQUA] connection received: host=xx.xxx.x.166 port=54180
2015-05-05T16:11:18Z app[postgres.27104]: [AQUA] connection authorized: user=postgres database=somedb
2015-05-05T16:11:23Z app[postgres.27105]: [AQUA] connection received: host=xx.xxx.x.166 port=55488
2015-05-05T16:11:23Z app[postgres.27105]: [AQUA] connection authorized: user=postgres database=somedb
2015-05-05T16:11:28Z app[postgres.27106]: [AQUA] connection received: host=xx.xxx.x.166 port=56774
2015-05-05T16:11:28Z app[postgres.27106]: [AQUA] connection authorized: user=postgres database=somedb
2015-05-05T16:11:28Z app[postgres.27107]: [AQUA] connection received: host=xx.xxx.x.166 port=56854
2015-05-05T16:11:28Z app[postgres.27107]: [AQUA] connection authorized: user=postgres database=somedb
2015-05-05T16:11:28Z app[postgres.27108]: [AQUA] connection received: host=xx.xxx.x.166 port=56885
2015-05-05T16:11:28Z app[postgres.27108]: [AQUA] connection authorized: user=postgres database=somedb
2015-05-05T16:11:28Z app[postgres.27109]: [AQUA] connection received: host=xx.xxx.x.166 port=56912
2015-05-05T16:11:28Z app[postgres.27109]: [AQUA] connection authorized: user=postgres database=somedb
2015-05-05T16:11:33Z app[postgres.27110]: [AQUA] connection received: host=xx.xxx.x.166 port=58039
2015-05-05T16:11:33Z app[postgres.27110]: [AQUA] connection authorized: user=postgres database=somedb
2015-05-05T16:11:38Z app[postgres.27111]: [AQUA] connection received: host=xx.xxx.x.166 port=59387
2015-05-05T16:11:38Z app[postgres.27111]: [AQUA] connection authorized: user=postgres database=somedb
2015-05-05T16:11:43Z app[postgres.27112]: [AQUA] connection received: host=xx.xxx.x.166 port=60944
2015-05-05T16:11:43Z app[postgres.27112]: [AQUA] connection authorized: user=postgres database=somedb
2015-05-05T16:11:14+00:00 app[heroku-postgres]: source=HEROKU_POSTGRESQL_AQUA sample#current_transaction=511990 sample#db_size=203303096bytes sample#tables=17 sample#active-connections=2 sample#waiting-connections=0 sample#index-cache-hit-rate=0.99997 sample#table-cache-hit-rate=0.94699 sample#load-avg-1m=0.14 sample#load-avg-5m=0.25 sample#load-avg-15m=0.24 sample#read-iops=0.1875 sample#write-iops=1 sample#memory-total=7629448kB sample#memory-free=428388kB sample#memory-cached=6784860kB sample#memory-postgres=171732kB

I can't figure out what's causing this. The application runs on the Cedar-10 stack with Ruby 2.1.4, Rails 3.2.11, and Puma 2.11.2 with 3 workers and 1 thread. It's not happening in the development or staging environments, only on Heroku.

Running: select application_name from pg_stat_activity; shows:

        application_name         
---------------------------------

 puma: cluster worker 2: 3 [app]
 puma: cluster worker 1: 3 [app]
 puma: cluster worker 0: 3 [app]
 psql johnny interactive

Here's my Puma configuration file (min and max threads are both set to 1):

workers Integer(ENV['PUMA_WORKERS'] || 3)
threads Integer(ENV['MIN_THREADS']  || 1), Integer(ENV['MAX_THREADS'] || 16)

preload_app!

rackup      DefaultRackup
port        ENV['PORT']     || 3000
environment ENV['RACK_ENV'] || 'development'

on_worker_boot do
  # worker specific setup
  ActiveSupport.on_load(:active_record) do
    config = ActiveRecord::Base.configurations[Rails.env] ||
                Rails.application.config.database_configuration[Rails.env]
    config['pool'] = ENV['MAX_THREADS'] || 16
    ActiveRecord::Base.establish_connection(config)
  end
end

Any ideas on how to trace this?

Update: I added a debug message to the on_worker_boot block and it only gets invoked on application startup, so I still have no clue why connections are being established so frequently.
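
One way to trace this (a debugging sketch, not an official API: it simply prepends a module to the pg adapter) is to log a backtrace every time a brand-new PostgreSQL connection is established, so the log shows which code path asked for it:

# config/initializers/trace_connections.rb -- debugging sketch (assumed file name);
# remove once the culprit is found
require 'active_record/connection_adapters/postgresql_adapter'

module ConnectionTracer
  def initialize(*args)
    # log who triggered the creation of this physical connection
    Rails.logger.info "[conn-trace] new PG connection:\n  #{caller.first(15).join("\n  ")}"
    super
  end
end

ActiveRecord::ConnectionAdapters::PostgreSQLAdapter.prepend(ConnectionTracer)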


Source: (StackOverflow)

why did gitlab 6 switch back to unicorn?

GitLab 6.0 was released yesterday. I am curious to know why they switched back to Unicorn from Puma. Versions prior to 5 were using Unicorn, and I thought the switch to Puma was for the better.

Is there a technical reason for this switch?


Source: (StackOverflow)


How do I get 'puma' to start automatically when I run `rails server` (like Thin does)?

Normally, when you run rails server, it starts WEBrick. If you install the 'thin' gem, then 'thin' starts instead. I would like the same thing to happen with the 'puma' server.

I see that the start command within railties (lib/rails/commands) calls super, but I can't find what the various options for 'super' are. I have also reviewed many references to Rails within 'thin'.

I found a changelog entry titled "Added Thin support to script/server. #488 [Bob Klosinski]" from October 2008, but that area of the code has changed significantly since that commit (a93ea88c0623b4f65af98c0eb55924c335bb3ac1).

If someone could direct me to the right section of code, that would be very helpful.

Thanks.
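
For context, here is a rough sketch of the mechanism involved (illustrative names, not Puma's actual source): rails server looks the server up through Rack's handler registry, which is how Thin makes itself available.

require 'rack/handler'

module Rack
  module Handler
    # the class a server gem would ship (illustrative)
    class MyServer
      def self.run(app, options = {})
        # boot the server here and hand it the Rack app
      end
    end
  end
end

# registering the handler is what lets it be selected, e.g. `rails server myserver`
Rack::Handler.register('myserver', 'Rack::Handler::MyServer')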


Source: (StackOverflow)

Server sent events, Puma, Rails and max dedicated threads for each client

I am using Redis for my Rails project to subscribe to channels and to publish to those channels when an event occurs. On the client side, I register an EventSource for each of these channels. Whenever an event occurs on a subscribed channel at the server, the server does an SSE write so that all registered clients receive the update.

Now the connection with the server stays alive for each client that is subscribed to those channels, i.e. the server thread dedicated to that client keeps running until the client disconnects. With this approach, if there are 1000 concurrent users subscribed to a channel, I'd have 1000 TCP/IP connections open.

I am using Puma as the web server, as suggested in this tutorial. Puma by default specifies a maximum of 16 threads. I can raise this limit.

I do not know how many concurrent users there might be at a time in my app, so I do not know what maximum number of threads to specify in Puma. In the worst case, if the number of threads dedicated to concurrent users reaches the maximum specified for the Puma web server, the app would freeze for all users until one of the concurrent users disconnects.

I was excited to use Rails live streaming and server-sent events in my Rails project, but with this approach I risk hitting the max thread limit specified in my web server, and consequently the app becoming unresponsive for all users until one of the concurrent users disconnects.

I am not sure what a typical max thread count for Puma is for a large concurrent user base.

Should I consider other approaches, perhaps AJAX-based polling or Node.js with its event-driven, non-blocking I/O model? Or should I just run some benchmarks to find out what my max thread count can be?
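
For reference, a minimal sketch of the kind of ActionController::Live action being described (assumed names and the redis gem, not the actual app code); the point is that each connected client holds one Puma thread for as long as its stream stays open, which is why the thread pool is the limiting factor:

class EventsController < ApplicationController
  include ActionController::Live

  def stream
    response.headers['Content-Type'] = 'text/event-stream'
    redis = Redis.new
    # subscribe blocks this Puma thread for the lifetime of the connection
    redis.subscribe('my_channel') do |on|
      on.message do |_channel, message|
        response.stream.write("data: #{message}\n\n")
      end
    end
  rescue IOError
    # raised when the client disconnects mid-write
  ensure
    response.stream.close
  end
end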


Source: (StackOverflow)

PG::TRDeadlockDetected: ERROR: deadlock detected

I am restarting 8 Puma workers via bundle exec pumactl -F config/puma.rb phased-restart, which works fine. Now I am getting more and more Postgres errors:

PG::TRDeadlockDetected: ERROR:  deadlock detected

I found about 50 idle Postgres processes running:

postgres: myapp myapp_production 127.0.0.1(59950) idle
postgres: myapp myapp_production 127.0.0.1(60141) idle
...

They disappear when I run bundle exec pumactl -F config/puma.rb stop. After starting the app with bundle exec pumactl -F config/puma.rb start, I get exactly 16 idle processes. (Eight too many in my opinion.)

How can I manage these processes better? Thanks for your help!


Update

My puma.rb:

environment 'production'
daemonize true

pidfile 'tmp/pids/puma.pid'
state_path 'tmp/pids/puma.state'

threads 0, 1
bind 'tcp://0.0.0.0:3010'

workers 8

quiet
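
For reference, a hedged sketch (not part of the config above) of the hooks commonly added so that ActiveRecord connections are dropped before forking and re-established per worker, which keeps stale connections from piling up across restarts; whether it applies here depends on how the app connects:

# possible additions to config/puma.rb -- a sketch, assuming ActiveRecord is in use
before_fork do
  # close the master process's connections before workers are forked
  ActiveRecord::Base.connection_pool.disconnect! if defined?(ActiveRecord)
end

on_worker_boot do
  # each worker establishes its own fresh connection
  ActiveRecord::Base.establish_connection if defined?(ActiveRecord)
end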

Source: (StackOverflow)

Best tuning practices, experiences with Puma + Heroku + Rails 4 + Ruby 2.0

I've been reading nearly all the articles covering Puma tuning on Heroku, yet I'm not able to find the sweet spot here.

I have a site with around 100k daily visits.

I tried using 2X dynos. The app is an average Rails app that performs mostly selects, hitting memcache directly about 80% of the time. RAM usage per worker is between 160-180 MB.

I tried

DB_POOL=25
PUMA_THREADS=16
PUMA_WORKERS=4

And also something like this:

DB_POOL=10
PUMA_THREADS=5
PUMA_WORKERS=5

None of the results were convincing to me. Pageviews are always down a few percent compared to last week, even though the site's traffic has not changed.

Does anyone have experience tuning high-traffic sites that they would like to share? Nearly all articles explain mostly the same configs, but things start to get nasty when 100 people are visiting the site at the same time.
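
For what it's worth, here is a sketch of how those environment variables are usually wired together in config/puma.rb (assumed layout, values illustrative); the one hard constraint is that each worker's DB pool should be at least as large as its max thread count:

# config/puma.rb -- sketch only, not a recommendation of specific numbers
workers Integer(ENV['PUMA_WORKERS'] || 4)
threads 1, Integer(ENV['PUMA_THREADS'] || 16)
preload_app!

on_worker_boot do
  ActiveSupport.on_load(:active_record) do
    db = Rails.application.config.database_configuration[Rails.env]
    # give each worker a pool at least as large as its thread count
    db['pool'] = Integer(ENV['DB_POOL'] || ENV['PUMA_THREADS'] || 16)
    ActiveRecord::Base.establish_connection(db)
  end
end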


Source: (StackOverflow)

Server Sent Events and Rails Streaming

I'm experimenting with Rails 4 ActionController::Live and Server Sent Events. I'm using MRI 2.0.0 and Puma.

From what I can see, each connected client keeps an active connection to the server. I was wondering if it is possible to leverage SSEs without keeping all the response streams open.

Puma manages multiple connections using threads, and I imagine there is a limit to the number of concurrent connections. What if I want to support a real-world scenario with thousands of clients registering to my Rails app for SSE events?

Is there any example?

Also, I usually run Rails app servers behind an nginx reverse proxy. Would it require any particular setup?
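
On the nginx side, these are the directives I would expect to matter for long-lived SSE streams (a sketch with illustrative names and values, not a verified config): buffering has to be off and HTTP/1.1 keep-alive enabled so each event is flushed to the client as soon as it is written.

location /stream {
    proxy_pass         http://app_upstream;  # illustrative upstream name
    proxy_http_version 1.1;                  # keep the connection open
    proxy_set_header   Connection '';
    proxy_buffering    off;                  # flush each event immediately
    proxy_cache        off;
    proxy_read_timeout 3600s;                # allow long-lived streams
}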


Source: (StackOverflow)

How to check Rails app's thread safety for Puma

I wish to deploy my Rails app to Heroku using the Puma web server. However, I am not really sure whether all my gems are thread safe, and reading every gem's source code is not a feasible option for us.

Is there a way to automatically check all gems for thread safety? Or does Puma complain or log a specific error if thread-unsafe code is executed or detected?
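
As a rough illustration of one manual approach (a sketch with an assumed URL and counts; passing it is no guarantee of thread safety): hammer a locally running copy of the app from many threads at once and watch for exceptions or corrupted data.

require 'net/http'

# fire 20 threads, each making 50 requests against a local copy of the app
threads = 20.times.map do
  Thread.new do
    50.times do
      res = Net::HTTP.get_response(URI('http://localhost:3000/'))
      raise "unexpected status #{res.code}" unless res.code == '200'
    end
  end
end
threads.each(&:join)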


Source: (StackOverflow)

ActiveRecord::ConnectionTimeoutError: could not obtain a database connection within 5.000 seconds (waited 5.000 seconds)

I have a Rails app in production that I deployed some changes to the other day. All of a sudden I now get the error ActiveRecord::ConnectionTimeoutError: could not obtain a database connection within 5.000 seconds (waited 5.000 seconds) multiple times a day and have to restart Puma to fix the issue.

I'm completely stumped as to what is causing this. I didn't change anything on my server, and the changes I made were pretty simple (an addition to a view and to a controller method).

I'm not seeing much of anything in the log files.

I'm using Rails 4.1.4 and Ruby 2.0.0-p481.

Any ideas as to why my connections are filling up? My connection pool is set to 10 and I'm using the default Puma configuration.
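
One generic thing worth noting (a hedged aside; the asker's eventual finding, described in the edits below, turned out to be different): Puma 2.x defaults to a maximum of 16 threads per process, so a pool of 10 can in principle be exhausted. A minimal sketch of keeping the two aligned, with illustrative values:

# config/puma.rb -- illustrative values only
max_threads = Integer(ENV['MAX_THREADS'] || 10)
threads 0, max_threads  # keep max threads no larger than the ActiveRecord pool size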

Here's a stack trace:

ActiveRecord::ConnectionTimeoutError (could not obtain a database connection within 5.000 seconds (waited 5.000 seconds)):
  activerecord (4.1.4) lib/active_record/connection_adapters/abstract/connection_pool.rb:190:in `block in wait_poll'
  activerecord (4.1.4) lib/active_record/connection_adapters/abstract/connection_pool.rb:181:in `loop'
  activerecord (4.1.4) lib/active_record/connection_adapters/abstract/connection_pool.rb:181:in `wait_poll'
  activerecord (4.1.4) lib/active_record/connection_adapters/abstract/connection_pool.rb:136:in `block in poll'
  /usr/local/rvm/rubies/ruby-2.0.0-p481/lib/ruby/2.0.0/monitor.rb:211:in `mon_synchronize'
  activerecord (4.1.4) lib/active_record/connection_adapters/abstract/connection_pool.rb:146:in `synchronize'
  activerecord (4.1.4) lib/active_record/connection_adapters/abstract/connection_pool.rb:134:in `poll'
  activerecord (4.1.4) lib/active_record/connection_adapters/abstract/connection_pool.rb:418:in `acquire_connection'
  activerecord (4.1.4) lib/active_record/connection_adapters/abstract/connection_pool.rb:351:in `block in checkout'
  /usr/local/rvm/rubies/ruby-2.0.0-p481/lib/ruby/2.0.0/monitor.rb:211:in `mon_synchronize'
  activerecord (4.1.4) lib/active_record/connection_adapters/abstract/connection_pool.rb:350:in `checkout'
  activerecord (4.1.4) lib/active_record/connection_adapters/abstract/connection_pool.rb:265:in `block in connection'
  /usr/local/rvm/rubies/ruby-2.0.0-p481/lib/ruby/2.0.0/monitor.rb:211:in `mon_synchronize'
  activerecord (4.1.4) lib/active_record/connection_adapters/abstract/connection_pool.rb:264:in `connection'
  activerecord (4.1.4) lib/active_record/connection_adapters/abstract/connection_pool.rb:541:in `retrieve_connection'
  activerecord (4.1.4) lib/active_record/connection_handling.rb:113:in `retrieve_connection'
  activerecord (4.1.4) lib/active_record/connection_handling.rb:87:in `connection'
  activerecord (4.1.4) lib/active_record/query_cache.rb:51:in `restore_query_cache_settings'
  activerecord (4.1.4) lib/active_record/query_cache.rb:43:in `rescue in call'
  activerecord (4.1.4) lib/active_record/query_cache.rb:32:in `call'
  activerecord (4.1.4) lib/active_record/connection_adapters/abstract/connection_pool.rb:621:in `call'
  actionpack (4.1.4) lib/action_dispatch/middleware/callbacks.rb:29:in `block in call'
  activesupport (4.1.4) lib/active_support/callbacks.rb:82:in `run_callbacks'
  actionpack (4.1.4) lib/action_dispatch/middleware/callbacks.rb:27:in `call'
  actionpack (4.1.4) lib/action_dispatch/middleware/remote_ip.rb:76:in `call'
  airbrake (4.1.0) lib/airbrake/rails/middleware.rb:13:in `call'
  actionpack (4.1.4) lib/action_dispatch/middleware/debug_exceptions.rb:17:in `call'
  actionpack (4.1.4) lib/action_dispatch/middleware/show_exceptions.rb:30:in `call'
  railties (4.1.4) lib/rails/rack/logger.rb:38:in `call_app'
  railties (4.1.4) lib/rails/rack/logger.rb:20:in `block in call'
  activesupport (4.1.4) lib/active_support/tagged_logging.rb:68:in `block in tagged'
  activesupport (4.1.4) lib/active_support/tagged_logging.rb:26:in `tagged'
  activesupport (4.1.4) lib/active_support/tagged_logging.rb:68:in `tagged'
  railties (4.1.4) lib/rails/rack/logger.rb:20:in `call'
  actionpack (4.1.4) lib/action_dispatch/middleware/request_id.rb:21:in `call'
  rack (1.5.2) lib/rack/methodoverride.rb:21:in `call'
  dragonfly (1.0.5) lib/dragonfly/cookie_monster.rb:9:in `call'
  rack (1.5.2) lib/rack/runtime.rb:17:in `call'
  activesupport (4.1.4) lib/active_support/cache/strategy/local_cache_middleware.rb:26:in `call'
  rack (1.5.2) lib/rack/sendfile.rb:112:in `call'
  airbrake (4.1.0) lib/airbrake/user_informer.rb:16:in `_call'
  airbrake (4.1.0) lib/airbrake/user_informer.rb:12:in `call'
  railties (4.1.4) lib/rails/engine.rb:514:in `call'
  railties (4.1.4) lib/rails/application.rb:144:in `call'
  railties (4.1.4) lib/rails/railtie.rb:194:in `public_send'
  railties (4.1.4) lib/rails/railtie.rb:194:in `method_missing'
  puma (2.9.0) lib/puma/configuration.rb:71:in `call'
  puma (2.9.0) lib/puma/server.rb:490:in `handle_request'
  puma (2.9.0) lib/puma/server.rb:361:in `process_client'
  puma (2.9.0) lib/puma/server.rb:254:in `block in run'
  puma (2.9.0) lib/puma/thread_pool.rb:92:in `call'
  puma (2.9.0) lib/puma/thread_pool.rb:92:in `block in spawn_thread'

Puma init.d script

#!/bin/sh
# Starts and stops puma
#


case "$1" in
        start)
                su myuser -c  "source /etc/profile && cd /var/www/myapp/current && rvm gemset use myapp && puma -d -e production -b unix:///var/www/myapp/myapp_app.sock -S /var/www/myapp/myapp_app.state"
        ;;

        stop)
                su myuser -c "source /etc/profile && cd /var/www/myapp/current &&  rvm gemset use myapp && RAILS_ENV=production bundle exec pumactl -S /var/www/myapp/myapp_app.state stop"
        ;;

        restart)
                $0 stop
                $0 start
        ;;

        *)
                echo "Usage: $0 {start|stop|restart}"
                exit 1
esac

EDIT

I think I've finally narrowed the issue down to the airbrake gem combined with using the devise methods current_user or user_signed_in? in application_controller.rb in a before_action.

Here's my application controller:

class ApplicationController < ActionController::Base
  protect_from_forgery
  before_filter :authenticate_user!, :get_new_messages 

  # Gets the unread messages for the logged in user
  def get_new_messages
    @num_new_messages = 0 # Initially set to 0 so login page, etc works
    # If the user is signed in, fetch the new messages
    if user_signed_in? # I also tried !current_user.nil?
      @num_new_messages = Message.where(:created_for => current_user.id).where(:viewed => false).count
    end
  end

...
end

If I remove the if block, I have no problems. Since I introduced that code, my app seems to run out of connections. If I leave that if block in place and remove the airbrake gem, my app seems to run just fine with only the default 5 connections set on my pool in my database.yml file.

EDIT

I finally figured out that if I comment out this line in my config/environments/production.rb file, config.exceptions_app = self.routes, I don't get the error. It seems that custom routes plus devise in the application controller before_action are the cause. I've created an issue and a reproducible project on GitHub:

https://github.com/plataformatec/devise/issues/3422 https://github.com/toymachiner62/devise-connection-failure/blob/master/config/environments/production.rb#L84


Source: (StackOverflow)

What is the difference between Workers and Threads in Puma

What is the difference between a Puma worker and a Puma thread in the context of a Heroku dyno?

What I know (please correct me if I am wrong):

  • Thin is not concurrent, so a web process can only do one request at a time

  • In unicorn, I know I can have several unicorn workers in one process to add concurrency.

But in Puma there are threads and workers. Isn't a worker a thread inside the Puma process?

Can I use more workers/threads to add web concurrency on Heroku?
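
For reference, a minimal config/puma.rb sketch (values illustrative): workers forks whole copies of the app, threads sets the thread pool inside each copy, so the rough upper bound on in-flight requests is workers times max threads.

# config/puma.rb -- sketch only; pick numbers to fit the dyno's RAM and CPU
workers 2        # separate OS processes, each with its own memory
threads 1, 5     # min and max threads inside each worker process
preload_app!     # load the app once in the master before forking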


Source: (StackOverflow)

ActiveRecord::ConnectionNotEstablished - No connection pool for X

I can't make my Sinatra/Ruby app hosted on Heroku work as desired. I fiddled with the setup trying to resolve this issue, but so far no results.

ActiveRecord::ConnectionNotEstablished - No connection pool for User:
2015-06-25T14:26:11.736854+00:00 app[web.1]:    /app/vendor/bundle/ruby/2.0.0/gems/activerecord-4.2.1/lib/active_record/connection_adapters/abstract/connection_pool.rb:566:in `retrieve_connection'
2015-06-25T14:26:11.736856+00:00 app[web.1]:    /app/vendor/bundle/ruby/2.0.0/gems/activerecord-4.2.1/lib/active_record/connection_handling.rb:113:in `retrieve_connection'
2015-06-25T14:26:11.736858+00:00 app[web.1]:    /app/vendor/bundle/ruby/2.0.0/gems/activerecord-4.2.1/lib/active_record/connection_handling.rb:87:in `connection'

User is one of my ActiveRecord tables, and the app fails because I try to query it.

I use Sinatra with Puma as the backend. Here is my Procfile:

web: ruby app/my-server.rb -s puma

I was also checking how many open connections there are, using:

select count(*) from pg_stat_activity where pid <> pg_backend_pid()  and usename = current_user; 

but it says 0 every time.

I'm hosting the app on Heroku's free plan with the dev plan of Heroku Postgres.

I also noticed that the problem occurs when there are two quick calls to the API within a short interval, as if there were only 1 connection available rather than 5, because the first call succeeds and the second one fails. In my database.yml I set the pool to 5.

I'm on Rails 4.2.1 and Postgres 9.4

Here is my database.yml as well:

default: &default
  adapter: postgresql
  encoding: utf8
  pool: 5
  timeout: 5000

production:
  <<: *default
  host: my_db_address
  port: 5432
  database: my_db_name
  username: my_db_user_name
  password: my_db_password

< test and development omitted >

Am I missing some configuration, or does the free Heroku plan choke on it?
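
For comparison, a hedged sketch of how a plain Sinatra app typically wires up ActiveRecord by hand (assumed file layout, not the asker's code), since outside of Rails nothing reads database.yml automatically:

# app/my-server.rb -- sketch only, assumed paths
require 'sinatra'
require 'active_record'
require 'yaml'
require 'erb'

db_config = YAML.load(ERB.new(File.read('config/database.yml')).result)
ActiveRecord::Base.establish_connection(db_config[ENV['RACK_ENV'] || 'development'])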


Source: (StackOverflow)

Cannot install Puma gem on Ruby on Rails

I'm trying to install the puma gem, but when I run

gem install puma

I get this error message:

Temporarily enhancing PATH to include DevKit...
Building native extensions. This could take a while...
ERROR:  Error installing puma:
    ERROR: Failed to build gem native extension.

   C:/Ruby193/bin/ruby.exe extconf.rb

creating Makefile

make
generating puma_http11-i386-mingw32.def
compiling http11_parser.c
ext/http11/http11_parser.rl: In function 'puma_parser_execute':
ext/http11/http11_parser.rl:111:3: warning: comparison between signed and unsigned integer expressions
compiling io_buffer.c
io_buffer.c: In function 'buf_to_str':
io_buffer.c:119:3: warning: pointer targets in passing argument 1 of 'rb_str_new' differ in signedness
c:/Ruby193/include/ruby-1.9.1/ruby/intern.h:653:7: note: expected 'const char *' but argument is of type 'uint8_t *'
compiling mini_ssl.c
In file included from mini_ssl.c:3:0:
c:/Ruby193/include/ruby-1.9.1/ruby/backward/rubyio.h:2:2: warning: #warning use "ruby/io.h" instead of "rubyio.h"
mini_ssl.c:4:25: fatal error: openssl/bio.h: No such file or directory
compilation terminated.
make: *** [mini_ssl.o] Error 1

Gem files will remain installed in C:/Ruby193/lib/ruby/gems/1.9.1/gems/puma-2.6.0 for inspection. Results logged to C:/Ruby193/lib/ruby/gems/1.9.1/gems/puma-2.6.0/ext/puma_http11/gem_make.out

Adding gem 'puma' to my Gemfile and running bundle install isn't an option, because bundle install just doesn't work with any gem and gives me an error message (a separate issue, which I've circumvented for the other gems I use by installing them via gem install).
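
The log shows the build fails because the OpenSSL headers (openssl/bio.h) cannot be found. As a hedged example only (the OpenSSL install path below is an assumption, not something from the question), the native extension can usually be pointed at a local OpenSSL install like this:

gem install puma -- --with-opt-dir=C:/OpenSSL-Win32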


Source: (StackOverflow)

Is Puma better than Unicorn for Ruby 1.9.3 and Rails 3.2? [closed]

There is a lot of talk about Puma and how it is faster than Unicorn. But they also mention that it is more suitable for JRuby and Rubinius.

My question: what about a Rails 3.2 app with Ruby 1.9.3? Unicorn or Puma?


Source: (StackOverflow)

What do multi-process vs. multi-threaded servers benefit most from?

Can anyone explain what the bottleneck of each concurrency model is?

Servers like Unicorn (process-based) and Puma (thread-based).

Does each model benefit more from CPU cores, hardware threads, raw clock speed, or some particular combination?

How do I determine the optimal CPU characteristics when using dedicated servers?

And how do I determine the best number of workers in the case of Unicorn, or the best number of threads in the case of Puma?
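
As a back-of-the-envelope illustration (all numbers below are assumptions, not measurements): process-based servers are usually bounded by RAM and CPU cores, while thread-based servers gain the most when requests spend much of their time waiting on I/O.

# rough sizing arithmetic, illustrative numbers only
available_ram_mb  = 2048   # assumed RAM budget for the app
ram_per_worker_mb = 300    # assumed footprint of one app process
cpu_cores         = 4

workers_by_ram  = available_ram_mb / ram_per_worker_mb    # => 6
unicorn_workers = [workers_by_ram, cpu_cores * 2].min     # => 6
puts "Unicorn workers: #{unicorn_workers}"

# for a threaded server, time spent waiting on I/O raises the useful thread count
io_fraction = 0.5          # assumed share of request time spent waiting on I/O
puma_threads_per_worker = (cpu_cores / (1 - io_fraction)).round  # => 8
puts "Puma threads per worker: #{puma_threads_per_worker}"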


Source: (StackOverflow)

NGINX: upstream timed out (110: Connection timed out) while reading response header from upstream

I have Puma running as the upstream app server and Riak as my backend db cluster. When I send a request that map-reduces a chunk of data for about 25K users and returns it from Riak to the app, I get the error "upstream timed out (110: Connection timed out) while reading response header from upstream" in the nginx log. If I query my upstream directly with the same request, without the nginx proxy, I get the required data.

The nginx timeout occurs only once the proxy is put in front.

nginx.conf:

user www-data;
worker_processes 2;
pid /var/run/nginx.pid;

events {
    worker_connections 4000;
}

http {

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 10m;

    proxy_connect_timeout  600s;
    proxy_send_timeout  600s;
    proxy_read_timeout  600s;
    fastcgi_send_timeout 600s;
    fastcgi_read_timeout 600s;

    types_hash_max_size 2048;
    proxy_cache_path /opt/cloud/cache levels=1  keys_zone=cloud:10m;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    gzip on;
    gzip_disable "msie6";
    include /etc/nginx/sites-enabled/*.conf;
    }

virtual host conf:

upstream ss_api {
  server 127.0.0.1:3000 max_fails=0 fail_timeout=600;
}

server {
  listen 81;
  server_name xxxxx.com; # change to match your URL

  if ($http_x_forwarded_proto != 'https') {
    return 301 https://$server_name$request_uri;
  }

  location / {
    proxy_pass http://ss_api; # match the name of the upstream directive defined above
    proxy_set_header  Host $http_host;
    proxy_set_header  X-Real-IP  $remote_addr;
    proxy_set_header  X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_cache cloud;
    proxy_cache_valid  200 302  60m;
    proxy_cache_valid  404      1m;
    proxy_cache_bypass $http_authorization;
    proxy_cache_bypass http://ss_api/account/;
    add_header X-Cache-Status $upstream_cache_status;
  }

  location ~ /\. { deny  all; }
}

Nginx has a bunch of timeout directives, and I don't know if I'm missing something important. Any help would be highly appreciated.
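
One narrowing-down step that may help (a sketch, values illustrative, not a diagnosis): put the timeout directives directly inside the location block that proxies the slow endpoint, so there is no doubt about which context they apply to.

location / {
    proxy_pass            http://ss_api;
    proxy_connect_timeout 600s;
    proxy_send_timeout    600s;
    proxy_read_timeout    600s;
}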


Source: (StackOverflow)