EzDevInfo.com

reverse-proxy interview questions

Top reverse-proxy frequently asked interview questions

Can a Reverse Proxy use SNI with SSL pass through?

I need to serve several applications over HTTPS using one external IP address.

The SSL certificates should not be managed on the reverse proxy; they are installed on the application servers.

Can a reverse proxy be configured to use SNI and pass SSL through for termination at the endpoint?

Is this possible using something like Nginx or Apache? What does the configuration look like?
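Yes, this is possible with Nginx's stream module: since pass-through means Nginx never decrypts the traffic, it cannot be done in a plain http block. A minimal sketch, assuming Nginx 1.11.5+ built with the stream and stream_ssl_preread modules; hostnames and backend addresses below are placeholders:

```nginx
# Route raw TLS connections by SNI; TLS terminates at the backends,
# so no certificates are configured here.
stream {
    map $ssl_preread_server_name $backend {
        app1.example.com   10.0.0.11:443;
        app2.example.com   10.0.0.12:443;
        default            10.0.0.11:443;
    }

    server {
        listen 443;
        ssl_preread on;          # peek at the SNI without terminating TLS
        proxy_pass $backend;
    }
}
```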


Source: (StackOverflow)

How to set up Nginx as a caching reverse proxy?

I heard recently that Nginx has added caching to its reverse proxy feature. I looked around but couldn't find much info about it.

I want to set up Nginx as a caching reverse proxy in front of Apache/Django: to have Nginx proxy requests for some (but not all) dynamic pages to Apache, then cache the generated pages and serve subsequent requests for those pages from cache.

Ideally I'd want to invalidate cache in 2 ways:

  1. Set an expiration date on the cached item
  2. Explicitly invalidate the cached item; e.g. if my Django backend has updated certain data, I'd want to tell Nginx to invalidate the cache of the affected pages

Is it possible to set Nginx to do that? How?
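Expiration-based caching (1) is built in. A minimal sketch; the cache path, zone name, times, and backend address are illustrative:

```nginx
http {
    # Cache storage: 10 MB of keys, up to 1 GB of cached bodies,
    # entries evicted after 60 minutes without a hit.
    proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=django_cache:10m
                     max_size=1g inactive=60m;

    server {
        listen 80;

        # Cache only the pages you choose; everything else is proxied as-is.
        location /cached/ {
            proxy_pass http://127.0.0.1:8080;     # Apache/Django backend
            proxy_cache django_cache;
            proxy_cache_valid 200 10m;            # (1) expiration-based invalidation
            proxy_cache_key $scheme$host$request_uri;
        }

        location / {
            proxy_pass http://127.0.0.1:8080;     # not cached
        }
    }
}
```

Explicit invalidation (2) is not in stock open-source Nginx: you need the third-party ngx_cache_purge module (or the purge API in the commercial NGINX Plus), or you can delete the cached files on disk, which are keyed by a hash of proxy_cache_key.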


Source: (StackOverflow)


Nginx reverse proxy + URL rewrite

Nginx is running on port 80, and I'm using it to reverse proxy URLs with path /foo to port 3200 this way:

location /foo {
    proxy_pass         http://localhost:3200;
    proxy_redirect     off;
    proxy_set_header   Host $host;
}

This works fine, but the application on port 3200 should not receive the initial /foo. That is: when I access http://localhost/foo/bar, I want the app to receive only /bar as the path. So I tried adding this line to the location block above:

rewrite ^(.*)foo(.*)$ http://localhost:3200/$2 permanent;

This causes a redirect (the URL visibly changes in the browser), but I want the prefix stripped internally, with no redirect at all. What should I do?
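A rewrite with an absolute URL always tells the browser to go elsewhere, so the address bar changes. To strip the prefix without any client-visible redirect, let proxy_pass do it internally; a sketch:

```nginx
# A URI part on proxy_pass replaces the matched location prefix,
# so /foo/bar is forwarded upstream as /bar. No redirect is issued.
location /foo/ {
    proxy_pass         http://localhost:3200/;   # note the trailing slash
    proxy_redirect     off;
    proxy_set_header   Host $host;
}
```

Alternatively, `rewrite ^/foo(/.*)$ $1 break;` inside the original location block performs the same rewrite internally before proxying.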


Source: (StackOverflow)

Make nginx pass the hostname of the upstream when reverse proxying

I run several docker containers with hostnames:

web1.local
web2.local
web3.local

Routing to these is done by nginx based on hostname. I have a proxy in front of this setup (on a different machine, connected to the internet) where I define the upstream as:

    upstream main {
      server web1.local:80;
      server web2.local:80;
      server web3.local:80;
    }

And actual virtual host description:

    server {
      listen 80;
      server_name example.com;
      location / {
        proxy_pass http://main;
      }
    }

Now, because the containers receive the hostname "main" instead of "web1.local", they do not respond properly to requests.

Question: how can I tell nginx to pass the name of the selected upstream server, instead of the name of the upstream group, in the Host: header when proxying a request?
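Nginx does not expose the selected peer of an upstream group as a variable usable in proxy_set_header. One workaround is to do the balancing yourself with split_clients, so a single variable can feed both proxy_pass and the Host header. A sketch, assuming the proxy machine can resolve web1.local and friends (the resolver address is a placeholder); note that unlike a real upstream block, this gives no automatic failover between backends:

```nginx
# Pick a backend hostname per request instead of using an upstream block.
split_clients "$remote_addr$request_uri" $backend_host {
    33%  web1.local;
    33%  web2.local;
    *    web3.local;
}

server {
    listen 80;
    server_name example.com;

    resolver 127.0.0.1;                    # placeholder: a DNS that knows *.local

    location / {
        proxy_pass http://$backend_host;   # variable proxy_pass needs a resolver
        proxy_set_header Host $backend_host;
    }
}
```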


Source: (StackOverflow)

nginx real_ip_header and X-Forwarded-For seems wrong

The wikipedia description of the HTTP header X-Forwarded-For is:

X-Forwarded-For: client1, proxy1, proxy2, ...

The nginx documentation for the directive real_ip_header reads, in part:

This directive sets the name of the header used for transferring the replacement IP address.
In case of X-Forwarded-For, this module uses the last ip in the X-Forwarded-For header for replacement. [Emphasis mine]

These two descriptions seem at odds with one another. In our scenario, the X-Forwarded-For header is exactly as described -- the client's "real" IP address is the left-most entry. And sure enough, the behavior of nginx is to use the right-most value -- which, obviously, is just one of our proxy servers.

My understanding of X-Real-IP is that it is supposed to be used to determine the actual client IP address -- not the proxy. Am I missing something, or is this a bug in nginx?

And, beyond that, does anyone have any suggestions for how to make the X-Real-IP header display the left-most value, as indicated by the definition of X-Forwarded-For?
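The directive that reconciles the two descriptions is real_ip_recursive (Nginx 1.2.1/1.3.0+): with it enabled, Nginx strips every trusted proxy address from the right of X-Forwarded-For and uses the first remaining (i.e. left-most untrusted) entry, rather than just the last one. A sketch, with the trusted range as a placeholder for your own proxy tier:

```nginx
# Trust our own proxies, then walk X-Forwarded-For from the right.
set_real_ip_from  10.0.0.0/8;        # placeholder: your proxies' address range
real_ip_header    X-Forwarded-For;
real_ip_recursive on;                # skip all trusted hops, not just the last
```

Without real_ip_recursive, nginx really does take only the right-most value, which is the documented (if surprising) behavior the question quotes.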


Source: (StackOverflow)

Nginx vs Apache as reverse proxy, which one to choose

This kind of question has probably been asked here before, but I couldn't find one that really matches mine. I've heard that nginx's performance is quite impressive, but Apache has more documentation and a bigger community (read: experts) to get help from.

What I want to know is how the two web servers compare as a reverse proxy in a VPS environment: performance, ease of configuration, level of customization, and so on.

I'm still weighing the two for a Ruby web app (not Rails) served with Thin (a Ruby web server). A specific answer would be much appreciated; a general answer that doesn't touch the Ruby part is okay. I'm still a noob at web server administration.


Source: (StackOverflow)

What is a Reverse Proxy?

I know what a proxy is, but I'm not sure what a reverse proxy is. It seems to me that it's probably akin to a load balancer. Is that correct?


Source: (StackOverflow)

Why is setting Nginx as a reverse proxy a good idea?

I have a Django site running on Gunicorn with a reverse proxy through Nginx. Isn't Nginx just an extra unnecessary overhead? How does adding that on top of Gunicorn help?


Source: (StackOverflow)

Is there a name-based virtual host SSH reverse proxy?

I've grown quite fond of HTTP reverse proxies in our development environment and have found the DNS-based virtual host reverse proxy quite useful. Having only one port open on the firewall (and the standard one at that) makes management much easier.

I'd like to find something similar for SSH connections but haven't had much luck. I'd prefer not to simply use SSH tunneling, since that requires opening port ranges beyond the standard one. Is there anything out there that can do this?

Could HAProxy do this?


Source: (StackOverflow)

Configure Nginx as reverse proxy with upstream SSL

I'm trying to configure an Nginx server as a reverse proxy, so that the HTTPS requests it receives from clients are forwarded to the upstream server via HTTPS as well.

Here's the configuration that I use:

http {

        # enable reverse proxy
        proxy_redirect              off;
        proxy_set_header            Host            $http_host;
        proxy_set_header            X-Real-IP       $remote_addr;
        proxy_set_header            X-Forwarded-For $proxy_add_x_forwarded_for;

    upstream streaming_example_com 
    {
          server WEBSERVER_IP:443; 
    }

    server 
    {
        listen      443 default ssl;
        server_name streaming.example.com;
        access_log  /tmp/nginx_reverse_access.log;
        error_log   /tmp/nginx_reverse_error.log;
        root        /usr/local/nginx/html;
        index       index.html;

        ssl_session_cache    shared:SSL:1m;
        ssl_session_timeout  10m;
        ssl_certificate /etc/nginx/ssl/example.com.crt;
        ssl_certificate_key /etc/nginx/ssl/example.com.key;
        ssl_verify_client off;
        ssl_protocols        SSLv3 TLSv1 TLSv1.1 TLSv1.2;
        ssl_ciphers RC4:HIGH:!aNULL:!MD5;
        ssl_prefer_server_ciphers on;


       location /
       {
            proxy_pass  https://streaming_example_com;
        }
    }

}

Anyway, when I try to access a file through the reverse proxy, this is the error I get in its logs:

2014/03/20 12:09:07 [error] 4113079#0: *1 SSL_do_handshake() failed (SSL: error:1408E0F4:SSL routines:SSL3_GET_MESSAGE:unexpected message) while SSL handshaking to upstream, client: 192.168.1.2, server: streaming.example.com, request: "GET /publishers/0/645/_teaser.jpg HTTP/1.1", upstream: "https://MYSERVER.COM:443/publishers/0/645/_teaser.jpg", host: "streaming.example.com"

Any idea what I am doing wrong?
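One common cause of this handshake failure is an upstream that expects SNI, which Nginx does not send to upstreams by default; a protocol or cipher mismatch is another. A hedged sketch of the proxy-side TLS directives to try (proxy_ssl_server_name requires Nginx 1.7.0+, proxy_ssl_protocols 1.5.6+):

```nginx
location / {
    proxy_pass https://streaming_example_com;
    proxy_ssl_server_name on;                    # send SNI to the upstream
    proxy_ssl_protocols   TLSv1 TLSv1.1 TLSv1.2; # match what the backend offers
}
```

You can also check what the backend itself negotiates with `openssl s_client -connect WEBSERVER_IP:443 -servername streaming.example.com` and compare it against these settings.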


Source: (StackOverflow)

Nginx Config: Front-End Reverse Proxy to Another Port

I have a small web server that serves requests on port 5010 rather than 80.

I would like to use nginx as a front-end proxy to receive requests on port 80 and then have them handled by the service on port 5010.

I have installed nginx successfully and it runs smoothly on Ubuntu Karmic.

But my attempts to reconfigure the default nginx.conf have not been successful.

I tried including the listen argument for port 5010 in the server directive.

I have also tried the proxy_pass directive.

Any suggestions on the changes that need to be made, or the directives that need to be set, to get this port forwarding working?
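A minimal server block for this would look as follows; the key point is that `listen` stays on 80 and port 5010 belongs in proxy_pass, not in listen. The header lines are optional but customary:

```nginx
server {
    listen 80;                              # nginx answers on port 80...
    server_name _;

    location / {
        proxy_pass http://127.0.0.1:5010;   # ...and hands everything to 5010
        proxy_set_header Host            $host;
        proxy_set_header X-Real-IP       $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```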


Source: (StackOverflow)

nginx failover without load balancing

I'm having trouble configuring nginx.

I'm using nginx as a reverse proxy. I want to send all requests to my first server; if the first server is down, requests should go to the second server.

In short, how can I have a failover solution without load balancing?
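An upstream block with a single `backup` server does exactly this: the second server receives traffic only while the first is marked unavailable. A sketch with placeholder hostnames:

```nginx
upstream failover_pair {
    server primary.example.com:80 max_fails=3 fail_timeout=10s;
    server backup.example.com:80  backup;    # used only while primary is down
}

server {
    listen 80;
    location / {
        proxy_pass http://failover_pair;
    }
}
```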


Source: (StackOverflow)

An upstream response is buffered to a temporary file

I have a rather large and slow (complex data, complex frontend) web application built in RoR, served by Puma with nginx as a reverse proxy. Looking at the nginx error log I see quite a few entries like:

2014/04/08 09:46:08 [warn] 20058#0: *819237 an upstream response is buffered to a temporary file /var/lib/nginx/proxy/8/47/0000038478 while reading upstream, client: 5.144.169.242, server: engagement-console.foo.it, request: "GET /elements/pending?customer_id=2&page=2 HTTP/1.0", upstream: "http://unix:///home/deployer/apps/conversationflow/shared/sockets/puma.sock:/elements/pending?customer_id=2&page=2", host: "ec.reputationmonitor.it", referrer: "http://ec.foo.it/elements/pending?customer_id=2&page=3"

I find it rather curious, as it's very unlikely that that page remains the same across different users and interactions, and I would not expect buffering the response on disk to be necessary or useful.

I know about proxy_max_temp_file_size and setting it to 0, but that seems a little awkward to me (the proxy tries to buffer, but has no file to buffer to... how can that be faster?).

My questions are:

1) How can I remove the [warn] and avoid buffering responses? Is it better to turn off proxy_buffering or set proxy_max_temp_file_size to 0? Why?

2) If nginx buffers a response when does it serve the buffered response, to whom and why?

3) Why does nginx turn proxy_buffering on by default, and then [warn] you if it actually buffers a response?

4) When does a response trigger that behavior? When it takes more than some number of seconds (how many?) to serve? Is this configurable?

TIA, ngw
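To take (4) first: the warning is size-triggered, not time-triggered. It appears when a response no longer fits in the in-memory proxy_buffers and spills over to a temp file. So the gentler fix is usually to enlarge the memory buffers rather than disable buffering; a sketch (buffer sizes and the socket path are illustrative):

```nginx
location / {
    proxy_pass http://unix:/path/to/puma.sock;   # placeholder socket path

    # Enlarge in-memory buffers so typical responses never hit disk:
    proxy_buffer_size       16k;
    proxy_buffers           8 64k;
    proxy_busy_buffers_size 128k;

    # Or stream responses straight to clients (ties up Puma workers
    # for the duration of slow client downloads):
    # proxy_buffering off;
}
```

As for (2) and (3): buffering lets nginx absorb the backend's response at full speed and then spoon-feed slow clients itself, freeing the scarcer Puma workers sooner; that is why it is on by default. proxy_max_temp_file_size 0 only caps the disk spill; in-memory buffering still happens.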


Source: (StackOverflow)

How to handle relative urls correctly with a reverse proxy

I have a reverse proxy setup as follows in Apache:

Server A with address www.example.com/folder is the reverse proxy server.

It maps to: Server B with address test.madeupurl.com

This kind of works. But the problem I have is that on www.example.com/folder, all of the relative links are of the form www.example.com/css/examplefilename.css rather than www.example.com/folder/css/examplefilename.css.

How do I fix this?

So far my reverse proxy has this on Server A (www.example.com):

<Location /folder>
    ProxyPass  http://test.madeupurl.com
    ProxyPassReverse http://test.madeupurl.com
</Location>
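ProxyPassReverse only rewrites response headers (Location and friends), never links inside HTML bodies, which is why root-relative links escape /folder. Rewriting the bodies takes the separate mod_proxy_html module; a sketch, where the URL map is an assumption about where the backend's links start:

```apache
# Load mod_proxy_html (and mod_xml2enc) first, e.g. a2enmod proxy_html.
<Location /folder/>
    ProxyPass        http://test.madeupurl.com/
    ProxyPassReverse http://test.madeupurl.com/

    ProxyHTMLEnable  On
    ProxyHTMLURLMap  /  /folder/       # rewrite root-relative links in HTML
</Location>
```

Note the trailing slashes on both the Location and the target; without them Apache can mis-join paths at the boundary.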

Source: (StackOverflow)

How to rewrite the domain part of Set-Cookie in a nginx reverse proxy?

I have a simple nginx reverse proxy:

server {
  server_name external.domain.com;
  location / {
    proxy_pass http://backend.int/;
  }
}

The problem is that Set-Cookie response headers contain ;Domain=backend.int, because the backend does not know it is being reverse proxied.

How can I make nginx rewrite the content of the Set-Cookie response headers, replacing ;Domain=backend.int with ;Domain=external.domain.com?

Passing the Host header unchanged is not an option in this case.

Apache httpd has had this feature for a while, see ProxyPassReverseCookieDomain, but I cannot seem to find a way to do the same in nginx.
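Nginx does have an equivalent of ProxyPassReverseCookieDomain: the proxy_cookie_domain directive, available since Nginx 1.1.15. Applied to the config above:

```nginx
server {
  server_name external.domain.com;
  location / {
    proxy_pass http://backend.int/;
    # Rewrite Domain= in Set-Cookie response headers:
    proxy_cookie_domain backend.int external.domain.com;
  }
}
```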


Source: (StackOverflow)