How To: NGINX Reverse Proxy to Another Server

Why would you want to use the proxy_pass directive?

  1. You are a high school student. Your school blocks access to social networks. You want to sell Facebook.com proxy access to your classmates for five dollars a month. You can use NGINX to proxy HTTP traffic from an IP address you control to the Facebook.com domain. Make sure you don’t steal your classmates’ usernames and passwords 😉

  2. You work at a tech company. Your company is embracing microservices. Your boss asked you to figure something out. You can use NGINX’s proxy_pass to segment HTTP requests by service, using subdomain- or path-based routing.

  3. You have a memcached instance. You can use memcached_pass, a cousin of proxy_pass, to make memcached requests. Most infrastructures proxy HTTP to internal applications (think Rails or Node.js web apps), and those apps then query memcached. With NGINX you can query memcached directly and eliminate the middle person.
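
As a minimal sketch of the memcached case: this assumes memcached is listening on localhost:11211, that cache keys match request URIs, and that a hypothetical app server on port 3000 handles cache misses — all of those are assumptions, not part of the original setup.

```nginx
location /cache/ {
    # memcached_pass requires $memcached_key to be set
    set            $memcached_key "$uri";
    memcached_pass localhost:11211;
    # On a cache miss (404) or backend error, fall through to the app
    error_page     404 502 504 = @app;
}

location @app {
    proxy_pass http://localhost:3000;
}
```

Note that NGINX only reads from memcached; your application is still responsible for writing responses into the cache under matching keys.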

Reverse Proxy for HTTP Requests

Reverse proxies are useful for exposing one endpoint, like www.mysite.com, to the outside world. Say you have an API, a CRM, and an admin backend — but your website is WordPress, your API is Node.js, and your CRM and admin interface are PHP. Using NGINX, you can make these applications look like one.

WordPress will be the "main" server route and will respond to anything not in the other locations.

location /api {
    proxy_pass http://localhost:3000;
}

location /crm {
    proxy_pass http://localhost:9000;
}

location /admin {
    proxy_pass http://localhost:9001;
}

That’s all it takes for a simple way to combine applications into a common HTTP endpoint.
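
For context, here is how one of those locations fits into a complete server block, with WordPress catching everything the other locations don’t match. The port 8080 for WordPress is an assumption for illustration — use whatever your WordPress backend actually listens on.

```nginx
server {
    listen      80;
    server_name www.mysite.com;

    # WordPress is the "main" route: anything not matched
    # by a more specific location lands here
    location / {
        proxy_pass http://localhost:8080;
    }

    # Longer path prefixes win, so /api requests go to Node.js
    location /api {
        proxy_pass http://localhost:3000;
    }
}
```

NGINX picks the location with the longest matching prefix, which is why the bare `/` block safely acts as the catch-all.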

Buffer Configuration

NGINX will buffer responses from your application server. This means NGINX waits until your app server has sent 100% of the response before sending it to the client. This may seem like it would slow your app down, but really it allows your application server to hand responses to NGINX very quickly and move on. NGINX is really good at serving thousands of clients, so you should let NGINX do that work.

There are typically 8 buffers of either 4k or 8k size for a single connection.

If you want to change these on a per location basis see this config:

location /buffers {
    proxy_buffers 16 4k;
    proxy_buffer_size 2k;
    proxy_pass http://localhost:8000;
}

Per connection to the /buffers location, 16 buffers of 4k each are allocated. The proxy_buffer_size directive configures a separate buffer that holds the first part of the response from your application, which contains the headers. That data is typically small and fits in a small buffer.

If you want to turn off buffers for a location use this:

location /no/buffers {
    proxy_buffering off;
    proxy_pass http://localhost:8000;
}

If you can ensure your clients are very fast, turning off buffering may be good for performance.

Passing Headers

Because NGINX opens its own TCP connection to the upstream, the source IP address on that connection is NGINX’s. Your application server will only ever see NGINX’s IP address making HTTP requests. Sometimes you want to know the user’s real IP address for logging, security, or geolocation purposes.

Use the proxy_set_header directive.

location /api/ {
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_pass http://localhost:8000;
}
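
If the request may pass through several proxies, a common extension of the above is to also forward the full client chain and the original scheme. This is a sketch using NGINX’s built-in variables; the /api/ path and port 8000 are carried over from the example above.

```nginx
location /api/ {
    proxy_set_header Host              $host;
    # The client IP as seen by this NGINX instance
    proxy_set_header X-Real-IP         $remote_addr;
    # Appends $remote_addr to any X-Forwarded-For header
    # already present on the incoming request
    proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
    # "http" or "https", so the app knows the original scheme
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_pass http://localhost:8000;
}
```

Your application then reads these headers instead of the socket’s peer address — and should only trust them when the request really did come from your proxy.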
