What is HAProxy? How It Works, Architecture & Setup Guide (2025)
HAProxy is one of the world's most widely deployed load balancers and reverse proxies, used by GitHub, Airbnb, and thousands of other production systems. This guide explains exactly how it works — with real config examples you can use today.
What is HAProxy?
HAProxy (High Availability Proxy) is a free, open-source TCP/HTTP load balancer and reverse proxy. Written in C by Willy Tarreau in 2000, it is designed for high performance and reliability — processing millions of concurrent connections with minimal memory and CPU overhead.
At its core, HAProxy does one thing extremely well: it sits between your clients and your backend servers, intelligently routing incoming connections and requests to ensure your application stays fast, available, and secure.
HAProxy powers infrastructure at companies such as:
- GitHub — routing git and web traffic
- Twitter/X — API traffic routing
- Airbnb — core load balancing layer
- Reddit — high-volume request routing
- Stack Overflow — primary load balancer
HAProxy is widely regarded as one of the most commonly deployed dedicated software load balancers in production environments.
How HAProxy Works
HAProxy operates as a proxy — it terminates incoming connections from clients, then opens a new connection to a backend server on the client's behalf. This gives HAProxy full visibility and control over the traffic.
The request lifecycle looks like this:
- A client connects to HAProxy on a configured port (e.g., port 80 or 443)
- HAProxy evaluates the request against ACL rules in the matching frontend
- It selects the appropriate backend pool based on routing rules
- It picks a server from the backend using the configured balancing algorithm
- It forwards the request to that server
- It relays the response back to the client
HAProxy operates in two modes:
- HTTP mode — full HTTP/1.1 and HTTP/2 awareness. HAProxy can inspect headers, rewrite URLs, route based on cookies, and apply advanced rules.
- TCP mode — pure TCP proxying without HTTP parsing. Used for databases, message queues, HTTPS passthrough, and any non-HTTP protocol.
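As an illustrative sketch of TCP mode, here is what proxying a PostgreSQL pool might look like (the addresses, ports, and names are placeholders, not part of the guide's example config):

```
frontend pg_frontend
    mode tcp
    bind *:5432
    default_backend pg_servers

backend pg_servers
    mode tcp
    balance leastconn
    server db1 10.0.1.1:5432 check
    server db2 10.0.1.2:5432 check
```

Note that both the frontend and the backend declare mode tcp — HAProxy never parses the PostgreSQL wire protocol, it simply relays bytes in both directions.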
The Frontend → Backend Model Explained
Every HAProxy configuration is built around two primitives: frontends and backends.
Frontend
A frontend defines how HAProxy listens for traffic. It specifies:
- The IP address and port to bind to
- The mode (HTTP or TCP)
- ACL rules that determine where to route the traffic
- Default backend to use if no ACL matches
Backend
A backend defines the pool of servers that handle the actual requests. It specifies:
- The list of servers (IP + port)
- The load balancing algorithm (roundrobin, leastconn, etc.)
- Health check configuration
- Connection limits and timeouts
A single frontend can route to multiple backends using ACL rules. For example, requests to /api/ go to the API backend, while everything else goes to the web backend.
Your First HAProxy Config in 10 Minutes
Here's a minimal but production-ready haproxy.cfg that load balances HTTP traffic across three application servers:
```
global
    log /dev/log local0
    maxconn 50000
    user haproxy
    group haproxy
    daemon

defaults
    log global
    mode http
    option httplog
    option dontlognull
    timeout connect 5s
    timeout client 30s
    timeout server 30s
    option forwardfor
    option http-server-close

frontend web_frontend
    bind *:80
    bind *:443 ssl crt /etc/ssl/certs/site.pem
    http-request redirect scheme https unless { ssl_fc }
    default_backend web_servers

backend web_servers
    balance roundrobin
    option httpchk GET /health
    server app1 10.0.0.1:8080 check
    server app2 10.0.0.2:8080 check
    server app3 10.0.0.3:8080 check
```

What this config does:
- Listens on port 80 and 443
- Automatically redirects HTTP to HTTPS
- Terminates SSL at HAProxy (removes load from app servers)
- Distributes requests across 3 app servers in round-robin
- Checks /health on each server every 2 seconds (HAProxy's default check interval)
- Automatically removes a server from rotation if health checks fail
Load Balancing Algorithms
HAProxy supports several balancing algorithms. Choose the right one for your workload:
- roundrobin — Default. Distributes requests sequentially across servers. Best for stateless applications with similar server capacity.
- leastconn — Routes to the server with the fewest active connections. Best for long-lived connections (WebSockets, databases).
- source — Hashes the client IP to always route to the same server. Useful for simple session affinity.
- uri — Hashes the request URI. Useful for caching scenarios.
- random — Picks a random server. Useful for simple horizontal scaling.
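The algorithm is set per backend with the balance directive. A hypothetical sketch showing two backends tuned for different workloads (the backend names and server addresses are placeholders):

```
backend websocket_servers
    balance leastconn            # long-lived connections: favor the least-loaded server
    server ws1 10.0.2.1:9000 check
    server ws2 10.0.2.2:9000 check

backend session_servers
    balance source               # hash the client IP for simple session affinity
    hash-type consistent         # minimize remapping when servers are added or removed
    server app1 10.0.3.1:8080 check
    server app2 10.0.3.2:8080 check
```

With hash-based algorithms such as source and uri, adding hash-type consistent keeps most existing mappings stable when the server pool changes.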
Health Checks and Automatic Failover
HAProxy's health checking is one of its most powerful features. It continuously monitors your backend servers and automatically removes unhealthy ones from rotation — with no manual intervention required.
Types of Health Checks
- TCP health check — Simply opens a TCP connection. If it connects, the server is considered healthy. Fast but shallow.
- HTTP health check — Sends an HTTP request (e.g., GET /health) and expects a 2xx response. More meaningful for web applications.
- Custom health check — Sends a custom string and expects a specific response. Useful for databases and custom protocols.
```
backend api_servers
    option httpchk GET /health
    http-check expect status 200
    server api1 10.0.0.1:3000 check inter 2s fall 3 rise 2
    server api2 10.0.0.2:3000 check inter 2s fall 3 rise 2
```

The fall 3 means a server is marked down after 3 consecutive failures. rise 2 means it returns to rotation after 2 consecutive successes. inter 2s sets the check interval to 2 seconds.
ACLs: Smart Routing Logic
Access Control Lists (ACLs) are conditions you define that HAProxy evaluates against incoming requests. They allow you to build routing logic based on almost any request attribute.
```
frontend web
    bind *:80
    mode http

    # Define ACLs
    acl is_api path_beg /api/
    acl is_static path_end .jpg .png .gif .css .js
    acl is_mobile hdr_sub(user-agent) -i mobile

    # Route based on ACLs
    use_backend api_backend if is_api
    use_backend static_backend if is_static
    default_backend web_backend
```

This routes API calls to a dedicated API backend, static assets to a CDN-backed backend, and everything else to the main web backend. All in a few lines of config.
HAProxy vs NGINX vs Traefik: When to Use Each
| Feature | HAProxy | NGINX | Traefik |
|---|---|---|---|
| Primary use | Load balancing | Web server + proxy | Cloud-native proxy |
| Raw performance | Highest | High | Moderate |
| Config complexity | Medium | Medium | Low (auto-discovery) |
| Kubernetes native | Via Ingress | Via Ingress | Native |
| Dynamic config reload | Yes | Yes | Yes (real-time) |
| SSL termination | Yes | Yes | Yes (auto Let's Encrypt) |
| Web server | No | Yes | No |
| Service mesh | No | No | Yes (Traefik Mesh) |
| Learning curve | Medium | Medium | Low |
Choose HAProxy when you need the best raw load balancing performance, advanced health checks, detailed traffic metrics, or you have complex routing rules.
Choose NGINX when you also need a web server for static files, or you want a single tool for both serving and proxying.
Choose Traefik when you're on Kubernetes and want automatic service discovery without manually updating config files.
Read our full comparison: HAProxy vs NGINX vs Traefik: Benchmark Results & Verdict (2025)
HAProxy in Kubernetes
In Kubernetes environments, HAProxy is commonly used as an Ingress controller via the HAProxy Kubernetes Ingress Controller. It provides:
- Automatic SSL certificate management
- Advanced rate limiting and DDoS protection
- mTLS for service-to-service communication
- Canary deployments and A/B testing at the ingress layer
- Real-time configuration updates without restarts
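After installing the controller, you expose services through standard Kubernetes Ingress resources. A minimal sketch — the names, host, and ingress class are assumptions for illustration; check the controller's documentation for the exact class name in your cluster:

```
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
spec:
  ingressClassName: haproxy
  rules:
    - host: example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web-service
                port:
                  number: 80
```

The controller watches Ingress objects like this one and translates them into HAProxy frontend/backend configuration on the fly.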
Install it with Helm:
```
helm repo add haproxytech https://haproxytech.github.io/helm-charts
helm install haproxy-kubernetes-ingress haproxytech/kubernetes-ingress \
    --namespace haproxy-controller \
    --create-namespace
```

Monitoring HAProxy
HAProxy provides a built-in stats page and Prometheus metrics endpoint that give you full visibility into your load balancer.
Built-in Stats Page
```
frontend stats
    bind *:8404
    stats enable
    stats uri /stats
    stats refresh 10s
    stats auth admin:your-secure-password
```

Visit http://your-server:8404/stats for a real-time dashboard showing connections, server health, traffic rates, and error counts.
Prometheus Metrics
```
frontend prometheus
    bind *:8405
    http-request use-service prometheus-exporter if { path /metrics }
```

This exposes 100+ metrics you can scrape with Prometheus and visualize in Grafana. Integrate this with your cloud infrastructure automation stack for full observability.
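On the Prometheus side, a scrape job for this endpoint might look like the following sketch (the target hostname is a placeholder for your HAProxy server):

```
scrape_configs:
  - job_name: haproxy
    static_configs:
      - targets: ["your-server:8405"]
    metrics_path: /metrics
```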
Frequently Asked Questions
What is HAProxy used for?
HAProxy is used as a TCP/HTTP load balancer and reverse proxy. It distributes incoming traffic across multiple backend servers, provides health checking, SSL/TLS termination, rate limiting, and DDoS protection.
Is HAProxy better than NGINX?
For pure load balancing at high concurrency, HAProxy outperforms NGINX. NGINX is better when you also need a web server for static files. The two are often used together: HAProxy at the front for load balancing, NGINX behind for serving content.
Is HAProxy free to use?
Yes. HAProxy Community Edition is free and open source, licensed under the GNU GPL v2 (with some components under the LGPL). HAProxy Enterprise is a paid version with additional features and commercial support.
What is the difference between HAProxy frontend and backend?
A frontend defines how HAProxy receives incoming connections — the port, protocol, and routing rules. A backend defines the pool of servers that handle requests. Traffic flows from client → frontend → backend → server.
Running HAProxy in production?
We've configured HAProxy for 50+ startups on AWS. Get a free infrastructure review from our DevOps team.
Need Help Configuring HAProxy?
Our DevOps team sets up production-grade HAProxy configurations with SSL termination, health checks, rate limiting, and full monitoring — as part of our cloud infrastructure service.
Related Articles
HAProxy vs NGINX vs Traefik: Benchmark Results & Verdict (2025)
Performance benchmarks and honest comparison.
HAProxy Security Hardening: DDoS Protection & SSL Setup
Harden your HAProxy installation step by step.
HAProxy Rate Limiting: Complete Configuration Guide
Implement rate limiting with stick tables.