[squid-users] Protecting squid against ddos attacks

Amos Jeffries squid3 at treenet.co.nz
Sat Sep 21 00:19:18 UTC 2019

On 21/09/19 1:03 am, Chirayu Patel wrote:
> --> I have installed squid in a wifi access point which will in many
> cases behave as an edge gateway as well.. So basically it itself is the
> firewall. There is nothing in front to protect it.
> --> There are 4 ports that are opened.. If someone decides to do a DDOS
> attack on them, what options do I have to protect against them.

Protection is pretty much the exact opposite of what you currently have
this proxy configured to do.

Right now you have it setup to allow all traffic *from* anywhere *to*
anywhere, with no controls, no logging, and no report to any backend
where the traffic originated.

Squid's default configuration comes with some DoS protections as
recommended config; others are built in and always active.

> This is my squid config file :
> ------------------------------------------
> http_port 3129 intercept
> https_port 3131 intercept ssl-bump cert=/etc/ray/certificates/myCA.pem \
>     generate-host-certificates=off dynamic_cert_mem_cache_size=2MB
> ## For Captive Portal    
> http_port 3132 intercept
> https_port 3133 intercept ssl-bump cert=/etc/ray/certificates/myCA.pem \
>     generate-host-certificates=off dynamic_cert_mem_cache_size=1MB

That comment "For Captive Portal" is out of place. Interception *is*
captive portal, so all your ports above are captive portal ports.

Usually you would only need one port of each type. Including the
forward-proxy port (3128 with no mode flag, which you are missing).

For DoS and DDoS protection, having more ports receiving traffic does
help by allowing more TCP port numbers to be available for use. But you
need firewall rules to spread the traffic load across those ports. See
the "Frontend Alternative 1" section of

For the best DDoS protection Squid can offer you would have a
multi-machine setup like that config page is part of. The particular
Squid you have right now though can gain from just having the port load
balancing part. You can extend the backend part on other machines later
if you want / need.
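One way to spread intercepted traffic across multiple ports is the
iptables "statistic" match. A minimal sketch, assuming the LAN interface
is wlan0 and reusing the two plain-HTTP intercept ports from the config
quoted below (3129, 3132) - adjust both to your setup:

```shell
# Round-robin new port-80 connections across two Squid intercept ports.
# First rule claims every 2nd connection; the rest fall through to the second.
iptables -t nat -A PREROUTING -i wlan0 -p tcp --dport 80 \
    -m statistic --mode nth --every 2 --packet 0 \
    -j REDIRECT --to-ports 3129
iptables -t nat -A PREROUTING -i wlan0 -p tcp --dport 80 \
    -j REDIRECT --to-ports 3132
```

The same pattern applies to port 443 and the https_port entries.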

> # TLS/SSL bumping definitions
> acl tls_s1_connect at_step SslBump1
> acl tls_s2_client_hello at_step SslBump2
> acl tls_s3_server_hello at_step SslBump3

Unused ACLs still consume memory. Not much, but it is still memory.

> # TLS/SSL bumping steps
> ssl_bump peek tls_s1_connect all # peek at TLS/SSL connect data

The "all" on the above line is unnecessary and wastes CPU cycles on
every new connection. Remove it.
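Putting the two points above together, the bump section could be trimmed
to just the one ACL that is actually used:

```conf
# Only the step-1 ACL is referenced; the SslBump2/SslBump3 ACLs can go.
acl tls_s1_connect at_step SslBump1

ssl_bump peek tls_s1_connect   # peek at TLS/SSL connect data
ssl_bump splice all            # splice: no active bumping
```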

> ssl_bump splice all # splice: no active bumping
> on_unsupported_protocol tunnel all

The tunnel action causes Squid to set up a server connection. That costs
2x TCP ports, 2x FDs, client I/O, server I/O, CPU cycles to perform all
the I/O, and memory for all the state and I/O buffers.

While this may give you good service for weird client traffic, if your
DDoS risk is high it may be better to use "respond" instead, with a
"deny_info TCP_RESET" attached to the ACL.
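A rough sketch of that alternative; the ACL name fastReject is made up
for illustration:

```conf
# Respond (rather than tunnel) to unsupported protocols, and attach a
# TCP_RESET so the "response" is a bare RST instead of an error page.
acl fastReject src all
deny_info TCP_RESET fastReject
on_unsupported_protocol respond fastReject
```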

> pinger_enable off
> digest_generation off
> netdb_filename none
> ipcache_size 128

A larger ipcache will help your high-traffic periods by reducing delays
on traffic you let through the proxy.

DDoS can reduce that benefit. But that is only a *visual* effect, there
is no more resource consumption than the DDoS would cause with a smaller
ipcache size.

So reducing this cache size only slows your normal peak traffic at the
times it needs the fastest service. That is a tradeoff against the
memory available on your AP machine.

> fqdncache_size 128

Large fqdncache for intercept proxies helps retain valid Host header
records longer and reduce delays receiving new messages. So larger here
is better protection, against both normal traffic problems and DDoS.
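For example (the sizes here are only illustrative; tune them to the
memory your AP can spare):

```conf
# Larger name caches: fewer DNS round-trips per request under load.
ipcache_size 4096
fqdncache_size 4096
```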

> via off
> forwarded_for transparent
> httpd_suppress_version_string on
> cache deny all
> cache_mem 0 MB

Using memory to store recently used objects gives a roughly 100x speed
increase (aka DoS handling capacity).

This though is a tradeoff with the memory you have available. Whether
that speed gain is nanoseconds, milliseconds or whole seconds depends on
your network speeds.

FYI: The model of a frontend LB with backend cache machine (like that
CARP setup earlier) is designed to reduce that speed difference so both
the resource consumption and speed gain cache gives is primarily
happening at the backends - which are very close in the network so
minimal extra delay for the frontend LB.

> memory_pools off

Only if you have to. The memory usage patterns of high-traffic software
like Squid are quite different from what most OS mallocs are optimized
for. Squid's memory pools reduce that to a number of larger, more
consistently sized allocations.

Without these pools, memory allocation cycles add a bit of speed
reduction to the proxy, and worse, can easily lead to memory
fragmentation issues. At normal traffic speeds these effects are not
easily noticed, but under DoS or DDoS conditions they can drag the
entire machine to a crawl if not a complete halt on low-memory systems
(like a wifi AP).
> shutdown_lifetime 0 seconds
> #logfile_daemon /dev/null
> access_log none

A big part of DoS or DDoS protection is identifying the attack as it
starts. That requires the information about what traffic is happening to
go somewhere for processing.

Even if you have something else doing deep packet inspection I would
enable logs. Use one of the network logging modules to send them to
another machine if necessary for processing.
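For example, the built-in UDP logging module can ship records off-box
with minimal local I/O; the collector address below is a placeholder:

```conf
# Send access records to a remote log collector instead of local disk.
access_log udp://192.0.2.10:514 squid
```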

> #acl good_url dstdomain .yahoo.com <http://yahoo.com>
> http_access allow all

See the "default config" section of
<https://wiki.squid-cache.org/Squid-3.5>. The default rules are
primarily DoS protections these days, with some other nasty attacks
(potentially leading to DDoS indirectly) as well.

You need those rules, and you need a clear policy on what traffic you
allow through the proxy (from where, to where). Once you have that in
place you can reasonably consider what DoS/DDoS risk is left to deal
with. So long as your policy and rule is "http_access allow all" you can
be DoS'ed by a single 38 byte HTTP request message - at least the proxy
killed completely, possibly the whole machine.
 (FYI the other protection against this attack is the Via header, which
you have disabled).
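For reference, the shipped default rules look roughly like this
(trimmed; localnet must be set to your own LAN ranges):

```conf
acl localnet src 192.168.0.0/16        # your LAN range(s)
acl SSL_ports port 443
acl Safe_ports port 80
acl Safe_ports port 443
acl CONNECT method CONNECT

http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localhost manager
http_access deny manager
http_access allow localnet
http_access allow localhost
http_access deny all
```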

> url_rewrite_program /tmp/squid/urlcat_server_start.sh
> #url_rewrite_bypass on
> url_rewrite_children 1 startup=1 idle=1 concurrency=30 queue-size=10000
> on-persistent-overload=ERR

Having only one helper may be a source of problems under any conditions.
The ERR will help, but ideally you don't want to reach that state.

Consider whether the thing this helper is doing can be done by ACLs and
deny_info instead. That would avoid all the helper delays or I/O
resource needs, and any clients getting those ERR error pages.
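For example, if the helper is only used to block a list of domains,
something like this (the file path and ACL name are hypothetical) does
the same job inside Squid with no helper I/O at all:

```conf
# One domain per line in the file; subdomains match with a leading dot.
acl blocked_sites dstdomain "/etc/squid/blocked_domains.txt"
http_access deny blocked_sites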

> #url_rewrite_access allow all
> #url_rewrite_extras "%>a/%>A %un %>rm bump_mode=%ssl::bump_mode
> sni=\"%ssl::>sni\" referer=\"%{Referer}>h\""
> url_rewrite_extras "%>a %lp %ssl::>sni"
> max_filedesc 5120

This is the most direct measure of how large a DoS has to be to kill
your traffic. The smaller it is, the fewer connections the DoS needs to open.

There is a tradeoff though between the memory each of these needs to
allocate (~500 bytes just to exist, up to 256KB when in use) vs the
memory your machine has available.
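A back-of-envelope calculation of that envelope for the configured 5120
descriptors, using the per-FD figures above:

```python
def fd_memory_bounds(max_fd: int,
                     idle_bytes: int = 500,
                     busy_bytes: int = 256 * 1024) -> tuple[int, int]:
    """Return (idle, worst-case) memory in bytes for max_fd descriptors,
    at ~500 bytes each just to exist and up to 256 KB each when in use."""
    return max_fd * idle_bytes, max_fd * busy_bytes

idle, worst = fd_memory_bounds(5120)
print(f"{idle / 2**20:.1f} MiB idle")    # ~2.4 MiB just to exist
print(f"{worst / 2**30:.2f} GiB busy")   # 1.25 GiB if every FD is fully in use
```

So the current max_filedesc is cheap while idle but can demand over a
gigabyte under a full-load attack - well beyond a typical AP's RAM.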

The reduction of that "in use" time also matters as one might expect.
Which is where the cache_mem speedup comes in, to answer repeat queries
(eg those seen in a classical DoS) at orders of magnitude higher speed
than the backend network can provide the same answer.

> coredump_dir /tmp
> client_lifetime 30 minutes
> read_ahead_gap 8 KB

Additional to all the above, you can set up a "deny_info TCP_RESET ..."
for any ACL which performs a deny action in your rules. That will
prevent Squid generating an error page and consuming bandwidth to
deliver it when that ACL blocks access.

There is a tradeoff in annoying clients, who no longer know why their
connection ended, but under DoS or DDoS it is a huge bandwidth saver.
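Concretely, pairing the reset with whatever ACL does the denying (the
ACL name and file path here are hypothetical):

```conf
# Denials matched by this ACL send a bare TCP RST, not an HTML error page.
acl blocked_sites dstdomain "/etc/squid/blocked_domains.txt"
deny_info TCP_RESET blocked_sites
http_access deny blocked_sites
```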

