[squid-users] Protecting squid against ddos attacks

Chirayu Patel chirayu.patel at truecomtelesoft.com
Sun Sep 22 13:59:46 UTC 2019


Hi Amos,

Thanks a lot for the amazing insights.

Currently I am using Squid to achieve two things:
a) Content filtering - checking each URL against an external DB and
allowing or blocking it accordingly (using a url_rewrite helper).
b) Popping up a captive portal.

1) Regarding the use of 4 ports
Using iptables, I am redirecting non-authenticated users to ports 3132
and 3133. In Squid I then check the port on which the request arrived;
if it is one of those ports, I run the captive-portal flow.

Once a user is authenticated, I redirect their traffic to ports 3129
and 3131, and for those ports I run the content-policy flow.

I am not sure this is the right way of choosing a flow. Please advise
if there is another way to run two different flows with one Squid.
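
To illustrate what I mean, this is a sketch of how I imagine the split
could be expressed in one squid.conf with localport ACLs (the ACL names
are my own invention; the port numbers are the ones from my setup):

```
# Sketch: choose a flow based on the port the request arrived on
acl captive_ports localport 3132 3133
acl policy_ports  localport 3129 3131

# Captive-portal flow: skip the content-policy rewriter entirely
url_rewrite_access deny  captive_ports
# Content-policy flow: run the rewriter for authenticated ports only
url_rewrite_access allow policy_ports
```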

2) Actually the 3128 port is in the config; I missed attaching that line.

3) Right now the firewall only allows these 4 ports on the INPUT chain,
so I am not expecting traffic on any other ports. In that case, is it
okay that I removed the default config and kept "http_access allow all"?
The only issue is that the attacker now has 4 ports to run attacks on.
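
If I do restore them, my understanding is that the stock protections
look roughly like this (an abbreviated sketch of the defaults, not my
live config; the localnet range would need adjusting to the AP's LAN):

```
# Abbreviated sketch of the stock protective rules
acl localnet src 192.168.0.0/16      # adjust to the AP's LAN range
acl SSL_ports port 443
acl Safe_ports port 80 443
acl CONNECT method CONNECT

http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localnet
http_access deny all                 # default deny instead of allow all
```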

4) > on_unsupported_protocol tunnel all

I added this when I hit an issue with one of the apps, WhatsApp, which
sends HTTP traffic on the HTTPS port. If I replace that with *respond*,
I guess WhatsApp will become unusable, right?
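
One compromise I could try (just a guess on my part): keep the tunnel
behaviour only on the TLS intercept ports where WhatsApp needs it, and
respond on everything else:

```
# Sketch: tunnel odd protocols only on the TLS intercept ports
acl tls_ports localport 3131 3133
on_unsupported_protocol tunnel  tls_ports
on_unsupported_protocol respond all
```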

5) ipcache_size & fqdncache_size.
I was too concerned about memory usage, but I believe that does more
harm than good. I will increase them to their defaults.

6) cache_mem 0 MB
The default cache memory is quite large (256 MB), which is roughly the
total usable memory I have on the AP. Given that, what do you think would
be a good starting point if I keep it at a non-zero value?
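
For example, something modest like this, and then watch the RSS under
load (the 32 MB figure is purely my guess for a ~256 MB box):

```
# Guessed starting point for a ~256 MB access point
cache_mem 32 MB
maximum_object_size_in_memory 64 KB   # keep only small hot objects in RAM
```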

7) memory_pools off
Again, I was too concerned about memory use, and this comment scared me:

------------------------------------------------------------
If set, Squid will keep pools of allocated (but unused) memory available
for future use. If memory is a premium on your system and you believe your
malloc library outperforms Squid routines, disable this.
------------------------------------------------------------

But I believe trading some memory for performance is OK, so I am going
to enable it.
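
My plan would be to enable it but cap it, so the idle pools cannot grow
unbounded on this low-RAM box (the 16 MB limit is an arbitrary starting
value):

```
memory_pools on
memory_pools_limit 16 MB   # cap idle pooled memory; tune after observing
```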

8) The reason for keeping a single url_rewrite process is the caching of
the external content-policy API replies, mainly to avoid making external
calls when a cached answer is available. With multiple processes, the
cache would be split among them depending on which one made the call.
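
The helper is essentially doing this; a minimal sketch of the idea,
assuming the standard concurrent-helper line protocol (check_policy is a
hypothetical stand-in for our external content-policy API call):

```python
#!/usr/bin/env python3
"""Sketch of a concurrent url_rewrite helper with an in-process cache.

With one process, CACHE is shared by all concurrency channels, so each
URL hits the external API at most once.
"""
import sys

CACHE = {}  # url -> helper verdict

def check_policy(url):
    # Placeholder for the external content-policy lookup.
    return "OK"  # or e.g. "OK status=302 url=http://blocked.example/"

def handle_line(line):
    # With concurrency=N, Squid sends: "<channel-ID> <URL> <extras...>"
    channel, url = line.split()[:2]
    if url not in CACHE:
        CACHE[url] = check_policy(url)  # only the first lookup is remote
    return "%s %s" % (channel, CACHE[url])

if __name__ == "__main__":
    for line in sys.stdin:
        sys.stdout.write(handle_line(line) + "\n")
        sys.stdout.flush()  # Squid needs each reply immediately
```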

9) max_filedesc 5120.
I kept this number large because we were getting "out of file
descriptors" errors.

10) Above all, the best thing would be a way to differentiate a
high-traffic flow of HTTP(S) requests from legitimate users versus a
high-traffic flow generated by an attacker with a simple script.
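
The closest thing I have found inside Squid itself (if I read the docs
right) is a per-client connection cap, which at least blunts a
single-source script; the threshold is a guess to be tuned per site:

```
# Sketch: cap concurrent connections per client IP and reset the excess
acl too_many_conns maxconn 30
deny_info TCP_RESET too_many_conns
http_access deny too_many_conns
```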

--
Thank You
Chirayu Patel
Truecom Telesoft
+91 8758484287




On Sat, 21 Sep 2019 at 17:33, <squid-users-request at lists.squid-cache.org>
wrote:

>
>
> Today's Topics:
>
>    1. Re: Protecting squid against ddos attacks (Amos Jeffries)
>    2. Re: Help with HTTPS SQUID 3.1.23 https proxy not  working
>       (KleinEdith)
>    3. Re: Help with HTTPS SQUID 3.1.23 https proxy not working
>       (Matus UHLAR - fantomas)
>
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Sat, 21 Sep 2019 12:19:18 +1200
> From: Amos Jeffries <squid3 at treenet.co.nz>
> To: squid-users at lists.squid-cache.org
> Subject: Re: [squid-users] Protecting squid against ddos attacks
> Message-ID: <835c4d02-4246-8c65-f9ce-cf91c7dd9e92 at treenet.co.nz>
> Content-Type: text/plain; charset=utf-8
>
> On 21/09/19 1:03 am, Chirayu Patel wrote:
> > --> I have installed squid in a wifi access point which will in many
> > cases behave as an edge gateway as well.. So basically it itself is the
> > firewall. There is nothing in front to protect it.
> > --> There are 4 ports that are opened.. If someone decides to do a DDOS
> > attack on them, what options do I have to protect against them.
>
>
> Pretty much the exact opposite of what you have this proxy configured to
> be doing.
>
> Right now you have it setup to allow all traffic *from* anywhere *to*
> anywhere, with no controls, no logging, and no report to any backend
> where the traffic originated.
>
>
> Squid default configuration comes with some DoS protections as
> recommended config, some are built-in and always working.
>
> > This is my squid config file :
> >
> > ------------------------------------------
> > http_port 3129 intercept
> > https_port 3131 intercept ssl-bump cert=/etc/ray/certificates/myCA.pem \
> >     generate-host-certificates=off dynamic_cert_mem_cache_size=2MB
> > ## For Captive Portal
> > http_port 3132 intercept
> > https_port 3133 intercept ssl-bump cert=/etc/ray/certificates/myCA.pem \
> >     generate-host-certificates=off dynamic_cert_mem_cache_size=1MB
> >
>
> That comment "For Captive Portal" is out of place. Interception *is*
> captive portal, so all your ports above are captive portal ports.
>
> Usually you would only need one port of each type. Including the
> forward-proxy port (3128 with no mode flag, which you are missing).
>
> For DoS and DDoS protection, having more ports receiving traffic does
> help by allowing more TCP port numbers to be available for use. But you
> need firewall rules to spread the traffic load across those ports. See
> the "Frontend Alternative 1" section of
>  <https://wiki.squid-cache.org/ConfigExamples/ExtremeCarpFrontend>
>
> For the best DDoS protection Squid can offer you would have a
> multi-machine setup like that config page is part of. The particular
> Squid you have right now though can gain from just having the port load
> balancing part. You can extend the backend part on other machines later
> if you want / need.
>
>
> >
> > # TLS/SSL bumping definitions
> > acl tls_s1_connect at_step SslBump1
> > acl tls_s2_client_hello at_step SslBump2
> > acl tls_s3_server_hello at_step SslBump3
> >
>
> Unused ACLs still consume memory. Not much, but it is still memory.
>
>
> > # TLS/SSL bumping steps
> > ssl_bump peek tls_s1_connect all # peek at TLS/SSL connect data
>
> The "all" on the above line is unnecessary and a waste of CPU cycles on
> every new connection. Remove it.
>
> > ssl_bump splice all # splice: no active bumping
> > on_unsupported_protocol tunnel all
>
> The tunnel action causes Squid to setup a server connection. That costs
> 2x TCP ports, 2x FDs, client I/O, server I/O, CPU cycles to perform all
> the I/O, and memory for all the state and I/O buffers
>
> While this may give you good service for weird client traffic, if your
> DDoS risk is high it may be better to use "respond" instead, with an ACL
> that has "deny_info TCP_RESET" attached.
>
>
> >
> > pinger_enable off
> > digest_generation off
> > netdb_filename none
> > ipcache_size 128
>
> ipcache being larger will help your high-traffic periods by helping
> reduce delays on traffic you let through the proxy.
>
> DDoS can reduce that benefit. But that is only a *visual* effect, there
> is no more resource consumption than the DDoS would cause with a smaller
> ipcache size.
>
> So reducing this cache size only slows your normal peak traffic at times
> when it needs fastest service. That is a tradeoff against your AP
> machines memory available.
>
>
> > fqdncache_size 128
>
> Large fqdncache for intercept proxies helps retain valid Host header
> records longer and reduce delays receiving new messages. So larger here
> is better protection, against both normal traffic problems and DDoS.
>
>
> > via off
> > forwarded_for transparent
> > httpd_suppress_version_string on
> > cache deny all
> > cache_mem 0 MB
>
> Using memory to store objects recently used gives 100x speed increase
> (aka DoS handling capacity).
>
> This though is a tradeoff with the memory you have available. Whether
> that speed gain is nanoseconds, milliseconds or whole seconds depends on
> your network speeds.
>
> FYI: The model of a frontend LB with backend cache machine (like that
> CARP setup earlier) is designed to reduce that speed difference so both
> the resource consumption and speed gain cache gives is primarily
> happening at the backends - which are very close in the network so
> minimal extra delay for the frontend LB.
>
>
> > memory_pools off
>
> Only if you have to. The memory usage patterns of high-traffic software
> like Squid are quite different from what most OS mallocs are optimized
> for. The memory pools in Squid reduce that to a number of larger, more
> consistently sized allocations.
>
> Without these pools, memory allocation cycles add a bit of speed
> reduction to the proxy, and worse, can easily lead to memory
> fragmentation issues. At normal traffic speeds these effects are not
> easily noticed, but under DoS or DDoS conditions they can drag the
> entire machine to a crawl, if not a complete halt, on low-memory
> systems (like yours?).
>
>
> > shutdown_lifetime 0 seconds
> >
> > #logfile_daemon /dev/null
> > access_log none
>
> A big part of DoS or DDoS protection is identifying the attack as it
> starts. That requires the information about what traffic is happening to
> go somewhere for processing.
>
> Even if you have something else doing deep packet inspection I would
> enable logs. Use one of the network logging modules to send them to
> another machine if necessary for processing.
>
>
> >
> > #acl good_url dstdomain .yahoo.com
> > http_access allow all
> >
>
> See the "default config" section of
> <https://wiki.squid-cache.org/Squid-3.5>. The default rules are
> primarily DoS protections these days, with some other nasty attacks
> (potentially leading to DDoS indirectly) as well.
>
> You need those rules, and you need a clear policy on what traffic you
> allow through the proxy (from where, to where). Once you have that in
> place you can reasonably consider what DoS/DDoS risk is left to deal
> with. So long as your policy and rule is "http_access allow all" you can
> be DoS'ed by a single 38 byte HTTP request message - at least the proxy
> killed completely, possibly the whole machine.
>  (FYI the other protection against this attack is the Via header, which
> you have disabled).
>
>
> > url_rewrite_program /tmp/squid/urlcat_server_start.sh
> > #url_rewrite_bypass on
> > url_rewrite_children 1 startup=1 idle=1 concurrency=30 queue-size=10000
> > on-persistent-overload=ERR
>
> Having only one helper may be a source of problems under any conditions.
> The ERR will help, but ideally you don't want to reach that state.
>
> Consider whether the thing this helper is doing can be done by ACLs and
> deny_info instead. That would avoid all the helper delays or I/O
> resource needs, and any clients getting those ERR error pages.
>
>
> > #url_rewrite_access allow all
> > #url_rewrite_extras "%>a/%>A %un %>rm bump_mode=%ssl::bump_mode
> > sni=\"%ssl::>sni\" referer=\"%{Referer}>h\""
> > url_rewrite_extras "%>a %lp %ssl::>sni"
> >
> > max_filedesc 5120
>
> This is the most direct measure of how large a DoS has to be to kill
> your traffic. The smaller it is the fewer connections the DoS needs to
> open.
>
> There is a tradeoff though between the memory each of these needs to
> allocate (~500 bytes just to exist, up to 256KB when in use) vs the
> memory your machine has available.
>
> The reduction of that "in use" time also matters as one might expect.
> Which is where the cache_mem speedup comes in, to answer repeat queries
> (eg those seen in a classical DoS) at orders of magnitude higher speed
> than the backend network can provide the same answer.
>
>
> > coredump_dir /tmp
> > client_lifetime 30 minutes
> > read_ahead_gap 8 KB
> >
>
>
> Additional to all the above, you can setup a "deny_info TCP_RESET ..."
> for any ACLs which does a deny action in your rules. That will prevent
> Squid generating an error page and consuming bandwidth to deliver it
> when that ACL blocks access.
>
> There is a tradeoff between annoying clients who no longer know why
> their connection ended, but under DoS or DDoS it is a huge bandwidth saver.
>
>
> Amos
>
>
> ------------------------------
>
> Message: 2
> Date: Sat, 21 Sep 2019 02:51:05 -0500 (CDT)
> From: KleinEdith <SEO.Workwide at gmail.com>
> To: squid-users at lists.squid-cache.org
> Subject: Re: [squid-users] Help with HTTPS SQUID 3.1.23 https proxy
>         not     working
> Message-ID: <1569052265880-0.post at n4.nabble.com>
> Content-Type: text/plain; charset=UTF-8
>
> Squid as the https proxy not working
>
> # Example rule allowing access from your local networks.
> # Adapt to list your (internal) IP networks from where browsing
> # should be allowed
> acl localnet src 10.0.0.0/8 # RFC1918 possible internal network
> acl localnet src 172.16.0.0/12  # RFC1918 possible internal network
> acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
> #acl localnet src fc00::/7       # RFC 4193 local private network range
> #acl localnet src fe80::/10      # RFC 4291 link-local (directly plugged)
> machines
> acl localnet src 10.0.0.188 # David Computer
> acl SSL_ports port 443
> acl Safe_ports port 80      # http
> acl Safe_ports port 21      # ftp
> acl Safe_ports port 443     # https
> acl Safe_ports port 70      # gopher
> acl Safe_ports port 210     # wais
> acl Safe_ports port 1025-65535  # unregistered ports
> acl Safe_ports port 280     # http-mgmt
> acl Safe_ports port 488     # gss-http
> acl Safe_ports port 591     # filemaker
> acl Safe_ports port 777     # multiling http
> acl CONNECT method CONNECT
>
> acl bad_urls dstdomain "/etc/squid/blacklisted_sites.acl"
> acl good_url dstdomain "/etc/squid/good_sites.acl"
> #http_access deny bad_url
>
> I can´t connect to:
>
> Outlook.com <https://outlook.live.com>
> Yahoo.com <https://www.yahoo.com/>
> SuCarroRD.com <https://sucarrord.com/>
> Gmail.com <http://gmail.com/>
> Bing <https://www.bing.com/>
>
> And more. I need help please to fix this problem
>
>
>
>
> --
> Sent from:
> http://squid-web-proxy-cache.1019090.n4.nabble.com/Squid-Users-f1019091.html
>
>
> ------------------------------
>
> Message: 3
> Date: Sat, 21 Sep 2019 10:07:31 +0200
> From: Matus UHLAR - fantomas <uhlar at fantomas.sk>
> To: squid-users at lists.squid-cache.org
> Subject: Re: [squid-users] Help with HTTPS SQUID 3.1.23 https proxy
>         not working
> Message-ID: <20190921080731.GB29045 at fantomas.sk>
> Content-Type: text/plain; charset=iso-8859-2; format=flowed
>
> On 21.09.19 02:51, KleinEdith wrote:
> >Squid as the https proxy not working
> >
> ># Example rule allowing access from your local networks.
> ># Adapt to list your (internal) IP networks from where browsing
> ># should be allowed
> >acl localnet src 10.0.0.0/8 # RFC1918 possible internal network
> >acl localnet src 172.16.0.0/12  # RFC1918 possible internal network
> >acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
> >#acl localnet src fc00::/7       # RFC 4193 local private network range
> >#acl localnet src fe80::/10      # RFC 4291 link-local (directly plugged)
> >machines
> >acl localnet src 10.0.0.188 # David Computer
> >acl SSL_ports port 443
> >acl Safe_ports port 80      # http
> >acl Safe_ports port 21      # ftp
> >acl Safe_ports port 443     # https
> >acl Safe_ports port 70      # gopher
> >acl Safe_ports port 210     # wais
> >acl Safe_ports port 1025-65535  # unregistered ports
> >acl Safe_ports port 280     # http-mgmt
> >acl Safe_ports port 488     # gss-http
> >acl Safe_ports port 591     # filemaker
> >acl Safe_ports port 777     # multiling http
> >acl CONNECT method CONNECT
> >
> >acl bad_urls dstdomain "/etc/squid/blacklisted_sites.acl"
> >acl good_url dstdomain "/etc/squid/good_sites.acl"
> >#http_access deny bad_url
> >
> >I can´t connect to:
> >
> >Outlook.com <https://outlook.live.com>
> >Yahoo.com <https://www.yahoo.com/>
> >SuCarroRD.com <https://sucarrord.com/>
> >Gmail.com <http://gmail.com/>
> >Bing <https://www.bing.com/>
>
> You haven't posted the whole squid config, have you?
> If you did, you definitely need to allow some access (for a limited set
> of IPs), because the default is deny.
>
>
>
> --
> Matus UHLAR - fantomas, uhlar at fantomas.sk ; http://www.fantomas.sk/
> Warning: I wish NOT to receive e-mail advertising to this address.
> Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
> The early bird may get the worm, but the second mouse gets the cheese.
>
>
>
> ------------------------------
>
> End of squid-users Digest, Vol 61, Issue 28
> *******************************************
>

