[squid-users] Transparent deployment vs. web services behind DNS load balancing
Amos Jeffries
squid3 at treenet.co.nz
Wed Sep 27 02:10:11 UTC 2023
On 26/09/23 05:35, Denis Roy wrote:
> My installation is fairly simple: I run Squid 5.8 in transparent mode,
> on a pf-based firewall (FreeBSD 14.0).
>
> I intercept both HTTP (port 80) and HTTPS (port 443), splicing the
> exceptions I keep in a whitelist and bumping everything else. Simple.
>
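For readers following along, a minimal squid.conf sketch of that kind of
setup might look like the following (port numbers, file paths and ACL
names here are examples, not taken from the message above):

  # intercept ports receiving the firewall-redirected traffic
  http_port 3129 intercept
  https_port 3130 intercept ssl-bump tls-cert=/usr/local/etc/squid/ca.pem generate-host-certificates=on

  # splice whitelisted server names, bump everything else
  acl splice_domains ssl::server_name "/usr/local/etc/squid/whitelist.txt"
  acl step1 at_step SslBump1
  ssl_bump peek step1
  ssl_bump splice splice_domains
  ssl_bump bump all
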
> This is a relatively recent deployment, and it has been working well as
> far as the web browsing experience is concerned. Nonetheless, I have
> observed a certain number of 409 (Conflict) responses sharing
> similarities (more on that later). Rest assured, I have made 100%
> certain my clients and Squid use the same resolver (Unbound), installed
> on the same box with a fairly basic configuration.
>
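As an aside: Squid normally picks up its resolver from /etc/resolv.conf,
but the shared local resolver can also be made explicit in squid.conf
(assuming Unbound listens on 127.0.0.1; adjust as needed):

  # use the same local Unbound instance the LAN clients use
  dns_nameservers 127.0.0.1
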
> When I observe the 409s I am getting, they all share the same
> characteristics: the original client request came from an application
> or OS-related task, using DNS records with very low TTLs: 5 minutes or
> less, often 2 minutes. I could easily identify the vast majority of
> these domains as being load balanced with DNS solutions like Azure
> Traffic Manager and Akamai DNS.
>
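Such TTLs are easy to confirm from the resolver's answer; the second
column of a dig answer is the remaining TTL in seconds (the domain and
address below are illustrative placeholders, not real measurements):

  $ dig +noall +answer www.example.com A
  www.example.com.    120    IN    A    203.0.113.10
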
> Now, this makes sense: a thread on the client may essentially initiate
> a long-running task that lasts a couple of minutes (more than the TTL),
> during which it may establish several connections without calling
> gethostbyname again, resulting in Squid detecting a forgery attempt
> since it is unable to validate that the destination IP matches the
> intended destination domain. Essentially, this creates "false
> positives" and drops legitimate traffic.
>
Aye, pretty good summary of the current issue.
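For anyone wanting to confirm the same pattern on their own box, these
rejections are logged by Squid as host-header forgery alerts; something
like the following should surface them (the log path depends on your
build or port defaults):

  grep "SECURITY ALERT: Host header forgery detected" /var/log/squid/cache.log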
> I have searched a lot, and the only reliable way to completely solve
> this issue in a transparent deployment has been to implement a number
> of IP lists for such services (Azure Cloud, Azure Front Door, AWS,
> Akamai and such), bypassing Squid completely based on the destination
> IP address.
>
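On pf that bypass is typically done with a table consulted before the
redirection rules. A rough sketch, where the interface macro, ports and
table file path are examples only:

  table <cdn_bypass> persist file "/etc/pf/cdn_bypass.txt"

  # destinations in the table are never redirected to Squid
  no rdr on $int_if proto tcp from any to <cdn_bypass>
  rdr pass on $int_if proto tcp from any to any port 80 -> 127.0.0.1 port 3129
  rdr pass on $int_if proto tcp from any to any port 443 -> 127.0.0.1 port 3130
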
> I'd be interested to hear what other approaches there might be. Some
> package maintainers have chosen to drop the header check altogether
> (https://github.com/NethServer/dev/issues/5348).
Nod. Thus opening everyone using that package to the effects of
CVE-2009-0801. This is a high-risk action that IMO should be an explicit
choice made by the admin, not by a package distributor.
> I believe a better approach could be to just validate that the SNI of
> the TLS Client Hello matches the certificate obtained from the remote
> web server, perform the usual certificate validation (is it trusted,
> valid, etc.), and not rely so much on the DNS check,
That is just swapping the client-presented Host header for the
client-presented SNI, and the remotely-supplied DNS lookup for a
remotely-supplied certificate lookup. All the security problems of
CVE-2009-0801 open up again, but at the TLS trust layer instead of the
HTTP(S) trust layer.
> which can be expected to fail at times, given how ubiquitous DNS load
> balancing is among cloud-native services and large CDNs. But
> implementing this change would require serious development, which I am
> completely unable to take care of.
Indeed.
Alternatively, the design I have been slowly trying to work towards is
verifying that all requests on a long-lived connection only go to the
same origin/server IP. Once trust of that IP has been validated, we
should not have to re-verify every request against new DNS data, just
against the past history of the connection.
That approach, though, is also a lot of development.
HTH
Amos