[squid-dev] [squid-users] 4.17 and 5.3 SSL BUMP issue: SSL_ERROR_RX_RECORD_TOO_LONG

Eliezer Croitoru ngtech1ltd at gmail.com
Mon Jan 24 23:00:16 UTC 2022


Hey Alex, 

@Squid-dev

Thanks for the response!
It will take me some time to answer your questions and doubts about the solution at a more "scientific" level, rather than from a basic understanding of the way the code works.

The main reason I posted it on the squid-users list first is that this issue has not been resolved to an acceptable degree for such a long time.
This patch demonstrates that it is possible, on the Squid side, to prevent a basic DoS rooted in the assumption that the client can damage the cache.

There are a couple of options for what to do when such a request happens, i.e. when there is a difference between what Squid knows about the requested domain name and what the client is asking for (a split-brain scenario?).

What has been done in HTTP/1.x was mostly to force the proxy-resolved domain IP rather than consider the client's side of the picture.
With TLS we can assume that the client knows pretty well which IP it wants to reach.
The only thing left, from an admin point of view, is to verify the remote host against the proxy's local CA bundle(s).
We can reasonably assume that any TLS connection that validates against the proxy's CA bundle(s) is trusted, and that there is no need
for any special test of the destination address to establish basic trust for caching. (Revocations are an exception to this.)
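
For illustration, a minimal squid.conf sketch of that admin-side trust anchor (assuming Squid 4+ built with OpenSSL; the bundle path is just an example):

  # Validate origin server certificates against the admin-chosen CA bundle(s)
  tls_outgoing_options cafile=/etc/pki/tls/certs/ca-bundle.crt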

Indeed, some tests are still required, but when the check defeats the basic purpose of the proxy, which in my case is content filtering and not caching,
I and many others would prefer close to zero caching but an operational service.
(Incidentally, with TLS connections that is what happens in most cases anyway.)
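
To make that trade-off concrete, here is a minimal, illustrative-only sketch of a filtering-first setup that gives up caching entirely:

  # With no cache_dir configured and "cache deny all", Squid still filters and
  # forwards traffic but never stores or serves cached objects, so a forged
  # Host header cannot poison anything.
  cache deny all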

I have traced the issue back at least 6 years.
I tried to find a bug report that contains NONE/409, but the Bugzilla search found none, despite the fact that many have encountered this issue in production with live clients (as opposed to API or AI systems).
(Why has no one responded or filed a bug report, even a duplicate one?)

Something has been broken since v3.5. :\

So, to answer, these are the related threads and pages I have found:
* http://lists.squid-cache.org/pipermail/squid-users/2020-November/022913.html
* http://www.squid-cache.org/mail-archive/squid-dev/201112/0035.html
* https://forum.netgate.com/topic/159364/squid-squidguard-none-409-and-dns-issue
* https://docs.netgate.com/pfsense/en/latest/troubleshooting/squid.html
* https://www.linuxquestions.org/questions/linux-server-73/tag_none-409-connect-squid-3-5-20-a-4175620518/
* https://community.spiceworks.com/topic/351106-pfsense-and-squidguard-error-page


The use cases that I have seen until now are (please add more cases to the list if you have any):
* a remote-work VPN that forces remote DNS (geographically and at the network level) but splits the tunnel traffic between the office and the local WAN connection (see the DNS sketch after this list)
* enforcement of a specific IP for testing, via the hosts file, by CDN and networking service providers
* software that acquires the destination IP of the service by other means (DNS over HTTPS and others), e.g. AV, software updates, and others
* malware/spyware that forces a specific DNS service and also installs a root CA on the device (Squid's CA bundle(s) block these easily)
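
For the first scenario, a commonly suggested mitigation is to point Squid at the same resolver the clients use, so both sides see the same answers; a small sketch with example addresses only:

  # Resolve names through the same DNS servers the VPN clients use
  dns_nameservers 10.200.0.1 10.200.0.2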

Related Bugzilla bugs I have found using other keywords:
* https://bugs.squid-cache.org/show_bug.cgi?id=4940
* https://bugs.squid-cache.org/show_bug.cgi?id=4514

I do not have any sponsorship for this patch, and if someone is willing to pay for the work as it is, I would be happy to accept a donation for my time on it.

Whether you have noticed or not, it also removes the flood of
"Host header forgery detected on" and a couple of other log messages.

I hope it will help to move things a couple of steps forward.

One of the things that pushed me to write this patch is the fact that Squid is very good software, despite being an old beast;
I have tried commercial products and have been really disappointed more than once.

I do believe that the right solution is not an ON/OFF switch but rather something that can be matched with http_access and other matchers/ACLs.
In my use case an ON/OFF switch is the right choice, but if the admin were able to configure this it would be very easy to enforce a policy.
Currently, the investments in the security industry are at the TLS level rather than in the policy itself (feel free to correct me if I'm wrong).
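
Purely as a hypothetical illustration of that ACL-driven idea (the host_verify_relaxed directive below is imagined and does not exist in any Squid release; only the acl lines use real syntax):

  acl vpn_clients src 10.200.0.0/16
  acl pinned_sites ssl::server_name .example-cdn.net
  # imagined ACL-driven control, commented out because it is not a real directive:
  # host_verify_relaxed allow vpn_clients pinned_sites
  # host_verify_relaxed deny all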

I will add this patch to the next RPMs release, which has just finished building, so those who have been having these issues will be able to use Squid in production.

Thanks again,
Eliezer

* Currently I am building RPMs for: CentOS 7-8, Oracle Linux 7-8, Amazon Linux 2.
* peek at: Slamming-Your-Head-Into-Keyboard-HOWTO: Packaging Applications - Jared Morrow    https://vimeo.com/70019064

----
Eliezer Croitoru
Tech Support
Mobile: +972-5-28704261
Email: ngtech1ltd at gmail.com

-----Original Message-----
From: Alex Rousskov <rousskov at measurement-factory.com> 
Sent: Monday, January 24, 2022 21:54
To: Eliezer Croitoru <ngtech1ltd at gmail.com>; squid-users at lists.squid-cache.org
Cc: 'Amos Jeffries' <squid3 at treenet.co.nz>
Subject: Re: [squid-users] 4.17 and 5.3 SSL BUMP issue: SSL_ERROR_RX_RECORD_TOO_LONG

On 1/24/22 1:06 PM, Eliezer Croitoru wrote:
> I sat for a while thinking what is the best approach to the subject and the
> next patch seems to be reasonable enough to me:
> https://gist.github.com/elico/630fa57d161b0c0b59ef68786d801589

> Let me know if this patch violates anything that I might not took into
> account.

The squid-users mailing list is not a good place for code reviews. If
you think your changes should be made official, please submit a pull
request on GitHub: https://wiki.squid-cache.org/MergeProcedure

FWIW, I wonder whether we should reuse and/or extend host_verify_strict
instead of adding a new squid.conf directive to control this behavior.
All other factors being equal, it would be good to have one directive to
control Host validation and its direct effects.


> * Tested to work in my specific scenario which I really don't care about
> caching when I'm in a DOS situation.

When one disables checks, Squid will continue to "work", of course. Did
you verify that the patched Squid:

1. Goes to the intended destination IP address rather than to Host?
2. Does not evict the matching cached responses from the cache?
3. Does not satisfy the forged request from the cache?
4. Does not share responses to requests with the "forged" Host?

There may be other prerequisites, and the above four may need polishing,
but these are the first conditions that come to my mind when dealing
with forgery attacks. Please disclose this information when/if posting
your changes for the Project review on GitHub.


Thank you,

Alex.

> ----
> Eliezer Croitoru
> Tech Support
> Mobile: +972-5-28704261
> Email: ngtech1ltd at gmail.com
> 
> -----Original Message-----
> From: squid-users <squid-users-bounces at lists.squid-cache.org> On Behalf Of
> Alex Rousskov
> Sent: Monday, January 24, 2022 16:54
> To: squid-users at lists.squid-cache.org
> Subject: Re: [squid-users] 4.17 and 5.3 SSL BUMP issue:
> SSL_ERROR_RX_RECORD_TOO_LONG
> 
> On 1/24/22 2:42 AM, Eliezer Croitoru wrote:
>> 2022/01/24 09:11:20 kid1| SECURITY ALERT: Host header forgery detected on
>> local=142.250.179.228:443 remote=10.200.191.171:51831 FD 16 flags=33 (local
>> IP does not match any domain IP)
> 
> As you know, Squid improvements related to these messages have been
> discussed many times. I bet the ideas summarized in the following old
> email remain valid today:
> 
> http://lists.squid-cache.org/pipermail/squid-users/2019-July/020764.html
> 
> 
> If you would like to address browser's SSL_ERROR_RX_RECORD_TOO_LONG
> specifically (the error in your email Subject line), then that is a
> somewhat different matter: According to your packet capture, Squid sends
> a plain text HTTP 409 response to a TLS client. That is not going to
> work with popular browsers (for various technical and policy reasons).
> 
> Depending on the SslBump stage where the Host header forgery was
> detected, Squid could bump the client connection to deliver that error
> response; in that case, the browser may still refuse to show the
> response to the user because the browser will not trust the certificate
> that Squid would have to fake without sufficient origin server info.
> However, the browser error will be different and arguably less confusing
> to admins and even users.
> 
> https://wiki.squid-cache.org/SquidFaq/AboutSquid#How_to_add_a_new_Squid_feature.2C_enhance.2C_of_fix_something.3F
> 
> 
> HTH,
> 
> Alex.
> 
> _______________________________________________
> squid-users mailing list
> squid-users at lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users
> 


