From spamfree at wansecurity.com Fri Oct 15 03:04:42 2021 From: spamfree at wansecurity.com (Robert Smith) Date: Thu, 14 Oct 2021 22:04:42 -0500 Subject: [squid-dev] squid-5.0.5-20210223-r4af19cc24 difference in behaviors between openbsd and linux In-Reply-To: References: <93D5BC23-84A8-465F-BB4D-0CEC0529F188@wansecurity.com> <000001d723f6$299226d0$7cb67470$@gmail.com> Message-ID: <4110E90C-08E1-4878-B510-E69D9449F42B@wansecurity.com> I apologize for the ultra-long delay on this. I did just test this tonight and it worked properly under OpenBSD. What would be the process for submitting a bug report? -Robert > On Mar 29, 2021, at 4:33 AM, Amos Jeffries wrote: > > On 29/03/21 6:16 am, Eliezer Croitoru wrote: >> Hey Robert, >> I am not sure I understood what is the meaning of the description: >> openbsd: Requiring client certificates. >> linux: Not requiring any client certificates > > @Eliezer: > They are startup messages Squid prints in cache.log when a TLS server context is initialized. > > > >> -----Original Message----- >> From: Robert Smith >> Sent: Sunday, March 28, 2021 7:27 PM >> Dear Squid-Dev list: >> I could use some help on this one: >> I have a build environment that is identical on linux, openbsd, and macosx >> In this scenario, I am developing under: >> Ubuntu 18.04 - All patches and updates applied as of 3/24 >> OpenBSD 6.8 - All patches and updates applied as of 3/24 >> I will note that I am really only using the libc from each system whereas every other component dependencies (which are not many! Good job squid team!) are a part of my build system. >> When building squid with the exact same tool chain and library stack, with the same configure options, I am seeing a difference in behavior on the two platforms: >> The difference is that after parsing the configuration file, the two systems differ in whether or not they will require client certificates: >> openbsd: Requiring client certificates. 
>> linux: Not requiring any client certificates >> > > What the message means depends on whether the http(s)_port, a cache_peer, or the outgoing https:// context is being initialized. Options that directive was supposed to be using (including the default security). > > Looking at your logs I see: > > > On OpenBSD Squid detects the presence of an IPv6 split-stack for networking. Which means Squid has to clone the internal representation of all your squid.conf *_port settings and setup separate contexts and state for IPv4 versions of them. > There seems to be a bug in that cloning process which is turning on the TLS client certificates feature. Please report this to our bugzilla so it does not get forgotten until fixed. > > > On Linux Squid is detecting IPv6 disabled in the kernel networking setup. So it is disabling its own IPv6 support. That said Linux has a hybrid-stack networking so the cloning would not happen anyway. If IPv6 were enabled here it would be somewhat more obvious that the IPv4 ports on OpenBSD are the odd ones. > > > For a workaround you may be able to set sslflags=DELAYED_AUTH on the http*_port lines and leave your ACLs as they are without anything requiring a client certificate. > > > >> # openbsd >> root at openbsd:~# /root/squid.init conftest > >> 2021/03/28 10:47:31| Processing: http_port 3128 ssl-bump cert=/opt/osec/etc/ssl_cert/squid-ca-cert+key.pem generate-host-certificates=on dynamic_cert_mem_cache_size=16MB >> 2021/03/28 10:47:31| Processing: https_port 3129 intercept ssl-bump cert=/opt/osec/etc/ssl_cert/squid-ca-cert+key.pem generate-host-certificates=on dynamic_cert_mem_cache_size=16MB > >> 2021/03/28 10:47:31| Processing: tls_outgoing_options cafile=/opt/osec/etc/pki/tls/certs/ca-bundle.crt > > >> 2021/03/28 10:47:31| Initializing https:// proxy context >> 2021/03/28 10:47:31| Requiring client certificates. 
> > >> 2021/03/28 10:47:31| Initializing http_port [::]:3128 TLS contexts >> 2021/03/28 10:47:31| Using certificate in /opt/osec/etc/ssl_cert/squid-ca-cert+key.pem >> 2021/03/28 10:47:31| Using certificate chain in /opt/osec/etc/ssl_cert/squid-ca-cert+key.pem >> 2021/03/28 10:47:31| Adding issuer CA: /C=US/ST=Kansas/L=Overland Park/O=Company, Inc./OU=Area 77/CN=local.corp.dom/emailAddress=ssladmin at Company.com >> 2021/03/28 10:47:31| Using key in /opt/osec/etc/ssl_cert/squid-ca-cert+key.pem >> 2021/03/28 10:47:31| Not requiring any client certificates > > >> 2021/03/28 10:47:31| Initializing http_port 0.0.0.0:3128 TLS contexts >> 2021/03/28 10:47:31| Using certificate in /opt/osec/etc/ssl_cert/squid-ca-cert+key.pem >> 2021/03/28 10:47:31| Using certificate chain in /opt/osec/etc/ssl_cert/squid-ca-cert+key.pem >> 2021/03/28 10:47:31| Adding issuer CA: /C=US/ST=Kansas/L=Overland Park/O=Company, Inc./OU=Area 77/CN=local.corp.dom/emailAddress=ssladmin at Company.com >> 2021/03/28 10:47:31| Using key in /opt/osec/etc/ssl_cert/squid-ca-cert+key.pem >> 2021/03/28 10:47:31| Requiring client certificates. 
> > >> 2021/03/28 10:47:31| Initializing https_port [::]:3129 TLS contexts >> 2021/03/28 10:47:31| Using certificate in /opt/osec/etc/ssl_cert/squid-ca-cert+key.pem >> 2021/03/28 10:47:31| Using certificate chain in /opt/osec/etc/ssl_cert/squid-ca-cert+key.pem >> 2021/03/28 10:47:31| Adding issuer CA: /C=US/ST=Kansas/L=Overland Park/O=Company, Inc./OU=Area 77/CN=local.corp.dom/emailAddress=ssladmin at Company.com >> 2021/03/28 10:47:31| Using key in /opt/osec/etc/ssl_cert/squid-ca-cert+key.pem >> 2021/03/28 10:47:31| Not requiring any client certificates > > >> 2021/03/28 10:47:31| Initializing https_port 0.0.0.0:3129 TLS contexts >> 2021/03/28 10:47:31| Using certificate in /opt/osec/etc/ssl_cert/squid-ca-cert+key.pem >> 2021/03/28 10:47:31| Using certificate chain in /opt/osec/etc/ssl_cert/squid-ca-cert+key.pem >> 2021/03/28 10:47:31| Adding issuer CA: /C=US/ST=Kansas/L=Overland Park/O=Company, Inc./OU=Area 77/CN=local.corp.dom/emailAddress=ssladmin at Company.com >> 2021/03/28 10:47:31| Using key in /opt/osec/etc/ssl_cert/squid-ca-cert+key.pem >> 2021/03/28 10:47:31| Requiring client certificates. >> # linux >> root at linux:~# /root/squid.init conftest > >> 2021/03/28 10:48:21| WARNING: BCP 177 violation. Detected non-functional IPv6 loopback. >> 2021/03/28 10:48:21| aclIpParseIpData: IPv6 has not been enabled. >> 2021/03/28 10:48:21| aclIpParseIpData: IPv6 has not been enabled. >> 2021/03/28 10:48:21| Initializing https:// proxy context >> 2021/03/28 10:48:21| Requiring client certificates. 
> >> 2021/03/28 10:48:21| Initializing http_port 0.0.0.0:3128 TLS contexts >> 2021/03/28 10:48:21| Using certificate in /opt/osec/etc/ssl_cert/squid-ca-cert+key.pem >> 2021/03/28 10:48:21| Using certificate chain in /opt/osec/etc/ssl_cert/squid-ca-cert+key.pem >> 2021/03/28 10:48:21| Adding issuer CA: /C=US/ST=Kansas/L=Overland Park/O=Company, Inc./OU=Area 77/CN=local.corp.dom/emailAddress=ssladmin at Company.com >> 2021/03/28 10:48:21| Using key in /opt/osec/etc/ssl_cert/squid-ca-cert+key.pem >> 2021/03/28 10:48:21| Not requiring any client certificates > >> 2021/03/28 10:48:21| Initializing https_port 0.0.0.0:3129 TLS contexts >> 2021/03/28 10:48:21| Using certificate in /opt/osec/etc/ssl_cert/squid-ca-cert+key.pem >> 2021/03/28 10:48:21| Using certificate chain in /opt/osec/etc/ssl_cert/squid-ca-cert+key.pem >> 2021/03/28 10:48:21| Adding issuer CA: /C=US/ST=Kansas/L=Overland Park/O=Company, Inc./OU=Area 77/CN=local.corp.dom/emailAddress=ssladmin at Company.com >> 2021/03/28 10:48:21| Using key in /opt/osec/etc/ssl_cert/squid-ca-cert+key.pem >> 2021/03/28 10:48:21| Not requiring any client certificates > > > > Amos > > _______________________________________________ > squid-dev mailing list > squid-dev at lists.squid-cache.org > http://lists.squid-cache.org/listinfo/squid-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rousskov at measurement-factory.com Wed Oct 20 15:22:53 2021 From: rousskov at measurement-factory.com (Alex Rousskov) Date: Wed, 20 Oct 2021 11:22:53 -0400 Subject: [squid-dev] RFC: Categorize level-0/1 messages Message-ID: <200b6139-1941-cd30-7a49-b5262823094e@measurement-factory.com> Hello, Nobody likes to be awaken at night by an urgent call from NOC about some boring Squid cache.log message the NOC folks have not seen before (or miss a critical message that was ignored by the monitoring system). 
To facilitate automatic monitoring of Squid cache.logs, I suggest adjusting Squid code to divide all level-0/1 messages into two major categories -- "problem messages" and "status messages"[0]:

* Problem messages are defined as those that start with one of the three well-known prefixes: FATAL:, ERROR:, and WARNING:. These are the messages that most admins may want to be notified about (by default[1]), and these standardized prefixes make setting up reliable automated notifications straightforward.

* Status messages are all other messages. Most admins do not want to be notified about normal Squid state changes and progress reports (by default[2]). These status messages are still valuable in triage, so they are _not_ going away[3].

Today, Squid does not support the problem/status message classification well. To reach the above state, we will need to adjust many messages so that they fall into the right category. However, my analysis of the existing level-0/1 messages shows that it is doable to correctly classify most of them without a lot of tedious work (all numbers and prefix strings below are approximate and abridged for clarity of presentation):

* About 40% of messages (~700) already have "obvious" prefixes: BUG:, BUG [0-9]*:, ERROR:, WARNING:, and FATAL:. We will just need to adjust ~20 existing BUG messages to move them into one of the three proposed major categories (while still being clearly identified as Squid bugs, of course).

* About 15% of messages (~300) can be easily found and adjusted using their prefixes (after excluding the "obvious" messages above).
Here is a representative sample of those prefixes: SECURITY NOTICE, Whoops!, XXX:, UPGRADE:, CRITICAL, Error, ERROR, Error, error, ALERT, NOTICE, WARNING!, WARNING OVERRIDE, Warning:, Bug, Failed, Stop, Startup:, Shutdown:, FATAL Shutdown, avoiding, suspending, DNS error, bug, cannot, can't, could not, couldn't, bad, unable, malformed, unsupported, not found, missing, broken, unexpected, invalid, corrupt, obsolete, unrecognised, and unknown. Again, there is valuable information in many of these existing prefixes, and all valuable information will be preserved (after the standardized prefix). Some of these messages may be demoted to debugging level 2. * The remaining 45% of messages (~800) may remain as is during the initial conversion. Many of them are genuine status/progress messages with prefixes like these: Creating, Processing, Adding, Accepting, Configuring, Sending, Making, Rebuilding, Skipping, Beginning, Starting, Initializing, Installing, Indexing, Loading, Preparing, Killing, Stopping, Completed, Indexing, Loading, Killing, Stopping, Finished, Removing, Closing, Shutting. There are also "squid -k parse" messages that are easy to find automatically if somebody wants to classify them properly. Most other messages can be adjusted as/if they get modified or if we discover that they are frequent/important enough to warrant a dedicated adjustment. If there are no objections or better ideas, Factory will work on a few PRs that adjust the existing level-0/1 messages according to the above classification, in the rough order of existing message categories/kinds discussed in the three bullets above. Thank you, Alex. [0] The spelling of these two category names is unimportant. If you can suggest better category names, great, but let's focus on the category definitions. [1] No default will satisfy everybody, and we already have the cache_log_message directive that can control the visibility and volume of individual messages. 
However, manually setting those parameters for every level-0/1 message is impractical -- we have more than 1600 such messages! This RFC makes a reasonable _default_ treatment possible.

[2] Admins can, of course, configure their log monitoring scripts to alert them of certain status messages if they consider those messages important. Again, this RFC is about facilitating reasonable _default_ treatment.

[3] We could give status messages a unique prefix as well (e.g., INFO:) but such a prefix is not necessary to easily distinguish them _and_ adding a prefix would create a lot more painful code changes, so I think we should stay away from that idea.

From squid3 at treenet.co.nz Wed Oct 20 19:14:46 2021 From: squid3 at treenet.co.nz (Amos Jeffries) Date: Thu, 21 Oct 2021 08:14:46 +1300 Subject: [squid-dev] RFC: Categorize level-0/1 messages In-Reply-To: <200b6139-1941-cd30-7a49-b5262823094e@measurement-factory.com> References: <200b6139-1941-cd30-7a49-b5262823094e@measurement-factory.com> Message-ID: <9497b99d-8a2e-3941-0c99-1606270bd2bc@treenet.co.nz>

On 21/10/21 4:22 am, Alex Rousskov wrote: > Hello, > > Nobody likes to be awaken at night by an urgent call from NOC about > some boring Squid cache.log message the NOC folks have not seen before > (or miss a critical message that was ignored by the monitoring system). > To facilitate automatic monitoring of Squid cache.logs, I suggest to > adjust Squid code to divide all level-0/1 messages into two major > categories -- "problem messages" and "status messages"[0]: >

We already have a published categorization design which (when/if used) solves the problem(s) you are describing. Unfortunately that design has not been followed by all authors and conversion of old code to it has not been done. Please focus your project on making Squid actually use the system of debugs line labels. The labels are documented at: https://wiki.squid-cache.org/SquidFaq/SquidLogs#Squid_Error_Messages

What we do not have in that design is clarity on which labels are shown at what level.
IMO they should be:

* DBG_CRITICAL(0) - admins *need* to know this even if they do not think they want to.
  - FATAL
  - SECURITY ALERT
  - ERROR which were mislabeled and should be FATAL

* DBG_IMPORTANT(1) - some admins want to know these, not mandatory though.
  - ERROR
  - SECURITY ERROR
  - SECURITY WARNING

* level-2 - status, troubleshooting etc.
  - WARNING admin cannot do anything about
  - SECURITY NOTICE (these are for troubleshooting advice)

* level-3+ - other

> There are also "squid -k parse" messages > that are easy to find automatically if somebody wants to classify them > properly.

Those are level 1-2 messages that become mandatory to display on startup/reconfigure.

I have one worry about you taking this on right now. PR 574 has not been resolved and merged yet, but many of the debugs() messages you are going to be touching in here should be converted to thrown exceptions - which ones and what exception type is used has some dependency on how that PR turns out.

Amos

From rousskov at measurement-factory.com Thu Oct 21 03:16:12 2021 From: rousskov at measurement-factory.com (Alex Rousskov) Date: Wed, 20 Oct 2021 23:16:12 -0400 Subject: [squid-dev] RFC: Categorize level-0/1 messages In-Reply-To: <9497b99d-8a2e-3941-0c99-1606270bd2bc@treenet.co.nz> References: <200b6139-1941-cd30-7a49-b5262823094e@measurement-factory.com> <9497b99d-8a2e-3941-0c99-1606270bd2bc@treenet.co.nz> Message-ID:

On 10/20/21 3:14 PM, Amos Jeffries wrote: > On 21/10/21 4:22 am, Alex Rousskov wrote: >> To facilitate automatic monitoring of Squid cache.logs, I suggest to >> adjust Squid code to divide all level-0/1 messages into two major >> categories -- "problem messages" and "status messages"[0]: > We already have a published categorization design which (when/if used) > solves the problem(s) you are describing. Unfortunately that design has > not been followed by all authors and conversion of old code to it has > not been done.
> Please focus your project on making Squid actually use the system of > debugs line labels. The labels are documented at: > https://wiki.squid-cache.org/SquidFaq/SquidLogs#Squid_Error_Messages

AFAICT, the partial classification in that wiki table is an opinion on how things could be designed, and that opinion does not reflect Project consensus. FWIW, I cannot use that wiki table for labeling messages, but I do not want to hijack this RFC thread for that table review. Fortunately, there are many similarities between the wiki table and this RFC that we can and should capitalize on instead:

* While the wiki table is silent about the majority of existing cache.log messages, most of the messages it is silent about probably belong to the "status messages" category proposed by this RFC. This assumption gives a usable match between the wiki table and the RFC for about half of the existing level-0/1 cache.log messages. Great!

* The wiki table talks about FATAL, ERROR, and WARNING messages. These labels match the RFC "problem messages" category. This match covers all of the remaining cache.log messages except for 10 debugs() detailed below. Thus, so far, there is a usable match on nearly all current level-0/1 messages. Excellent!

* The wiki table also uses three "SECURITY ..." labels. The RFC does not recognize those labels as special. I find their definitions in the wiki table unusable/impractical, and you naturally think otherwise, but the situation is not as bad as it may seem at first glance:

- "SECURITY ERROR" is used once to report a coding _bug_. That single use case does not match the wiki table SECURITY ERROR description. We should be able to rephrase that single message so that it does not contradict the wiki table and the RFC.

- "SECURITY ALERT" is used 6 times. Most or all of those cases are a poor match for the SECURITY ALERT description in the wiki table IMHO. I hope we can find a way to rephrase those 6 cases to avoid conflicts.
- "SECURITY NOTICE" is used 3 times. Two of those use cases can be simply removed by removing the long-deprecated and increasingly poorly supported SslBump features. I do not see why we should keep the third message/feature, but if it must be kept, we may be able to rephrase it.

If we cannot reach an agreement regarding these 10 special messages, we can leave them as is for now, and come back to them when we find a way to agree on how/whether to assign additional labels to some messages.

Thus, there are no significant conflicts between the RFC and the table! We strongly disagree about how labels should be defined, but I do not think we have to agree on those details to make progress here. We only need to agree that (those 10 SECURITY messages aside) the RFC-driven message categorization projects should adjust (the easily adjustable) messages about Squid problems to use three standard labels: FATAL, ERROR, and WARNING. Can we do just that and set aside the other disagreements for another time?

If there are serious disagreements about whether a specific debugs() is an ERROR or a WARNING, we can leave those specific messages intact until we find a way to reach consensus. I hope there will be very few such messages if we use the three labels from the RFC and do our best to avoid controversial changes.

> What we do not have in that design is clarity on which labels are shown > at what level.

In the hope of making progress, I strongly suggest we _ignore_ the difference between level 0 and level 1 for now. We are just too far apart on that topic to reach consensus AFAICT. The vast majority of messages that RFC-driven projects should touch (and, if really needed, _all_ such messages!) can be left at their current level, avoiding this problem.

> I have one worry about you taking this on right now.
> PR 574 has not been resolved and merged yet, but many of the debugs() messages you are going > to be touching in here should be converted to thrown exceptions - which > ones and what exception type is used has some dependency on how that PR > turns out.

If the RFC PRs are merged first, Factory will help resolve conflicts in the PR 574 branch. While resolving boring conflicts is certainly annoying, this is not really a big deal in this case, and both projects are worth the pain.

Alternatively, I can offer to massage the PR 574 branch into merge-able shape _before_ we start working on these PRs. While the current branch code has serious problems, I believe they have known straightforward solutions that will allow us to make progress in a much more efficient way.

Cheers, Alex.

From kk at sudo-i.net Tue Oct 26 21:46:55 2021 From: kk at sudo-i.net (kk at sudo-i.net) Date: Tue, 26 Oct 2021 23:46:55 +0200 Subject: [squid-dev] request for change handling hostStrictVerify Message-ID: <53-61787780-1-3efbb8c0@263885290>

Hi Guys!

Sorry, I was unsure if this was the correct point of contact in regards to hostStrictVerify. I think I am not the only one having issues with hostStrictVerify in scenarios where you just intercept TLS traffic and Squid checks whether the IP address the client connected to matches the address Squid resolves for the SNI. The major issue in that approach is that many services today change their DNS records at a very high frequency, so it is almost impossible to make sure that the client and Squid have the same A record cached.

My proposal to resolve this issue would be the following:

- Squid enforces the Client to use SNI! (currently, this is not done and can be considered as a security issue, because you can bypass any hostname rules)
- Squid looks up the IP for the SNI (DNS resolution).
- Squid forces the client to go to the resolved IP (thus ignoring the IP provided in the client's L3 information).

Any thoughts?

many thanks & have a nice day, Kevin

--
Kevin Klopfenstein sudo-i.net -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/x-pkcs7-signature Size: 5102 bytes Desc: not available URL: From steve at opendium.com Thu Oct 28 13:24:04 2021 From: steve at opendium.com (Steve Hill) Date: Thu, 28 Oct 2021 14:24:04 +0100 Subject: [squid-dev] Alternate origin server selection Message-ID: <38d21b62-8356-94db-8085-7dfabb45a234@opendium.com> Various online services provide "virtual IPs" to change the way those services behave. An example of this is enforcing Safe Search on Google Search: https://support.google.com/websearch/answer/186669?hl=en Google recommend setting the network's DNS server to override the normal "www.google.com" domain with a replacement RR: www.google.com. CNAME forcesafesearch.google.com. This causes clients making requests to www.google.com to connect to a specific IP address and Google will enforce Safe Search for those clients. However, DNS changes generally affect the entire network and there is a requirement to apply this setting to only specific users / machines. Overriding DNS also relies on the clients using the correct DNS server and not having already cached the record from elsewhere. It seems a good place to do this is in the proxy. For non-transparently proxied traffic, the client makes a "CONNECT www.google.com" request, and the proxy could rewrite this to "CONNECT forcesafesearch.google.com" so that the connection goes to the virtual IP. For transparently proxied traffic, the client makes a connection to www.google.com's IP address, which Squid intercepts. Squid must then SSL-peek the request to figure out that it is connecting to www.google.com. The onward connection can then be redirected to the virtual IP. 
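The CONNECT/SNI-to-virtual-host rewrite described above could be driven by an external ACL helper that annotates the transaction with the chosen origin. To be clear, stock Squid does not act on such an annotation -- the `alt-host` note name, the `alt_host.py` script name, and the host mapping below are all hypothetical, illustrating the patched-Squid approach discussed in this thread -- but the stdin/stdout exchange shown (one lookup per line, `OK`/`ERR` replies carrying key=value notes) is the normal Squid external ACL helper protocol:

```python
#!/usr/bin/env python3
"""Sketch of an external ACL helper suggesting an alternate origin server.

Hypothetical: stock Squid ignores an 'alt-host' note; this assumes a
Squid patched to honor such an annotation when connecting upstream.
"""
import sys

# Illustrative mapping of origins to their "virtual IP" counterparts.
ALT_HOSTS = {
    "www.google.com": "forcesafesearch.google.com",
    "www.youtube.com": "restrict.youtube.com",
}

def decide(line: str) -> str:
    """Map one helper request line (e.g. the client's SNI) to a reply."""
    host = line.strip().lower()
    alt = ALT_HOSTS.get(host)
    return f"OK alt-host={alt}" if alt else "ERR"

if __name__ == "__main__":
    for request in sys.stdin:          # Squid sends one lookup per line
        print(decide(request), flush=True)  # replies must not be buffered
```

A matching squid.conf hookup might look like `external_acl_type althost children-max=2 %ssl::>sni /usr/local/bin/alt_host.py` (again assuming a Squid that consumes the note), with ordinary ACLs restricting which users or machines the mapping applies to.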
There is code to do this: https://github.com/squid-cache/squid/pull/924 This allows an external ACL to record an alt-host note, or an ICAP server to return an X-Alt-Host header, specifying a new origin server to connect to. The pull request was rejected, as it adds CVE-2009-0801 vulnerabilities. I'm hoping for some guidance on the best way to achieve this. Many thanks. -- - Steve Hill Technical Director | Cyfarwyddwr Technegol Opendium Online Safety & Web Filtering http://www.opendium.com Diogelwch Ar-Lein a Hidlo Gwefan Enquiries | Ymholiadau: sales at opendium.com +44-1792-824568 Support | Cefnogi: support at opendium.com +44-1792-825748 ------------------------------------------------------------------------ Opendium Limited is a company registered in England and Wales. Mae Opendium Limited yn gwmni sydd wedi'i gofrestru yn Lloegr a Chymru. Company No. | Rhif Cwmni: 5465437 Highfield House, 1 Brue Close, Bruton, Somerset, BA10 0HY, England. -------------- next part -------------- A non-text attachment was scrubbed... Name: steve.vcf Type: text/x-vcard Size: 259 bytes Desc: not available URL: From rousskov at measurement-factory.com Thu Oct 28 15:41:23 2021 From: rousskov at measurement-factory.com (Alex Rousskov) Date: Thu, 28 Oct 2021 11:41:23 -0400 Subject: [squid-dev] Alternate origin server selection In-Reply-To: <38d21b62-8356-94db-8085-7dfabb45a234@opendium.com> References: <38d21b62-8356-94db-8085-7dfabb45a234@opendium.com> Message-ID: On 10/28/21 9:24 AM, Steve Hill wrote: > For transparently proxied traffic, the client makes a connection to > www.google.com's IP address, which Squid intercepts.? Squid must then > SSL-peek the request to figure out that it is connecting to > www.google.com.? The onward connection can then be redirected to the > virtual IP. IIRC, Google has recommended (to my surprise) something like that as well, for environments where DNS modifications are inappropriate and bumping is possible. 
I cannot find that recommendation now, unfortunately. > There is code to do this: > ? https://github.com/squid-cache/squid/pull/924 > This allows an external ACL to record an alt-host note, or an ICAP > server to return an X-Alt-Host header, specifying a new origin server to > connect to. > > The pull request was rejected, as it adds CVE-2009-0801 vulnerabilities. > > I'm hoping for some guidance on the best way to achieve this. While I disagree with some of the assertions made in that PR review and the unilateral approach to closing that PR, I hope we can find a positive way forward here. Your use case deserves Squid support IMO. AFAICT, the primary obstacle here is that Squid pins the connection while obtaining the origin server certificate. Please confirm that without that origin server certificate you cannot make the decision whether to redirect the HTTP request. In other words, the client-intended destination IP address and the client-supplied SNI are not sufficient to correctly determine whether the connection must be bumped (in relevant cases). Also, if you need the server certificate indeed, please confirm that when the origin server uses TLS v1.3, the proposed scheme will have to fully bump the Squid-server connection before redirecting the request. Just staring at the TLS ServerHello will not be enough because the origin certificate comes later, in the encrypted records. I hope to suggest the right solution based on your feedback. Thank you, Alex. From steve at opendium.com Thu Oct 28 16:17:04 2021 From: steve at opendium.com (Steve Hill) Date: Thu, 28 Oct 2021 17:17:04 +0100 Subject: [squid-dev] request for change handling hostStrictVerify In-Reply-To: <53-61787780-1-3efbb8c0@263885290> References: <53-61787780-1-3efbb8c0@263885290> Message-ID: On 26/10/2021 22:46, kk at sudo-i.net wrote: > - Squid enforces the Client to use SNI! 
(currently, this is not done and > can be considered as a security issue, because you can bypass any > hostname rules) I don't think you can get away with requiring SNI everywhere. There is still software in the wild which doesn't present an SNI. Worse: there is software in the wild that presents an SNI that doesn't have a matching DNS record! (I'm looking at you Apple). However, you can probably change to an improved behaviour if there is an SNI which resolves and matches the URI and Host: header, whilst still supporting broken clients. -- - Steve Hill Technical Director | Cyfarwyddwr Technegol Opendium Online Safety & Web Filtering http://www.opendium.com Diogelwch Ar-Lein a Hidlo Gwefan Enquiries | Ymholiadau: sales at opendium.com +44-1792-824568 Support | Cefnogi: support at opendium.com +44-1792-825748 ------------------------------------------------------------------------ Opendium Limited is a company registered in England and Wales. Mae Opendium Limited yn gwmni sydd wedi'i gofrestru yn Lloegr a Chymru. Company No. | Rhif Cwmni: 5465437 Highfield House, 1 Brue Close, Bruton, Somerset, BA10 0HY, England. -------------- next part -------------- A non-text attachment was scrubbed... Name: steve.vcf Type: text/x-vcard Size: 259 bytes Desc: not available URL: From steve at opendium.com Thu Oct 28 16:39:44 2021 From: steve at opendium.com (Steve Hill) Date: Thu, 28 Oct 2021 17:39:44 +0100 Subject: [squid-dev] Alternate origin server selection In-Reply-To: References: <38d21b62-8356-94db-8085-7dfabb45a234@opendium.com> Message-ID: On 28/10/2021 16:41, Alex Rousskov wrote: > IIRC, Google has recommended (to my surprise) something like that as > well, for environments where DNS modifications are inappropriate and > bumping is possible. I cannot find that recommendation now, unfortunately. Google are very hostile to bumping. But then there's this that explicitly recommends bumping... 
https://support.google.com/a/answer/1668854?hl=en#zippy=%2Cstep-choose-a-web-proxy-server Nevertheless, if you don't want to bump (which obviously has significant privacy implications and involves installing certificates on every device), the virtual IP method is your only option. Similarly, you can't enforce YouTube Restricted Mode in the YouTube Android app without using the virtual IP method. > AFAICT, the primary obstacle here is that Squid pins the connection > while obtaining the origin server certificate. Well, I can't see why Squid needs the origin certificate - it should be able to make a decision off the SNI before connecting to the origin server. I didn't seem to be able to make the decision prior to the connection being pinned though. I'm not sure why - I will go back and investigate further. Thank you. -- - Steve Hill Technical Director | Cyfarwyddwr Technegol Opendium Online Safety & Web Filtering http://www.opendium.com Diogelwch Ar-Lein a Hidlo Gwefan Enquiries | Ymholiadau: sales at opendium.com +44-1792-824568 Support | Cefnogi: support at opendium.com +44-1792-825748 ------------------------------------------------------------------------ Opendium Limited is a company registered in England and Wales. Mae Opendium Limited yn gwmni sydd wedi'i gofrestru yn Lloegr a Chymru. Company No. | Rhif Cwmni: 5465437 Highfield House, 1 Brue Close, Bruton, Somerset, BA10 0HY, England. -------------- next part -------------- A non-text attachment was scrubbed... 
Name: steve.vcf Type: text/x-vcard Size: 259 bytes Desc: not available URL: From rousskov at measurement-factory.com Thu Oct 28 17:16:37 2021 From: rousskov at measurement-factory.com (Alex Rousskov) Date: Thu, 28 Oct 2021 13:16:37 -0400 Subject: [squid-dev] Alternate origin server selection In-Reply-To: References: <38d21b62-8356-94db-8085-7dfabb45a234@opendium.com> Message-ID: On 10/28/21 12:39 PM, Steve Hill wrote: > On 28/10/2021 16:41, Alex Rousskov wrote: >> AFAICT, the primary obstacle here is that Squid pins the connection >> while obtaining the origin server certificate. > Well, I can't see why Squid needs the origin certificate - it should be > able to make a decision off the SNI before connecting to the origin server. Squid does not "need" any of this, of course. Configuration and/or bugs force Squid to do what it does. If your decision-making process does not involve the certificate, then you should be able to rewrite the fake CONNECT request during SslBump step2, without (or before) telling Squid to stare at the certificate (and pin the resulting connection). There are bugs in this area, including bugs that may prevent certain CONNECT adaptations from happening. We are fixing one of those bugs right now. For details, you can see an unpolished/unofficial pull request at https://github.com/measurement-factory/squid/pull/108 > I didn't seem to be able to make the decision prior to the connection > being pinned though.? I'm not sure why - I will go back and investigate > further. Sounds like a plan! Cheers, Alex. From steve at opendium.com Fri Oct 29 13:57:23 2021 From: steve at opendium.com (Steve Hill) Date: Fri, 29 Oct 2021 14:57:23 +0100 Subject: [squid-dev] Alternate origin server selection In-Reply-To: References: <38d21b62-8356-94db-8085-7dfabb45a234@opendium.com> Message-ID: <95e22445-ed5a-8bb8-e6a0-5ea0b0a67635@opendium.com> On 28/10/2021 18:16, Alex Rousskov wrote: > Squid does not "need" any of this, of course. 
> Configuration and/or bugs force Squid to do what it does. If your
> decision-making process does not involve the certificate, then you
> should be able to rewrite the fake CONNECT request during SslBump
> step2, without (or before) telling Squid to stare at the certificate
> (and pin the resulting connection).

Ok, I've gone back and looked over my old debug logs. It appears what was actually happening was:

- Client sends "CONNECT www.google.com:443".
- Connection with TLS made to forcesafesearch.google.com.
- Client sends "GET / HTTP/1.1\r\nHost: www.google.com"
- Squid runs the peer selector to find peers for www.google.com (i.e. the host contained in the GET request).
- It finds the appropriate pinned connection:
    client_side.cc(3872) borrowPinnedConnection: conn28 local=81.187.83.66:52488 remote=216.239.38.120:443 HIER_DIRECT FD 18 flags=1
- Squid then logs:
    FwdState.cc(472) fail: ERR_ZERO_SIZE_OBJECT "Bad Gateway" https://www.google.com/
    FwdState.cc(484) fail: pconn race happened
    FwdState.cc(494) fail: zero reply on pinned connection

Unfortunately, I cannot reproduce this problem now. I can remove the unpinning code and submit a new pull request, which now works ok for me. But I'm very wary that I did originally have problems with that which I can no longer reproduce.

From rousskov at measurement-factory.com  Fri Oct 29 14:38:12 2021
From: rousskov at measurement-factory.com (Alex Rousskov)
Date: Fri, 29 Oct 2021 10:38:12 -0400
Subject: [squid-dev] Alternate origin server selection
In-Reply-To: <95e22445-ed5a-8bb8-e6a0-5ea0b0a67635@opendium.com>
References: <38d21b62-8356-94db-8085-7dfabb45a234@opendium.com> <95e22445-ed5a-8bb8-e6a0-5ea0b0a67635@opendium.com>
Message-ID: <18a1cb18-c0b8-7558-3614-07e4deea86bd@measurement-factory.com>

On 10/29/21 9:57 AM, Steve Hill wrote:
> Ok, I've gone back and looked over my old debug logs. It appears what
> was actually happening was:
>
> - Client sends "CONNECT www.google.com:443".
> - Connection with TLS made to forcesafesearch.google.com.
> - Client sends "GET / HTTP/1.1\r\nHost: www.google.com"
> - Squid runs the peer selector to find peers for www.google.com (i.e.
>   the host contained in the GET request).
> - It finds the appropriate pinned connection:
>     client_side.cc(3872) borrowPinnedConnection: conn28
>     local=81.187.83.66:52488 remote=216.239.38.120:443 HIER_DIRECT FD 18
>     flags=1
> - Squid then logs:
>     FwdState.cc(472) fail: ERR_ZERO_SIZE_OBJECT "Bad Gateway"
>         https://www.google.com/
>     FwdState.cc(484) fail: pconn race happened
>     FwdState.cc(494) fail: zero reply on pinned connection

The above looks like a persistent connection race, way after SslBump bumped the connections. This kind of race seems unrelated to everything we have discussed on this thread so far, but I may be missing something.

> Unfortunately, I cannot reproduce this problem now.

Well, if you want to reproduce it, you probably can do that using a custom origin server, but I do not see the point: There is nothing particularly "wrong" in the above sequence AFAICT; races happen.
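Such a race can be sketched at the socket level, entirely outside Squid: the origin server closes a kept-alive connection, and the next read on it returns zero bytes (EOF). A minimal illustration, with hypothetical names and plain sockets rather than anything Squid-specific:

```python
import socket
import threading

def origin(listener):
    # Serve exactly one request on one connection, then close it,
    # leaving the client holding a dead "persistent" connection.
    conn, _ = listener.accept()
    conn.recv(4096)  # read the first request (contents ignored)
    conn.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: 2\r\n"
                 b"Connection: keep-alive\r\n\r\nok")
    conn.close()

listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)
server = threading.Thread(target=origin, args=(listener,))
server.start()

client = socket.create_connection(listener.getsockname())
client.sendall(b"GET /1 HTTP/1.1\r\nHost: example.test\r\n\r\n")

reply = b""
while b"ok" not in reply:   # read the complete first reply
    reply += client.recv(4096)

# The server has since closed its end; reusing the kept-alive
# connection now yields zero bytes (EOF) - the "zero reply".
eof = client.recv(4096)
server.join()
print(reply.split(b"\r\n")[0].decode())  # HTTP/1.1 200 OK
print(eof)                               # b''
```

In the log above, the same zero-byte read happens on the pinned connection that Squid just borrowed, which is what gets reported as ERR_ZERO_SIZE_OBJECT.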
However, it is possible that the above log is lying or misleading, and there was actually a different problem, related to the previous discussions, that merely manifested itself as the ERR_ZERO_SIZE_OBJECT / "pconn race happened" diagnostic.

> I can remove the unpinning code and submit a new pull request, which now
> works ok for me. But I'm very wary that I did originally have problems
> with that which I can no longer reproduce.

Unfortunately, I did not have a chance to study the rejected pull request before it was rejected, so I cannot advise you on whether subtracting something from that pull request will result in changes that should be, in principle, accepted.

I suggest imagining that the rejected pull request did not exist (instead of using it as a starting/reference point) and then describing the problem and the proposed solution from scratch. You can do that here or via a new pull request. The best avenue may depend on whether your code changes alter some key Squid principles. If they do, it might be best to get an agreement on those changes here, but GitHub discussions have better access to code and better formatting abilities.

Cheers,

Alex.

From rousskov at measurement-factory.com  Fri Oct 29 22:09:31 2021
From: rousskov at measurement-factory.com (Alex Rousskov)
Date: Fri, 29 Oct 2021 18:09:31 -0400
Subject: [squid-dev] request for change handling hostStrictVerify
In-Reply-To: <53-61787780-1-3efbb8c0@263885290>
References: <53-61787780-1-3efbb8c0@263885290>
Message-ID: <025483dc-d7b6-b0fc-438d-f08429c4a781@measurement-factory.com>

On 10/26/21 5:46 PM, kk at sudo-i.net wrote:
> - Squid enforces the Client to use SNI
> - Squid lookup IP for SNI (DNS resolution).
> - Squid forces the client to go to the resolved IP

AFAICT, the above strategy is in conflict with the "SECURITY NOTE" paragraph in host_verify_strict documentation: If Squid strays from the intended IP using client-supplied destination info, then malicious applets will escape browser IP-based protections. Also, SNI obfuscation or encryption may make this strategy ineffective or short-lived.

AFAICT, in the majority of deployments, the mismatch between the intended IP address and the SNI/Host header can be correctly handled automatically and without creating serious problems for the user. Squid already does the right thing in some cases. Somebody should carefully expand that coverage to intercepted traffic. Frankly, I am somewhat surprised nobody has done that yet given the number of complaints!

HTH,

Alex.

From squid3 at treenet.co.nz  Sat Oct 30 00:37:01 2021
From: squid3 at treenet.co.nz (Amos Jeffries)
Date: Sat, 30 Oct 2021 13:37:01 +1300
Subject: [squid-dev] request for change handling hostStrictVerify
In-Reply-To: <025483dc-d7b6-b0fc-438d-f08429c4a781@measurement-factory.com>
References: <53-61787780-1-3efbb8c0@263885290> <025483dc-d7b6-b0fc-438d-f08429c4a781@measurement-factory.com>
Message-ID: <7cd48494-f908-2444-db10-e278201d7123@treenet.co.nz>

On 30/10/21 11:09, Alex Rousskov wrote:
> On 10/26/21 5:46 PM, kk at sudo-i.net wrote:
>
>> - Squid enforces the Client to use SNI
>> - Squid lookup IP for SNI (DNS resolution).
>> - Squid forces the client to go to the resolved IP
>
> AFAICT, the above strategy is in conflict with the "SECURITY NOTE"
> paragraph in host_verify_strict documentation: If Squid strays from the
> intended IP using client-supplied destination info, then malicious
> applets will escape browser IP-based protections. Also, SNI obfuscation
> or encryption may make this strategy ineffective or short-lived.
> AFAICT, in the majority of deployments, the mismatch between the
> intended IP address and the SNI/Host header can be correctly handled
> automatically and without creating serious problems for the user. Squid
> already does the right thing in some cases. Somebody should carefully
> expand that coverage to intercepted traffic. Frankly, I am somewhat
> surprised nobody has done that yet given the number of complaints!

IIRC the "right thing" as defined by TLS for SNI verification is that it be the same as the host/domain name from the wrapper protocol (i.e. the Host header / URL domain from HTTPS messages). Since Squid uses the SNI at step2 as the Host value, it already gets checked against the intercepted IP.

Amos

From rousskov at measurement-factory.com  Sat Oct 30 01:14:58 2021
From: rousskov at measurement-factory.com (Alex Rousskov)
Date: Fri, 29 Oct 2021 21:14:58 -0400
Subject: [squid-dev] request for change handling hostStrictVerify
In-Reply-To: <7cd48494-f908-2444-db10-e278201d7123@treenet.co.nz>
References: <53-61787780-1-3efbb8c0@263885290> <025483dc-d7b6-b0fc-438d-f08429c4a781@measurement-factory.com> <7cd48494-f908-2444-db10-e278201d7123@treenet.co.nz>
Message-ID: <03983185-b4d1-83cc-cc12-df4a9a98527d@measurement-factory.com>

On 10/29/21 8:37 PM, Amos Jeffries wrote:
> On 30/10/21 11:09, Alex Rousskov wrote:
>> On 10/26/21 5:46 PM, kk at sudo-i.net wrote:
>>
>>> - Squid enforces the Client to use SNI
>>> - Squid lookup IP for SNI (DNS resolution).
>>> - Squid forces the client to go to the resolved IP
>>
>> AFAICT, the above strategy is in conflict with the "SECURITY NOTE"
>> paragraph in host_verify_strict documentation: If Squid strays from the
>> intended IP using client-supplied destination info, then malicious
>> applets will escape browser IP-based protections. Also, SNI obfuscation
>> or encryption may make this strategy ineffective or short-lived.
>> AFAICT, in the majority of deployments, the mismatch between the
>> intended IP address and the SNI/Host header can be correctly handled
>> automatically and without creating serious problems for the user. Squid
>> already does the right thing in some cases. Somebody should carefully
>> expand that coverage to intercepted traffic. Frankly, I am somewhat
>> surprised nobody has done that yet given the number of complaints!

> IIRC the "right thing" as defined by TLS for SNI verification is that it
> be the same as the host/domain name from the wrapper protocol (i.e. the
> Host header / URL domain from HTTPS messages). Since Squid uses the SNI
> at step2 as the Host value, it already gets checked against the
> intercepted IP.

Just to avoid misunderstanding, my email was _not_ about SNI verification. I was talking about solving the problem this thread is devoted to (and a specific solution proposed in the opening email on the thread).

Alex.
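For readers following this thread: the SNI-at-step2 behaviour Amos describes relies on Squid peeking at the TLS ClientHello before making any forwarding decision. A minimal illustrative squid.conf fragment (a sketch only; the certificate path and port number are placeholders, not taken from this thread):

```
# Peek at the TLS ClientHello during SslBump step1 so the SNI is
# known before any connection to the origin server is attempted.
https_port 3129 intercept ssl-bump cert=/etc/squid/ca.pem
acl step1 at_step SslBump1
ssl_bump peek step1
ssl_bump splice all

# Verify that the client-supplied destination (Host header / SNI)
# resolves to the IP address the intercepted client was contacting.
host_verify_strict on
```

With `peek` at step1, the SNI is available at step2, where Squid uses it as the Host value for the checks discussed above; `splice all` then forwards the TLS stream unmodified.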