<html xmlns:o="urn:schemas-microsoft-com:office:office" xmlns:w="urn:schemas-microsoft-com:office:word" xmlns:m="http://schemas.microsoft.com/office/2004/12/omml" xmlns="http://www.w3.org/TR/REC-html40"><head><meta http-equiv=Content-Type content="text/html; charset=utf-8"><meta name=Generator content="Microsoft Word 15 (filtered medium)"><style><!--
/* Font Definitions */
@font-face
{font-family:"Cambria Math";
panose-1:2 4 5 3 5 4 6 3 2 4;}
@font-face
{font-family:Calibri;
panose-1:2 15 5 2 2 2 4 3 2 4;}
/* Style Definitions */
p.MsoNormal, li.MsoNormal, div.MsoNormal
{margin:0in;
margin-bottom:.0001pt;
font-size:11.0pt;
font-family:"Calibri",sans-serif;}
a:link, span.MsoHyperlink
{mso-style-priority:99;
color:blue;
text-decoration:underline;}
.MsoChpDefault
{mso-style-type:export-only;}
@page WordSection1
{size:8.5in 11.0in;
margin:1.0in 1.0in 1.0in 1.0in;}
div.WordSection1
{page:WordSection1;}
--></style></head><body lang=EN-US link=blue vlink="#954F72"><div class=WordSection1><p class=MsoNormal>Hey Amos,</p><p class=MsoNormal><o:p> </o:p></p><p class=MsoNormal>I am not sure I understand whether there are risks in this subject, and what they are.</p><p class=MsoNormal>From what I understand so far, Google doesn’t use any DH concept on specific keys.</p><p class=MsoNormal>I do believe that there is a reason for the obvious ABORT.</p><p class=MsoNormal>The client is allowed to ABORT, and in most cases the software decides to do so if there is an issue with the given certificate.</p><p class=MsoNormal>The most obvious reason for such a case is that the client software tries to peek inside the “given” TLS connection and decide whether it is a good idea to continue under the session conditions.</p><p class=MsoNormal><o:p> </o:p></p><p class=MsoNormal>I do agree that forced caching is a very bad idea.</p><p class=MsoNormal>However, I do believe that there are use cases for such methods, but only in a dev environment.</p><p class=MsoNormal><o:p> </o:p></p><p class=MsoNormal>If Google or any other leaf of the network is trying to cache the ISP or to push traffic into it, the ISP is allowed by law to do what it needs to do to protect its clients.</p><p class=MsoNormal>I am not sure that there is any risk in doing so compared to what Google did to the Internet.</p><p class=MsoNormal><o:p> </o:p></p><p class=MsoNormal>Just a scenario I have in mind:</p><p class=MsoNormal>If the world doesn’t really need Google to survive, as some try to argue,</p><p class=MsoNormal>Would an IT specialist give up on Google? 
I.e., given a better, much safer alternative?</p><p class=MsoNormal><o:p> </o:p></p><p class=MsoNormal>I believe Google is a milestone for humanity; however, if no one understands the risks of the local databases<br>and why these databases exist and are protected in the first place, and why they shouldn’t be exposed to the public,</p><p class=MsoNormal>there is an opening for those who want to access these databases.</p><p class=MsoNormal><o:p> </o:p></p><p class=MsoNormal>Eliezer</p><p class=MsoNormal><o:p> </o:p></p><p class=MsoNormal>----<o:p></o:p></p><p class=MsoNormal>Eliezer Croitoru<o:p></o:p></p><p class=MsoNormal>Tech Support<o:p></o:p></p><p class=MsoNormal>Mobile: +972-5-28704261<o:p></o:p></p><p class=MsoNormal>Email: ngtech1ltd@gmail.com<o:p></o:p></p><p class=MsoNormal><o:p> </o:p></p><div style='mso-element:para-border-div;border:none;border-top:solid #E1E1E1 1.0pt;padding:3.0pt 0in 0in 0in'><p class=MsoNormal style='border:none;padding:0in'><b>From: </b><a href="mailto:squid3@treenet.co.nz">Amos Jeffries</a><br><b>Sent: </b>Monday, May 25, 2020 1:02 PM<br><b>To: </b><a href="mailto:squid-users@lists.squid-cache.org">squid-users@lists.squid-cache.org</a><br><b>Subject: </b>Re: [squid-users] Squid cache with SSL</p></div><p class=MsoNormal><o:p> </o:p></p><p class=MsoNormal>On 25/05/20 8:09 pm, Andrey Etush-Koukharenko wrote:</p><p class=MsoNormal>> Hello, I'm trying to set up a cache for GCP signed URLs using squid 4.10</p><p class=MsoNormal>> I've set ssl_bump:</p><p class=MsoNormal>> *http_port 3128 ssl-bump cert=/etc/ssl/squid_ca.pem</p><p class=MsoNormal>> generate-host-certificates=on dynamic_cert_mem_cache_size=4MB</p><p class=MsoNormal>> </p><p class=MsoNormal>> sslcrtd_program /usr/lib/squid/security_file_certgen -s /var/lib/ssl_db</p><p class=MsoNormal>> -M 4MB</p><p class=MsoNormal>> </p><p class=MsoNormal>> acl step1 at_step SslBump1</p><p class=MsoNormal>> </p><p class=MsoNormal>> ssl_bump peek step1</p><p class=MsoNormal>> ssl_bump bump 
all*</p><p class=MsoNormal><o:p> </o:p></p><p class=MsoNormal>The above SSL-Bump configuration tries to auto-generate server</p><p class=MsoNormal>certificates based only on details in the TLS client handshake. This</p><p class=MsoNormal>leads to a huge number of problems, not least of which is completely</p><p class=MsoNormal>breaking TLS security properties.</p><p class=MsoNormal><o:p> </o:p></p><p class=MsoNormal>Prefer doing the bump at step3.</p><p class=MsoNormal><o:p> </o:p></p><p class=MsoNormal><o:p> </o:p></p><p class=MsoNormal>> *</p><p class=MsoNormal>> I've set cache like this:</p><p class=MsoNormal>> </p><p class=MsoNormal>> *refresh_pattern -i my-dev.storage.googleapis.com/.* </p><p class=MsoNormal>> 4320 80% 43200 override-expire ignore-reload ignore-no-store ignore-private*</p><p class=MsoNormal>> *</p><p class=MsoNormal><o:p> </o:p></p><p class=MsoNormal>FYI: that does not set up the cache. It provides *default* parameters for</p><p class=MsoNormal>the heuristic expiry algorithm.</p><p class=MsoNormal><o:p> </o:p></p><p class=MsoNormal>* override-expire replaces the max-age (or Expires header) parameter</p><p class=MsoNormal>with 43200 minutes from object creation.</p><p class=MsoNormal> This often has the effect of forcing objects to expire from cache long</p><p class=MsoNormal>before they normally would.</p><p class=MsoNormal><o:p> </o:p></p><p class=MsoNormal>* ignore-reload makes Squid ignore requests from the client to update</p><p class=MsoNormal>its cached content.</p><p class=MsoNormal> This forces content which is stale, outdated, corrupt, or plain wrong</p><p class=MsoNormal>to remain in cache no matter how many times clients try to re-fetch</p><p class=MsoNormal>a valid response.</p><p class=MsoNormal><o:p> </o:p></p><p class=MsoNormal>* ignore-private makes Squid cache content that is never supposed to be shared</p><p class=MsoNormal>between clients.</p><p class=MsoNormal> To prevent personal data being shared between clients who should 
never</p><p class=MsoNormal>see it, Squid will revalidate these objects. Usually different data will</p><p class=MsoNormal>return, making this just a waste of cache space.</p><p class=MsoNormal><o:p> </o:p></p><p class=MsoNormal>* ignore-no-store makes Squid cache objects that are explicitly</p><p class=MsoNormal>*forbidden* to be stored in a cache.</p><p class=MsoNormal> 80% of 0 seconds == 0 seconds before these objects become stale and</p><p class=MsoNormal>expire from cache.</p><p class=MsoNormal><o:p> </o:p></p><p class=MsoNormal>Given that you described this as a problem with an API doing *signing*</p><p class=MsoNormal>of things, I expect that at least some of those objects will be security</p><p class=MsoNormal>keys. Possibly generated specifically per-item keys, where forced</p><p class=MsoNormal>caching is a *BAD* idea.</p><p class=MsoNormal><o:p> </o:p></p><p class=MsoNormal>I recommend removing that line entirely from your config file and</p><p class=MsoNormal>letting the Google developers’ instructions do what they are intended to</p><p class=MsoNormal>do with the cacheability. 
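</p><p class=MsoNormal>Taken together, the two suggestions above (bumping at step3 rather than step1, and dropping the forced-caching refresh_pattern line) might look something like the following sketch. The step2 "stare" rule is an assumption about the usual way to allow a bump at step3, not part of the original mail; adjust paths and ports to your own setup:</p><pre># sketch only - not a drop-in config
http_port 3128 ssl-bump cert=/etc/ssl/squid_ca.pem \
    generate-host-certificates=on dynamic_cert_mem_cache_size=4MB
sslcrtd_program /usr/lib/squid/security_file_certgen -s /var/lib/ssl_db -M 4MB

acl step1 at_step SslBump1
acl step2 at_step SslBump2

ssl_bump peek step1     # step1: read the client SNI
ssl_bump stare step2    # step2: read the real server certificate details
ssl_bump bump all       # step3: bump with server details available

# no refresh_pattern overrides: let the origin's caching headers apply</pre><p class=MsoNormal>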
At the very least, start from the default</p><p class=MsoNormal>caching behaviour and see how it works normally before adding protocol</p><p class=MsoNormal>violations and unusual (mis)behaviours to how the proxy caches things.</p><p class=MsoNormal><o:p> </o:p></p><p class=MsoNormal><o:p> </o:p></p><p class=MsoNormal>> *</p><p class=MsoNormal>> In the cache directory, I see that object was stored after the first</p><p class=MsoNormal>> call, but when I try to re-run the URL I get always</p><p class=MsoNormal>> get: *TCP_REFRESH_UNMODIFIED_ABORTED/200*</p><p class=MsoNormal><o:p> </o:p></p><p class=MsoNormal>What makes you think anything is going wrong?</p><p class=MsoNormal><o:p> </o:p></p><p class=MsoNormal> Squid found the object in cache (HIT).</p><p class=MsoNormal> The object requirements were to check with the origin server about</p><p class=MsoNormal>whether it could still be used (HIT becomes REFRESH).</p><p class=MsoNormal> The origin server said it was fine to deliver (UNMODIFIED).</p><p class=MsoNormal> Squid started delivery (status 200).</p><p class=MsoNormal> The client disconnected before the response delivery could be</p><p class=MsoNormal>completed (ABORTED).</p><p class=MsoNormal><o:p> </o:p></p><p class=MsoNormal>Clients are allowed to disconnect at any time, for any reason.</p><p class=MsoNormal><o:p> </o:p></p><p class=MsoNormal><o:p> </o:p></p><p class=MsoNormal>Amos</p><p class=MsoNormal>_______________________________________________</p><p class=MsoNormal>squid-users mailing list</p><p class=MsoNormal>squid-users@lists.squid-cache.org</p><p class=MsoNormal>http://lists.squid-cache.org/listinfo/squid-users</p><p class=MsoNormal><o:p> </o:p></p></div></body></html>