[squid-users] Squid 2.7.s9 HTTPS-proxying - hint welcome

Amos Jeffries squid3 at treenet.co.nz
Sat Aug 20 10:12:21 UTC 2016


On 19/08/2016 10:42 p.m., Torsten Kuehn wrote:
> Hi,
> 
> On 18/08/2016 6:32 a.m., Amos Jeffries wrote:
> 
>>> I imagine layouts where the encrypted traffic itself gets stored
>> no way for Squid to know if a previous encrypted stream is reusable.
>> To squid it is just a random stream of opaque bytes.
> 
> Enlightening! The idea was that skipping decryption on the proxy, and
> instead providing the means to decrypt on the client's side, would raise
> fewer privacy concerns. As I see now, with a mere "stream of opaque
> bytes" there is no handle left to provide such measures. Thus, if
> caching of SSL-encrypted data is desired, decryption is mandatory.
> 
>>> I.e. [HTTPS] not cacheable at all [in 2.7.s9]?
>> Correct.
> 
> Asking here months earlier would have saved me some painful failures ...
> 
>>> I prefer not to erase objects [...] My [TAGs] may look horrible
>>>     authenticate_ttl 359996400 seconds
>> Lookup "credentials replay attack". [...] There is no other use for
>> this directive than attacking your clients.
> 
> Uugh! Was set in April 2012, by mistake (without effect in
> 2.5.s8_OS2_VAC, so it did no harm): the idea was to turn off Squid's
> garbage collection, in order to avoid wearing out flash memory. Wrong
> place, and I ignored the credentials aspect ...
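
For the record, simply deleting that directive restores the shipped
default which, if memory serves, is:

  authenticate_ttl 1 hour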
> 
>>>     hierarchy_stoplist cgi-bin
>>>     refresh_pattern -i /cgi-bin/ 5258880 100% 5258880
>> Please use the pattern settings:  "-i (/cgi-bin/|\?) 0 0% 0"
>> This pattern is designed to work around an issue with truly ancient CGI
>> scripts [...] Such scripts are guaranteed to be dynamically changing [...]
> 
> The idea comes from http://twiki.cern.ch/twiki/bin/view/CMS/MyOwnSquid ,
> to get dynamic web pages cached. I am glad that Squid finally does so!
> Conflicting concepts, as it seems. Or is there a regex which applies the
> old CGI-script workaround but still caches content with "?" in URLs?
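
Note that the recommended pattern does not forbid caching of "?" URLs
outright; the "0 0% 0" only disables *heuristic* freshness for them.
Responses that carry explicit Expires or Cache-Control max-age headers
are still cached normally. So a sketch that keeps the old workaround
without losing well-behaved dynamic content is simply:

  # no freshness heuristics for ancient CGI and query URLs;
  # explicit expiry headers on such responses are still honoured
  refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
  refresh_pattern . 0 20% 4320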
> 
>>>     refresh_pattern . 5258880 100% 5258880
>>>         override-expire override-lastmod ignore-no-cache ignore-private
>>>     positive_dns_ttl 359996400 seconds
>> Meaning whenever any domain name moves hosting service you get cutoff
>> from it completely for ~11 years or until you fully restart Squid.
> 
> Yes, I noticed this :-) (I used to reconfigure Squid from CacheMgr in
> these cases). It came from Sjoerd Visser's Dutch page on Squid 1.1, to
> work around its missing offline_mode TAG (I just kept positive_dns_ttl
> afterwards):
> http://vissesh.home.xs4all.nl/multiboot/firewall/squid.html
> 

Well, that tag is available now. :-) Though it does not do much these
days, since HTTP/1.1 caching brings most of what it used to do, but in a
standardised way.
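
For reference, it is a one-line toggle; with it set, Squid serves cached
objects without revalidating them against the origin servers:

  offline_mode on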

>> When you go to Squid-3, removing both these DNS settings entirely would
>> be best. [...] if you really insist [...] ~48hrs should work just as well
> 
> Truly. When I set up 2.5.s8_OS2_VAC six years ago, I just added a few new
> TAGs to my old 1.1 config. Only this summer, I spent a couple of days
> migrating the previous settings into the new order of a fresh 2.7.s9
> config (for better comparison), now comprising a history of all available
> OS/2 builds (introduction/disappearance of features etc.). To be re-done
> with 3.5.

Hmm. Run "squid -k parse" and you should get notices about changed or
removed config options. You should do so before/during any big version
jump anyway.
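
For example (adjust the path to wherever your squid.conf lives):

  squid -f /etc/squid/squid.conf -k parse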

> 
>>> setup is that robust that force-reload [fails unless objects deleted]
>> This is done solely by "override-expire".
> 
> Perfect. I'm far from knowing config TAGs by heart and thus don't see how
> things play together. Enabling "override-expire" in 2012 was a bad thing.
> 

Yeah. Times change. The new RFCs now tell us the Expires header is to be
followed only when no other (max-age) value is available, so it's sort
of optional in today's traffic. Ignoring it is still a violation, but
not much of a risk.
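
To illustrate the precedence (RFC 7234 semantics), given both of:

  Cache-Control: max-age=3600
  Expires: Thu, 01 Jan 1970 00:00:00 GMT

a cache treats the response as fresh for one hour; the Expires value is
ignored because max-age takes precedence.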


>>> [setup that robust that] PURGE fails unless [objects deleted manually]
>> In Squid-2 this is more efficiently done by:
>>   acl PURGE method PURGE
>>   http_access deny PURGE
> 
> Recently enabled (as well as CacheMgr-access) by setting
>     acl localnet src 192.168.0.160/27
>     [...]
>     acl purge method PURGE
>     http_access allow purge localnet
>     http_access allow purge localhost
>     http_access deny purge
> 
>> Squid-3 [...] disables all the PURGE functionality *unless* you have
>> "PURGE" method in an ACL like above. It is a good idea for performance
>> reasons to remove all mention of "PURGE" when upgrading to Squid-3.
> 
> Permitting PURGE has a performance impact? I enabled it recently, but
> since force-reload works now, it could be removed again.

A little bit. Squid has to track cached objects with an index ID that
PURGE can use (the normal one is tied to GET), so it wastes a bit of
memory and CPU time calculating it.

 HTCP protocol CLR messages work better than PURGE, especially when
using a group/hierarchy of caches.
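
A minimal sketch (the hostname and ports here are placeholders):

  # listen for HTCP, and pair with a sibling cache that also speaks it
  htcp_port 4827
  cache_peer sibling.example.net sibling 3128 4827 htcp
  # restrict who may send CLR (invalidation) messages
  htcp_clr_access allow localnet
  htcp_clr_access deny all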

> 
>>> [force-reload fails unless] the ugly "reload_into_ims on" option
>>> is set which violates standards.
>> reload_into_ims is not a violation of current HTTP/1.1 standards. [...]
>> The big(est) problem with it is that Squid-2 is HTTP/1.0 software and
>> both reload and IMS revalidation are not defined in that version of the
>> protocol. Adding two risky undefined things together multiplies dangers.
>> [...] Overall the approach to caching *at any cost* is doing more harm
>> than good, both to yourself and to many others.
> 
> Disquieting. In fact, I tried to change Squid's default caching behaviour
> to "accumulating" content ("once here, why reload redundant stuff?").
> The second, important intention behind this is not to wear out the flash
> memory that Squid runs on. Data is backed up regularly, but I'm afraid
> of, e.g., the write accesses of the regular revalidation process. (A
> friend lost a solid state disk with a Squid cache after only six months.)
> 

Nod. My customers' experiences with SSDs, and talks with other list
members here, have been similar. The older Squids pass all objects
through the cache_dir, so the disk gets a lot more write activity than
you would expect. That has been improved with recent releases, but is
not yet fully solved.

You need to go for SSDs with very high advertised write/rewrite counts,
and you can expect/budget for the operating lifetime under Squid to be
1/2 or even 1/3 of what the manufacturer advertises. Not due to any
fault of theirs; Squid just thrashes hardware in ways SSDs are not
designed for.
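
If sparing the flash entirely is the priority and RAM allows it, a
memory-only setup is one option. A sketch in Squid-2.7 syntax (the sizes
are placeholders, and the "null" store type must have been compiled in):

  # keep the whole cache in RAM; the null store never writes objects
  cache_mem 256 MB
  maximum_object_size_in_memory 512 KB
  cache_dir null /tmp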

FYI: the ufs/aufs/diskd cache_dir types do not update the on-disk copy
of objects when revalidating - only in-memory objects, or the rock cache
type, get that update. For most people that is a bug (#7, if
interested); for you it may be a bonus :-)

Cheers
Amos


