[squid-users] hsc-dynamic-cache: relied on storeID rules? Removed in 3.5.20?

L A Walsh squid-user at tlinx.org
Tue Mar 28 23:55:02 UTC 2017


Eliezer Croitoru wrote:
> Hey Linda,
> 
> As the patcher/author of StoreID I will try to clarify what might seem odd.
> StoreID is a "static" rule which is one of the Squid cache fundamentals.
> The feature is the option to tweak the internal cache object ID.
> This is a very static feature and will not change for a long time.
> Most of the public helpers I have seen are very "simple" and rely on very simple things.
----
	Makes sense, otherwise too prone to breakage.
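
	(For the curious: the helper side of StoreID really
is simple -- Squid feeds the helper one URL per line on
stdin and expects "OK store-id=<key>" or "ERR" back on
stdout.  A minimal sketch, assuming the non-concurrent
protocol and the Squid 3.4+ directive names; normalize()
is just a placeholder for a real rewrite rule:)

#!/usr/bin/env python3
# Minimal Squid StoreID helper sketch (non-concurrent protocol:
# no concurrency= on store_id_children, so no channel-ID prefix).
#
# squid.conf wiring (Squid 3.4+ directives):
#   store_id_program /usr/local/bin/storeid_helper.py
#   store_id_children 5 startup=1
#   store_id_access allow all
import sys

def normalize(url):
    # Placeholder: return a stable cache key for URLs you want
    # to collapse, or None to leave the URL alone.
    return None

for line in sys.stdin:
    fields = line.strip().split()
    if not fields:
        continue
    url = fields[0]                  # first token is the request URL
    key = normalize(url)
    if key:
        sys.stdout.write("OK store-id=%s\n" % key)
    else:
        sys.stdout.write("ERR\n")    # ERR = use the URL unchanged
    sys.stdout.flush()               # Squid reads replies line-by-line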



> 
> But since many systems these days are aware of the option to predict what the next URL would be (think about an exe or other binary that can be replaced on-the-fly/in-transit), the developers changed, and keep changing (like a Diffie-Hellman exchange), their way of transporting content to the end client.
> Due to this "Diffie-Hellman"-like behavior that many have added, the simpler scripts are useless, since the developers have become much smarter.
----
	Yeah, my use case was fairly simple -- the same
person w/the same browser, watching the same vid a 2nd time.

	They gave me many "kudos" and noticed that
YouTube was noticeably faster to browse through when I
implemented SSL interception on the squid proxy that
web traffic goes through.  In that case, it was mainly
the caching of the video thumbnails that sped up
moving through YT pages.

> Indeed you will see TCP_MISS and it won't cache, but this is only because the admin may have the illusion that encrypted content can be predicted, when Diffie-Hellman-style cryptography is creating the plain URLs these days.
----
	Oh yeah.  Have noted that there are an infinite
number of URL variants for accessing the same content,
and have thought about ways I might collapse them to one
URL, but it's just idle thinking, as other things are on
the plate.
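
	The collapsing itself wouldn't be hard -- drop the
volatile query parameters and sort the rest.  A sketch
(the VOLATILE names are made-up examples, not any real
site's parameters):

# Sketch: collapse URL variants onto one canonical key by dropping
# volatile query parameters and sorting the rest.
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

VOLATILE = {"sessionid", "token", "ts", "rand"}   # hypothetical names

def canonical(url):
    parts = urlsplit(url)
    params = sorted((k, v) for k, v in parse_qsl(parts.query)
                    if k.lower() not in VOLATILE)
    return urlunsplit((parts.scheme, parts.netloc.lower(),
                       parts.path, urlencode(params), ""))

# Both variants collapse to http://example.com/v?id=42 :
#   canonical("http://example.com/v?id=42&token=abc")
#   canonical("http://example.com/v?token=xyz&id=42")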

	One good idea that didn't get updated was an
extension in FF that tried to store some of the latest
JavaScript libs that sites used, so if they asked for the
lib from a common site (like jQuery's CDN), it might
return the result from a local cache.

It wouldn't help for those sites that merge
multiple JS files and minify them.

But many sites pull in elements from 15-20 different
hosts to get different pieces (fonts, stylesheets,
JS libs, etc.).  They seem to include URLs the way a
developer would use multiple #include files in a local
compilation (and often take forever to load).  A StoreID
rule could at least collapse the duplicated libraries,
as in the sketch below.
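
	That is, do on the proxy what that extension did
in the browser: collapse the same library version,
fetched from different CDNs, into one cache object.
A sketch of the normalize() rule from above -- the
hosts, path layouts, and the fake .squid.internal key
host are illustrative assumptions, and collapsing
across CDNs only makes sense if the copies are
byte-identical:

import re

# Hypothetical: treat jQuery <version> from two CDNs as one object.
JQUERY = re.compile(
    r"^https?://(?:code\.jquery\.com|ajax\.googleapis\.com/ajax/libs/jquery)"
    r"/(?:jquery-)?(\d+\.\d+\.\d+)(?:/jquery)?(\.min)?\.js$")

def normalize(url):
    m = JQUERY.match(url)
    if m:
        version, minified = m.group(1), m.group(2) or ""
        # fake host for the shared key (a common StoreID convention)
        return ("http://jquery.squid.internal/%s/jquery%s.js"
                % (version, minified))
    return None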


> Hope It helped,
> Eliezer

Thanks for the explanation, certainly more useful
than just telling someone:

 "the web broke it"... 
:-)






