[squid-users] hsc-dynamic-cache: relied on storeID rules? Removed in 3.5.20?

Yuri Voinov yvoinov at gmail.com
Wed Mar 29 19:28:59 UTC 2017



29.03.2017 5:55, L A Walsh wrote:
> Eliezer Croitoru wrote:
>> Hey Linda,
>>
>> As the patcher/author of StoreID I will try to clarify what might
>> seem odd.
>> StoreID is a "static" rule which is one of the squid cache fundamentals.
>> The feature is the option to tweak this internal cache object ID.
>> This is a very static feature and will not be changed for a very long
>> time from now on.
>> Most of the public helpers I have seen are very "simple" and rely on
>> very simple things.
> ----
>     Makes sense, otherwise too prone to breakage.
I don't think so. The more effective (and complex) solutions are just
not free (and, of course, not open source). I don't think C++ code is
that easy to break. :)
>
>
>
>>
>> But since many systems these days are aware of the option to predict
>> what the next URL would be (think about an exe or other binary that
>> can be replaced on-the-fly/in-transit), the developers changed and
>> keep changing (like a Diffie-Hellman exchange) their way of
>> transporting content to the end client.
>> Due to this "Diffie-Hellman" feature that many added, the simpler
>> scripts became useless since the developers became much smarter.
> ----
>     yeah, my use case was fairly simple -- same
> person w/same browser, watching same vid a 2nd time.
>
>     They gave me many "kudos" and noticed that
> youtube was noticeably faster to browse through when I
> implemented the SSL interception on the squid proxy
> that web traffic goes through.  In that case, it was mainly
> the caching of the video-thumbs that noticeably sped up
> moving through YT pages.
YT is now a very different thing. As I have said many times, YT
actively opposes caching (to force ISPs to use Google Global Cache) by
shuffling the underlying CDN URLs and/or encrypting parts of the URL.
So it is very difficult to make YT videos cacheable at runtime these
days; only the static YT pages remain cacheable (which is, in effect,
almost nothing).
>
>> Indeed you will see TCP_MISS and it won't cache, but this is only
>> because the admin might have the illusion that encrypted content can
>> be predicted, while Diffie-Hellman cryptography is creating the plain
>> URLs these days.
> ----
>     Oh yeah.  Have noted that there are an infinite number
> of ways to access the same URL and have thought about ways I might
> collapse them to 1 URL, but it's just idle thinking, as there are
> other things on the plate.
>
>     One good idea that didn't get updated was an
> extension in FF, that tried to store some of the latest
> Javascript libs that sites used so if they asked for the lib
> from a common site (like jquery), it might return the result
> from a local cache. 
> It wouldn't help for those sites that merge
> multiple JS files and minify them.
>
> But many sites have 15-20 different websites that are "included" to
> get different elements (fonts, stylesheets,
> JS libs, etc) from different sources.  They seem to
In this case StoreID is useful. You simply write a regex which maps
these several URLs to one.
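For what it's worth, that kind of rule can be sketched as a tiny StoreID
helper. Everything below is a hypothetical example (the CDN hostnames,
the regex, the `.squid.internal` ID, the install path), not a recipe for
any particular site, and it assumes the plain Squid 3.4+ helper protocol
without a concurrency channel:

```python
#!/usr/bin/env python3
# Minimal StoreID helper sketch (Squid 3.4+ helper protocol, no
# concurrency channel). Hostnames and paths are hypothetical examples.
#
# squid.conf wiring would look roughly like:
#   store_id_program /usr/local/bin/storeid_helper.py
#   store_id_children 5 startup=1
import re
import sys

# Collapse e.g. http://cdn1.example.com/libs/jquery.min.js and
# http://cdn7.example.com/libs/jquery.min.js into one cache object ID.
PATTERN = re.compile(r'^https?://cdn\d+\.example\.com/(.*)$')

def store_id(url):
    """Return a normalized store ID for the URL, or None if no rule matches."""
    m = PATTERN.match(url)
    if m:
        return 'http://cdn.example.com.squid.internal/' + m.group(1)
    return None

if __name__ == '__main__':
    for line in sys.stdin:
        parts = line.split()
        if not parts:
            continue
        sid = store_id(parts[0])  # first token is the requested URL
        # Reply OK with the rewritten ID, or ERR to leave the URL as-is.
        sys.stdout.write('OK store-id=%s\n' % sid if sid else 'ERR\n')
        sys.stdout.flush()
```

Any two requests that the regex matches then share one cache object, so
a hit on one mirror hostname serves the others too.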
> include URLs like a developer would use
> #include files... (and often take forever to load),
> pulling multiple elements from different URLs like
> multiple header include files in a local compilation.
>
>
>> Hope It helped,
>> Eliezer
>
> Thanks for the explanation, certainly more useful
> than just telling someone:
>
> "the web broke it"... :-)
It is hardly an explanation that will solve a specific problem; that
takes effort, often a very big one. And yes, the web actively opposes
caching, or at least does not care about caching at all. Is it any
wonder that those who can solve this problem are in no hurry to give
away the results of their hard work to the world free of charge? :)
>
> _______________________________________________
> squid-users mailing list
> squid-users at lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users

-- 
Bugs to the Future