[squid-users] Caching http google deb files

Amos Jeffries squid3 at treenet.co.nz
Tue Oct 4 14:17:01 UTC 2016


On 5/10/2016 2:34 a.m., Hardik Dangar wrote:
> Hey Amos,
> 
> We have about 50 clients which download the same Google Chrome update
> every 2 or 3 days, which means 2.4 GB each time. Although the response
> says Vary, the requested file is the same, and all of it is downloaded
> via apt update.
> 
> Is there any option just like ignore-no-store? I know I am asking for
> too much, but it seems very silly on Google's part that they are
> sending a Vary header in a place where they shouldn't, as no matter how
> you access those URLs you are only going to get those deb files.


Some things G does only make sense when you ignore all the PR about
wanting to make the web more efficient and consider that it is a company
whose income is derived from recording data about people's habits and
activities. Caching can hide that info from them.
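For comparison, the ignore-no-store style workarounds you mention are
refresh_pattern options. A sketch of what that looks like (the pattern
and timings here are examples, not a tested recommendation; note that
none of these options override Vary, which is the problem in your case):

    # Example only: cache .deb files aggressively, revalidating with
    # If-Modified-Since rather than re-fetching the whole package.
    # 129600 minutes = 90 days.
    refresh_pattern -i \.deb$ 129600 100% 129600 refresh-ims override-expire ignore-no-store

The Vary header is handled at a different layer of the caching logic, so
refresh_pattern cannot be used to work around it.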

> 
> can I hack the Squid source code to ignore the Vary header?
> 

Google are explicitly saying the response changes. I suspect there is
something involving Google account data being embedded in some of the
downloads. For tracking, etc.
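To illustrate what the cache sees (illustrative headers, not captured
from Google's servers), a response along these lines:

    HTTP/1.1 200 OK
    Content-Type: application/x-debian-package
    Vary: *

tells the cache that the response may differ between requests for
reasons the server will not enumerate, so Squid cannot safely serve a
stored copy to a different client. A Vary listing specific request
headers (e.g. "Vary: Accept-Encoding") would at least let Squid store
one variant per header value; "Vary: *" effectively forbids reuse.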


If you want to test it, I have added a patch to
<http://bugs.squid-cache.org/show_bug.cgi?id=4604> that should implement
archival of responses where the ACLs match. It is completely untested by
me beyond building, so YMMV.

Amos


