[squid-users] Caching http google deb files

Hardik Dangar hardikdangar+squid at gmail.com
Tue Oct 4 13:34:50 UTC 2016


Hey Amos,

We have about 50 clients that download the same Google Chrome update every 2
or 3 days via apt update, which adds up to roughly 2.4 GB of repeated
traffic. Although the response carries a Vary header, the requested file is
always the same.

Is there an option similar to ignore-no-store? I know I am asking for too
much, but it seems very silly on Google's part to send a Vary header in a
place where they shouldn't: no matter how you access those URLs, you only
ever get those same deb files.
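For context, this is the kind of refresh_pattern override I mean (a sketch
only; as far as I understand, none of these options actually bypasses the
Vary header itself, they only relax no-store/private/Expires handling):

```
# Sketch: cache .deb packages aggressively despite hostile headers.
# ignore-no-store, ignore-private, override-expire and override-lastmod
# are existing refresh_pattern options; none of them overrides "Vary: *".
refresh_pattern -i \.deb$ 129600 100% 129600 \
    ignore-no-store ignore-private override-expire override-lastmod
```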

Can I patch the Squid source code to ignore the Vary header?



On Tue, Oct 4, 2016 at 6:51 PM, Amos Jeffries <squid3 at treenet.co.nz> wrote:

> On 5/10/2016 2:05 a.m., Hardik Dangar wrote:
> > Hello,
> >
> > I am trying to cache following deb files as its most requested file in
> > network. ( google chrome almost every few days many clients update it ).
> >
> > http://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb
> > http://dl-ssl.google.com/dl/linux/direct/mod-pagespeed-
> beta_current_i386.deb
> >
> > Response headers for both contains Last modified date which is 10 to 15
> > days old but squid does not seem to cache it somehow. here is sample
> > response header for one of the file,
> >
> > HTTP Response Header
> >
> > Status: HTTP/1.1 200 OK
> > Accept-Ranges: bytes
> > Content-Length: 6662208
> > Content-Type: application/x-debian-package
> > Etag: "fa383"
> > Last-Modified: Thu, 15 Sep 2016 19:24:00 GMT
> > Server: downloads
> > Vary: *
>
> The Vary header says that this response is just one of many that can
> happen for this URL.
>
> The "*" in that header says that the way to determine which variant the
> client gets is based on something no proxy can ever replicate. Thus no
> cache can ever re-use any content it stored. That makes any attempt to
> store it a pointless waste of CPU time, disk and memory space that could
> better be used by some other, more useful object. Squid will never cache
> these responses.
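> To illustrate the difference (example headers invented for illustration):
> with an ordinary Vary header, a cache can key the stored variant on the
> named request header; "Vary: *" names no request header at all, so there
> is nothing for the cache to key on.
>
> ```
> # Cacheable: the variant is selected by the Accept-Encoding request
> # header, so a cache can store one copy per encoding.
> Vary: Accept-Encoding
>
> # Uncacheable: the selecting input lies outside any request header
> # (RFC 7231 section 7.1.4), so no proxy can tell which variant a
> # given client should receive.
> Vary: *
> ```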
>
> (Thank you for the well written request for help anyhow.)
>
> Amos
>
> _______________________________________________
> squid-users mailing list
> squid-users at lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users
>