[squid-users] Getting the full file content on a range request, but not on EVERY get ...
hpj at urpla.net
Thu May 12 18:57:53 UTC 2016
On Thursday, 12 May 2016 13:28:00, Heiler Bemerguy wrote:
> Hi Pete, thanks for replying... let me see if I got it right..
> Will I need to specify every url/domain I want it to act on ? I want
> squid to do it for every range-request downloads that should/would be
> cached (based on other rules, pattern_refreshs etc)
Yup, that's right. At least, that's the common approach to deal with CDNs.
I think that disallowing range requests is too drastic to work well in the
long run, but let us know if you reach a satisfactory solution this way.
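As a rough sketch, the per-domain approach could look like this in squid.conf (the domain name is a placeholder, and you should check the `range_offset_limit` semantics for your squid version):

```
# Fetch the whole object when a client asks for a byte range from
# these (placeholder) CDN domains, so the full file ends up cached.
acl cdn_full dstdomain .cdn.example.com
range_offset_limit none cdn_full

# Keep downloading even if the client aborts early.
quick_abort_min -1 KB
```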
> It doesn't need to delay any downloads as long as it isn't a dupe of
> what's already being downloaded.....
You can set the delay to zero, of course.
This is only one side of the issues with CDNs. The other, more problematic
side is that many servers with different URLs provide the same files.
Every new address results in a new download of otherwise identical content.
Here's an example for openSUSE:
# this file was generated by gen_openSUSE_dedups
# from http://mirrors.opensuse.org/list/all.html
# with timestamp Thu, 12 May 2016 05:30:18 +0200
# openSUSE Headquarter
# South Africa (za)
# Bangladesh (bd)
# China (cn)
# Hong Kong (hk)
# Indonesia (id)
# Israel (il)
[...] -> this list contains about 180 entries
# fetch all redirected objects explicitly
This is how CDNs work, but it's a nightmare for caching proxies.
In such scenarios, squid_dedup comes to the rescue.
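The idea is to collapse all mirror URLs onto a single canonical key, so squid stores and serves each object only once. A minimal sketch of such a StoreID-style helper follows; the mirror regex and the `dedup.invalid` host are illustrative assumptions, not squid_dedup's actual output:

```python
import re
import sys

# Hypothetical mirror pattern; squid_dedup generates the real list from
# http://mirrors.opensuse.org/list/all.html (about 180 mirrors).
MIRROR_RE = re.compile(r'^https?://[^/]+/(?:pub/)?opensuse/(?P<path>.+)$',
                       re.IGNORECASE)

def store_id(url):
    """Map any known mirror URL to one canonical Store-ID, else None."""
    m = MIRROR_RE.match(url)
    if m:
        # All mirrors collapse onto one fake "internal" host, so squid
        # caches the object exactly once, whichever mirror served it.
        return 'http://opensuse.dedup.invalid/' + m.group('path')
    return None

def helper_loop():
    # Squid's StoreID helper protocol: one URL per request line on stdin,
    # answered with "OK store-id=<id>" or "ERR" on stdout.
    for line in sys.stdin:
        parts = line.split()
        if not parts:
            continue
        sid = store_id(parts[0])
        sys.stdout.write('OK store-id=%s\n' % sid if sid else 'ERR\n')
        sys.stdout.flush()

if __name__ == '__main__':
    helper_loop()
```

Such a helper would be wired in via squid's `store_id_program` directive; squid_dedup itself handles the mirror lists and configuration for you.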