[squid-users] Getting the full file content on a range request, but not on EVERY get ...

Yuri Voinov yvoinov at gmail.com
Thu May 12 19:19:12 UTC 2016


And I did not promise a silver bullet :) This is just a small
workaround, which does not work in all cases. :)

On 13.05.16 1:17, Heiler Bemerguy wrote:
>
>
> I also don't care too much about duplicated cached files, but trying
> to cache "ranged" requests is saturating my link, and in the end it
> seems it's not caching anything lol
>
> EVEN if I only allow range_offset_limit for some URLs or file extensions....
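
For reference, a minimal squid.conf sketch of that approach (it needs a
Squid version where range_offset_limit accepts an ACL, i.e. 3.2 or later;
the ACL name fetch_full and the extension list are only examples, adjust
them to your traffic):

    # Fetch the entire object, not just the requested range, for URLs
    # matching this ACL; everything else keeps the default limit of 0.
    acl fetch_full url_regex -i \.(iso|exe|msi|rpm|deb)$
    range_offset_limit -1 fetch_full

    # Keep fetching even if the client disconnects, so the full object
    # gets cached instead of being discarded.
    quick_abort_min -1 KB

Note that range_offset_limit -1 makes the first ranged client wait until
the download reaches the requested offset, which is exactly why limiting
it to selected URLs matters.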
>
>
> Best Regards,
>
>
> --
> Heiler Bemerguy - (91) 98151-4894
> Assessor Técnico - CINBESA (91) 3184-1751
>
> On 12/05/2016 16:09, Yuri Voinov wrote:
> I suggest it is a very bad idea to turn a caching proxy into an archive
> for Linux distros or anything else.
>
> As Amos said, "Squid is a cache, not an archive".
>
>
> On 13.05.16 0:57, Hans-Peter Jansen wrote:
> >>> Hi Heiler,
> >>>
> >>> On Donnerstag, 12. Mai 2016 13:28:00 Heiler Bemerguy wrote:
> >>>> Hi Pete, thanks for replying... let me see if I got it right..
> >>>>
> >>>> Will I need to specify every URL/domain I want it to act on? I want
> >>>> squid to do it for every range-request download that should/would be
> >>>> cached (based on other rules, refresh_patterns, etc.)
> >>> Yup, that's right. At least, that's the common approach to dealing
> >>> with CDNs. I think that disallowing range requests is too drastic to
> >>> work well in the long run, but let us know if you get to a
> >>> satisfactory solution this way.
> >>>
> >>>> It doesn't need to delay any downloads as long as it isn't a dupe of
> >>>> what's already being downloaded.....
> >>> You can set the delay to zero, of course.
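
If the only concern is duplicate in-flight downloads, collapsed_forwarding
is the relevant squid.conf knob (available again since Squid 3.5); a
one-line sketch:

    # Merge concurrent requests for the same URL into a single upstream
    # fetch instead of opening one server connection per client.
    collapsed_forwarding on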
> >>>
> >>> This is only one side of the issues with CDNs. The other, more
> >>> problematic side is that many servers with different URLs provide the
> >>> same files. Every new address will result in a new download of
> >>> otherwise identical content.
> >>>
> >>> Here's an example of openSUSE:
> >>>
> >>> #
> >>> # this file was generated by gen_openSUSE_dedups
> >>> # from http://mirrors.opensuse.org/list/all.html
> >>> # with timestamp Thu, 12 May 2016 05:30:18 +0200
> >>> #
> >>> [openSUSE]
> >>> match:
> >>>     # openSUSE Headquarter
> >>>     http\:\/\/[a-z0-9]+\.opensuse\.org\/(.*)
> >>>     # South Africa (za)
> >>>     http\:\/\/ftp\.up\.ac\.za\/mirrors\/opensuse\/opensuse\/(.*)
> >>>     # Bangladesh (bd)
> >>>     http\:\/\/mirror\.dhakacom\.com\/opensuse\/(.*)
> >>>     http\:\/\/mirrors\.ispros\.com\.bd\/opensuse\/(.*)
> >>>     # China (cn)
> >>>     http\:\/\/mirror\.bjtu\.edu\.cn\/opensuse\/(.*)
> >>>     http\:\/\/fundawang\.lcuc\.org\.cn\/opensuse\/(.*)
> >>>     http\:\/\/mirrors\.tuna\.tsinghua\.edu\.cn\/opensuse\/(.*)
> >>>     http\:\/\/mirrors\.skyshe\.cn\/opensuse\/(.*)
> >>>     http\:\/\/mirrors\.hust\.edu\.cn\/opensuse\/(.*)
> >>>     http\:\/\/c\.mirrors\.lanunion\.org\/opensuse\/(.*)
> >>>     http\:\/\/mirrors\.hustunique\.com\/opensuse\/(.*)
> >>>     http\:\/\/mirrors\.sohu\.com\/opensuse\/(.*)
> >>>     http\:\/\/mirrors\.ustc\.edu\.cn\/opensuse\/(.*)
> >>>     # Hong Kong (hk)
> >>>     http\:\/\/mirror\.rackspace\.hk\/openSUSE\/(.*)
> >>>     # Indonesia (id)
> >>>     http\:\/\/mirror\.linux\.or\.id\/linux\/opensuse\/(.*)
> >>>     http\:\/\/buaya\.klas\.or\.id\/opensuse\/(.*)
> >>>     http\:\/\/kartolo\.sby\.datautama\.net\.id\/openSUSE\/(.*)
> >>>     http\:\/\/opensuse\.idrepo\.or\.id\/opensuse\/(.*)
> >>>     http\:\/\/mirror\.unej\.ac\.id\/opensuse\/(.*)
> >>>     http\:\/\/download\.opensuse\.or\.id\/(.*)
> >>>     http\:\/\/repo\.ugm\.ac\.id\/opensuse\/(.*)
> >>>     http\:\/\/dl2\.foss\-id\.web\.id\/opensuse\/(.*)
> >>>     # Israel (il)
> >>>     http\:\/\/mirror\.isoc\.org\.il\/pub\/opensuse\/(.*)
> >>>   
> >>>     [...] -> this list contains about 180 entries
> >>>
> >>> replace: http://download.opensuse.org.%(intdomain)s/\1
> >>> # fetch all redirected objects explicitly
> >>> fetch: true
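
Concretely, assuming %(intdomain)s expands to a locally configured
pseudo-domain such as squid.internal (the namespace Squid's StoreID
documentation suggests for rewritten keys), a request like

    http://mirrors.sohu.com/opensuse/distribution/leap/42.1/repo/oss/foo.rpm

would be keyed internally as

    http://download.opensuse.org.squid.internal/distribution/leap/42.1/repo/oss/foo.rpm

so all ~180 mirrors end up sharing a single cache entry per file.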
> >>>
> >>>
> >>> This is how CDNs work, but it's a nightmare for caching proxies.
> >>> In such scenarios, squid_dedup comes to the rescue.
> >>>
> >>> Cheers,
> >>> Pete
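
For completeness, a sketch of how a StoreID helper like squid_dedup is
typically hooked into squid.conf; the install path and child counts below
are assumptions, check the squid_dedup documentation for the exact
command line:

    # Rewrite mirror URLs to one canonical internal key, so identical
    # files fetched from different mirrors share a single cache entry.
    store_id_program /usr/local/bin/squid_dedup
    store_id_children 5 startup=1 idle=1 concurrency=0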

