<html>
  <head>
    <meta content="text/html; charset=utf-8" http-equiv="Content-Type">
  </head>
  <body bgcolor="#FFFFFF" text="#000000">
    <p><br>
    </p>
    <p>I also don't care too much about duplicated cached files, but
      trying to cache "ranged" requests is saturating my link, and in
      the end it seems it's not caching anything at all.</p>
    <p>Even if I only allow range_offset_limit for some URLs or file
      extensions...</p>
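    <p>For reference, a minimal sketch of that direction in squid.conf;
      the ACL name and domain are placeholders, not settings from this
      thread:</p>
    <pre wrap=""># Prefetch whole objects for ranged requests only on selected sites,
# so they become cacheable; elsewhere leave range requests untouched.
acl rangeok dstdomain .example.com
range_offset_limit none rangeok
range_offset_limit 0
# Keep fetching aborted transfers to completion, otherwise the
# prefetched objects get thrown away and re-downloaded repeatedly.
quick_abort_min -1 KB</pre>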
    <p><br>
    </p>
    <p>Best Regards,<br>
    </p>
    <p><br>
    </p>
    <pre class="moz-signature" cols="72">-- 
Heiler Bemerguy - (91) 98151-4894
Assessor Técnico - CINBESA (91) 3184-1751</pre>
    <br>
    <div class="moz-cite-prefix">On 12/05/2016 16:09, Yuri Voinov
      wrote:<br>
    </div>
    <blockquote
      cite="mid:bb977153-9b63-abcd-5e9d-452c10d6f539@gmail.com"
      type="cite">
      <pre wrap="">
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256
 
I suggest it is a very bad idea to turn a caching proxy into an archive
for Linux distros or anything else.

As Amos said, "Squid is a cache, not an archive".


13.05.16 0:57, Hans-Peter Jansen wrote:
</pre>
      <blockquote type="cite">
        <pre wrap="">Hi Heiler,

On Thursday, 12 May 2016 13:28:00, Heiler Bemerguy wrote:
</pre>
        <blockquote type="cite">
          <pre wrap="">Hi Pete, thanks for replying... let me see if I got it right.

Will I need to specify every URL/domain I want it to act on? I want
Squid to do it for every range-request download that should/would be
cached (based on other rules, refresh_patterns, etc.).
</pre>
        </blockquote>
        <pre wrap="">
Yup, that's right. At least, that's the common approach to dealing with CDNs.
I think that disallowing range requests is too drastic to work well in the
long run, but let us know if you get to a satisfactory solution this way.

</pre>
        <blockquote type="cite">
          <pre wrap="">It doesn't need to delay any downloads as long as it isn't a dupe of
what's already being downloaded.
</pre>
        </blockquote>
        <pre wrap="">
You can set the delay to zero, of course.

This is only one side of the issues with CDNs. The other, more problematic
side of it is that many servers with different URLs provide the same files.
</pre>
      </blockquote>
      <blockquote type="cite">
        <pre wrap="">Every new address will result in a new download of otherwise identical
content.

Here's an example of openSUSE:

#
# this file was generated by gen_openSUSE_dedups
# from <a class="moz-txt-link-freetext" href="http://mirrors.opensuse.org/list/all.html">http://mirrors.opensuse.org/list/all.html</a>
# with timestamp Thu, 12 May 2016 05:30:18 +0200
#
[openSUSE]
match:
    # openSUSE Headquarter
    http\:\/\/[a-z0-9]+\.opensuse\.org\/(.*)
    # South Africa (za)
    http\:\/\/ftp\.up\.ac\.za\/mirrors\/opensuse\/opensuse\/(.*)
    # Bangladesh (bd)
    http\:\/\/mirror\.dhakacom\.com\/opensuse\/(.*)
    http\:\/\/mirrors\.ispros\.com\.bd\/opensuse\/(.*)
    # China (cn)
    http\:\/\/mirror\.bjtu\.edu\.cn\/opensuse\/(.*)
    http\:\/\/fundawang\.lcuc\.org\.cn\/opensuse\/(.*)
    http\:\/\/mirrors\.tuna\.tsinghua\.edu\.cn\/opensuse\/(.*)
    http\:\/\/mirrors\.skyshe\.cn\/opensuse\/(.*)
    http\:\/\/mirrors\.hust\.edu\.cn\/opensuse\/(.*)
    http\:\/\/c\.mirrors\.lanunion\.org\/opensuse\/(.*)
    http\:\/\/mirrors\.hustunique\.com\/opensuse\/(.*)
    http\:\/\/mirrors\.sohu\.com\/opensuse\/(.*)
    http\:\/\/mirrors\.ustc\.edu\.cn\/opensuse\/(.*)
    # Hong Kong (hk)
    http\:\/\/mirror\.rackspace\.hk\/openSUSE\/(.*)
    # Indonesia (id)
    http\:\/\/mirror\.linux\.or\.id\/linux\/opensuse\/(.*)
    http\:\/\/buaya\.klas\.or\.id\/opensuse\/(.*)
    http\:\/\/kartolo\.sby\.datautama\.net\.id\/openSUSE\/(.*)
    http\:\/\/opensuse\.idrepo\.or\.id\/opensuse\/(.*)
    http\:\/\/mirror\.unej\.ac\.id\/opensuse\/(.*)
    http\:\/\/download\.opensuse\.or\.id\/(.*)
    http\:\/\/repo\.ugm\.ac\.id\/opensuse\/(.*)
    http\:\/\/dl2\.foss\-id\.web\.id\/opensuse\/(.*)
    # Israel (il)
    http\:\/\/mirror\.isoc\.org\.il\/pub\/opensuse\/(.*)
   
    [...] -> this list contains about 180 entries

replace: <a class="moz-txt-link-freetext" href="http://download.opensuse.org.%(intdomain)s/\1">http://download.opensuse.org.%(intdomain)s/\1</a>
# fetch all redirected objects explicitly
fetch: true


This is how CDNs work, but it's a nightmare for caching proxies.
In such scenarios, squid_dedup comes to the rescue.

Cheers,
Pete
_______________________________________________
squid-users mailing list
<a class="moz-txt-link-abbreviated" href="mailto:squid-users@lists.squid-cache.org">squid-users@lists.squid-cache.org</a>
<a class="moz-txt-link-freetext" href="http://lists.squid-cache.org/listinfo/squid-users">http://lists.squid-cache.org/listinfo/squid-users</a>
</pre>
      </blockquote>
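      <p>A related approach on the Squid side is a StoreID helper
      (store_id_program, available since Squid 3.4), which maps mirror
      URLs onto one canonical cache key so identical files are stored
      once. Below is a minimal sketch in Python; the mirror patterns
      follow the example above but are an illustrative subset, not a
      complete list:</p>

```python
#!/usr/bin/env python3
"""Sketch of a Squid StoreID helper that collapses openSUSE mirror URLs.

Wired up in squid.conf with something like (path is an assumption):
    store_id_program /usr/local/bin/storeid_opensuse.py
"""
import re
import sys

# Illustrative subset of the ~180 mirror patterns listed above.
MIRRORS = [
    re.compile(r'^http://[a-z0-9]+\.opensuse\.org/(.*)$'),
    re.compile(r'^http://mirrors\.tuna\.tsinghua\.edu\.cn/opensuse/(.*)$'),
    re.compile(r'^http://kartolo\.sby\.datautama\.net\.id/openSUSE/(.*)$'),
]
CANONICAL = 'http://download.opensuse.org/%s'


def store_id(url):
    """Return the canonical store ID for a known mirror URL, else None."""
    for pattern in MIRRORS:
        m = pattern.match(url)
        if m:
            return CANONICAL % m.group(1)
    return None


def main():
    # With the default helper concurrency of 0, Squid sends one request
    # per line ("URL extras...") and expects "OK store-id=..." or "ERR".
    for line in sys.stdin:
        fields = line.split()
        if not fields:
            continue
        sid = store_id(fields[0])
        if sid:
            print('OK store-id=%s' % sid, flush=True)
        else:
            print('ERR', flush=True)


if __name__ == '__main__':
    main()
```

      <p>Unlike a redirector such as squid_dedup, a StoreID helper only
      rewrites the cache key, so the client still fetches from the URL
      it asked for.</p>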
      <pre wrap="">
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v2
 
iQEcBAEBCAAGBQJXNNTzAAoJENNXIZxhPexG8XIIAKal+I1GMvTS9QDdJT6pxi7n
IL/d33/YUelZJ9ok1bLAiI1DNOJR6xwK6OZ+LefPOrxH1Q14quGJ5m873065jE+H
/1qhYs8rVVQ8qlLQyMI+aacEA9FV7j6OpWMteM+54SSjLlW4z0pJkw+vSsMwCnI5
Sy3qryieIImtmYnT1wbVM5Pop3lrLA/t1jza619ioxIxWa4M4bSO2EAR+Qj5HiUg
BT8ki8t1GIO12RatjqDwSouU+yDMK85amUKZBjRFXhyOxi1Cg+5uleI4C2lUjqM2
f1n3KBC7mlF6snAT74kc+JWLsNd2ohlkmJB8tSIhkxvkgmaWDpCpwaGaUmtkuXg=
=/fDD
-----END PGP SIGNATURE-----

</pre>
      <br>
    </blockquote>
    <br>
  </body>
</html>