[squid-users] squid refresh_pattern / cache question

Amos Jeffries squid3 at treenet.co.nz
Tue Aug 9 23:11:39 UTC 2016


On 10/08/2016 10:43 a.m., Berkes, David wrote:
> 
> I have a question about the caching mechanism and "refresh_pattern" specifically.  I had the following configured for my company.  Lately there have been complaints that people are seeing old pages and not the recent content... specifically when going to www.bbc.com.
> 
> I'm not actually sure what is preventing the bbc.com pages from getting updated, but at this point I just want to let squid do normal caching without any refresh_pattern or "fancy" settings.
> 
> My question is: will I still get the caching features/benefits of squid after I remove the "refresh_pattern" entries?

Yes, you will still get caching, just on a slightly different set of
objects than are being cached now.

The HTTP protocol defines an algorithm for heuristic caching, which is
based on the headers sent (or not sent) by the server for each object.
All that refresh_pattern does is provide default values to use when the
server did not send one header or the other. The ignore/override options
make Squid run that algorithm as if certain header values had not been
sent, even when they were.
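
For reference, the general form of the directive is:

  refresh_pattern [-i] regex min percent max [options]

where min and max are in minutes, and percent is applied to the object's
age (time since Last-Modified) when the server sent no explicit expiry
information.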


I suggest you try the following changes first though:

* remove the ignore-no-cache, ignore-no-store and ignore-private options
from your refresh_pattern lines.

* add the refresh_pattern rule for correctly handling dynamic content.
It should be placed second to last, just above the '.' pattern (see the
sketch after this list):

  refresh_pattern -i (/cgi-bin/|\?) 0 0% 0

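For illustration only, the tail of the refresh_pattern list would then
look much like the stock squid.conf defaults, with any site-specific
patterns of your own placed above them:

  refresh_pattern ^ftp:             1440   20%   10080
  refresh_pattern ^gopher:          1440    0%    1440
  refresh_pattern -i (/cgi-bin/|\?)    0    0%       0
  refresh_pattern .                    0   20%    4320

The exact ftp/gopher values are just the shipped defaults; the part that
matters is the dynamic-content pattern sitting immediately above the
catch-all '.' rule.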

That should make your Squid obey the HTTP protocol more accurately. If
the problem remains after those changes, then you need to take a closer
look at what exactly is going on with those problem URLs.


Amos


