[squid-users] Lots of "Vary object loop!"

Sebastián Goicochea sebag at vianetcon.com.ar
Wed Aug 26 16:11:19 UTC 2015

Hello Amos, thanks for your help. I've disabled our rewriter helper but 
the errors remain the same. So I think that's not the reason.
I've been reading some older mails from the list, and a guy named Hussam 
Al-Tayeb exchanged some interesting mails with you a couple of months 
ago .. That got me thinking: can I completely disable Vary checking? I 
know it is an HTTP violation and not recommended, but if I could disable 
it using an ACL for certain sites that are misconfigured, and where I am 
certain that the content is exactly the same no matter what .. I could 
get better performance. (It's ok if I have to patch something and 
recompile squid.)
If this is not possible, what about "bypassing" content that has Vary 
in its response header, so squid does not make these two lookups only 
to find that it has to retrieve it from the origin server anyway?
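Something along these lines is what I had in mind, purely as a sketch 
(the ACL name is my own, and I don't know if skipping the cache this 
way actually avoids the lookups):

```
# skip caching entirely for the origins that misbehave with Vary,
# so the two-step Vary lookup never has to happen for them
acl broken_vary dstdomain resources.mlstatic.com minicuotas.ribeiro.com.ar
cache deny broken_vary
```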


On 23/08/15 at 08:14, Amos Jeffries wrote:
> On 22/08/2015 4:20 a.m., Sebastian Goicochea wrote:
>> Hello everyone, I'm having a strange problem:
>> Several servers, same hardware, using same version of squid (3.5.4)
>> compiled using the same configure options, same configuration files. But
>> in two of them I get LOTS of these Vary object loop! lines in cache.log
>> 2015/08/21 13:07:52 kid1| varyEvaluateMatch: Oops. Not a Vary match on
>> second attempt,
>> 'http://resources.mlstatic.com/frontend/vip-fend-webserver/assets/bundles/photoswipe-6301b943e5586fe729e5d6480120a893.js'
>> 'accept-encoding="gzip"'
>> 2015/08/21 13:07:52 kid1| clientProcessHit: Vary object loop!
>> 2015/08/21 13:07:52 kid1| varyEvaluateMatch: Oops. Not a Vary match on
>> second attempt, 'http://www.google.com/afs/ads/i/iframe.html'
>> 'accept-encoding="gzip,%20deflate"'
>> 2015/08/21 13:07:52 kid1| clientProcessHit: Vary object loop!
>> 2015/08/21 13:08:01 kid1| varyEvaluateMatch: Oops. Not a Vary match on
>> second attempt,
>> 'http://minicuotas.ribeiro.com.ar/images/products/large/035039335000.jpg' 'accept-encoding="gzip,%20deflate"'
>> 2015/08/21 13:08:01 kid1| clientProcessHit: Vary object loop!
>> I've read what I could find on forums but could not solve it. Is this
>> something to worry about?
> The short answer:
> Yes and no. Squid is signalling that it is completely unable to perform
> its caching duty for these URLs. The proxying duty continues with only
> high latency visible to the client.
> It is up to you whether that latency cost is urgent or not. It is
> certainly important enough that you are told each time (with no
> rate limiting) when you have asked to receive important notices.
>> If that is not the case, how can I disable the
>> excessive logging?
> You can reduce your logging level to show only critical problems,
> instead of showing all details rated 'important'.
>    debug_options ALL,0
> NOTE: important (ALL,1) includes a lot of things like this that do
> really need to be fixed to get better service out of either your proxy
> or the underlying network. But they can be put on your todo list if you
> don't have time right now.
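> For example, if you later need detail from just one subsystem, you can
> raise a single debug section while keeping everything else quiet (the
> section numbers are listed in doc/debug-sections.txt in the Squid
> sources; 33 is the client-side code, if I recall correctly):

```
# only critical messages globally, but verbose client-side tracing
debug_options ALL,0 33,2
```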
>> Which is the condition that generates this?
> The long answer:
> The "what's happening" is:
> Your cache contains an object which was delivered by the server along
> with headers stating that behind the URL is a large set of possible
> responses. *All* requests for that URL use a certain set of headers
> (listed in Vary) to determine which binary-level object is applicable
> (or not) on a per-client / per-request basis.
>   In order to cache the object Squid has to follow that same selection
> criteria *exactly*.
> The most common example is gzip vs non-gzip encoded copies of things,
> which you can see these messages relate to.
> Squid stores this information in a "Vary object" associated with only
> the URL. That vary object is used to perform a secondary cache index
> lookup to see if the particular variant needed is stored.
> The expectation is that there would be 3+ objects stored for this URL; a
> gzip data object, various non-gzip data objects, and a metadata object
> ("Vary object") telling Squid that it needs to look at the
> accept-encoding header to find which of those data objects to send
> the client.
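> In code terms, the two-step lookup is roughly like this (a minimal
> Python sketch of the idea, not Squid's actual implementation):

```python
# Sketch of Squid's two-step Vary lookup: a primary lookup keyed by URL
# returns a metadata ("Vary") object naming which request headers select
# the variant; a secondary lookup keyed by URL plus those header values
# returns the cached response body. URLs and bodies below are made up.

primary = {}    # URL -> list of header names from the Vary response header
secondary = {}  # (URL, frozenset of selected header values) -> cached body

def variant_key(url, request_headers):
    """Build the secondary cache key from the headers listed in Vary."""
    selected = frozenset(
        (name, request_headers.get(name, "")) for name in primary[url]
    )
    return (url, selected)

def store(url, request_headers, vary_headers, body):
    """Cache one variant, recording the Vary header list for the URL."""
    primary[url] = vary_headers
    secondary[variant_key(url, request_headers)] = body

def lookup(url, request_headers):
    """Return the cached variant for this request, or None on a miss."""
    if url not in primary:
        return None  # primary (Vary object) miss
    return secondary.get(variant_key(url, request_headers))

# A gzip and a plain variant of the same URL are cached separately:
url = "http://example.com/app.js"
store(url, {"accept-encoding": "gzip"}, ["accept-encoding"], b"gzipped")
store(url, {}, ["accept-encoding"], b"plain")
```

> The loop in the messages is the case where the secondary lookup lands
> on another Vary metadata object instead of a data object.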
> The messages themselves mean:
> "Oops. Not a Vary match on second attempt"
>   - the Vary object saying "look at headers X+Y+Z" is pointing at
> itself, or at another Vary metadata object saying to look at some other
> headers. A URL cannot have two different Vary header values
> simultaneously (Vary is a single list "value").
> Something really weird is going on in your cache. Squid should handle
> this by abandoning the cache lookups and going to the origin for fresh
> copies.
> You could be causing it by using url-rewrite or store-id helpers wrongly
> to pass requests for a URL to servers which produce different responses.
> So that is well worth looking into.
> IMPORTANT: It is mandatory that any re-writing only be done to
> 'collapse' URLs that are *actually* producing identical objects and
> producing them in (outwardly) identical ways. This Vary looping is just
> the tip of an iceberg of truly horrible failures that occur "silently"
> with re-writing.
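> As an illustration, a safe store-id helper collapses only URLs that are
> known to serve byte-identical content. A minimal Python sketch (the
> mirror hostnames are made up; the reply format follows the Squid 3.4+
> helper protocol):

```python
import re
import sys

# Hypothetical set of CDN mirror hostnames that really do serve
# byte-identical objects; only such URLs may safely share one cache key.
MIRROR = re.compile(r'^http://cdn[0-9]+\.example\.com/')

def store_id(url):
    """Return the collapsed cache key for url, or None to leave it alone."""
    if MIRROR.match(url):
        return MIRROR.sub('http://cdn.example.com/', url)
    return None

def main():
    # Squid feeds one request per line: "<channel-id> <URL> [extras]"
    # and expects "<channel-id> OK store-id=<key>" or "<channel-id> ERR".
    for line in sys.stdin:
        channel, url = line.split()[:2]
        key = store_id(url)
        if key:
            sys.stdout.write('%s OK store-id=%s\n' % (channel, key))
        else:
            sys.stdout.write('%s ERR\n' % channel)
        sys.stdout.flush()

if __name__ == '__main__':
    main()
```

> Anything that collapses URLs whose responses can differ (even just in
> their Vary or ETag headers) invites exactly the looping you are seeing.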
> There is another similar message that can be mixed in the long list:
> "Oops. Not a Vary object on second attempt," (note the 1-word difference)
>   - this is almost but not quite so bad, and is usually seen with broken
> origin servers. All you can do about the problem itself then is fire off
> bug reports to people and hope it gets fixed by the sysadmin in charge.
> Both situations are very bad for HTTP performance, and bad for churning
> your cache as well. But Squid can cope easily enough by just fetching a
> new object and dropping what is in the cache. That "Vary object loop!"
> message is telling you Squid is doing exactly that.
> A quick test with the tool at redbot.org shows that the
> resources.mlstatic.com server is utterly borked. It is not even sending
> correct ETag ids for the objects it's outputting. That's a sign to me
> that the admin is trying to be smart with headers, and getting it very
> badly wrong.
> minicuotas.ribeiro.com.ar is also an Nginx server, but it is only
> showing the signs of normal default nginx brokenness
> (<http://trac.nginx.org/nginx/ticket/118>).
> The only thing you can do about that nginx bug is add pressure to get it
> fixed, or cut away all the Accept-Encoding headers on all requests sent
> to those servers. (request_header_access Accept-Encoding deny ...) Squid
> is already doing everything it reasonably can to correct the traffic
> output to your clients.
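> For example (the ACL name here is just illustrative, and if I remember
> right the header mangling directives need a Squid built with
> --enable-http-violations):

```
# strip Accept-Encoding on requests to the broken origin so only one
# (identity-encoded) variant ever exists in the cache
acl broken_nginx dstdomain minicuotas.ribeiro.com.ar
request_header_access Accept-Encoding deny broken_nginx
```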
> The google related messages ... Google are usually pretty good at this
> type of thing, and my tests do show their server to be working
> correctly. So that points me back at suspecting your config does
> something bad with a url-rewriter or store-id helper.
> Amos
> _______________________________________________
> squid-users mailing list
> squid-users at lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users

