[squid-users] Redirects error for only some Citrix sites

Laz C. Peterson laz at paravis.net
Sun Jul 19 16:22:55 UTC 2015


Wow, thank you Amos for that information.

I must read, research, digest, read again, and then attempt to figure out what the problem is. :-)

Will be in touch if there are any further issues.  Thank you.

~ Laz Peterson
Paravis, LLC

> On Jul 19, 2015, at 9:03 AM, Amos Jeffries <squid3 at treenet.co.nz> wrote:
> 
> On 18/07/2015 1:42 a.m., Laz C. Peterson wrote:
>> Hello all,
>> 
>> Very weird issue here.  This happens only to select Citrix support articles.  (For example, http://support.citrix.com/article/CTX122972 when searching Google for “citrix netscaler expired password”, which is the top link in my results, or when searching for the same article directly on the Citrix support site.)
>> 
>> This is a new install of Squid 3 on Ubuntu 14.04.2 (from the Ubuntu repository).  When clicking the Google link, I get a “too many redirects” error, saying that the page may refer to another page that is then redirected back to the original page.
>> 
>> I tried debugging but did not find much useful information.  Has anyone else seen behavior like this?
>> 
> 
> The problem is the client fetching URL X gets a 30x redirect message
> instructing it to contact URL X instead (X being the same URL it *was*
> fetching).
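> 
> You can usually see the loop directly by fetching the URL through the
> proxy and checking the Location header. Something like this (proxy
> host/port are whatever your Squid listens on; the response shown is
> what a self-redirect would look like):
> 
>   curl -sI -x http://squid.example.net:3128 http://support.citrix.com/article/CTX122972
> 
>   HTTP/1.1 302 Found
>   Location: http://support.citrix.com/article/CTX122972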
> 
> Usually that is a misconfiguration on the origin server itself, fixable
> only by the origin site authors. But there are also a few ways Squid can
> play a part:
> 
> 1) The 30x response pointing at itself really was (wrongly) generated by
> the server, and it explicitly stated that it should be cached [or you
> configured Squid to force-cache it].
> 
> Squid obeyed, and now you keep getting these loops. That will continue
> until the cached content expires or is purged.
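> 
> "Force-cache" here means refresh_pattern overrides along these lines
> (the pattern itself is only an illustration):
> 
>   # aggressive overrides like these can pin a broken 302 in the cache
>   refresh_pattern ^http://support\.citrix\.com/ 1440 100% 10080 override-expire ignore-reload
> 
> A stuck object can be evicted without waiting for expiry, provided your
> access rules permit the PURGE method:
> 
>   squidclient -m PURGE http://support.citrix.com/article/CTX122972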
> 
> 
> 2) You are using the Store-ID/Store-URL feature of Squid and did not check
> that the URLs being merged all produce identical output. One of them produces
> a 302 redirect to X, which got cached. So now a fetch for any URL in the
> merged set (including X itself) gets the cached 302 redirect back to X.
> Again, that will continue until the cached content expires or is purged.
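> 
> As a sketch (Squid 3.4+ syntax, domain names invented), a rule like
> this merges every numbered CDN mirror under one cache key:
> 
>   # squid.conf
>   store_id_program /usr/lib/squid/storeid_file_rewrite /etc/squid/storeid.db
>   store_id_children 5 startup=1
> 
>   # /etc/squid/storeid.db  (regex <TAB> replacement)
>   ^http://cdn[0-9]+\.example\.com/(.*)    http://cdn.squid.internal/$1
> 
> If even one of those mirrors answers with a 302 instead of the file,
> that 302 is what the whole merged set gets served from cache.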
> 
> 
> 3) You are using a URL redirector that is generating the 302 response
> loop. Usually that is a redirector with badly written (overly inclusive)
> regex patterns, causing behaviour similar to (2).
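> 
> A minimal (entirely hypothetical) redirector showing the mistake: the
> pattern also matches its own redirect target, so every follow-up
> request gets redirected again:
> 
>   #!/bin/sh
>   # run via url_rewrite_program; answer each URL with a redirect or a blank line
>   while read url rest; do
>     case "$url" in
>       *support*) echo "302:http://support.example.com/blocked.html" ;;
>       *)         echo "" ;;
>     esac
>   done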
> 
> 
> 4) You are using a URL re-writer that is taking client request URL Y and
> (wrongly) re-writing it to X. Squid fetches X from the backend server,
> which replies with a redirect to Y (because Y != X) ... and the loop repeats.
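> 
> Spelled out with hypothetical hostnames:
> 
>   client asks Squid for:   http://www.example.com/page        (Y)
>   re-writer hands Squid:   http://example.com/page            (X)
>   origin answers X with:   301 Location: http://www.example.com/page
>   client requests Y again  ... and the cycle repeats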
> 
> 
> 5) You could be directing traffic with a cache_peer on port 80
> regardless of the http:// or https:// scheme received from the clients.
> If the receiving peer/server emits a 302 to an https:// URL for all
> traffic arriving on its port 80, this sort of loop happens. It's a
> slightly more complicated form of (4), using cache_peer as the
> equivalent of a URL re-writer. The best fix for that is at the server;
> RFC 7230 section 5.5 gives the algorithm compliant servers should
> follow. Squid's job is to relay the request and URL unchanged.
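> 
> For illustration (address and names invented), this sends everything,
> https:// URLs included, to port 80:
> 
>   cache_peer 192.0.2.10 parent 80 0 no-query originserver name=web
> 
> If the server cannot be fixed, routing by URL scheme avoids the loop:
> 
>   acl tls_urls proto HTTPS
>   cache_peer 192.0.2.10 parent 443 0 no-query originserver ssl name=web_tls
>   cache_peer_access web_tls allow tls_urls
>   cache_peer_access web_tls deny all
>   cache_peer 192.0.2.10 parent 80 0 no-query originserver name=web_plain
>   cache_peer_access web_plain deny tls_urls
>   cache_peer_access web_plain allow all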
> 
> Amos
> 
