[squid-users] Squid doesn't reload webpage like other clients do

Amos Jeffries squid3 at treenet.co.nz
Tue Oct 31 08:51:36 UTC 2017


On 31/10/17 05:22, Alex Rousskov wrote:
> On 10/30/2017 03:51 AM, Troiano Alessio wrote:
> 
>> I've squid 3.5.20 running on RHEL 7.4. I have a problem accessing
>> some websites, for example www.nato.int. This website applies an
>> Anti-DDoS system that resets the first connection after the TCP 3-way
>> handshake (SYN/SYN-ACK/ACK/RST-ACK). All subsequent TCP connections
>> are accepted. The website administrator says it is by design.
> 
> 
>> When I browse the site through the squid proxy, the browser receives
>> an "Empty Response" squid error page (HTTP error code 502 Bad
>> Gateway) and doesn't do an automatic retry:
> 
> This is by design as well :-).
> 
> We can change Squid behavior to retry connection resets, but I am sure
> that some folks will not like the new behavior because in _their_ use
> cases a retry is wasteful and/or painful. IMHO, the new behavior should
> be controlled by a configuration directive, possibly an ACL-driven one.
> 
> Quality patches implementing the above feature should be welcomed IMO.
> The tip of the relevant code is probably in ERR_ZERO_SIZE_OBJECT
> handling inside FwdState::fail(). There is a similar code that handles
> persistent connection races there already, but the zero-size reply code
> may need a new dedicated FwdState flag to prevent infinite retry loops
> when the origin server is broken (a much more typical use case than the
> weird attempt at DDoS mitigation that you have described above).
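> 
> The retry-guard idea above can be sketched as follows. This is an
> illustrative stand-in, not Squid's actual FwdState code: the names
> RetryState and shouldRetryZeroReply are assumptions, but the logic
> mirrors the proposal, namely a dedicated flag so a zero-size reply is
> retried at most once and a broken origin cannot cause an infinite loop.
> 
> ```cpp
> // Hypothetical sketch of a one-shot retry guard for zero-size replies.
> // A real implementation would live inside Squid's FwdState and consult
> // its existing flags; here the state is reduced to the minimum needed
> // to show the pattern.
> struct RetryState {
>     bool zeroReplyRetried = false;   // set after the first retry attempt
> 
>     // Decide whether to retry a request whose reply was empty (TCP RST).
>     bool shouldRetryZeroReply(bool methodIsIdempotent) {
>         if (!methodIsIdempotent)
>             return false;            // RFC 7230 6.3.1: MUST NOT auto-retry
>         if (zeroReplyRetried)
>             return false;            // already retried once; give up
>         zeroReplyRetried = true;     // remember so we never loop
>         return true;
>     }
> };
> ```
> 
> The flag makes the retry policy "exactly one extra attempt", which
> covers the anti-DDoS case (second connection succeeds) without looping
> forever against a genuinely broken server.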
> 
> https://wiki.squid-cache.org/SquidFaq/AboutSquid#How_to_add_a_new_Squid_feature.2C_enhance.2C_of_fix_something.3F
> 
> 
> HTH,
> 
> Alex.
> 
> 
> 
>> [root at soc-pe-nagios01 ~]# wget www.nato.int -e use_proxy=yes -e http_proxy=172.31.1.67:8080
>> --2017-10-30 10:41:09--  http://www.nato.int/
>> Connecting to 172.31.1.67:8080... connected.
>> Proxy request sent, awaiting response... 502 Bad Gateway
>> 2017-10-30 10:41:09 ERROR 502: Bad Gateway.
>>
>> I can't find an RFC that confirms whether browsers and proxies should retry a page load, or whether squid has an option to do that.

FWIW: what Squid sees is that the TCP handshake is *successful*, then 
the HTTP request delivery gets a TCP RST. This is indistinguishable from 
the many broken IP/VPN tunnel, path-MTU, and NAT errors that occur all 
over the Internet on a regular basis.


That operational state (HTTP underway) also means RFC 7230 is the 
relevant place to look for behaviour requirements. Section 6.3.1 says:

Browser requirement:
"
   a client MAY open a
    new connection and automatically retransmit an aborted sequence of
    requests if all of those requests have idempotent methods (Section
    4.2.2 of [RFC7231]).
"

Squid requirement:
"
   A proxy MUST NOT automatically retry non-idempotent requests.
"

So it depends entirely on what type of HTTP request was being performed. 
If it was CONNECT, POST, or similar, then retry is forbidden.

Squid should already be retrying GET requests and similar. If it is 
not, that can be treated as a bug. Alex already pointed out the place 
in the code to look at for retry handling. The 'Method' class in Squid 
provides an isIdempotent() accessor that can be used to check whether 
the request is retriable or not.
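
The check that accessor performs can be approximated from RFC 7231 
section 4.2.2 alone. The function below is a self-contained sketch, not 
Squid's source; the name isIdempotentMethod and the string-based 
interface are assumptions for illustration.

```cpp
#include <string>

// RFC 7231 section 4.2.2: GET, HEAD, OPTIONS, and TRACE are safe (and
// therefore idempotent); PUT and DELETE are idempotent but not safe.
// Everything else (POST, CONNECT, PATCH, ...) must not be auto-retried
// by a proxy per RFC 7230 section 6.3.1.
bool isIdempotentMethod(const std::string &method) {
    return method == "GET"     || method == "HEAD" ||
           method == "OPTIONS" || method == "TRACE" ||
           method == "PUT"     || method == "DELETE";
}
```

With such a check, the 502 the original poster saw for a plain GET of 
http://www.nato.int/ is exactly the retriable case.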

Amos

