[squid-users] TCP_MISS_ABORTED/503 - -Squid-Error: ERR_DNS_FAIL 0

Amos Jeffries squid3 at treenet.co.nz
Wed Aug 21 11:41:40 UTC 2019


On 21/08/19 3:51 pm, L A Walsh wrote:
> Pulled this out of my log.  Downloading lots of files through squid has
> the download aborting after about 3k files.  This is the first I've seen
> that there's also an associated ERR_DNS_FAIL -- is that a message from
> squid's internal resolver?

Indirectly. It will be the result of a DNS failure on the domain in that
particular transaction's URL.

You mentioned the ABORTED was repeatable. If this DNS_FAIL was not
always present, what was?
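
One quick way to test that from the client side is a minimal Python
sketch like the one below. It uses the OS resolver, so it is not exactly
what Squid's internal DNS client does, but it will show whether lookups
for that hostname fail intermittently. The hostname and interval are
assumptions taken from your log entry, not anything Squid-specific.

    import socket
    import time

    HOST = "download.opensuse.org"   # from the quoted access.log entry

    for attempt in range(100):
        try:
            # Resolve and collect the distinct addresses returned
            addrs = {ai[4][0] for ai in socket.getaddrinfo(HOST, 80)}
            print(f"{time.strftime('%H:%M:%S')} ok: {sorted(addrs)}")
        except socket.gaierror as err:
            print(f"{time.strftime('%H:%M:%S')} FAIL: {err}")
        time.sleep(5)                # sample every few seconds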


> 
> 1566304848.234      1 192.168.3.1 TCP_MISS_ABORTED/503 4185 GET
> http://download.opensuse.org/tumbleweed/repo/src-oss/src/gdouros-aegean-fonts-9.78-1.6.src.rpm
> - HIER_NONE/- text/html [Referer:
> http://download.opensuse.org/tumbleweed/repo/src-oss/src/\r\nUser-Agent:
> openSuSE_Client\r\nAccept: */*\r\nAccept-Encoding:
> identity\r\nConnection: Keep-Alive\r\nProxy-Connection:
> Keep-Alive\r\nHost: download.opensuse.org\r\n] [HTTP/1.1 503 Service
> Unavailable\r\nServer: squid/4.0.25\r\nMime-Version: 1.0\r\nDate: Tue,

Please upgrade your Squid. That is a beta release. Squid-4 has been in
stable/production releases for over a year now.


> 20 Aug 2019 12:40:48 GMT\r\nContent-Type:
> text/html;charset=utf-8\r\nContent-Length: 4163\r\nX-Squid-Error:
> ERR_DNS_FAIL 0\r\nContent-Language: en\r\n\r]
> 
> One thing -- all the files were being downloaded in 1 copy of wget, so
> it was
> a long connection.

This makes me suspect the timing may be important. Check whether these
3k transactions all terminated the same amount of time after their TCP
connection was opened by the client. If the timings are identical or
very close, look for something that may be timing out (it could be
Squid, the TCP stack, or any router along the way).
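
As a starting point, the per-request elapsed time is already in the
log: it is the second field of the native access.log format, in
milliseconds. A minimal sketch, assuming the default logformat (note
this is service time per request, not time since the connection was
opened, but a cluster of aborts at one value is still a timeout
signature):

    import sys
    from collections import Counter

    # Collect the elapsed-ms field for every ABORTED transaction.
    elapsed = Counter()
    with open(sys.argv[1]) as log:    # e.g. /var/log/squid/access.log
        for line in log:
            fields = line.split()
            if len(fields) > 3 and "ABORTED" in fields[3]:
                elapsed[int(fields[1])] += 1

    # If the aborts cluster around one value, something is timing out.
    for ms, count in sorted(elapsed.items()):
        print(f"{ms:>8} ms  x{count}")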


> 
> If I don't go through the proxy, it does download, but it's hard to see
> why DNS would have a problem when only 4 hosts are accessed in the
> download:
> 

The hostname needs to be looked up on every request (HTTP being
stateless). The Squid internal resolver does cache names for the
DNS-provided TTL; when that expires they may need to be re-resolved,
and the lookup can hit a failure at that point.
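
You can check how aggressive those TTLs are yourself. Here is a minimal
sketch using the third-party dnspython package (an assumption on my
part; it is not something Squid ships) against two of the hostnames
from your list. A very short TTL means Squid re-resolves often, giving
more chances to hit a transient failure mid-run:

    import dns.resolver   # third-party: pip install dnspython

    # Hostnames taken from the counts you posted; adjust as needed.
    for host in ("download.opensuse.org",
                 "mirror.sfo12.us.leaseweb.net"):
        answer = dns.resolver.resolve(host, "A")
        print(f"{host}: TTL={answer.rrset.ttl}s "
              f"addrs={[r.address for r in answer]}")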


> 
>    1027 http://download.opensuse.org
>       2 http://ftp1.nluug.nl
>    2030 http://mirror.sfo12.us.leaseweb.net
>      14 http://mirrors.acm.wpi.edu
> 
> 3073...
> 
>     Looks like it's exactly 3k lookups and death on 3k+1
> 
> ring any bells?

There is nothing in Squid that is tied to a particular number of
transactions. An unlimited number of HTTP/1.x requests may be processed
on a single TCP connection.

It is more likely to be timing or bandwidth related: some network-level
issue. The error message "page content" Squid produces on that last
request each time would be useful info.

Amos

