[squid-users] Error Resolution (TunnelStateData::Connection:: error )
Iruma Keisuke
su.maji.ke at gmail.com
Fri Jun 12 05:39:25 UTC 2015
Thank you Amos.
I really appreciate your response.
We analyzed the trend for the FD (FD 81) on which the error occurred.
2015/06/01_08:52:35 nsu01pint-int01 [cache]2015/06/01 08:52:32| TunnelStateData::Connection:: error : FD 81: read/write failure: (110) Connection timed out
Active file descriptors:
File Type   Tout  Nread * Nwrite * Remote Address        Description
---- ------ ----- ------- -------- --------------------- ---------------------
81   Socket 86282  8100*    67977  XXX.XXXX.2.136:49907  Reading next request   Date: Sun, 31 May 2015 23:32:30 GMT
81   Socket 86302  8100*    16753  XXX.XXXX.2.136:49944  Reading next request   Date: Sun, 31 May 2015 23:37:30 GMT
81   Socket 86002  8100*   17395*  XXX.XXXX.2.136:49944  Reading next request   Date: Sun, 31 May 2015 23:42:30 GMT
81   Socket 85702  8100*   17395*  XXX.XXXX.2.136:49944  Reading next request   Date: Sun, 31 May 2015 23:47:30 GMT
81   Socket 85402  8100*   17395*  XXX.XXXX.2.136:49944  Reading next request   Date: Sun, 31 May 2015 23:52:30 GMT
81   Socket 86354 56810*    40697  XXX.XXXX.6.114:49687  Reading next request   Date: Sun, 31 May 2015 23:57:30 GMT
Did the error occur while writing?
All of the entries seem to have timed out while writing, and the time
until the timeout is between 10 and 15 minutes.
Are "Write_timeout" and "read_timeout" directive related to the error?
"write_timeout" is a directive that does not exist in the 3.1 version.
Though "write_timeout" can not be set, it works and cause a timeout in
15 minutes?
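
For reference, here is my current understanding of the timeout settings
involved, written as squid.conf lines (these are only the documented
defaults as far as I can tell; please correct me if this is wrong):

  # read_timeout: I believe the documented default is 15 minutes, which
  # matches the 10-15 minute interval we observe before the failure.
  read_timeout 15 minutes

  # write_timeout: as far as I can tell, this directive only exists in
  # newer Squid releases, so it cannot be set at all in 3.1.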
I think this is also related:
http://www.squid-cache.org/Doc/config/half_closed_clients/
> Squid can not tell the difference between a half-closed connection, and a fully closed one.
In version 3.1, "half_closed_clients" is off by default.
My guess is the following sequence of events (a sketch of how I would
test it follows below):
1. The client connection changes to half-closed.
2. Squid writes to the FD.
3. The client connection changes to fully closed.
4. Since Squid cannot detect that it is fully closed, it keeps trying to
write to the FD.
5. After 15 minutes, a "write_timeout" occurs in Squid.
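
If this guess is right, I think I could test it with squid.conf changes
along these lines (only a sketch of what I intend to try; the values are
arbitrary):

  # Keep half-closed client connections open instead of aborting them,
  # to see whether the TunnelStateData errors stop or change.
  half_closed_clients on

  # Shorten read_timeout to see whether the time-to-error shrinks with it,
  # which would confirm which timer is firing here.
  read_timeout 5 minutes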
Can I get your opinion?
2015-06-04 22:38 GMT+09:00, Amos Jeffries <squid3 at treenet.co.nz>:
> On 5/06/2015 1:18 a.m., Iruma Keisuke wrote:
>> Thank you Amos.
>>
>> 2015-06-02 23:07 GMT+09:00, Amos Jeffries <squid3 at treenet.co.nz>:
>>> On 2/06/2015 9:15 p.m., Irimajiri keisuke wrote:
>>>> Dear all,
>>>>
>>>> I have built a proxy server using Squid.
>>>> The number of clients is 400.
>>>>
>>>> I do not know the cause of the error message that appears in the
>>>> cache.log.
>>>> On weekdays, the error appears every few hours between 8:00 and 18:00.
>>>> It does not seem to be related to concentrated access.
>>>>
>>>> [cache.log error message]
>>>> 2015/05/11 13:37:24| TunnelStateData::Connection:: error : FD 610:
>>>> read/write failure: (110) Connection timed out
>>>>
>>>> I want to know whether, and why, this error has occurred.
>>>
>>> Yes it has occurred. You would not be seeing it otherwise.
>>>
>>>> Also, I want to know the impact on the user.
>>>
>>> The user who is causing the problem is probably not impacted at all.
>>> Every other user sharing the proxy is impacted by the reduction in
>>> available network socket, memory and CPU resources.
>>>
>> There seems to be no abnormality in the state of the network sockets,
>> memory, or CPU.
>> Is it safe to ignore this error?
>
> If you think your service is operating fine, yes. It does mean the proxy
> has a lower threshold of tolerance for network congestion than normal.
>
>
>>>>
>>>> [squidclient mgr:filedescriptor]
>>>> Recorded every five minutes, extracting FD 610.
>>>>
>>>> It looks like the error occurred on the connection used by the terminal
>>>> of the xxx.xxx.2.115 user.
>>>> Is it a problem in the communication between that user and the proxy?
>>>>
>>>
>>> Nothing happened on a TCP connection for a long time. It was closed by
>>> the networking sub-systems somewhere between Squid and the client.
>>>
>>
>> Does the error appear in the user's web browser?
>> Could you give detailed information about the TCP state and the state of
>> the user when the error occurred?
>
> It might be, or it might not be. Others before you who noticed the same
> messages found a mixed set of reasons for it. For some it was the Chrome
> browser's happy-eyeballs algorithm leaking its second connection until it
> got dropped by the network TCP stack. For some it was F5 load balancers
> breaking. For others it was NAT timeouts in users' home routers. Some have
> not bothered to dig deep, so there may be other causes as well.
>
> All that is certain is that something between Squid and user is closing
> the connection while it is in an HTTP idle state.
>
>>
>>>> Active file descriptors:
>>>> File Type   Tout  Nread * Nwrite * Remote Address       Description
>>>> ---- ------ ----- ------- -------- -------------------- ---------------------
>>>> 610  Socket   893 39494*    50228  xxx.xxx.xxx.162:443  outlook.office365.com:443  2015/05/11_13:08:29
>>>> 610  Socket 86329 45754*   103329  xxx.xxx.6.141:50174  Reading next request       2015/05/11_13:13:29
>>>> 610  Socket 86258  6516*    13975  xxx.xxx.2.115:50820  Reading next request       2015/05/11_13:18:29
>>>> 610  Socket 85958 12472*   34531*  xxx.xxx.2.115:50820  Reading next request       2015/05/11_13:23:29
>>>> 610  Socket 85657 12472*   34531*  xxx.xxx.2.115:50820  Reading next request       2015/05/11_13:28:29
>>>> 610  Socket 85357 12472*   34531*  xxx.xxx.2.115:50820  Reading next request       2015/05/11_13:33:29
>>>> 610  Socket 86336  3652*     8003  xxx.xxx.3.152:50817  Reading next request       2015/05/11_13:38:29
>>>>
>>>> [access.log]
>>>> I extracted the entries for the address xxx.xxx.2.115, but I do not see
>>>> any suspicious errors there.
>>>>
>>>> If anyone has a good idea for solving this, please let me know.
>>>
>>> Please provide additional details:
>>> * Squid version
>>> * Squid configuration
>>>
>>>
>>> I suspect you have a quite old version of Squid. That particular error
>>> message does not even exist in the code any more. The current releases
>>> display much more TCP detail about the connection where the error
>>> occurred.
>>
>> The Squid version is squid-3.1.10-29.
>> This is the latest version that Red Hat delivers.
>
> Ah, yes the RHEL policy of not updating unless explicit bug reports exist
> and supporting things for 10+ years.
>
> You may be interested in the official unofficial packages (accepted by
> the Squid Project and community as reasonable packages for use, but not
> officially supported by RHEL).
> <http://wiki.squid-cache.org/KnowledgeBase/RedHat>
>
>
>>
>> [squid.conf]
>> ------------------------------------
>> acl manager proto cache_object
>> acl localhost src 127.0.0.1/32 ::1
>> acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 ::1
>> acl localnet src 10.0.0.0/8 # RFC1918 possible internal network
>> acl localnet src 172.16.0.0/12 # RFC1918 possible internal network
>> acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
>> acl localnet src fc00::/7 # RFC 4193 local private network range
>> acl localnet src fe80::/10 # RFC 4291 link-local (directly
>> plugged) machines
>> acl SSL_ports port 443
>> acl Safe_ports port 80 # http
>> acl Safe_ports port 21 # ftp
>> acl Safe_ports port 443 # https
>> acl Safe_ports port 70 # gopher
>> acl Safe_ports port 210 # wais
>> acl Safe_ports port 1025-65535 # unregistered ports
>> acl Safe_ports port 280 # http-mgmt
>> acl Safe_ports port 488 # gss-http
>> acl Safe_ports port 591 # filemaker
>> acl Safe_ports port 777 # multiling http
>> acl CONNECT method CONNECT
>> http_access allow manager localhost
>> http_access deny !Safe_ports
>> http_access deny CONNECT !SSL_ports
>> http_access allow all
>
> !!!! Open proxy !!!
>
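
Regarding the open proxy warning: if I read the documentation correctly,
replacing "http_access allow all" with rules like the following should
close it (localnet and localhost are already defined above):

  http_access allow localnet
  http_access allow localhost
  http_access deny all
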
>
>> http_port 192.168.1.1:8080
>> hierarchy_stoplist cgi-bin ?
>> coredump_dir /var/spool/squid
>> refresh_pattern ^ftp: 1440 20% 10080
>> refresh_pattern ^gopher: 1440 0% 1440
>> refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
>> refresh_pattern . 0 20% 4320
>>
>> cache_mem 2048 MB
>> cache_store_log none
>> visible_hostname unknown
>> request_header_access X-FORWARDED-FOR deny all
>> request_header_access Via deny all
>> max_filedesc 10240
>> ipcache_size 10240
>> -----------------------------------------------
>>
>> Please let me ask a further question.
>> Does the following code have anything to do with the error?
>> http://www.squid-cache.org/Doc/code/tunnel_8cc_source.html
>>
>> 472 TunnelStateData::Connection::error(int const xerrno)
>> 473 {
>> 474 /* XXX fixme xstrerror and xerrno... */
>> 475 errno = xerrno;
>> 476
>> 477 debugs(50, debugLevelForError(xerrno), HERE << conn << ": read/write failure: " << xstrerror());
>> 478
>> 479 if (!ignoreErrno(xerrno))
>> 480 conn->close();
>> 481 }
>>
>> 536 /* Bump the dest connection read timeout on any activity */
>> 537 /* see Bug 3659: tunnels can be weird, with very long one-way transfers */
>> 538 if (Comm::IsConnOpen(to.conn)) {
>> 539 AsyncCall::Pointer timeoutCall = commCbCall(5, 4, "tunnelTimeout",
>> 540 CommTimeoutCbPtrFun(tunnelTimeout, this));
>> 541 commSetConnTimeout(to.conn, Config.Timeout.read, timeoutCall);
>> 542 }
>> 543
>>
>
> Possibly, but your version predates all of that code's existence.
>
> Amos
>
>