[squid-users] Excessive TCP memory usage
Deniz Eren
denizlist at denizeren.net
Wed May 25 10:18:27 UTC 2016
When I listened to the connections between squid and icap with tcpdump,
I saw that after a while icap closes the connection but squid never
closes its side, so the connection stays in the CLOSE_WAIT state:
[root@test ~]# tcpdump -i any -n port 34693
tcpdump: WARNING: Promiscuous mode not supported on the "any" device
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on any, link-type LINUX_SLL (Linux cooked), capture size 96 bytes
13:07:31.802238 IP 127.0.0.1.icap > 127.0.0.1.34693: F 2207817997:2207817997(0) ack 710772005 win 395 <nop,nop,timestamp 104616992 104016968>
13:07:31.842186 IP 127.0.0.1.34693 > 127.0.0.1.icap: . ack 1 win 3186 <nop,nop,timestamp 104617032 104616992>
[root@test ~]# netstat -tulnap|grep 34693
tcp   215688      0 127.0.0.1:34693        127.0.0.1:1344         CLOSE_WAIT  19740/(squid-1)
These CLOSE_WAIT connections never time out; they remain until the
squid process is killed.
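A quick way to see how many of these there are and which peers they
belong to (assuming the GNU netstat column layout shown above, where
column 5 is the foreign address):

  # count squid's CLOSE_WAIT sockets, grouped by peer address
  netstat -tnap 2>/dev/null | awk '/CLOSE_WAIT/ && /squid/ {split($5, a, ":"); print a[1]}' | sort | uniq -c | sort -rn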
2016-05-25 10:37 GMT+03:00 Deniz Eren <denizlist at denizeren.net>:
> 2016-05-24 21:47 GMT+03:00 Amos Jeffries <squid3 at treenet.co.nz>:
>> On 25/05/2016 5:50 a.m., Deniz Eren wrote:
>>> Hi,
>>>
>>> After upgrading to squid 3.5.16 I noticed that squid started using
>>> much of the kernel's TCP memory.
>>
>> Upgrade from which version?
>>
> Upgrading from squid 3.1.14. I started using c-icap and ssl-bump.
>
>>>
>>> After squid has been running for a long time, the TCP memory usage
>>> looks like this:
>>> test@test:~$ cat /proc/net/sockstat
>>> sockets: used *
>>> TCP: inuse * orphan * tw * alloc * mem 200000
>>> UDP: inuse * mem *
>>> UDPLITE: inuse *
>>> RAW: inuse *
>>> FRAG: inuse * memory *
>>>
>>> When I restart squid, the memory usage drops dramatically:
>>
>> Of course it does. By restarting you just erased all of the operational
>> state for an unknown but large number of active network connections.
>>
> That's true, but what I meant was that squid's CLOSE_WAIT connections
> are using too much memory and are not timing out.
>
>> Whether many of those should have been still active or not is a
>> different question, the answer to which depends on how you have your
>> Squid configured and what the traffic through it has been doing.
>>
>>
>>> test@test:~$ cat /proc/net/sockstat
>>> sockets: used *
>>> TCP: inuse * orphan * tw * alloc * mem 10
>>> UDP: inuse * mem *
>>> UDPLITE: inuse *
>>> RAW: inuse *
>>> FRAG: inuse * memory *
>>>
>>
>> The numbers you replaced with "*" are rather important for context.
>>
>>
> Today I saw the problem again:
>
> test@test:~$ cat /proc/net/sockstat
> sockets: used 1304
> TCP: inuse 876 orphan 81 tw 17 alloc 906 mem 29726
> UDP: inuse 17 mem 8
> UDPLITE: inuse 0
> RAW: inuse 1
> FRAG: inuse 0 memory 0
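> As far as I know the TCP "mem" field in /proc/net/sockstat is counted
> in pages, not bytes, so assuming the usual 4 KiB page size the 29726
> above is roughly 116 MiB pinned by TCP buffers:
>
>   # convert the sockstat TCP mem field (pages) to MiB, assuming 4 KiB pages
>   awk '/^TCP:/ {printf "%.0f MiB\n", $NF * 4096 / 1048576}' /proc/net/sockstat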
>
>>> I'm using Squid 3.5.16.
>>>
>>
>> Please upgrade to 3.5.19. Some important issues have been resolved. Some
>> of them may be related to your TCP memory problem.
>>
>>
> I have upgraded now and the problem still exists.
>
>>> When I look with "netstat" and "ss" I see lots of CLOSE_WAIT
>>> connections from squid to ICAP or from squid to upstream server.
>>>
>>> Do you have any idea about this problem?
>>
>> Memory use by the TCP system of your kernel has very little to do with
>> Squid. The number of sockets in CLOSE_WAIT does have some relation to
>> Squid, or at least to how the traffic going through it is handled.
>>
>> If you have disabled persistent connections in squid.conf then lots of
>> closed sockets and FDs are to be expected.
>>
>> If you have persistent connections enabled, then fewer closures should
>> happen. But some still will, so what to expect depends on how high the
>> traffic load is.
>>
> Persistent connection parameters are enabled in my config; the problem
> occurs especially with connections to the c-icap service.
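> For reference, the persistence-related part of my squid.conf looks
> roughly like this (an illustrative sketch, not my exact file):
>
>   # keep client, server and ICAP connections persistent
>   client_persistent_connections on
>   server_persistent_connections on
>   icap_persistent_connections on
>   # reap idle persistent connections after this timeout
>   pconn_timeout 1 minute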
>
> My netstat output looks like this:
> netstat -tulnap|grep squid|grep CLOSE
>
> tcp   211742      0 127.0.0.1:55751        127.0.0.1:1344         CLOSE_WAIT  17076/(squid-1)
> tcp   215700      0 127.0.0.1:55679        127.0.0.1:1344         CLOSE_WAIT  17076/(squid-1)
> tcp   215704      0 127.0.0.1:55683        127.0.0.1:1344         CLOSE_WAIT  17076/(squid-1)
> ...(hundreds)
> The above are connections to the c-icap service.
>
> netstat -tulnap|grep squid|grep CLOSE
> Active Internet connections (servers and established)
> Proto Recv-Q Send-Q Local Address          Foreign Address        State       PID/Program name
> tcp        1      0 192.168.2.1:8443       192.168.6.180:45182    CLOSE_WAIT  15245/(squid-1)
> tcp        1      0 192.168.2.1:8443       192.168.2.177:50020    CLOSE_WAIT  15245/(squid-1)
> tcp        1      0 192.168.2.1:8443       192.168.2.172:60028    CLOSE_WAIT  15245/(squid-1)
> tcp        1      0 192.168.2.1:8443       192.168.6.180:44049    CLOSE_WAIT  15245/(squid-1)
> tcp        1      0 192.168.2.1:8443       192.168.6.180:55054    CLOSE_WAIT  15245/(squid-1)
> tcp        1      0 192.168.2.1:8443       192.168.2.137:52177    CLOSE_WAIT  15245/(squid-1)
> tcp        1      0 192.168.2.1:8443       192.168.6.180:43542    CLOSE_WAIT  15245/(squid-1)
> tcp        1      0 192.168.2.1:8443       192.168.6.155:39489    CLOSE_WAIT  15245/(squid-1)
> tcp        1      0 192.168.2.1:8443       192.168.0.147:38939    CLOSE_WAIT  15245/(squid-1)
> tcp        1      0 192.168.2.1:8443       192.168.6.180:38754    CLOSE_WAIT  15245/(squid-1)
> tcp        1      0 192.168.2.1:8443       192.168.0.164:39602    CLOSE_WAIT  15245/(squid-1)
> tcp        1      0 192.168.2.1:8443       192.168.0.147:54114    CLOSE_WAIT  15245/(squid-1)
> tcp        1      0 192.168.2.1:8443       192.168.6.180:57857    CLOSE_WAIT  15245/(squid-1)
> tcp        1      0 192.168.2.1:8443       192.168.0.156:43482    CLOSE_WAIT  15245/(squid-1)
> ...(about 50)
> The above are connections from the https_port to clients.
>
> As you can see, the Recv-Q of the ICAP connections pins a couple of
> hundred kilobytes each, while the client connections on the https_port
> hold only a single byte.
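> To put a number on it, this sums the Recv-Q bytes sitting on all of
> squid's CLOSE_WAIT sockets (again assuming the netstat layout above,
> where column 2 is Recv-Q):
>
>   netstat -tnap 2>/dev/null | awk '/CLOSE_WAIT/ && /squid/ {sum += $2; n++}
>       END {printf "%d sockets, %d bytes queued\n", n, sum}'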
>
> What can be done to close these unused connections?
>
> The problem in this thread seems similar:
> http://www.squid-cache.org/mail-archive/squid-users/201301/0092.html
>
>> Amos
>>