[squid-users] Significant memory leak with version 5.x (not with 4.17)

Lukáš Loučanský loucansky.lukas at kjj.cz
Wed Dec 22 09:43:13 UTC 2021


Oops - I meant reqs per minute :-], and I'm doing ICAP to a clamav daemon
on localhost.

L

Dne 22.12.2021 v 10:36 Lukáš Loučanský napsal(a):
>
> ok - I don't want to tell the whole story of how I went from rock and
> workers back to non-SMP and aufs, but I want to mention that I am
> facing this kind of issue too. I don't have enough evidence or
> experience to call it a memory leak - but whatever I set for cache_mem,
> all the buffers, timeouts etc., and however much I reduce sysctl buffer
> bloat for TCP/UDP connections, my squid eats all my RAM and starts to
> swap - and then crashes from low memory (it could not start the storeid
> helpers etc.).
>
> After one day the resource usage looks like this:
>
> 15487 proxy     20   0 8478512   7.9g  17280 S   6.0  50.5 64:15.84 squid
>
> and rising. I have about 370 clients, an average of 480 requests (per
> minute - see my correction above) and about 1000 at peak. In this
> config cache_mem is 2GB (of 16GB; same behaviour with 256MB) and there
> are about 80GB in two aufs cache_dirs. I'm doing ssl "inspection" by
> looking at SNI server names and certificate fingerprints during the
> peek and splice steps (peek and terminate, to be precise); everything
> else is spliced. Could it be an SSL context memory bug like the one in
> 3.5.x?
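>
> For reference, the peek/splice part has roughly this shape (a sketch;
> the ACL names and file path here are made up, the real config is longer):
>
>     acl step1 at_step SslBump1
>     acl blocked_sni ssl::server_name "/etc/squid/blocked_sni.txt"
>     ssl_bump peek step1
>     ssl_bump terminate blocked_sni
>     ssl_bump splice all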
>
> Resource usage for squid:
>
> Maximum Resident Size: 33619392 KB
>
> Memory accounted for:
> Total accounted: 1421104 KB
> memPoolAlloc calls: 226605414
> memPoolFree calls: 229519316
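>
> The totals above come from the cache manager; a per-pool breakdown is
> available from the same interface if anyone wants to compare, e.g.:
>
>     squidclient mgr:info   # the summary quoted above
>     squidclient mgr:mem    # per-pool allocation details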
>
> Frankly, I don't have much Christmas energy left to go back to 4.x and
> investigate. I have already gone ahead to 6.x... no change.
>
> L
>
> Dne 22.12.2021 v 1:48 Praveen Ponakanti napsal(a):
>> Hi,
>>
>>
>> We are running the squid proxy to service outbound HTTP requests
>> from our network and have observed a significant memory leak with 5.x
>> versions. While there are several discussions about memory leaks in
>> recent versions, we just wanted to list what we have observed in case
>> this is an unknown leak.
>>
>>
>>   * The request rate through our squid proxy currently ranges from a
>>     daily low of 10 rps to a daily high of 125 rps.
>>   * About mid-way through the daily ramp in request rate, the memory
>>     usage of the squid proxy starts to increase by about 1-2.5 GB /
>>     day before leveling off until the next day's ramp. 4.17 does not
>>     exhibit this memory leak (or at least nothing close to this rate).
>>   * We are running 30 squid workers and the cache is set to deny all.
>>     Besides this, we have a single tcp_outgoing_address, some
>>     site-specific ACLs (both IP and domain ACLs), and a custom access
>>     log format with a UDP target (see the sketch after this list).
>>   * Both versions of squid run on similar hosts (64 cores each) and
>>     receive identical traffic patterns throughout the day. Versions
>>     5.3 and 5.1 leak at similar rates.
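>>
>> In squid.conf terms, that setup looks roughly like the following
>> sketch (ACL names, addresses and the log endpoint are placeholders,
>> not our real values):
>>
>>     workers 30
>>     cache deny all
>>     tcp_outgoing_address 192.0.2.10
>>     acl allowed_sites dstdomain .example.com
>>     http_access allow allowed_sites
>>     logformat custom %ts.%03tu %>a %rm %ru %>Hs
>>     access_log udp://203.0.113.5:514 logformat=custom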
>>
>>
>> We are unable to use version 5.x in our production environment, as we
>> will have a much higher rate of requests through the proxy later on.
>>
>>
>> Due to the nature of the memory leak, it appears that something in the
>> memory pool management has been broken in version 5.x.
>>
>>
>> I have attempted to build squid with --with-valgrind-debug and run it
>> in a test env. However, valgrind appears to collect some data from the
>> config-parsing functions, and then the squid proxy restarts; valgrind
>> no longer reports memory leak stats afterwards.
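>>
>> For reference, the valgrind attempt had roughly this shape (a sketch;
>> paths, extra configure options and valgrind flags here are examples):
>>
>>     ./configure --with-valgrind-debug [...our usual options...]
>>     make && make install
>>     valgrind --leak-check=full --trace-children=yes \
>>         /usr/local/squid/sbin/squid -N -f /etc/squid/squid.conf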
>>
>>
>> Snips from squid-internal-mgr/info are below. Note that the data does
>> not appear to account for all the memory used by the squid
>> process(es), which is also what the squid-exporter container reports.
>> Our node-level stats show that the 5.x squid is currently using more
>> than 28 GB, while the 4.17 version is under 7 GB. Both instances were
>> set up to take traffic about 10 days ago.
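>>
>> (These snips can be fetched from the cache manager, e.g.:
>>
>>     squidclient mgr:info
>>     # or over HTTP, assuming the default proxy port:
>>     curl http://localhost:3128/squid-internal-mgr/info
>>
>> our instances use the same report.)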
>>
>>
>> Thanks
>>
>> Praveen
>>
>>
>> Version 5.3
>>
>> ——————
>>
>>
>> Memory accounted for:
>>
>> Total accounted: 1215892 KB
>>
>> memPoolAlloc calls: 7529400915
>>
>> memPoolFree calls: 7624989161
>>
>> File descriptor usage for squid:
>>
>> Maximum number of file descriptors: 31457280
>>
>> Largest file desc currently in use: 24962
>>
>> Number of file desc currently in use: 4486
>>
>> Files queued for open: 0
>>
>> Available number of file descriptors: 31452794
>>
>> Reserved number of file descriptors: 3000
>>
>> Store Disk files open: 0
>>
>>
>> Version 4.17
>>
>> ——————
>>
>>
>> Memory accounted for:
>>
>> Total accounted: 76878 KB
>>
>> memPoolAlloc calls: 6217787434
>>
>> memPoolFree calls: 6301483686
>>
>> File descriptor usage for squid:
>>
>> Maximum number of file descriptors: 31457280
>>
>> Largest file desc currently in use: 20184
>>
>> Number of file desc currently in use: 4419
>>
>> Files queued for open: 0
>>
>> Available number of file descriptors: 31452861
>>
>> Reserved number of file descriptors: 3000
>>
>> Store Disk files open: 0
>>
>>
>> _______________________________________________
>> squid-users mailing list
>> squid-users at lists.squid-cache.org
>> http://lists.squid-cache.org/listinfo/squid-users
>
