<div dir="ltr">Hi Alex,<div><br></div><div>We are fine with version 4.17 as of now. I can try out the fix sometime next week if you need further data. Is there a build with the fix, or do you have some recommended steps to manually pull the source, patch the fix and then recompile?</div><div><br></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Thu, Jan 6, 2022 at 8:16 PM Alex Rousskov <<a href="mailto:rousskov@measurement-factory.com">rousskov@measurement-factory.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">On 1/6/22 2:50 AM, Praveen Ponakanti wrote:<br>
> Hi Alex/Amos,<br>
> <br>
> Do you still need memory logs from version 5.3 after stopping traffic<br>
> through the squid?<br>
<br>
I cannot answer for Amos, who asked for those logs, but you may want to<br>
try a fix posted at<br>
<a href="https://bugs.squid-cache.org/show_bug.cgi?id=5132#c27" rel="noreferrer" target="_blank">https://bugs.squid-cache.org/show_bug.cgi?id=5132#c27</a><br>
<br>
<br>
HTH,<br>
<br>
Alex.<br>
<br>
<br>
> We disabled traffic to the 5.3 squid about 6 hours ago and have not<br>
> seen any memory freed up since. This node has used ~50G more memory<br>
> than the 4.17 squid taking similar traffic over the last 3+ weeks. I<br>
> am collecting hourly memory logs on 5.3 after stopping traffic. Let<br>
> me know and I can attach the log tomorrow morning.<br>
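> <br>
> The hourly collection is just a loop around the cache manager's memory<br>
> report - roughly the sketch below, assuming squidclient can reach the<br>
> cache manager on the default port:<br>
> <br>
>   while true; do<br>
>     date >> /var/log/squid-mem.log<br>
>     squidclient -h 127.0.0.1 -p 3128 mgr:mem >> /var/log/squid-mem.log<br>
>     sleep 3600<br>
>   done<br>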
> <br>
> Thanks<br>
> Praveen<br>
> <br>
> On Mon, Dec 27, 2021 at 4:58 PM Praveen Ponakanti <<a href="mailto:pponakanti@roblox.com" target="_blank">pponakanti@roblox.com</a>> wrote:<br>
> <br>
> I can't make any changes to our prod squids this week. I have a squid<br>
> instance (v5.3) in a test env but could not reproduce the leak by<br>
> starting & stopping traffic with a bulk HTTP request generator (wrk).<br>
> I was able to send 175k rps @ 20k concurrent sessions (each doing a<br>
> GET on a 1 KB object) through the 30-worker squid. This initially<br>
> caused a 3G increase in memory usage, which then flattened out after<br>
> stopping the requests. If I restart the bulk requests, the memory<br>
> usage only goes up ~0.5GB and then drops back down. Live traffic is<br>
> probably exercising a different code path within squid's memory pools.<br>
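> <br>
> For reference, the bulk requests were generated along these lines - a<br>
> sketch, where the host, port and object path are placeholders, and the<br>
> squid is assumed to answer plain GETs on that URL (e.g. accel mode):<br>
> <br>
>   # ~20k concurrent connections fetching a 1 KB object for 10 minutes<br>
>   wrk -t 64 -c 20000 -d 10m http://squid-test-host:3128/1kb-object<br>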
> <br>
> On Mon, Dec 27, 2021 at 2:26 AM Lukáš Loučanský<br>
> <<a href="mailto:loucansky.lukas@kjj.cz" target="_blank">loucansky.lukas@kjj.cz</a> <mailto:<a href="mailto:loucansky.lukas@kjj.cz" target="_blank">loucansky.lukas@kjj.cz</a>>> wrote:<br>
> <br>
> After one day of running without clients, my squid's memory is stable:<br>
> <br>
>   PID   USER  PR NI VIRT   RES    SHR   S %CPU %MEM TIME+   COMMAND<br>
>   29345 proxy 20 0  171348 122360 14732 S 0.0  0.7  0:25.96 (squid-1) --kid squid-1 -YC -f /etc/squid5/squid.conf<br>
>   29343 root  20 0  133712 79264  9284  S 0.0  0.5  0:00.00 /usr/sbin/squid -YC -f /etc/squid5/squid.conf<br>
> <br>
>   Storage Mem size:               3944 KB<br>
>   Storage Mem capacity:           0.2% used, 99.8% free<br>
>   Maximum Resident Size:          489440 KB<br>
>   Page faults with physical i/o:  0<br>
>   Memory accounted for:<br>
>     Total accounted:              15741 KB<br>
>   memPoolAlloc calls:             1061495<br>
>   memPoolFree calls:              1071691<br>
>   Total allocated:                15741 kB<br>
> <br>
> So this does not seem to be the problem... L<br>
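> <br>
> (Those figures come from the cache manager, e.g. something like the<br>
> following, assuming default manager access on the local instance:<br>
> <br>
>   squidclient mgr:info | grep -E 'Storage Mem|Resident|accounted|memPool'<br>
> )<br>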
> <br>
> On 26.12.2021 at 10:02, Lukáš Loučanský wrote:<br>
>> OK - it seems my squid quacked on low memory again today:<br>
>><br>
>> Dec 26 00:04:25 gw (squid-1): FATAL: Too many queued store_id<br>
>> requests; see on-persistent-overload.#012 current master<br>
>> transaction: master4629331<br>
>> Dec 26 00:04:28 gw squid[15485]: Squid Parent: squid-1 process<br>
>> 15487 exited with status 1<br>
>> Dec 26 00:04:28 gw squid[15485]: Squid Parent: (squid-1)<br>
>> process 28375 started<br>
>><br>
>> 2021/12/26 00:01:20 kid1| helperOpenServers: Starting 5/64<br>
>> 'storeid_file_rewrite' processes<br>
>> 2021/12/26 00:01:20 kid1| ipcCreate: fork: (12) Cannot<br>
>> allocate memory<br>
>> 2021/12/26 00:01:20 kid1| WARNING: Cannot run<br>
>> '/lib/squid5/storeid_file_rewrite' process.<br>
>> 2021/12/26 00:01:20 kid1| ipcCreate: fork: (12) Cannot<br>
>> allocate memory<br>
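>><br>
>> (That FATAL is the on-persistent-overload action for the overflowing<br>
>> store_id helper queue; if killing the worker is not wanted, the helper<br>
>> can be told to answer ERR instead - a squid.conf sketch, with the<br>
>> child counts matching the 5/64 in the log above:<br>
>><br>
>>   store_id_children 64 startup=5 on-persistent-overload=ERR<br>
>> )<br>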
>><br>
>> I'm going to reroute my clients (who are on their days off anyway)<br>
>> to direct connections and run it "dry" - on its own. But I'm not<br>
>> able to test it before "lack of memory issues occur", because my<br>
>> clients are offline. So I'll watch squid's own memory consumption.<br>
>> It's all I can do right now - my squid already restarted and its<br>
>> memory has been freed, so I think just now I have no power to fill<br>
>> it up again :-]<br>
>><br>
>> L<br>
>><br>
>> On 26.12.2021 at 7:41, Amos Jeffries wrote:<br>
>>><br>
>>> If possible, can one of you run a Squid to reproduce this behaviour,<br>
>>> then stop new clients connecting to it before the lack-of-memory<br>
>>> issues occur, and see if the memory usage disappears or reduces<br>
>>> after a 24-48 hr wait.<br>
>>><br>
>>> A series of regular mempools report dumps from across the test may<br>
>>> help Alex, or whoever works on the bug, further rule out which<br>
>>> cache- and client-related things are releasing properly.<br>
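>>><br>
>>> A cron entry along these lines would capture such a series (a sketch;<br>
>>> the squidclient path and log location are placeholders):<br>
>>><br>
>>>   0 * * * * /usr/bin/squidclient mgr:mem >> /var/log/squid-mempools.log 2>&1<br>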
>>><br>
>>><br>
>>> Amos<br>
>>><br>
>><br>
> <br>
> <<a href="https://www.avast.com/sig-email?utm_medium=email&utm_source=link&utm_campaign=sig-email&utm_content=emailclient" rel="noreferrer" target="_blank">https://www.avast.com/sig-email?utm_medium=email&utm_source=link&utm_campaign=sig-email&utm_content=emailclient</a>><br>
> Bez virů. <a href="http://www.avast.com" rel="noreferrer" target="_blank">www.avast.com</a><br>
> <<a href="https://www.avast.com/sig-email?utm_medium=email&utm_source=link&utm_campaign=sig-email&utm_content=emailclient" rel="noreferrer" target="_blank">https://www.avast.com/sig-email?utm_medium=email&utm_source=link&utm_campaign=sig-email&utm_content=emailclient</a>><br>
> <br>
> <br>
> <#m_-6622557068709516458_m_9217020348889694418_DAB4FAD8-2DD7-40BB-A1B8-4E2AA1F9FDF2><br>
> <br>
> <br>
> _______________________________________________<br>
> squid-users mailing list<br>
> <a href="mailto:squid-users@lists.squid-cache.org" target="_blank">squid-users@lists.squid-cache.org</a><br>
> <a href="http://lists.squid-cache.org/listinfo/squid-users" rel="noreferrer" target="_blank">http://lists.squid-cache.org/listinfo/squid-users</a><br>
> <br>
<br>
</blockquote></div>