<div dir="ltr">Thanks Alex & Amos. Will try the patch in the next week or 2.</div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Fri, Jan 7, 2022 at 8:54 AM Alex Rousskov <<a href="mailto:rousskov@measurement-factory.com">rousskov@measurement-factory.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">On 1/7/22 12:12 AM, Praveen Ponakanti wrote:<br>
<br>
> Is there a build with the fix, or do you have some recommended steps<br>
> to manually pull the source, patch the fix and then recompile?<br>
<br>
Yes, applying the patch to official Squid sources and then bootstrapping<br>
and building from patched sources is what folks using patches usually<br>
have to do. Roughly speaking:<br>
<br>
cd squid-x.y.z/<br>
patch -p1 < ...       # apply the posted fix to the official sources<br>
./bootstrap.sh        # regenerates ./configure; needs autoconf/automake/libtool<br>
./configure           # plus any custom options you normally use<br>
make<br>
make check            # optional self-tests<br>
sudo make install<br>
<br>
The above steps usually require installing a few build-related tools and<br>
passing your usual ./configure options (e.g., --prefix and<br>
--with-openssl). A capable sysadmin should be able to get this done in<br>
most cases. It is possible to skip the bootstrapping step if the patch<br>
does not modify bootstrapping-sensitive files and you can get a<br>
bootstrapped version of the sources from <a href="http://www.squid-cache.org/Versions/" rel="noreferrer" target="_blank">http://www.squid-cache.org/Versions/</a><br>
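<br>
For example, on a Debian/Ubuntu-style host the preparation might look<br>
roughly like this (the package names and option values below are<br>
illustrative, not authoritative; match them to your distribution and to<br>
how your current Squid was built):<br>
<br>
# illustrative build prerequisites; exact package names vary by distro<br>
sudo apt-get install build-essential autoconf automake libtool libssl-dev<br>
# example custom options; use whatever your existing build was given<br>
./configure --prefix=/usr/local/squid --with-openssl<br>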
<br>
I am not aware of anybody providing ready-to-use builds that include the<br>
proposed bug 5132 fix.<br>
<br>
<br>
HTH,<br>
<br>
Alex.<br>
<br>
<br>
> On Thu, Jan 6, 2022 at 8:16 PM Alex Rousskov wrote:<br>
> <br>
> On 1/6/22 2:50 AM, Praveen Ponakanti wrote:<br>
> > Hi Alex/Amos,<br>
> ><br>
> > Do you still need memory logs from version 5.3 after stopping traffic<br>
> > through the squid?<br>
> <br>
> I cannot answer for Amos who asked for those logs, but you may want to<br>
> try a fix posted at<br>
> <a href="https://bugs.squid-cache.org/show_bug.cgi?id=5132#c27" rel="noreferrer" target="_blank">https://bugs.squid-cache.org/show_bug.cgi?id=5132#c27</a><br>
> <br>
> <br>
> HTH,<br>
> <br>
> Alex.<br>
> <br>
> <br>
> > We disabled traffic to the 5.3 squid about 6 hours ago and have not<br>
> > seen any memory freed since. This node has used ~50G more memory<br>
> > than a 4.17 squid taking similar traffic over the last 3+ weeks. I<br>
> > am collecting hourly memory logs on 5.3 after stopping traffic; let<br>
> > me know and I can attach them tomorrow morning.<br>
> ><br>
> > Thanks<br>
> > Praveen<br>
> ><br>
> > On Mon, Dec 27, 2021 at 4:58 PM Praveen Ponakanti<br>
> > <<a href="mailto:pponakanti@roblox.com" target="_blank">pponakanti@roblox.com</a>> wrote:<br>
> ><br>
> > I can't make any changes to our prod squids this week. I have a<br>
> > squid instance (5.3) in a test env but could not reproduce the leak<br>
> > by starting & stopping traffic with a bulk HTTP request generator<br>
> > (wrk). I was able to send 175k rps @ 20k concurrent sessions (each<br>
> > doing a GET on a 1KB object) through the 30-worker squid. This<br>
> > initially caused a 3G increase in memory usage, which then flattened<br>
> > out after stopping the requests. If I restart the bulk requests, the<br>
> > memory usage only goes up ~0.5GB and then drops back down. Live<br>
> > traffic is probably exercising a different code path within squid's<br>
> > memory pools.<br>
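> ><br>
> > For reference, the invocation was along these lines (a sketch from<br>
> > memory - the thread count, duration, and object URL are made up, and<br>
> > it assumes a setup where wrk can hit the proxy directly, since wrk<br>
> > has no native forward-proxy mode):<br>
> ><br>
> > # ~20k concurrent connections over 40 threads fetching a 1KB object<br>
> > wrk -t 40 -c 20000 -d 300s http://squid-host:3128/1kb-object<br>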
> ><br>
> > On Mon, Dec 27, 2021 at 2:26 AM Lukáš Loučanský<br>
> > <<a href="mailto:loucansky.lukas@kjj.cz" target="_blank">loucansky.lukas@kjj.cz</a>> wrote:<br>
> ><br>
> > After one day of running without clients, my squid memory is stable:<br>
> ><br>
> > 29345 proxy  20  0  171348 122360  14732 S  0.0  0.7  0:25.96 (squid-1) --kid squid-1 -YC -f /etc/squid5/squid.conf<br>
> > 29343 root   20  0  133712  79264   9284 S  0.0  0.5  0:00.00 /usr/sbin/squid -YC -f /etc/squid5/squid.conf<br>
> ><br>
> > Storage Mem size: 3944 KB<br>
> > Storage Mem capacity: 0.2% used, 99.8% free<br>
> > Maximum Resident Size: 489440 KB<br>
> > Page faults with physical i/o: 0<br>
> > Memory accounted for:<br>
> >   Total accounted: 15741 KB<br>
> >   memPoolAlloc calls: 1061495<br>
> >   memPoolFree calls: 1071691<br>
> > Total allocated: 15741 kB<br>
> ><br>
> > So this does not seem to be the problem... L<br>
> ><br>
> > On 26.12.2021 at 10:02, Lukáš Loučanský wrote:<br>
> >> ok - as it seems my squid quacked on low memory again today -<br>
> >><br>
> >> Dec 26 00:04:25 gw (squid-1): FATAL: Too many queued store_id<br>
> >> requests; see on-persistent-overload.<br>
> >>     current master transaction: master4629331<br>
> >> Dec 26 00:04:28 gw squid[15485]: Squid Parent: squid-1 process<br>
> >> 15487 exited with status 1<br>
> >> Dec 26 00:04:28 gw squid[15485]: Squid Parent: (squid-1) process<br>
> >> 28375 started<br>
> >><br>
> >> 2021/12/26 00:01:20 kid1| helperOpenServers: Starting 5/64 'storeid_file_rewrite' processes<br>
> >> 2021/12/26 00:01:20 kid1| ipcCreate: fork: (12) Cannot allocate memory<br>
> >> 2021/12/26 00:01:20 kid1| WARNING: Cannot run '/lib/squid5/storeid_file_rewrite' process.<br>
> >> 2021/12/26 00:01:20 kid1| ipcCreate: fork: (12) Cannot allocate memory<br>
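> >><br>
> >> (Side note: that FATAL is the default on-persistent-overload action,<br>
> >> which is to quit. If I wanted Squid to keep running and serve errors<br>
> >> while the store_id helpers are overloaded, the helper directive<br>
> >> accepts something like the line below - I have not verified the<br>
> >> exact action name, so check the store_id_children documentation:)<br>
> >><br>
> >> # squid.conf sketch; the 64 children / startup=5 match the log above<br>
> >> store_id_children 64 startup=5 idle=1 on-persistent-overload=err<br>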
> >><br>
> >> I'm going to reroute my clients (which are on their days off<br>
> >> anyway) to direct connections and run it "dry", on its own. But<br>
> >> I'm not able to test it before "lack of memory issues occur",<br>
> >> because my clients are offline. So I'll watch squid's own memory<br>
> >> consumption. It's all I can do right now - my squid has already<br>
> >> restarted and its memory has been freed - so I think just now I<br>
> >> have no power to fill it up again :-]<br>
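> >><br>
> >> (For the record, I'm capturing the processes' resident size with a<br>
> >> trivial loop along these lines - the interval and fields are<br>
> >> arbitrary:)<br>
> >><br>
> >> # log squid processes' RSS every 10 minutes<br>
> >> while true; do date; ps -o pid=,rss=,args= -p "$(pgrep -d, squid)"; sleep 600; done >> squid-rss.log<br>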
> >><br>
> >> L<br>
> >><br>
> >> On 26.12.2021 at 7:41, Amos Jeffries wrote:<br>
> >>><br>
> >>> If possible, can one of you run a Squid to get this behaviour,<br>
> >>> then stop new clients connecting to it before the lack-of-memory<br>
> >>> issues occur, and see if the memory usage disappears or reduces<br>
> >>> after a 24-48hr wait.<br>
> >>><br>
> >>> A series of regular mempools report dumps from across the test may<br>
> >>> help Alex, or whoever works on the bug, to further rule out the<br>
> >>> cache- and client-related things that are releasing memory<br>
> >>> properly.<br>
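> >>><br>
> >>> Something as simple as this would do (mgr:mem is the memory pools<br>
> >>> report; the interval and file naming are only suggestions, and<br>
> >>> squidclient may need -h/-p or a cache manager password in your<br>
> >>> setup):<br>
> >>><br>
> >>> # capture a memory pools report every hour for later comparison<br>
> >>> while true; do squidclient mgr:mem > "mem-$(date +%Y%m%d-%H%M).log"; sleep 3600; done<br>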
> >>><br>
> >>><br>
> >>> Amos<br>
> >>><br>
> >><br>
> ><br>
> ><br>
> ><br>
> > _______________________________________________<br>
> > squid-users mailing list<br>
> > <a href="mailto:squid-users@lists.squid-cache.org" target="_blank">squid-users@lists.squid-cache.org</a><br>
> > <a href="http://lists.squid-cache.org/listinfo/squid-users" rel="noreferrer" target="_blank">http://lists.squid-cache.org/listinfo/squid-users</a><br>
> ><br>
> <br>
<br>
</blockquote></div>