On an x86/64-bit Ubuntu machine, if I set 'workers 4' and run:

  squid --foreground -f /etc/squid.conf 2>&1 | grep mlock
    mlock(0x7f2e5bfb2000, 8) = 0
    mlock(0x7f2e5bf9f000, 73912) = -1 ENOMEM (Cannot allocate memory)

  squid -N -f /etc/squid.conf 2>&1 | grep mlock
    mlock(0x7f8e4b7c0000, 8) = 0
    mlock(0x7f8e4b7ad000, 73912) = -1 ENOMEM (Cannot allocate memory)

Note 1: -N and --foreground make no difference as long as 'workers 4' is set. I was expecting -N to ignore 'workers 4' -- does it?

If I instead set 'workers 2' and run the same two commands, I get the output below (identical for both), and Squid starts successfully:

    mlock(0x7f0c441cc000, 8) = 0
    mlock(0x7f0c441c3000, 32852) = 0
    mlock(0x7f0c441c2000, 52) = 0

Note that as long as 'workers <= 2' I can run Squid as expected and it mlocks its memory. I have more than 4GB of RAM free (this is an 8GB RAM laptop with an Intel i7), so the mlock failure is strange.

On my target system, which has 512MB of RAM, even 'workers 0' does not help; I still get:

    mlock(0x778de000, 2101212) = -1 ENOMEM (Out of memory)

I have had to disable shared_memory_locking for now, and it puzzles me why even the very first ~2MB mlock can fail. I ran the commands under strace and grepped for shmget and shmat but found nothing; instead there are lots of mmap calls, so Squid is using mmap for its shared memory mapping. The remaining question is: is the mlocked mapping file-backed, or is it an anonymous mmap (in which case, on Linux, it would use /dev/shm by default)?
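One thing I still plan to rule out (this is only my guess, not something the logs confirm): mlock(2) also returns ENOMEM when the RLIMIT_MEMLOCK soft limit would be exceeded, and a limit around the common 64 KiB default would fit the numbers above (locking 8 + 32852 + 52 bytes succeeds, 8 + 73912 does not) even with gigabytes of RAM free. Roughly the following checks should show the limit and whether the segments are file-backed; the prlimit values and the /dev/shm path are my assumptions, not anything taken from the Squid docs:

  # Locked-memory limit (in KiB) of the shell that starts Squid.
  ulimit -l

  # Test run with a raised limit (needs root); prlimit comes from util-linux.
  prlimit --memlock=unlimited:unlimited squid -N -f /etc/squid.conf

  # If Squid creates its segments with shm_open(), they should show up here ...
  ls -l /dev/shm/ | grep -i squid

  # ... and file-backed mappings list a path in /proc, anonymous ones do not
  # (pgrep -o picks the oldest, i.e. master, squid process).
  grep -i -e squid -e shm /proc/"$(pgrep -o squid)"/maps

I will also turn on the cache.log debugging suggested below and look for "mlock(" lines; I assume the squid.conf line for that would be something like:

  # Keep everything else at level 1, raise debug section 54 to level 7.
  debug_options ALL,1 54,7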
Thanks a lot,

Gordon

On Mon, Jul 16, 2018 at 11:58 AM Alex Rousskov <rousskov@measurement-factory.com> wrote:

> On 07/15/2018 08:47 PM, Gordon Hsiao wrote:
>
> > Just upgraded squid to 4.1, however if I enabled shared_memory_locking I
> > failed to start squid:
> >
> > "FATAL: shared_memory_locking on but failed to
> > mlock(/squid-tls_session_cache.shm, 2101212): (12) Out of memory"
>
> > How do I know how much memory it is trying to mlock? is 2101212(~2MB)
> > the shm size or not,
>
> Yes, Squid tried to lock a 2101212-byte segment and failed.
>
> > any way to debug/looking-into/config this size?
>
> I am not sure what you mean, but please keep in mind that the failed
> segment could be the last straw -- most of the shared memory could be
> allocated earlier. You can observe all allocations/locks with 54,7
> debugging. Look for "mlock(".
>
> You can also run "strace" or a similar command line tool to track
> allocations, but analyzing strace output may be more difficult than
> looking through Squid logs.
>
> > Again I disabled cache etc for a memory restricted environment, also
> > used the minimal configuration with a few enable-flags, in the meantime
> > I want to avoid memory overcommit from squid (thus mlock)
>
> I am glad the new code is working to prevent runtime crashes in your
> memory-restricted environment. If studying previous mlock() calls does
> not help, please suggest what else Squid could do to help you.
>
> Thank you,
>
> Alex.