<div dir="ltr">Turns out it was a ulimit-related issue: I bumped the default mlock limit to a large value, and now I can start Squid with memory locking enabled.<div><br></div><div>Yes, strace only traces syscalls, while ltrace shows all library calls.</div><div><br></div><div>Thanks for the help!</div><div><br></div><div>Gordon</div></div><br><div class="gmail_quote"><div dir="ltr">On Mon, Jul 16, 2018 at 6:38 PM Alex Rousskov <<a href="mailto:rousskov@measurement-factory.com">rousskov@measurement-factory.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">On 07/16/2018 05:08 PM, Gordon Hsiao wrote:<br>
> On an x86/64-bit Ubuntu machine, if I set 'workers 4' and run:<br>
<br>
> squid --foreground -f /etc/squid.conf 2>&1 |grep mlock<br>
> mlock(0x7f2e5bfb2000, 8) = 0<br>
> mlock(0x7f2e5bf9f000, 73912) = -1 ENOMEM<br>
<br>
> squid -N -f /etc/squid.conf 2>&1 |grep mlock<br>
> mlock(0x7f8e4b7c0000, 8) = 0<br>
> mlock(0x7f8e4b7ad000, 73912) = -1 ENOMEM<br>
<br>
> Note 1: -N and --foreground made no difference as long as 'workers 4' is<br>
> set; I was expecting -N to ignore 'workers 4', does it?<br>
<br>
IIRC, -N does not start workers. However, some (memory allocation) code<br>
may not honor -N and still allocate memory necessary for those (disabled<br>
by -N) workers. That would be a bug AFAICT.<br>
<br>
<br>
> Now I set 'workers 2' and run the same two commands above, and I got this<br>
> output (both are the same), which means Squid started successfully:<br>
> mlock(0x7f0c441cc000, 8) = 0<br>
> mlock(0x7f0c441c3000, 32852) = 0<br>
> mlock(0x7f0c441c2000, 52) = 0<br>
<br>
The second allocation is probably smaller because two workers need fewer<br>
SMP queues (or similar shared memory resources) than four workers.<br>
<br>
<br>
> I have more than 4GB of RAM free (this is an 8GB RAM laptop) and this<br>
> is an Intel i7, so the mlock failure is strange.<br>
<br>
The default amount of shared memory available to a program is often much<br>
smaller than the total amount of RAM. I do not recall which Ubuntu<br>
commands or sysctl settings control the former, but Squid wiki or other<br>
web resources should have that info. The question you should ask<br>
yourself is "How much shared memory is available for the Squid process?"<br>
<br>
<br>
> On my target system which has 512MB RAM, even 'workers 0' won't help, I<br>
> still get :<br>
> <br>
> mlock(0x778de000, 2101212) = -1 ENOMEM (Out of memory)<br>
<br>
For "workers 0" concerns, please see the -N discussion above. The two<br>
should be equivalent.<br>
<br>
<br>
> I have to disable memory locking for now, and it puzzles me why the very<br>
> first ~2MB mlock can fail.<br>
<br>
Most likely, your OS is configured (or defaults) to provide very little<br>
shared memory to a process when the total RAM is only 512MB.<br>
<br>
<br>
> I ran strace and grepped for shmget and shmat and found nothing,<br>
<br>
mlock() is a system call so strace should see it, but it may be called<br>
something else.<br>
<br>
<br>
> instead there are lots of mmap calls, so Squid is using mmap<br>
> for its shared memory mapping,<br>
<br>
Squid creates segments using shm_open() and attaches to them using mmap().<br>
<br>
<br>
> the only question is: is this mlock()ed region<br>
> file-backed or anonymously mmap()ed (in which case on Linux it will<br>
> use /dev/shm by default)?<br>
<br>
On Ubuntu, Squid shared memory segments should all be in /dev/shm by<br>
default. Squid does not want them to be backed by real files. See<br>
shm_open(3).<br>
<br>
Please note that some libc calls manipulating regular files are<br>
translated into mmap() calls by the standard library (or some such). Not<br>
all mmap() calls you see in strace are Squid mmap() calls.<br>
<br>
<br>
HTH,<br>
<br>
Alex.<br>
<br>
<br>
> On Mon, Jul 16, 2018 at 11:58 AM Alex Rousskov wrote:<br>
> <br>
> On 07/15/2018 08:47 PM, Gordon Hsiao wrote:<br>
> > Just upgraded Squid to 4.1; however, if I enable<br>
> > shared_memory_locking, I fail to start Squid:<br>
> ><br>
> > "FATAL: shared_memory_locking on but failed to<br>
> > mlock(/squid-tls_session_cache.shm, 2101212): (12) Out of memory"<br>
> <br>
> How do I know how much memory it is trying to mlock? Is 2101212 (~2MB)<br>
> the shm size or not,<br>
> <br>
> Yes, Squid tried to lock a 2101212-byte segment and failed.<br>
> <br>
> <br>
> > any way to debug/looking-into/config this size?<br>
> <br>
> I am not sure what you mean, but please keep in mind that the failed<br>
> segment could be the last straw -- most of the shared memory could be<br>
> allocated earlier. You can observe all allocations/locks with 54,7<br>
> debugging. Look for "mlock(".<br>
> <br>
> You can also run "strace" or a similar command line tool to track<br>
> allocations, but analyzing strace output may be more difficult than<br>
> looking through Squid logs.<br>
> <br>
> <br>
> > Again, I disabled the cache etc. for a memory-restricted environment, and<br>
> > used the minimal configuration with a few enable flags; in the meantime<br>
> > I want to avoid memory overcommit from Squid (thus mlock)<br>
> <br>
> I am glad the new code is working to prevent runtime crashes in your<br>
> memory-restricted environment. If studying previous mlock() calls does<br>
> not help, please suggest what else Squid could do to help you.<br>
> <br>
> <br>
> Thank you,<br>
> <br>
> Alex.<br>
> <br>
<br>
</blockquote></div>