[squid-users] Rock datastore, CFLAGS and a crash that (may be) known
Jester Purtteman
jester at optimera.us
Tue Feb 16 14:32:37 UTC 2016
Greetings Squid users,
With 3.5.14 out and CFLAGS now actually being applied, I am getting into
trouble. Funny, too, since I had spent a lot of time wondering why CFLAGS
weren't being picked up in earlier builds. In any event, I have a 3.5.13
instance configured as follows:
./configure --prefix=/usr --localstatedir=/var
--libexecdir=/usr/lib/squid --srcdir=. --datadir=/usr/share/squid
--sysconfdir=/etc/squid --with-default-user=proxy --with-logdir=/var/log
--with-pidfile=/var/run/squid.pid --enable-linux-netfilter
--enable-cache-digests --enable-storeio=ufs,aufs,diskd,rock
--enable-async-io=30 --enable-http-violations --enable-zph-qos
--with-netfilter-conntrack --with-filedescriptors=65536 --with-large-files
It has a quartet of cache-dirs (I'm still testing and monkeying) as follows:
cache_dir rock /var/spool/squid/rock/1 64000 swap-timeout=600
max-swap-rate=600 min-size=0 max-size=128KB
cache_dir rock /var/spool/squid/rock/2 102400 swap-timeout=600
max-swap-rate=600 min-size=128KB max-size=256KB
cache_dir aufs /var/spool/squid/aufs/1 200000 16 128 min-size=256KB
max-size=4096KB
cache_dir aufs /var/spool/squid/aufs/2 1500000 16 128 min-size=4096KB
max-size=8196000KB
Permissions on the cache dirs are all proxy:proxy and everything runs happily.
When I read that the CFLAGS bug was solved, I thought "hey, didn't I do some
terrible thing once to work out which cflags are correct on a VMware virtual
instance?" and dug up the cflags I came up with back then (a rough sketch of
how I derived them is below, just after the configure line). I then compiled
3.5.14 as follows:
./configure CFLAGS="-march=core2 -mcx16 -msahf -mno-movbe -mno-aes
-mno-pclmul -mno-popcnt -mno-sse4 -msse4.1" CXXFLAGS="${CFLAGS}"
--with-pthreads --prefix=/usr --localstatedir=/var
--libexecdir=/usr/lib/squid --srcdir=. --datadir=/usr/share/squid
--sysconfdir=/etc/squid --with-default-user=proxy --with-logdir=/var/log
--with-pidfile=/var/run/squid.pid --enable-linux-netfilter
--enable-cache-digests --enable-storeio=ufs,aufs,diskd,rock
--enable-async-io=30 --enable-http-violations --enable-zph-qos
--with-netfilter-conntrack --with-filedescriptors=65536 --with-large-files
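That flag-derivation hack was roughly the following (a sketch only, assuming
GCC is the compiler in use; it just asks what -march=native resolves to on
this virtual CPU):
# show which target flags GCC would enable natively on this VM
gcc -march=native -Q --help=target | grep -E 'march|msse|mcx16|mpopcnt|maes'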
This leads to the following in the cache log, and a crash.
<<<SNIP
FATAL: Ipc::Mem::Segment::open failed to
shm_open(/squid-var.spool.squid.rock.1_spaces.shm): (2) No such file or
directory
Squid Cache (Version 3.5.14): Terminated abnormally.
CPU Usage: 5.439 seconds = 2.581 user + 2.858 sys
>>>SNIP
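In case it matters, that segment name points at the shared-memory side, so a
quick sanity check of /dev/shm (assuming the usual Linux tmpfs setup) would be
something like:
# confirm the tmpfs is mounted and see which segments squid has created
mount | grep shm
ls -l /dev/shm/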
This looks similar to an already-reported bug
(http://bugs.squid-cache.org/show_bug.cgi?id=3880#c1), but I don't know enough
to say so with certainty. It does look like these compile options let squid
launch multiple processes and do other things I think I want, but I can't tell
for sure.
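If I am reading the SMP documentation right, getting those multiple kids would
mean something like this in squid.conf (just a sketch; the worker count of 2
is a guess for my box, not something I have tested):
# run two worker kids; rock dirs are SMP-aware and shared between workers,
# while the aufs dirs, as far as I understand, are not
workers 2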
So, this leads me to a few questions:
(1) Do these flags make sense? I only half know what half of them do,
but they appear to basically be the flags supported on an ESXi virtual
machine given my hardware. I have googled, but not much light has been shed
on this particular case; thoughts and insights are appreciated.
(2) Are my rock stores lagging out, and if so, how would you recommend
tuning them?
(3) Does the strategy above make sense? My thinking is to segregate the
small cache items into a rock datastore, and the big items into an aufs
datastore.
(4) Do you have any pointers on calculating the size of rock and aufs
stores based on disk performance and so on? I'm guessing there is a logical
size for a given rock or aufs store based on how big the items you keep in it
are (some rough arithmetic of my own is below, after this list). Is there some
way I can apply some math and find the bottlenecks?
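For (4), the only math I have managed so far is back-of-envelope object
counts, assuming objects sit near the max-size cap of each dir (real averages
will be smaller, so real counts higher):
echo $(( 64000 * 1024 / 128 ))    # rock/1: 64000 MB / 128 KB  -> 512000 objects
echo $(( 102400 * 1024 / 256 ))   # rock/2: 102400 MB / 256 KB -> 409600 objects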
Finally, 3.5.14 does run fine when compiled with the first set of options
(even with --with-pthreads added), so I think this is probably CFLAGS-related.
I would like to get multiple disker processes running; I think it would
probably help in my environment, but it's not supremely critical. Anyway,
there is a note at the end of that bug saying it hadn't been seen for a while,
so I thought I'd say "I've seen it! Maybe!" Let me know if I am creating this
bug through a creative mistake of my own, or if you have other ideas. Thanks!
Jester Purtteman, P.E.
OptimERA Inc