[squid-users] Your cache is running out of filedescriptors
Vieri
rentorbuy at yahoo.com
Wed Aug 30 21:48:12 UTC 2017
Hi,
I'm opening a new thread related to my previous post here: http://lists.squid-cache.org/pipermail/squid-users/2017-August/016233.html
I'm doing so because the subject is more specific now.
I'm clearly having trouble with file descriptors, since I'm getting the following message in the log on a regular basis:
WARNING! Your cache is running out of filedescriptors
Sometimes, but not always, I also get the message "WARNING: Consider increasing the number of ssl_crtd processes in your config file".
I've already raised the "children" values twice, but I'm still getting the same warnings and performance issues. Here are the current values:
external_acl_type bllookup ttl=86400 children-max=100 ...
sslcrtd_children 128 startup=20 idle=4
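Just to be clear about what I mean by raising these further, this is roughly the direction I was considering for the next attempt (the numbers are guesses on my part, not something I've tested yet):

sslcrtd_children 192 startup=32 idle=8
# and possibly setting Squid's own limit explicitly instead of relying on the shell ulimit:
max_filedescriptors 65536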
I also increased the open files (nofile) limit with ulimit:
# ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 127521
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 4096
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 127521
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
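In case it matters, the 4096 above is what my shell reports; I understand the limit that actually counts is the one inherited by the Squid process itself. If Squid is started by systemd (an assumption on my part, adjust for your init system), I believe something along these lines would raise it persistently:

# /etc/systemd/system/squid.service.d/limits.conf
[Service]
LimitNOFILE=65536

# then reload and restart:
systemctl daemon-reload
systemctl restart squid

# or, for a non-systemd setup, an entry in /etc/security/limits.conf such as:
# squid  -  nofile  65536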
If I run "lsof" I get a huge listing.
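To make that listing more useful, I was thinking of narrowing it down to the Squid processes, roughly like this (and, if the cache manager ACLs allow it, asking Squid itself):

# count descriptors per squid process (field 2 of the lsof output is the PID)
lsof -c squid | awk '{print $2}' | sort | uniq -c | sort -rn | head
# ask Squid for its own view of descriptor usage:
squidclient mgr:info | grep -i 'file desc'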
I could keep increasing the ulimit nofile value and the Squid "children" values, but I'd like to know whether I'm on the right path or not.
What do you suggest?
I appreciate any tip you can share.
Regards,
Vieri