[squid-users] Performance issue /cache_dir / cache_mem

pacolo pacolopezvelasco at gmail.com
Thu Nov 8 15:46:24 UTC 2018


Hello,

I am having performance issues with a deployment of a farm of 5 servers
(CentOS Linux release 7.5.1804) running Squid 3.5.20-12.el7, which are used
for internet access by a school community.
Traffic is around 7 Gbps at peak hour, of which roughly 60% is HTTPS that
Squid does not process at the moment (we will try to intercept and process
HTTPS in the near future).

I have noticed several error events in /var/log/audit/audit.log:
	type=ANOM_ABEND msg=audit(30/10/18 10:30:54.557:18355) : auid=unset
uid=squid gid=squid ses=unset pid=567 comm=squid reason="memory violation"
sig=SIGABRT 
	
Those corresponded with other events in /var/log/squid/cache.log:
	2018/10/30 10:26:15 kid1| assertion failed: filemap.cc:50: "capacity_ <= (1
<< 24)"
	2018/10/30 10:26:19 kid1| Set Current Directory to /cache
	2018/10/30 10:26:19 kid1| Starting Squid Cache version 3.5.20 for
x86_64-redhat-linux-gnu...
	2018/10/30 10:26:19 kid1| Service Name: squid
	2018/10/30 10:26:19 kid1| Process ID 567

	
There were thousands of Squid restarts per day, which appear to be the main
problem.
I have noticed that this could be related to the maximum size of a cache_dir,
according to...
	https://bugs.squid-cache.org/show_bug.cgi?id=3566
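
If I am reading that bug correctly (my interpretation, please correct me), the
assertion means a single cache_dir can index at most 2^24 = 16,777,216
objects. Working backwards from the ~2 TB per server we actually see cached
(more on that below), that would be an average object size of about

	2 TB / 16,777,216 objects ≈ 128 KB per object

before the map fills up and Squid aborts.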

I have been looking for relevant information regarding maximum cache_dir
sizes, but all the posts I found seem a bit old, for example...

http://squid-web-proxy-cache.1019090.n4.nabble.com/size-of-cache-dir-td1033280.html

http://squid-web-proxy-cache.1019090.n4.nabble.com/cache-dir-size-td1033774.html


This is deployed in a virtual environment on a storage platform with disks of
different rotational speeds; resources are not the problem, and more could be
added if needed.
Each server has 4 CPUs, 8 GB RAM, and LVM with a 30 GB OS disk and an 8 TB
cache disk.
We need to deploy 8 TB of cache per server, or as much as possible, and we
could deploy another virtual server to reach 40 TB in total.

I have noticed that our first approach could be wrong, as we referenced the
whole 8 TB (8000000 MB) in a single cache_dir...
cache_dir aufs /cache 8000000 16 256

Then the following error was returned, until we noticed the maximum value
accepted:
	(squid-1): xcalloc: Unable to allocate 18446744073566526858 blocks of 1 bytes!
That figure is just below 2^64, so it looks like an unsigned 64-bit wraparound
of a negative value.
These are the current directives related to the memory and disk options:

memory_replacement_policy heap GDSF
cache_mem 1024 MB
maximum_object_size_in_memory 10 MB

cache_replacement_policy heap LFUDA
cache_dir aufs /cache 5242880 16 256
maximum_object_size 1024 MB
cache_swap_low 90
cache_swap_high 95

We noticed the errors described above when the customer reported service
degradation.
By the way, with this configuration only about 2 TB was actually cached on
each server.
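
Would splitting the disk into several smaller cache_dir entries be the right
direction to stay under the per-cache_dir object limit? A sketch of what I
mean, untested; the /cache/d1 ... /cache/d4 paths are just an illustration,
not our real layout:

cache_dir aufs /cache/d1 1966080 64 256
cache_dir aufs /cache/d2 1966080 64 256
cache_dir aufs /cache/d3 1966080 64 256
cache_dir aufs /cache/d4 1966080 64 256

Each entry would be about 1.9 TB (4 x 1966080 MB = 7.5 TB usable on the 8 TB
disk), and as far as I understand Squid spreads objects across all the
configured cache_dir entries.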

I suppose more RAM would be needed, according to the rule of thumb "14 MB of
memory per 1 GB on disk for 64-bit Squid".
But I would need some clarification on this: I suppose the 14 MB of memory
refers to our total RAM, and the 1 GB on disk refers to 1 GB in the cache_dir
(as our 8 TB are not usable, only 5 TB).
So, taking into account that we need to deploy a 40 TB cache in total, for
example on 5 servers with 8 TB per server, at least 112 GB of RAM per server
would be needed.
Am I right?
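
For reference, my arithmetic (assuming the rule counts the configured
cache_dir size):

	8 TB per server = 8192 GB of cache_dir
	8192 GB x 14 MB/GB = 114,688 MB ≈ 112 GB of RAM per server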

 
Please, could somebody point me in the right direction?
I have read about https://wiki.squid-cache.org/Features/SmpScale, but before
testing that I would like to know whether there is any maximum value for
cache_dir.
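
If SMP were the way to go, is this roughly the shape of configuration we
should test? A minimal sketch, untested on our side; the worker count and
sizes are placeholders, and my understanding is that aufs cache_dirs cannot be
shared between workers, hence one directory per worker via the
${process_number} macro:

workers 4
cache_dir aufs /cache/worker${process_number} 1966080 64 256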

Thanks!
Paco.





