[squid-users] rock issue

patrick mkhael patrick.mkhael at hotmail.com
Fri Jul 3 08:50:05 UTC 2020

Dear Alex,

Kindly note that I have adjusted the config, in addition to checking the provided links.
First, I have 3 disks with no RAID config; each rock cache_dir has its own disk to write to.
Each disker and worker also has its own process. In addition, I have adjusted some values per the recommendations at "https://wiki.squid-cache.org/Features/RockStore".

Below is the new config:

workers 3
cpu_affinity_map process_numbers=1,2,3,4,5,6 cores=1,2,3,4,5,6
cache_dir rock /rock1 200000 max-size=32000 swap-timeout=300 max-swap-rate=100
cache_dir rock /rock2 200000 max-size=32000 max-swap-rate=100 swap-timeout=300
cache_dir rock /rock3 200000 max-size=32000 max-swap-rate=100 swap-timeout=300
cache_mem 17 GB
maximum_object_size_in_memory 25 MB
maximum_object_size 1 GB
cache_miss_revalidate off
quick_abort_pct 95

This config is giving a 4% cache hit ratio.
In addition, as I already mentioned before, if I take the same config above without workers and rock cache_dirs, and instead use aufs on one of the disks with the same traffic, I automatically get a 60% cache hit ratio. [My lab runs at 250 Mb/s.]

Should rock give me the same performance as aufs?

For traffic of 1 Gb/s, is there a way to use aufs?

Thank you

From: Alex Rousskov <rousskov at measurement-factory.com>
Sent: Thursday, July 2, 2020 4:24 PM
To: patrick mkhael <patrick.mkhael at hotmail.com>; squid-users at lists.squid-cache.org <squid-users at lists.squid-cache.org>
Subject: Re: [squid-users] rock issue

On 7/1/20 4:45 PM, patrick mkhael wrote:

> ***Please note that you have 20 kids worth mapping (10 workers and 10
> diskers), but you map only the first 10. [Since I did not get the point
> of the diskers, as far as I understood it should be like this (simple
> example):

>> workers 2
>> cpu_affinity_map process_numbers=1,2,3,4 cores=1,2,3,4
>> cache_dir rock ...
>> cache_dir rock ...

The above looks OK. Each worker is a kid process. Each rock cache_dir is
a kid process (we call them diskers).  If you have physical CPU cores to
spare, give each kid process its own physical core. Otherwise, give each
worker process its own physical core (if you can). Diskers can share
physical cores with less harm because they usually do not consume many
CPU cycles. The Squid wiki has more detailed information about that:
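To make the kid numbering concrete (a sketch only; paths and sizes are placeholders): with 2 workers and 2 rock cache_dirs, kids 1-2 are the workers and kids 3-4 are the diskers, so on a machine with only 3 spare cores one might pin each worker to its own core and let both diskers share the third:

```
workers 2
# kids 1-2 = workers (dedicated cores 1-2); kids 3-4 = diskers (shared core 3)
cpu_affinity_map process_numbers=1,2,3,4 cores=1,2,3,3
cache_dir rock /rock1 100000 max-size=32000
cache_dir rock /rock2 100000 max-size=32000
```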

> ***Why do you have 10 rock caches of various sizes? [To be honest, I
> saw on many websites that it should be like this, from the smallest to
> the biggest with different sizes; I thought it would serve from a
> small-size pool up to a large one.]

IMHO, you should stop reading those web pages :-). There is no general
need to segregate objects by size, especially when you are using the
same slot size for all cache_dirs. Such segregation may be necessary in
some special cases, but we have not yet established that your case is
one of them.

> *****How many independent disk spindles (or equivalent) do you have? [I
> have one RAID 5 array of SSD disks, used by the 10 rock cache_dirs.]

Do not use RAID. If possible, use one rock cache_dir per SSD disk. The
only reason this may not be possible, AFAICT, is if you want to cache
more (per SSD disk) than a single Squid cache_dir can hold, but I would
not worry about overcoming that limit at the beginning. If you want to
know more about the limit, look for "33554431" in
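For a rough sense of that limit (a back-of-the-envelope sketch assuming the default 16 KB rock slot size; a larger slot size raises the ceiling proportionally):

```
33554431 slots/cache_dir x 16 KB/slot ~= 512 GB per rock cache_dir
```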

> ***How did you select the swap rate limits and timeouts for
> cache_dirs? [I also took them from an online forum; can I leave them
> empty for both?]

If you want a simple answer, then it is "yes". Unfortunately, there is
no simple correct answer to that question. To understand what is going
on and how to tune things, I recommend studying the Performance Tuning
section of https://wiki.squid-cache.org/Features/RockStore
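To sketch what those options do (values below are illustrative only, not a recommendation): max-swap-rate caps how many disk swap operations per second a disker will queue, and swap-timeout makes Squid skip a disk read or write that it predicts would wait longer than the given number of milliseconds, so requests fall back to the network instead of stalling on an overloaded disk:

```
# Sketch only: rate and timeout values should be derived from measuring
# what your particular disks can sustain under load
cache_dir rock /rock1 200000 max-size=32000 max-swap-rate=250 swap-timeout=300
```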

> ****Do you see any ERRORs or WARNINGs in cache log? [No ERRORs or
> WARNINGs found in cache.log.]

Good. I assume you do see some regular messages in cache.log. Keep an
eye out for ERRORs and WARNINGs as you change settings.

