[squid-users] rock issue

Amos Jeffries squid3 at treenet.co.nz
Thu Jul 2 10:20:08 UTC 2020

On 2/07/20 8:45 am, patrick mkhael wrote:
> ***Please note that you have 20 kids worth mapping (10 workers and 10
> diskers), but you map only the first 10. {Since I did not get the point
> of the diskers, as far as I understood it should be like (simple
> example)
>  >workers 2
>> cpu_affinity_map process_numbers=1,2,3,4 cores=1,2,3,4
>> cache_dir rock /mnt/sdb/1   2048 max-size=10000 max-swap-rate=200 swap-timeout=300
>> cache_dir rock /mnt/sdb/2   2048 max-size=10000 max-swap-rate=200 swap-timeout=300
> ***Why do you have 10 rock caches of various sizes? [To be honest, I
> saw on many websites that it should be like this, from the smallest to
> the biggest with different sizes; I thought it should serve from the
> small-size pool up to the large]

In general yes. BUT the size ranges to use should be determined via
traffic analysis. Specifically, measure and graph the object sizes being
handled; the result will be a wavy / cyclic curve. The size boundaries
should be set at the *minimum* point(s) along that curve.
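One rough way to produce that graph is to bucket reply sizes from the
access log. This sketch assumes the default "squid" logformat, where the
reply size is field 5; the log path is an assumption, adjust for your
install:

```shell
# Bucket reply sizes (field 5 of the default "squid" access.log format)
# into powers of two and count entries per bucket, smallest bucket first.
# /var/log/squid/access.log is a hypothetical path - adjust as needed.
awk '{ b = 1; while (b < $5) b *= 2; hist[b]++ }
     END { for (s in hist) printf "%d %d\n", s, hist[s] }' \
    /var/log/squid/access.log | sort -n
```

Plotting the per-bucket counts shows the valleys; those minima are the
candidate max-size / min-size boundaries.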

That said, since you are comparing the new rock setup to an old UFS
setup, it would be best to start with the rock caches configured as
similarly to the UFS ones as you can - same number of cache_dir lines,
same ranges of objects stored in each, etc.

i.e. if these ranges were used in the old UFS setup, then keep them for
now. They can be re-calculated after the cause of the HIT ratio drop is
identified.
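As a sketch of that translation (the paths, sizes, and boundaries here
are hypothetical, not a recommendation), a UFS pair and its rock
equivalent keeping the same size boundaries might look like:

```
# old UFS layout (hypothetical paths and sizes)
cache_dir ufs /cache/small 2048 16 256 max-size=10000
cache_dir ufs /cache/large 8192 16 256 min-size=10001

# rock equivalent with the same object-size boundaries
cache_dir rock /cache/small 2048 max-size=10000
cache_dir rock /cache/large 8192 min-size=10001
```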

> *****How many independent disk spindles (or equivalent) do you have?
> [I have one RAID 5 of SSD disks, used by the 10 rock cache_dirs]


Ideally you would have one of:

 5x SSD disks separately mounted, with one rock cache on each.


 1x RAID 10 with one rock cache per disk pair/stripe. This requires the
controller to be able to map a sub-directory tree exclusively onto one
of the sub-array stripes.


 2x RAID 1 (drive-pair mirroring) with one rock cache on each pair. This
is the simplest way to achieve the above when the sub-array feature is
not available in RAID 10.


 1x RAID 10 with a single rock cache.
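For example, the 2x RAID 1 layout above might look like this in
squid.conf (the mount points and sizes are hypothetical):

```
# one rock cache per mirrored drive pair (hypothetical mounts and sizes)
cache_dir rock /mnt/raid1-a 20480 max-size=10000
cache_dir rock /mnt/raid1-b 20480 min-size=10001
```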

The reasons:

Squid's I/O pattern is mostly writes and erases, with few reads.

RAID types ordered best->worst for that pattern are:
  none, RAID 1, RAID 10, RAID 5, RAID 0

Normal SSD controllers cannot handle the Squid I/O pattern well. Squid
*will* age the disk much faster than the manufacturer's measured
statistics indicate. (True even for HDDs, just less of a problem there.)

This means that the design needs to plan for coping with relatively
frequent disk failures. Loss of the cached data itself is irrelevant;
only the outage time and the reduction in HIT ratio actually matter on
failure.

Look for SSDs with high write-cycle ratings, and for RAID hot-swap
capability (even if the machine itself can't do that).

> ***How did you select the swap rate limits and timeouts for
> cache_dirs? [I took it also from an online forum; can I leave it empty
> for both?]

Alex may have better ideas if you can point us at the tutorials or
documents where you found that info.

Without specific details on why those values were chosen, I would start
with one rock cache using the default values, only changing them if
followup analysis indicates some other value is better.
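i.e. something as minimal as this (path and size are hypothetical), with
no max-swap-rate or swap-timeout options set so the defaults apply:

```
cache_dir rock /mnt/sdb/rock 20480
```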

