[squid-users] Is it a good idea to use Linux swap partition/file with rock storage?

Amos Jeffries squid3 at treenet.co.nz
Tue Sep 19 09:43:47 UTC 2017


On 19/09/17 18:25, duanyao wrote:
> Hi,
> 
> I notice that squid's rock storage uses a large (and fixed) amount of 
> shared memory even if it is not accessed. It's estimated at 
> 110 bytes/slot, so for 256GB of rock storage with 16KB slots, the 
> memory requirement is about 1.7GB, which is quite large.
> 
> So my questions are:
> 
> 1. Is there a way to reduce memory usage of rock storage?
> 

Reducing the cache size is the only thing that will do that.
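
For reference, the arithmetic checks out: 256 GB / 16 KB per slot is 
16M slots, and 16M slots at ~110 bytes each is roughly 1.7 GB of shared 
index memory, so the index shrinks in direct proportion to the cache 
size. A minimal sketch (the path and numbers are only examples; 
cache_dir sizes are given in MB):

   # a 128 GB rock cache instead of 256 GB roughly halves the index
   cache_dir rock /var/cache/squid 131072 slot-size=16384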

For the entire time your Squid is running, it is adding to the cache 
contents. The rate of growth decreases over time, but the cache only 
stops growing once it reaches 100% full.

So going out of your way to make it use less memory during that warm-up 
phase is pointless in the long term. The memory *is* needed, and not 
having it available for use with zero advance notice will lead to 
serious performance problems, up to and including a DoS vulnerability 
in your proxy.

For general memory reduction see the FAQ:
<https://wiki.squid-cache.org/SquidFaq/SquidMemory#What_can_I_do_to_reduce_Squid.27s_memory_usage.3F>
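
As a rough illustration of the kind of knobs that FAQ covers (the 
values below are arbitrary examples, not recommendations):

   cache_mem 256 MB
   memory_pools off
   maximum_object_size_in_memory 512 KB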


> 2. On Linux squid puts its shared memory in /dev/shm, which can be 
> backed by a swap partition/file. Is it a good idea to use a swap 
> partition/file with rock storage to save some physical memory?
> 

Definitely no. The cache index has an extremely high rate of churn and 
a large number of random-location reads per transaction. If any of it 
ever gets pushed out to a swap disk/file, the proxy's operational speed 
drops by 3-4 orders of magnitude, e.g. 50 GBps -> 2 MBps.
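
If you want to check whether any of that shared memory is being swapped 
on a given box, something along these lines works on most Linux systems 
(standard tools, nothing Squid-specific):

   # tmpfs usage where Squid keeps its shared segments
   df -h /dev/shm
   # shared-memory vs. swap counters
   grep -E 'Shmem|Swap' /proc/meminfo
   # nonzero si/so columns mean pages are moving to/from swap
   vmstat 1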


> 3. For rock storage, are /dev/shm/squid* files frequently and 
> randomly written? If the Linux swap is on an SSD, will this cause 
> performance/lifetime issues?
> 

See the answer to (2).

Squid stresses disks in ways vastly different from what manufacturers 
optimize the hardware to handle. HTTP caches have a very high 
write-to-read ratio, and no disk actually survives for more than a 
fraction of its manufacturer-advertised lifetime. The problem is less 
visible with HDDs due to their naturally long lifetimes.
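
If you do put a cache (or swap) on an SSD, it is worth tracking write 
wear. A sketch using smartctl (attribute names vary by vendor, so treat 
these as examples):

   # SATA devices: vendor SMART attributes; look for e.g.
   # Wear_Leveling_Count, Media_Wearout_Indicator, Total_LBAs_Written
   smartctl -A /dev/sdX
   # NVMe devices: the health log reports "Percentage Used"
   smartctl -a /dev/nvme0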

Specific to your question: due to the churn mentioned in (2), using a 
disk as the storage location for the cache index faces it with the 
worst of both worlds: very high read throughput and even higher write 
throughput. SSDs avoid (some of) the speed problem, but at the cost of 
shorter lifetimes. So the churn is much more relevant, and perhaps 
costly in hardware replacements.

YMMV depending on the specific SSD model and how it is designed to cope 
with dead sectors, but it is guaranteed to wear out much faster than 
advertised.

Amos

