[squid-users] true sizeof squid cache

Eliezer Croitoru eliezer at ngtech.co.il
Fri Nov 7 13:31:55 UTC 2014


Hey Riccardo,

2000-3000 computers is only partially relevant.
It means that the proxy machine will probably need more RAM than a
tiny machine, and that it's a CACHE and not a filtering machine.
Since it's 2-3k clients you will want to use SMP to make sure the load
is distributed across a couple of CPUs.
And since it's more than one CPU you will want to use a rock cache_dir.
The UFS cache_dir doesn't support SMP yet, and there is a 3.5 beta
version of "large rock".
Rock will limit the cache size, but remember that it's a cache and not
storage.
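As a rough sketch (the worker count, path, and sizes below are illustrative assumptions, not tuned recommendations), an SMP setup with a shared rock cache_dir might look like this in squid.conf:

```
# squid.conf sketch -- values are examples, adjust to your hardware
workers 4                        # spread client load across 4 CPU cores
cache_dir rock /var/cache/squid 16384 max-size=32768
# The rock store is shared between SMP workers.
# 16384 = cache size in MB (16 GB here);
# max-size=32768 reflects the classic ~32 KB per-object limit of rock
# ("large rock" in the 3.5 betas lifts this restriction).
```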

I do not remember the numbers by heart, but there is a limit on how much
one machine can handle (Amos might remember it).
The main bottleneck is the DISK.
A spinning disk can sustain at most about 300 IOPS (on a very fast one).

When implementing a proxy, first run a small assessment with a RAM-only
cache and gather some load statistics.
This has been my suggestion for a very long time now, and some do not
like it.
Squid with no cache can take one load; with a RAM cache it can take
another.
There is complexity to it, and it depends on the nature of the
environment.
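A RAM-only assessment can be as simple as running Squid with cache_mem and no cache_dir line at all (the sizes here are assumptions to scale to the machine):

```
# squid.conf sketch for a RAM-only trial -- no cache_dir lines at all
cache_mem 2048 MB                 # in-memory object cache (example size)
maximum_object_size_in_memory 512 KB
# With no cache_dir configured, Squid keeps no disk cache, so the
# access.log statistics from this trial reflect memory-only hit rates
# and give a baseline before committing to a disk cache layout.
```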

For an enterprise-class network like yours it won't take too long to
collect the logs and statistics needed to make sure that the proxy
matches your needs.

All The Bests,
Eliezer

On 11/06/2014 09:58 AM, Riccardo Castellani wrote:
> I'm installing a new machine as a Squid server and I need to understand
> what criteria to use to estimate the 'cache size'. I'm not speaking about
> extra space for swap/temporary files or fragmentation; I'm asking about
> the cache size that is the 3rd cache_dir argument.
> The server will receive requests from about 2000-3000 computers.
> I'd like to find documents with suggestions for assigning the right size.
