[squid-users] Squid ICAP -> Sophos SAVDI -> read_ahead_gap question

Alex Rousskov rousskov at measurement-factory.com
Mon Jan 27 19:28:33 UTC 2020


On 1/27/20 1:52 PM, netadmin wrote:

> reply_body_max_size 20 MB localnet

I would not set this option unless there is a good reason for setting it.
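
If you do decide to keep a cap for localnet clients, a value with a bit
of headroom above the 20 MB target (the figure below is illustrative,
not a recommendation) avoids cutting off downloads that end up slightly
over the nominal 20 MB:

    reply_body_max_size 21 MB localnet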


> maximum_object_size_in_memory 20 MB

If you want to cache objects that approach 20MB in body size, then raise
this to at least 21MB to account for overheads. The overheads (e.g.,
HTTP headers) are not that large, but adding a margin is easier and
safer than computing them exactly.
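
Following the advice above, something like this (the 20 MB target plus
a small margin; the exact figure is not critical) leaves room for
headers and other per-object overhead:

    maximum_object_size_in_memory 21 MB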


> read_ahead_gap 20 MB

FYI: Due to Squid implementation deficiencies/limitations, a huge gap
like this is likely to cause large, unpooled memory allocations on
virtually every received HTTP response with a body, regardless of that
response body size. This should not cause bugs or transaction aborts,
but you should be aware of the high performance cost of such
allocations, including stalling the entire Squid worker for a few
milliseconds (at least).
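
If you keep a non-default gap at all, a far more modest value (the
figure below is purely illustrative; the built-in default is 16 KB)
avoids those huge per-response allocations:

    read_ahead_gap 1 MB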


> If I try to download a 20 MB file on all workstations at the same time,
> without the option "read_ahead_gap 20 MB", the download fails on a small
> number of workstations.

This is an indication of a bug somewhere. The usual suspects are Squid,
the ICAP service, and the HTTP origin server.


> If I use disk storage for the 20 MB file, during the simultaneous download
> the processor load reaches 100% - I think this is not because of the ICAP
> server - and download errors occur.

Persistent 100% CPU usage with 20 concurrent transactions on decent
hardware is also a red flag.


> Is the maximum supported size for a file transmitted to the ICAP server 20
> MB?

No. There is virtually no maximum.


> Is there anything wrong with my settings?

What you want should "work" (in principle) without any special settings.
And it should work with/through a memory cache given a large-enough
maximum_object_size_in_memory setting.

Some of your settings might stress Squid because they set very tight
limits for 20MB objects, with no room for overheads (see above for
specifics), but even that should not really break things (especially in
your non-SMP setup) -- it may cause more cache misses and such.
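
For illustration only, a memory-caching setup with enough headroom for
20 MB downloads might look roughly like this (treat the numbers as
placeholders and size cache_mem to your RAM and concurrency):

    cache_mem 512 MB
    maximum_object_size_in_memory 21 MB
    # reply_body_max_size left unset unless a cap is really needed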

I would suspect a bug (possibly, but not necessarily, in Squid).
Analyzing a cache.log with debug_options set to ALL,9 will probably be
enough to pinpoint the culprit, but may require some developer
knowledge/effort because those 20 concurrent transactions will create a
lot of debugging noise.
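
A minimal way to capture such a trace (remember to revert afterwards;
ALL,9 is extremely verbose) is to add

    debug_options ALL,9

to squid.conf and then run "squid -k reconfigure" (or restart Squid)
before reproducing the concurrent downloads.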

Alex.

