[squid-dev] [PATCH] Increase request buffer size to 64kb

Amos Jeffries squid3 at treenet.co.nz
Wed Mar 30 10:29:05 UTC 2016


On 30/03/2016 6:53 p.m., Alex Rousskov wrote:
> On 03/29/2016 10:39 PM, Nathan Hoad wrote:
>> On 30 March 2016 at 12:11, Alex Rousskov wrote:
>>> On 03/29/2016 06:06 PM, Nathan Hoad wrote:
>>>
>>>> This (very small) patch increases the request buffer size to 64kb, from 4kb.
>>>
>>>> -#define HTTP_REQBUF_SZ  4096
>>>> +#define HTTP_REQBUF_SZ  65535
>>>

One thing you need to keep in mind with all this is that the above
macro *does not* configure the network I/O buffers.


The network HTTP request buffer is controlled by request_header_max_size
- default 64KB.

The network HTTP reply buffer is controlled by reply_header_max_size -
default 64KB.

The HTTP_REQBUF_SZ macro configures the StoreIOBuffer object size,
which is used mostly for client-streams and disk I/O, and for some
locally stack-allocated variables. It is tuned to match the filesystem
page size - default 4KB.
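For clarity, the network-side limits referred to above are ordinary
squid.conf directives; a sketch restating the stated defaults:

```
# squid.conf - the network-side buffers are tuned here,
# not by HTTP_REQBUF_SZ (values shown are the defaults above):
request_header_max_size 64 KB
reply_header_max_size 64 KB
```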

If your system uses non-4KB pages for disk I/O then you should of
course tune that alignment. If you are caching only in memory, or not
caching the object at all, then the memory page size is the more
important metric to tune against.

How important that is, I'm not sure. I had thought the relative
difference between memory and network I/O speeds made the smaller size
irrelevant (since we are data-copying from the main network SBuf
buffers anyway). But perhaps not. You may have just found that it needs
to be tuned to match the network I/O buffer default max-size (64KB).

NP: perhaps the real difference is how fast Squid can walk the list of
in-memory buffers that span the object in the memory cache. Since each
write(2) walks the linked list from the head to position N, larger
steps would mean fewer walks, and that would be relevant.



>>>
>>>> In my testing, this increases throughput rather dramatically for
>>>> downloading large files:
>>>
>>> Based on your results alone, it is clear that the patch does more than
>>> "increases the request buffer size"(*). IMO, the important initial
>>> questions we need to answer are:
>>>
>>>   1. Why the performance is improved in this micro test?
>>>   2. What are the expected effects on other traffic?
> 
>> I think these are reasonable questions to ask. Answering question one
>> will take time
> 
> Yes, finding the correct answer may not be trivial, although I would
> expect that larger system reads result in fewer reads and, hence, less
> Squid-imposed overhead in a micro test. That hypothesis should be
> relatively easy to semi-confirm by looking at the number of system reads
> before/after the change (if that statistics is already reported).
> Hacking Squid to collect those stats should not be difficult either (if
> that statistics is not already reported).
> 
> Another related angle you may want to consider is comparing 8KB, 16KB,
> and 32KB performance with the already available 4KB and 64KB numbers:
> 
> *  If performance keeps going up with a larger buffer (in a micro test),
> how about 512KB? You will need a large enough file and a fast
> client/server to make sure Squid is the bottleneck, of course.
> 

Make sure you have plenty of per-process stack space available before
going large: Squid allocates several buffers of this size directly on
the stack. Usually at least two, maybe a half dozen.


> * If there is a "sweet spot" that is determined by system page size or
> perhaps NIC buffer size, then one can argue that this parameter should
> be configurable to match that size (and perhaps even set based on system
> parameters by default if possible).
> 

It would be page size (memory pages or disk-controller I/O pages),
since the network side is already tuned and defaults to 64KB.


> 
>> Yes I think you are right in that the constant may be
>> misnamed/misused. Once I've gone through all the places that use it, I
>> suppose reevaluating its name would be a good idea.
> 
> ... and we would have to rename it anyway if we decide to make it
> configurable. Knowing where it is used (and/or isolating its usage)
> would be important for that as well.

It is used primarily for the disk I/O and Squid internal client-streams
buffers.

In the long-term plan those internal uses will be replaced by SBuf,
which is controlled more dynamically by the existing squid.conf options
and the actual message sizes.

A new option for tuning the disk I/O buffer size might be useful in
both the long and short term, though.
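Such an option might look like the following (purely hypothetical - no
such directive exists today; the name and value are invented for
illustration):

```
# Hypothetical squid.conf directive - does not exist yet.
# Would align the store/disk I/O buffer with the page size in use:
store_io_buffer_size 64 KB
```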

Amos


