[squid-dev] [PATCH] mempools-nozero: turn StoreEntry into MEMPROXY_CLASS
Amos Jeffries
squid3 at treenet.co.nz
Tue Sep 1 07:06:02 UTC 2015
On 1/09/2015 8:29 a.m., Alex Rousskov wrote:
> On 08/29/2015 08:03 PM, Kinkie wrote:
>> On Aug 28, 2015 7:24 PM, Alex Rousskov wrote:
>>> Perhaps there is a way to avoid this particular part of the change
>>> and keep the StoreEntry chunk sizes the same as they are today?
>
>
>> Unfortunately not, if we want to proceed with the MEMPROXY_CLASS
>> conversion, which doesn't expose that method; and with this branch,
>> MEMPROXY_CLASS is the easiest (I would dare say standard) way to move
>> to non-zeroing pools.
>
> If you can avoid the problem by exposing and using that method, I think
> that would be a good solution.
The problem then becomes *how* to do that.
Since it is a macro API, the cleanest solution seems to be to rename the
existing macro to one that takes a pool size parameter, then add a
wrapper MEMPROXY_CLASS(x) macro that calls it with the usual pool size.
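As a rough sketch of the shape I have in mind (the MEMPROXY_CLASS_SIZED
name, the PoolProxy stand-in, and its chunk-size constructor argument
are all illustrative here, not what is in the tree today):

  #include <cassert>
  #include <cstddef>
  #include <cstdlib>

  // Stand-in for the real MemAllocatorProxy; assumes the proxy could be
  // taught to forward a chunk-size hint to the underlying chunked pool.
  class PoolProxy {
  public:
      PoolProxy(const char *label, size_t objSize, size_t chunkSize)
          : label_(label), objSize_(objSize), chunkSize_(chunkSize) {}
      void *alloc() { return malloc(objSize_); } // real code pools this
      void freeOne(void *p) { free(p); }
  private:
      const char *label_;
      size_t objSize_;
      size_t chunkSize_; // forwarded to setChunkSize() in real code
  };

  // Renamed macro exposing the pool chunk size ...
  #define MEMPROXY_CLASS_SIZED(CLASS, CHUNKSIZE) \
      public: \
          void *operator new(size_t byteCount) { \
              assert(byteCount == sizeof(CLASS)); \
              return Pool().alloc(); \
          } \
          void operator delete(void *address) { Pool().freeOne(address); } \
      private: \
          static PoolProxy &Pool() { \
              static PoolProxy thePool(#CLASS, sizeof(CLASS), (CHUNKSIZE)); \
              return thePool; \
          }

  // ... and a wrapper keeping the existing one-argument API
  // (0 here meaning "use the allocator's default chunk size").
  #define MEMPROXY_CLASS(CLASS) MEMPROXY_CLASS_SIZED(CLASS, 0)

StoreEntry would then use MEMPROXY_CLASS_SIZED(StoreEntry, 2048*1024) to
keep its current 2MB hint while every other class stays untouched.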
>
>> I've tried to figure out the code some more; comments highlight that
>> it's an optimization feature which is only effective with the chunked
>> allocator - it is otherwise a no-op.
>> I believe this hint is meant to alter the behavior of the chunked
>> allocator - in a roundabout way - by flagging the objects in the pool
>> as important, making that particular pool less sensitive to memory
>> pressure: if even a single StoreEntry can be allocated, there will be
>> at least (chunk_size/obj_size - 1) free slots available until all of
>> them have been released.
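For reference, the hint under discussion is this bit of src/store.cc
(quoted from memory, so the exact details may differ between versions):

  void *
  StoreEntry::operator new (size_t bytecount)
  {
      assert(bytecount == sizeof(StoreEntry));

      if (!pool) {
          pool = memPoolCreate("StoreEntry", bytecount);
          pool->setChunkSize(2048 * 1024); // the 2MB hint in question
      }

      return pool->alloc();
  }

With sizeof(StoreEntry) in the low hundreds of bytes, a 2MB chunk holds
several thousand entries, so a single live entry keeps that many free
slots resident.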
>
>
>> ... I suspect that whatever optimization value the chunk size setting
>> has, it is speculative and dwarfed by other effects; at the same time
>> we have a sure gain from not initializing objects twice (once when
>> zeroing the pool, and again in the constructor).
>> For this reason I would like to go ahead with the merge. I agree it's
>> a good idea to isolate this patch into a separate commit, and to mark
>> it as potentially performance-impacting in the commit message.
>
>> Please let me know if you agree.
>
>
> I may not agree with some of the above statements, but I do not object
> to this change going in (in a dedicated commit) if you think that is the
> best way forward (and others do not object).
Ditto.
PS. I think the chunked pools are not even used that much. But we simply
don't have enough solid info about the limit, hence my earlier statement
about needing tests. Given the suspected purpose, it should only take
someone with a lot of traffic and a large cache_mem trialling the
different values and watching what happens to their performance. That
can be done on any Squid version by just removing the 2MB limit setter.
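That is, something along these lines (illustrative; the surrounding
context will differ between versions):

--- src/store.cc
+++ src/store.cc
@@ ... @@ StoreEntry::operator new (size_t bytecount)
     if (!pool) {
         pool = memPoolCreate("StoreEntry", bytecount);
-        pool->setChunkSize(2048 * 1024);
     }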
PPS. We do have several active community members with that sort of setup
who might be willing to run a trial, particularly if the hoped-for
outcome is better performance. If it gets *worse*, then we may want to
move on to trialling larger chunk sizes instead.
Amos