[squid-dev] [PATCH] pconn_lifetime

Tsantilas Christos chtsanti at users.sourceforge.net
Mon Oct 6 12:56:03 UTC 2014

On 10/03/2014 09:58 PM, Amos Jeffries wrote:
> On 3/10/2014 10:21 p.m., Tsantilas Christos wrote:
>> On 10/02/2014 08:24 AM, Amos Jeffries wrote:
>>> On 2/10/2014 5:23 a.m., Tsantilas Christos wrote:
>>>> Ping....
>>>> So which is the decision? Can the patch go to trunk? Should
>>>> the patch be replaced with another solution?
>>> We already have client_lifetime directive which is documented as
>>> doing exactly what this new directive does.
>>> (<http://www.squid-cache.org/Doc/config/client_lifetime/>).
>>> However, what that directive *actually* does is very different
>>> from the documented behaviour - it is also used for controlling
>>> between-request idle timeout (breaking client_idle_pconn_timeout
>>> by making it 1 week!) and also a server request timeout (pretending
>>> to be a timeout for duration between sending request and starting
>>> to read reply).
>> My sense is that client_lifetime is a timeout which should be
>> triggered when the client has been waiting a long time for an HTTP,
>> ICAP, or DNS server to finish. It should probably be renamed to
>> "client_wait_for_activity_timeout" and the documentation adapted
>> accordingly. We probably also need to fix it a little.
> We already have a lot of different *_timeout directives being used for
> activities. Each one named for the activity being watched.
> What I am hearing you requesting, and seeing in your code, is a total
> since-opened timeout covering all actions on that connection.
>> However, such a timeout parameter makes sense to me. It covers the
>> case: "The web browser waits 1 minute for the DNS server to respond,
>> 1 minute for a helper to finish, 1 minute for the ICAP server to
>> respond, 2 minutes for the HTTP server to send the first bytes of
>> the answer; please close this connection, I am no longer interested
>> in the response..."
>> I should note that pconn_lifetime has an effect on server-side
>> connections too, and it also tries not to halt a transaction that is
>> being served. It is only triggered after the transaction is
>> finished, or while we are waiting for new requests.
> Yes, I noted that, and agree it is a good decision.
>> It is a timeout used by administrators to resolve a problem
>> without affecting active transactions. There are major differences
>> in the use cases.
> There is a lot of similarity here with:
> * request_timeout which counts down the period between accept() and
> first request bytes arriving, and
> * client_idle_pconn_timeout which counts down the period between requests.
> Although both those count in time relative to the current idle
> activity, instead of time since connection accept().
> I note that you said this pconn_lifetime did not need coding for ICAP
> or server connections since those were stored in Pconn/Idle pools. The
> Idle pool timeouts only count from start of idle period, just like
> client_idle_pconn_timeout.
>   So either request_timeout/client_idle_pconn_timeout make
> pconn_lifetime needless or pconn_lifetime needs some since-connect()
> equivalent on server connections (and ICAP).

My sense is that pconn_lifetime scopes the lifetime as since-connect()
in this patch.
Am I wrong? Is there a bug in this patch?

> I think your choice of scoping "lifetime" as since-accept() is right.

>>> 2) There is more work to do to integrate it properly with the
>>> existing pconn timeout and lifetime directives.
>>> I request that we include these in this patch:
>>> * change ConnStateData::clientParseRequests() to use
>>> conn->timeLeft(Config.Timeout.clientIdlePconn) when setting
>>> timeout between requests. - this is another bug in the existing
>>> code that is retained by your patch and needs fixing.
>> I think this change should not be done. We may affect a valid
>> existing request. If timeLeft() is very short, we may time out while
>> waiting for data from helpers or the ICAP server.
> Your patch is already setting keepalive=false when the pipeline may
> still have other requests awaiting responses, and in fact some of
> those may be fully serviced by a server and only awaiting their reply
> data to be written to the client.

However, it is not the same as aborting a transaction that is being
served (including POST or PUT requests).

> If you want lifetime to have no bad side effects on a client
> connection, then you need the timeout handler for it to set the client
> connection to half-closed (no more reading) and let the existing
> transactions (all of them) complete.

Maybe this is a good optimization, and probably we should add it to
this patch.
> If you are doing the half-closed action, then it is better to set idle
> timeout using timeLeft() in ConnStateData::clientParseRequests()
> instead of playing with requests keep-alive flags.
>> We need a client_wait_for_activity_timeout here. This is what the
>> "client_lifetime" now does.
> No it waits for a random request and sets the keepalive flag to close.
> Activity on other requests is *probably* pending.

Currently, client_lifetime is set on the client-side connection after
we have received the whole HTTP request. This is not a connection
lifetime timeout.

The same read timeout is also set on the squid-to-server connection
after the whole HTTP request has been sent.

We may read new pipelined HTTP requests on this connection, or we may
not. This timeout is triggered when no input data arrives on the
connection for the whole timeout period.

From what I can understand by reading the code (not the
documentation), this timeout is intended to abort a connection which
has had no activity for a long time. This is why the timeout has a
large value.
Most likely the client will wait all of this time for an answer from
squid, or the client has already closed the connection but for some
reason squid does not know it.

Also, when this timeout is triggered, squid just closes the
client-side connection. No keep-alive flag is set.
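To summarize the directives under discussion, a hedged squid.conf
sketch follows. The values are illustrative only, not recommendations,
and pconn_lifetime is the new directive proposed by this patch, not an
existing one:

```
# Proposed: total lifetime of a connection since accept()/connect();
# only enforced between transactions, never mid-transaction.
pconn_lifetime 1 hour

# Existing directives with idle-relative semantics:
request_timeout 5 minutes            # accept() -> first request bytes
client_idle_pconn_timeout 2 minutes  # gap between pipelined requests

# client_lifetime today behaves as a "no activity at all" read
# timeout, hence its large default, despite its name suggesting a
# total connection lifetime.
client_lifetime 1 day
```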

We may want to change the current behaviour of this cfg parameter, but
my opinion is that we should fix its documentation and the cfg
parameter name ...
