[squid-users] persistent connections not being utilized with Chrome

Rafael Akchurin rafael.akchurin at diladele.com
Mon Jan 15 16:06:34 UTC 2018


Hello Brian,

Sorry to fan the flames further - but I see the same annoying "waiting for proxy tunnel" in Chrome through an SSL-bumping, AD-integrated explicit Squid.
*But* the same 200 tabs load just fine from Firefox through the same Squid, on the same machine, at the same time - so it might be a Chrome issue/architecture?

Best regards,
Rafael Akchurin
Diladele B.V.


-----Original Message-----
From: squid-users [mailto:squid-users-bounces at lists.squid-cache.org] On Behalf Of Brian J. Murrell
Sent: Monday, January 15, 2018 4:41 PM
To: squid-users at lists.squid-cache.org
Subject: Re: [squid-users] persistent connections not being utilized with Chrome

On Fri, 2018-01-12 at 21:34 -0700, Alex Rousskov wrote:
> In that case, there are two HTTP
> connections
> in play:
> 
>   1. An HTTP connection from the client to the origin server.

By this do you mean to say there is a connection from the client, through the proxy server, to the origin server?

>   2. An HTTP connection from the client to the proxy.
> 
> Both HTTP connections use the same TCP client connection/socket

Understood.  So I do believe you are ACKing my question above.
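
To make sure I follow, here is a rough sketch of those two layers as I understand them, from the client's side (the proxy address and origin below are just placeholders, not anything from my actual setup):

    # Sketch only: connection #2 is the HTTP CONNECT exchange with the proxy,
    # connection #1 is the HTTPS request to the origin carried inside the tunnel.
    import socket, ssl

    PROXY = ("proxy.example.com", 3128)   # placeholder proxy address
    ORIGIN = "www.example.org"            # placeholder origin

    sock = socket.create_connection(PROXY)
    # HTTP connection to the proxy: the CONNECT request
    sock.sendall(("CONNECT %s:443 HTTP/1.1\r\nHost: %s:443\r\n\r\n"
                  % (ORIGIN, ORIGIN)).encode())
    print(sock.recv(4096).split(b"\r\n")[0])   # expect "HTTP/1.1 200 ..."

    # HTTP connection to the origin: TLS handshake and GET inside the tunnel
    tls = ssl.create_default_context().wrap_socket(sock, server_hostname=ORIGIN)
    tls.sendall(("GET / HTTP/1.1\r\nHost: %s\r\nConnection: close\r\n\r\n"
                 % ORIGIN).encode())
    print(tls.recv(200))
    tls.close()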

> No, the optimization is still there as far as client-origin traffic is 
> concerned.

Except that it is all bottlenecked through the same open-TCP-socket limit that Chrome applies to a single destination.  I think what I want to see is that limited number of TCP sockets being better utilized.
But maybe that cannot happen without pipelining.

> Yes, probably something like that is happening.

So how do I ameliorate it?

> Perhaps they do? How many requests does Chrome send inside a CONNECT 
> tunnel through Squid, on average?

My short investigation using packet sniffing seems to indicate just one.

> If you bump CONNECT tunnels using
> SslBump, then you can use Squid to measure persistency. If you do not 
> bump, then you should still be able to use Chrome developer tools to 
> measure persistency.

Any clues about how to do that?
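
One thing I might try from the client side (just a sketch; the proxy and origin names are placeholders) is to send a couple of requests over one tunneled connection myself and see whether the second one succeeds without a new CONNECT:

    # Sketch: open one CONNECT tunnel through the proxy and reuse it for two
    # requests; if both succeed on the same socket, the tunneled connection
    # is being kept persistent.
    import http.client

    PROXY_HOST, PROXY_PORT = "proxy.example.com", 3128   # placeholder proxy
    ORIGIN = "www.example.org"                            # placeholder origin

    conn = http.client.HTTPSConnection(PROXY_HOST, PROXY_PORT)
    conn.set_tunnel(ORIGIN, 443)          # issues the CONNECT to the proxy

    for path in ("/", "/favicon.ico"):
        conn.request("GET", path)
        resp = conn.getresponse()
        resp.read()                       # drain the body so the socket can be reused
        print(path, resp.status, resp.getheader("Connection"))

    conn.close()

And if I remember right, Chrome's developer tools can show a "Connection ID" column in the Network panel, which should indicate whether requests share a connection.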

> Origin server response delays rather than TCP handshakes may be your 
> primary bottleneck because Chrome probably does not pipeline and, 
> without pipelining, there can be at most one concurrent request per 
> HTTP connection.

I think Chrome disabled pipelining a while back:

https://stackoverflow.com/questions/30477476/why-is-pipelining-disabled-in-modern-browsers

> To improve throughput in your environment (without raising the number 
> of TCP connections that Chrome is allowed to open), you would need to 
> wait for HTTP/2 support. In HTTP/2, a single HTTP connection can carry 
> lots of concurrent transactions.

So are people without proxies suffering this same issue?  I don't think they are, because their few hundred tabs will be spread across many different servers and domains on the Internet, allowing their Chrome to open many (many!) more parallel TCP connections and wait for them all to respond in parallel.

It's the concentration of all of those potential TCP connections through a single host -- the proxy server -- that greatly reduces the parallelism of fetching lots of objects at the same time and drags out the wall-clock time.

Perhaps there is no solution until HTTP/2.
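
For reference, here is roughly what that HTTP/2 multiplexing looks like from a client (a sketch only, not going through a proxy; it assumes the httpx package installed with its h2 extra, and the URL is just a placeholder):

    # Sketch: with HTTP/2, many requests can be in flight concurrently over a
    # single TCP connection -- exactly the parallelism that is lost behind a
    # per-host connection limit.  Needs `pip install httpx[http2]`.
    import asyncio
    import httpx

    async def main():
        async with httpx.AsyncClient(http2=True) as client:
            urls = ["https://www.example.org/?q=%d" % i for i in range(20)]
            responses = await asyncio.gather(*(client.get(u) for u in urls))
            for r in responses:
                print(r.http_version, r.status_code, str(r.url))

    asyncio.run(main())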

I just find it surprising that every IT person who runs a proxy has to tell their users, "yeah, that's just how it is here on this network; it's very slow to start up your browser".  :-(

Cheers,
b.

