[squid-users] Scaling concurrent TCP sessions beyond ephemeral port range
Praveen Ponakanti
pponakanti at roblox.com
Fri Sep 9 22:29:07 UTC 2022
On Thu, Sep 8, 2022 at 8:31 PM Alex Rousskov <rousskov at measurement-factory.com> wrote:
> On 9/8/22 19:41, Praveen Ponakanti wrote:
> > * We have a large number of workers (30) to help with handling a
> > high RPS. However, TCP session reuse does not seem to be optimal
> > even with server_persistent_connections enabled as a new outbound
> > session would have to be opened up if the request is proxied by a
> > kid worker that doesn’t already have a connection to that
> > destination. Is there something that can be done to improve this
> > with later versions of squid? Would be glad to help out if anyone
> > has some suggestions.
>
> If your only concern is TCP, and the number of servers is large, then it
> would be possible to share open Squid-server connections among workers
> by adding code that would exchange open TCP socket descriptors using UDS
> messages, but I doubt it is worth doing (a lot of complexity but not
> enough gain). There may also be some advanced/modern kernel tricks that
> we can teach Squid to use for sharing connections, but, again, I doubt
> the complexity would be worth the benefits from such reuse.
>
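For context, the descriptor exchange Alex describes would look roughly like
the sketch below at the syscall level: SCM_RIGHTS over a Unix-domain socket
(helper name is hypothetical, not existing Squid code; the receiving worker
would do the mirror-image recvmsg() and read the descriptor out of the
control message):

#include <sys/socket.h>
#include <cstring>

// Hypothetical helper: hand an open TCP socket (tcpFd) to another worker
// over an already-connected Unix-domain socket (udsFd). The kernel
// duplicates the descriptor into the receiving process.
bool sendFd(int udsFd, int tcpFd)
{
    char data = 'F';                     // at least one payload byte is required
    iovec iov = { &data, sizeof(data) };

    char ctrl[CMSG_SPACE(sizeof(int))];
    std::memset(ctrl, 0, sizeof(ctrl));

    msghdr msg = {};
    msg.msg_iov = &iov;
    msg.msg_iovlen = 1;
    msg.msg_control = ctrl;
    msg.msg_controllen = sizeof(ctrl);

    cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);
    cmsg->cmsg_level = SOL_SOCKET;
    cmsg->cmsg_type = SCM_RIGHTS;        // request descriptor passing
    cmsg->cmsg_len = CMSG_LEN(sizeof(int));
    std::memcpy(CMSG_DATA(cmsg), &tcpFd, sizeof(int));

    return sendmsg(udsFd, &msg, 0) == sizeof(data);
}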
Agreed, it probably does not make sense to add that complexity by sharing
sockets among the workers. I was thinking more along the lines of a hashmap
that the coordinator could use to pick a worker that already has a TCP
connection to the requested destination, instead of having the workers
themselves share connection details. This might introduce significant
complexity as well, so please ignore it if there are better solutions when
large numbers of workers are in use.
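Roughly the shape of that idea, as a toy sketch only (all names hypothetical;
real SMP request routing in Squid would be far more involved and may not pass
requests through the coordinator this way at all):

#include <string>
#include <unordered_map>

// Toy illustration: remember which worker last opened a connection to a
// given origin, so later requests for that origin can prefer that worker.
class DestinationAffinity
{
public:
    void noteConnection(const std::string &originHostPort, int workerId) {
        lastWorker_[originHostPort] = workerId;
    }

    // returns -1 when no worker is known to hold a connection to the origin
    int preferredWorker(const std::string &originHostPort) const {
        const auto it = lastWorker_.find(originHostPort);
        return it == lastWorker_.end() ? -1 : it->second;
    }

private:
    std::unordered_map<std::string, int> lastWorker_; // "host:port" -> kid id
};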
>
> If most TCP servers are known a priori, and there are few of them, then
> cache_peer standby=N feature for them might be useful.
>
>
The TCP destinations are not always known beforehand and can run into the
thousands. However, only the top 4-5 destinations have a large number of
concurrent sessions; after the enhancement to add the IP_BIND_ADDRESS_NO_PORT
flag, each of those is now limited to the ip_local_port_range size in
concurrent sessions. BTW, we do not run caching on squid or have peer caches
in our deployment.
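For reference, the IP_BIND_ADDRESS_NO_PORT change amounts to something like
the sketch below (Linux-specific; the helper name is hypothetical, not the
actual patch). Deferring the source-port choice until connect() lets the
kernel pick a port per connection 4-tuple instead of reserving one per bound
source IP:

#include <netinet/in.h>
#include <sys/socket.h>

// Hypothetical helper: bind the outgoing address without claiming an
// ephemeral port yet; the kernel assigns the port at connect() time,
// when the destination is known.
bool bindOutgoingAddress(int fd, const sockaddr_in &srcAddr)
{
#if defined(IP_BIND_ADDRESS_NO_PORT)
    const int on = 1;
    setsockopt(fd, IPPROTO_IP, IP_BIND_ADDRESS_NO_PORT, &on, sizeof(on));
#endif
    return bind(fd, reinterpret_cast<const sockaddr *>(&srcAddr),
                sizeof(srcAddr)) == 0;
}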
Most of the TCP connections are for HTTPS requests, without TLS termination at
squid. Does squid currently support a TLS session cache?
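To make that question concrete: what I imagine a shared client-side session
cache would need is roughly the following, sketched with plain OpenSSL calls
(helper names are hypothetical, and a per-process std::map stands in for
whatever shared-memory structure would actually be used across workers):

#include <openssl/ssl.h>
#include <map>
#include <string>
#include <vector>

// Stand-in for a shared store keyed by "origin-host:port"; in a real
// multi-worker design this would live in shared memory, not a std::map.
static std::map<std::string, std::vector<unsigned char>> sharedSessionStore;

// Serialize a resumable client-side session so any worker can reuse it.
static void storeSession(const std::string &originKey, SSL_SESSION *session)
{
    std::vector<unsigned char> der(i2d_SSL_SESSION(session, nullptr));
    unsigned char *p = der.data();
    i2d_SSL_SESSION(session, &p);        // DER-encode the session
    sharedSessionStore[originKey] = der;
}

// Before connecting: if some worker cached a session for this origin,
// ask OpenSSL to attempt resumption on the upcoming handshake.
static void maybeResume(SSL *ssl, const std::string &originKey)
{
    const auto it = sharedSessionStore.find(originKey);
    if (it == sharedSessionStore.end())
        return;
    const unsigned char *p = it->second.data();
    if (SSL_SESSION *session = d2i_SSL_SESSION(nullptr, &p, it->second.size())) {
        SSL_set_session(ssl, session);
        SSL_SESSION_free(session);       // SSL_set_session() keeps its own reference
    }
}

Presumably storeSession() would be wired up via SSL_CTX_sess_set_new_cb() on
the outgoing SSL_CTX; with TLS 1.3 the callback can fire more than once per
connection since session tickets arrive after the handshake.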
Thanks
Praveen
> If you are dealing with TLS sessions as well, then we should add a
> shared memory TLS session cache that all workers can tap into.
>
>
> Cheers,
>
> Alex.
>
> > On Tue, Jun 21, 2022 at 2:11 PM Alex Rousskov wrote:
> >
> > On 6/19/22 12:48, Praveen Ponakanti wrote:
> >
> > > What is the process to have this code patch upstreamed for future
> > > squid versions?
> >
> > In short, just post a quality pull request on GitHub (or find somebody
> > who can guide your code towards official acceptance for you). For
> > details, please see https://wiki.squid-cache.org/MergeProcedure
> >
> >
> > Thank you,
> >
> > Alex.
> >
> >
> > > On Fri, May 20, 2022 at 9:31 PM Amos Jeffries
> > > <squid3 at treenet.co.nz> wrote:
> > >
> > > On 20/05/22 19:44, Praveen Ponakanti wrote:
> > > > Hi Alex,
> > > >
> > > > Thanks for going through several steps to help mitigate src port
> > > > exhaustion. We are looking to achieve 400-500% more concurrent
> > > > connections if we could :) as there is a significant buffer on
> > > > the available CPU.
> > >
> > > Then you require at least 4, maybe 5, IP addresses to handle that
> > > many concurrent connections with Squid.
> > >
> > >
> > > We would like to investigate going beyond the ephemeral port range
> > > for some specific destination IP:PORT addresses. For that, it appears
> > > squid does not round-robin requests if we use multiple
> > > tcp_outgoing_address lines. We could use ACLs to pick a different
> > > outbound IP based on the client's source IP; however, that is not
> > > ideal in our environment as our clients aren't always equally split
> > > by subnet. If we could split by the client's source port instead,
> > > that might help achieve this. For example, something like:
> > >
> > >
> > > acl pool1 clientport 0-32768
> > >
> > > acl pool2 clientport 32769-65535
> > >
> > >
> > > tcp_outgoing_address 10.1.0.1 pool1
> > >
> > > tcp_outgoing_address 10.1.0.2 pool2
> > >
> > >
> > > Squid's ACLs currently do not allow filtering by the client's
> > > source port. We could look into a separate patch to add this
> > > functionality to squid's ACL code if that makes sense. Or is there
> > > a better way to achieve this?
> > >
> > >
> > > Thanks
> > >
> > > Praveen
> > >
> > >
> > > > The option to use multiple tcp_outgoing_address lines appears to
> > > > be promising, along with some tweaks to the TCP timeouts. I guess
> > > > we could use ACLs to pick a different outbound IP based on the
> > > > requesting client's prefix. We had not considered that option as
> > > > the ephemeral ports were no longer available to other applications
> > > > when squid uses most of them with a single outbound IP configured.
> > > > We are also looking to modify the code to use the
> > > > IP_BIND_ADDRESS_NO_PORT sockopt, as that could help delay port
> > > > assignment with the bind() call on the outbound TCP sessions (to
> > > > hopefully allow access to the 4-tuple on the socket).
> > >
> > > Patches welcome.
> > >
> > > However, please be aware that use of the 4-tuple is often no
> > > different from the 3-tuple since the dst-port is typically
> > > identical for all outgoing traffic to a given dst-IP.
> > >
> > >
> > > Cheers
> > > Amos
>
>