[squid-dev] Benchmarking Performance with reuseport

Marcus Kool marcus.kool at urlfilterdb.com
Sat Aug 13 23:32:24 UTC 2016


This article better explains the benefits of SO_REUSEPORT:
https://lwn.net/Articles/542629/

A key paragraph is this:
    The problem with this technique, as Tom pointed out, is that when
    multiple threads are waiting in the accept() call, wake-ups are not
    fair, so that, under high load, incoming connections may be
    distributed across threads in a very unbalanced fashion. At Google,
    they have seen a factor-of-three difference between the thread
    accepting the most connections and the thread accepting the
    fewest connections; that sort of imbalance can lead to
    underutilization of CPU cores. By contrast, the SO_REUSEPORT
    implementation distributes connections evenly across all of the
    threads (or processes) that are blocked in accept() on the same port.

So using SO_REUSEPORT seems very beneficial for SMP-based Squid.
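
To make the mechanics concrete, here is a minimal sketch (not Squid
code, with error handling trimmed) of a worker opening its own
listening socket with SO_REUSEPORT so the kernel spreads incoming
connections across all workers bound to the same port. The port 3128
is just the usual Squid default, used here for illustration.

/* Minimal sketch, not Squid code: each worker opens its own listening
 * socket on the same port with SO_REUSEPORT, and the kernel balances
 * new connections across all such sockets. Requires Linux >= 3.9. */
#include <netinet/in.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

static int open_reuseport_listener(unsigned short port)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); exit(1); }

    int on = 1;
    if (setsockopt(fd, SOL_SOCKET, SO_REUSEPORT, &on, sizeof(on)) < 0) {
        perror("setsockopt(SO_REUSEPORT)");
        exit(1);
    }

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(port);

    if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("bind");
        exit(1);
    }
    if (listen(fd, 128) < 0) { perror("listen"); exit(1); }
    return fd;
}

int main(void)
{
    /* A real SMP worker would integrate this fd into its event loop;
     * a blocking accept() loop keeps the sketch short. */
    int lfd = open_reuseport_listener(3128);
    for (;;) {
        int cfd = accept(lfd, NULL, NULL);
        if (cfd < 0)
            continue;
        /* ... handle the connection ... */
        close(cfd);
    }
}

Each new connection is hashed by the kernel to exactly one socket in
the reuseport group, so only that worker is woken.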

Marcus


On 08/09/2016 09:19 PM, Henrik Nordström wrote:
> Thu 2016-08-04 at 23:12 +1200, Amos Jeffries wrote:
>>
>> I imagine that Nginx is seeing a latency reduction because it no
>> longer needs a central worker that receives the connection and then
>> spawns a whole new process to handle it. That behaviour sort of makes
>> sense for a web server (which Nginx still is, at heart, a copy of
>> Apache) spawning CGI processes to handle each request, but it is kind
>> of daft in these HTTP/1.1 multiplexed, performance-centric days.
>
> No, it's only about accepting new connections on existing workers.
>
> Many high-load sites still run with non-persistent connections to keep
> the worker count down, and these benefit a lot from this change.
>
> Sites using persistent connections benefit only marginally. But the
> larger the worker count, the greater the benefit, as the load from new
> connections gets distributed by the kernel instead of by a thundering
> herd of workers.
>
> Regards
> Henrik
>
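
For contrast, the pre-SO_REUSEPORT pattern Henrik describes above,
where every worker blocks in accept() on one listening socket inherited
from the parent, looks roughly like this. Again just a sketch, not
Squid code, with an arbitrary worker count and an illustrative port.

/* Sketch of the classic shared-listener model: the parent binds and
 * listens once, then forks workers that all accept() on the same
 * inherited descriptor. Under high load the wake-ups can be spread
 * very unevenly, which is the imbalance the LWN article describes. */
#include <netinet/in.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int lfd = socket(AF_INET, SOCK_STREAM, 0);
    if (lfd < 0) { perror("socket"); exit(1); }

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(3128);          /* illustrative port only */

    if (bind(lfd, (struct sockaddr *)&addr, sizeof(addr)) < 0 ||
        listen(lfd, 128) < 0) {
        perror("bind/listen");
        exit(1);
    }

    for (int i = 0; i < 4; i++) {         /* four workers, arbitrary */
        if (fork() == 0) {
            for (;;) {
                int cfd = accept(lfd, NULL, NULL);
                if (cfd < 0)
                    continue;
                /* ... handle the connection ... */
                close(cfd);
            }
        }
    }
    pause();                              /* parent just waits */
    return 0;
}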
