[squid-users] Squid performance not able to drive a 1Gbps internet link
Amos Jeffries
squid3 at treenet.co.nz
Fri Aug 5 11:59:49 UTC 2016
On 4/08/2016 11:55 p.m., brendan kearney wrote:
> At what point does buffer bloat set in? I have a linux router with the
> below sysctl tweaks load balancing with haproxy to 2 squid instances. I
> have 4 x 1Gb interfaces bonded and have bumped the ring buffers on RX and
> TX to 1024 on all interfaces.
Exact timing will depend on your systems. AFAIU, it is the point where
congestion control signals spend longer in the traffic buffer than it
takes for one endpoint to start re-sending packets, causing congestion
to get worse - a meltdown sort of behaviour.
Say Squid takes 1ms to process an I/O cycle, and reads 4KB per cycle.
Any server that sends more than 4KB/ms will start filling the buffer.
(Real I/O cycles vary in timing, so there is no easily pinpointed
moment when bloat effects start to happen.)
What I would expect to see with buffer limits set to 8MB is that on
transfers of objects much larger than 8MB (e.g. 1GB) the first ~12MB
happen really fast, then speed drops off a cliff down to the slower
rate at which Squid is processing data out of the buffer.
With my fake numbers from above, 1ms x 4KB ==> 4MB/sec. So in theory you
would get up to 64Mbps for the first chunk of a large object, then drop
down to 32Mbps. Then the Squid->client buffers start filling, and there
is a second drop down to whatever speed the client is emptying its side at.
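To make that fake arithmetic explicit, here is a sketch (Python; the
1ms cycle and 4KB read are the made-up figures above, not measured
values):

    # Worked version of the fake numbers above.
    cycle_time = 0.001            # 1ms per Squid I/O cycle (made up)
    read_size  = 4 * 1024         # 4KB read per cycle (made up)
    buffer_max = 8 * 1024 * 1024  # the 8MB buffer limit

    drain_rate = read_size / cycle_time              # ~4 MB/sec
    print(drain_rate * 8 / 1e6, "Mbps sustained")    # ~32.8 Mbps
    # The "up to 64Mbps" first chunk above is roughly double this,
    # seen while the buffer is still filling.
    print(buffer_max / drain_rate, "sec to drain a full buffer")  # ~2 sec

That 2-second drain time is also where the failure-recovery delay
below comes from.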
The issue is not visible on any object smaller than those cliff
boundaries, and may not be user-visible at all unless total network load
reaches rates where the processing speed drops - which makes the speed
cliff occur much sooner.
In particular, as I said earlier, as Squid gets more processing load its
I/O cycles slow down, effectively shifting the speed 'cliff' to lower
thresholds.
If there is any problem in the traffic, it will take ~2 seconds (the
time to drain the full 8MB buffer at ~4MB/sec) before Squid even
becomes aware of it and can begin failure recovery.
Signals like end-of-object might arrive ahead of the buffered data if
the TCP stack is optimized for control signals, causing up to 8MB of
data at the end of the object to appear truncated. Other weird things
like that start to happen depending on the TCP stack implementation.
>
> The squid servers run with almost the same hardware and tweaks, except the
> ring buffers have only been bumped to 512.
>
> DSL Reports has a speed test page that supposedly finds and quantifies
> buffer bloat and my setup does not introduce it, per their tests.
The result there will depend on the size of the object they test with.
And, as Marcus mentioned, the bandwidth-delay product to the test server
has an impact on what data sizes will be required to find any problems.
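As a concrete illustration (hypothetical numbers; the 40ms RTT is an
assumption, not from this thread):

    # Bandwidth-delay product (BDP): the amount of data "in flight"
    # on the path. A test object small relative to the BDP plus the
    # buffers along the way never fills them, so a speed test can
    # miss the bloat entirely.
    bandwidth = 50e6 / 8   # 50 Mbps link, in bytes/sec (assumed)
    rtt       = 0.040      # 40ms round trip to test server (assumed)
    print(bandwidth * rtt / 1024, "KB in flight")   # ~244 KB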
>
> I am only running a home internet connection (50 down x 15 up) but have a
> wonderful browsing experience. I imagine scale of bandwidth might be a
> factor, but have no idea where buffer bloat begins to set in.
At values higher than your "50 down" by the sounds of it. I assume that
means 50 Mbps, which is well under the 64Mbps cliff your 8MB buffer causes.
It is rare to see a home connection that needs industrial-scale
performance optimizations tuned with Squid. The bottleneck is that
Internet modem. Anything you configure internally greater than its
limits is effectively "infinity".
The bloat effects (if any) will be happening in your ISP's network.
Bloating is particularly nasty as it affects *others* sharing the
network worse than the individual causing it.
> # Maximum number of outstanding syn requests allowed; default 128
> #net.ipv4.tcp_max_syn_backlog = 2048
> net.ipv4.tcp_max_syn_backlog = 16284
>
For each of these entries there will be ~256 bytes of RAM used by Squid
to remember that it occurred, plus whatever your TCP stack uses.
Not big, but the latency effect of waiting for an FD to become available
in Squid might be noticeable in highly loaded network conditions.
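Back-of-envelope for the worst case, using the ~256 bytes figure above
(the kernel's own per-entry cost is extra):

    # Rough worst-case Squid-side memory if the SYN backlog is full.
    entries = 16284          # the tcp_max_syn_backlog value above
    bytes_per_entry = 256    # approximate figure from above
    print(entries * bytes_per_entry / 1e6, "MB")   # ~4.2 MB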
> # Discourage Linux from swapping idle processes to disk (default = 60)
> #vm.swappiness = 10
>
> # Increase Linux autotuning TCP buffer limits
AFAIK these are the limits, not what is actually used. The latest Linux
versions contain algorithms designed by the buffer bloat research team
that prevent insane buffers from being created even if the limits are
set large.
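You can see the limits-versus-actual distinction on a live box with
something like this (a Linux-only sketch; the /proc path is standard
but the output will vary per system):

    # Compare the configured autotuning limits against what a fresh
    # socket actually gets. The buffer starts near the default and is
    # autotuned toward the max only as the connection needs it.
    import socket

    with open("/proc/sys/net/ipv4/tcp_rmem") as f:
        lo, default, hi = f.read().split()
    print("tcp_rmem min/default/max:", lo, default, hi)

    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    print("SO_RCVBUF on a new socket:",
          s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF))
    s.close()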
Amos