[squid-users] Squid performance not able to drive a 1Gbps internet link

brendan kearney bpk678 at gmail.com
Thu Aug 4 11:55:46 UTC 2016


At what point does buffer bloat set in?  I have a Linux router with the
sysctl tweaks below, load balancing with haproxy to 2 squid instances.  I
have 4 x 1Gb interfaces bonded and have bumped the RX and TX ring buffers
to 1024 on all interfaces.
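
For reference, a minimal haproxy sketch of that kind of front end (the
names, addresses and ports below are made up for illustration, not my
actual config):

defaults
    mode http
    timeout connect 5s
    timeout client  60s
    timeout server  60s

frontend proxy_in
    bind 192.168.1.1:3128
    default_backend squid_pool

backend squid_pool
    balance roundrobin
    server squid1 192.168.1.11:3128 check
    server squid2 192.168.1.12:3128 check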

The squid servers run with almost the same hardware and tweaks, except the
ring buffers have only been bumped to 512.
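
If anyone wants to check the same thing, the ring buffers can be inspected
and bumped with ethtool (eth0 is just a placeholder; on a bond you set this
on each slave NIC, not on the bond itself):

# show current and hardware-maximum ring sizes
ethtool -g eth0
# bump RX and TX rings (bounded by the maximums reported above)
ethtool -G eth0 rx 1024 tx 1024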

DSL Reports has a speed test page that supposedly finds and quantifies
buffer bloat, and per their tests my setup does not introduce it.

I am only running a home internet connection (50 Mbps down x 15 Mbps up) but
have a wonderful browsing experience.  I imagine the scale of the bandwidth
might be a factor, but I have no idea where buffer bloat begins to set in.

# Favor low latency over high bandwidth
net.ipv4.tcp_low_latency = 1

# Use the full range of ports.
net.ipv4.ip_local_port_range = 1025 65535

# Maximum number of open files per process; default 1048576
#fs.nr_open = 10000000

# Increase system file descriptor limit; default 402289
fs.file-max = 1000000

# Maximum number of requests queued to a listen socket; default 128
net.core.somaxconn = 1024

# Maximum number of packets backlogged in the kernel; default 1000
#net.core.netdev_max_backlog = 2000
net.core.netdev_max_backlog = 4096

# Maximum number of outstanding syn requests allowed; default 128
#net.ipv4.tcp_max_syn_backlog = 2048
net.ipv4.tcp_max_syn_backlog = 16384

# Discourage Linux from swapping idle processes to disk (default = 60)
#vm.swappiness = 10

# Increase Linux autotuning TCP buffer limits
# Set max to 16MB for 1GE and 32M (33554432) or 54M (56623104) for 10GE
# Don't set tcp_mem itself! Let the kernel scale it based on RAM.
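# Illustrative sizing note (rule of thumb, not gospel): max buffer ~=
# bandwidth x RTT (the bandwidth-delay product).  1 Gbit/s x 100 ms RTT
# = 100 Mbit = 12.5 MB, so 16MB leaves some headroom; the 32M/54M 10GE
# figures above correspond to roughly 25-45 ms of RTT at 10 Gbit/s.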
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.core.rmem_default = 16777216
net.core.wmem_default = 16777216
net.core.optmem_max = 40960
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216

# UDP buffer limits: note udp_mem is a system-wide total measured in pages,
# not per-socket bytes like tcp_rmem/tcp_wmem (the per-socket UDP minimums
# are udp_rmem_min / udp_wmem_min)
net.ipv4.udp_mem = 4096 87380 16777216

# Make room for more TIME_WAIT sockets due to more clients,
# and allow them to be reused if we run out of sockets.
# Also raise the packet and syn backlogs again (these override the
# smaller values set earlier in this file)
net.core.netdev_max_backlog = 50000
net.ipv4.tcp_max_syn_backlog = 30000
net.ipv4.tcp_max_tw_buckets = 2000000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_fin_timeout = 10

# Disable TCP slow start on idle connections
net.ipv4.tcp_slow_start_after_idle = 0
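
A quick way to load and spot-check a file like this (the path is just an
example; individual values can also be set one-off with sysctl -w):

# load the settings (assuming they live in /etc/sysctl.d/99-proxy-tuning.conf)
sysctl -p /etc/sysctl.d/99-proxy-tuning.conf
# or reload everything under /etc/sysctl.d and friends
sysctl --system
# verify a couple of values took effect
sysctl net.ipv4.tcp_rmem net.core.rmem_max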

On Aug 4, 2016 2:17 AM, "Amos Jeffries" <squid3 at treenet.co.nz> wrote:

> On 4/08/2016 2:32 a.m., Heiler Bemerguy wrote:
> >
> > I think it doesn't really matter how large Squid sets its default buffer.
> > The Linux kernel will scale it up to the maximum set by the third value
> > (and the TCP window size will follow that).
> >
> > net.ipv4.tcp_wmem = 1024 32768 8388608
> > net.ipv4.tcp_rmem = 1024 32768 8388608
> >
>
> Having large system buffers like that just leads to buffer bloat
> problems. Squid is still the bottleneck if it is sending only 4KB to the
> client each I/O cycle - no matter how much has already been received by
> Squid, or is stuck in kernel queues waiting to arrive at Squid. The more
> heavily loaded the proxy is, the longer each I/O cycle gets, as every
> client gets one slice of the cycle to do whatever processing it needs
> done.
>
> The buffers limited by HTTP_REQBUF_SZ are not dynamic, so it's not just a
> minimum. Nathan found a 300% speed increase from a 3x buffer size
> increase, which is barely noticeable (but still present) on small
> responses, but very noticeable with large transactions.
>
> Amos
>
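
For anyone wanting to experiment with the buffer Amos mentions, a rough
sketch (this is not an official tuning knob; the 16KB value and the file
location are assumptions - check your own source tree first):

# locate the compile-time define in the Squid source tree
grep -rn "define HTTP_REQBUF_SZ" src/
# bump it in the file grep reports (src/defines.h in the 3.x trees I have
# seen), e.g. 3-4x the 4KB default, then rebuild:
sed -i 's/define HTTP_REQBUF_SZ.*/define HTTP_REQBUF_SZ 16384/' src/defines.h
./configure <your usual options> && make && make install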

