[squid-users] how to achieve squid to handle 2000 concurrent connections?
Amos Jeffries
squid3 at treenet.co.nz
Mon Apr 20 00:58:28 UTC 2015
On 19/04/2015 9:58 p.m., Abdelouahed Haitoute wrote:
> Hello,
>
> I’ve got the following setup, each application on its own virtual machine:
>
> Client (sends HTTP requests to the proxy) -> Squid (forwards HTTP requests, round-robin by destination IP, to multiple Apache machines) -> Apache (sets up two-way SSL to the requested server) -> HTTPS server
>
> This setup works great, and I have performance-tuned both the Apache and HTTPS servers. Both can handle 2000 concurrent connections with file sizes up to 10MB.
>
> Unfortunately I haven’t been successful with the Squid server. After a while I’m getting the following error messages in the log:
> 1429432828.200 62854 10.10.7.16 TCP_MISS_ABORTED/000 0 GET http://https.example.com/index.html - ROUNDROBIN_PARENT/192.168.0.20 -
>
> The Squid virtual machine contains the following:
> CentOS 7.1 with latest updates
> Squid Cache: Version 3.3.8
> CPU: Intel Xeon E312xx (Sandy Bridge) - 1799.998 MHz (4 cores)
> Memory: 4096 MiB
> Harddisk: 10 GiB, SCSI, raw, cache none
>
> I execute a performance test with 2000 concurrent connections, each request fetching a 10KB file:
> # ab -n 10000 -c 2000 -X 10.10.7.15:3128 http://https.example.com/index.html
You are wrong: "ab -c 2000" through a non-caching proxy means *4000*
concurrent connections being handled by the proxy; the web server only
loads the file object once.
A non-caching proxy needs one outbound server connection for each
inbound client connection (2000 + 2000 = 4000 concurrent connections).
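Each of those 4000 connections consumes a file descriptor, so the per-process FD ceiling is often the first limit hit. A minimal squid.conf sketch (the 65536 value here is an illustrative assumption, not a figure from this thread; the OS limit from "ulimit -n" must also be at least this high):

```
# Raise Squid's per-process file descriptor ceiling.
# 4000 concurrent connections need well over 4000 FDs once
# log files, DNS sockets, etc. are counted.
max_filedescriptors 65536
```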
> This is ApacheBench, Version 2.3 <$Revision: 1430300 $>
> Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
> Licensed to The Apache Software Foundation, http://www.apache.org/
>
> Benchmarking https.rinis.nl [through 10.10.7.15:3128] (be patient)
> Completed 1000 requests
> Completed 2000 requests
> Completed 3000 requests
> Completed 4000 requests
> Completed 5000 requests
> Completed 6000 requests
> Completed 7000 requests
> Completed 8000 requests
> apr_pollset_poll: The timeout specified has expired (70007)
Squid is still responding, but the client has given up, as shown by
the _ABORTED in the squid log.
> Total of 8610 requests completed
>
> I have the command "vmstat 5" running on the squid server:
> procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
> r b swpd free buff cache si so bi bo in cs us sy id wa st
> 2 0 0 3823916 764 124992 0 0 519 26 237 503 2 3 92 3 0
> 0 0 0 3823744 764 125072 0 0 0 0 44 79 0 0 100 0 0
> 0 0 0 3823776 764 125044 0 0 0 2 39 70 0 0 100 0 0
> 0 0 0 3729540 764 139116 0 0 1 0 2145 257 1 2 97 0 0
> 0 0 0 3728432 764 139888 0 0 0 46 2297 594 1 1 97 0 0
> 0 0 0 3726484 764 140892 0 0 0 39 2869 581 2 1 97 0 0
> 0 0 0 3725528 764 141376 0 0 0 0 2843 648 2 2 96 0 0
> 0 0 0 3724980 764 142008 0 0 0 69 2824 529 2 1 97 0 0
> 0 0 0 3724584 764 142540 0 0 0 0 2742 472 2 1 97 0 0
> 0 0 0 3723696 764 143004 0 0 0 0 2511 577 2 1 97 0 0
> 0 0 0 3722840 764 143200 0 0 0 12 884 228 1 1 99 0 0
> 0 0 0 3722704 764 142900 0 0 0 0 136 127 0 0 100 0 0
> 0 0 0 3722504 764 142744 0 0 0 0 40 70 0 0 100 0 0
> 0 0 0 3722456 764 142784 0 0 0 114 37 68 0 0 100 0 0
> 0 0 0 3722208 764 142832 0 0 0 0 41 68 0 0 100 0 0
> 0 0 0 3722480 764 142280 0 0 0 0 179 82 0 0 100 0 0
> 0 0 0 3722544 764 142140 0 0 0 7 41 75 0 0 100 0 0
> procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
> r b swpd free buff cache si so bi bo in cs us sy id wa st
> 1 0 0 3722544 764 142136 0 0 0 0 36 67 0 0 100 0 0
> 0 0 0 3722996 764 141552 0 0 0 0 42 75 0 0 100 0 0
> 0 0 0 3722980 764 141568 0 0 0 0 37 68 0 0 100 0 0
> 0 0 0 3723028 764 141524 0 0 0 0 36 66 0 0 100 0 0
> 0 0 0 3736816 764 130352 0 0 0 0 809 114 0 0 99 0 0
> 0 0 0 3737544 764 130268 0 0 0 41 42 74 0 0 100 0 0
>
> It looks like the hardware has enough resources during the benchmark test.
>
> I’ve got the following squid.conf running:
> cache_peer 192.168.0.18 parent 3128 0 round-robin no-query no-digest
> cache_peer 192.168.0.20 parent 3128 0 round-robin no-query no-digest
>
> acl development_net dst 192.168.0.0/24
> cache_peer_access 192.168.0.18 allow development_net
> cache_peer_access 192.168.0.20 allow development_net
>
> never_direct allow all
> cache deny all
>
> maximum_object_size_in_memory 16 MB
> cache_mem 2048 MB
>
> Squid must not cache at all.
Then don't bother setting cache_mem to 2GB; the memory cache won't be used.
Also note that the lack of caching is *worsening* your performance
results. When the memory cache is used, FD usage is halved and response
time is greatly reduced (a latency reduction by a factor of roughly 100).
Consider removing the "cache deny all" when you get this into
production. The 2GB memory cache you assigned can help a *lot* during
short bursts of high traffic (e.g. some DoS situations).
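For production, the memory-only caching described above could look like the following sketch (assuming you want no disk cache; the sizes are the ones already in your config):

```
# RAM-only caching: with no cache_dir configured, nothing hits disk.
cache_mem 2048 MB
maximum_object_size_in_memory 16 MB
# Note: no "cache deny all" line, so cacheable responses can be
# served from memory instead of opening a second server connection.
```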
I do not see any SMP configuration in your Squid, which means it is
handling all 4000 of those connections with a single process on a
single ~1.8GHz core. That's not much processor to work with.
Try adding this to your config file:
workers 2
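With SMP enabled, each worker is an independent process. A sketch that also pins the two workers to separate cores (cpu_affinity_map is a standard squid.conf directive; the specific core mapping below is an assumption about your VM's topology, so adjust it to your layout):

```
# Run two worker processes instead of one.
workers 2
# Pin worker processes to distinct CPU cores so they do not
# compete for the same core (illustrative 1-based mapping).
cpu_affinity_map process_numbers=1,2 cores=1,2
```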
Amos