[squid-users] Proxy server to support a large number of simultaneous requests
Andrey K
ankor2023 at gmail.com
Mon May 29 14:43:03 UTC 2023
Hello,
We need to configure a dedicated proxy server that caches online video
broadcasts in order to reduce the load on the uplink proxy.
Hundreds of users will request the same video chunks simultaneously.
I developed a simple configuration for testing purposes (shown below).
The *collapsed_forwarding* option is on.
I selected a couple of cacheable resources on the internet for testing:
- small (~400 KB):
https://ia800406.us.archive.org/13/items/romeo-y-julieta-texto-completo/Romeo%20y%20Julieta%20-%20William%20Shakespeare.pdf
- large (~8 MB):
https://ia600601.us.archive.org/10/items/Linux-Journal-2015-01/Linux-Journal-2015-01.pdf
To test simultaneous connections I fork curl instances with a simple script
(also shown below).
When I run a test (500 curl threads against
https://ia800406.us.archive.org/13/items/romeo-y-julieta-texto-completo/Romeo%20y%20Julieta%20-%20William%20Shakespeare.pdf)
I see lots of TCP_MISS/200 records with FIRSTUP_PARENT/parent_proxy in the logs.
A simple analysis shows a low percentage of cache hits:
cat /var/log/squid.user/access.log | grep '2023-05-29 14' | grep pdf | awk '{print $5" "$10}' | sort | uniq -c
24 TCP_CF_MISS/200/- HIER_NONE/-
457 TCP_MISS/200/200 FIRSTUP_PARENT/parent_proxy
10 TCP_MISS/200/- HIER_NONE/-
9 TCP_SWAPFAIL_MISS/200/200 FIRSTUP_PARENT/parent_proxy
So the Hit ratio is about (500-457-9)*100/500=6.8%
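The ratio can also be computed directly from the log. Here is a rough awk
sketch of the same calculation, counting every request that went to
FIRSTUP_PARENT as a miss and everything else as a hit (field 10 is the
%Sh/%<A hierarchy code in the extended-squid format from my config below):

grep '2023-05-29 14' /var/log/squid.user/access.log | grep pdf | \
  awk '{ n++; if ($10 ~ /FIRSTUP_PARENT/) miss++ }
       END { printf "hit ratio: %.1f%%\n", (n - miss) * 100 / n }'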
We see almost the same situation when running 200 threads:
cat /var/log/squid.user/access.log | grep '2023-05-29 15:45' | grep pdf | awk '{print $5" "$10}' | sort | uniq -c
4 TCP_CF_MISS/200/- HIER_NONE/-
140 TCP_MISS/200/200 FIRSTUP_PARENT/parent_proxy
40 TCP_MISS/200/- HIER_NONE/-
16 TCP_SWAPFAIL_MISS/200/200 FIRSTUP_PARENT/parent_proxy
This time the Hit ratio is about (200-140-16)*100/200=22%
With 50 threads the Hit ratio is 90%:
cat /var/log/squid.user/access.log | grep '2023-05-29 15:50' | grep pdf | awk '{print $5" "$10}' | sort | uniq -c
27 TCP_CF_MISS/200/- HIER_NONE/-
1 TCP_MISS/200/200 FIRSTUP_PARENT/parent_proxy
18 TCP_MISS/200/- HIER_NONE/-
4 TCP_SWAPFAIL_MISS/200/200 FIRSTUP_PARENT/parent_proxy
I thought it should always be near 99%: only the first request to a URL
should be forwarded to the parent proxy, and all subsequent requests should
be served from the cache.
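In other words, with perfect request collapsing only one of the N
simultaneous requests should go to the parent, so for N = 500 the expected
ratio would be about

(500 - 1) * 100 / 500 = 99.8%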
The situation is even worse when downloading the large file:
500 threads (0.4%):
cat /var/log/squid.user/access.log | grep '2023-05-29 17:2' | grep pdf | awk '{print $5" "$10}' | sort | uniq -c
10 TCP_CF_MISS/200/200 FIRSTUP_PARENT/parent_proxy
2 TCP_CF_MISS/200/- HIER_NONE/-
488 TCP_MISS/200/200 FIRSTUP_PARENT/parent_proxy
200 threads (3%):
cat /var/log/squid.user/access.log | grep '2023-05-29 17:3' | grep pdf | awk '{print $5" "$10}' | sort | uniq -c
9 TCP_CF_MISS/200/200 FIRSTUP_PARENT/parent_proxy
6 TCP_CF_MISS/200/- HIER_NONE/-
180 TCP_MISS/200/200 FIRSTUP_PARENT/parent_proxy
5 TCP_SWAPFAIL_MISS/200/200 FIRSTUP_PARENT/parent_proxy
50 threads (98%):
cat /var/log/squid.user/access.log | grep '2023-05-29 17:36' | grep pdf | awk '{print $5" "$10}' | sort | uniq -c
25 TCP_CF_HIT/200/- HIER_NONE/-
12 TCP_CF_MISS/200/- HIER_NONE/-
12 TCP_HIT/200/- HIER_NONE/-
1 TCP_MISS/200/200 FIRSTUP_PARENT/parent_proxy
Could you clarify whether this behavior of my Squid is a bug or a
misconfiguration, or whether I'm running into server performance limits
(Squid is running on a VM with 22 cores)?
Kind regards,
Ankor
*squid.conf:*
workers 21
sslcrtd_program /data/squid.user/usr/lib/squid/security_file_certgen -s /data/squid.user/var/lib/squid/ssl_db -M 20MB
sslcrtd_children 21
logformat extended-squid %{%Y-%m-%d %H:%M:%S}tl| %6tr %>a %Ss/%03>Hs/%<Hs %<st %rm %ru %un %Sh/%<A %mt %ea
logfile_rotate 0
access_log daemon:/var/log/squid.user/access.log logformat=extended-squid on-error=drop
cache_peer parent_proxy parent 3128 0
never_direct allow all
cachemgr_passwd pass config
acl PURGE method PURGE
http_access allow PURGE
http_access allow all
http_port 3131 ssl-bump generate-host-certificates=on dynamic_cert_mem_cache_size=20MB tls-cert=/etc/squid.user/sslbump/bump.crt tls-key=/etc/squid.user/sslbump/bump.key
sslproxy_cert_error allow all
acl step1 at_step SslBump1
acl step2 at_step SslBump2
acl step3 at_step SslBump3
ssl_bump peek step1
ssl_bump bump step2
ssl_bump bump step3
cache_dir rock /data/squid.user/cache 20000 max-size=12000000
cache_swap_low 85
cache_swap_high 90
collapsed_forwarding on
pinger_enable off
max_filedesc 8192
shutdown_lifetime 5 seconds
netdb_filename none
log_icp_queries off
client_request_buffer_max_size 100 MB
via off
forwarded_for delete
coredump_dir /data/squid.user/var/cache/squid
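For anyone reproducing this: the config can be syntax-checked and reloaded
with Squid's standard -k commands. The -f path below is just a placeholder
for wherever your copy of the file actually lives:

squid -f /etc/squid.user/squid.conf -k parse
squid -f /etc/squid.user/squid.conf -k reconfigure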
*curl_forker.sh:*
#!/bin/bash
# bash is required: the script uses [[ ]] and the =~ regex operator.
N=100
URL=https://ia600601.us.archive.org/10/items/Linux-Journal-2015-01/Linux-Journal-2015-01.pdf

if [[ -n $1 && $1 =~ help$ ]]; then
    echo "Usage: $0 [<cnt>] [<url>]"
    echo
    echo "Example: $0 10 https://ia600601.us.archive.org/10/items/Linux-Journal-2015-01/Linux-Journal-2015-01.pdf"
    echo
    exit
fi

# A numeric argument sets the thread count; anything else replaces the URL.
while [[ $# -gt 0 ]]; do
    if [[ $1 =~ ^[0-9]+$ ]]; then
        N=$1
    else
        URL=$1
    fi
    shift
done

echo "$URL"
echo "$N threads"

# Fire off N detached curl fetches through the test proxy.
for i in $(seq "$N"); do
    nohup curl --tlsv1.2 -k --proxy 0001vsg01:3131 -v "$URL" >/dev/null 2>&1 &
done
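For reference, this is how I launch the 500-thread test, plus a quick
single-request sanity check first (a repeated fetch of the same URL should
log TCP_HIT or TCP_MEM_HIT; proxy host and log path are the ones from the
script and config above):

curl --tlsv1.2 -k --proxy 0001vsg01:3131 -so /dev/null 'https://ia800406.us.archive.org/13/items/romeo-y-julieta-texto-completo/Romeo%20y%20Julieta%20-%20William%20Shakespeare.pdf'
tail -n 1 /var/log/squid.user/access.log
./curl_forker.sh 500 'https://ia800406.us.archive.org/13/items/romeo-y-julieta-texto-completo/Romeo%20y%20Julieta%20-%20William%20Shakespeare.pdf'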