[squid-users] Fwd: Squid configuration advise
Yuri Voinov
yvoinov at gmail.com
Sat Dec 19 19:51:19 UTC 2015
I'm sorry if that upset you. :)
On 20.12.15 0:56, Jean Christophe Ventura wrote:
> Hi,
>
> I'm currently working on migrating Squid 2.7 on RHEL5 to Squid 3.3 on RHEL7.
>
> I have migrated the config files to be 3.3-compliant (CIDR notation,
> removal of deprecated directives, change of the cache from UFS to AUFS)
> without any other change (cache_mem, policies, SMP).
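
(For reference, a minimal sketch of that UFS-to-AUFS change in squid.conf;
the path and size values are placeholders, not taken from the original
config:)

    # old UFS store (placeholder path and size)
    # cache_dir ufs /var/spool/squid 100000 16 256
    # AUFS keeps the same on-disk layout, but uses asynchronous I/O threads
    cache_dir aufs /var/spool/squid 100000 16 256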
>
> The new platform is 4 R610 nodes (24 logical processors with
> hyperthreading enabled), each with 48GB of RAM and only 143GB of disk in
> RAID for the OS and the cache. Each node is connected to the network
> using 2x1Gbit layer 2/3 bonding (some network ports are still available
> on the servers).
>
> Bandwidth allocated to Internet users: 400Mbit.
>
> The difference between the old platform and the new one doesn't seem
> to be very impressive :P
> I have read a lot of the mailing list history.
>
> Squid release:
> I know 3.3 is no longer maintained, but this infrastructure will not
> be maintained by me, and I don't think the people taking it over will
> do the updates themselves.
> If an official repository existed, maybe this question could be
> reopened (from what I have read, it's more that some of you build
> packages from source and give them to people).
>
> Squid auth:
> It's transparent/basic auth, only filtering some IPs with ACLs.
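
(A minimal sketch of what that kind of basic-auth-plus-ACL setup usually
looks like in squid.conf; the helper path and addresses are assumptions,
not taken from the original configuration:)

    # basic auth via the NCSA helper (path is a RHEL7 assumption)
    auth_param basic program /usr/lib64/squid/basic_ncsa_auth /etc/squid/passwd
    acl authenticated proxy_auth REQUIRED
    # example source-IP filter (addresses are made up)
    acl blocked_hosts src 10.0.0.0/24
    http_access deny blocked_hosts
    http_access allow authenticated
    http_access deny all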
>
> Squid bandwidth:
> Currently a Squid node handles something like 30-50Mbit (measured
> with iftop).
> From mails I have read previously, I think that is normal for a non-SMP
> configuration.
>
> Squid measure:
> [root@xxxx ~]# squidclient mgr:5min | grep 'client_http.requests'
> client_http.requests = 233.206612/sec
> Other info:
> Cache information for squid:
> Hits as % of all requests: 5min: 6.8%, 60min: 7.1%
> Hits as % of bytes sent: 5min: 4.7%, 60min: 4.4%
> Memory hits as % of hit requests: 5min: 21.4%, 60min: 21.5%
> Disk hits as % of hit requests: 5min: 34.7%, 60min: 30.8%
> Storage Swap size: 9573016 KB
> Storage Swap capacity: 91.3% used, 8.7% free
> Storage Mem size: 519352 KB
> Storage Mem capacity: 99.1% used, 0.9% free
> Mean Object Size: 47.71 KB
>
> Now, questions and requests for advice:
>
> These metrics seem too low to me. Does anyone agree?
>
> 4 nodes x 50Mbit per node = 200Mbit.
> To handle the maximum bandwidth (400Mbit) plus the loss of one host, I
> need to configure 4 workers per node.
> Is there any reason or brilliant idea to go higher (I will still have
> some cores available)? Is that calculation too empirical?
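
(A minimal squid.conf sketch of that idea; the worker count is just the
number from the calculation above, and the CPU core mapping is an example,
not a recommendation:)

    # start 4 SMP worker (kid) processes on this node
    workers 4
    # optionally pin each worker to its own core
    cpu_affinity_map process_numbers=1,2,3,4 cores=1,2,3,4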
>
> This URL, http://wiki.squid-cache.org/ConfigExamples/SmpCarpCluster,
> seems to be a good start :P
> Using this method I can interconnect the proxies so they share their
> caches (maybe using a dedicated network port). Useful or not? Could
> this increase the hit ratio? If this idea isn't stupid, should the
> interconnect go through the frontends only, or directly from each node
> to each other?
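
(That wiki page centres on a frontend that CARP-balances requests across
cache-holding backends via cache_peer lines; a minimal sketch of the
frontend side, with made-up addresses, ports and names, could look like
this:)

    # frontend: no disk cache, route each URL to a consistent backend
    cache_peer 127.0.0.1 parent 4001 0 carp name=backend1
    cache_peer 127.0.0.1 parent 4002 0 carp name=backend2
    never_direct allow all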
>
> For now I have:
> - 100GB of disk available for cache
> - 40GB of RAM (leaving 8GB for the OS plus Squid's disk-cache-related
> RAM usage)
>
> 1 frontend with the RAM cache and 4 backends with disk caches.
> AUFS or Rock cache? A mix of them? 50% each? Maybe another rule?
> (I think it will depend on the cache content, but any advice or method
> is welcome.)
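
(A minimal sketch of how that frontend/backend split is commonly written
in an SMP squid.conf, using process_number conditionals; every port, path
and size below is a placeholder, not a value from this thread:)

    workers 5
    if ${process_number} = 5
    # worker 5 is the frontend: memory cache only, no cache_dir
    http_port 3128
    cache_mem 16384 MB
    else
    # workers 1-4 are backends, each owning its own AUFS directory
    http_port 400${process_number}
    cache_mem 0 MB
    cache_dir aufs /cache/squid${process_number} 25000 16 256
    endif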
>
> I can get more speed and/or space for the disk cache using a SAN; do
> you know if the cache's disk I/O is sequential or random?
>
> Any advice/rules to increase the hit ratio? :)
> Any general advice/rules?
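
(Purely as an illustration of the kind of squid.conf knobs usually
mentioned for hit ratio, not advice given in this thread; the values are
placeholders:)

    # allow larger objects into the disk cache (the default is small)
    maximum_object_size 512 MB
    # allow somewhat larger objects into the memory cache
    maximum_object_size_in_memory 1 MB
    # example refresh_pattern keeping static content cached longer
    refresh_pattern -i \.(jpg|jpeg|png|gif|css|js)$ 1440 80% 10080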
>
> Thanks for your help
>
>
> Jean Christophe VENTURA
> _______________________________________________
> squid-users mailing list
> squid-users at lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users