[squid-users] Fwd: Squid configuration advise

Jean Christophe Ventura ventura.jeanchristophe at gmail.com
Sat Dec 19 20:05:55 UTC 2015


PowerPC is a good CPU.
AIX... well, I know it a little bit, but even IBM doesn't seem to like it
anymore (a lot of stuff runs under PowerPC but not under AIX; Spectrum
Scale and other IBM slideware).

Sometimes you get more horsepower under RHEL/PowerPC than under
AIX/PowerPC, even with IBM AIX support... that's life, and like other
sysadmins we go to the best OS when we have the choice ;)


2015-12-19 20:56 GMT+01:00 Yuri Voinov <yvoinov at gmail.com>:

>
>
> AIX is a great system in good hands. ;)
>
> The problem is usually in the gasket between the seat and the console. :D
>
> 20.12.15 1:53, Jean Christophe Ventura writes:
> > Well,
> >
> > If it were my project: Arch Linux using yaourt, with each needed package
> > compiled in a VM.
> >
> > I work at a company as a sysadmin, and the proxies aren't on my side
> > (network things...).
> > So if I can configure a repository, or sync a repository, the answer is:
> > go ahead, it will be here for years... if not, I have to deal with the
> > distro packages :P
> >
> > Even in a VM I'd prefer a Debian server over RHEL.
> > By the way, the good thing is it's not an AIX proxy ;)
> >
> > I do not understand the love of archaeological fossils. Why do the
> > repositories keep such junk lying around? :-D
> >
> > 20.12.15 0:56, Jean Christophe Ventura writes:
> > > Hi,
> >
> > > I'm currently working on migrating from Squid 2.7 on RHEL5 to Squid
> > > 3.3 on RHEL7.
> >
> > > I have migrated the config files to be 3.3 compliant (CIDR, removal of
> > > deprecated functions, change of cache from UFS to AUFS) without any
> > > other change (cache_mem, policy, SMP).
> >
> > > The new platform is 4 R610 nodes (24 processors with hyperthreading
> > > activated) with 48 GB of RAM, and only 143 GB of disk in RAID for the
> > > OS and cache. Each node is connected to the network using 2x1Gbit
> > > layer-2/3 bonding (some network ports are still available on the
> > > server).
> >
> > > Bandwidth allocated for Internet users: 400 Mbit.
> >
> > > The difference between the old platform and the new one doesn't seem
> > > to be very fantastic :P
> > > I have read a lot of the mailing list history.
> >
> > > Squid release:
> > > I know 3.3 is no longer maintained, but this infrastructure will not
> > > be maintained by me, and I don't think the people running it
> > > afterwards will do the updates themselves.
> > > If an official repository existed, maybe this question could be
> > > reopened (from what I have read, some of you build packages from
> > > source and give them to people).
> >
> > > Squid auth:
> > > It's transparent/basic auth, only filtering some IPs with ACLs.
> >
> > > Squid bandwidth:
> > > Currently a Squid node handles something like 30-50 Mbit (information
> > > gathered using iftop).
> > > From previously read mail I think that's normal for a non-SMP
> > > configuration.
> >
> > > Squid measurements:
> > > [root@xxxx ~]# squidclient mgr:5min | grep 'client_http.requests'
> > > client_http.requests = 233.206612/sec
> > > Other info:
> > > Cache information for squid:
> > >         Hits as % of all requests:      5min: 6.8%, 60min: 7.1%
> > >         Hits as % of bytes sent:        5min: 4.7%, 60min: 4.4%
> > >         Memory hits as % of hit requests:       5min: 21.4%, 60min: 21.5%
> > >         Disk hits as % of hit requests: 5min: 34.7%, 60min: 30.8%
> > >         Storage Swap size:      9573016 KB
> > >         Storage Swap capacity:  91.3% used,  8.7% free
> > >         Storage Mem size:       519352 KB
> > >         Storage Mem capacity:   99.1% used,  0.9% free
> > >         Mean Object Size:       47.71 KB
> >
> > > Now, questions and requests for advice:
> >
> > > These metrics seem too low to me. Does anyone agree?
> >
> > > 4 nodes x 50 Mbit per node = 200 Mbit.
> > > To handle the maximum bandwidth (400 Mbit) plus the loss of one host,
> > > I need to configure 4 workers per node.
> > > Is there any reason or brilliant idea for using more (I will still
> > > have some cores available)? Is this calculation too empirical?
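As a sketch of the per-node setup described above (the worker count and core numbers are assumptions taken from the 4-workers-per-node calculation, not a tested configuration):

```
# squid.conf sketch: 4 SMP workers per node (values are assumptions)
workers 4
# optionally pin each worker process to its own CPU core
cpu_affinity_map process_numbers=1,2,3,4 cores=1,2,3,4
```

With `workers`, Squid starts that many kid processes sharing the listening ports, which is what lets one node use more than one core for traffic.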
> >
> > > This URL http://wiki.squid-cache.org/ConfigExamples/SmpCarpCluster
> > > seems to be a good start :P
> > > Using this method I can interconnect the proxies so they share their
> > > caches (maybe using dedicated network ports). Useful or not? Could
> > > this increase the hit ratio? If this idea isn't stupid, should I
> > > interconnect through the frontends only, or directly to each backend?
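Following the SmpCarpCluster pattern referenced above, the frontend balances requests over the caching backends with CARP. A minimal single-host sketch (ports and addresses are assumptions; the wiki page shows the full `${process_number}`-based variant):

```
# sketch based on the SmpCarpCluster wiki pattern (ports are assumptions)
# frontend: terminates clients, no cache of its own beyond memory,
# distributes requests over the backends by CARP hashing of the URL
http_port 3128
cache_peer 127.0.0.1 parent 4001 0 carp no-digest
cache_peer 127.0.0.1 parent 4002 0 carp no-digest
never_direct allow all

# each backend runs with its own http_port (4001, 4002, ...) and cache_dir
```

Because CARP hashes each URL to one backend, a given object is only stored once across the backends, which is what raises the effective hit ratio compared to independent caches.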
> >
> > > For now I have:
> > > - 100 GB of disk available for cache
> > > - 40 GB of RAM (leaving 8 GB for the OS plus Squid's
> > >   disk-cache-related RAM usage)
> >
> > > 1 frontend with the RAM cache and 4 backends with disk cache.
> > > AUFS or Rock cache? A mix of them? 50% each? Maybe another rule?
> > > (I think it will depend on the cache content, but any advice or
> > > method is welcome.)
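One common way to mix the two stores, as a sketch (sizes and paths are assumptions; in Squid 3.3 a Rock store only holds small objects, so a typical split is Rock for small objects and AUFS for everything larger):

```
# sketch: Rock for small objects, AUFS for the rest
# (sizes and paths are assumptions, to be tuned to the 100 GB available)
cache_dir rock /cache/rock 20000 max-size=32768
cache_dir aufs /cache/aufs 60000 16 256 min-size=32769
```

The `max-size`/`min-size` limits keep the two stores from competing for the same objects.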
> >
> > > I can get more speed and/or space for the disk cache using a SAN; do
> > > you know whether the cache I/O pattern is sequential or random?
> >
> > > Any advice/rules to increase the hit ratio? :)
> > > Any general advice/rules?
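For the hit-ratio question, the usual starting points are the object-size limits and the refresh heuristics; a sketch (all values are assumptions to tune, not recommendations):

```
# sketch: knobs that commonly affect hit ratio (values are assumptions)
maximum_object_size 512 MB
maximum_object_size_in_memory 1 MB
# cache static assets more aggressively than the default heuristic
refresh_pattern -i \.(jpg|png|gif|css|js)$ 1440 50% 10080
refresh_pattern . 0 20% 4320
```

Raising `maximum_object_size` lets large downloads be cached at all, while `refresh_pattern` controls how long objects without explicit expiry headers are considered fresh.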
> >
> > > Thanks for your help
> >
> >
> > > Jean Christophe VENTURA
> > > _______________________________________________
> > > squid-users mailing list
> > > squid-users at lists.squid-cache.org
> > > http://lists.squid-cache.org/listinfo/squid-users
> >
> >
>
>

