[squid-users] Optimizing squid
Heiler Bemerguy
heiler.bemerguy at cinbesa.com.br
Thu Feb 25 18:44:34 UTC 2016
Since it started with both cache_dirs...
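For reference, the two rock stores under discussion in the quoted thread split objects by size; a sketch of that configuration, with the editor's reading of each knob in comments (paths and sizes are taken from the thread; the comments are an interpretation, not authoritative documentation):

```
# Two-tier rock setup from the thread below:
cache_dir rock /cache2/rock1 20000 min-size=0    max-size=4096  slot-size=2048
cache_dir rock /cache2/rock2 30000 min-size=4097 max-size=16384 slot-size=4096
# min-size/max-size select which responses each dir may store (bytes);
# slot-size is the allocation unit, so smaller slots waste less space on
# small objects, at the cost of more slots per large object.
```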
On 25/02/2016 15:32, Yuri Voinov wrote:
>
> -----BEGIN PGP SIGNED MESSAGE-----
> Hash: SHA256
>
> Don't think so.
>
> Do these messages flood all the time?
>
> On 26.02.16 0:17, Heiler Bemerguy wrote:
> > I waited for squid -z to finish.. did a "ps auxw | grep squid" a dozen
> > times to check.. THEN I started it.
> > It may have tried to serve something, as lots of users were already
> > connecting to it right after it started, but I'm still seeing a flood
> > of warnings in error.log:
> >
> > 2016/02/25 15:06:38 kid1| WARNING: swapfile header inconsistent with available data
> > 2016/02/25 15:06:38 kid1| WARNING: swapfile header inconsistent with available data
> > 2016/02/25 15:06:38 kid1| WARNING: swapfile header inconsistent with available data
> > 2016/02/25 15:06:39 kid1| WARNING: swapfile header inconsistent with available data
> >
> > It's curious that with only one cache_dir (the first one), I didn't
> > receive any of these errors...
> > Maybe the non-rounded "4097" value is causing an issue?
> >
> > Best Regards,
> >
> > --
> > Heiler Bemerguy - (91) 98151-4894
> > Assessor Técnico - CINBESA (91) 3184-1751
> >
> > On 25/02/2016 14:18, Amos Jeffries wrote:
> >> On 26/02/2016 5:58 a.m., Heiler Bemerguy wrote:
> >>> Hi Alex, Eliezer, Yuri, Amos..
> >>>
> >>> So, to start from the start: after seeing squid was totally stable
> >>> and fast running with NO cache_dirs, I tried to add only 2 rockstore
> >>> cache_dirs to test.
> >>>
> >>> conf:
> >>> cache_dir rock /cache2/rock1 20000 min-size=0 max-size=4096 slot-size=2048
> >>> cache_dir rock /cache2/rock2 30000 min-size=4097 max-size=16384 slot-size=4096
> >>> (ps.: I know it would be nice to use one store PER
> >>> partition/disk/lun whatever.. but I'm trying to lessen disk waste by
> >>> using small slot-sizes for small files.. am I wrong?)
> >>>
> >>> Then squid -z:
> >>> 2016/02/25 13:42:00 kid2| Creating Rock db: /cache2/rock1/rock
> >>> 2016/02/25 13:42:00 kid3| Creating Rock db: /cache2/rock2/rock
> >>>
> >>> Then running squid for the first time with these newly created rock
> >>> stores....
> >>>
> >>> 2016/02/25 13:42:09 kid3| Loading cache_dir #1 from /cache2/rock2/rock
> >>> 2016/02/25 13:42:09 kid2| Loading cache_dir #0 from /cache2/rock1/rock
> >>> 2016/02/25 13:42:09 kid3| Store rebuilding is 0.01% complete
> >>> 2016/02/25 13:42:09 kid2| Store rebuilding is 0.01% complete
> >>>
> >>> Rebuilding what? Just creating the huge files, I think...
> >>
> >> The cache index for those rock DBs.
> >>
> >> Unlike UFS, which stores a swap.state file, rock rebuilds its index
> >> on each startup.
> >>
> >>> Then:
> >>> 2016/02/25 13:42:19 kid1| WARNING: swapfile header inconsistent with available data
> >>> 2016/02/25 13:42:21 kid2| WARNING: cache_dir[0]: Ignoring malformed cache entry meta data at 6943832064
> >> <snip repeats>
> >>> 2016/02/25 13:42:40 kid1| ctx: enter level 0: 'http://static.bn-static.com/pg/0plcB0QjJpBbwN7rMMDjKKO5Z63Nhu3zfPw==.gif'
> >>> 2016/02/25 13:42:40 kid1| WARNING: swapfile header inconsistent with available data
> >>> 2016/02/25 13:42:40 kid2| WARNING: cache_dir[0]: Ignoring malformed cache entry meta data at 19581075456
> >>> 2016/02/25 13:42:41 kid2| WARNING: cache_dir[0]: Ignoring malformed cache entry meta data at 19757760512
> >>> 2016/02/25 13:42:43 kid2| Finished rebuilding storage from disk.
> >>> 2016/02/25 13:42:43 kid2| 10239992 Entries scanned
> >>> 2016/02/25 13:42:43 kid2| 14 Invalid entries.
> >>>
> >>> What entry? Why malformed? Wasn't it just an empty store?! It just
> >>> created it.......
> >>
> >> Did you wait for the -z background processes to finish creating the
> >> 50GB of disk allocation before starting the main Squid process?
> >>
> >> Are your workers trying to serve up traffic to or from the cache
> >> before the rebuild has completed?
> >>
> >> As you can see from the log timestamps on startup, it will take
> >> ~30-60 sec for rock caches of that size to be loaded on your system.
> >>
> >> Amos
> >>
> >> _______________________________________________
> >> squid-users mailing list
> >> squid-users at lists.squid-cache.org
> >> http://lists.squid-cache.org/listinfo/squid-users
>
> -----BEGIN PGP SIGNATURE-----
> Version: GnuPG v2
>
> iQEcBAEBCAAGBQJWz0ioAAoJENNXIZxhPexGqv0H/iumDpA03D6UDC7Px+Scdrgm
> +u/Bnf7MbXRX4UptkoYZ0WdXfBUgaeGfQvajZRpegktqzdmf+tn85uS+JZ5tjtAN
> /MBQLTFQiYzjiYEma3wH2GrhHdOHGAQytlO6vyP+KJXkj+XEZlKapfDxMe9jfe0P
> PJ/6Q+zy+3LgGT4BnZzBN50YZ42RtQ5eF72W2t6y+XLDs6behYf2xqrtq0CGiOY8
> KwtE3SsN601VAgbd4eZqbZT0tp8DO8/qAWHqnsu2goiTkjIyrBHWXO8Rx4fm9DCt
> rUwuqzOncu/mWMXVrNlZdPSp39T05u3Wdi/eQU4E0F1OppPBMVp3+OrBIEvcHLE=
> =LI6z
> -----END PGP SIGNATURE-----
>
>
>
--
Heiler Bemerguy - (91) 98151-4894
Assessor Técnico - CINBESA (91) 3184-1751
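Amos's advice in the quoted thread (let squid -z finish before starting the main process) can be sketched as a small startup script. This is an illustrative sketch, not a tested procedure; it assumes squid is on PATH and reads its default config, and it automates the "ps auxw | grep squid" check from the thread:

```shell
#!/bin/sh
# Illustrative only: create the rock DBs, wait for the background
# "-z" helper processes to exit, then start the main Squid instance.
squid -z                 # spawns kid processes that allocate the rock DB files

# Poll until no squid process remains; pgrep exits non-zero when
# nothing matches, which ends the loop.
while pgrep -x squid >/dev/null 2>&1; do
    sleep 2
done

squid                    # safe to start: the index rebuild sees a quiescent store
```

The point of the loop is that `squid -z` can return to the shell while its kids are still allocating tens of GB on disk; starting the main instance before they exit is what produces the "swapfile header inconsistent with available data" flood described above.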