[squid-users] Rock Store max object size 3.5.14

Yuri Voinov yvoinov at gmail.com
Tue Feb 23 20:04:49 UTC 2016


Agreed.

High-load, sufficiently large caches must run on _adequate_ hardware,
with enough capacity to meet your expectations.

And, of course, the cache software configuration must fit that hardware
to get the most out of it.

24.02.16 1:55, Amos Jeffries wrote:
> [ PPS: please don't hijack other people's threads ... this has nothing
> to do with YouTube ]
>
> On 24/02/2016 8:11 a.m., Heiler Bemerguy wrote:
>>
>> Thanks Alex.
>>
>> We have a simple cache_dir config like this, with no "workers" defined:
>> cache_dir rock /cache2 80000 min-size=0 max-size=32767
>> cache_dir aufs /cache 320000 96 256 min-size=32768
>>
>> And we are suffering from a 100% CPU use by a single squid thread. We
>> have lots of ram, cores and disk space..
>
> Squid is essentially single-threaded (not completely, so dual-core has
> benefit, but close). Without SMP enabled you will not benefit from those
> "lots of cores".
>
>
>> but also too many users:
>> Number of clients accessing cache:      1634
>> Number of HTTP requests received:       3276691
>> Average HTTP requests per minute since start:   12807.1
>> Select loop called: 60353401 times, 22.017 ms avg
>>
>
> What GHz rating is each CPU core?  200-250 RPS is roughly in the range I
> would expect from a 1.xGHz core going full speed / 100% usage.
>
> Are you using RAID on the disk storage? IME, RAID can more than halve
> the speed of the proxy. Although the CPU thrashing effect is mostly
> hidden away out of sight in the disk controller processor(s).
>
>
>> Getting rid of this big aufs and spreading to many rock stores will
>> improve things here? I've already shrunk the acls and
>> patterns/regexes etc
>>
>
> YMMV but I doubt it. AUFS has 64 disk I/O threads taking advantage of
> those other cores. Without SMP, rock is restricted to fewer threads for
> its I/O, and most of the work is done by the main worker core anyway,
> apart from the disk I/O portion.
>
>
> With CPU maxing out as the bottleneck, I would be looking first at
> config performance (you say you have done that already), then at Squid
> SMP workers as the next workaround, with disk efficiencies later
> if/when they become relevant.
>
> Amos
>
> _______________________________________________
> squid-users mailing list
> squid-users at lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users
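
For reference, Amos's suggested SMP-workers workaround might look roughly
like this in squid.conf. This is a hypothetical, untested sketch (the
worker count and the per-worker aufs path are my assumptions, not from
the thread): rock cache_dirs are SMP-aware and shared between workers,
while aufs is not, so each worker needs its own aufs directory, e.g. via
the ${process_number} macro:

    # Hypothetical sketch only -- adjust worker count and paths to the host.
    workers 4

    # Rock store: SMP-aware, shared by all workers.
    cache_dir rock /cache2 80000 min-size=0 max-size=32767

    # AUFS is not SMP-aware: give each worker its own directory
    # using the ${process_number} macro.
    cache_dir aufs /cache/worker-${process_number} 80000 96 256 min-size=32768

Each aufs directory here gets a quarter of the original 320000 MB, so the
total on-disk footprint stays roughly the same; each must also be created
and initialized (squid -z) before the workers start.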


