[squid-users] Squid3: 100 % CPU load during object caching

Eliezer Croitoru eliezer at ngtech.co.il
Wed Jul 22 18:59:32 UTC 2015


Hey Jens,

I have tested the issue with LARGE ROCK and not AUFS or UFS.
With or without Squid, my connection to the server is about 2.5 MB/s (20 Mbps).
Squid is sitting on an Intel Atom with an SSD drive, and on a HIT the
download speed more than doubles, to 4.5 MB/s (36 Mbps).
I have not tried it with AUFS yet.

My testing machine is an Arch Linux box running a self-compiled Squid, with
diskd replaced by rock in the Arch Linux compilation options.

You can take a look at the HIT log at:
http://paste.ngtech.co.il/pnhkglgsu

Eliezer

On 22/07/2015 21:07, Jens Offenbach wrote:
> I will send you my current settings tomorrow. I have used AUFS as the caching format, but I have also tested UFS. The format seems to have no influence on the issue.
>
> I have tested the 1 GB Ubuntu 15.04 image (ubuntu-15.04-desktop-amd64.iso). This is the link http://releases.ubuntu.com/15.04/ubuntu-15.04-desktop-amd64.iso.
>
> If you want to stress caching further with large files, you can use one of these:
> https://surfer.nmr.mgh.harvard.edu/fswiki/Download
>
> But I think the CentOS 7 ISOs are large enough. In my test scenario, I have put all files on an internal web server that serves them at a stable 120 MB/s, so the problem does not come from a slow network connection. I have checked network connectivity with iperf3 (>= 900 Mbit/s) and done a direct wget without Squid: the file downloads at high speed. As soon as Squid is added to the communication flow and caches the file on the first request, the issue occurs. After some minutes, the download rate drops to 500 KB/s and stays at this level, together with 100 % CPU load. The download rate corresponds to the disk I/O: the file gets written at 500 KB/s.
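The measurement procedure described above can be sketched roughly as follows; the hostnames, URLs, and proxy port are placeholders standing in for the internal test environment, not the actual setup:

```shell
# Rough sketch of the test procedure: compare raw link speed, a direct
# download, and the same download through Squid. All endpoints here are
# placeholders for the internal test environment.
measure() {
  url=$1; proxy=$2
  # Time a download to /dev/null, optionally via an HTTP proxy.
  time env ${proxy:+http_proxy="$proxy"} wget -q -O /dev/null "$url"
}

# Raw link speed:   iperf3 -c webserver.internal
# Direct download:  measure http://webserver.internal/CentOS-7.iso
# Through Squid:    measure http://webserver.internal/CentOS-7.iso http://squid-host:3128
```

If the direct download saturates the link but the proxied one crawls while Squid burns CPU, the bottleneck is inside Squid, not the network.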
>
> Thank you very much!
>
>
> Sent: Wednesday, 22 July 2015 at 18:28
> From: "Eliezer Croitoru" <eliezer at ngtech.co.il>
> To: squid-users at lists.squid-cache.org
> Subject: Re: [squid-users] Squid3: 100 % CPU load during object caching
> Can you share the relevant squid.conf settings? Just so I can reproduce it.
>
> I have a dedicated testing server here on which I can test the issue,
> with an 8 GB archive (it might be an ISO) that can be cached on the
> AUFS/UFS and large rock cache types.
>
> I am pretty sure that the maximum cache object size is one thing to
> change; what else?
>
> From what I understand, the behavior should not differ between a 2 GB
> cached archive and an 8 GB one.
> I have a local copy of centos 7 ISO which should be a test worthy object.
> Anything more you can add to the test subject?
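For reference, the directives I would expect to matter for caching multi-GB objects look roughly like this; a sketch with illustrative values, not tested recommendations:

```
# squid.conf fragment -- illustrative values only
maximum_object_size 10 GB
cache_dir aufs /var/spool/squid 40000 16 256
# or, for a large rock store:
#cache_dir rock /var/spool/squid 40000 slot-size=16384 max-size=10737418240
```

The default maximum_object_size is only a few MB, so without raising it an ISO-sized object would never be cached at all.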
>
> Eliezer
>
> On 22/07/2015 16:24, Jens Offenbach wrote:
>> I checked the bug you mentioned and I think I am confronted with the same
>> issue. I was able to build and test Squid 3.5.6 on Ubuntu 14.04.2 x86_64 and
>> observed the same behavior. I tested an 8 GB archive file and got 100 %
>> CPU usage and a download rate of nearly 500 KB/s while the object was being cached.
>> I attached strace to the running process, but killed it after 30 minutes.
>> The whole download takes hours, although we have 1-Gbit Ethernet:
>>
>> Process 4091 attached
>> Process 4091 detached
>> % time     seconds  usecs/call     calls    errors syscall
>> ------ ----------- ----------- --------- --------- ----------------
>>  78.83    2.622879           1   1823951           write
>>  12.29    0.408748           2    228029         2 read
>>   6.18    0.205663           0    912431         1 epoll_wait
>>   2.58    0.085921           0    456020           epoll_ctl
>>   0.09    0.002919           0      6168           brk
>>   0.02    0.000623           2       356           openat
>>   0.01    0.000286           0       712           getdents
>>   0.00    0.000071           1        91           getrusage
>>   0.00    0.000038           0       362           close
>>   0.00    0.000003           2         2           sendto
>>   0.00    0.000001           0         3         1 recvfrom
>>   0.00    0.000000           0         2           open
>>   0.00    0.000000           0         3           stat
>>   0.00    0.000000           0         1         1 rt_sigreturn
>>   0.00    0.000000           0         1           kill
>>   0.00    0.000000           0         4           fcntl
>>   0.00    0.000000           0         2         2 unlink
>>   0.00    0.000000           0         1           getppid
>> ------ ----------- ----------- --------- --------- ----------------
>> 100.00    3.327152               3428139         7 total
>>
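A quick back-of-envelope check on "takes hours", assuming the rate stays pinned at the observed 500 KB/s:

```shell
# Sanity check: how long does an 8 GB file take at a steady 500 KB/s?
bytes=$((8 * 1024 * 1024 * 1024))
rate=$((500 * 1024))                    # bytes per second
secs=$((bytes / rate))
echo "$((secs / 3600)) h $(( (secs % 3600) / 60 )) min"   # -> 4 h 39 min
```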
>> Can I do anything to help get rid of this problem?
>>
>>
>> Sent: Tuesday, 21 July 2015 at 17:37
>> From: "Amos Jeffries" <squid3 at treenet.co.nz>
>> To: "Jens Offenbach" <wolle5050 at gmx.de>, "squid-users at lists.squid-cache.org"
>> <squid-users at lists.squid-cache.org>
>> Subject: Re: Re: Re: [squid-users] Squid3: 100 % CPU load during object caching
>> On 22/07/2015 12:31 a.m., Jens Offenbach wrote:
>>> Thank you very much for your detailed explanations. We want to use Squid to
>>> accelerate our automated software setup processes via Puppet. Actually,
>>> Squid will host only a very small number of large objects (10-20). Its purpose
>>> is not to cache web traffic or small objects.
>>
>> Ah, Squid does not "host", it caches. The difference may seem trivial at
>> first glance but it is the critical factor between whether a proxy or a
>> local web server is the best tool for the job.
>>
>> From my own experience with Puppet, yes, Squid is the right tool. But
>> only because the Puppet server was using relatively slow Python code to
>> generate objects and not doing server-side caching of its own. If that
>> situation has changed in recent years, then Squid's usefulness will also
>> have changed.
>>
>>
>>> The hit-ratio for all the hosted
>>> objects will be very high, because most of our VMs require the same software
>> stack.
>>> I will update my config according to your comments! Thanks a lot!
>>> But actually I still have no idea why the download rates are so unsatisfying.
>>> We are still in the test phase. We have only one client that requests a large
>>> object from Squid, and the transfer rates are lower than 1 MB/s during cache
>>> build-up, without any form of concurrency. Have you got an idea what could be
>>> the source of the problem here? Why does the Squid process cause 100 % CPU usage?
>>
>> I did not see any config causing the known 100% CPU bugs to be
>> encountered in your case (eg. HTTPS going through delay pools guarantees
>> 100% CPU). Which leads me to think it's probably related to memory
>> shuffling. (<http://bugs.squid-cache.org/show_bug.cgi?id=3189> appears
>> to be the same issue, and it is still unidentified.)
>>
>> As for speed, if the CPU is maxed out by one particular action, Squid
>> won't have time for much other work. So things go slow.
>>
>> On the other hand, Squid is also optimized for relatively high-traffic
>> usage. For very small client counts (such as under 10) it is effectively
>> running in idle mode 99% of the time. The I/O event loop starts pausing
>> in 10 ms blocks, waiting to see whether some more useful amount of work
>> can be done at the end of the wait. That can lead to apparent network
>> slowdown, as TCP gets up to 10 ms of delay per packet. But that should
>> not be visible in the CPU numbers.
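As a very rough illustration of that ceiling, assuming (unrealistically) a single full-size TCP segment delivered per 10 ms poll cycle and ignoring TCP windowing entirely:

```shell
# Worst-case sketch: one 1460-byte TCP payload per 10 ms poll pause.
mss=1460          # typical TCP payload per segment, in bytes
delay_ms=10       # idle-mode poll interval described above
echo "$((mss * 1000 / delay_ms / 1024)) KB/s"   # -> 142 KB/s ceiling
```

Real TCP keeps many segments in flight, so the actual effect is far smaller; the point is only that a per-packet 10 ms stall is large relative to gigabit line rate.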
>>
>>
>> That said, one client can still max out Squid's CPU and/or NIC throughput
>> capacity on a single request if it's pushing/pulling packets fast enough.
>>
>>
>> If you can attach the strace tool to Squid while it's consuming the CPU,
>> there might be some better hints about where to look.
>>
>>
>> Cheers
>> Amos
>>
>>
>>
>> _______________________________________________
>> squid-users mailing list
>> squid-users at lists.squid-cache.org
>> http://lists.squid-cache.org/listinfo/squid-users
>>
>
>
>



