[squid-users] squid 3.5.27 does not respect cache_dir-size but uses 100% of partition and fails
Eliezer Croitoru
eliezer at ngtech.co.il
Fri Jul 13 01:23:29 UTC 2018
Great!
I am testing it with 4.1, since UFS and AUFS are great but don't support SMP.
Eliezer
* another thread on the way to the list.
----
Eliezer Croitoru
Linux System Administrator
Mobile: +972-5-28704261
Email: eliezer at ngtech.co.il
-----Original Message-----
From: Alex Rousskov [mailto:rousskov at measurement-factory.com]
Sent: Friday, July 13, 2018 3:24 AM
To: squid-users at lists.squid-cache.org
Cc: Eliezer Croitoru <eliezer at ngtech.co.il>
Subject: Re: [squid-users] squid 3.5.27 does not respect cache_dir-size but uses 100% of partition and fails
On 07/12/2018 06:20 PM, Eliezer Croitoru wrote:
> From the docs:
> http://www.squid-cache.org/Versions/v4/cfgman/cache_swap_low.html
>
> I see that this is only for UFS/AUFS/diskd and not rock cache_dir.
> What about rock cache_dir?
Rock cache_dirs cannot overflow by design. Rock reserves a configured
amount of disk space and uses nothing but that amount of disk space. Due
to optimistic allocation by file systems, you can still run out of disk
space if something else consumes space on the same partition, but the
rock database itself cannot overflow.
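For readers unfamiliar with rock stores, the fixed-budget behaviour Alex describes comes from the size given on the cache_dir line. A minimal sketch (the path, size, and max-size values here are illustrative, not from this thread):

```
# squid.conf sketch: a rock store capped at 20000 MB of disk
# (path and limits are illustrative assumptions)
cache_dir rock /var/spool/squid/rock 20000 max-size=32768
```

Squid pre-sizes the rock database file to that budget, which is why the store itself cannot grow past it, though other writers on the same partition still can fill the disk.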
Alex.
> -----Original Message-----
> From: squid-users [mailto:squid-users-bounces at lists.squid-cache.org] On Behalf Of Amos Jeffries
> Sent: Friday, July 13, 2018 1:33 AM
> To: squid-users at lists.squid-cache.org
> Subject: Re: [squid-users] squid 3.5.27 does not respect cache_dir-size but uses 100% of partition and fails
>
> On 13/07/18 04:16, Alex Rousskov wrote:
>> On 07/12/2018 05:53 AM, pete dawgg wrote:
>>
>>
>>> When there is no traffic squid seems to be cleaning up well enough: over
>>> night (no traffic) disk usage went down to 30GB (now it's at 50GB
>>> again)
>>
>> This may be a sign that your Squid cannot keep up with the load. IIRC,
>> AUFS uses lazy garbage collection so it is possible for the stream of
>> new objects to outpace the stream of object deletion events, resulting
>> in a gradually increasing cache size. Using even more aggressive
>> cache_swap_high might help, but there is no good configuration solution
>> to this UFS problem AFAIK.
>>
>
> FYI, to be more aggressive place the two limits closer together.
>
> I made the removal rate grow in steps of the difference between the
> marks. A low of 60 and high of 70 means there are 4 steps of 10 between
> 60% and 100% full cache - so Squid will be removing 4*200 objects/sec
> when the cache is 99.999% full. But a low of 90 and high 91 will remove
> 10*200 objects/sec at the same full point.
>
> Low numbers like 60 or 70 are only needed now if you have to push the
> removal rate past 2K objects/sec; for example, low 60 and high 61 will
> be removing 40*200 = 8K objects/sec.
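The stepped rate Amos describes can be sketched as a small calculation. This is a model of the description above, not Squid's actual implementation; the 200 objects/sec base rate per step is taken from his numbers:

```python
import math

BASE_RATE = 200  # objects/sec added per step, per the figures above


def purge_rate(low, high, fullness_pct):
    """Removal rate at a given cache fullness (percent).

    The rate grows by BASE_RATE for each (high - low)-sized step that
    fullness has climbed above the low watermark, so a narrower gap
    means more steps and a faster purge near 100% full.
    """
    gap = high - low
    if fullness_pct <= low:
        return 0  # below the low watermark, nothing is purged
    steps = math.ceil((fullness_pct - low) / gap)
    return steps * BASE_RATE


# Reproducing the examples from the message:
print(purge_rate(60, 70, 99.999))  # 4 steps  -> 800 objects/sec
print(purge_rate(90, 91, 99.999))  # 10 steps -> 2000 objects/sec
print(purge_rate(60, 61, 99.999))  # 40 steps -> 8000 objects/sec
```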
>
>
> If you know your peak traffic rate in req/sec you should be able to tune
> the purge rate to match that peak traffic rate. How quickly traffic
> reaches that peak should inform the size of the gap between the watermarks.
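Putting the advice above into configuration terms, a high, narrow pair of watermarks could look like this (the exact values are illustrative and should be tuned to your traffic):

```
# squid.conf sketch: watermarks placed high and close together
# for an aggressive but late-starting purge (illustrative values)
cache_swap_low  90
cache_swap_high 91
```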
>
> Amos
> _______________________________________________
> squid-users mailing list
> squid-users at lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users
>