[squid-users] squid cache takes a break

Amos Jeffries squid3 at treenet.co.nz
Mon Sep 11 13:34:19 UTC 2017


On 11/09/17 20:49, Vieri wrote:
> 
> ________________________________
> From: Amos Jeffries
>>
>> a) start fewer helpers at a time.
>>
>> b) reduce cache_mem.
>>
>> c) add concurrency support to the helpers.
> 
> 
> So I decreased the startup, idle, cache_mem values:
> 
> # egrep 'startup=|idle=' squid.conf
> external_acl_type bllookup ttl=86400 negative_ttl=86400 children-max=80 children-startup=10 children-idle=3 %URI /opt/custom/scripts/run/scripts/firewall/squid_url_lookup.pl [...]
> sslcrtd_children 128 startup=10 idle=3
> 
> # grep cache_mem squid.conf
> cache_mem 64 MB
> 
> I also set debug_options to "ALL,1 5,9 50,6 51,3 54,9".
> 
> As far as concurrency is concerned, I have never written a helper that supports this feature.
> If it were done in Perl, do you happen to know whether it would require Perl 6 "promises" with await/start function calls?
> 

Don't know the answer to that one, sorry. But ...

> Currently, my "bllookup" helper is a simple Perl5 script which reads from standard input like so:
> 
> while ( <STDIN> ) {
>     # [...lookup URI in a MySQL database and reply accordingly to Squid...]
> }
> 
> It does not handle the channel-ID field.

That is all it needs to do to begin with: parse the numeric channel-ID off 
the front of each input line and send it back as the prefix on the 
corresponding output line. The helper does not need threading or anything 
particularly special for this minimal level of support.
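
For example, a minimal (untested) sketch of that loop with channel-ID 
handling might look like this, assuming something like concurrency=10 is 
added to the external_acl_type line:

  #!/usr/bin/perl
  use strict;
  use warnings;

  $| = 1;   # unbuffered output so Squid sees each reply immediately

  while (my $line = <STDIN>) {
      chomp $line;
      # With concurrency enabled Squid prefixes each request with a
      # numeric channel-ID; the rest of the line is the usual %URI key.
      my ($channel, $uri) = split(' ', $line, 2);
      # [...lookup $uri in the MySQL database as before...]
      my $result = 'OK';   # or 'ERR', depending on the lookup
      # Echo the channel-ID back as the first token of the reply.
      print "$channel $result\n";
  }

The helper still handles one request at a time; the channel-ID simply lets 
Squid pipeline several pending lookups onto the same helper process instead 
of blocking on each one.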

> 
> I haven't found many Squid concurrency-enabled helper examples out there.
> 

Nod.

> By the way, I see that Squid defaults to IPv6 for helper communications. I suppose it wouldn't make any real difference if I tried "ipv4" with "external_acl_type".

If the helper is running at all without it, then no.
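If you do want to try it anyway, the flag just goes into the option list of 
the directive, e.g. (illustrative only, trimmed):

  external_acl_type bllookup ipv4 ttl=86400 negative_ttl=86400 ... %URI /opt/custom/scripts/run/scripts/firewall/squid_url_lookup.pl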

> If I don't get any new info next time Squid slows down to a crawl, I'll probably try ipv4 just for kicks.
> 
> What I still don't get is how long it takes for Squid to get back to work after I do a complete restart (after thoroughly killing all related processes, including helpers). I'm talking more than 5 minutes here...
> If I ever get the same issue again, I understand that I can:
> 
> - stop squid and, if necessary, kill any apparently stalled processes
> 
> - modify squid.conf, and decrease or comment out all *startup= and *idle= options
> 
> - start squid
> 
> At this point, I should expect Squid to be up and serving within a reasonable amount of time, even if I may get squid warnings later on asking me to increase those values.
> Or maybe not, because the Linux kernel might be busy cleaning up the swap space anyway?

Something along those lines, though the disk cache related things can 
sometimes take a surprisingly long time to complete. It's hard to tell 
these possibilities apart without a trace of some kind to provide clues 
about what is going on during the pause.
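
If it happens again, one possibility (just a suggestion; the PID file and 
log paths vary between systems) is to attach strace to the main Squid 
process during the pause while watching cache.log, e.g.:

  # adjust the PID file and log paths for your install
  strace -f -tt -o /tmp/squid-pause.trace -p $(cat /run/squid.pid) &
  tail -f /var/log/squid/cache.log

That should give some indication of whether the time is going into disk I/O 
on the cache directories, helper startup, or something else entirely.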

> 
> One last thing. I'm running squid 3.5.26. I'll try to upgrade to 3.5.27 asap.
> 

Nod.

Amos

