[squid-dev] [RFC] Fix shared memory initialization, cleanup. Ensure its usability.
Alex Rousskov
rousskov at measurement-factory.com
Thu Dec 10 16:06:19 UTC 2015
On 12/10/2015 01:53 AM, Amos Jeffries wrote:
> On 10/12/2015 1:24 p.m., Alex Rousskov wrote:
>> On 12/09/2015 04:24 PM, Amos Jeffries wrote:
>>> On 10/12/2015 10:49 a.m., Alex Rousskov wrote:
>>>>>> Questions: Should we add a configuration directive to control whether
>>>>>> mlock(2) is called? If yes, should mlock(2) be called by default?
>>>>>> Should mlock(2) failures be fatal?
>>>>> My answers are no, yes, yes. Since this is a startup thing, squid.conf
>>>>> post-processing is probably too late.
> It makes them "maybe", no, yes.
> Though I am still not convinced exactly that squid.conf is the right
> place. As I said before:
> "
In general, if a setting cannot have a reasonable temporary default for
a while and then be switched around in realtime, then squid.conf is not
a good place for it.
"
Yes, you said that, but (a) your precondition still does not quite match
the situation at hand:
* The "perform mlock" setting _has_ a reasonable default (i.e., yes, lock),
* that default can be overridden to "do not lock" if the admin does not
want to lock, and
* the setting can even be switched around during reconfiguration (after
somebody adds support for reconfigurable shared storage, although the
change from no-lock to lock can be supported earlier);
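To illustrate the three points above, such a directive could look roughly
like this in squid.conf (the directive name and syntax here are purely
hypothetical, for discussion only):

```
# Hypothetical directive; name and values are illustrative only.
#   on  (default): mlock(2) shared memory at startup; failures are fatal
#   off:           skip locking, accepting the risk of later SIGBUS deaths
shared_memory_locking on
```

An admin who does not want locking would simply set it to "off"; the
no-lock-to-lock direction could even be honored on reconfiguration.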
and (b) I disagree with the overall decision logic you are proposing:
IMO, command line options should be limited to these two areas:
i. controlling Squid behavior before squid.conf is parsed.
ii. controlling another Squid instance (for legacy reasons).
Memory locking controls do not belong on the command line. And yes, many
current command line options violate the above, but that does not make
those violations good ideas, and it does not mean we should add more of
them.
> I would make it mandatory on "-k check" (but not -k parse). Optional
> through a command line parameter on all other runs.
I have two problems with the -k check approach:
1. -k check is documented to parse configuration and send a signal to
the already running Squid. To implement what you are proposing would
require also initializing Squid storage. Initializing Squid storage while
another Squid instance is running is either impossible (because it would
conflict with the existing instance) or would require highly customized
code (to work around those conflicts) that would not fully test what a
freshly started Squid would experience. Finally, locking memory on -k
check may also swap out or even crash the running Squid instance!
2. Since most folks do not run -k check, they will not be aware that
their Squid or OS is misconfigured and will suffer from the obscure
effects of that misconfiguration, such as SIGBUS crashes and
segmentation-fault deaths.
Problem #2 affects the "Optional through a command line parameter on all
other runs" part as well.
> The other (default / normal) runs having it OFF by default, since the -k
> check should have confirmed already that it is going to work.
Yes, but I do not think we should or can rely on -k check, as discussed
in #1 and #2 above. If we want to avoid these SIGBUS problems in old and
new configurations, we should lock by default. I see no other way.
We may, of course, decide that SIGBUS problems are not significant
enough to be worth avoiding, but it feels wrong to ship a proxy that can
easily crash in its default configuration just because we decided to
avoid a check preventing such crashes. I would rather ship a proxy that
starts slow when explicitly configured to use lots of shared memory.
Thank you,
Alex.