[squid-users] Squid SMP workers crash

Alex Rousskov rousskov at measurement-factory.com
Thu Oct 13 22:50:49 UTC 2016


On 10/13/2016 01:53 AM, Deniz Eren wrote:

> I'm using squid's SMP functionality to distribute requests across
> multiple squid workers and spread the workload over multiple
> processors. However, after running for a while, the worker processes
> crash with the error below, and the coordinator does not start them
> again:
> ...
> FATAL: Ipc::Mem::Segment::open failed to
> shm_open(/squid-cf__metadata.shm): (2) No such file or directory
> Squid Cache (Version 3.5.20): Terminated abnormally.
> ...

Are you saying that this fatal shm_open() error happens after all
workers have started serving/logging traffic? I would expect to see it
at startup (within the first few minutes at most, if you are hitting
IPC timeout problems). Does the error always point to
squid-cf__metadata.shm?

Are you sure that there are no other fatal errors, segmentation faults,
or similarly deadly problems _before_ this error? Are you sure your
startup script does not accidentally start multiple Squid instances
that compete with each other? Check the system error logs.
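
For example, something like the following should expose stray Squid
processes and recent kernel complaints (exact tools and log paths vary
by distribution; journalctl assumes systemd):

  # more than one Squid master process is suspicious
  pgrep -a squid

  # look for OOM kills and segfaults around the time of the crash
  journalctl -k | grep -iE 'squid|segfault|oom'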

FWIW, Segment::open errors without Segment::create errors are often a
side effect of other problems that either prevent Squid from creating
segments or force Squid to remove created segments (segment creation
and removal both happen in the master process).
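
To check for that, scan the logs for earlier failures; something like
the following usually tells the story (the cache.log location below is
a common default and may differ on your system):

  # earlier fatal errors or segment-creation problems usually explain
  # the later Segment::open failure
  grep -nE 'Segment::(create|open)|FATAL|assertion' /var/log/squid/cache.log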


> permissions are OK in /dev/shm

Do you see any Squid segments there (with reasonable timestamps)?
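
For example (the exact segment names depend on your configuration, but
they should all start with "squid-"):

  # Squid's shared memory segments should be visible while Squid runs
  ls -l /dev/shm/ | grep squid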


> Also, is my way of using the SMP functionality correct, since I want
> to distribute all connections between workers and to listen only on
> specific ports?

Adding "workers N" and avoiding SMP-incompatible features is the right
way; I do not see any SMP-related problems in your configuration.
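
For reference, a minimal sketch of such a configuration (the worker
count and ports below are placeholders; the ${process_number}
conditional is only needed if you want to dedicate a port to one
worker):

  # start two worker processes; the OS distributes new connections
  # on shared ports among them
  workers 2
  http_port 3128

  # optional: a port served by the first worker only
  if ${process_number} = 1
  http_port 3129
  endif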

Alex.


