[squid-users] Squid uses all RAM / killed by OOM
Ronny Preiss
ronny.preiss at gmail.com
Mon Jul 11 06:54:39 UTC 2022
Hello all,
I have the following problem with Squid 5.2 on Ubuntu 22.04.
Squid consumes all RAM and the entire swap. Once RAM and swap are
completely full, the OOM killer strikes and terminates the process.
We use three internal child proxy servers behind keepalived and HAProxy as
load balancers. For external internet traffic we go through a parent
(upstream) proxy provided by our ISP.
So far the servers have been running Ubuntu 20.04.4 with Squid 4.1, and
this setup works flawlessly.
Now I want to upgrade the server to Ubuntu 22.04 and Squid 5.2, but with
that combination the OOM-killer problem described above occurs.
The new machine has only the OS and Squid installed.
Can anyone help me find a solution?
With kind regards
Ronny
Attached are the Squid configuration and the VMware specs.
### VM Specs ###
OS: Ubuntu 22.04 Server
CPU: 4x (Intel(R) Xeon(R) CPU E5-2690 v3 @ 2.60GHz)
RAM: 4 GB
VMware: ESXi 7.0 U2
### CONFIG ###
acl 10.172.xxx.xxx/18 src 10.172.xxx.xxx/18
acl 172.16.xxx.xxx/12 src 172.16.xxx.xxx/12
acl 192.168.xxx.xxx/16 src 192.168.xxx.xxx/16
acl Safe_ports port 80
acl Safe_ports port 21
acl Safe_ports port 443
acl Safe_ports port 70
acl Safe_ports port 210
acl Safe_ports port 1025-65535
acl Safe_ports port 280
acl Safe_ports port 488
acl Safe_ports port 591
acl Safe_ports port 777
http_access allow 10.172.xxx.xxx/18 Safe_ports
http_access allow 172.16.xxx.xxx/12 Safe_ports
http_access allow 192.168.xxx.xxx/16 Safe_ports
http_access allow localhost manager
http_access allow localhost
http_access deny manager
http_access deny all
include /etc/squid/conf.d/*
http_port 10.172.xxx.xxx:3128
cache_peer 10.210.xxx.xxx parent 8080 0
cache_dir ufs /var/spool/squid 3000 16 256
cache_effective_user proxy
cache_effective_group proxy
coredump_dir /var/spool/squid
refresh_pattern ^ftp: 1440 20% 10080
refresh_pattern ^gopher: 1440 0% 1440
refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
refresh_pattern \/(Packages|Sources)(|\.bz2|\.gz|\.xz)$ 0 0% 0 refresh-ims
refresh_pattern \/Release(|\.gpg)$ 0 0% 0 refresh-ims
refresh_pattern \/InRelease$ 0 0% 0 refresh-ims
refresh_pattern \/(Translation-.*)(|\.bz2|\.gz|\.xz)$ 0 0% 0 refresh-ims
refresh_pattern . 0 20% 4320
never_direct allow all
max_filedescriptors 40960
dns_nameservers 10.244.xxx.xxx
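Not a diagnosis, but for anyone comparing configurations: the config above sets no explicit memory bounds, so Squid falls back to its defaults (the in-memory object cache, cache_mem, defaults to 256 MB, and total process memory is normally several times that). Directives like the following are sometimes used to cap memory growth; the values here are illustrative assumptions, not tested recommendations for this setup:

```
# Illustrative values only -- not a verified fix for this report.
cache_mem 256 MB                      # cap for the in-memory object cache
maximum_object_size_in_memory 512 KB  # keep large objects out of RAM
memory_pools off                      # return freed memory to the OS
```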
### DMESG ###
[256929.150801] oom-kill:constraint=CONSTRAINT_NONE,nodemask=(null),cpuset=/,mems_allowed=0,global_oom,task_memcg=/system.slice/squid.service,task=squid,pid=26390,uid=13
[256929.150822] Out of memory: Killed process 26390 (squid) total-vm:9691764kB, anon-rss:3657748kB, file-rss:2320kB, shmem-rss:0kB, UID:13 pgtables:18932kB oom_score_adj:0
[256929.510641] oom_reaper: reaped process 26390 (squid), now anon-rss:0kB, file-rss:0kB, shmem-rss:0kB
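To put the kB figures in the OOM report into perspective, they can be converted directly; this small sketch just parses the "Killed process" line quoted above:

```python
import re

# OOM-killer report line, copied from the dmesg output above
line = ("Out of memory: Killed process 26390 (squid) "
        "total-vm:9691764kB, anon-rss:3657748kB, file-rss:2320kB, shmem-rss:0kB")

# Extract each "name:<value>kB" pair into a dict of MiB values
mem = {k: int(v) / 1024 for k, v in re.findall(r"(\S+):(\d+)kB", line)}

print(f"virtual size : {mem['total-vm'] / 1024:.1f} GiB")   # → 9.2 GiB
print(f"anonymous RSS: {mem['anon-rss'] / 1024:.1f} GiB")   # → 3.5 GiB
```

So at the moment of the kill, the Squid process alone held about 3.5 GiB of anonymous memory on a 4 GB VM.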