[squid-users] Possible memory leak?

Amos Jeffries squid3 at treenet.co.nz
Fri Sep 11 15:56:04 UTC 2015


On 12/09/2015 1:48 a.m., Alfredo Rezinovsky wrote:
> I'm using squid with a custom icap service. (Which code I plan to free)
> 
> http_port 3129 tproxy disable-pmtu-discovery=always
> 

You are missing the default port (3128) for management access.
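
If you want that access back, a plain forward-proxy port alongside the
tproxy one is enough. A minimal sketch (the loopback-only address is
just my choice, adjust to taste):

 http_port 127.0.0.1:3128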


> collapsed_forwarding on
> 
> dns_v4_first on
> max_filedescriptors 8192
> connect_retries 10
> retry_on_error on
> client_request_buffer_max_size 10250 KB
> request_header_max_size 10240 KB

REALLY bad idea.

These limits require 20 GB of RAM to be available for active client
connections. The latest Squid does not allocate that much up front, of
course, but you have allowed it, so the RAM has to be available for the
clients when they discover they can use it.
 Note that even with these expanded limits, the latest Squid can still
only hold 2 MB or less of data anyway.

These buffers are capped at 64 KB by default because message *header*
sizes should be only a few KB. Accepting big headers is a DoS
vulnerability.

Also, message bodies/payloads are streamed and are not restricted by
these buffer sizes, so there is no need for large limits.
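
If you want them explicit in squid.conf rather than just dropping the
two directives, something like this is what I'd expect (the values are
the built-in defaults as I recall them):

 request_header_max_size 64 KB
 client_request_buffer_max_size 512 KB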

> 
> http_access allow manager localhost
> http_access deny manager
> http_access allow all

You have an open proxy. Anyone who can open a TCP connection to it can
abuse it.

Please re-instate the default security rules for CONNECT, Safe_ports and
SSL_ports:

 http_access deny !Safe_ports
 http_access deny CONNECT !SSL_ports

which should be above the manager lines.
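
For reference, the matching ACL definitions from the stock squid.conf
look roughly like this (trim or extend the Safe_ports list to what your
clients actually need):

 acl SSL_ports port 443
 acl Safe_ports port 80         # http
 acl Safe_ports port 21         # ftp
 acl Safe_ports port 443        # https
 acl Safe_ports port 1025-65535 # unregistered ports
 acl CONNECT method CONNECT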

> 
> maximum_object_size 800 MB
> maximum_object_size_in_memory 32 KB
> cache_swap_low 90
> cache_swap_high 95
> 
> #Server has 16Gb RAM
> cache_mem 738 MB
> 
> cache_dir aufs /cache/sdb 228138269 16 256 min-size=1 max-size=838860800
> 

This cache_dir configuration needs approximately 3.2 TB of RAM on the
machine just to store the cache index (the in-memory index costs
roughly 14 MB of RAM per GB of disk cache at the default mean object
size).

Squid can store 2^25-1 objects safely in each cache_dir. To fully use
228 TB the objects must have a minimum size of about 6.8 MB each.

cache_dir aufs /cache/sdb 228138269 16 256 \
min-size=7130317 max-size=838860800

Which will only need a few GB to store the index.

And you will then need to store the objects under 8 MB either in a rock
cache or in memory:

maximum_object_size_in_memory 8 MB
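
If you go the rock route for the small objects, a rough sketch (the
path and 64 GB size are just examples; max-size is chosen to sit below
the aufs min-size above):

 cache_dir rock /cache/rock 65536 min-size=0 max-size=7130316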



> buffered_logs off
> 
> icap_enable on
> icap_send_client_ip on
> icap_persistent_connections on
> icap_preview_enable off
> 
> icap_206_enable off
> 
> icap_service service_reqmod  reqmod_precache  icap://127.0.0.1:50020/request
>  bypass=0 max-conn=100 ipv6=off
> icap_service service_respmod respmod_precache icap://127.0.0.1:50020/response
>  bypass=0 max-conn=100 ipv6=off
> 
> acl html Content-Type -i html
> adaptation_access service_respmod allow html
> respmod_rep_header
> adaptation_access service_reqmod allow all


NP: it's pretty rare for people to be uploading HTML pages. I suspect
this service may not be doing what you expect. But that's just a guess.
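
If the intention was to run the respmod service only on HTML replies,
the usual shape is a rep_header ACL, something along these lines
(text/html is my guess at the type you care about):

 acl html rep_header Content-Type -i text/html
 adaptation_access service_respmod allow html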

> 
> # DEFAULT REFRESH PATTERNS

NP: You are missing the ftp:// and gopher:// patterns. They are usually
good to store for more than the default 72hrs.
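
The stock squid.conf patterns are roughly:

 refresh_pattern ^ftp:           1440    20%     10080
 refresh_pattern ^gopher:        1440    0%      1440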

> refresh_pattern -i (/cgi-bin/|?)    0    0%        0

This pattern is wrong; the '?' is a regex metacharacter and needs
escaping. It should be:
  (/cgi-bin/|\?)
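
i.e. the stock squid.conf line:

 refresh_pattern -i (/cgi-bin/|\?)    0    0%        0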

> refresh_pattern .                    0   20%     4320
> 
> acl queries url_regex -i http://.*\?.*
> acl queries url_regex -i http://.*/cgi-bin/.*
> 
> cache deny queries
> cache allow all

NP: This "queries" stuff is obsolete. The refresh_pattern handles
correct caching requirements in the current Squid versions.

> 
> client_persistent_connections on
> server_persistent_connections on
> 
> debug_options ALL,0
> 
> I see the (squid-1) process RAM use slowly increasing. It's never going
> down. Until squid cannot allocate more memory and crashes.
> 
> Could be the icap client functionality ?

Unlikely.

Your config says the machine has 16 GB of RAM, but the config settings
for buffering client connections and caching will use up to 3.3 TB of
RAM when your Squid reaches full operational use.

Amos

