[squid-users] squid delay_pools can't limit speed on certain connections

Amos Jeffries squid3 at treenet.co.nz
Thu Feb 21 10:06:02 UTC 2019


On 21/02/19 12:46 am, reinerotto wrote:
> I also have a problem with delay_pools on 4.4. Download speed is not
> throttled. This is easily verified when watching video from YouTube,
> using its 'Stats for nerds' overlay.
> 
> I do not remember having this effect on 3.5
> 
> This Squid runs on an up-to-date OpenWrt device,
> which has limited resources.
> 
> I am happy to provide further info if I get
> instructions for debug_options.
> 


ALL,9 is usually the best way to get a full log of everything going on.
For more targeted logging, debug section 77 is delay pools, 11 is HTTP,
and 26 is the CONNECT tunnel handling.
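
For example (a sketch only; the section numbers are the ones above, the
levels 9 and 5 are just my suggestion):

  # very verbose trace of everything:
  #debug_options ALL,9
  # quieter overall, but detailed for delay pools (77), HTTP (11)
  # and CONNECT tunnels (26):
  debug_options ALL,1 77,9 11,5 26,5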

For a device like OpenWrt you should be able to set up a Unix pipe
(FIFO) at the filesystem path of cache.log and divert the data written
there to some other machine with more storage.
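
A rough sketch of that idea (the log-host address 192.0.2.10 and port
5140 are placeholders; the exact nc syntax depends on which netcat your
OpenWrt build ships):

  # create a FIFO where Squid expects cache.log to be
  mkfifo /mnt/sda1/log/squid/cache.log
  # stream whatever Squid writes there to a machine with more storage
  nc 192.0.2.10 5140 < /mnt/sda1/log/squid/cache.log &
  # then (re)start Squid so it opens the FIFO for writing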



> My squid.conf:
> 
> acl localnet src 10.1.0.0/24
> acl localnet src 172.16.0.0/24
> acl localnet src 192.168.1.0/24
> acl localnet src 192.168.2.0/24
> acl ssl_ports port 443
> acl safe_ports port 80
> acl safe_ports port 443
> acl safe_ports port 3128
> acl connect method connect
> http_access deny !safe_ports
> http_access deny connect !ssl_ports
> http_access allow localhost manager
> http_access allow localnet manager
> http_access deny manager
> cachemgr_passwd xxxxxxxx all
> acl denybin urlpath_regex \.bin
> cache deny denybin
> acl test_chrome_compression url_regex ^http://check.googlezip.net/connect
> http_access deny test_chrome_compression
> acl block_google_proxy req_header Chrome-Proxy .*
> http_access deny  block_google_proxy
> http_access allow localnet
> http_access allow localhost
> http_access deny all
> reload_into_ims on

NP: now that Squid does HTTP/1.1 caching by default, this can waste a
lot of bandwidth (depending on object size) by forcing fetches of
objects that did not need to be fetched at all.


> refresh_pattern ^ftp: 1440 20% 10080
> refresh_pattern ^gopher: 1440 0% 1440
> refresh_pattern -i \.(gif|png|jpg|jpeg|ico)$ 10080 90% 43200 override-expire
> ignore-no-cache ignore-no-store ignore-private
> refresh_pattern -i \.(iso|avi|wav|mp3|mp4|mpeg|swf|flv|x-flv)$ 43200 90%
> 432000 override-expire ignore-no-cache ignore-no-store ignore-private
> refresh_pattern -i \.(deb|rpm|exe|zip|tar|tgz|ram|rar|bin|ppt|doc|tiff)$
> 10080 90% 43200 override-expire ignore-no-cache ignore-no-store
> ignore-private


* ignore-no-cache no longer exists.

* override-expire tends to *shorten* caching times for the object types
above, since they are most often static content with years of lifetime.

* Combining ignore-no-store and ignore-private can cause information
leaks, as objects that are only supposed to be delivered to a specific
client get delivered by your proxy to other clients as well.
  ignore-private is not as dangerous as it used to be, but it also does
not save much bandwidth, since everything has to be revalidated before
use anyway.
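
For instance, the image pattern from your config might be reduced to
something like this (just a sketch reflecting the points above, not a
tuned recommendation):

  # ignore-no-cache removed (the option no longer exists),
  # ignore-no-store / ignore-private dropped to avoid serving one
  # client's private responses to other clients,
  # override-expire dropped since these objects usually carry long
  # explicit lifetimes already:
  refresh_pattern -i \.(gif|png|jpg|jpeg|ico)$ 10080 90% 43200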


> refresh_pattern -i \.index.(html|htm)$ 0 40% 10080
> refresh_pattern -i \.(html|htm|css|js)$ 1440 40% 40320
> refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
> refresh_pattern . 0 20% 4320
> access_log none
> cache_log /mnt/sda1/log/squid/cache.log
> cache_store_log stdio:/dev/null

That is bad. It makes Squid do all the extra work of locating the data
and formatting the log lines, only to write them to /dev/null.

In Squid-3 and later, just remove the cache_store_log line from your
config file.


> logfile_rotate 1
> logfile_daemon /dev/null
> http_port 3128
> http_port 8888 intercept
> https_port 3127  intercept ssl-bump cert=/etc/squid/ssl_cert/myCA.pem \
>   generate-host-certificates=off
> acl step1 at_step SslBump1
> ssl_bump peek step1 all
> ssl_bump splice all
> cache_mem 8 MB
> shutdown_lifetime 10 seconds
> httpd_suppress_version_string on
> dns_v4_first on
> forwarded_for delete
> pipeline_prefetch 2
> via off
> maximum_object_size_in_memory 128 KB
> maximum_object_size 4 MB
> reply_header_access Cache deny all
> client_idle_pconn_timeout 1 minute
> server_idle_pconn_timeout 5 minute
> read_timeout 2 minute
> ipcache_size 512
> fqdncache_size 256
> reply_header_access Alternate-Protocol deny all
> reply_header_access alternate-protocol deny all

NP: duplicate config line; header names are case-insensitive, so one
entry is enough.
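
i.e. this single line would do:

  # header names compare case-insensitively, so this matches both
  # spellings above:
  reply_header_access Alternate-Protocol deny all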

> reply_header_access alt-svc deny all
> pinger_enable off
> digest_generation off
> netdb_filename none
> cache_dir ufs /mnt/sda1/cache 250 16 256
> acl only512kusers src 10.1.0.0/24
> delay_pools 1
> delay_class 1 3
> delay_access 1 allow only512kusers
> delay_access 1 deny all
> delay_parameters 1 -1/-1 -1/-1 64000/64000
> 

FYI: The numbers above are in Bytes.

 ==> So your limit there is actually 62.5 KB/sec (512 kbit/sec), not
512 KB/sec.

It is only a bug if the on-wire bytes being transmitted through the
proxy exceed 62.5 KB/sec for any given client IP address.
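
For reference, a sketch of the two interpretations (pick whichever the
ACL name "only512kusers" actually meant; the 524288 value is my
assumption for a 512 KB/sec target):

  # class 3 order: <aggregate> <per-network> <per-host>, values in bytes
  # 64000 B/s ~= 62.5 KB/s = 512 kbit/s per client IP (your current line):
  delay_parameters 1 -1/-1 -1/-1 64000/64000
  # 524288 B/s = 512 KB/s per client IP, if that was the intent:
  #delay_parameters 1 -1/-1 -1/-1 524288/524288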

* If the client uses multiple IPs - they each get separate buckets.

* If the client is using a connection that avoids the proxy - anything
the proxy does is irrelevant.

* If the data is compressed on the wire - the on-wire byte count may be
far less than what the UA reports.

Keep in mind that HTTP/1.1, TLS, and most of the modern protocols
inside TLS that come under the umbrella term "HTTPS" support high
compression rates. So a client using any form of compression will
receive objects larger than its bytes/sec limit implies.

Amos

