[squid-users] Working peek/splice no longer functioning on some sites

torson cofkomail at gmail.com
Wed Oct 9 07:36:00 UTC 2019

@Amos thank you for your detailed reply. It took me a while to get back to
this task, sorry.
I made some changes, added your suggestions, tested some more, and here are my
results using Squid 4.8, along with a couple of questions:

A short summary of my setup: Squid does interception only, for all servers in
the same network. Dnsmasq is set up on the same server and is used by Squid
and all the other servers.

### 1. https certificate check
I've managed to make it work with "ssl_bump peek all", but it started working
only after I switched the order of splice and peek:
ssl_bump splice allowed_https_sites
ssl_bump peek all
ssl_bump terminate all

Though now I don't get "terminate" entries in the logfile for blocked
requests for most of the domains I'm testing with; instead those are now
logged as "peek".
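For reference, the earlier ordering that produced "terminate" entries peeked
only at step 1 (the ClientHello); a sketch of how that variant is usually
written, with "allowed_https_sites" as defined in my full config below:

```
# peek at step 1 only, then decide splice/terminate at step 2
acl step1 at_step SslBump1
ssl_bump peek step1
ssl_bump splice allowed_https_sites
ssl_bump terminate all
```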

Here are logs from the previous configuration (using "ssl_bump peek step1"
with logformat: %tl %6tr %>a %Ss/%03>Hs %<st %rm %ssl::bump_mode
%ssl::<cert_subject %ssl::<cert_errors %ru %ssl::>sni). Here I'm testing
with 6 domains, first 3 are blocked and last 3 are whitelisted:
08/Oct/2019:13:31:03 +0000     56 NONE_ABORTED/200 0 CONNECT
terminate - - edition.cnn.com
08/Oct/2019:13:31:03 +0000     62 NONE_ABORTED/200 0 CONNECT
terminate - - www.bbc.com
08/Oct/2019:13:31:03 +0000     59 NONE_ABORTED/200 0 CONNECT
terminate - - www.google.com
08/Oct/2019:13:31:03 +0000     57 NONE/200 0 CONNECT splice - - nodejs.org
08/Oct/2019:13:31:03 +0000    103 TCP_TUNNEL/200 6597 CONNECT
splice - - nodejs.org:443 nodejs.org
08/Oct/2019:13:31:03 +0000     54 NONE/200 0 CONNECT splice - - bitbucket.org
08/Oct/2019:13:31:04 +0000    356 TCP_TUNNEL/200 4230 CONNECT
splice - - bitbucket.org:443 bitbucket.org
08/Oct/2019:13:31:04 +0000     47 NONE/200 0 CONNECT splice - - github.com
08/Oct/2019:13:31:04 +0000    595 TCP_TUNNEL/200 5900 CONNECT
splice - - github.com:443 github.com

And here are logs from the new configuration (above, with "ssl_bump peek
all"):
08/Oct/2019:13:32:09 +0000     63 NONE_ABORTED/200 0 CONNECT
terminate /C=US/ST=California/L=San Francisco/O=Fastly,
Inc./CN=turner-tls.map.fastly.net - edition.cnn.com
08/Oct/2019:13:32:10 +0000     75 NONE/200 0 CONNECT peek - - www.bbc.com
08/Oct/2019:13:32:10 +0000    116 NONE/200 0 CONNECT peek - - www.google.com
08/Oct/2019:13:32:10 +0000     65 NONE/200 0 CONNECT splice - - nodejs.org
08/Oct/2019:13:32:10 +0000     82 TCP_TUNNEL/200 6596 CONNECT
splice - - nodejs.org:443 nodejs.org
08/Oct/2019:13:32:10 +0000     58 NONE/200 0 CONNECT splice - - bitbucket.org
08/Oct/2019:13:32:11 +0000    358 TCP_TUNNEL/200 4235 CONNECT
splice - - bitbucket.org:443 bitbucket.org
08/Oct/2019:13:32:11 +0000     50 NONE/200 0 CONNECT splice - - github.com
08/Oct/2019:13:32:11 +0000    542 TCP_TUNNEL/200 5929 CONNECT
splice - - github.com:443 github.com

So there seem to be variations in how Squid handles traffic for various
domains. I lean more towards increased security than consistent logging;
"peek" effectively seems to mean "terminate" in this case, so I'll just count
the two together for alerting.

Do you have any suggestions to make this work better, or is this just the
best I can squeeze out of Squid at this time?

### 2. false host-forgery blocking due to stale DNS record

Currently I'm still using "positive_dns_ttl 0" and "negative_dns_ttl 0",
because otherwise I get a lot of false host-forgery blocking after the TTL
reaches 0; the more I increase these two, the more false blockings I get. The
many extra DNS requests from Squid that this causes should be no issue, since
the local Dnsmasq handles them all.
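The Dnsmasq side is just a local caching forwarder; a minimal sketch of the
relevant /etc/dnsmasq.conf settings (values illustrative, not my exact
config; min-cache-ttl is discussed further down):

```
# listen only on the internal interface (interface name is an assumption)
interface=eth0
# cache answers locally for Squid and the other servers
cache-size=10000
# floor the cached TTL to smooth out very short upstream TTLs
min-cache-ttl=60
```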

AWS S3 DNS A records have a single IP with a 5s TTL (I never saw a number
higher than 5). The IP, together with the TTL, is different on each request,
and they have a large pool; you can hit a few records in a row with TTL 0.

I'm using these 2 commands to do the testing:
while true; do dig s3-eu-west-1.amazonaws.com | grep "IN" ; sleep 1 ; done
while true; do if curl -I https://s3-eu-west-1.amazonaws.com 2>/dev/null | grep -q " 405 " ; then echo OK ; else echo BLOCKED ; fi ; done

The corner case where false blocking happens is when the DNS record's TTL
drops below the interval between the client getting the record and Squid
getting it: the client keeps the old IP while Squid gets a new IP from
upstream. It can take a client a few seconds between doing the DNS query and
issuing the HTTP request itself.
I've remedied this a bit by enabling caching in Dnsmasq and setting
min-cache-ttl to 60, for example; instead of the TTL being 5s (even less, as
5s is the best case and rare with S3 records) it can be 60s, which lowers the
occurrence of false blocking to about 8% of the original.
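That 8% figure is just the ratio of the two TTLs; assuming the false-blocking
window is roughly fixed, the risk scales inversely with the effective TTL:

```shell
# rough estimate: raising the effective TTL from 5s to 60s leaves
# about 5/60 of the original false-blocking rate
orig_ttl=5
new_ttl=60
awk -v o="$orig_ttl" -v n="$new_ttl" 'BEGIN { printf "%.1f%%\n", o / n * 100 }'
# prints 8.3%
```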

I guess the proper solution would be for Squid itself to also act as a DNS
forwarder that all clients use, and to extend every DNS record's validity by
a few (configurable) seconds for the host validity check, so that it always
checks against the same record it handed to the client.

An alternative would be a DNS forwarder that supports a secondary cache with
TTLs extended by a configured number of seconds, available to specific
requesters, so that the old record would still be served to requests coming
from the Squid IP. I haven't found such a forwarder yet; I'll probably have
to add that feature to one, like
https://github.com/shawn1m/overture or

Do you have some suggestions regarding this? Am I thinking in the right
direction?

I don't see how to otherwise make this setup work; currently there's always
some chance of a false positive, no matter how much I increase min-cache-ttl.
If clients don't request low-TTL domains frequently, it's better to keep
min-cache-ttl low so records expire from the cache faster; if they request
them frequently, it's better to raise it to a (relatively) high value, for a
lower chance of a client request hitting a cached record at TTL 0. It's a
search for the sweet spot.

My full config:
visible_hostname squid
cache deny all
via off
httpd_suppress_version_string on
debug_options ALL,1
positive_dns_ttl 0
negative_dns_ttl 0
acl localnet src 0.0.0.1-0.255.255.255	# RFC 1122 "this" network (LAN)
acl localnet src 10.0.0.0/8		# RFC 1918 local private network (LAN)
acl localnet src 100.64.0.0/10		# RFC 6598 shared address space (CGN)
acl localnet src 169.254.0.0/16 	# RFC 3927 link-local (directly plugged)
acl localnet src 172.16.0.0/12		# RFC 1918 local private network (LAN)
acl localnet src 192.168.0.0/16		# RFC 1918 local private network (LAN)
acl localnet src fc00::/7       	# RFC 4193 local private network range
acl localnet src fe80::/10      	# RFC 4291 link-local (directly plugged)
acl SSL_port port 443
http_access allow SSL_port
http_access deny CONNECT !SSL_port
http_access allow localhost manager
http_access deny manager
http_access allow localhost
reply_header_access Server deny all
reply_header_access X-Squid-Error deny all
reply_header_access X-Cache deny all
reply_header_access X-Cache-Lookup deny all
client_persistent_connections off
http_port 3128
http_port 3129 intercept
host_verify_strict on
acl allowed_http_sites dstdom_regex "/etc/squid/allow_list.conf"
http_access allow allowed_http_sites
https_port 3130 ssl-bump intercept tls-cert=/etc/squid/ssl/squid.pem 
tls_outgoing_options cafile=/etc/ssl/certs/ca-certificates.crt
acl allowed_https_sites ssl::server_name_regex "/etc/squid/allow_list.conf"
ssl_bump splice allowed_https_sites
ssl_bump peek all
ssl_bump terminate all
http_access deny all
logformat general %tl %6tr %>a %Ss/%03>Hs %<st %rm %ssl::bump_mode %ssl::<cert_subject %ssl::<cert_errors %ru %ssl::>sni
access_log daemon:/var/log/squid/access.log general

