[squid-users] Issues with TLS inspection Intercept Mode.
squid3 at treenet.co.nz
Wed Jan 22 05:36:37 UTC 2020
On 22/01/20 7:39 am, aashutosh kalyankar wrote:
> The problem I am seeing is the intercept port initiates HTTP connection
> to self-IP instead of the web server IP it gets from the DNS request.
Neither of those IPs should be used.
Self-IP indicates that the traffic is being delivered to the proxy port
as explicit proxy traffic. Or that the NAT system is broken. We usually
get this complaint from people who are testing their proxy by sending
traffic directly to the proxy port.
DNS is only involved in validating the HTTP Host header against the IP
address, to prevent dangerous content from being cached.
> Filtered Tcpdump
> screenshot @ https://drive.google.com/open?id=0ByReiwdSAAY_VXBPTjF1M3dYTnBTTnhFVnRocXFveUlNSlNj
Screenshots are rarely useful. You are logging level 11,3 to cache.log,
so there should be full HTTP message traces with related connection
details and flow direction (client vs server). That would be more useful.
What I do see in the image is several "GET http://" lines. That
absolute-form URL syntax is for explicit proxy traffic. Traffic
intercepted from port 80 or 443 would use origin-form URLs.
- This reinforces the idea that you are probably testing the proxy
wrong - eg direct curl requests to the proxy?
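For illustration (example.com is just a placeholder), the two request
forms look like this:

```
Absolute-form, as sent to an explicit proxy:
  GET http://example.com/index.html HTTP/1.1
  Host: example.com

Origin-form, as seen in traffic intercepted from port 80:
  GET /index.html HTTP/1.1
  Host: example.com
```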
> Server IP: Eth0: IP: 172.22.22.148/26 (Same
> eth0 interface reaches the internet gateway).
> Configurations for
> 1) Nat table:
> Chain PREROUTING(policy ACCEPT 23 packets, 1632 bytes)
> num pkts bytes target prot opt in out source
> 1 66 3960 REDIRECT tcp -- eth0 * 0.0.0.0/0
> 0.0.0.0/0 tcp dpt:80 /*
> Redirect http traffic eth0:80 to eth0:3128 */ redir ports 3128
You are missing the PREROUTING rule which would ACCEPT traffic outbound
from the proxy to port 80.
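A sketch of the kind of rule that is missing, using the proxy IP from
this thread (172.22.22.148) -- check the wiki config page for the
authoritative version:

```
# Exempt the proxy's own outbound traffic to port 80 so forwarded
# requests are not looped back into Squid (sketch; adjust to your setup).
iptables -t nat -I PREROUTING -s 172.22.22.148 -p tcp --dport 80 -j ACCEPT
```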
The Squid wiki config page for REDIRECT interception details what you need.
Note that it does not place the intercept flag on port 3128. That is
because Squid will generate URLs with its hostname and that port for
clients to use directly when needed (error page contents, manager
access, etc.).
> Please let me know if I am missing some conf or the next steps I should
> try to get this running.
Firstly, add that missing NAT rule.
Secondly, make sure that your tests are accurately emulating how clients
would "use" the proxy. That means making connections from a test machine
directly to the Internet and seeing if the routing and NAT delivers the
traffic to Squid properly.
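A correct test, sketched with curl from a client machine behind the
gateway (example.com is just a placeholder):

```
# Right test: request the site directly; routing + NAT should
# deliver the traffic to Squid transparently.
curl -v http://example.com/

# Wrong test -- sending the request straight to the proxy port
# produces the explicit-proxy traffic described above:
#   curl -v -x http://172.22.22.148:3128 http://example.com/
```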
- Use cache.log to view the traffic coming into the proxy. It will be
request messages with a prefix line indicating "Client HTTP request".
Make sure that prefix line says the remote Internet IP address and port
80/443 you were testing with.
- If you want, confirm that access.log has a transaction entry for the
URL you tested, with ORIGINAL_DST and the server IP.
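Those two log checks can be sketched as follows (assuming the default
Debian/Ubuntu log locations):

```
# cache.log: look for the "Client HTTP request" trace prefix lines
grep -n 'Client HTTP request' /var/log/squid/cache.log | tail

# access.log: the intercepted transaction should show ORIGINAL_DST
grep 'ORIGINAL_DST' /var/log/squid/access.log | tail
```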
Thirdly, some squid.conf improvements for other problems you have either
not noticed or not encountered yet:
> 3) Squid.conf
> acl my_machine src 172.22.22.0/24
> http_access allow my_machine
> acl ap_clients src 172.16.10.0/24
> acl local_clients src 172.18.10.0/24
> acl localnet src 10.0.0.0/8
> http_access allow localnet
> http_access allow ap_clients
> http_access allow local_clients
It looks sub-optimal to have these as separate ACLs. May as well use
config comments to document what ranges are which clients and put them
all in localnet (that is what it is for).
# AP clients
acl localnet src 172.16.10.0/24
# Local clients
acl localnet src 172.18.10.0/24
acl localnet src 10.0.0.0/8
http_access allow localnet
> acl purge method PURGE
> http_access deny purge
PURGE is largely obsolete (superseded by the HTCP CLR feature) and
requires Squid to perform a relatively large amount of processing
per request. Unless you have tools that use it specifically, it is best
to remove all mention of it from the config file to let Squid avoid all
that work. If any mention remains, Squid will auto-activate the cache
indexing work.
If you do have tools using it, then it is past time to consider/plan
moving those tools and Squid to using HTCP CLR instead.
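A minimal sketch of the HTCP side of that, assuming Squid was built with
HTCP support and a hypothetical purge tool at 172.22.22.10:

```
htcp_port 4827
acl purge_tool src 172.22.22.10
htcp_access allow purge_tool
htcp_access deny all
# Only the purge tool may send HTCP CLR (cache invalidation) messages
htcp_clr_access allow purge_tool
htcp_clr_access deny all
```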
> http_access deny !Safe_ports
> http_access deny CONNECT !SSL_ports
... this is where all your custom http_access rules are supposed to be.
The Safe_ports and SSL_ports lines above are DoS and hijack protections.
* DoS protections need to be first to do their job of minimizing the
CPU impact effectively. It is counterproductive to have allows earlier.
* It is risky to let clients hijack the proxy, and Squid cannot handle
native traffic to or from those ports anyway. If you need a specific
port, add it to the Safe_ports / SSL_ports ACLs.
> http_reply_access allow all
Above is the default reply handling. No need to specify.
> http_access deny all
> http_port 8080
> http_port 172.22.22.148:3128 intercept
> https_port 172.22.22.148:3129 intercept ssl-bump cert=/etc/squid/ssl_certs/myCA.pem generate-host-certificates=on dynamic_cert_mem_cache_size=4MB
> sslcrtd_program /usr/lib/squid/ssl_crtd -s /var/lib/ssl_db -M 4MB
You do need at least one non-intercept port for the direct-to-proxy
communications that are needed occasionally. Port 3128 is reserved for
that. It is best to pick a random other port number for the intercepted
traffic to arrive at and leave 3128 for the normal proxy traffic.
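In squid.conf terms, that suggestion looks roughly like this (the
intercept port numbers here are arbitrary examples):

```
# Plain forward-proxy port, reserved for direct-to-proxy traffic:
http_port 3128

# Intercepted traffic arrives on other, randomly-chosen ports:
http_port 172.22.22.148:13128 intercept
https_port 172.22.22.148:13129 intercept ssl-bump cert=/etc/squid/ssl_certs/myCA.pem generate-host-certificates=on dynamic_cert_mem_cache_size=4MB
```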
> acl step1 at_step SslBump1
> ssl_bump peek step1
> ssl_bump bump all
NP: while this "works", bumping without any details about the server
(obtained at step2) can cause a lot of connection compatibility problems
and undesirable security side effects. Prefer to stare at step2 and
bump at step3. Do a step2 bump only as a last resort (ie restrict it
to sites that are broken and cannot be worked around in other ways).
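In squid.conf terms, the preferred rule order is roughly:

```
acl step1 at_step SslBump1
acl step2 at_step SslBump2
acl step3 at_step SslBump3

ssl_bump peek step1
# stare at step2 fetches the server certificate details first
ssl_bump stare step2
# bump only at step3, once server details are known
ssl_bump bump step3
```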
> host_verify_strict off
The above is the default, no need to specify.
> acl oyster-vpn-test dstdomain .oyster-vpn-test.com
> cache deny oyster-vpn-test
> visible_hostname togo.mtv.corp.google.com
Er ... AFAIK Google does not use that domain for their machine
hostnames. You sure you want to advertise your network as *.google.com ?
Also, you only need a visible_hostname line if Squid is unable to pull
the machine's hostname from the OS configuration.