[squid-users] How to have squid as safe as (e.g.) firefox?

Jeremie Rafin rafinjer-squid at yahoo.fr
Wed Aug 19 15:43:19 UTC 2015

Amos and Alex,

Thanks a lot for all your advice. I appreciate your helpful comments! :))

Nevertheless, all is not crystal clear to me. I have set up a sandbox (VirtualBox with squid 3.5.7 on Debian 7.6; for the transparent proxy, I have set up iptables NAT and IP routes to run the client through squid; my client browser is configured with the squid certificate) and have tried many configurations. I am still a little bit lost...

Let me sum up my needs first. In a family context, I would like:
-a) to black/white list accesses (with e.g. squidguard);
-b) to check content (with e.g. diladele or e2guardian);
-c) to do that for https too (because more and more sites encrypt their traffic, like google);
-d) not to check content for certain (spliced) sites (like banks);
-e) not to degrade security; for instance, a site with a revoked certificate must not be reachable, even if bumped;
-f) [nice to have]: to do this as a transparent proxy; but an explicit proxy would also be OK, if required.

Second, as per your advice, and some searching, I have set up this configuration (starting from the default one, otherwise unchanged; no third-party tools yet):


# Black list: meteofrance (http) and google (https)
acl blacklist dstdomain .meteofrance.com
acl sslblacklist ssl::server_name .google.fr
http_access deny blacklist
http_access deny sslblacklist

# Non bumped list (only spliced): wellsfargo
acl splicelist ssl::server_name .wellsfargo.com

# SSL configuration
acl step1 at_step SslBump1
acl step2 at_step SslBump2
ssl_bump peek step1 all
ssl_bump splice step2 splicelist
ssl_bump bump all

# Web access
http_port 3128 ssl-bump generate-host-certificates=on dynamic_cert_mem_cache_size=4MB cert=/etc/squid/ssl_cert/myCA.pem
http_port 3126 intercept
https_port 3127 intercept ssl-bump generate-host-certificates=on dynamic_cert_mem_cache_size=4MB cert=/etc/squid/ssl_cert/myCA.pem
sslcrtd_program /usr/local/squid-3.5/lib/squid/ssl_crtd -s /var/spool/squid3_ssldb -M 4MB


With this config file, I am able to satisfy my requirements a, b (ready for it), c, d and f (I mean a, b, c and d are OK in both transparent and explicit modes). But e fails: https://revoked.grc.com/ is not rejected.

So, even if I think my configuration is much cleaner thanks to you (you will probably comment), I still have the same feeling of degrading security when bumping. Hence, my main question remains (title of the thread): how do I get (at least) revocation working (requirement "e")?
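If an external certificate validator is the direction to take, I guess the configuration would look something like this (a sketch only; the helper path reflects my /usr/local/squid-3.5 install, cert_valid.pl is the example validator shipped with the squid sources, and I have not verified that it actually performs revocation checks):

# Reject origin server certificate errors instead of ignoring them
sslproxy_cert_error deny all
# Delegate certificate validation to an external helper (path is an assumption from my build)
sslcrtvalidator_program cache=4MB ttl=60 /usr/local/squid-3.5/libexec/cert_valid.pl
sslcrtvalidator_children 5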

During my investigation, I have run into some other difficulties (these points are less important than the main question above, but are still obscure to me):

-"peek"/"splice"/"bump": nothing is logged in access.log (I tried "debug_options ALL,9"; no success); I have read in some posts that nothing is logged yet, but let me double check: is it planned (or done) to log the SSL decision?
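In case it helps, the closest thing I found is logging the SslBump decision via logformat codes (a sketch; I assume the %ssl::bump_mode and %ssl::>sni codes are available in 3.5, but I have not confirmed this on my build):

# Custom log format showing the SslBump decision and the SNI for each transaction
logformat ssllog %ts.%03tu %>a %ssl::bump_mode %ssl::>sni %rm %ru %>Hs
access_log /var/log/squid/access.log ssllog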

-with this SSL configuration:

# SSL configuration
ssl_bump splice splicelist
ssl_bump bump all

 -everything works, but **only as an explicit proxy**; in transparent mode, the HTTPS accesses fail (no certificate???); if I add an "ssl_bump peek all" before the "bump" rule, all https accesses are peeked or spliced; nothing is bumped (in either explicit or transparent mode); is that normal?
 -furthermore, google fails to be blacklisted **only** in transparent mode! Why?
 -the wiki (http://wiki.squid-cache.org/Features/SslPeekAndSplice) does not mention that step1/step2 are needed (as in my first configuration) for the splice/bump decision to work properly in transparent mode, does it?

-I noticed quite similar unexpected behavior for "ssl_bump terminate" without step1/step2; for instance, a simple "ssl_bump terminate all" gives:
 -in transparent mode: no effect (no bump, no blacklist); everything is spliced (or peeked);
 -in explicit mode: google is rejected but every other https site works (in a non-bumped way);
Note that "terminate" behaves logically (i.e. as I would expect) with a preceding "acl step1 at_step SslBump1 / acl step2 at_step SslBump2 / ssl_bump peek step1 all", whether with "all" or, for instance, with "sslblacklist".
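For reference, this is the variant where "terminate" behaves as I expect (a minimal sketch of my test config, reusing the sslblacklist ACL defined at the top of this mail; not a recommendation):

acl step1 at_step SslBump1
acl sslblacklist ssl::server_name .google.fr
# Peek first so the server name is known before the terminate decision
ssl_bump peek step1 all
ssl_bump terminate sslblacklist
ssl_bump splice all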

Thanks for any answer/help!

P.S.: my comments for your answers below.
>Technically a clean install of Squid with default options is more secure
>than any browser you will be able to find.
>Because it comes configured for forward-proxy. Which does not touch the
>TLS traffic in any way but relays CONNECT tunnels. Inside the tunnel the
>browser<->server security is in operating control, which makes the Squid
>relay equally secure as whatever the browser and server would agree to
>without Squid.
>Additionally, Squid enforces that HTTPS tunnels only go to port 443.
>Which is something the browsers do not do. Making Squid better in this
>one way on top of all the things the browsers do inside their tunnel.

For "splicing", OK; but I still have a doubt about bumping, since I fail to get the revocation test page rejected.

>the feeling of security you have with browser is a lie. Pure "security
>theatre", done so well that you and billions of others can't even see it.
>What you are doing is trusting a very large set of nearly a thousand CA
>entities. Including most of those governments with bad reputations now
>for surveillance, and a lot of corporations with agendas of their own.
>For all sorts of reasons which you have no knowledge or control over.
>Yes, someone has vetted that their published intentions were good to get
>them into the list, but it was not you. For you and everyone else it is
>almost blind trust.

This is my next goal: to be able to manage the CAs myself in order to increase security for my clients surfing the internet. But that is another story: as a first step, I would like to reach the same level of security as without squid. No more, no less. Even if I agree this is not perfect, it is a must, as I cannot do better so far.

>At any time *one*, just one, of them could sign a faked certificate.
>When that happens no user will be able to tell the difference without
>detailed digging down into the browser cert information.
>The only reason these things come to light at all is when the ability is
>abused in obvious user-visible ways for dictatorial censorship or
>malware attacks. Or the few vigilant and knowledgeable people actively
>seeking it out catch it in the act. Its not the CA action that was seen
>first, but the abuse of power it allowed.
>Thankfully the repercussions of being wiped from the global CA list are
>severe enough to prevent power abuse in most cases. But there have been
>some exceptions even so.
>So security theatre. It's been working so far, but only just.

I may have the same feeling as you; but maybe this discussion is out of scope (even if I would appreciate discussing it further)? Again, before improving my (our) condition as simple browser users, I would like not to degrade it...

>You have also configured "sslproxy_cert_error deny all" which forces
>Squid to accept and ignore all possible origin server certificate
>errors. Including revocation.
>I hope you can see/understand the result.

According to Alex, I am not wrong here. Anyway, with or without that directive, with deny or allow, the revoked test page is still not rejected as soon as it is bumped...

>> -do you know any implementation of NSS library (the security library
>of firefox, probably safer than openssl) for certificate checking helper
>(cf. sslcrtvalidator_program)?
>No. Just the OpenSSL one we provide so far.
>I'm still working on getting library-agnostic TLS support rolled into
>Squid. But the main effort has been on the squid binary, not the helpers

OK. As Alex says, and I definitely agree with him, maintaining such a tool is beyond my capability and/or available time. I have not been able to find any project on that topic, so I am giving this idea up.

>> # SSL Options - to mimic firefox; some of keys are weaks but some of my favorite websites need them :(
>> sslproxy_options NO_SSLv2,No_Compression
>Careful. Squid will do what you tell it to.
>In order for this to be more secure than the browser, you will have to
>ensure that each of these things you are allowing actually are more
>secure than what it does. And that you are not allowing anything that
>the browser decides is bad.

Thanks for the advice. Improving this is another next step for me.
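To make this concrete, my current idea is something like the following (a sketch only; the cipher string is my own assumption, not something from your advice, and it needs checking against what the browsers actually negotiate):

# Disable the known-broken protocol versions and TLS compression
sslproxy_options NO_SSLv2,NO_SSLv3,No_Compression
# Restrict ciphers offered to origin servers (string is an assumption, to be tuned)
sslproxy_cipher HIGH:MEDIUM:!aNULL:!MD5:!RC4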

>> sslproxy_cert_error deny all
>> # Splice access lists
>> acl splice_client src
>> acl splice_domain dstdomain .paypal.com
>> acl splice_dst dst
>> # HTTPS access
>Nope, "TLS access" is better description.
>HTTPS is two protocol layers; a HTTP layer over a TLS layer (like
>"TCP/IP" is actually TCP over IP layer).
>ssl_bump directive controls only the TLS layer actions. The http_access
>rules later deal with the decrypted HTTP layer - but only if it was
>allowed to be decrypted ("bump" action) by these rules.

Thanks for this reminder :)

>> ssl_bump splice splice_client
>Splice is equivalent to blind tunnelling, but can be done after Squid
>has played with the certificates a bit (using read-only accesses).
>Since splice_client is based only on src-IP and nothing TLS layer
>related it is better to use "none" instead of splice action on the above
>rule. The true secure blind-tunnel will then be done.

This makes sense; thanks.
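If I follow correctly, the rule would become something like this (a sketch; the client IP is a placeholder for my real LAN address):

acl splice_client src 192.168.1.10
# "none" makes a true blind TCP tunnel, decided before any TLS peeking
ssl_bump none splice_client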

>> ssl_bump splice splice_domain
>This is a good example of how dstdomain fails. You are deciding whether
>to splice instead of interpret the HTTP message. Based on details inside
>that HTTP message which has not yet been interpreted.
>Make sure you are using the latest 3.5 release, and use the
>"ssl::server_name" instead of dstdomain in the ACL definition.

Your comment is really helpful! Maybe the wiki (http://wiki.squid-cache.org/Features/SslPeekAndSplice) should insist on this point. I am not the only one making this mistake...
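For other readers making the same mistake, the correction looks like this (it matches the working configuration at the top of this mail):

# Wrong for bumped HTTPS: dstdomain needs a hostname squid does not have yet
#acl splicelist dstdomain .wellsfargo.com
# Right: match on the TLS server name obtained while peeking
acl splicelist ssl::server_name .wellsfargo.com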

>> ssl_bump splice splice_dst
>> ssl_bump server-first all
>DO NOT mix the old and new config styles together. server-first requires
>doing things like emitting a fake server cert to the client before
>reading some of the client handshake details the splicing needs. But you
>have already spliced a bunch of transactions from the earlier rules.

Once again, maybe this basic point should be explained in the wiki.
If I remember correctly, I was motivated by the effect on the logs (cache). I have not checked, but I remember that "server-first" gives more logs than "bump", hence I was more confident in it. Definitely, SSL decisions should be logged.

>> # Hide PROXY
>> via off
>> forwarded_for delete
>Does *not* hide the proxy.
>Hides the *client* by actively "shouting" the proxy details out to the
>world in protocol places where the client details would normally go.

This is another interesting topic (maybe not the right place for it here).
The goal is to get video streaming: some web sites refuse to provide video if a proxy is detected (because of broadcast laws); to debug I use these sites (streaming web site is very slow): 

The only configuration I have found that hides the proxy (and gets the video streaming working) was the one above. If you have a better configuration, I would be curious to know it: all of the configurations I found on the internet were failing (and I was not able to find anything else that works).
Thanks :)
