[squid-users] Splicing a connection if server cert cannot be verified

Amos Jeffries squid3 at treenet.co.nz
Tue Dec 16 14:20:55 UTC 2014


On 16/12/2014 10:20 a.m., Soren Madsen (DREIJER) wrote:
> Thanks for the quick reply, Amos.
> 
>> Offering SSLv3 from a server is suicide these days. Those sites
>> should be on the fast decline, or at the very least shunned like
>> plague victims. Look up POODLE if you don't know why already.
> 
> That's correct. That's why I don't want to bump such connections
> and instead fall back to splicing. In other words, if I can't trust
> the server, I want to get out of the way and defer the decision to
> the client.
> 
>>> or self-signed certificates,
>> 
>> Nothing wrong with self-signed though. Much *more* secure than
>> CA validated certs when used in DANE protocol.
> 
> Yes, but Squid has no way of trusting a self-signed cert. When
> Squid mints a server cert on the fly and sends it to the client,
> the client won't have any idea that the cert was originally
> self-signed. Like the previous scenario, I'd want to step out of
> the way and defer the decision to the client.
> 

The global list of CAs which non-self-signed certs validate against is
explicitly loaded into the SSL library. It is not a built-in list.

All you have to do to trust a "self-signed" cert is add the CA signing
it to your trusted set.
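
For example, a minimal sketch (example.net is just a placeholder, and
sslproxy_cafile is the Squid 3.x directive name; check it against
your version):

  # grab the cert the site presents; for a self-signed site the
  # leaf cert *is* the signing CA cert
  openssl s_client -connect example.net:443 </dev/null \
      | openssl x509 -outform PEM > /etc/squid/trusted-extra.pem

  # squid.conf: trust it for outgoing (server-side) connections
  sslproxy_cafile /etc/squid/trusted-extra.pem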


>>> in which case I'd like to fall back to TLS passthrough mode and
>>> let the client decide whether it wants to trust the server or
>>> not. In other words, if Squid cannot successfully bump a
>>> connection, I don't want to fail the connection, but rather
>>> step out of the way and let the client decide what to do.
>>> 
>>> The ideal solution, I think, would be to optimistically attempt
>>> to bump the connection, but if it fails due to e.g. a bad
>>> server cert, a new connection can be established with the
>>> original client hello.
>>> 
>>> I was hoping the new peek and splice functionality would be
>>> able to help me in this regard: 
>>> http://wiki.squid-cache.org/Features/SslPeekAndSplice
>>> 
>>> As far as I can tell, the 'stare' action is what I'm interested
>>> in here although it appears it's not a focus of the current 
>>> implementation, and the 'peek' action has the following
>>> limitation note about 'Peeking at the server often precludes
>>> bumping': "We could teach Squid to abandon the current server
>>> connection and then bump a newly open one. This is something we
>>> do not want to do as it is likely to create even worse
>>> operational problems with Squids being auto-blocked for opening
>>> and closing connections in vain."
>>> 
>>> I'm confused about this. Couldn't Squid just cache the
>>> information about whether it has previously refrained from
>>> bumping a connection due to a bad server cert (or other errors)
>>> and only check with the server once the cache expires? That
>>> should avoid triggering any alarms on the server.
>> 
>> Happy eyeballs clients open multiple connections in parallel,
>> causing Squid to be seen opening just as many. Adding the above
>> behaviour would make the number of connections hammering the
>> server multiply by *at least* 2.
> 
> I don't think I see the big problem here. If you hit a web server
> with 10 connections, but Squid decides to splice the connection
> after all and therefore closes the connections and creates 10 new
> ones, that's hardly going to cause any alarms to go off. After that
> point, Squid will cache the fact that connections to that hostname
> shouldn't be bumped and subsequent attempts at hitting that
> hostname (based on the SNI, for instance) won't be bumped again
> until the cache expires.
> 
>> 
>> Also, with modern HTTPS load balancers every single connection is
>> potentially going to a different real backend server, with
>> different TLS settings even if the domain, IP, and port details
>> are exactly identical. Things could also change with no notice as
>> admins fix transient problems.
> 
> Sure, and that's the point of the cache I mentioned above. If there
> happens to be a transient problem with the server, it's okay that
> Squid doesn't bump the connection for, say, an hour until it checks
> the host again. I see this as optimistic bumping, i.e. bump if you
> can but under no circumstances break the connectivity between the
> user and the server.
> 
>> If you are going to bypass bumping based on vague-ish criteria
>> then you might as well just not bump. That gets you away from all
>> those technical problems, and a host of legal issues as well.
> 
> I don't follow what you're saying here. How is looking at a server
> cert and determining that Squid cannot trust it "vague-ish"
> criteria? And a host of legal issues?
> 

The vagueness is in how long the cache keeps presenting an accurate
picture of the cert state. Given the load balancer situation, and
other intermediaries like Squid generating certs per-connection,
there is no guarantee it stays accurate from one use to the next.


>> AIUI, the basic problem that "precludes bumping" is that in order
>> to peek at the serverHello some clientHello has to already have
>> been sent. Squid is already locked into using the features
>> advertised in that clientHello or dying - with no middle ground.
>> Most times the client and Squid do not actually have identical
>> capabilities, so after peeking at the serverHello either the bump
>> or the splice action will break, depending on which clientHello
>> Squid sent.
> 
> I don't see why that is a problem if you just recreate the
> connection to the server. That is, you first try bumping the
> connection by sending a new clientHello to the server, and if the
> server cert cannot be verified, a new connection is established and
> the original clientHello is sent to the server.
> 

"just" recreating the connection to the server means discarding the
old one. Which is not anywhere near as nice a proposition once you
look beyond the single proxy.

The details, which you can skip if you want...

* Each aborted connection means a 15 minute TCP TIME_WAIT delay before
that TCP socket can be re-used.

* TCP/IP limits software to 63K usable sockets per IP address (64K
total, with 1024 reserved).
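
Combining those two constraints gives a rough per-IP ceiling. A
back-of-envelope sketch, assuming the full 15 minute TIME_WAIT (real
systems often use shorter timers, which raises the ceiling toward the
~10K/minute figure below):

  63,000 usable sockets per IP
  / 15 min spent in TIME_WAIT after each close
  = ~4,200 new outbound connections per minute per IP
    (roughly 70 per second)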

Using multiple outbound connections to discover some behaviour is what
the browser "happy eyeballs" algorithm is all about. They are just
looking for connection handshake speed rather than cert properties.

- Browsers are operating at a rate of tens to hundreds of single
requests per minute, with all 64K per-IP sockets on that machine
dedicated to the one end user.

- Proxies are individually dealing with requests at a rate of tens or
hundreds of thousands of requests per minute, sharing sockets between
hundreds or thousands of end-users.

At that speed the 64K sockets per IP address are consumed very quickly
already. Squid is limited to little more than 10K new
connections/minute per IP address on the machine. We get away with
higher rates by having cache HITs, collapsed forwarding, and by
multiplexing requests onto server connections.
 => multiplexing is the biggest gain, and it is not possible with
HTTPS traffic, even when bumping.
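
For reference, the squid.conf knobs behind that multiplexing are the
persistent-connection directives (a minimal sketch; the directives are
standard squid.conf, but check your version's defaults):

  # keep idle server sockets open so many client requests can be
  # funnelled over relatively few server connections
  server_persistent_connections on
  client_persistent_connections on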

Then you have to consider the Internet-wide scale at which all these
things operate. The Internet is not a simple client->server or even
client->proxy->server connection. The reality is that the general
architecture is more like: user's browser, ISP load balancer, ISP
cache, content provider CDN router/load balancer, content provider CDN
cache, origin server load balancer and finally origin server. That is
approximately 6 "hops" that each HTTP connection goes through.
Sometimes worse, sometimes better. But the common situation for users
is being on a big ISP visiting a popular so-called "big content"
website.

What you are asking is that we effectively do happy-eyeballs for
outbound server connections in proxies. With each of the above hops
the number of TCP sockets consumed doubles, and all but one at each
hop will be discarded by the end of the handshake process.


* Browser people love their happy-eyeballs and prefetch algorithms. So
everything starts with the browser opening 2 sockets to see which
connects fastest and using only that one (and probably not just 2, but
4 or 8; I shall simplify).

By the end of the doubling at each hop, (2^6) -> 64 sockets per client
connection hit the origin server in the worst case. That limits it to
serving only ~1000 end users every 15 minutes, or just 1 *single* HTTP
request per second.
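
Spelled out, using the 64K-sockets and 15 minute TIME_WAIT figures
from above:

  1 client request, doubled at each of 6 hops: 2^6 = 64 sockets
  65,536 origin sockets / 64 = ~1,024 connections per TIME_WAIT window
  1,024 / 15 minutes         = ~68 per minute, i.e. ~1 per second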

Sites like Facebook load several hundred objects per page. Imagine
how bad the site would be if it took a few minutes to load after
each click.

Or how much the hardware they would have to purchase would cost, at a
rate of 2-3 whole servers per user (with hundreds of millions of
users) for a half-decent response rate.


I know those last two conclusions are over-the-top worst-case extremes
of the problem. However, the main point is that all that keeps the
worst case from becoming the common case is how much we *avoid*
joining in on the browser happy-eyeballs behaviour you are suggesting.
 And remember that I got to those nasty numbers quickly despite
conservatively assuming the browser only opened 2 connections at the
start.


>>> Maybe I'm misreading the document. I was hoping somebody here
>>> on the list could explain to me if I can achieve the above
>>> behavior.
>>> 
>> 
>> I suspect you actually need the certificate mimic behaviour,
>> where Squid generates a server cert as close as possible to the
>> original, including errors, so the client can decide how to
>> handle them.
> 
> How would you propose supporting self-signed certs in this
> scenario?

A truly self-signed cert is a CA cert and must never be used to
encrypt a connection directly. What is seen on the wire and called
"self-signed" is actually a cert signed by some CA (the server's
admin) that is not in your trusted CA set.
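
A quick way to see the difference on any given server (a sketch;
example.net is just a placeholder):

  # compare issuer and subject of the cert the server presents
  openssl s_client -connect example.net:443 </dev/null 2>/dev/null \
      | openssl x509 -noout -issuer -subject
  # identical issuer and subject => a truly self-signed (CA) cert;
  # different => signed by a CA that is simply not in your trusted set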

So...

... if a lot of your traffic comes from a few sites you can add those
sites' CA certs to your proxy's trusted CA set.
 - maybe check and see if the CA-certificates set used by your proxy
machine's SSL library is up to date. If that is outdated, a bunch of
those "self-signed" claims may be false negatives.

... or you can replace the helper program Squid uses to validate the
certs with one that accepts self-signed certs.
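
The squid.conf hook for that, as a sketch (the directive exists in
recent Squid 3.x releases, so check your version; the helper path is
a placeholder you would supply):

  # hand server cert validation to an external helper of your choice
  sslcrtvalidator_program /usr/local/libexec/my-cert-validator
  sslcrtvalidator_children 5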

I am interested in getting a helper that does TLSA/DANE validation. A
lot of sites are starting to publish TLSA records for their certs
instead of relying on the global corporate CAs.
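
The records themselves are just a DNS lookup away (example.com here
is a placeholder):

  # fetch the TLSA record pinning the cert for HTTPS on a host
  dig +short TLSA _443._tcp.example.com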


> This also means that in order to allow the client to use SSLv3, I'm
> going to have to allow Squid to bump SSLv3 connections, which I'm
> not keen on for the reason you mentioned yourself above.
> 

Nod.

It is a little bit safer to allow Squid to use SSLv3 to servers which
are still broken than to allow clients to speak SSLv3 to Squid. At
least half of the connectivity you have something to do with becomes
trustworthy, even though the overall end-to-end security is no better
(yet).
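
A minimal squid.conf sketch of that split (directive and option names
as in Squid 3.x; verify them against your version):

  # refuse SSLv3 from clients on the bumping port...
  https_port 3129 intercept ssl-bump cert=/etc/squid/proxyCA.pem \
      options=NO_SSLv2,NO_SSLv3

  # ...but leave SSLv3 available on the server side for broken sites
  sslproxy_options NO_SSLv2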

Amos