[squid-users] TProxy and client_dst_passthru

Amos Jeffries squid3 at treenet.co.nz
Thu Jul 2 12:04:29 UTC 2015


On 2/07/2015 11:05 p.m., Stakres wrote:
> Hi Amos,
> 
> 216.58.220.36 != www.google.com ??? 
> Have a look: http://www.ip-adress.com/whois/216.58.220.36, this is google.


www.google.com did not resolve to 216.58.220.36 when Squid checked.
Otherwise caching would have been allowed and ORIGINAL_DST would not
have been necessary.

That said, the response was a 206 (Partial Content), which Squid does
not cache anyway.


> 
> Depending on the DNS server used, the IP can change; we know that,
> especially due to BGP.

The IPs Google hands out also change every 60-90 seconds, no matter
which DNS server is used. That is why it's very important to have your
clients use the same DNS resolvers as Squid: the IPs the client looked
up are still cached in the resolver when Squid goes to check.
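
One way to line that up (a sketch only; 192.168.1.1 is an assumed
resolver address, adjust to your network) is to point both Squid and
the clients at the same caching resolver:

```
# squid.conf: make Squid use the same resolver the clients use
dns_nameservers 192.168.1.1

# and hand the same resolver to the clients, e.g. via DHCP
# (dnsmasq syntax):
#   dhcp-option=option:dns-server,192.168.1.1
```

With both sides asking the same cache, the answer Squid gets during
verification is normally the same one the client acted on.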

> 
> In the case the client is an ISP providing internet to smaller ISPs with
> different DNS with their end users, here I understand that due to the
> ORIGINAL_DST squid will check the headers and if the dns records do not
> match so squid will not cache, even with a storeid engine, because too many
> different DNS servers in the loop (users -> small ISP -> big ISP -> squid ->
> internet), am I right ?

Very likely yes.

> 
> So, the result is a very poor 9% saving where we could expect around 50%
> saving. 

Unfortunately that is the price for preventing any client script from
replacing the in-cache content with arbitrary content. A very nasty
vulnerability.

You can get around it somewhat by having the ISP resolvers use each
other, the same way proxy chains do.
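
For example (a sketch, assuming dnsmasq on the small-ISP resolver and
an assumed upstream address of 10.0.0.53), the small ISP can forward
to the big ISP's resolver so both layers share one DNS cache:

```
# dnsmasq.conf on the small-ISP resolver:
# forward all queries to the big ISP's resolver (assumed 10.0.0.53)
# instead of resolving independently, so records seen by end users
# are the same ones Squid's verification will find cached upstream.
server=10.0.0.53
```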

> 
> Can you plan, for a next build, a workaround to accept the original dns
> record from the headers and check dns if and only if the headers do not
> contain any dns record ?

That is exactly what ORIGINAL_DST is.

The client DNS lookup results in an IP being used by the client. Squid
takes that IP from the TCP packet and relays the traffic there. It just
cannot be cached because Squid cannot know if the reconstructed URL is a
client lie or not.

Consider a malicious server at 192.168.0.2 responding with an infected
JPG to all requests. The infected page contains a script that fetches
the Google icon from 192.168.0.2 using "Host: www.google.com".

Now cache that as http://www.google.com/... and what happens?
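
The check described above can be sketched roughly like this (my own
illustrative names, not Squid's actual code):

```python
def classify_request(squid_resolved_ips, tcp_dst_ip):
    """Rough sketch of the Host-forgery check on intercepted traffic.

    squid_resolved_ips: the IPs the Host: header name resolves to in
      Squid's own resolver at check time.
    tcp_dst_ip: the destination IP taken from the intercepted TCP
      packet, i.e. where the client actually connected.
    """
    if tcp_dst_ip in squid_resolved_ips:
        # Host header is consistent with the client's real destination:
        # the reconstructed URL can be trusted and the response cached.
        return "VERIFIED"
    # Mismatch: relay only to the client's original destination and
    # never store the response under the reconstructed URL.
    return "ORIGINAL_DST"

# The scenario from the thread: a page at 192.168.0.2 fetches a
# resource with "Host: www.google.com", but Squid's resolver returns
# real Google IPs, not 192.168.0.2:
print(classify_request({"216.58.220.36"}, "192.168.0.2"))
# An honest client whose resolver agrees with Squid's:
print(classify_request({"216.58.220.36"}, "216.58.220.36"))
```

The forged request is relayed but never poisons the cache entry for
http://www.google.com/.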


> I understand Squid should provide some securities but here we should have
> the possibility to ON/OFF these securities.
> Or do we need to downgrade to Squid 2.7/3.0 ?

You will hit the above vulnerability in older versions. And there *is*
malware out there actively abusing it since other proxies still contain it.

> 
> ISPs need to cache a lot, security is not their main issue.

Understood. You can divert all traffic to a single kitten picture. That
will have a 100% HIT rate.

Then again, you need reliability of service as much as caching. The
ORIGINAL_DST mechanism is there to guarantee reliability in the face
of that vulnerability.

Amos
