[squid-users] Host header forgery detected after upgrade from 3.5.8 to 3.5.9
dan at getbusi.com
Thu Oct 29 09:12:41 UTC 2015
This is happening when my client and proxy are using the same DNS server. In this case, a local OS X Server which forwards to my ISP’s DNS servers.
As far as I can tell Google’s DNS isn’t in the equation any more. Even so, if I watch `dig` output for the domain, it happily cycles through a pool of IPs, apparently at random.
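One way to see the mismatch Squid is complaining about is to compare, at the same moment, the A records returned by the client's resolver and by the resolver the proxy uses. This is a minimal sketch, not from the thread: the resolver addresses are assumptions, and sorting the answers makes harmless round-robin reordering compare equal.

```shell
#!/bin/sh
# same_ips LIST1 LIST2 -- succeed if two newline-separated IP lists
# contain the same addresses, ignoring order (round-robin rotation).
same_ips() {
    [ "$(printf '%s\n' "$1" | sort)" = "$(printf '%s\n' "$2" | sort)" ]
}

# Usage against live DNS (assumed resolver addresses, adjust to taste):
#   client view:  dig +short A pbs.twimg.com @192.168.1.10
#   proxy view:   dig +short A pbs.twimg.com @127.0.0.1
# if ! same_ips "$(dig +short A pbs.twimg.com @192.168.1.10)" \
#               "$(dig +short A pbs.twimg.com @127.0.0.1)"; then
#     echo "MISMATCH - host-forgery warnings likely"
# fi
```

If the two sorted lists differ while warnings appear in cache.log, the client and proxy really are seeing different answers for the same name.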
> On 29 Oct 2015, at 3:46 PM, Amos Jeffries <squid3 at treenet.co.nz> wrote:
> On 29/10/2015 1:16 p.m., Dan Charlesworth wrote:
>> It looks like there are certain hosts that are designed to load balance (or something) between a few IPs, regardless of geography.
>> For example pbs.twimg.com resolves to wildcard.twimg.com, which returns two different IPs each time, from a pool of 5–6, at random. Basically it’s a roll of the dice whether the client and the proxy are going to get the same IPs at the same time.
>> What is one to do about that?
> The same thing. Ensuring that the proxy and the clients are using the
> same DNS server.
> The reasoning goes like so:
> * some client does a DNS fetch, causing the result to be cached in *that*
> shared DNS server.
> * then the proxy repeats the query and gets the same cached DNS result.
> * those results should match 99% of the time even if the domain's DNS is
> playing tricks.
> This falls down with the Google DNS because "220.127.116.11" is not one server
> but an entire farm of servers spread around the globe. Two
> consecutive queries often go to different physical servers.
> You can of course configure 18.104.22.168 to be an upstream resolver for your
> local DNS server if you think that is a good idea. The key thing is
> having the same local-end DNS cache being used by the clients and Squid.
> NP: these problems do not exist for forward proxies. Only for traffic
> hijacking interceptor proxies.
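The fix Amos describes, pointing the intercepting Squid at the same local resolver the clients use, can be sketched in squid.conf. This is a hedged example: the resolver address is an assumption (substitute the OS X Server's address), and the TTL tuning is optional.

```
# squid.conf -- use the same resolver the intercepted clients use.
# 192.168.1.10 is an assumed LAN DNS server address, not from the thread.
dns_nameservers 192.168.1.10

# Optionally keep Squid's positive DNS cache short so its view tracks
# the shared resolver closely (the default is much longer).
positive_dns_ttl 60 seconds
```

With this in place, both lookups in the reasoning above hit the same cache, so the client's connection destination and Squid's own lookup should agree.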
> squid-users mailing list
> squid-users at lists.squid-cache.org