[squid-users] TProxy and client_dst_passthru
Stakres
vdoctor at neuf.fr
Thu Jul 2 12:43:08 UTC 2015
Hi Amos,
"/You can get around it somewhat by having the ISP resolvers use each other
same as proxy chains do./"
This is impossible to do in a multi-level ISP architecture, because each ISP may use any DNS servers (Google, Level3, etc.). Between the original end user and the last ISP hop, the request may be destined to an IP address that Squid's own resolver would never return for that Host header.
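To illustrate the mismatch being discussed, here is a minimal sketch (in Python, with illustrative names, not Squid's actual internals) of the Host-header verification an interception proxy performs: the check passes only when the IP the client actually connected to is among the IPs the proxy's own resolver returns for the Host header.

```python
import socket

def resolve_host(host):
    """Resolve a Host header value using the proxy's own resolver."""
    try:
        return {ai[4][0] for ai in
                socket.getaddrinfo(host, 80, proto=socket.IPPROTO_TCP)}
    except socket.gaierror:
        return set()

def host_verifies(client_dst_ip, proxy_resolved_ips):
    """Verification succeeds only if the client's original destination IP
    appears in the proxy's resolution of the Host header."""
    return client_dst_ip in proxy_resolved_ips

# When the ISP chain uses different resolvers than the end user, the two
# views of DNS can disagree, and verification fails even for a
# legitimate request (addresses below are documentation-range examples):
print(host_verifies("203.0.113.7", {"198.51.100.1", "198.51.100.2"}))   # False
print(host_verifies("198.51.100.1", {"198.51.100.1", "198.51.100.2"}))  # True
```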
"/Consider some malicious server at 192.168.0.2 responding with an infected
JPG to all requests. An infected server contains a script that fetches the
Google icon from 192.168.0.2 using Host:www.google.com. /"
I totally agree with you, but what we/you could do is replace the original destination IP of the request with the address Squid itself resolves, and then allow the cache hit.
Here, Squid only applies the correct DNS result but denies caching the object. If Squid corrects the DNS, it means the object should (normally) be safe, so it should accept saving the object into the cache (partial object or not), right?
So, fixing a wrong DNS record is a good thing, I agree, but why deny the cache action if the request was corrected?
And what if the end user is pinned to a special DNS server (home-made, exotic, etc.)? In that case the ISP cannot increase its bandwidth savings, and that saving is the top priority for the ISP; it is why he needs solutions like Squid.
Do you think we could have a workaround that fixes the wrong DNS record (Squid's action) while still having the object cached? Or does that not make sense because of other security issues?
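For reference, the squid.conf directives that govern this behaviour (available since Squid 3.2) are sketched below. Note this is only the routing side; as documented, traffic that fails Host verification is still forced to the client's original destination and treated as non-cacheable regardless of these settings.

```
# Require that the Host header of intercepted requests resolves to the
# IP the client actually connected to; "off" (the default) still
# verifies but is less aggressive about rejecting mismatches.
host_verify_strict off

# "on" (the default) routes intercepted traffic to the client's original
# destination IP. "off" lets Squid route verified traffic using its own
# DNS results; unverified traffic is still passed through regardless.
client_dst_passthru on
```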
I have read many forums where admins request this behaviour; I'm sure we/you can find a nice solution for all of us.
Fred
--
View this message in context: http://squid-web-proxy-cache.1019090.n4.nabble.com/TProxy-and-client-dst-passthru-tp4670189p4672022.html
Sent from the Squid - Users mailing list archive at Nabble.com.