<html>
<head>
<meta content="text/html; charset=UTF-8" http-equiv="Content-Type">
</head>
<body text="#000000" bgcolor="#FFFFFF">
<div class="moz-cite-prefix">Amos, thanks for the info.<br>
<br>
The primary settings being used in squid.conf:<br>
<br>
<p style="margin-bottom: 0in; line-height: 100%">http_port 8080<br>
# this port is used for the SSL Proxy setting in the client browser<br>
http_port 8081 intercept</p>
<p style="margin-bottom: 0in; line-height: 100%">https_port 8082
intercept ssl-bump connection-auth=off
generate-host-certificates=on
dynamic_cert_mem_cache_size=16MB cert=/etc/squid/ssl/squid.pem
key=/etc/squid/ssl/squid.key
cipher=ECDHE-RSA-RC4-SHA:ECDHE-RSA-AES128-SHA:DHE-RSA-AES128-SHA:DHE-RSA-CAMELLIA128-SHA:AES128-SHA:RC4-SHA:HIGH:!aNULL:!MD5:!ADH</p>
<p style="margin-bottom: 0in; line-height: 100%">sslcrtd_program
/usr/lib64/squid/ssl_crtd -s /var/lib/squid_ssl_db -M 16MB<br>
sslcrtd_children 50
startup=5 idle=1<br>
ssl_bump
server-first all<br>
ssl_bump none
localhost</p>
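(Side note: since squid evaluates ssl_bump rules first-match, like its
other access lists, the "ssl_bump none localhost" line above may never
match when it follows "server-first all"; if the localhost exemption is
intended, we may need to reorder them, e.g.:)<br>
<pre wrap="">ssl_bump none localhost
ssl_bump server-first all
</pre>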
<br>
e2guardian then listens on port 10101 for the browsers, and connects
to squid on port 8080 on the same server.<br>
<br>
Yet what is actually happening is this: there is the GET, then the
CONNECT, and the tunnel is created without ever allowing squid to
decrypt and pass the data along to e2guardian. I suspect Google has
changed their settings to deny any proxy from intercepting, because
we can type the most foul terms, terms which are in the
"bannedssllist" for e2guardian, and literally nothing is filtered on
google or youtube. Yet other secure sites such as wordpress, yahoo,
and others are caught and blocked, so it is only the google-owned
sites that are not.<br>
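For illustration, the exchange looks roughly like this (a generic
sketch, not copied from our logs):<br>
<pre wrap="">GET http://www.google.com/ HTTP/1.1      -> 30x redirect to https://
CONNECT www.google.com:443 HTTP/1.1      -> 200 Connection established
(TLS handshake and encrypted data then flow inside the tunnel)
</pre>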
<br>
More below...<br>
<br>
<br>
On 6/24/2015 6:36 AM, Amos Jeffries wrote:<br>
</div>
<blockquote cite="mid:558A964C.9070208@treenet.co.nz" type="cite">
<pre wrap="">On 24/06/2015 11:03 a.m., Mike wrote:
</pre>
<blockquote type="cite">
<pre wrap="">We have a server setup using squid 3.5 and e2guardian (newer branch of
dansguardian), the issue is now google has changed a few things around
and google is no longer filtered which is not acceptable. We already
have the browser settings for SSL Proxy set to our server, and squid has
ssl-bump enabled and working. Previously there was enough unsecure
content on Google that the filtering was still working, but now google
has gone 100% encrypted meaning it is 100% unfiltered.
</pre>
</blockquote>
<pre wrap="">
Maybe, maybe not.
</pre>
<blockquote type="cite">
<pre wrap="">What is happening
is it is creating an ssl tunnel (for lack of a better term) between
</pre>
</blockquote>
<pre wrap="">
No. That is the correct and official term for what they are doing. And
"CONNECT tunnel" is the full phrase / name for the particular method of
tunnel creation.
</pre>
<blockquote type="cite">
<pre wrap="">their server and the browser, so all squid sees is the connection to
<a class="moz-txt-link-abbreviated" href="http://www.google.com">www.google.com</a>, and after that it is tunneled and not recognized by
squid or e2guardian at all.
</pre>
</blockquote>
<pre wrap="">
BUT ... you said you were SSL-Bump'ing. Which means you are decrypting
such tunnels to filter the content inside them.
So what is the problem? is your method of bumping not decrypting the
Google traffic for Squid access controls and helpers to filter?
Note that DansGuardian and e2guardian, being independent HTTP proxies, are
not party to that SSL-Bump decrypted content inside Squid. Only Squid
internals and ICAP/eCAP services have access to it.
</pre>
</blockquote>
<blockquote cite="mid:558A964C.9070208@treenet.co.nz" type="cite">
<blockquote type="cite">
<pre wrap="">
I found a few options online that were used with older squid versions, but
nothing is working with squid 3.5... Looking for something like this:
acl google dstdomain .google.com
deny_info <a class="moz-txt-link-freetext" href="http://www.google.com/webhp?nord=1">http://www.google.com/webhp?nord=1</a> google
</pre>
</blockquote>
<pre wrap="">
As you said Google have gone 100% HTTPS. URLs beginning with <a class="moz-txt-link-freetext" href="http://">http://</a> are
not HTTPS nor accepted there anymore. If used they just get a 30x
redirect to an <a class="moz-txt-link-freetext" href="https://">https://</a> URL.
Amos
</pre>
</blockquote>
This is why we are thinking we can force that redirect, if you have
ideas on how to do that. All google pages use HTTPS, except when
that <a class="moz-txt-link-freetext" href="http://www.google.com/webhp?nord=1">http://www.google.com/webhp?nord=1</a> URL is used; it
forces use of the insecure pages, and allows e2guardian filtering to
work properly.<br>
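As a sketch of what we have in mind (untested, and the acl names
"google_https" / "google_nord" are just placeholders we made up),
something along these lines in squid.conf might push bumped google
requests over to the nord=1 page:<br>
<pre wrap="">acl google_https dstdomain .google.com
acl google_nord url_regex nord=1
deny_info 302:http://www.google.com/webhp?nord=1 google_https
http_access deny google_https !google_nord
</pre>
Though this can only act on requests squid actually sees decrypted,
and if google just 30x-redirects the nord URL back to https:// as you
describe, it may not hold.<br>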
<br>
Thank you,<br>
<br>
Mike<br>
<br>
<blockquote cite="mid:558A964C.9070208@treenet.co.nz" type="cite">
<pre wrap="">
_______________________________________________
squid-users mailing list
<a class="moz-txt-link-abbreviated" href="mailto:squid-users@lists.squid-cache.org">squid-users@lists.squid-cache.org</a>
<a class="moz-txt-link-freetext" href="http://lists.squid-cache.org/listinfo/squid-users">http://lists.squid-cache.org/listinfo/squid-users</a>
</pre>
</blockquote>
<br>
</body>
</html>