[squid-users] HTTPS issues with squidguard after upgrading from squid 2.7 to 3.5

Eliezer Croitoru eliezer at ngtech.co.il
Fri Jun 17 01:21:46 UTC 2016


Good to hear!!!
Indeed, Amos is correct, and I think this thread will serve many users as a very good example.
Eventually, as I mentioned, if it works for you and others, then great.
I have encountered a couple of cases where the squidGuard update was too "expensive" and risky, but it seems that squidGuard is still kicking despite no longer being maintained.

I have a non-public question, but if you can share, that would be nice.
What is the user count/capacity of the system?
I am asking because I have seen many squidGuard-based systems act slower than ICAP-based ones.
By slower I mean that the initial squidGuard lookup caused the page to display later, by anywhere from milliseconds to a couple of seconds.
I have not researched the exact reasons, since I will not try to fix what is already fine for many.

Thanks,
Eliezer

----
Eliezer Croitoru
Linux System Administrator
Mobile: +972-5-28704261
Email: eliezer at ngtech.co.il


-----Original Message-----
From: squid-users [mailto:squid-users-bounces at lists.squid-cache.org] On Behalf Of reqman
Sent: Thursday, June 16, 2016 10:55 AM
To: squid-users at lists.squid-cache.org
Subject: Re: [squid-users] HTTPS issues with squidguard after upgrading from squid 2.7 to 3.5

Hello Eliezer,

First, let me thank you for providing a complete and detailed explanation; I think I now understand what is going on here.

Minor note: Amos is correct in stating that url_rewrite_access basically controls what is thrown into the redirector.
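For readers following along, the redirector interface that squidGuard speaks is line-based: squid writes one request per line to the helper's stdin and reads one reply line back, where an empty reply means "leave the URL alone" and a reply such as 301:http://... means "redirect". The sketch below is purely illustrative (a hypothetical Python stand-in with a made-up blocklist, not squidGuard itself), assuming the classic reply format and concurrency=0 (no channel IDs):

```python
#!/usr/bin/env python3
"""Illustrative stand-in for a url_rewrite (redirector) helper.

Not squidGuard itself -- just a sketch of the line-based protocol:
squid writes "URL client_ip/fqdn user method ..." per request and the
helper replies with an empty line (leave the URL alone) or a
replacement such as "301:http://localsite/block.htm" (redirect).
"""
import sys

# Made-up blocklist, for illustration only.
BLOCKED = {"ads.example.com", "bad.example.net"}

def rewrite(request_line: str) -> str:
    """Return the reply squid expects for one request line."""
    parts = request_line.split()
    if not parts:
        return ""
    url = parts[0]
    # Extract the host from "http://host/path" or CONNECT's "host:port".
    host = url.split("://")[-1].split("/")[0].split(":")[0]
    if host in BLOCKED:
        # Redirect the browser to a local block page.
        return "301:http://localsite/block.htm?url=" + url
    return ""  # empty reply: pass the URL through unchanged

if __name__ == "__main__":
    for line in sys.stdin:
        sys.stdout.write(rewrite(line.strip()) + "\n")
        sys.stdout.flush()  # squid expects one reply line per request
```

This also illustrates why a rewrite cannot cleanly "deny" a CONNECT: all the helper can do is substitute the tunnel target, which leaves the client hanging rather than showing a block page.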

Now, before I opened this discussion I had also come to the conclusion that to get rid of these issues, I had to avoid throwing HTTPS through a rewrite with squidGuard. Indeed, setting the following solved the issue...:

========================
url_rewrite_program /usr/local/bin/squidGuard
url_rewrite_children 8 startup=4 idle=4 concurrency=0
url_rewrite_access deny CONNECT
url_rewrite_access allow all
========================

... only to introduce another one: all HTTPS passes unfiltered.

So I tried to follow your advice and implement the same thing without using url_rewrite_program. I have tested and have the patched version of squidGuard (or so it seems). In doing so, I stumbled upon a couple of issues that complicate the solution, specifically:

* My error redirect page cannot be used when the original request was a CONNECT one
* Even if it could, it would be somewhat complicated to try to extract additional information (i.e., the class that registered the hit) from squidGuard back to squid.

With your information, I was able to happily combine the functionality of url_rewrite_program with an external ACL.

In short, the modifications to my original url_rewrite-based approach are to:
1) stop using url_rewrite entirely for CONNECT methods
2) modify http_access for requests made via CONNECT to take the external ACL rule (again based on squidGuard) into account, and block via TCP_RESET

In configuration terms, this means that:
1) The initial handling with:
========================
url_rewrite_program /usr/local/bin/squidGuard
url_rewrite_children 8 startup=4 idle=4 concurrency=0
url_rewrite_access allow all
========================

... now becomes:
========================
url_rewrite_program /usr/local/bin/squidGuard
url_rewrite_children 8 startup=4 idle=4 concurrency=0
url_rewrite_access deny CONNECT
url_rewrite_access allow all
========================

2) To handle CONNECT, the existing block that handles CONNECT logic:
========================
acl CONNECT method CONNECT

http_access deny CONNECT !SSL_ports
http_access allow CONNECT
========================

becomes:

========================
acl CONNECT method CONNECT
external_acl_type filter_url ipv4 concurrency=0 ttl=3 %URI %SRC %{-} %un %METHOD /usr/local/bin/squidGuard
acl filter_url_acl external filter_url

# Important: When an unwanted site appears in HTTPS, just forcefully close the TCP connection!
deny_info TCP_RESET filter_url_acl

http_access deny CONNECT filter_url_acl
http_access deny CONNECT !SSL_ports

http_access allow CONNECT
========================

With this approach, some squidGuard children are spawned as redirectors (via url_rewrite_program), while some more are spawned for "external_acl_type filter_url".
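For completeness: an external_acl_type helper speaks a different protocol from a redirector — squid sends the formatted tokens on one line and expects OK (the ACL matches) or ERR (no match) back. The following is a hypothetical sketch of the kind of wrapper discussed in this thread, adapting a redirector-style helper to the external_acl interface; it assumes the classic squidGuard reply format where an empty line means "pass" and anything else means a redirect (i.e., a block) was requested, so an OK reply makes "http_access deny CONNECT filter_url_acl" block the request:

```python
#!/usr/bin/env python3
"""Hypothetical wrapper: redirector-style helper -> external_acl protocol.

Feeds each lookup line from squid to a squidGuard child process and
translates the reply: an empty line (URL passed through) becomes ERR
(the ACL does not match), while any non-empty reply (squidGuard wanted
to redirect, i.e. block) becomes OK (the ACL matches).
"""
import subprocess
import sys

def to_acl(reply: str) -> str:
    """Map a classic redirector reply line to an external_acl verdict."""
    return "OK" if reply.strip() else "ERR"

def main(cmd):
    child = subprocess.Popen(list(cmd), stdin=subprocess.PIPE,
                             stdout=subprocess.PIPE, text=True, bufsize=1)
    for line in sys.stdin:
        child.stdin.write(line)          # pass the lookup line through
        child.stdin.flush()
        reply = child.stdout.readline()  # one reply line per request
        print(to_acl(reply), flush=True)

if __name__ == "__main__":
    # The helper command comes from the command line, e.g.:
    #   wrapper.py /usr/local/bin/squidGuard -c /usr/local/etc/squidGuard.conf
    if len(sys.argv) > 1:
        main(sys.argv[1:])
```

Note the polarity: here "matches" means "listed as blocked", which is the opposite of a setup using "http_access deny !filter_url_acl", where OK means the URL is allowed.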

2016-06-15 21:12 GMT+03:00 Eliezer Croitoru <eliezer at ngtech.co.il>:
> Hey Michael,
>
>
>
> Well, I have not tested the FreeBSD dependencies and patches, and I am 
> not following them daily.
>
> The issue with SquidGuard and the url_rewrite interface mostly shows 
> up, as you mentioned, with CONNECT requests.
>
> Since you are not using ssl_bump, you need to deny the traffic/requests 
> in a way that will not leave squid, the clients, or the session in an 
> unknown or unexpected state.
>
> When the url_rewrite interface is used to "deny" something, it is not 
> really denying but rather "rewriting", due to its nature.
>
>
>
> On regular plain HTTP requests, some mangling of the request (affecting 
> the response) is possible, but handling CONNECT requests is a whole new 
> story.
>
> We don't know how the client will respond to a malformed response, or 
> squid to a malformed rewritten request, and it is possible that the 
> client will expect a specific 50x/40x response code.
>
>
>
> The external_acl interface was built as an ACL extension with the basic 
> ability to overcome the limits of the url_rewrite interface.
>
> Content or URL filtering is an ACL-level operation, not URL 
> rewriting/mangling, and therefore it is better (in squid) to do it in a 
> way that the client can identify (a 30x).
>
>
>
> The best possible way to deny such a connection (CONNECT) is using a 
> 50x or 40x code.
> I do not remember which one is better accepted by browsers and clients, 
> but you can easily find out yourself by adding a couple of lines or 
> using the scripts I wrote before.
>
>
>
> And more directly to the subject: if these network "issues" are caused 
> by squid mishandling CONNECT requests that were mangled by a 
> url_rewrite, then I would say these specific cases are proof that the 
> interface is being used wrongly.
>
>
>
> The url_rewrite helper might contain a bug when handling a CONNECT 
> request, but the real bug is that it is handling these connections at all.
>
> My suggestion is that you try to see if you can use something like this:
>
> ## START
> url_rewrite_program /usr/local/bin/squidGuard
> url_rewrite_children 8 startup=4 idle=4 concurrency=0
> url_rewrite_access deny CONNECT
> url_rewrite_access allow all
> ## END
>
>
>
> Then add another external_acl helper with the wrapper I gave you, 
> without any special deny_info for the ACL.
> Squid will probably handle these fine and will deny the connections 
> with an appropriate access-denied code (don't expect fancy warning 
> pages without using some level of ssl_bump).
>
>
>
> I do not know whether you need a fast solution or are planning 
> carefully with the available options.
>
> The options are to either use a "fast" solution to mitigate an annoying 
> issue or to find the right path.
>
> The solution for either option is in your hands..
>
> If your squidGuard setup offers you the right solution from all the 
> right aspects, despite the fact that it works slower than other 
> solutions, then it is the solution for you!!
>
>
>
> The technology of black- and whitelisting has been enhanced in the last 
> decade, but not too drastically.
>
> The basic concept is that there are query and analysis components, 
> which are not forced to be one piece of software.
>
> There will always be false positives one way or another, but some 
> categorization systems are much stricter and more sensitive about the 
> subject than others.
>
> All this leaves aside the sources of the lists you use, since only you 
> have a definition of the wanted/desired results given the available 
> options.
>
>
>
> You also stated that you have some funding issues, but just as a side note:
> the desired result ultimately dictates the final product of the work 
> and the expense (if any).
>
> I mean by these words that if you do not care about false positives and 
> some product satisfies the desire or need, then I think it's OK for 
> your case.

I have been taking care of false positives by whitelisting sites over the last decade. My set of whitelisted sites is basically no longer growing.

At some point I *will* have to look at something else. But first, I'll have to examine:
1) Which software to use: e2guardian and ufdbguard were referenced.
I'll have to delve into some study here.
2) The same regarding which free *lists* to use. I will definitely need some reading here too.

Again, thanks for taking all this time to respond. HUGELY appreciated :)

> I will add that, from my tests, there are different results in basic 
> page/object loading/access speed depending on which software and/or 
> which protocol/interface is used.
>
>
>
> I am here if you need more help/advice handling the situation.
>
> Eliezer
>
>
>
> ----
>
> Eliezer Croitoru
>
> Linux System Administrator
>
> Mobile: +972-5-28704261
>
> Email: eliezer at ngtech.co.il
>
>
>
>
>
> -----Original Message-----
> From: michail.pappas at gmail.com [mailto:michail.pappas at gmail.com] On 
> Behalf Of reqman
> Sent: Wednesday, June 15, 2016 2:19 PM
> To: Eliezer Croitoru
> Cc: squid-users at lists.squid-cache.org
>
> Subject: Re: [squid-users] HTTPS issues with squidguard after 
> upgrading from squid 2.7 to 3.5
>
>
>
> Hello Eliezer,
>
>
>
> 2016-06-15 11:45 GMT+03:00 Eliezer Croitoru <eliezer at ngtech.co.il>:
>
>> Hey Michael,
>
>>
>
>> I am missing a couple of details about the setup which might affect 
>> how we understand what is causing the issue and how to resolve it.
>
>> There are changes from squid 2.7 to 3.5, and in my opinion these are 
>> mandatory to resolve so as not to go a step back.
>
>
>
> Yes, I saw that 3.5 effectively disabled/obsoleted/deactivated various 
> options. I believe I took care to follow through on those requirements.
>
>
>
>> What version of SquidGuard 1.4 did you install? The one patched for 
>> squid 3.4+ compatibility?
>
>
>
> I installed a ready-to-use package from FreeBSD. I presume that it is 
> OK with squid 3.4, according to its dependencies:
>
>
>
> # pkg query "%do%dv" squidGuard
> www/squid3.5.19
> databases/db55.3.28_3
>
>
>
> Other information for squidGuard:
>
>
>
> # pkg info squidGuard
> squidGuard-1.4_15
> Name           : squidGuard
> Version        : 1.4_15
> Installed on   : Mon May 30 08:24:17 2016 EEST
> Origin         : www/squidguard
> Architecture   : freebsd:10:x86:64
> Prefix         : /usr/local
> Categories     : www
> Licenses       : GPLv2
> Maintainer     : garga at FreeBSD.org
> WWW            : http://www.squidguard.org/
> Comment        : Fast redirector for squid
> Options        :
>        DNS_BL         : off
>        DOCS           : on
>        EXAMPLES       : on
>        LDAP           : off
>        QUOTE_STRING   : off
>        STRIP_NTDOMAIN : off
> Shared Libs required:
>        libdb-5.3.so.0
> Annotations    :
>        repo_type      : binary
>        repository     : FreeBSD
> Flat size      : 2.24MiB
>
>
>
>> More details about it here:
>
>> http://bugs.squid-cache.org/show_bug.cgi?id=3978
>
>> Now, if it is indeed patched and works as expected from the 3.4+ 
>> compatibility level of things, then let's move on.
>
>
>
> Checking the bug, if I understand correctly, my squidGuard wouldn't 
> work at all if it were not patched. That is not the case: it works OK 
> for HTTP URLs, i.e., it blocks fine and the user is correctly 
> redirected to the block page set up for this purpose. It's HTTPS I'm 
> having issues with: according to some discussions, if something gets 
> blocked by squidGuard, something is miscommunicated (or not 
> communicated at all) with squid.
> Instead of a block, the user waits endlessly for the page.
>
>
>
>> Are you using Squid in intercept/transparent/tproxy mode, or is it 
>> defined in the browsers directly?
>
>> If you are using intercept mode, what have you defined in the FreeBSD 
>> pf/ipfw?
>
>
>
> Squid operates in normal mode. I've configured a 10-year-old proxy 
> autodiscovery script, which is published through DHCP. All browsers 
> are set to configure everything automatically.
>
>
>
>> And about the quote from the mailing list:
>
>> SquidGuard was written to operate under the url_rewrite 
>> interface/protocol, not external_acl.
>
>
>
> I've lost you here: why do I need squidGuard to operate under external_acl?
> Can't I leave it running with url_rewrite?
>
>
>
>> Due to this it has some disadvantages, and the advice is to modify the 
>> helper (SquidGuard or another) to operate in a different way.
>
>> It is possible to use the patched version of SquidGuard under the 
>> external_acl interface and use squid options to deny/redirect the request.
>
>
>
> I saw your other email, but I have to ask again. My own squidGuard 
> "seems" to work. What are the merits of doing things with external_acl 
> instead of the way I am doing things right now?
>
>
>
>> It removes some of the complexity of the issue.
>
>> I have just written an example of how to use my software SquidBlocker 
>> under external_acl, and here is the adapted example that can be used 
>> with a patched SquidGuard:
>
>> ## START OF SETTINGS
>> external_acl_type filter_url ipv4 concurrency=0 ttl=3 %URI %SRC/- %LOGIN %METHOD /usr/local/bin/squidGuard
>> acl filter_url_acl external filter_url
>> deny_info http://ngtech.co.il/block_page/?url=%u&domain=%H filter_url_acl
>> # or
>> # deny_info 302:http://ngtech.co.il/block_page/?url=%u&domain=%H filter_url_acl
>> http_access deny !filter_url_acl
>> http_access allow localnet filter_url_acl
>> ## END OF SETTINGS
>
>>
>
>> I have not tested this request format, but if it doesn't work this way 
>> then a little cosmetic adjustment will make it work.
>
>>
>
>> When more information is available, we can try to see where the issue 
>> comes from.
>
>>
>
>> Eliezer
>
>>
>
>> ----
>
>> Eliezer Croitoru
>
>> Linux System Administrator
>
>> Mobile: +972-5-28704261
>
>> Email: eliezer at ngtech.co.il
>
>>
>
>>
>
>> -----Original Message-----
>
>> From: squid-users [mailto:squid-users-bounces at lists.squid-cache.org]
>
>> On Behalf Of reqman
>
>> Sent: Wednesday, June 15, 2016 10:22 AM
>
>> To: squid-users at lists.squid-cache.org
>
>> Subject: [squid-users] HTTPS issues with squidguard after upgrading
>
>> from squid 2.7 to 3.5
>
>>
>
>> Hello all,
>
>>
>
>> I have been running squid 2.7.X alongside squidguard 1.4 on a FreeBSD 
>> 8.x box for years.
>
>> I started out some 10 years ago, with a much older 
>> squid/squidguard/FreeBSD combination.
>
>>
>
>> Having to upgrade to FreeBSD 10.3, I examined my options regarding squid.
>
>> 3.5.19 was available, which I assumed would behave the same as 2.7 in 
>> terms of compatibility.
>
>> Squidguard 1.4 was also installed.
>
>>
>
>> - Squid was configured to behave along the lines of what I had on 2.7.
>
>> - For squidguard I used the exact same blocklists and configurations.
>
>> Note that I do not employ any URL rewriting in squidGuard, only 
>> redirection.
>
>> - no SSL-bump or other SSL interception takes place
>
>> - the squidguard-related lines on squid are the following:
>
>>
>
>> url_rewrite_program /usr/local/bin/squidGuard
>> url_rewrite_children 8 startup=4 idle=4 concurrency=0
>> url_rewrite_access allow all
>
>>
>
>> - In squidGuard.conf, the typical redirect section is like:
>
>>
>
>>  default {
>>          pass local-ok !block1 !block2 !blockN all
>>          redirect 301:http://localsite/block.htm?clientaddr=%a+clientname=%n+clientident=%i+srcclass=%s+targetclass=%t+url=%u
>>  }
>
>>
>
>> I am now experiencing problems that I did not have before. 
>> Specifically, access to certain, but *not* all, HTTPS sites seems to 
>> time out.
>
>> Furthermore, I see entries similar to the following in cache.log:
>
>>
>
>> 2016/06/15 09:27:59 kid1| abandoning local=192.168.0.1:3128 remote=192.168.2.239:3446 FD 591 flags=1
>> 2016/06/15 09:27:59 kid1| abandoning local=192.168.0.1:3128 remote=192.168.2.239:3448 FD 592 flags=1
>> 2016/06/15 09:27:59 kid1| abandoning local=192.168.0.1:3128 remote=192.168.2.239:3452 FD 594 flags=1
>> 2016/06/15 09:27:59 kid1| abandoning local=192.168.0.1:3128 remote=192.168.2.239:3456 FD 596 flags=1
>> 2016/06/15 09:27:59 kid1| abandoning local=192.168.0.1:3128 remote=192.168.2.239:3454 FD 595 flags=1
>> 2016/06/15 09:27:59 kid1| abandoning local=192.168.0.1:3128 remote=192.168.2.239:3458 FD 597 flags=1
>> 2016/06/15 09:27:59 kid1| abandoning local=192.168.0.1:3128 remote=192.168.2.239:3462 FD 599 flags=1
>
>>
>
>> Searching around, the closest I have come to an answer is the following:
>> http://www.squid-cache.org/mail-archive/squid-users/201211/0165.html
>> I am not sure, though, whether I am plagued by the same issue, 
>> considering that the thread refers to a squid version dated 4 years 
>> ago. And I definitely do not understand what is meant by the poster's 
>> proposal:
>
>>
>
>> "If you can't alter the re-writer to perform redirection you can work 
>> around that by using:
>
>>
>
>>   acl foo ... some test to match the re-written URL ...
>
>>   deny_info 302:%s foo
>
>>   adapted_http_access deny foo "
>
>>
>
>> Can someone help resolve this?
>
>> Is the 2.7 series supported at all?
>
>> As it is, if everything fails, I'll have to go back to it, provided 
>> there's some support.
>
>>
>
>> BR,
>
>>
>
>>
>
>> Michael.-
>
_______________________________________________
squid-users mailing list
squid-users at lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


