[squid-users] Subject: Re: authentication of every GET request from part of URL?

Amos Jeffries squid3 at treenet.co.nz
Mon Nov 9 17:42:27 UTC 2015

On 10/11/2015 6:12 a.m., Sreenath BH wrote:
> Hi Alex,
> thanks for your detailed answers.
> Here are more details.
> 1. If the URL does not have any token, we would like to send an error
> message back to the browser/client, without doing a cache lookup, or
> going to backend apache server.
> 2. If the token is invalid (that is we can't find it in a database),
> that means we can not serve
> data. In this case we would like to send back a HTTP error (something
> like a  401 or 404, along with a more descriptive message)

All of the above is external_acl_type helper operations.
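For illustration, a minimal external ACL helper could look something like this. This is an untested sketch: the "token" query-parameter name, the in-memory token set (standing in for your real database lookup), and the assumption that Squid is configured to send only %URI per line (no concurrency channel-ID) are all assumptions, not Squid requirements.

```python
#!/usr/bin/env python3
# Sketch of an external_acl_type helper: Squid writes one request per
# line on stdin (assumed format: just %URI, no concurrency channel-ID)
# and expects "OK" or "ERR" back on stdout for each line.
import sys
from urllib.parse import urlparse, parse_qs

# Stand-in for the real database lookup -- purely illustrative.
VALID_TOKENS = {"abc123", "def456"}

def check_url(url):
    """Return 'OK' when the URL carries a known token, else 'ERR'."""
    qs = parse_qs(urlparse(url).query)
    tokens = qs.get("token", [])      # 'token' parameter name is assumed
    if tokens and tokens[0] in VALID_TOKENS:
        return "OK"
    return "ERR message=missing_or_invalid_token"

if __name__ == "__main__":
    for line in sys.stdin:
        fields = line.split()
        url = fields[0] if fields else ""
        print(check_url(url), flush=True)  # flush so Squid sees each reply
```

A real helper would query your token database instead of a set, and must flush after every reply or Squid will appear to hang.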

> 3. If the token is valid(found), remove the token from the URL, and
> use remaining part of URL as the key to look in Squid cache.
> 4. If found return that data, along with proper HTTP status code.

The above is url_rewrite_program operations.
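A rewrite helper for that step could be sketched roughly as below. Again an untested sketch: it assumes the classic line-based protocol (first whitespace-separated field is the URL, reply with the rewritten URL), no helper concurrency, and a query parameter literally named "token". Newer Squid versions (3.4+) prefer replies of the form "OK rewrite-url=...", so adjust to your version.

```python
#!/usr/bin/env python3
# Sketch of a url_rewrite_program helper using the classic line-based
# protocol: Squid writes "URL <extras>" per request; the helper replies
# with the rewritten URL on stdout.
import sys
from urllib.parse import urlparse, parse_qsl, urlencode, urlunparse

def strip_token(url):
    """Remove the (assumed) 'token' query parameter from the URL."""
    parts = urlparse(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query) if k != "token"]
    return urlunparse(parts._replace(query=urlencode(kept)))

if __name__ == "__main__":
    for line in sys.stdin:
        fields = line.split()
        if not fields:
            continue
        print(strip_token(fields[0]), flush=True)
```

Because the rewritten (token-free) URL is what Squid goes on to use, all clients presenting different tokens for the same object share one cache entry.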

> 5. If cache lookup fails(not cached), send HTTP request to back-end
> apache server (removing the token), get returned result, store in
> cache, and return to client/browser.

And that part is normal caching. Squid will do it by default.

Except the "removing the token" part, which was already done at step #3,
so it has no relevance here at step #5.

> I read about ACL helper programs, and it appears I can do arbitrary
> validations in it, so should work.
> Is it correct to assume that the external ACL code runs before url rewriting?,

The http_access tests are run before re-writing.
If the external ACL is one of those http_access tests the answer is yes.
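Wiring the two helpers together in squid.conf might look roughly like this. A sketch only: the helper paths, the ACL/helper names, and the deny_info status/page pairing are assumptions to adapt to your setup.

```
# Validate the token before anything else (helper path is an assumption)
external_acl_type token_check ttl=60 children-max=10 %URI /usr/local/bin/check_token.py
acl valid_token external token_check

# Requests failing the check get an error page instead of a cache lookup
deny_info 401:ERR_NO_TOKEN valid_token
http_access deny !valid_token
http_access allow localnet

# Only requests that passed http_access reach the rewriter
url_rewrite_program /usr/local/bin/strip_token.py
```

The ordering matters: because the external ACL sits in http_access, invalid requests are rejected before the rewriter or the cache is ever consulted, which is exactly the behaviour asked for in points #1 and #2.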

> Does the URL rewriter run before a cache lookup?

Yes. The rewriter runs before the cache lookup, so the rewritten URL
(with the token removed) is what Squid uses as the cache key.
Although, please note that despite this workaround working for your own
cache, it really is *only* your proxy which will behave nicely. Every
other cache on the planet will see your application's URLs as unique and
needing separate cache slots.

This not only wastes cache space for them, but also forces them to pass
extra traffic in the form of full-object fetches to your proxy. Which
raises the bandwidth costs for both them and you far beyond what proper
header-based authentication or authorization would.

As other sysadmins around the world notice this unnecessarily raised
cost, they will start to hack their configs to force-cache the responses
from your application. Which will bypass your protection system entirely,
since your proxy may not even see many of the requests.

The earlier you can get the application re-design underway to remove the
credentials token from the URL, the earlier the external problems and
costs will start to disappear.
