[squid-users] decreased requests per second with big file size

Ambadas H ambadas.tdna at gmail.com
Mon Oct 12 05:51:00 UTC 2015


Hi Amos,

Thanks for responding

"You would be better off taking the first use of any domain by a client,
then ignoring other requests for it until there is some long period
between two of them. The opposite of what session helpers do."

Could you please elaborate a little on the above logic?

My understanding, if I'm not wrong, is to take the domain/host of a client's
first GET request and then ignore subsequent GET requests that match the
same domain/host.

In that case, isn't there a possibility of multiple unique domains/hosts for
a single page (e.g. third-party ads, analytics, etc.)?
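For context, here is roughly how I picture the quoted logic as an offline pass
over access.log. This is only a hypothetical sketch, not anything Squid
provides: the field positions assume the default "squid" log format
(timestamp, elapsed, client, status, bytes, method, URL, ...), and the
30-minute idle gap is an arbitrary choice.

```python
# Sketch of "first use per (client, domain)" counting over a Squid access.log.
# Assumptions: default "squid" log format (field 0 = UNIX timestamp,
# field 2 = client IP, field 6 = URL); IDLE_GAP is arbitrary.
from urllib.parse import urlsplit

IDLE_GAP = 30 * 60  # seconds of inactivity before a domain counts as "new" again

def first_uses(log_lines, idle_gap=IDLE_GAP):
    """Yield (timestamp, client, domain) once per "first use" of a domain
    by a client, skipping repeat requests closer together than idle_gap."""
    last_seen = {}  # (client, domain) -> timestamp of that pair's latest request
    for line in log_lines:
        fields = line.split()
        if len(fields) < 7:
            continue  # skip malformed or truncated lines
        ts, client, url = float(fields[0]), fields[2], fields[6]
        # CONNECT requests log "host:port" instead of a full URL
        domain = urlsplit(url).hostname or url.split(":")[0]
        key = (client, domain)
        prev = last_seen.get(key)
        if prev is None or ts - prev > idle_gap:
            yield ts, client, domain
        last_seen[key] = ts
```

Something like `for ts, client, domain in first_uses(open("/var/log/squid/access.log")): ...`
would then list each "first" visit; the repeated GETs for the same page land
inside the idle gap and are skipped, while ad/analytics domains still show up
as separate first uses.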


On Sat, Oct 10, 2015 at 10:57 AM, Amos Jeffries <squid3 at treenet.co.nz>
wrote:

> On 10/10/2015 1:47 a.m., Ambadas H wrote:
> > Hi,
> >
> > I am using below setup:
> > Squid proxy 3.5.4.
> > CentOS 7.1
> >
> > I am trying to analyze the most used websites by the users via Squid
> proxy.
> > I just require the first GET request for that particular browsed page,
> > not the subsequent GETs for that same page.
> >
> > Eg:
> > 1) user enters http://google.com in client (mozilla)
> > 2) client gets page containing some other urls
> > 3) client initiates multiple GETs for the same requested page without the
> > user's knowledge
> >
> > I myself tried a logic where I assumed that if the "Referer" header is
> > present, then it's not the first GET but a subsequent one for the same
> > requested page.
> >
> > I know I can't rely on the "Referer" header to always be present, as it's
> > not mandatory. But I want to know: is my logic correct? And is there any
> > alternative solution?
>
> Your assumption is wrong. The Referer header (when it exists) tracks a
> whole browsing session, not a particular website or page.
>
> You would be better off taking the first use of any domain by a client,
> then ignoring other requests for it until there is some long period
> between two of them. The opposite of what session helpers do.
>
> Amos
>
> _______________________________________________
> squid-users mailing list
> squid-users at lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users
>