[squid-users] decreased requests per second with big file size
Eliezer Croitoru
eliezer at ngtech.co.il
Wed Oct 14 06:48:33 UTC 2015
You've got my attention now!
Depending on what you want, you might be able to use an external logging
helper for that.
I am unsure if it is possible to use two access_log directives in the
configuration; Amos or others can answer that.
It is pretty simple to implement, since the input data flows like any
access.log lines, only to a program instead of a file.
Then you can update some DB live.
There is even the option of using the same log daemon for both logging and
an HTTP interface for the statistics, but I will not go that way now.
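Something like the sketch below could serve as such a helper (rough and
untested; the daemon protocol details, the paths and the SQLite sink are my
assumptions, to be verified against the wiki):

#!/usr/bin/env python3
"""Hypothetical external logging helper for Squid (a sketch, not tested).

Assumed squid.conf wiring (example values, not from this thread):
    logfile_daemon /usr/local/bin/squid-log-daemon.py
    access_log daemon:/var/log/squid/access.log squid

The stock daemon module prefixes every stdin line with a one-byte command;
'L' carries an access log record (per the wiki's description of the
logfile-daemon protocol).
"""
import sqlite3
import sys


def main():
    # "Update some DB live": a local SQLite file stands in for the DB here.
    db = sqlite3.connect("/var/log/squid/access_feed.db")
    db.execute("CREATE TABLE IF NOT EXISTS access (line TEXT)")

    for raw in sys.stdin:
        cmd, payload = raw[:1], raw[1:].rstrip("\n")
        if cmd == "L":  # a log record in the configured logformat
            db.execute("INSERT INTO access (line) VALUES (?)", (payload,))
            db.commit()
        # Rotate/truncate/reopen and the other commands are ignored here,
        # since a database sink has nothing to rotate.


if __name__ == "__main__":
    main()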
...
I have tested it, and it seems possible to use two access_log directives.
I am unsure how to implement the idea of combining access.log and an
external logger.
But there is certainly the option of:
"access_log tcp://host:port"
which, if you write a TCP service, will make your life easy.
I am unfamiliar with the logging protocol but it seems like the wiki can
help with that.
*I am willing to write an example TCP log daemon for Squid in
Golang/Ruby/Python for the Squid project if one is not already present in
these languages.*
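For example, a rough TCP log receiver in Python could look like this
(assuming the tcp: module just streams newline-terminated log lines; the
host, port and framing here are assumptions to check against the wiki):

#!/usr/bin/env python3
"""Minimal receiver for "access_log tcp://127.0.0.1:5140" (a sketch)."""
import socketserver


class LogLineHandler(socketserver.StreamRequestHandler):
    def handle(self):
        # One connection from Squid; read log records line by line.
        for raw in self.rfile:
            line = raw.decode("utf-8", errors="replace").rstrip("\r\n")
            if line:
                print(line)  # replace with a DB insert, live feed, etc.


if __name__ == "__main__":
    with socketserver.ThreadingTCPServer(("127.0.0.1", 5140), LogLineHandler) as srv:
        srv.serve_forever()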
Eliezer
* Testing 3.5.10 RPMs is in progress.
On 14/10/2015 09:00, Ambadas H wrote:
> Hi Eliezer,
>
> It's mostly like a live feed.
>
> I am writing these sites + (a client tracking parameter) to a flat file via
> Squid, from where another process reads them and does further processing
> (e.g. analyzing the top sites used by any particular client).
>
> And that is why I was working on getting just the URLs entered by clients.
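The "top sites per client" step described just above could look roughly like
the sketch below; the flat-file format assumed here (one "<client_id> <url>"
record per line) is purely illustrative, since the thread does not specify it:

#!/usr/bin/env python3
"""Sketch of a "top sites per client" post-processing step (hypothetical)."""
from collections import Counter, defaultdict
from urllib.parse import urlsplit
import sys


def top_sites(path, per_client=10):
    counts = defaultdict(Counter)
    with open(path) as fh:
        for line in fh:
            try:
                client, url = line.split(None, 1)
            except ValueError:
                continue  # skip malformed records
            site = urlsplit(url.strip()).netloc or url.strip()
            counts[client][site] += 1
    for client, sites in counts.items():
        print(client, sites.most_common(per_client))


if __name__ == "__main__":
    top_sites(sys.argv[1] if len(sys.argv) > 1 else "feed.txt")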
>
>
> Ambadas
>
>
> On Tue, Oct 13, 2015 at 2:01 PM, Eliezer Croitoru <eliezer at ngtech.co.il>
> wrote:
>
>> Hey Ambadas,
>>
>> I was wondering if you want it to be something like a "live feed" or just
>> for log analysis?
>>
>> Eliezer
>>
>> On 09/10/2015 15:47, Ambadas H wrote:
>>
>>> Hi,
>>>
>>> I am using the setup below:
>>> Squid proxy 3.5.4.
>>> CentOS 7.1
>>>
>>> I am trying to analyze the websites most used by the users via the Squid
>>> proxy.
>>> I just require the first GET request for that particular browsed page,
>>> & not the subsequent GETs for that same page.
>>>
>>> E.g.:
>>> 1) user enters *http://google.com* in the client (Mozilla)
>>> 2) client gets a page containing some other URLs
>>> 3) client initiates multiple GETs for the same requested page without the
>>> user's knowledge
>>>
>>> I myself tried a logic where I assumed that if the "Referer" header is
>>> present, then it is not the first GET but a subsequent one for the same
>>> requested page.
>>>
>>> I know I can't rely on the "Referer" header always being present, as it is
>>> not mandatory. But I want to know if my logic is correct, & also if
>>> there's any alternative solution?
>>>
>>>
>>>
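Regarding the Referer heuristic quoted above, a rough filtering sketch could
look like this, assuming the Referer request header has been added to the
access log with a custom logformat (the format name, field order and paths
below are my assumptions, not an existing setup):

#!/usr/bin/env python3
"""Sketch of a Referer-based "first GET" filter (illustrative, not tested).

Assumed squid.conf additions putting the Referer header last on each line:
    logformat referer %ts.%03tu %>a %rm %ru %{Referer}>h
    access_log /var/log/squid/referer.log referer
A "-" in that field means no Referer was sent, which the quoted heuristic
treats as the user-initiated (first) GET.
"""
import sys


def first_gets(lines):
    for line in lines:
        fields = line.split()
        if len(fields) < 5:
            continue  # malformed record
        method, url, referer = fields[2], fields[3], fields[4]
        if method == "GET" and referer == "-":
            yield url


if __name__ == "__main__":
    for url in first_gets(sys.stdin):
        print(url)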