[squid-users] stopping after rotate

Jorgeley Junior jorgeley at gmail.com
Wed Sep 9 14:21:53 UTC 2015


ok, thank you all so much!

2015-09-09 10:40 GMT-03:00 Marcus Kool <marcus.kool at urlfilterdb.com>:

> It seems that your system is finally getting healthy.
>
> The fact that the resident memory is 371 MB means that you have no disk
> cache, that Squid is hardly used, or both.
> But look at that red 6.4 GB of virtual memory, which indicates that Squid
> can grow to 6.4 GB or even more when it is used.
>
> So the next step is to start using the proxy and monitor the process size.
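The monitoring step Marcus describes can be sketched with standard `ps` output. The helper below is a hedged illustration: the kB-to-MB conversion is ordinary arithmetic, the sample figures come from the htop numbers quoted in this thread, and the process name `squid` in the `ps -C` hint is an assumption (adjust it for your setup).

```shell
# Sketch: convert ps' kB columns to MB to eyeball Squid's size.
# Gather the raw numbers with, e.g.:  ps -C squid -o vsz=,rss=
report_mb() {
    # $1 = virtual size in kB, $2 = resident size in kB
    echo "virtual $(( $1 / 1024 )) MB, resident $(( $2 / 1024 )) MB"
}

# The figures from Jorgeley's htop (6.4 GB virtual, 371 MB resident):
report_mb 6710886 379904   # -> virtual 6553 MB, resident 371 MB
```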
>
> Marcus
>
>
> On 09/09/2015 10:24 AM, Jorgeley Junior wrote:
>
>> changed cache_mem to 3GB, after one hour, this is my htop:
>>
>> [htop screenshot omitted from the text archive]
>>         2015-09-08 21:43 GMT-03:00 Jorgeley Junior <jorgeley at gmail.com>:
>>
>>             ok, I'll do it
>>
>>             2015-09-08 21:30 GMT-03:00 Marcus Kool <marcus.kool at urlfilterdb.com>:
>>
>>
>>
>>                 On 09/08/2015 09:23 PM, Jorgeley Junior wrote:
>>
>>                     OK, I read that already. I set cache_mem to 5 GB, so is
>> that not OK?
>>
>>
>>                 No. Squid will use more than 6 GB with cache_mem set to 5
>> GB.
>>                 I suggest that you use 2500 MB and after Squid runs for 1
>> hour, see what the total process size is.
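The 2500 MB figure can be sanity-checked with a rough back-of-the-envelope rule. The 2.4x "total process size vs. cache_mem" factor below is an assumption inferred from Marcus' numbers in this thread (2500 MB of cache_mem targeting a ~6 GB total), not an official Squid formula:

```shell
# Sketch: back out a cache_mem value from a total-process-size ceiling.
# factor 2.4 (scaled by 10 for integer shell arithmetic) is an assumed
# ratio of total process size to cache_mem, inferred from this thread.
ceiling_mb=6000
factor_x10=24
cache_mem_mb=$(( ceiling_mb * 10 / factor_x10 ))
echo "suggested cache_mem: ${cache_mem_mb} MB"   # -> suggested cache_mem: 2500 MB
```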
>>
>>                 Marcus
>>
>>
>>                     2015-09-08 20:25 GMT-03:00 Marcus Kool <marcus.kool at urlfilterdb.com>:
>>
>>
>>
>>                          On 09/08/2015 10:39 AM, Jorgeley Junior wrote:
>>
>>                              I have 8GB physical memory and my swap is
>> 32GB.
>>                              I didn't increase the swap yet, should I?
>>
>>
>>                          You should start by reading the memory FAQ:
>> http://wiki.squid-cache.org/SquidFaq/SquidMemory
>>
>>                          The general rule for all processes applies: make
>> sure that a process is *not* larger than 80% of the physical memory.
>>                          In your case, you must reduce cache_mem and make
>> sure that Squid does not use more than 6 GB.
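The 80% rule applied to the 8 GB machine in this thread works out as follows (a throwaway calculation, nothing Squid-specific):

```shell
# Sketch of the 80%-of-physical-memory rule with 8 GB of RAM.
ram_mb=8192
limit_mb=$(( ram_mb * 80 / 100 ))
echo "process-size ceiling: ${limit_mb} MB"   # -> process-size ceiling: 6553 MB
```

6553 MB is roughly 6.4 GB, which is why the advice here caps Squid at about 6 GB.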
>>
>>                          A swap of 32 GB is fine for a system with 8 GB
>> physical memory.
>>
>>                          I also suggest considering a memory upgrade.
>>
>>                          Marcus
>>
>>
>>                              2015-09-08 9:23 GMT-03:00 Marcus Kool <marcus.kool at urlfilterdb.com>:
>>
>>
>>
>>                                   On 09/08/2015 08:11 AM, Jorgeley Junior
>> wrote:
>>
>>                                       Thank you all, this is the output:
>>                                       vm.overcommit_memory = 0
>>                                       vm.swappiness = 60
>>                                       I have Red Hat 6.6
>>
>>
>>                                   The value of vm.overcommit_memory is OK.
>>                                   The default value for vm.swappiness is
>> way too high. It means that Linux swaps out parts of processes when they
>> are idle for a while.
>>                                   For better overall system performance,
>> you want those processes in memory as long as possible and not swapped out,
>> so I recommend changing it to 15.
>>                                   This implies that the OS has 15% of the
>> physical memory available for file system buffers, which is plenty.
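The two sysctl values discussed in this thread can be persisted across reboots in /etc/sysctl.conf. A minimal fragment with the values recommended above (apply it with `sysctl -p`, which requires root):

```
# /etc/sysctl.conf fragment -- values recommended in this thread
vm.overcommit_memory = 0
vm.swappiness = 15
```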
>>
>>                                   You only mentioned that the swap is 32
>> GB.  What is the size of the physical memory ?
>>
>>                                   Did you already increase the swap ?
>>
>>                                   Marcus
>>
>>
>>                                       2015-09-05 15:08 GMT-03:00 Marcus Kool <marcus.kool at urlfilterdb.com>:
>>
>>                                            On Linux, an important sysctl
>> parameter that determines how Linux behaves with respect to VM allocation
>> is vm.overcommit_memory (should be 0).
>>                                            And vm.swappiness is important
>> for tuning servers (should be 10-15).
>>
>>                                            Which version of Linux do you
>> have and what is the output of
>>                                                sysctl -a | grep -e
>> vm.overcommit_memory -e  vm.swappiness
>>
>>                                            Marcus
>>
>>
>>                                            On 09/04/2015 07:04 PM,
>> Jorgeley Junior wrote:
>>
>>                                                Thanks Amos, I will
>> increase the swap
>>
>>                                                On 04/09/2015 17:22, "Amos
>> Jeffries" <squid3 at treenet.co.nz> wrote:
>>
>>                                                     On 5/09/2015 7:16
>> a.m., Jorgeley Junior wrote:
>>                                                      > Thanks Amos, my
>> swap is 32GB, so that's causing the error as you said.
>>                                                      > Which is the
>> better choice: increase the swap size or reduce the
>>                                                      > cache_mem???
>>                                                      >
>>
>>                                                     Probably both. I
>> suspect you will need 128 GB of swap.
>>
>>                                                     Increase the swap so
>> the system lets Squid use more virtual memory.
>>
>>                                                     Decrease the
>> cache_mem so that Squid does not actually end up using the
>>                                                     swap for its main
>> worker processes. That is a real killer for performance.
>>
>>
>>                                                     Amos
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
> _______________________________________________
> squid-users mailing list
> squid-users at lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users
>



--
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.squid-cache.org/pipermail/squid-users/attachments/20150909/ed6a7f4d/attachment-0001.html>


More information about the squid-users mailing list