<div dir="ltr"><div dir="ltr">Hello, Alex,<div><br></div><div>Thank you very much.</div><div><br></div><div>> Can you repeat this test and share a pointer to the corresponding<br>> compressed cache.log, containing those 500 (or fewer, as long as the<br>> problem is reproduced!) concurrent transactions. One or many of those<br>> concurrent transactions resulted in the unwanted entry deletion. The log<br>> may show what happened in that case.<br></div><div>I cleared the rock cache, set the debug level, restarted Squid, cleared cache.log, ran the 500-thread test, waited for it to finish, and then ran a single curl request to confirm that it returned TCP_MISS.</div><div>Then I stopped Squid to limit the size of cache.log.</div><div>The link to the cache.log is <a href="https://drive.google.com/file/d/1kC8oV8WAelsBYoZDoqnNsnyd7cfYOoDi/view?usp=sharing">https://drive.google.com/file/d/1kC8oV8WAelsBYoZDoqnNsnyd7cfYOoDi/view?usp=sharing</a></div><div>I think the access.log file may also be helpful for analyzing the problem: <a href="https://drive.google.com/file/d/1_fDd2mXgeIIHKdZPEavg50KUqTIR5Ltu/view?usp=sharing">https://drive.google.com/file/d/1_fDd2mXgeIIHKdZPEavg50KUqTIR5Ltu/view?usp=sharing</a></div><div>The last record in the file (<span style="color:rgb(0,0,0);font-family:"Courier New",Courier,monospace,arial,sans-serif;font-size:14px;white-space:pre-wrap">2023-06-02 09:52:48</span>) is a single curl test request.</div><div><br></div><div>The test statistics are:</div><div><font face="monospace">Count   STATUS </font></div><div><font face="monospace">     22 TCP_CF_HIT/200/- HIER_NONE/-<br>     72 TCP_HIT/200/- HIER_NONE/-<br>    404 TCP_MISS/200/200 FIRSTUP_PARENT/parent_proxy<br>      3 TCP_SWAPFAIL_MISS/200/200 FIRSTUP_PARENT/parent_proxy</font><br></div><div><br></div><div>Kind regards,</div><div>     Ankor.</div><div><br></div><div><br></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">Thu, Jun 1, 2023
at 19:15, Alex Rousskov <<a href="mailto:rousskov@measurement-factory.com">rousskov@measurement-factory.com</a>>:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">On 6/1/23 05:20, Andrey K wrote:<br>
<br>
>  > The next step I would recommend is to study the very first cache miss<br>
>  > _after_ the 500 or 200 concurrent threads test. Doing so may shed light<br>
>  > on why Squid is refusing to serve that (presumably cached) object from<br>
>  > the cache. I suspect that the object was marked for deletion earlier<br>
<br>
> openForReadingAt: cannot open marked entry 11138 for reading cache_mem_map<br>
> openForReadingAt: cannot open marked entry 719406 for reading /data/squid.user/cache_map<br>
<br>
<br>
The debugging log you have shared confirms that Squid deleted[1] the <br>
previously cached entry, from both caches (memory and disk). Now comes <br>
the hard part -- figuring out why Squid deleted that entry.<br>
<br>
> I cleared the rock cache, changed the squid.conf (added debug_options<br>
> ALL,9), restarted the squid, ran a test with 500 concurrent threads<br>
<br>
Can you repeat this test and share a pointer to the corresponding <br>
compressed cache.log, containing those 500 (or fewer, as long as the <br>
problem is reproduced!) concurrent transactions. One or many of those <br>
concurrent transactions resulted in the unwanted entry deletion. The log <br>
may show what happened in that case.<br>
<br>
FWIW, I do not recommend spending your time analyzing that huge log. <br>
Efficient analysis requires specialized knowledge in this case. I will <br>
share the results here, of course.<br>
<br>
<br>
Thank you,<br>
<br>
Alex.<br>
<br>
[1] That entry deletion does not imply that all (or any) of the cached <br>
entry bytes are gone from the cache file on disk. It is likely that only <br>
the shared memory _index_ for that disk file was adjusted in your micro <br>
test. That index adjustment is enough for Squid to declare a cache miss.<br>
<br>
<br>
> Wed, May 31, 2023 at 16:43, Alex Rousskov:<br>
> <br>
>     On 5/31/23 02:56, Andrey K wrote:<br>
> <br>
>      >  > Do you get close to 100% hit ratio if clients access these URLs<br>
>      >  > sequentially rather than concurrently? If not, then focus on that<br>
>      >  > problem before you open the collapsed forwarding Pandora box.<br>
>      > When I run curl sequentially like this:<br>
>      > for i in `seq 500`; do curl --tlsv1.2 -k --proxy 0001vsg01:3131 -v<br>
>      > $URL >/dev/null 2>&1; done<br>
>      > I get only the first request with a status TCP_MISS and all<br>
>     others with<br>
>      > TCP_MEM_HIT:<br>
>      >      Cnt Status            Parent<br>
>      >    499 TCP_MEM_HIT/200/- HIER_NONE/-<br>
>      >        1 TCP_MISS/200/200 FIRSTUP_PARENT/parent_proxy<br>
> <br>
>     Excellent. This confirms that your Squid can successfully cache this<br>
>     object (in memory).<br>
> <br>
> <br>
>      > It is interesting to note that on both squid versions if I run a<br>
>      > separate curl after processing 500 or 200 concurrent threads, I<br>
>     get a<br>
>      > result with the status TCP_MISS/200<br>
> <br>
>     The next step I would recommend is to study the very first cache miss<br>
>     _after_ the 500 or 200 concurrent threads test. Doing so may shed light<br>
>     on why Squid is refusing to serve that (presumably cached) object from<br>
>     the cache. I suspect that the object was marked for deletion earlier,<br>
>     but we should check before spending more time on more complex triage of<br>
>     concurrent cases. If you can share a (link to) compressed ALL,9<br>
>     cache.log from that single transaction against Squid v6, I may be able<br>
>     to help you with that step.<br>
> <br>
> <br>
>     Cheers,<br>
> <br>
>     Alex.<br>
> <br>
> <br>
>      >  > What is your Squid version? Older Squids have more collapsed<br>
>     forwarding<br>
>      >  > bugs than newer ones. I recommend testing with Squid v6 or<br>
>     master/v7, at<br>
>      >  > least to confirm that the problem is still present in the latest<br>
>      >  > official code.<br>
>      > I ran tests on Squid 5.9.<br>
>      > We compiled 6.0.2 (with delay pools disabled) and increased the memory<br>
>      > parameters:<br>
>      >    cache_mem 2048 MB<br>
>      >    maximum_object_size_in_memory 10 MB<br>
>      > The complete configuration is shown below.<br>
>      ><br>
>      > Now on version 6.0.2 we have the following results:<br>
>      > 500 threads -  Hit ratio 3.8%:<br>
>      >        3 TCP_CF_HIT/200/- HIER_NONE/-<br>
>      >        2 TCP_CF_MISS/200/200 FIRSTUP_PARENT/parent_proxy<br>
>      >       16 TCP_HIT/200/- HIER_NONE/-<br>
>      >      467 TCP_MISS/200/200 FIRSTUP_PARENT/parent_proxy<br>
>      >       12 TCP_SWAPFAIL_MISS/200/200 FIRSTUP_PARENT/parent_proxy<br>
>      ><br>
>      > 200 threads - 8%<br>
>      >        6 TCP_CF_HIT/200/- HIER_NONE/-<br>
>      >       10 TCP_HIT/200/- HIER_NONE/-<br>
>      >      176 TCP_MISS/200/200 FIRSTUP_PARENT/parent_proxy<br>
>      >        8 TCP_SWAPFAIL_MISS/200/200 FIRSTUP_PARENT/parent_proxy<br>
>      ><br>
>      > 50 threads - 82%<br>
>      >       30 TCP_CF_HIT/200/- HIER_NONE/-<br>
>      >       11 TCP_HIT/200/- HIER_NONE/-<br>
>      >        1 TCP_MISS/200/200 FIRSTUP_PARENT/parent_proxy<br>
>      >        8 TCP_SWAPFAIL_MISS/200/200 FIRSTUP_PARENT/parent_proxy<br>
>      ><br>
>      > The results are slightly worse than they were on version 5.9.<br>
>      > It is interesting to note that on both squid versions if I run a<br>
>      > separate curl after processing 500 or 200 concurrent threads, I<br>
>     get a<br>
>      > result with the status TCP_MISS/200, although the requested URL is<br>
>      > already in the rock cache (I can see it in the contents of the cache<br>
>      > using the utility I developed, rock_cache_dump.pl:<br>
>      > $VAR1 = {<br>
>      >            '1' => {<br>
>      >                     'VERSION' => 'Wed May 31 09:18:05 2023',<br>
>      >                     'KEY_MD5' => 'e5eb10f0ab7d84ff9d3fd1e5a6d3eb9c',<br>
>      >                     'OBJSIZE' => 446985,<br>
>      >                     'STD_LFS' => {<br>
>      >                                    'lastref' => 'Wed May 31<br>
>     09:18:05 2023',<br>
>      >                                    'flags' => '0x4004',<br>
>      >                                    'expires' => 'Wed May 31<br>
>     15:18:05 2023',<br>
>      >                                    'swap_file_sz' => 0,<br>
>      >                                    'refcount' => 1,<br>
>      >                                    'lastmod' => 'Wed Jun 29<br>
>     16:09:14 2016',<br>
>      >                                    'timestamp' => 'Wed May 31<br>
>     09:18:05 2023'<br>
>      >                                  },<br>
>      >                     'URL' =><br>
>      ><br>
>     '<a href="https://ia800406.us.archive.org/13/items/romeo-y-julieta-texto-completo/Romeo%20y%20Julieta%20-%20William%20Shakespeare.pdf" rel="noreferrer" target="_blank">https://ia800406.us.archive.org/13/items/romeo-y-julieta-texto-completo/Romeo%20y%20Julieta%20-%20William%20Shakespeare.pdf</a>'<br>
>      >                   }<br>
>      >          };<br>
>      ><br>
>      > ).<br>
>      ><br>
>      >  > How much RAM does your server have? You are using default<br>
>     256MB memory<br>
>      >  > cache (cache_mem). If you have spare memory, make your memory<br>
>     cache much<br>
>      >  > larger: A rock cache_dir cannot (yet) share the response<br>
>     _while_ the<br>
>      >  > response is being written to disk, so relying on cache_dir too<br>
>     much will<br>
>      >  > decrease your hit ratio, especially in a collapsed forwarding<br>
>      > environment.<br>
>      > The VM has 32 GB RAM. I configured cache_mem 2048 MB on the 6.0.2<br>
>     version.<br>
>      ><br>
>      >  > Is your Squid built with --enable-delay-pools? If yes,<br>
>     TCP_MISS does not<br>
>      >  > necessarily mean a cache miss (an old Squid bug), even if you<br>
>     do not use<br>
>      >  > any delay pools.<br>
>      > Yes, delay pools were enabled on version 5.9, though we don't use<br>
>      > them. I disabled this feature on version 6.0.2.<br>
>      ><br>
>      ><br>
>      >  > Since you are trying to cache objects larger than 512KB, see<br>
>      >  > maximum_object_size_in_memory.<br>
>      > I configured maximum_object_size_in_memory 10 MB on version 6.0.2<br>
>      > (as video chunks are smaller than 7 MB).<br>
>      ><br>
>      >  > Consider making your test much longer (more sequential<br>
>     requests per<br>
>      >  > client/curl worker), to see whether the cache becomes "stable"<br>
>     after one<br>
>      >  > of the first transactions manages to fully cache the response.<br>
>     This may<br>
>      >  > not help with older Squids, but might help with newer ones.<br>
>     However, you<br>
>      >  > should not test using real origin servers (that you do not<br>
>     control)!<br>
>      > I don't have any of my own web servers for tests, so I chose some<br>
>      > resources on the public internet that have a robust infrastructure.<br>
>      > I will conduct the longer tests next week.<br>
>      ><br>
>      > Kind regards,<br>
>      >        Ankor.<br>
>      ><br>
>      > *squid.conf*<br>
>      > workers 21<br>
>      ><br>
>      > sslcrtd_program<br>
>     /data/squid.user/usr/lib/squid/security_file_certgen -s<br>
>      > /data/squid.user/var/lib/squid/ssl_db -M 20MB<br>
>      > sslcrtd_children 21<br>
>      ><br>
>      > logformat extended-squid %{%Y-%m-%d %H:%M:%S}tl| %6tr %>a<br>
>      > %Ss/%03>Hs/%<Hs %<st %rm %ru %un %Sh/%<A %mt %ea<br>
>      ><br>
>      > logfile_rotate 0<br>
>      > access_log daemon:/var/log/squid.user/access.log<br>
>      > logformat=extended-squid on-error=drop<br>
>      ><br>
>      > cache_peer parent_proxy  parent 3128 0<br>
>      > never_direct allow all<br>
>      ><br>
>      > cachemgr_passwd pass config<br>
>      ><br>
>      > acl PURGE method PURGE<br>
>      > http_access allow PURGE<br>
>      ><br>
>      > http_access allow all<br>
>      ><br>
>      > http_port 3131 ssl-bump generate-host-certificates=on<br>
>      > dynamic_cert_mem_cache_size=20MB<br>
>      > tls-cert=/etc/squid.user/sslbump/bump.crt<br>
>      > tls-key=/etc/squid.user/sslbump/bump.key<br>
>      > sslproxy_cert_error allow all<br>
>      ><br>
>      > acl step1 at_step SslBump1<br>
>      > acl step2 at_step SslBump2<br>
>      > acl step3 at_step SslBump3<br>
>      ><br>
>      > ssl_bump peek step1<br>
>      > ssl_bump bump step2<br>
>      > ssl_bump bump step3<br>
>      ><br>
>      > cache_dir rock /data/squid.user/cache 20000 max-size=12000000<br>
>      > cache_swap_low 85<br>
>      > cache_swap_high 90<br>
>      ><br>
>      > collapsed_forwarding on<br>
>      > cache_mem 2048 MB<br>
>      > maximum_object_size_in_memory 10 MB<br>
>      ><br>
>      > pinger_enable off<br>
>      > max_filedesc 8192<br>
>      > shutdown_lifetime 5 seconds<br>
>      > netdb_filename none<br>
>      > log_icp_queries off<br>
>      ><br>
>      > via off<br>
>      > forwarded_for delete<br>
>      ><br>
>      > client_request_buffer_max_size 100 MB<br>
>      ><br>
>      > coredump_dir /data/squid.user/var/cache/squid<br>
>      ><br>
>      ><br>
>      ><br>
>      ><br>
>      > Mon, May 29, 2023 at 23:17, Alex Rousskov<br>
>      > <<a href="mailto:rousskov@measurement-factory.com" target="_blank">rousskov@measurement-factory.com</a>>:<br>
>      ><br>
>      >     On 5/29/23 10:43, Andrey K wrote:<br>
>      ><br>
>      >      > We need to configure a dedicated proxy server to provide<br>
>     caching of<br>
>      >      > online video broadcasts in order to reduce the load on the<br>
>     uplink<br>
>      >     proxy.<br>
>      >      > Hundreds of users will access the same video-chunks<br>
>     simultaneously.<br>
>      >      ><br>
>      >      > I developed a simple configuration for the test purposes<br>
>     (it is<br>
>      >     shown<br>
>      >      > below).<br>
>      >      > The *collapsed_forwarding* option is on.<br>
>      ><br>
>      >     Do you get close to 100% hit ratio if clients access these URLs<br>
>      >     sequentially rather than concurrently? If not, then focus on that<br>
>      >     problem before you open the collapsed forwarding Pandora box.<br>
>      ><br>
>      >     What is your Squid version? Older Squids have more collapsed<br>
>     forwarding<br>
>      >     bugs than newer ones. I recommend testing with Squid v6 or<br>
>      >     master/v7, at<br>
>      >     least to confirm that the problem is still present in the latest<br>
>      >     official code.<br>
>      ><br>
>      >     How much RAM does your server have? You are using default<br>
>     256MB memory<br>
>      >     cache (cache_mem). If you have spare memory, make your memory<br>
>     cache<br>
>      >     much<br>
>      >     larger: A rock cache_dir cannot (yet) share the response<br>
>     _while_ the<br>
>      >     response is being written to disk, so relying on cache_dir<br>
>     too much<br>
>      >     will<br>
>      >     decrease your hit ratio, especially in a collapsed forwarding<br>
>      >     environment.<br>
>      ><br>
>      >     Is your Squid built with --enable-delay-pools? If yes,<br>
>     TCP_MISS does<br>
>      >     not<br>
>      >     necessarily mean a cache miss (an old Squid bug), even if you<br>
>     do not<br>
>      >     use<br>
>      >     any delay pools.<br>
>      ><br>
>      >     Since you are trying to cache objects larger than 512KB, see<br>
>      >     maximum_object_size_in_memory.<br>
>      ><br>
>      >     Consider making your test much longer (more sequential<br>
>     requests per<br>
>      >     client/curl worker), to see whether the cache becomes<br>
>     "stable" after<br>
>      >     one<br>
>      >     of the first transactions manages to fully cache the<br>
>     response. This may<br>
>      >     not help with older Squids, but might help with newer ones.<br>
>     However,<br>
>      >     you<br>
>      >     should not test using real origin servers (that you do not<br>
>     control)!<br>
>      ><br>
>      ><br>
>      >      > Could you clarify if this behavior of my squid is<br>
>      >      > a bug/misconfiguration, or if I'm running into server<br>
>     performance<br>
>      >      > limitations (squid is running on a VM with 22 cores)?<br>
>      ><br>
>      >     Most likely, reduction of hit ratio with increase of<br>
>     concurrency is<br>
>      >     _not_ a performance limitation.<br>
>      ><br>
>      ><br>
>      >     HTH,<br>
>      ><br>
>      >     Alex.<br>
>      ><br>
>      ><br>
>      >      > I selected a couple of cacheable resources in the internet for<br>
>      >     testing:<br>
>      >      >   - small size (~400 KB):<br>
>      >      ><br>
>      ><br>
>     <a href="https://ia800406.us.archive.org/13/items/romeo-y-julieta-texto-completo/Romeo%20y%20Julieta%20-%20William%20Shakespeare.pdf" rel="noreferrer" target="_blank">https://ia800406.us.archive.org/13/items/romeo-y-julieta-texto-completo/Romeo%20y%20Julieta%20-%20William%20Shakespeare.pdf</a><br>
>      >      >   - large (~8 MB):<br>
>      >      ><br>
>      ><br>
>     <a href="https://ia600601.us.archive.org/10/items/Linux-Journal-2015-01/Linux-Journal-2015-01.pdf" rel="noreferrer" target="_blank">https://ia600601.us.archive.org/10/items/Linux-Journal-2015-01/Linux-Journal-2015-01.pdf</a><br>
>      >      > To test simultaneous connections I am forking curl using a<br>
>     simple<br>
>      >     script<br>
>      >      > (it is also shown below).<br>
>      >      ><br>
>      >      > When I run a test (500 curl threads to<br>
>      >      ><br>
>      ><br>
>     <a href="https://ia800406.us.archive.org/13/items/romeo-y-julieta-texto-completo/Romeo%20y%20Julieta%20-%20William%20Shakespeare.pdf" rel="noreferrer" target="_blank">https://ia800406.us.archive.org/13/items/romeo-y-julieta-texto-completo/Romeo%20y%20Julieta%20-%20William%20Shakespeare.pdf</a>) I see lots of TCP_MISS/200 with FIRSTUP_PARENT/parent_proxy records in the logs.<br>
>      >      ><br>
>      >      > A simple analysis shows a low percentage of cache hits:<br>
>      >      > cat /var/log/squid.user/access.log| grep '2023-05-29 14' |<br>
>     grep<br>
>      >     pdf  |<br>
>      >      > awk '{print $5" " $10}' | sort | uniq -c<br>
>      >      >       24 TCP_CF_MISS/200/- HIER_NONE/-<br>
>      >      >      457 TCP_MISS/200/200 FIRSTUP_PARENT/parent_proxy<br>
>      >      >       10 TCP_MISS/200/- HIER_NONE/-<br>
>      >      >        9 TCP_SWAPFAIL_MISS/200/200 FIRSTUP_PARENT/parent_proxy<br>
>      >      ><br>
>      >      > So the Hit ratio is about (500-457-9)*100/500=6.8%<br>
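>      >      > The tally-and-ratio step above can also be done in one awk pass; a minimal sketch, assuming the extended-squid logformat from the config below (field 5 holds the Squid status and field 10 the hierarchy code, as in the awk pipelines above; the inline sample is a synthetic stand-in for the real access.log):<br>

```shell
#!/bin/sh
# Tally statuses and compute the hit ratio in one pass over an access.log.
# Sample lines are synthetic stand-ins for the extended-squid format:
# field 5 is %Ss/%03>Hs/%<Hs, field 10 is %Sh/%<A.
LOG=$(mktemp)
cat > "$LOG" <<'EOF'
2023-05-29 14:00:01|    10 10.0.0.1 TCP_MISS/200/200 446985 GET https://example.test/a.pdf - FIRSTUP_PARENT/parent_proxy - -
2023-05-29 14:00:01|     2 10.0.0.2 TCP_MEM_HIT/200/- 446985 GET https://example.test/a.pdf - HIER_NONE/- - -
2023-05-29 14:00:02|     2 10.0.0.3 TCP_CF_HIT/200/- 446985 GET https://example.test/a.pdf - HIER_NONE/- - -
2023-05-29 14:00:02|    11 10.0.0.4 TCP_SWAPFAIL_MISS/200/200 446985 GET https://example.test/a.pdf - FIRSTUP_PARENT/parent_proxy - -
EOF

awk '{print $5, $10}' "$LOG" | sort | uniq -c   # per-status counts
awk '$5 ~ /HIT/ { hits++ }
     END { printf "hit ratio: %.1f%%\n", 100 * hits / NR }' "$LOG"
# For this sample: "hit ratio: 50.0%"
rm -f "$LOG"
```

>      >      > Counting every *_HIT status as a hit matches the arithmetic used in this thread; adjust the pattern if TCP_CF_MISS served from a collapsed entry should count too.<br>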
>      >      ><br>
>      >      > We see almost the same situation when running 200 threads:<br>
>      >      > cat /var/log/squid.user/access.log| grep '2023-05-29 15:45' |<br>
>      >     grep pdf<br>
>      >      >   | awk '{print $5" " $10}' | sort | uniq -c<br>
>      >      >        4 TCP_CF_MISS/200/- HIER_NONE/-<br>
>      >      >      140 TCP_MISS/200/200 FIRSTUP_PARENT/parent_proxy<br>
>      >      >       40 TCP_MISS/200/- HIER_NONE/-<br>
>      >      >       16 TCP_SWAPFAIL_MISS/200/200 FIRSTUP_PARENT/parent_proxy<br>
>      >      ><br>
>      >      > This time the Hit ratio is about (200-140-16)*100/200=22%<br>
>      >      ><br>
>      >      > With 50 threads the Hit ratio is 90%:<br>
>      >      > cat /var/log/squid.user/access.log| grep '2023-05-29 15:50' |<br>
>      >     grep pdf<br>
>      >      >   | awk '{print $5" " $10}' | sort | uniq -c<br>
>      >      >       27 TCP_CF_MISS/200/- HIER_NONE/-<br>
>      >      >        1 TCP_MISS/200/200 FIRSTUP_PARENT/parent_proxy<br>
>      >      >       18 TCP_MISS/200/- HIER_NONE/-<br>
>      >      >        4 TCP_SWAPFAIL_MISS/200/200 FIRSTUP_PARENT/parent_proxy<br>
>      >      ><br>
>      >      > I thought that it should always be near 99% - only the first<br>
>      >     request to<br>
>      >      > an URL should be forwarded to the parent proxy and all<br>
>     subsequent<br>
>      >      > requests should be served from the cache.<br>
>      >      ><br>
>      >      > The situation is even worse with downloading a large file:<br>
>      >      > 500 threads (0.4%):<br>
>      >      > cat /var/log/squid.user/access.log| grep '2023-05-29 17:2'<br>
>     | grep<br>
>      >     pdf  |<br>
>      >      > awk '{print $5" " $10}' | sort | uniq -c<br>
>      >      >       10 TCP_CF_MISS/200/200 FIRSTUP_PARENT/parent_proxy<br>
>      >      >        2 TCP_CF_MISS/200/- HIER_NONE/-<br>
>      >      >      488 TCP_MISS/200/200 FIRSTUP_PARENT/parent_proxy<br>
>      >      ><br>
>      >      > 200 threads (3%):<br>
>      >      > cat /var/log/squid.user/access.log| grep '2023-05-29 17:3'<br>
>     | grep<br>
>      >     pdf  |<br>
>      >      > awk '{print $5" " $10}' | sort | uniq -c<br>
>      >      >        9 TCP_CF_MISS/200/200 FIRSTUP_PARENT/parent_proxy<br>
>      >      >        6 TCP_CF_MISS/200/- HIER_NONE/-<br>
>      >      >      180 TCP_MISS/200/200 FIRSTUP_PARENT/parent_proxy<br>
>      >      >        5 TCP_SWAPFAIL_MISS/200/200 FIRSTUP_PARENT/parent_proxy<br>
>      >      ><br>
>      >      > 50 threads (98%):<br>
>      >      > cat /var/log/squid.user/access.log| grep '2023-05-29 17:36' |<br>
>      >     grep pdf<br>
>      >      >   | awk '{print $5" " $10}' | sort | uniq -c<br>
>      >      >       25 TCP_CF_HIT/200/- HIER_NONE/-<br>
>      >      >       12 TCP_CF_MISS/200/- HIER_NONE/-<br>
>      >      >       12 TCP_HIT/200/- HIER_NONE/-<br>
>      >      >        1 TCP_MISS/200/200 FIRSTUP_PARENT/parent_proxy<br>
>      >      ><br>
>      >      > Could you clarify if this behavior of my squid is a<br>
>      >      > bug/misconfiguration, or if I'm running into server<br>
>     performance<br>
>      >      > limitations (squid is running on a VM with 22 cores)?<br>
>      >      ><br>
>      >      > Kind regards,<br>
>      >      >       Ankor<br>
>      >      ><br>
>      >      ><br>
>      >      ><br>
>      >      > *squid.conf:*<br>
>      >      > workers 21<br>
>      >      ><br>
>      >      > sslcrtd_program<br>
>      >     /data/squid.user/usr/lib/squid/security_file_certgen -s<br>
>      >      > /data/squid.user/var/lib/squid/ssl_db -M 20MB<br>
>      >      > sslcrtd_children 21<br>
>      >      ><br>
>      >      > logformat extended-squid %{%Y-%m-%d %H:%M:%S}tl| %6tr %>a<br>
>      >      > %Ss/%03>Hs/%<Hs %<st %rm %ru %un %Sh/%<A %mt %ea<br>
>      >      ><br>
>      >      > logfile_rotate 0<br>
>      >      > access_log daemon:/var/log/squid.user/access.log<br>
>      >      > logformat=extended-squid on-error=drop<br>
>      >      ><br>
>      >      > cache_peer parent_proxy  parent 3128 0<br>
>      >      > never_direct allow all<br>
>      >      ><br>
>      >      > cachemgr_passwd pass config<br>
>      >      ><br>
>      >      > acl PURGE method PURGE<br>
>      >      > http_access allow PURGE<br>
>      >      ><br>
>      >      > http_access allow all<br>
>      >      ><br>
>      >      > http_port 3131 ssl-bump generate-host-certificates=on<br>
>      >      > dynamic_cert_mem_cache_size=20MB<br>
>      >      > tls-cert=/etc/squid.user/sslbump/bump.crt<br>
>      >      > tls-key=/etc/squid.user/sslbump/bump.key<br>
>      >      > sslproxy_cert_error allow all<br>
>      >      ><br>
>      >      > acl step1 at_step SslBump1<br>
>      >      > acl step2 at_step SslBump2<br>
>      >      > acl step3 at_step SslBump3<br>
>      >      ><br>
>      >      > ssl_bump peek step1<br>
>      >      > ssl_bump bump step2<br>
>      >      > ssl_bump bump step3<br>
>      >      ><br>
>      >      > cache_dir rock /data/squid.user/cache 20000 max-size=12000000<br>
>      >      > cache_swap_low 85<br>
>      >      > cache_swap_high 90<br>
>      >      ><br>
>      >      > *collapsed_forwarding on*<br>
>      >      ><br>
>      >      > pinger_enable off<br>
>      >      > max_filedesc 8192<br>
>      >      > shutdown_lifetime 5 seconds<br>
>      >      > netdb_filename none<br>
>      >      > log_icp_queries off<br>
>      >      > client_request_buffer_max_size 100 MB<br>
>      >      ><br>
>      >      > via off<br>
>      >      > forwarded_for delete<br>
>      >      ><br>
>      >      > coredump_dir /data/squid.user/var/cache/squid<br>
>      >      ><br>
>      >      > *curl_forker.sh:*<br>
>      >      > #!/bin/bash<br>
>      >      > N=100<br>
>      >      ><br>
>      >      > URL=<a href="https://ia600601.us.archive.org/10/items/Linux-Journal-2015-01/Linux-Journal-2015-01.pdf" rel="noreferrer" target="_blank">https://ia600601.us.archive.org/10/items/Linux-Journal-2015-01/Linux-Journal-2015-01.pdf</a><br>
>      >      ><br>
>      >      > if [[  -n $1 &&  $1 =~ help$  ]];<br>
>      >      > then<br>
>      >      >     echo "Usage: $0 [<cnt>] [<url>]"<br>
>      >      >     echo<br>
>      >      >     echo "Example: $0 10<br>
>      >      ><br>
>      ><br>
>     <a href="https://ia600601.us.archive.org/10/items/Linux-Journal-2015-01/Linux-Journal-2015-01.pdf" rel="noreferrer" target="_blank">https://ia600601.us.archive.org/10/items/Linux-Journal-2015-01/Linux-Journal-2015-01.pdf</a> <<a href="https://ia600601.us.archive.org/10/items/Linux-Journal-2015-01/Linux-Journal-2015-01.pdf" rel="noreferrer" target="_blank">https://ia600601.us.archive.org/10/items/Linux-Journal-2015-01/Linux-Journal-2015-01.pdf</a>> <<a href="https://ia600601.us.archive.org/10/items/Linux-Journal-2015-01/Linux-Journal-2015-01.pdf" rel="noreferrer" target="_blank">https://ia600601.us.archive.org/10/items/Linux-Journal-2015-01/Linux-Journal-2015-01.pdf</a> <<a href="https://ia600601.us.archive.org/10/items/Linux-Journal-2015-01/Linux-Journal-2015-01.pdf" rel="noreferrer" target="_blank">https://ia600601.us.archive.org/10/items/Linux-Journal-2015-01/Linux-Journal-2015-01.pdf</a>>> <<a href="https://ia600601.us.archive.org/10/items/Linux-Journal-2015-01/Linux-Journal-2015-01.pdf" rel="noreferrer" target="_blank">https://ia600601.us.archive.org/10/items/Linux-Journal-2015-01/Linux-Journal-2015-01.pdf</a> <<a href="https://ia600601.us.archive.org/10/items/Linux-Journal-2015-01/Linux-Journal-2015-01.pdf" rel="noreferrer" target="_blank">https://ia600601.us.archive.org/10/items/Linux-Journal-2015-01/Linux-Journal-2015-01.pdf</a>> <<a href="https://ia600601.us.archive.org/10/items/Linux-Journal-2015-01/Linux-Journal-2015-01.pdf" rel="noreferrer" target="_blank">https://ia600601.us.archive.org/10/items/Linux-Journal-2015-01/Linux-Journal-2015-01.pdf</a> <<a href="https://ia600601.us.archive.org/10/items/Linux-Journal-2015-01/Linux-Journal-2015-01.pdf" rel="noreferrer" target="_blank">https://ia600601.us.archive.org/10/items/Linux-Journal-2015-01/Linux-Journal-2015-01.pdf</a>>>>";<br>
>      >      >     echo<br>
>      >      >     exit;<br>
>      >      > fi<br>
>      >      ><br>
>      >      > while [[ $# -gt 0 ]]<br>
>      >      > do<br>
>      >      >    if [[ $1 =~ ^[0-9]+$ ]]<br>
>      >      >    then<br>
>      >      >       N=$1<br>
>      >      >    else<br>
>      >      >       URL=$1<br>
>      >      >    fi<br>
>      >      >    shift<br>
>      >      > done<br>
>      >      ><br>
>      >      > echo $URL<br>
>      >      > echo $N threads<br>
>      >      ><br>
>      >      > for i in `seq $N`<br>
>      >      > do<br>
>      >      >    nohup curl --tlsv1.2 -k --proxy 0001vsg01:3131 -v $URL >/dev/null 2>&1 &<br>
>      >      > done<br>
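[Editor's note] The per-status counts quoted earlier in this thread can be reproduced from the resulting access.log with a short pipeline. A minimal sketch: the sample records, paths, and addresses below are placeholders, and the field positions (5 and 10) are assumptions derived from the quoted extended-squid logformat, where %Ss/%03>Hs/%<Hs and %Sh/%<A land in those columns:

```shell
# Sample records mimicking the "extended-squid" logformat quoted above;
# the addresses and URLs here are placeholders, not real test data.
cat > /tmp/sample_access.log <<'EOF'
2023-06-02 09:52:48|    12 10.0.0.1 TCP_HIT/200/- 1024 GET https://example.test/a - HIER_NONE/- app/pdf -
2023-06-02 09:52:49|   340 10.0.0.2 TCP_MISS/200/200 2048 GET https://example.test/a - FIRSTUP_PARENT/parent_proxy app/pdf -
2023-06-02 09:52:50|   120 10.0.0.3 TCP_MISS/200/200 2048 GET https://example.test/a - FIRSTUP_PARENT/parent_proxy app/pdf -
EOF

# Field 5 holds %Ss/%03>Hs/%<Hs and field 10 holds %Sh/%<A (assumed
# positions for this logformat); count each combination, most frequent first.
awk '{ print $5, $10 }' /tmp/sample_access.log | sort | uniq -c | sort -rn
```

Pointing the same pipeline at the real /var/log/squid.user/access.log should yield a table like the Count/STATUS summary above.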
>      >      ><br>
>      >      > _______________________________________________<br>
>      >      > squid-users mailing list<br>
>      >      > <a href="mailto:squid-users@lists.squid-cache.org" target="_blank">squid-users@lists.squid-cache.org</a><br>
>      >      > <a href="http://lists.squid-cache.org/listinfo/squid-users" rel="noreferrer" target="_blank">http://lists.squid-cache.org/listinfo/squid-users</a><br>
>      ><br>
> <br>
<br>
</blockquote></div></div>