<div dir="ltr"><div dir="ltr">Hello, Alex,<div><br></div><div>I have shortened the correspondence because it does not meet the size requirements for the mailing list.</div><div><br></div><div>Thank you so much for your time, the analysis and recommendations.</div><div><br></div><div>I disabled the cache_dir and now squid works as expected - there is only one request to the original content server:</div><div>- on the small file:</div><div><font face="monospace"> 1 NONE_NONE/503/- HIER_NONE/-<br> 4 TCP_CF_HIT/200/- HIER_NONE/-<br> 128 TCP_HIT/200/- HIER_NONE/-<br> 366 TCP_MEM_HIT/200/- HIER_NONE/-<br> 1 TCP_MISS/200/200 FIRSTUP_PARENT/parent_proxy</font><br></div><div><br></div><div>- on the large file:</div><div><font face="monospace"> 17 TCP_CF_HIT/200/- HIER_NONE/-<br> 482 TCP_HIT/200/- HIER_NONE/-<br> 1 TCP_MISS/200/200 FIRSTUP_PARENT/parent_proxy</font><br></div><div><br></div><div>I think this configuration is perfect for caching online video broadcasts. Chanks of video are required by clients simultaneously only for a short period of time, so there is no need to save them to disk..</div><div>As my VM has 32 GB of RAM, I can configure a sufficient amount of cache_mem, say 20000 MB to provide caching of video broadcasts.</div><div><br></div><div><br></div><div>Kind regards,</div><div> Ankor.</div><div><br></div></div><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div><br></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">пн, 5 июн. 2023 г. в 17:31, Alex Rousskov <<a href="mailto:rousskov@measurement-factory.com" target="_blank">rousskov@measurement-factory.com</a>>:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">On 6/2/23 03:29, Andrey K wrote:<br>

Kind regards,
 Ankor.


On Mon, Jun 5, 2023 at 17:31, Alex Rousskov
<rousskov@measurement-factory.com> wrote:

> On 6/2/23 03:29, Andrey K wrote:
>
> > > Can you repeat this test and share a pointer to the corresponding
> > > compressed cache.log, containing those 500 (or fewer, as long as
> > > the problem is reproduced!) concurrent transactions. One or many of
> > > those concurrent transactions resulted in the unwanted entry
> > > deletion. The log may show what happened in that case.
>
> > I cleared the rock cache, set the debug level, restarted Squid,
> > cleared the cache.log, ran the 500-thread test, waited for it to
> > finish, and launched curl to make sure it returned TCP_MISS.
> > Then I stopped Squid to keep the cache.log file small.
>
> Thank you for sharing that log! I only had time to study a few misses.
> They all stemmed from the same sequence of events:
>
> 1. A collapsed request finds the corresponding entry in the cache.
> 2. Squid decides that this request should open the disk file.
> 3. The rock disk entry is still being written (i.e. "swapped out"),
>    so the attempt to swap it in fails (TCP_SWAPFAIL_MISS).
> 4. The request goes to the origin server.
> 5. The fresh response deletes the existing cached entry.
> 6. When a subsequent request finds the cached entry marked for
>    deletion, it declares a cache miss (TCP_MISS) and goes to step 4.
>
> Disclaimer: The above sequence of events causes misses, but it may not
> be the only or even the primary cause. I do not have enough free time
> to rule out or confirm other causes (and order them by severity).
>
> Squid can (and should) handle concurrent swapouts/swapins better, and
> we may be able to estimate that improvement potential for your
> workload without significant development. For the next step, however,
> I suggest disabling cache_dir and testing whether you get
> substantially better results with the memory cache alone. A shared
> memory cache also has periods during which a being-written entry
> cannot be read, but, compared to the disk cache, those periods are
> much shorter IIRC. I would like to confirm that this simplified mode
> of operation works well for your workload before I suggest code
> changes that would rely, in part, on this mode.
>
> Thank you,
>
> Alex.