<div dir="ltr"><div dir="ltr">Hello, Alex,<div><br></div><div>Thanks for the help.</div><div><br></div><div>> The next step I would recommend is to study the very first cache miss<br>> _after_ the 500 or 200 concurrent threads test. Doing so may shed light<br>> on why Squid is refusing to serve that (presumably cached) object from<br>> the cache. I suspect that the object was marked for deletion earlier,<br>> but we should check before spending more time on more complex triage of<br>> concurrent cases. If you can share a (link to) compressed ALL,9<br>> cache.log from that single transaction against Squid v6, I may be able<br>> to help you with that step.<br></div><div>I cleared the rock cache, changed squid.conf (added debug_options ALL,9), restarted Squid, ran a test with 500 concurrent threads, and dumped the rock cache to make sure that the URL is there:</div><div><font face="monospace">$VAR1 = {<br> '1' => {<br> 'URL' => '<a href="https://ia800406.us.archive.org/13/items/romeo-y-julieta-texto-completo/Romeo%20y%20Julieta%20-%20William%20Shakespeare.pdf">https://ia800406.us.archive.org/13/items/romeo-y-julieta-texto-completo/Romeo%20y%20Julieta%20-%20William%20Shakespeare.pdf</a>',<br> 'KEY_MD5' => 'e5eb10f0ab7d84ff9d3fd1e5a6d3eb9c',<br> 'OBJSIZE' => 446985,<br> 'STD_LFS' => {<br> 'lastref' => 'Thu Jun 1 11:29:01 2023',<br> 'timestamp' => 'Thu Jun 1 11:29:01 2023',<br> 'expires' => 'Thu Jun 1 17:29:01 2023',<br> 'refcount' => 3,<br> 'flags' => '0x4004',<br> 'swap_file_sz' => 0,<br> 'lastmod' => 'Wed Jun 29 16:09:14 2016'<br> },<br> 'VERSION' => 'Thu Jun 1 11:29:01 2023'<br> }<br> };</font><br></div><div>Then I cleared cache.log so that it would capture only a single transaction, and finally ran one curl request.</div><div>The log file is uploaded to <a href="https://drive.google.com/file/d/1uwbBVjWeDEHI95B6ArPZr5_pqkCr_h9P/view?usp=sharing">https://drive.google.com/file/d/1uwbBVjWeDEHI95B6ArPZr5_pqkCr_h9P/view?usp=sharing</a></div><div><br></div><div><br></div><div>There 
are records in the log:</div><div><font face="monospace">2023/06/01 11:30:34.556 kid7| 20,3| Controller.cc(429) peek: E5EB10F0AB7D84FF9D3FD1E5A6D3EB9C<br>2023/06/01 11:30:34.556 kid7| 54,5| StoreMap.cc(443) openForReading: opening entry with key E5EB10F0AB7D84FF9D3FD1E5A6D3EB9C for reading transients_map<br>2023/06/01 11:30:34.556 kid7| 54,5| StoreMap.cc(455) openForReadingAt: opening entry 11138 for reading transients_map<br>2023/06/01 11:30:34.556 kid7| 54,7| StoreMap.cc(467) openForReadingAt: cannot open empty entry 11138 for reading transients_map<br>2023/06/01 11:30:34.556 kid7| 54,5| StoreMap.cc(443) openForReading: opening entry with key E5EB10F0AB7D84FF9D3FD1E5A6D3EB9C for reading cache_mem_map<br>2023/06/01 11:30:34.556 kid7| 54,5| StoreMap.cc(455) openForReadingAt: opening entry 11138 for reading cache_mem_map<br>2023/06/01 11:30:34.556 kid7| 54,7| StoreMap.cc(474) openForReadingAt: cannot open marked entry 11138 for reading cache_mem_map<br>2023/06/01 11:30:34.556 kid7| 54,5| StoreMap.cc(443) openForReading: opening entry with key E5EB10F0AB7D84FF9D3FD1E5A6D3EB9C for reading /data/squid.user/cache_map<br>2023/06/01 11:30:34.556 kid7| 54,5| StoreMap.cc(455) openForReadingAt: opening entry 719406 for reading /data/squid.user/cache_map<br>2023/06/01 11:30:34.556 kid7| 54,7| StoreMap.cc(474) openForReadingAt: cannot open marked entry 719406 for reading /data/squid.user/cache_map<br>2023/06/01 11:30:34.556 kid7| 20,6| Disks.cc(254) get: none of 1 cache_dirs have E5EB10F0AB7D84FF9D3FD1E5A6D3EB9C<br>2023/06/01 11:30:34.556 kid7| 20,4| Controller.cc(463) peek: cannot locate E5EB10F0AB7D84FF9D3FD1E5A6D3EB9C<br>2023/06/01 11:30:34.556 kid7| 85,7| client_side_reply.cc(1626) detailStoreLookup: mismatch<br>2023/06/01 11:30:34.556 kid7| 85,3| client_side_reply.cc(1561) identifyFoundObject: StoreEntry is NULL - MISS<br>2023/06/01 11:30:34.556 kid7| 83,7| LogTags.cc(57) update: TAG_NONE to TCP_MISS</font><br></div><div><br></div><div>The file that squid tried to read 
<span style="font-family:monospace">/data/squid.user/cache_map does not exist. My cache_dir is </span><span style="font-family:monospace">/data/squid.user/cache/rock, so I suppose that </span><span style="font-family:monospace">/data/squid.user/cache_map </span>is a virtual (in-memory) entity rather than a file on disk.</div><div><br></div><div><br></div><div>Kind regards,</div><div> Ankor</div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">Wed, May 31, 2023 at 16:43, Alex Rousskov <<a href="mailto:rousskov@measurement-factory.com">rousskov@measurement-factory.com</a>>:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">On 5/31/23 02:56, Andrey K wrote:<br>
<br>
> > Do you get close to 100% hit ratio if clients access these URLs<br>
> > sequentially rather than concurrently? If not, then focus on that<br>
> > problem before you open the collapsed forwarding Pandora box.<br>
> When I run curl sequentially like this:<br>
> for i in `seq 500`; do curl --tlsv1.2 -k --proxy 0001vsg01:3131 -v <br>
> $URL >/dev/null 2>&1; done<br>
> I get only the first request with a status TCP_MISS and all others with <br>
> TCP_MEM_HIT:<br>
> Cnt Status Parent<br>
> 499 TCP_MEM_HIT/200/- HIER_NONE/-<br>
> 1 TCP_MISS/200/200 FIRSTUP_PARENT/parent_proxy<br>
<br>
Excellent. This confirms that your Squid can successfully cache this <br>
object (in memory).<br>
<br>
<br>
> It is interesting to note that on both squid versions if I run a <br>
> separate curl after processing 500 or 200 concurrent threads, I get a <br>
> result with the status TCP_MISS/200<br>
<br>
The next step I would recommend is to study the very first cache miss <br>
_after_ the 500 or 200 concurrent threads test. Doing so may shed light <br>
on why Squid is refusing to serve that (presumably cached) object from <br>
the cache. I suspect that the object was marked for deletion earlier, <br>
but we should check before spending more time on more complex triage of <br>
concurrent cases. If you can share a (link to) compressed ALL,9 <br>
cache.log from that single transaction against Squid v6, I may be able <br>
to help you with that step.<br>
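If the full ALL,9 log turns out to be large, one way to isolate the store-lookup trail for the object of interest is to grep for its MD5 store key. A minimal sketch; the sample log lines below are fabricated stand-ins, not real Squid output:

```shell
# Sketch: pull one object's store-lookup records out of an ALL,9 cache.log
# by grepping for its MD5 store key. The lines below are fabricated
# stand-ins written to /tmp for illustration.
cat > /tmp/cache_sample.log <<'EOF'
2023/06/01 11:30:34.556 kid7| 20,3| Controller.cc(429) peek: E5EB10F0AB7D84FF9D3FD1E5A6D3EB9C
2023/06/01 11:30:34.556 kid7| 33,5| some unrelated transaction chatter
2023/06/01 11:30:34.556 kid7| 20,4| Controller.cc(463) peek: cannot locate E5EB10F0AB7D84FF9D3FD1E5A6D3EB9C
EOF
grep -c 'E5EB10F0AB7D84FF9D3FD1E5A6D3EB9C' /tmp/cache_sample.log
# prints 2 (the two lines mentioning the key)
```

The same grep, without -c, gives the chronological lookup trail for just that object.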
<br>
<br>
Cheers,<br>
<br>
Alex.<br>
<br>
<br>
> > What is your Squid version? Older Squids have more collapsed forwarding<br>
> > bugs than newer ones. I recommend testing with Squid v6 or master/v7, at<br>
> > least to confirm that the problem is still present in the latest<br>
> > official code.<br>
> I ran tests on Squid 5.9.<br>
> We compiled 6.0.2 (with delay pools disabled) and increased the memory <br>
> parameters:<br>
> cache_mem 2048 MB<br>
> maximum_object_size_in_memory 10 MB<br>
> The complete configuration is shown below.<br>
> <br>
> Now, on version 6.0.2, we get the following results:<br>
> 500 threads - Hit ratio 3.8%:<br>
> 3 TCP_CF_HIT/200/- HIER_NONE/-<br>
> 2 TCP_CF_MISS/200/200 FIRSTUP_PARENT/parent_proxy<br>
> 16 TCP_HIT/200/- HIER_NONE/-<br>
> 467 TCP_MISS/200/200 FIRSTUP_PARENT/parent_proxy<br>
> 12 TCP_SWAPFAIL_MISS/200/200 FIRSTUP_PARENT/parent_proxy<br>
> <br>
> 200 threads - Hit ratio 8%:<br>
> 6 TCP_CF_HIT/200/- HIER_NONE/-<br>
> 10 TCP_HIT/200/- HIER_NONE/-<br>
> 176 TCP_MISS/200/200 FIRSTUP_PARENT/parent_proxy<br>
> 8 TCP_SWAPFAIL_MISS/200/200 FIRSTUP_PARENT/parent_proxy<br>
> <br>
> 50 threads - Hit ratio 82%:<br>
> 30 TCP_CF_HIT/200/- HIER_NONE/-<br>
> 11 TCP_HIT/200/- HIER_NONE/-<br>
> 1 TCP_MISS/200/200 FIRSTUP_PARENT/parent_proxy<br>
> 8 TCP_SWAPFAIL_MISS/200/200 FIRSTUP_PARENT/parent_proxy<br>
> <br>
> The results are slightly worse than they were on version 5.9.<br>
> It is interesting to note that on both Squid versions, if I run a <br>
> separate curl after processing the 500 or 200 concurrent threads, I get a <br>
> result with the status TCP_MISS/200, although the requested URL is <br>
> already in the rock cache (I can see it in the cache contents using <br>
> rock_cache_dump.pl, a utility I developed:<br>
> $VAR1 = {<br>
> '1' => {<br>
> 'VERSION' => 'Wed May 31 09:18:05 2023',<br>
> 'KEY_MD5' => 'e5eb10f0ab7d84ff9d3fd1e5a6d3eb9c',<br>
> 'OBJSIZE' => 446985,<br>
> 'STD_LFS' => {<br>
> 'lastref' => 'Wed May 31 09:18:05 2023',<br>
> 'flags' => '0x4004',<br>
> 'expires' => 'Wed May 31 15:18:05 2023',<br>
> 'swap_file_sz' => 0,<br>
> 'refcount' => 1,<br>
> 'lastmod' => 'Wed Jun 29 16:09:14 2016',<br>
> 'timestamp' => 'Wed May 31 09:18:05 2023'<br>
> },<br>
> 'URL' => <br>
> '<a href="https://ia800406.us.archive.org/13/items/romeo-y-julieta-texto-completo/Romeo%20y%20Julieta%20-%20William%20Shakespeare.pdf" rel="noreferrer" target="_blank">https://ia800406.us.archive.org/13/items/romeo-y-julieta-texto-completo/Romeo%20y%20Julieta%20-%20William%20Shakespeare.pdf</a>'<br>
> }<br>
> };<br>
> <br>
> ).<br>
> <br>
> > How much RAM does your server have? You are using default 256MB memory<br>
> > cache (cache_mem). If you have spare memory, make your memory cache much<br>
> > larger: A rock cache_dir cannot (yet) share the response _while_ the<br>
> > response is being written to disk, so relying on cache_dir too much will<br>
> > decrease your hit ratio, especially in a collapsed forwarding <br>
> environment.<br>
> The VM has 32 GB RAM. I configured cache_mem 2048 MB on version 6.0.2.<br>
> <br>
> > Is your Squid built with --enable-delay-pools? If yes, TCP_MISS does not<br>
> > necessarily mean a cache miss (an old Squid bug), even if you do not use<br>
> > any delay pools.<br>
> Yes, delay pools were enabled on version 5.9, though we don't use <br>
> them. I disabled this feature in the 6.0.2 build.<br>
> <br>
> <br>
> > Since you are trying to cache objects larger than 512KB, see<br>
> > maximum_object_size_in_memory.<br>
> I configured maximum_object_size_in_memory 10 MB on version 6.0.2 <br>
> (as video chunks are smaller than 7 MB).<br>
> <br>
> > Consider making your test much longer (more sequential requests per<br>
> > client/curl worker), to see whether the cache becomes "stable" after one<br>
> > of the first transactions manages to fully cache the response. This may<br>
> > not help with older Squids, but might help with newer ones. However, you<br>
> > should not test using real origin servers (that you do not control)!<br>
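The per-worker sequential pattern above can be sketched as follows. The worker and request counts are placeholders, and FETCH_CMD defaults to a no-op so the sketch runs standalone; in a real test it would be the curl invocation from the forker script:

```shell
# Sketch: N concurrent workers, each issuing M sequential requests, so
# requests after the first can be served from whatever got cached.
# FETCH_CMD defaults to a no-op (:) to keep the sketch runnable; for a
# real test, set e.g.
#   FETCH_CMD="curl --tlsv1.2 -k --proxy 0001vsg01:3131 -so /dev/null"
N=${N:-4}
M=${M:-25}
URL=${URL:-https://example.test/object.pdf}
FETCH_CMD=${FETCH_CMD:-:}

for i in $(seq "$N"); do
  (
    for j in $(seq "$M"); do
      $FETCH_CMD "$URL"
    done
  ) &
done
wait                              # let every worker finish
echo "issued $((N * M)) requests"
```

With N=4 and M=25 this issues 100 requests in total, 25 of them strictly sequential per worker, which is the "longer test" shape suggested above.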
> I don't have any web servers of my own for testing, so I chose some <br>
> resources on the public internet that have a robust infrastructure.<br>
> I will conduct the longer tests next week.<br>
> <br>
> Kind regards,<br>
> Ankor.<br>
> <br>
> *squid.conf*<br>
> workers 21<br>
> <br>
> sslcrtd_program /data/squid.user/usr/lib/squid/security_file_certgen -s <br>
> /data/squid.user/var/lib/squid/ssl_db -M 20MB<br>
> sslcrtd_children 21<br>
> <br>
> logformat extended-squid %{%Y-%m-%d %H:%M:%S}tl| %6tr %>a <br>
> %Ss/%03>Hs/%<Hs %<st %rm %ru %un %Sh/%<A %mt %ea<br>
> <br>
> logfile_rotate 0<br>
> access_log daemon:/var/log/squid.user/access.log <br>
> logformat=extended-squid on-error=drop<br>
> <br>
> cache_peer parent_proxy parent 3128 0<br>
> never_direct allow all<br>
> <br>
> cachemgr_passwd pass config<br>
> <br>
> acl PURGE method PURGE<br>
> http_access allow PURGE<br>
> <br>
> http_access allow all<br>
> <br>
> http_port 3131 ssl-bump generate-host-certificates=on <br>
> dynamic_cert_mem_cache_size=20MB <br>
> tls-cert=/etc/squid.user/sslbump/bump.crt <br>
> tls-key=/etc/squid.user/sslbump/bump.key<br>
> sslproxy_cert_error allow all<br>
> <br>
> acl step1 at_step SslBump1<br>
> acl step2 at_step SslBump2<br>
> acl step3 at_step SslBump3<br>
> <br>
> ssl_bump peek step1<br>
> ssl_bump bump step2<br>
> ssl_bump bump step3<br>
> <br>
> cache_dir rock /data/squid.user/cache 20000 max-size=12000000<br>
> cache_swap_low 85<br>
> cache_swap_high 90<br>
> <br>
> collapsed_forwarding on<br>
> cache_mem 2048 MB<br>
> maximum_object_size_in_memory 10 MB<br>
> <br>
> pinger_enable off<br>
> max_filedesc 8192<br>
> shutdown_lifetime 5 seconds<br>
> netdb_filename none<br>
> log_icp_queries off<br>
> <br>
> via off<br>
> forwarded_for delete<br>
> <br>
> client_request_buffer_max_size 100 MB<br>
> <br>
> coredump_dir /data/squid.user/var/cache/squid<br>
> <br>
> <br>
> <br>
> <br>
> Mon, May 29, 2023 at 23:17, Alex Rousskov <br>
> <<a href="mailto:rousskov@measurement-factory.com" target="_blank">rousskov@measurement-factory.com</a> <br>
> <mailto:<a href="mailto:rousskov@measurement-factory.com" target="_blank">rousskov@measurement-factory.com</a>>>:<br>
> <br>
> On 5/29/23 10:43, Andrey K wrote:<br>
> <br>
> > We need to configure a dedicated proxy server to provide caching of<br>
> > online video broadcasts in order to reduce the load on the uplink<br>
> proxy.<br>
> > Hundreds of users will access the same video-chunks simultaneously.<br>
> ><br>
> > I developed a simple configuration for the test purposes (it is<br>
> shown<br>
> > below).<br>
> > The *collapsed_forwarding* option is on.<br>
> <br>
> Do you get close to 100% hit ratio if clients access these URLs<br>
> sequentially rather than concurrently? If not, then focus on that<br>
> problem before you open the collapsed forwarding Pandora box.<br>
> <br>
> What is your Squid version? Older Squids have more collapsed forwarding<br>
> bugs than newer ones. I recommend testing with Squid v6 or<br>
> master/v7, at<br>
> least to confirm that the problem is still present in the latest<br>
> official code.<br>
> <br>
> How much RAM does your server have? You are using default 256MB memory<br>
> cache (cache_mem). If you have spare memory, make your memory cache<br>
> much<br>
> larger: A rock cache_dir cannot (yet) share the response _while_ the<br>
> response is being written to disk, so relying on cache_dir too much<br>
> will<br>
> decrease your hit ratio, especially in a collapsed forwarding<br>
> environment.<br>
> <br>
> Is your Squid built with --enable-delay-pools? If yes, TCP_MISS does<br>
> not<br>
> necessarily mean a cache miss (an old Squid bug), even if you do not<br>
> use<br>
> any delay pools.<br>
> <br>
> Since you are trying to cache objects larger than 512KB, see<br>
> maximum_object_size_in_memory.<br>
> <br>
> Consider making your test much longer (more sequential requests per<br>
> client/curl worker), to see whether the cache becomes "stable" after<br>
> one<br>
> of the first transactions manages to fully cache the response. This may<br>
> not help with older Squids, but might help with newer ones. However,<br>
> you<br>
> should not test using real origin servers (that you do not control)!<br>
> <br>
> <br>
> > Could you clarify if this behavior of my squid is<br>
> > a bug/misconfiguration, or if I'm running into server performance<br>
> > limitations (squid is running on a VM with 22 cores)?<br>
> <br>
> Most likely, the reduction in hit ratio as concurrency increases is<br>
> _not_ a performance limitation.<br>
> <br>
> <br>
> HTH,<br>
> <br>
> Alex.<br>
> <br>
> <br>
> > I selected a couple of cacheable resources in the internet for<br>
> testing:<br>
> > - small size (~400 KB):<br>
> ><br>
> <a href="https://ia800406.us.archive.org/13/items/romeo-y-julieta-texto-completo/Romeo%20y%20Julieta%20-%20William%20Shakespeare.pdf" rel="noreferrer" target="_blank">https://ia800406.us.archive.org/13/items/romeo-y-julieta-texto-completo/Romeo%20y%20Julieta%20-%20William%20Shakespeare.pdf</a><br>
> > - large (~8 MB):<br>
> ><br>
> <a href="https://ia600601.us.archive.org/10/items/Linux-Journal-2015-01/Linux-Journal-2015-01.pdf" rel="noreferrer" target="_blank">https://ia600601.us.archive.org/10/items/Linux-Journal-2015-01/Linux-Journal-2015-01.pdf</a><br>
> > To test simultaneous connections I am forking curl using a simple<br>
> script<br>
> > (it is also shown below).<br>
> ><br>
> > When I run a test (500 curl threads to<br>
> ><br>
> <a href="https://ia800406.us.archive.org/13/items/romeo-y-julieta-texto-completo/Romeo%20y%20Julieta%20-%20William%20Shakespeare.pdf" rel="noreferrer" target="_blank">https://ia800406.us.archive.org/13/items/romeo-y-julieta-texto-completo/Romeo%20y%20Julieta%20-%20William%20Shakespeare.pdf</a>) I see lots of TCP_MISS/200 with FIRSTUP_PARENT/parent_proxy records in the logs.<br>
> ><br>
> > A simple analysis shows a low percentage of cache hits:<br>
> > cat /var/log/squid.user/access.log| grep '2023-05-29 14' | grep<br>
> pdf |<br>
> > awk '{print $5" " $10}' | sort | uniq -c<br>
> > 24 TCP_CF_MISS/200/- HIER_NONE/-<br>
> > 457 TCP_MISS/200/200 FIRSTUP_PARENT/parent_proxy<br>
> > 10 TCP_MISS/200/- HIER_NONE/-<br>
> > 9 TCP_SWAPFAIL_MISS/200/200 FIRSTUP_PARENT/parent_proxy<br>
> ><br>
> > So the Hit ratio is about (500-457-9)*100/500=6.8%<br>
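As a cross-check, the same hit-ratio arithmetic can be done directly over an access.log slice with awk. The sample lines below are fabricated to match the extended-squid logformat from the configuration (field 5 carries the Squid status, as in the grep/awk pipelines above):

```shell
# Sketch: recompute the hit ratio from an access.log slice with awk.
# With the extended-squid logformat, field 5 is the Squid status
# (e.g. TCP_MEM_HIT/200/-). The sample lines below are fabricated.
cat > /tmp/access_sample.log <<'EOF'
2023-05-29 14:00:01|    12 10.0.0.1 TCP_MEM_HIT/200/- 446985 GET https://example.test/a.pdf - HIER_NONE/- application/pdf -
2023-05-29 14:00:02|   830 10.0.0.2 TCP_MISS/200/200 446985 GET https://example.test/a.pdf - FIRSTUP_PARENT/parent_proxy application/pdf -
2023-05-29 14:00:03|    10 10.0.0.3 TCP_HIT/200/- 446985 GET https://example.test/a.pdf - HIER_NONE/- application/pdf -
2023-05-29 14:00:04|   901 10.0.0.4 TCP_SWAPFAIL_MISS/200/200 446985 GET https://example.test/a.pdf - FIRSTUP_PARENT/parent_proxy application/pdf -
EOF
awk '{ n++; if ($5 ~ /_HIT\//) hits++ } END { printf "hit ratio: %.1f%%\n", 100 * hits / n }' /tmp/access_sample.log
# prints: hit ratio: 50.0%  (2 *_HIT lines out of 4)
```

Counting any *_HIT status as a hit this way avoids having to subtract the parent-fetch counts by hand for each run.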
> ><br>
> > We see almost the same situation when running 200 threads:<br>
> > cat /var/log/squid.user/access.log| grep '2023-05-29 15:45' |<br>
> grep pdf<br>
> > | awk '{print $5" " $10}' | sort | uniq -c<br>
> > 4 TCP_CF_MISS/200/- HIER_NONE/-<br>
> > 140 TCP_MISS/200/200 FIRSTUP_PARENT/parent_proxy<br>
> > 40 TCP_MISS/200/- HIER_NONE/-<br>
> > 16 TCP_SWAPFAIL_MISS/200/200 FIRSTUP_PARENT/parent_proxy<br>
> ><br>
> > This time the Hit ratio is about (200-140-16)*100/200=22%<br>
> ><br>
> > With 50 threads the Hit ratio is 90%:<br>
> > cat /var/log/squid.user/access.log| grep '2023-05-29 15:50' |<br>
> grep pdf<br>
> > | awk '{print $5" " $10}' | sort | uniq -c<br>
> > 27 TCP_CF_MISS/200/- HIER_NONE/-<br>
> > 1 TCP_MISS/200/200 FIRSTUP_PARENT/parent_proxy<br>
> > 18 TCP_MISS/200/- HIER_NONE/-<br>
> > 4 TCP_SWAPFAIL_MISS/200/200 FIRSTUP_PARENT/parent_proxy<br>
> ><br>
> > I thought that it should always be near 99%: only the first<br>
> > request to a URL should be forwarded to the parent proxy, and all<br>
> > subsequent requests should be served from the cache.<br>
> ><br>
> > The situation is even worse with downloading a large file:<br>
> > 500 threads (0.4%):<br>
> > cat /var/log/squid.user/access.log| grep '2023-05-29 17:2' | grep<br>
> pdf |<br>
> > awk '{print $5" " $10}' | sort | uniq -c<br>
> > 10 TCP_CF_MISS/200/200 FIRSTUP_PARENT/parent_proxy<br>
> > 2 TCP_CF_MISS/200/- HIER_NONE/-<br>
> > 488 TCP_MISS/200/200 FIRSTUP_PARENT/parent_proxy<br>
> ><br>
> > 200 threads (3%):<br>
> > cat /var/log/squid.user/access.log| grep '2023-05-29 17:3' | grep<br>
> pdf |<br>
> > awk '{print $5" " $10}' | sort | uniq -c<br>
> > 9 TCP_CF_MISS/200/200 FIRSTUP_PARENT/parent_proxy<br>
> > 6 TCP_CF_MISS/200/- HIER_NONE/-<br>
> > 180 TCP_MISS/200/200 FIRSTUP_PARENT/parent_proxy<br>
> > 5 TCP_SWAPFAIL_MISS/200/200 FIRSTUP_PARENT/parent_proxy<br>
> ><br>
> > 50 threads (98%):<br>
> > cat /var/log/squid.user/access.log| grep '2023-05-29 17:36' |<br>
> grep pdf<br>
> > | awk '{print $5" " $10}' | sort | uniq -c<br>
> > 25 TCP_CF_HIT/200/- HIER_NONE/-<br>
> > 12 TCP_CF_MISS/200/- HIER_NONE/-<br>
> > 12 TCP_HIT/200/- HIER_NONE/-<br>
> > 1 TCP_MISS/200/200 FIRSTUP_PARENT/parent_proxy<br>
> ><br>
> > Could you clarify if this behavior of my squid is a<br>
> > bug/misconfiguration, or if I'm running into server performance<br>
> > limitations (squid is running on a VM with 22 cores)?<br>
> ><br>
> > Kind regards,<br>
> > Ankor<br>
> ><br>
> ><br>
> ><br>
> > *squid.conf:*<br>
> > workers 21<br>
> ><br>
> > sslcrtd_program<br>
> /data/squid.user/usr/lib/squid/security_file_certgen -s<br>
> > /data/squid.user/var/lib/squid/ssl_db -M 20MB<br>
> > sslcrtd_children 21<br>
> ><br>
> > logformat extended-squid %{%Y-%m-%d %H:%M:%S}tl| %6tr %>a<br>
> > %Ss/%03>Hs/%<Hs %<st %rm %ru %un %Sh/%<A %mt %ea<br>
> ><br>
> > logfile_rotate 0<br>
> > access_log daemon:/var/log/squid.user/access.log<br>
> > logformat=extended-squid on-error=drop<br>
> ><br>
> > cache_peer parent_proxy parent 3128 0<br>
> > never_direct allow all<br>
> ><br>
> > cachemgr_passwd pass config<br>
> ><br>
> > acl PURGE method PURGE<br>
> > http_access allow PURGE<br>
> ><br>
> > http_access allow all<br>
> ><br>
> > http_port 3131 ssl-bump generate-host-certificates=on<br>
> > dynamic_cert_mem_cache_size=20MB<br>
> > tls-cert=/etc/squid.user/sslbump/bump.crt<br>
> > tls-key=/etc/squid.user/sslbump/bump.key<br>
> > sslproxy_cert_error allow all<br>
> ><br>
> > acl step1 at_step SslBump1<br>
> > acl step2 at_step SslBump2<br>
> > acl step3 at_step SslBump3<br>
> ><br>
> > ssl_bump peek step1<br>
> > ssl_bump bump step2<br>
> > ssl_bump bump step3<br>
> ><br>
> > cache_dir rock /data/squid.user/cache 20000 max-size=12000000<br>
> > cache_swap_low 85<br>
> > cache_swap_high 90<br>
> ><br>
> > *collapsed_forwarding on*<br>
> ><br>
> > pinger_enable off<br>
> > max_filedesc 8192<br>
> > shutdown_lifetime 5 seconds<br>
> > netdb_filename none<br>
> > log_icp_queries off<br>
> > client_request_buffer_max_size 100 MB<br>
> ><br>
> > via off<br>
> > forwarded_for delete<br>
> ><br>
> > coredump_dir /data/squid.user/var/cache/squid<br>
> ><br>
> > *curl_forker.sh:*<br>
> > #!/bin/sh<br>
> > N=100<br>
> ><br>
> URL=<a href="https://ia600601.us.archive.org/10/items/Linux-Journal-2015-01/Linux-Journal-2015-01.pdf" rel="noreferrer" target="_blank">https://ia600601.us.archive.org/10/items/Linux-Journal-2015-01/Linux-Journal-2015-01.pdf</a><br>
> ><br>
> > if [[ -n $1 && $1 =~ help$ ]];<br>
> > then<br>
> > echo "Usage: $0 [<cnt>] [<url>]"<br>
> > echo<br>
> > echo "Example: $0 10<br>
> ><br>
> <a href="https://ia600601.us.archive.org/10/items/Linux-Journal-2015-01/Linux-Journal-2015-01.pdf" rel="noreferrer" target="_blank">https://ia600601.us.archive.org/10/items/Linux-Journal-2015-01/Linux-Journal-2015-01.pdf</a>";<br>
> > echo<br>
> > exit;<br>
> > fi<br>
> ><br>
> > while [[ $# -gt 0 ]]<br>
> > do<br>
> > if [[ $1 =~ ^[0-9]+$ ]]<br>
> > then<br>
> > N=$1<br>
> > else<br>
> > URL=$1<br>
> > fi<br>
> > shift<br>
> > done<br>
> ><br>
> > echo $URL<br>
> > echo $N threads<br>
> ><br>
> > for i in `seq $N`<br>
> > do<br>
> > nohup curl --tlsv1.2 -k --proxy 0001vsg01:3131 -v $URL<br>
> >/dev/null<br>
> > 2>&1 &<br>
> > done<br>
> ><br>
> ><br>
> ><br>
> ><br>
> ><br>
> ><br>
> ><br>
> ><br>
> ><br>
> ><br>
> ><br>
> > _______________________________________________<br>
> > squid-users mailing list<br>
> > <a href="mailto:squid-users@lists.squid-cache.org" target="_blank">squid-users@lists.squid-cache.org</a><br>
> <mailto:<a href="mailto:squid-users@lists.squid-cache.org" target="_blank">squid-users@lists.squid-cache.org</a>><br>
> > <a href="http://lists.squid-cache.org/listinfo/squid-users" rel="noreferrer" target="_blank">http://lists.squid-cache.org/listinfo/squid-users</a><br>
> <<a href="http://lists.squid-cache.org/listinfo/squid-users" rel="noreferrer" target="_blank">http://lists.squid-cache.org/listinfo/squid-users</a>><br>
> <br>
> _______________________________________________<br>
> squid-users mailing list<br>
> <a href="mailto:squid-users@lists.squid-cache.org" target="_blank">squid-users@lists.squid-cache.org</a><br>
> <mailto:<a href="mailto:squid-users@lists.squid-cache.org" target="_blank">squid-users@lists.squid-cache.org</a>><br>
> <a href="http://lists.squid-cache.org/listinfo/squid-users" rel="noreferrer" target="_blank">http://lists.squid-cache.org/listinfo/squid-users</a><br>
> <<a href="http://lists.squid-cache.org/listinfo/squid-users" rel="noreferrer" target="_blank">http://lists.squid-cache.org/listinfo/squid-users</a>><br>
> <br>
<br>
</blockquote></div></div>