[squid-users] slow TCP_TUNNEL

Alex Rousskov rousskov at measurement-factory.com
Mon Jul 25 19:43:52 UTC 2022


On 7/25/22 04:40, Katerina Bubenickova wrote:
> Hi,
> We have two Squid proxies (let's call them C1 and C2) running in the 
> DMZ on CentOS 6, which is very old.
> 
> I have installed Debian 11 bullseye with Squid + squidGuard, trying to 
> use the same configuration (let's call it D1).
> If I use this proxy for a single PC, all is OK.
> If I add the proxy to DNS as a third proxy (C1 + C2 + D1), or use one 
> old proxy and the new one (C1 + D1), the Internet response becomes 
> very slow, unusable.
> 
> I tried to fix it, without success:
> I added the directive url_rewrite_children 200
> I changed DNS from 8.8.8.8 to localhost
> I turned off the Squid cache
> I commented out squidGuard
> I migrated from one VMware server to another, better one
> I added memory (16 GB) and CPUs (6)
> I added the directive dns_v4_first on
> There are no rules in the D1 firewall
> 
> We have about 700 PCs, and the problem starts 15-20 minutes after I 
> add D1 into DNS as a proxy.
> I can see it on our Flowmon monitor in the NPM response time value: in 
> the normal state it is about 0.1 s; when the problem arises it is up 
> to 3 s.
> If I remove D1 from DNS, all is OK after a while.
> 
> I tried setting the IP of D1 to the same as the IP of C2 (with C2 
> powered off, of course) to test whether the problem could be caused by 
> some firewall between the user PCs and the proxy; it doesn't help.
> 
> 
> access.log of D1
> 
> 1658483765.546 1622444 172.19.11.101 TCP_TUNNEL/200 3635 CONNECT 
> epns.eset.com:443 - HIER_DIRECT/91.228.167.192 -

Are all D1 transactions that slow (the second field is response time in 
milliseconds)? Or are some of them fast even when others are slow?
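One way to answer that from the log itself, a minimal sketch assuming the
default native "squid" log format (field 2 is the response time in
milliseconds; the log path is the Debian default and may differ):

```shell
# Count fast vs. slow entries in Squid's access.log.
log=/var/log/squid/access.log      # adjust path to your installation
classify() {
  awk '{ if ($2 >= 1000) slow++; else fast++ }
       END { printf "fast(<1s)=%d slow(>=1s)=%d\n", fast, slow }' "$@"
}
if [ -r "$log" ]; then
  classify "$log"
fi
```

If nearly everything is slow, suspect something global (DNS, a helper, an
overloaded Squid); if only some entries are slow, compare those
destinations or clients against the fast ones.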

Is D1 Squid itself very busy (90% single CPU core utilization or 
higher)? If Squid looks idle, it is probably waiting for something like 
a DNS response or a problematic helper.
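A quick sketch for that check, assuming shell access on D1 and the usual
Debian process name `squid`:

```shell
# Show the Squid worker's current CPU percentage, if it is running.
cpu_of() { ps -o %cpu= -p "$1"; }        # per-process CPU percentage
pid=$(pgrep -x squid | head -n 1)
if [ -n "$pid" ]; then
  echo "squid pid=$pid cpu=$(cpu_of "$pid")%"
fi
# Squid can also report its own CPU usage via the cache manager:
#   squidclient mgr:info | grep -i cpu
```

A worker pinned near 100% of one core points at Squid itself; a mostly
idle worker with slow responses points at something it is waiting on.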

Any errors or warnings in D1 cache.log?

Can you reproduce the problem with a single curl/wget transaction that 
targets D1?
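For that test, something like the sketch below shows where the time goes
in a single transaction. The proxy address is a placeholder (replace it
with D1's real address); 3128 is Squid's default http_port:

```shell
# probe <proxy-url> <target-url>: fetch once through the proxy and
# print a timing breakdown from curl's --write-out variables.
probe() {
  curl -sS -o /dev/null -x "$1" \
       -w 'dns=%{time_namelookup}s connect=%{time_connect}s tls=%{time_appconnect}s total=%{time_total}s\n' \
       "$2"
}
# Example (placeholder proxy address):
#   probe http://192.0.2.10:3128 https://epns.eset.com/
```

If the same URL is fast when fetched directly (without the proxy) but
slow through D1, the delay is inside D1 or on its network path; the
breakdown shows whether it is spent connecting or transferring.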


HTH,

Alex.



> 
> in the access.log of C1 the times are a lot lower:
> 
> 1658491430.020      7 172.19.14.191 TCP_MISS/200 473 GET 
> http://i5.c.eset.com/v1/auth/60064ADBC289880E8D77/updlist/31/eid/108/lid/108 
> - DIRECT/91.228.166.45 application/octet-stream
> 1658491430.025      6 172.19.15.72 TCP_MISS/200 473 GET 
> http://i5.c.eset.com/v1/auth/60064ADBC289880E8D77/updlist/31/eid/108/lid/108 
> - DIRECT/91.228.166.45 application/octet-stream
> 1658491430.031      6 172.19.14.191 TCP_MISS/200 473 GET 
> http://i5.c.eset.com/v1/auth/60064ADBC289880E8D77/updlist/32/eid/117876/lid/117876 
> - DIRECT/91.228.166.45 application/octet-stream
> 1658491430.038    430 172.19.14.71 TCP_MISS/200 4906 CONNECT 
> v10.events.data.microsoft.com:443 - DIRECT/20.42.65.90 -
> 1658491430.076     40 172.19.14.191 TCP_MISS/200 473 GET 
> http://i5.c.eset.com/v1/auth/60064ADBC289880E8D77/updlist/33/eid/10185/lid/10185 
> - DIRECT/91.228.166.45 application/octet-stream
> 1658491430.112     55 172.19.15.72 TCP_MISS/200 473 GET 
> http://i5.c.eset.com/v1/auth/60064ADBC289880E8D77/updlist/32/eid/117876/lid/117876 
> - DIRECT/91.228.166.45 application/octet-stream
> 1658491430.155  30017 172.19.14.186 TCP_MISS/200 62553 CONNECT 
> armmf.adobe.com:443 - DIRECT/2.23.8.158 -
> 1658491430.180     43 172.19.14.191 TCP_MISS/200 473 GET 
> http://i5.c.eset.com/v1/auth/60064ADBC289880E8D77/updlist/34/eid/8576/lid/8576 
> - DIRECT/91.228.166.45 application/octet-stream
> 1658491430.227    235 172.19.15.144 TCP_MISS/200 3793 CONNECT 
> ts.eset.com:443 - DIRECT/91.228.166.148 -
> 1658491430.276     73 172.19.15.72 TCP_MISS/200 473 GET 
> http://i5.c.eset.com/v1/auth/60064ADBC289880E8D77/updlist/33/eid/10185/lid/10185 
> - DIRECT/91.228.166.45 application/octet-stream
> 1658491430.404     90 172.19.14.191 TCP_MISS/200 473 GET 
> http://i5.c.eset.com/v1/auth/60064ADBC289880E8D77/updlist/35/eid/7351/lid/7351 
> - DIRECT/91.228.166.45 application/octet-stream
> 
> 
> 
> Now I am out of ideas what to test.


