[squid-users] Fwd: Re: very slow squid response

Antony Stone Antony.Stone at squid.open.source.it
Tue Sep 19 11:41:35 UTC 2017


Hi.

Forwarding private reply back to the list in case it helps anyone reply with 
suggestions.

Iraj - please reply to the list in future.

Antony.

----------  Forwarded Message Starts  ----------

Subject: Re: [squid-users] very slow squid response
Date: Tuesday 19 September 2017 12:34:47
From: Iraj Norouzi <zeutech at gmail.com>
To: Antony Stone <Antony.Stone at squid.open.source.it>

Hi Antony,
Thanks for your reply.
> i setup squid on ubuntu and centos

Why both?
For testing, and because I was not getting the results I wanted on either one.

> with tproxy and wccp for 6 gb/s traffic

What hardware are you using for that sort of traffic flow?
I use an HP DL360 with two 6-core 3 GHz processors, 64 GB of RAM and a 1 TB HDD.

> but when i try to test squid with 40 mb/s traffic

How are you generating "40 mb/s traffic"?  I'm assuming that your Internet
connection is 6Gbps as stated above, so how are you restricting this down to
40Mbps for testing?
I redirect one class of IP addresses, carrying about 40 Mb/s of traffic, to
Squid for the test; I intend to redirect all of the traffic to Squid once
browsing through it is fast.

> it response very slow

Numbers please.
Websites load in one or two seconds with direct browsing; through Squid they
take around 10 seconds or do not load at all.
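For concrete numbers, something like curl's timing output could be used to
compare the two paths (the URL and the proxy address below are placeholders,
not values taken from this setup):

# Direct request, with a timing breakdown
curl -o /dev/null -s -w 'dns=%{time_namelookup} connect=%{time_connect} ttfb=%{time_starttransfer} total=%{time_total}\n' http://example.com/
# Same request sent through the proxy port (3128 in the config below) for comparison
curl -o /dev/null -s -x http://x.x.x.x:3128 -w 'dns=%{time_namelookup} connect=%{time_connect} ttfb=%{time_starttransfer} total=%{time_total}\n' http://example.com/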

> while when i use direct browsing i can browse websites very fast

Is the direct traffic still being routed through the Squid server (you say
you're using tproxy, so I assume this is an intercept machine with the
traffic going through it between client and server)?
No; the HTTP traffic is redirected to Squid by WCCP and an access-list
configured on the Cisco:
ip wccp 80 redirect-list wccp
ip wccp 90 redirect-list wccp_to_inside

ip access-list extended wccp
 remark Permit http access from clients
 permit tcp x.x.x.x 0.0.0.255 any eq www
 deny   ip any any
ip access-list extended wccp_to_inside
 permit tcp any eq www x.x.x.x 0.0.0.255
 deny   ip any any
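As a quick sanity check on the Squid side (log path and interface name are the
usual ones used elsewhere in this thread and may differ), the WCCPv2
negotiation with the router should be visible in cache.log, and the redirected
port-80 packets should show up on the interface:

grep -i wccp /var/log/squid/cache.log | tail -n 20
tcpdump -ni enp3s0f0 -c 20 tcp port 80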

> i used tcpdump for tracing connections arrive time and there was no
> problem,

Arrival time where?  From the origin server to Squid?  From Squid to the
client?  What are you actually measuring?
Yes, the arrival of packets from the clients at the Squid interface: from the
moment I press Enter in the browser's address bar, tcpdump shows the packets
arriving immediately.

> i used watch -d for tracing packets match by iptables rules and it was ok,

Please be more specific - what did you measure and what does "OK" mean?
I added an iptables rule to trace one website's packets and saw them in
kern.log matching the rule, so matching looked fine. The commands I used are
these:

ip rule add fwmark 1 lookup 100
ip route add local 0.0.0.0/0 dev enp3s0f0 table 100
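Assuming the rule and the route were entered as two separate commands as above,
they can be verified with:

ip rule show | grep fwmark
ip route show table 100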

iptables -t mangle -A PREROUTING -d $LOCALIP -j ACCEPT
iptables -t mangle -N DIVERT
iptables -t mangle -A DIVERT -j MARK --set-mark 1
iptables -t mangle -A DIVERT -j ACCEPT
iptables -t mangle -A PREROUTING -p tcp -m socket -j DIVERT
iptables -t mangle -A PREROUTING -p tcp --dport 80 -j TPROXY --tproxy-mark 0x1/0x1 --on-port 3129
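One common TPROXY pitfall worth checking here (only a guess, not something
confirmed by the logs in this thread) is reverse-path filtering, which can
silently drop intercepted packets:

# rp_filter usually has to be 0 (or loose mode, 2) on the intercepting interface for TPROXY
sysctl net.ipv4.conf.all.rp_filter
sysctl net.ipv4.conf.enp3s0f0.rp_filter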

watch -d iptables -t mangle -vnL

Did you compare with and without Squid in place to see what differs?
No. As I said, when I browse directly everything is fine: those packets never
come to Squid, they go out from the Cisco to the Internet and come back to the
clients through the Cisco. Because of the traffic load I cannot enable
debugging on the Cisco, but when I browse through Squid I get latency, so I
suppose the problem is Squid or the server it is running on.

> i also used iptables trace command for tracing matching iptables rules,
> there was no problem except i had latency on arriving packets on iptables
> rule while tcpdump captured packets fast, it happened when my browsing was
> so slow, at some times that my browsing was fast there was no latency on
> iptables trace log.

That description is too vague to know exactly what you were measuring and
what results you got.

iptables -t raw -A PREROUTING -s x.x.x.x -j TRACE

iptables -t raw -A OUTPUT -s x.x.x.x -j TRACE

tailf /var/log/kern.log

tcpdump -e -i enp3s0f0 dst host x.x.x.x and dst port 80

tcpdump -e -i enp3s0f0 src host x.x.x.x and src port 80
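To see where the latency appears, the two logs can be lined up by timestamp,
for example (log path as above; the grep pattern is the kernel's TRACE prefix):

grep 'TRACE:' /var/log/kern.log | tail -n 20
tcpdump -tttt -e -i enp3s0f0 host x.x.x.x and port 80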



> i also used tcp and linux enhancement configurations

Details?
net.core.wmem_default=524288
net.core.wmem_max=16777216
net.core.rmem_default=524288
net.core.rmem_max=16777216
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_rmem = 66560 524288 16777216
net.ipv4.tcp_wmem = 66560 524288 16777216
net.core.somaxconn=4000
net.ipv4.tcp_timestamps=0
net.ipv4.tcp_sack=0
net.ipv4.tcp_fin_timeout=20
net.ipv4.ip_local_port_range=10240 65000
net.ipv4.tcp_keepalive_time = 900
net.ipv4.tcp_keepalive_intvl = 900
net.ipv4.tcp_keepalive_probes = 9
net.core.somaxconn = 5000
net.core.netdev_max_backlog = 8000
net.ipv4.tcp_max_syn_backlog = 8096
net.ipv4.tcp_slow_start_after_idle = 0
net.ipv4.tcp_tw_reuse = 1
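Assuming these live in /etc/sysctl.conf, they can be reloaded and spot-checked
like this (note that net.core.somaxconn is set twice above, to 4000 and then
5000; the later value wins when the file is applied in order):

sysctl -p /etc/sysctl.conf
sysctl net.core.somaxconn net.core.rmem_max net.ipv4.tcp_tw_reuse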

> but nothing happened.
> wccp send packets very well and tcpdump show capturing packets too but
> browsing with squid is very slow.

Firstly, please define "slow" - do you mean it takes a long time for new web
pages / images / etc to appear (but once they start, they arrive quickly)?
Pages take around 10 seconds to appear, or do not load at all, whether it is a
page I am visiting for the first time or one I have visited many times.

Or do you mean that a continuous stream of data (a "download") arrives more
slowly when going through Squid than going direct (and if so, what are the
different speeds)?
No, it is just the browsing that is slow.

Secondly, what are you trying to achieve with Squid - what is its purpose in
your network?
Caching pages and their objects, so that clients get faster browsing and we
save bandwidth.
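Once requests do flow through it, whether caching is actually paying off can be
checked via the cache manager (host and port as in the config below; manager
access is allowed from localhost there):

# The output should include hit-ratio lines such as "Hits as % of all requests"
squidclient -h 127.0.0.1 -p 3128 mgr:info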

squid.conf
dns_v4_first on
acl fanava_net src 89.221.82.161/32 # Fanava First clients range to cache
acl Safe_ports port 80 # http
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT
http_access allow localhost manager
http_access deny manager
http_access allow localhost
http_access allow fanava_net
http_access deny all
http_port 0.0.0.0:3128
http_port 0.0.0.0:3129 tproxy
cache_mem 50 GB
maximum_object_size_in_memory 100 MB
minimum_object_size 2 KB
maximum_object_size 6 GB
#cache_dir ufs /var/spool/squid 1024000 256 512
cache_swap_low 90
cache_swap_high 92
coredump_dir /var/spool/squid
# Image files
refresh_pattern -i \.(png|gif|jpg|jpeg|bmp|tif|tiff)$ 10080 90% 43200
# Compressed files
refresh_pattern -i \.(zip|rar|tar|gz|tgz|z|arj|lha|lzh|iso|deb|rpm)$ 10080 90% 43200
# Binary files
refresh_pattern -i \.(exe|msi)$ 10080 90% 43200
# Multimedia files
refresh_pattern -i \.(mp3|wav|mid|midi|ram|avi|wmv|mpg|mpeg|mp4|swf|flv)$ 10080 90% 43200
# Document files
refresh_pattern -i \.(pdf|ps|doc|ppt|xls|pps)$ 10080 90% 43200
# HTML patterns
refresh_pattern -i \.index.(html|htm)$ 0 40% 10080
refresh_pattern -i \.default.(html|htm)$ 0 40% 10080
refresh_pattern -i \.(html|htm|css|js)$ 1440 40% 40320
# Default patterns
refresh_pattern ^ftp: 1440 20% 10080
refresh_pattern ^gopher: 1440 0% 1440
refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
refresh_pattern . 0 20% 4320
refresh_pattern (Release|Packages(.gz)*)$ 0 20% 2880
cache_mgr zeutech at gmail.com
wccp2_router x.x.x.x
wccp_version 2
wccp2_rebuild_wait on
wccp2_forwarding_method 2
wccp2_return_method 2
wccp2_service dynamic 80
wccp2_service_info 80 protocol=tcp flags=src_ip_hash priority=240 ports=80
wccp2_service dynamic 90
wccp2_service_info 90 protocol=tcp flags=dst_ip_hash,ports_source priority=240 ports=80
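To put numbers on "slow", the per-request elapsed time is already in
access.log: with the default native format the 2nd field is the response time
in milliseconds, the 4th the result code and the 7th the URL (the log path is
the usual default and may differ):

awk '{print $2, $4, $7}' /var/log/squid/access.log | sort -nr | head -n 20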


Regards,
Iraj Norouzi
+989122494558

On Tue, Sep 19, 2017 at 3:04 PM, Antony Stone <
Antony.Stone at squid.open.source.it> wrote:

> On Tuesday 19 September 2017 at 11:18:34, Iraj Norouzi wrote:
>
> > hi everybody
> > i setup squid on ubuntu and centos
>
> Why both?
>
> > with tproxy and wccp for 6 gb/s traffic
>
> What hardware are you using for that sort of traffic flow?
>
> > but when i try to test squid with 40 mb/s traffic
>
> How are you generating "40 mb/s traffic"?  I'm assuming that your Internet
> connection is 6Gbps as stated above, so how are you restricting this down
> to
> 40Mbps for testing?
>
> > it response very slow
>
> Numbers please.
>
> > while when i use direct browsing i can browse websites very fast
>
> Is the direct traffic still being routed through the Squid server (you say
> you're using tproxy, so I assume this is an intercept machine with the
> traffic
> going through it between client and server)?
>
> > i used tcpdump for tracing connections arrive time and there was no
> problem,
>
> Arrival time where?  From the origin server to Squid?  From Squid to the
> client?  What are you actually measuring?
>
> > i used watch -d for tracing packets match by iptables rules and it was
> ok,
>
> Please be more specific - what did you measure and what does "OK" mean?
>
> Did you compare with and without Squid in place to see what differs?
>
> > i also used iptables trace command for tracing matching iptables rules,
> > there was no problem except i had latency on arriving packets on iptables
> > rule while tcpdump captured packets fast, it happened when my browsing
> was
> > so slow, at some times that my browsing was fast there was no latency on
> > iptables trace log.
>
> That description is too vague to know exactly what you were measuring and
> what
> results you got.
>
> > i also used tcp and linux enhancement configurations
>
> Details?
>
> > but nothing happened.
> > wccp send packets very well and tcpdump show capturing packets too but
> > browsing with squid is very slow.
>
> Firstly, please define "slow" - do you mean it takes a long time for new
> web
> pages / images / etc to appear (but once they start, they arrive quickly),
> or
> do you mean that a continuous stream of data (a "download") arrives more
> slowly when going through Squid than going direct (and if so, what are the
> different speeds)?
>
> Secondly, what are you trying to achieve with Squid - what is its purpose
> in
> your network?
>
> > please help me.
>
> Please help us - give us more details about the hardware you're running
> this
> on, the version of Squid you're using, what WCCP routing / filtering you're
> doing, the measurements you've made and the results you got.
>
>
> Regards,
>
>
> Antony.
>
> --
> We all get the same amount of time - twenty-four hours per day.
> How you use it is up to you.
>
>                                                    Please reply to the
> list;
>                                                          please *don't* CC
> me.
> _______________________________________________
> squid-users mailing list
> squid-users at lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users
>

----------  Forwarded Message Ends  ----------

-- 
Most people are aware that the Universe is big.

 - Paul Davies, Professor of Theoretical Physics

                                                   Please reply to the list;
                                                         please *don't* CC me.

