[squid-users] IPVS/LVS load balancing Squid servers, anyone did it?

Eliezer Croitor ngtech1ltd at gmail.com
Thu Aug 27 12:03:48 UTC 2020


Hey Bruce,

 

Thanks for the detailed and beautiful answer.

I am actually trying to understand what IPVS offers and how it compares to nftables.

 

We need a concrete setup structure to make this more real.

 

I am trying to think through a setup sketch:

3+ Proxies

2 LB

1 Edge Router VIP (maybe more actual routers)

 

Networks:

PX Internal net: 192.168.100.0/24

Wan Edge Routers net: 192.168.200.0/24

 

R1:

WAN VIP: 192.168.200.200/24

LAN VIP: 192.168.100.254/24

Static route toward 192.168.101.100 via 192.168.200.200

(Another option would be using FRR for ECMP and ACTIVE/ACTIVE LB)

 

Proxies VIP: 192.168.101.100/32

PX1 IP:  192.168.100.101/24 GW 192.168.100.254

PX2 IP:  192.168.100.102/24 GW 192.168.100.254

PX3 IP:  192.168.100.103/24 GW 192.168.100.254

 

LBs VIP: 192.168.200.200/24

LB 1 IP: 192.168.200.201/24

LB 2 IP: 192.168.200.202/24

 

## Things to consider about the setup:

We can use either FWMARK-based LB or MAC address replacement.

It is possible to avoid ARP issues with either tunnels or VIP assignment to interfaces.

There are a couple of tunneling options, such as GENEVE/GUE/FOU/GRE/IPIP, which can be used with IPVS.
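For example, a rough sketch of the FWMARK approach on the LBs could look like this (the mark value 1 and the wrr scheduler are just placeholders I picked; the addresses follow the sketch above, and -g keeps it in DR / MAC-replacement mode):

# Mark traffic for the proxies VIP, then balance on the mark
iptables -t mangle -A PREROUTING -d 192.168.101.100/32 -p tcp --dport 3128 -j MARK --set-mark 1
ipvsadm -A -f 1 -s wrr
ipvsadm -a -f 1 -r 192.168.100.101 -g -w 1
ipvsadm -a -f 1 -r 192.168.100.102 -g -w 1
ipvsadm -a -f 1 -r 192.168.100.103 -g -w 1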

 

Thoughts? 

 

----

Eliezer Croitoru

Tech Support

Mobile: +972-5-28704261

Email: ngtech1ltd at gmail.com

 

From: Bruce Rosenberg <bruce.rosenberg.au at gmail.com> 
Sent: Thursday, August 27, 2020 7:35 AM
To: Eliezer Croitor <ngtech1ltd at gmail.com>
Cc: squid-users at lists.squid-cache.org
Subject: Re: [squid-users] IPVS/LVS load balancing Squid servers, anyone did it?

 

Hi Eliezer,

 

We are running a couple of Squid proxies (the real servers) behind a pair of LVS servers with keepalived and it works flawlessly.

The 2 x Squid proxies are active / active and the LVS servers are active / passive.

If a Squid proxy dies the remaining proxy takes all the traffic.

If the active LVS server dies, keepalived running on the backup LVS (via VRRP) moves the VIP to itself and it takes all the traffic. The only difference between the two is that one has a higher priority, so it gets the VIP first.

I have included some sanitised snippets from a keepalived.conf file that should help you.

You could easily scale this out if you need more than 2 Squid proxies.

 

The config I provided is for LVS/DR (Direct Route) mode.

This method rewrites the destination MAC address of forwarded packets to that of one of the real servers and is the most scalable way to run LVS.

It does require the LVS and real servers be on the same L2 network.

If that is not possible then consider LVS/TUN mode or LVS/NAT mode.
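As a rough illustration of the three forwarding methods in plain ipvsadm terms (the first two addresses come from the config further down, 10.10.10.13 is a made-up third real; normally you would use one method for all reals):

ipvsadm -A -t 10.10.10.10:3128 -s wrr
ipvsadm -a -t 10.10.10.10:3128 -r 10.10.10.11:3128 -g   # DR: rewrite destination MAC, same L2 required
ipvsadm -a -t 10.10.10.10:3128 -r 10.10.10.12:3128 -i   # TUN: IPIP encapsulation, reals can be routed
ipvsadm -a -t 10.10.10.10:3128 -r 10.10.10.13:3128 -m   # NAT: rewrite destination IP, replies return via the LVS box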

 

As LVS/DR rewrites the MAC address, it requires each real server to have the VIP address plumbed on an interface. It also requires the real servers to ignore ARP requests for the VIP address, as the only device that should respond to ARP for the VIP is the active LVS server.

We do this by configuring the VIP on the loopback interface on each real server, but there are other methods as well, such as dropping the ARP responses using arptables, iptables or firewalld.

I think back in the kernel 2.4 and 2.6 days people used the noarp kernel module, which could be configured to ignore ARP requests for a particular IP address, but you don't really need this anymore.
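A minimal sketch of the loopback method on a real server (the arp_ignore/arp_announce values are the commonly recommended ones for LVS/DR; the VIP matches the example config below):

# Plumb the VIP on loopback and stop this real server answering ARP for it
ip addr add 10.10.10.10/32 dev lo
sysctl -w net.ipv4.conf.all.arp_ignore=1
sysctl -w net.ipv4.conf.all.arp_announce=2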

 

More info on the loopback ARP blocking method - https://www.loadbalancer.org/blog/layer-4-direct-routing-lvs-dr-and-layer-4-tun-lvs-tun-in-aws/

More info on firewall-type ARP blocking methods - https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/load_balancer_administration/s1-lvs-direct-vsa

More info about LVS/DR - http://kb.linuxvirtualserver.org/wiki/LVS/DR

 

If you are using an RPM-based distro, then to set up the LVS servers you only need the ipvsadm and keepalived packages.

Install Squid on the reals, configure the VIP on each, and disable ARP.

Then build the keepalived.conf on both LVS servers and restart keepalived.
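On an EL-style box that boils down to roughly the following (assuming everything comes from the distro repos):

# On the LVS servers
yum install -y ipvsadm keepalived
systemctl enable --now keepalived

# On the real servers
yum install -y squid
systemctl enable --now squid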

 

The priority configuration stanza in the vrrp_instance section determines the primary VRRP node (LVS server) for that virtual router instance.

The secondary LVS server needs a lower priority compared to the primary.

You can configure one as the MASTER and the other as the BACKUP, but our guys make them both BACKUP and let the priority sort out the election of the primary.

I think this might be to solve a problem of bringing up a BACKUP without a MASTER but I can't confirm that.
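So the second LVS server would carry a near-identical vrrp_instance stanza, differing only in priority (100 here is just an example of any value lower than the primary's 150):

vrrp_instance LVS_example {
    state BACKUP
    priority 100              # lower than the 150 on the primary, so it loses the election
    interface eth0
    virtual_router_id 5       # must match the primary
    ...                       # rest identical to the primary's config below
}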

 

 

Good luck.

 

 

$ cat /etc/keepalived/keepalived.conf

global_defs {

    notification_email {
        # rootmail@example.com
    }
    notification_email_from keepalive-daemon@lvs01.example.com
    smtp_server 10.1.2.3        # mail.example.com
    smtp_connect_timeout 30
    lvs_id lvs01.example.com     # Name to mention in email.
}

vrrp_instance LVS_example {

    state BACKUP
    priority 150
    interface eth0
    lvs_sync_daemon_interface eth0
    virtual_router_id 5
    preempt_delay 20

    virtual_ipaddress_excluded {
        
        10.10.10.10   # Squid proxy
    }

    notify_master "some command to log or send an alert"
    notify_backup "some command to log or send an alert"
    notify_fault "some command to log or send an alert"
}


# SQUID Proxy
virtual_server 10.10.10.10 3128 { 

    delay_loop 5
    lb_algo wrr
    lb_kind DR
    protocol TCP

    real_server 10.10.10.11 3128 {   # proxy01.example.com
        weight 1
        inhibit_on_failure 1
        TCP_CHECK {
            connect_port 3128
            connect_timeout 5
        }
    }

    real_server 10.10.10.12 3128 {   # proxy02.example.com
        weight 1
        inhibit_on_failure 1
        TCP_CHECK {
            connect_port 3128
            connect_timeout 5
        }
    }
}
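Once keepalived is running you can sanity-check the active LVS server with something like the following (commands only, output omitted here):

# List the IPVS table to confirm the virtual service and reals are present
ipvsadm -Ln
# Confirm which node currently holds the VIP
ip -4 addr show dev eth0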

 

 

On Thu, Aug 27, 2020 at 8:24 AM Eliezer Croitor <ngtech1ltd at gmail.com> wrote:

Hey All,

 

I am reading about LB and have tried to find an up-to-date example or tutorial specific to Squid, with no luck.

I have seen: http://kb.linuxvirtualserver.org/wiki/Building_Web_Cache_Cluster_using_LVS

 

Which makes sense and is also similar, or more or less identical, to WCCP with GRE.

 

Does anyone know about a working Squid setup with IPVS/LVS?

 

Thanks,

Eliezer

 

----

Eliezer Croitoru

Tech Support

Mobile: +972-5-28704261

Email: ngtech1ltd at gmail.com

 

_______________________________________________
squid-users mailing list
squid-users at lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users
