While we show plain configuration content here, almost everything can be configured using the web interface alone.
Node1: /etc/network/interfaces
auto vmbr0
iface vmbr0 inet manual
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094

# Management IP on VLAN 100
auto vmbr0.100
iface vmbr0.100 inet static
    address 192.168.0.1/24

source /etc/network/interfaces.d/*
Node2: /etc/network/interfaces
auto vmbr0
iface vmbr0 inet manual
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094

# Management IP on VLAN 100
auto vmbr0.100
iface vmbr0.100 inet static
    address 192.168.0.2/24

source /etc/network/interfaces.d/*
Create a VLAN zone named ‘myvlanzone’:
id: myvlanzone
bridge: vmbr0
Create a VNet named ‘myvnet1’ with VLAN tag 10, using the previously created ‘myvlanzone’ as its zone.
id: myvnet1
zone: myvlanzone
tag: 10
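The zone and VNet defined above end up in the cluster-wide SDN configuration. As a rough sketch, assuming the standard /etc/pve/sdn/ file layout (the web interface maintains these files for you; exact key names may differ between versions):

```
# /etc/pve/sdn/zones.cfg (sketch)
vlan: myvlanzone
    bridge vmbr0

# /etc/pve/sdn/vnets.cfg (sketch)
vnet: myvnet1
    zone myvlanzone
    tag 10
```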
Apply the configuration through the main SDN panel to create the VNet locally on each node.
Create a Debian-based Virtual Machine (vm1) on node1, with a vNIC on ‘myvnet1’.
Use the following network configuration for this VM:
auto eth0
iface eth0 inet static
    address 10.0.3.100/24
Create a second Virtual Machine (vm2) on node2, with a vNIC on the same VNet ‘myvnet1’ as vm1.
Use the following network configuration for this VM:
auto eth0
iface eth0 inet static
    address 10.0.3.101/24
Then, you should be able to ping between both VMs over that network.
While we show plain configuration content here, almost everything can be configured using the web interface alone.
Node1: /etc/network/interfaces
auto vmbr0
iface vmbr0 inet manual
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094

# Management IP on VLAN 100
auto vmbr0.100
iface vmbr0.100 inet static
    address 192.168.0.1/24

source /etc/network/interfaces.d/*
Node2: /etc/network/interfaces
auto vmbr0
iface vmbr0 inet manual
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094

# Management IP on VLAN 100
auto vmbr0.100
iface vmbr0.100 inet static
    address 192.168.0.2/24

source /etc/network/interfaces.d/*
Create a QinQ zone named ‘qinqzone1’ with service VLAN 20.

id: qinqzone1
bridge: vmbr0
service vlan: 20

Create another QinQ zone named ‘qinqzone2’ with service VLAN 30.

id: qinqzone2
bridge: vmbr0
service vlan: 30
Create a VNet named ‘myvnet1’ with customer VLAN 100 on the previously created ‘qinqzone1’ zone.

id: myvnet1
zone: qinqzone1
tag: 100

Create a second VNet named ‘myvnet2’, also with customer VLAN 100, on the previously created ‘qinqzone2’ zone.

id: myvnet2
zone: qinqzone2
tag: 100
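As with the VLAN example, these definitions could be sketched in the SDN configuration files, assuming the standard /etc/pve/sdn/ layout (the web interface generates these; key names may vary by version):

```
# /etc/pve/sdn/zones.cfg (sketch)
qinq: qinqzone1
    bridge vmbr0
    tag 20

qinq: qinqzone2
    bridge vmbr0
    tag 30

# /etc/pve/sdn/vnets.cfg (sketch)
vnet: myvnet1
    zone qinqzone1
    tag 100

vnet: myvnet2
    zone qinqzone2
    tag 100
```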
Apply the configuration on the main SDN web-interface panel to create the VNets locally on each node.
Create a Debian-based Virtual Machine (vm1) on node1, with a vNIC on ‘myvnet1’.
Use the following network configuration for this VM:
auto eth0
iface eth0 inet static
    address 10.0.3.100/24
Create a second Virtual Machine (vm2) on node2, with a vNIC on the same VNet ‘myvnet1’ as vm1.
Use the following network configuration for this VM:
auto eth0
iface eth0 inet static
    address 10.0.3.101/24
Create a third Virtual Machine (vm3) on node1, with a vNIC on the other VNet ‘myvnet2’.
Use the following network configuration for this VM:
auto eth0
iface eth0 inet static
    address 10.0.3.102/24
Create another Virtual Machine (vm4) on node2, with a vNIC on the same VNet ‘myvnet2’ as vm3.
Use the following network configuration for this VM:
auto eth0
iface eth0 inet static
    address 10.0.3.103/24
You should then be able to ping between vm1 and vm2, and likewise between vm3 and vm4. However, neither vm1 nor vm2 can reach vm3 or vm4, as they are in a different zone with a different service VLAN.
While we show plain configuration content here, almost everything can be configured using the web interface alone.
node1: /etc/network/interfaces
auto vmbr0
iface vmbr0 inet static
    address 192.168.0.1/24
    gateway 192.168.0.254
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0
    mtu 1500

source /etc/network/interfaces.d/*
node2: /etc/network/interfaces
auto vmbr0
iface vmbr0 inet static
    address 192.168.0.2/24
    gateway 192.168.0.254
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0
    mtu 1500

source /etc/network/interfaces.d/*
node3: /etc/network/interfaces
auto vmbr0
iface vmbr0 inet static
    address 192.168.0.3/24
    gateway 192.168.0.254
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0
    mtu 1500

source /etc/network/interfaces.d/*
Create a VXLAN zone named ‘myvxlanzone’. Use a lower MTU (1450) so that the extra 50 bytes of VXLAN encapsulation overhead still fit within the physical 1500-byte MTU. Add all previously configured node IPs to the peer address list.
id: myvxlanzone
peers address list: 192.168.0.1,192.168.0.2,192.168.0.3
mtu: 1450
Create a VNet named ‘myvnet1’ using the VXLAN zone ‘myvxlanzone’ created previously.
id: myvnet1
zone: myvxlanzone
tag: 100000
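Sketched in the SDN configuration files, assuming the standard /etc/pve/sdn/ layout (the web interface maintains these; key names may vary by version), the zone and VNet above would look roughly like:

```
# /etc/pve/sdn/zones.cfg (sketch)
vxlan: myvxlanzone
    peers 192.168.0.1,192.168.0.2,192.168.0.3
    mtu 1450

# /etc/pve/sdn/vnets.cfg (sketch)
vnet: myvnet1
    zone myvxlanzone
    tag 100000
```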
Apply the configuration on the main SDN web-interface panel to create the VNet locally on each node.
Create a Debian-based Virtual Machine (vm1) on node1, with a vNIC on ‘myvnet1’.
Use the following network configuration for this VM; note the lower MTU.

auto eth0
iface eth0 inet static
    address 10.0.3.100/24
    mtu 1450
Create a second Virtual Machine (vm2) on node3, with a vNIC on the same VNet ‘myvnet1’ as vm1.
Use the following network configuration for this VM:
auto eth0
iface eth0 inet static
    address 10.0.3.101/24
    mtu 1450
Then, you should be able to ping between vm1 and vm2.
node1: /etc/network/interfaces
auto vmbr0
iface vmbr0 inet static
    address 192.168.0.1/24
    gateway 192.168.0.254
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0
    mtu 1500

source /etc/network/interfaces.d/*
node2: /etc/network/interfaces
auto vmbr0
iface vmbr0 inet static
    address 192.168.0.2/24
    gateway 192.168.0.254
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0
    mtu 1500

source /etc/network/interfaces.d/*
node3: /etc/network/interfaces
auto vmbr0
iface vmbr0 inet static
    address 192.168.0.3/24
    gateway 192.168.0.254
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0
    mtu 1500

source /etc/network/interfaces.d/*
Create an EVPN controller, using a private ASN and the node addresses above as peers.
id: myevpnctl
asn: 65000
peers: 192.168.0.1,192.168.0.2,192.168.0.3
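Sketched in the SDN configuration, assuming the standard /etc/pve/sdn/ layout (the web interface maintains this file; key names may vary by version), the controller entry would look roughly like:

```
# /etc/pve/sdn/controllers.cfg (sketch)
evpn: myevpnctl
    asn 65000
    peers 192.168.0.1,192.168.0.2,192.168.0.3
```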
Create an EVPN zone named ‘myevpnzone’ using the previously created EVPN controller. Define node1 and node2 as exit nodes.
id: myevpnzone
vrf vxlan tag: 10000
controller: myevpnctl
mtu: 1450
vnet mac address: 32:F4:05:FE:6C:0A
exitnodes: node1,node2
Create the first VNet named ‘myvnet1’ using the EVPN zone ‘myevpnzone’.
id: myvnet1
zone: myevpnzone
tag: 11000
Create a subnet 10.0.1.0/24 with 10.0.1.1 as gateway on ‘myvnet1’.

subnet: 10.0.1.0/24
gateway: 10.0.1.1
Create the second VNet, named ‘myvnet2’, using the same EVPN zone ‘myevpnzone’ and a different IPv4 CIDR network.

id: myvnet2
zone: myevpnzone
tag: 12000
Create a different subnet 10.0.2.0/24 with 10.0.2.1 as gateway on ‘myvnet2’.

subnet: 10.0.2.0/24
gateway: 10.0.2.1
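The zone, VNet, and subnet definitions above could be sketched in the SDN configuration files as follows; this assumes the standard /etc/pve/sdn/ layout and a subnet-ID naming scheme of zone plus prefix, both of which may vary by version (the web interface generates all of this for you):

```
# /etc/pve/sdn/zones.cfg (sketch)
evpn: myevpnzone
    controller myevpnctl
    vrf-vxlan 10000
    exitnodes node1,node2
    mtu 1450

# /etc/pve/sdn/vnets.cfg (sketch)
vnet: myvnet1
    zone myevpnzone
    tag 11000

vnet: myvnet2
    zone myevpnzone
    tag 12000

# /etc/pve/sdn/subnets.cfg (sketch)
subnet: myevpnzone-10.0.1.0-24
    vnet myvnet1
    gateway 10.0.1.1

subnet: myevpnzone-10.0.2.0-24
    vnet myvnet2
    gateway 10.0.2.1
```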
Apply the configuration on the main SDN web-interface panel to create the VNets locally on each node and generate the FRR configuration.
Create a Debian-based Virtual Machine (vm1) on node1, with a vNIC on ‘myvnet1’.
Use the following network configuration for this VM:
auto eth0
iface eth0 inet static
    address 10.0.1.100/24
    gateway 10.0.1.1 # gateway IP of myvnet1
    mtu 1450
Create a second Virtual Machine (vm2) on node2, with a vNIC on the other VNet ‘myvnet2’.
Use the following network configuration for this VM:
auto eth0
iface eth0 inet static
    address 10.0.2.100/24
    gateway 10.0.2.1 # gateway IP of myvnet2
    mtu 1450
Then, you should be able to ping vm2 from vm1, and vm1 from vm2.
If you ping an external IP from vm2 on node3 (a non-exit node), the packet goes to the configured ‘myvnet2’ gateway, is routed to one of the exit nodes (node1 or node2), and leaves from there via that node's default gateway.
Of course, you need to add reverse routes for the 10.0.1.0/24 and 10.0.2.0/24 networks, pointing to node1 and node2, on your external gateway, so that the public network can reply.
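As a sketch, assuming the external gateway at 192.168.0.254 is a Linux router (the commands below are illustrative; adjust them for your actual platform), the reverse routes could look like:

```
# Hypothetical static reverse routes on the external gateway, using node1
# (192.168.0.1) as next hop; point at node2 instead, or add both, as needed:
ip route add 10.0.1.0/24 via 192.168.0.1
ip route add 10.0.2.0/24 via 192.168.0.1
```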
If you have configured an external BGP router, the BGP-EVPN routes (10.0.1.0/24 and 10.0.2.0/24 in this example) will be announced dynamically.