Network configuration can be done either via the GUI, or by manually editing the file /etc/network/interfaces, which contains the whole network configuration. The interfaces(5) manual page contains the complete format description. All Proxmox VE tools try hard to preserve direct user modifications, but using the GUI is still preferable, because it protects you from errors.
Once the network is configured, you can use the traditional Debian tools ifup and ifdown to bring interfaces up and down.
Proxmox VE does not write changes directly to /etc/network/interfaces. Instead, it writes them to a temporary file called /etc/network/interfaces.new, so that you can make many related changes at once. This also allows you to ensure your changes are correct before applying them, as a wrong network configuration may render a node inaccessible.
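Before applying staged changes, it can be useful to review what is pending. A minimal sketch (the two file names are the ones mentioned above; any diff tool works):
# show the difference between the active and the staged configuration
diff -u /etc/network/interfaces /etc/network/interfaces.new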
With the default ifupdown network managing package, you need to reboot to commit any pending network changes. Most of the time, the basic Proxmox VE network setup is stable and does not change often, so rebooting should not be required often.
With the optional ifupdown2 network managing package, you can also reload the network configuration live, without requiring a reboot. Since Proxmox VE 6.1 you can apply pending network changes over the web interface, using the Apply Configuration button in the Network panel of a node.
To install ifupdown2, first ensure you have the latest Proxmox VE updates installed. Note that installing ifupdown2 will remove ifupdown; because the removal scripts of ifupdown before version 0.8.35+pve1 have an issue where the network is fully stopped on removal [1], you must ensure that you have an up-to-date ifupdown package version beforehand.
For the installation itself, you can then simply run:
apt install ifupdown2
With that, you're all set. You can switch back to the ifupdown variant at any time if you run into issues.
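With ifupdown2 installed, pending changes written to /etc/network/interfaces can also be reloaded from the command line instead of via the GUI button; a minimal sketch, assuming the configuration has already been updated:
# reload all interfaces live, without a reboot (provided by ifupdown2)
ifreload -a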
We currently use the following naming conventions for device names:
Ethernet devices: en*, systemd network interface names. This naming scheme is used for new Proxmox VE installations since version 5.0.
Ethernet devices: eth[N], where 0 ≤ N (eth0, eth1, …). This naming scheme is used for Proxmox VE hosts which were installed before the 5.0 release. When upgrading to 5.0, the names are kept as-is.
Bridge names: vmbr[N], where 0 ≤ N ≤ 4094 (vmbr0 - vmbr4094)
Bonds: bond[N], where 0 ≤ N (bond0, bond1, …)
VLANs: Simply add the VLAN number to the device name, separated by a period (eno1.50, bond1.30)
This makes it easier to debug network problems, because the device name implies the device type.
Systemd uses the two-character prefix en for Ethernet network devices. The next characters depend on the device driver and on which schema matches first.
The most common patterns are:
eno[N] — on-board devices (for example eno1)
enp[bus]s[slot] — devices identified by their PCI bus and slot location (for example enp3s0)
ens[N] — devices identified by their hotplug slot index (for example ens1)
enx[MAC] — devices named after their MAC address
For more information see Predictable Network Interface Names.
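To check which names were assigned to the network devices on a given host, you can list them with iproute2; this is just an illustration, the output depends entirely on your hardware:
# brief overview of all network interfaces and their state
ip -br link show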
Depending on your current network organization and your resources you can choose either a bridged, routed, or masquerading networking setup.
If your Proxmox VE server is on an internal LAN with an external gateway that provides access to the internet, the Bridged model makes the most sense, and this is also the default mode on new Proxmox VE installations. Each of your guest systems will have a virtual interface attached to the Proxmox VE bridge. This is similar in effect to having the guest network card directly connected to a new switch on your LAN, with the Proxmox VE host playing the role of the switch.
If your Proxmox VE server is hosted at a provider that gives you public IP ranges for your guests, you can use either a Bridged or Routed model, depending on what your provider allows.
If your server has only a single public IP address, the only way to get outgoing network access for your guest systems is to use Masquerading. For incoming network access to your guests, you will need to configure Port Forwarding.
For further flexibility, you can configure VLANs (IEEE 802.1q) and network bonding, also known as "link aggregation". That way it is possible to build complex and flexible virtual networks.
Bridges are like physical network switches implemented in software. All virtual guests can share a single bridge, or you can create multiple bridges to separate network domains. Each host can have up to 4094 bridges.
The installation program creates a single bridge named vmbr0, which is connected to the first Ethernet card. The corresponding configuration in /etc/network/interfaces might look like this:
auto lo
iface lo inet loopback

iface eno1 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.168.10.2/24
        gateway 192.168.10.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
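As noted above, you can also create additional bridges to separate network domains. A purely illustrative sketch of a second, host-only bridge (the name vmbr1 and the address are assumptions, not part of the default installation):
auto vmbr1
iface vmbr1 inet static
        address 10.20.30.1/24
        bridge-ports none
        bridge-stp off
        bridge-fd 0
Guests attached to such a port-less bridge can only reach each other and the host, while guests on vmbr0 behave as described next.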
Virtual machines behave as if they were directly connected to the physical network. The network, in turn, sees each virtual machine as having its own MAC, even though there is only one network cable connecting all of these VMs to the network.
Most hosting providers do not support the above setup. For security reasons, they disable networking as soon as they detect multiple MAC addresses on a single interface.
Some providers allow you to register additional MACs through their management interface. This avoids the problem, but can be clumsy to configure because you need to register a MAC for each of your VMs.
You can avoid the problem by “routing” all traffic via a single interface. This makes sure that all network packets use the same MAC address.
A common scenario is that you have a public IP (assume 198.51.100.5 for this example), and an additional IP block for your VMs (203.0.113.16/28). We recommend the following setup for such situations:
auto lo
iface lo inet loopback

auto eno0
iface eno0 inet static
        address 198.51.100.5/29
        gateway 198.51.100.1
        post-up echo 1 > /proc/sys/net/ipv4/ip_forward
        post-up echo 1 > /proc/sys/net/ipv4/conf/eno0/proxy_arp

auto vmbr0
iface vmbr0 inet static
        address 203.0.113.17/28
        bridge-ports none
        bridge-stp off
        bridge-fd 0
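Inside a guest attached to vmbr0 you would then configure one of the addresses from the 203.0.113.16/28 block and use the bridge address as gateway. A minimal sketch for a Debian-based guest (the address 203.0.113.18 is just an assumed example from that block):
auto eth0
iface eth0 inet static
        address 203.0.113.18/28
        gateway 203.0.113.17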
Masquerading allows guests that have only a private IP address to access the network by using the host IP address for outgoing traffic. Each outgoing packet is rewritten by iptables to appear as originating from the host, and responses are rewritten accordingly to be routed back to the original sender.
auto lo
iface lo inet loopback

auto eno1
#real IP address
iface eno1 inet static
        address 198.51.100.5/24
        gateway 198.51.100.1

auto vmbr0
#private sub network
iface vmbr0 inet static
        address 10.10.10.1/24
        bridge-ports none
        bridge-stp off
        bridge-fd 0

        post-up   echo 1 > /proc/sys/net/ipv4/ip_forward
        post-up   iptables -t nat -A POSTROUTING -s '10.10.10.0/24' -o eno1 -j MASQUERADE
        post-down iptables -t nat -D POSTROUTING -s '10.10.10.0/24' -o eno1 -j MASQUERADE
In some masquerade setups with the firewall enabled, conntrack zones might be needed for outgoing connections. Otherwise the firewall could block outgoing connections, since they will prefer the POSTROUTING of the VM bridge (and not MASQUERADE).
Adding these lines to /etc/network/interfaces can fix this problem:
post-up   iptables -t raw -I PREROUTING -i fwbr+ -j CT --zone 1
post-down iptables -t raw -D PREROUTING -i fwbr+ -j CT --zone 1
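After the interface has been brought up again, you can verify that the zone rule is in place; this is just a convenience check using standard iptables options:
# list the raw PREROUTING chain and look for the CT --zone 1 rule
iptables -t raw -L PREROUTING -n --line-numbers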
For more information about this, refer to the following links:
Patch on netdev-list introducing conntrack zones
Blog post with a good explanation by using TRACE in the raw table
Bonding (also called NIC teaming or Link Aggregation) is a technique for binding multiple NICs to a single network device. It makes it possible to achieve different goals, such as making the network fault-tolerant, increasing the performance, or both together.
High-speed hardware like Fibre Channel and the associated switching hardware can be quite expensive. By doing link aggregation, two NICs can appear as one logical interface, resulting in double speed. This is a native Linux kernel feature that is supported by most switches. If your nodes have multiple Ethernet ports, you can distribute your points of failure by running network cables to different switches, and the bonded connection will fail over to one cable or the other in case of network trouble.
Aggregated links can reduce live-migration delays and improve the speed of data replication between Proxmox VE cluster nodes.
There are 7 modes for bonding:
Round-robin (balance-rr): Transmit network packets in sequential order from the first available NIC slave through the last. This mode provides load balancing and fault tolerance.
Active-backup (active-backup): Only one NIC slave in the bond is active. A different slave becomes active if, and only if, the active slave fails. This mode provides fault tolerance.
XOR (balance-xor): Transmit network packets based on a hash of source and destination MAC addresses. This mode provides load balancing and fault tolerance.
Broadcast (broadcast): Transmit network packets on all slave interfaces. This mode provides fault tolerance.
IEEE 802.3ad Dynamic link aggregation (802.3ad, LACP): Creates aggregation groups that share the same speed and duplex settings. Requires a switch that supports IEEE 802.3ad dynamic link aggregation.
Adaptive transmit load balancing (balance-tlb): Outgoing traffic is distributed according to the current load on each slave; incoming traffic is received by the currently designated slave. Does not require any special switch support.
Adaptive load balancing (balance-alb): Includes balance-tlb plus receive load balancing for IPv4 traffic via ARP negotiation. Does not require any special switch support.
If your switch supports the LACP (IEEE 802.3ad) protocol, then we recommend using the corresponding bonding mode (802.3ad). Otherwise you should generally use the active-backup mode. If you intend to run your cluster network on the bonding interfaces, then you have to use active-passive mode on the bonding interfaces; other modes are unsupported.
The following bond configuration can be used as a distributed/shared storage network. The benefit is that you get more speed and the network becomes fault-tolerant.
Example: Use bond with fixed IP address.
auto lo
iface lo inet loopback

iface eno1 inet manual

iface eno2 inet manual

iface eno3 inet manual

auto bond0
iface bond0 inet static
        bond-slaves eno1 eno2
        address 192.168.1.2/24
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer2+3

auto vmbr0
iface vmbr0 inet static
        address 10.10.10.2/24
        gateway 10.10.10.1
        bridge-ports eno3
        bridge-stp off
        bridge-fd 0
Another possibility is to use the bond directly as the bridge port. This can be used to make the guest network fault-tolerant.
Example: Use a bond as bridge port.
auto lo
iface lo inet loopback

iface eno1 inet manual

iface eno2 inet manual

auto bond0
iface bond0 inet manual
        bond-slaves eno1 eno2
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer2+3

auto vmbr0
iface vmbr0 inet static
        address 10.10.10.2/24
        gateway 10.10.10.1
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0
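One way to verify that a bond came up in the expected mode and that all slaves are active is to read the bonding driver's status file in procfs (bond0 is the name used in the examples above):
# show mode, link status and LACP details of the bond
cat /proc/net/bonding/bond0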
A virtual LAN (VLAN) is a broadcast domain that is partitioned and isolated in the network at layer two. This makes it possible to have multiple networks (up to 4096) in a physical network, each independent of the others.
Each VLAN network is identified by a number, often called a tag. Network packets are then tagged to identify which virtual network they belong to.
Proxmox VE supports this setup out of the box. You can specify the VLAN tag when you create a VM. The VLAN tag is part of the guest network configuration. The networking layer supports different modes to implement VLANs, depending on the bridge configuration:
VLAN awareness on the Linux bridge: In this case, each guest's virtual network card is assigned to a VLAN tag, which is transparently supported by the Linux bridge.
"traditional" VLAN on the Linux bridge: In contrast to the VLAN awareness method, this method is not transparent and creates a VLAN device with an associated bridge for each VLAN.
Guest configured VLAN: VLANs are assigned inside the guest. In this case, the setup is done completely inside the guest and cannot be influenced from the outside.
VLAN tags can also be used on the host itself, for example to allow host communication with an isolated network. It is possible to apply VLAN tags to any network device (NIC, bond, bridge). In general, you should configure the VLAN on the interface with the least abstraction layers between itself and the physical NIC.
For example, consider a default configuration where you want to place the host management address on a separate VLAN.
Example: Use VLAN 5 for the Proxmox VE management IP with traditional Linux bridge.
auto lo
iface lo inet loopback

iface eno1 inet manual

iface eno1.5 inet manual

auto vmbr0v5
iface vmbr0v5 inet static
        address 10.10.10.2/24
        gateway 10.10.10.1
        bridge-ports eno1.5
        bridge-stp off
        bridge-fd 0

auto vmbr0
iface vmbr0 inet manual
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
Example: Use VLAN 5 for the Proxmox VE management IP with VLAN aware Linux bridge.
auto lo
iface lo inet loopback

iface eno1 inet manual

auto vmbr0.5
iface vmbr0.5 inet static
        address 10.10.10.2/24
        gateway 10.10.10.1

auto vmbr0
iface vmbr0 inet manual
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094
The next example is the same setup but a bond is used to make this network fail-safe.
Example: Use VLAN 5 with bond0 for the Proxmox VE management IP with traditional Linux bridge.
auto lo
iface lo inet loopback

iface eno1 inet manual

iface eno2 inet manual

auto bond0
iface bond0 inet manual
        bond-slaves eno1 eno2
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer2+3

iface bond0.5 inet manual

auto vmbr0v5
iface vmbr0v5 inet static
        address 10.10.10.2/24
        gateway 10.10.10.1
        bridge-ports bond0.5
        bridge-stp off
        bridge-fd 0

auto vmbr0
iface vmbr0 inet manual
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0
Proxmox VE works correctly in all environments, irrespective of whether IPv6 is deployed or not. We recommend leaving all settings at the provided defaults.
Should you still need to disable support for IPv6 on your node, do so by creating an appropriate sysctl.conf(5) snippet file and setting the proper sysctls, for example by adding /etc/sysctl.d/disable-ipv6.conf with the content:
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
This method is preferred to disabling the loading of the IPv6 module on the kernel commandline.
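Settings from such a sysctl.d snippet are applied automatically on the next boot; to activate them immediately, you can load the file with sysctl (the file name is the example used above):
# apply the sysctl settings from the snippet right away
sysctl -p /etc/sysctl.d/disable-ipv6.conf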