Need to add VLANs to my Netplan config with KVM bridge and bonding

I have made this YAML for a 4-NIC bond with a static IP, but I need to add four VLANs to the bond. The VLANs are 77, 88, 99, and 333; could someone help me with this config? I also use the configuration for a KVM bridge with br0 and need that to keep working.

The current YAML works; I just need to add the VLANs.

Thank you for your help in advance.

https://gist.githubusercontent.com/R…nager-all.yaml

network:
    bridges:
        br0:
            addresses:
            - 10.0.77.2/24
            dhcp4: false
            gateway4: 10.0.77.1
            nameservers:
                addresses:
                - 10.0.77.1
                - 8.8.8.8
            interfaces:
                - bond0
    bonds:
        bond0:
            interfaces:
            - eno1
            - eno2
            - eno3
            - eno4
            parameters:
                mode: balance-xor
    ethernets:
        eno1:
            addresses: []
            dhcp4: false
            dhcp6: false
        eno2:
            addresses: []
            dhcp4: false
            dhcp6: false
        eno3:
            addresses: []
            dhcp4: false
            dhcp6: false
        eno4:
            addresses: []
            dhcp4: false
            dhcp6: false
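
For what it's worth, netplan expresses VLANs in a vlans: stanza that sits under network: alongside bridges:, bonds:, and ethernets:, with each VLAN referencing its parent device via link:. A minimal sketch of the four VLANs on top of bond0 might look like the fragment below; the names bond0.77, bond0.88, etc. are only illustrative, and whether each VLAN also needs its own bridge for the KVM guests depends on the setup:

    vlans:
        bond0.77:
            id: 77
            link: bond0
        bond0.88:
            id: 88
            link: bond0
        bond0.99:
            id: 99
            link: bond0
        bond0.333:
            id: 333
            link: bond0

If the guests need tagged access rather than the untagged bond0/br0 network, the usual KVM pattern is to give each VLAN interface its own bridge (for example a br77 whose interfaces list contains bond0.77) and attach guests to that bridge.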

Ubuntu 18.04.2 Ethernet Bonding

This question is quite different from the other guides and documentation out there.

I don’t see any point in bonding two NICs that are connected to the same source because if the source goes down, then bond0 will be rendered useless.

What I'm trying to do is bond two NICs, each connected to a different source (ISP), so obviously each of them will have a different IP address, a different gateway, and so on.

My question is: almost every guide on the internet says that bond0 needs to be static and that I need to set an IP address for it as well as a gateway. How on earth am I supposed to set a static IP address with a static gateway for two different sources?

Can someone please explain this?

Let's say NIC1 has: IP_ADDRESS=192.168.1.3, GATEWAY=192.168.1.1

NIC2 has: IP_ADDRESS=192.168.2.14, GATEWAY=192.168.2.1

bond0 should be static based on what? What am I supposed to write in /etc/network/interfaces?
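
For concreteness, this is roughly what the two NICs look like when configured individually in /etc/network/interfaces, using the example addresses above (eth0/eth1 are placeholder interface names and the /24 netmasks are assumptions):

# NIC1, ISP 1 (placeholder name eth0)
auto eth0
iface eth0 inet static
    address 192.168.1.3
    netmask 255.255.255.0
    gateway 192.168.1.1

# NIC2, ISP 2 (placeholder name eth1)
auto eth1
iface eth1 inet static
    address 192.168.2.14
    netmask 255.255.255.0
    # a second "gateway" line here would try to install a second
    # default route, which is exactly the conflict the question is about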

Bonding different ISPs in a gateway in Ubuntu 18.04

I am using four ISP connections in a gateway and would like to use them in mode 0 (round-robin).

All of the ISPs have static IPs. I have been trying to bond the ISP links but have been unable to succeed. I have tried configuring round-robin mode with both netplan and /etc/network/interfaces, but failed.

I have used the following code to connect at least two of the ISPs:

auto lo
iface lo inet loopback

auto enp4s5f0
iface enp4s5f0 inet manual
    bond-master bond0

auto enp4s5f1
iface enp4s5f1 inet manual
    bond-master bond0

auto bond0
iface bond0 inet static
    address 192.168.2.201
    netmask 255.255.255.0
    network 192.168.2.0
    gateway 192.168.2.1
    # mode 0 (round-robin) is named balance-rr in the bonding driver
    bond-mode balance-rr
    bond-miimon 100
    bond-slaves none
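
For comparison, the same two-port round-robin bond expressed in netplan might look roughly like the sketch below; it simply restates the interface names and addressing from the snippet above and has not been tested against this setup:

network:
    version: 2
    renderer: networkd
    ethernets:
        enp4s5f0:
            dhcp4: false
        enp4s5f1:
            dhcp4: false
    bonds:
        bond0:
            interfaces: [enp4s5f0, enp4s5f1]
            addresses: [192.168.2.201/24]
            gateway4: 192.168.2.1
            parameters:
                mode: balance-rr
                mii-monitor-interval: 100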

Your help is much appreciated.

How to configure DRBL Client Bonding (High Availability)

Good morning

I'm looking to create a diskless boot environment with high availability. At present, I have a DRBL server configured with two enslaved NICs (let's call them eth0 and eth1, enslaved to bond0) in active_backup mode. Each is connected to a different switch in a stack, and both have an external connection to the same LAN.

The problem I'm having is getting this set up on the DRBL client machines. /etc/network/interfaces on the clients contains a warning that DRBL manages these settings itself and that changing the file will do nothing. How can I configure the clients similarly to the server, that is, with eth0 and eth1 bonded in active_backup and connected to two different switches in the stack? Alternatively, is there a better way to achieve uninterrupted operation between the DRBL server and the clients?

bonding network interface speed changed to 0

I have a SUSE 12 system with an Intel 82599ES NIC (with two 10-Gigabit SFI/SFP+ ports); the two ports are bonded with LACP. Recently the system's network became unreachable, and the outage lasted 3 minutes.

Looking through the message log, I noticed that when the interface goes down, we get the following messages:

2019-03-03T09:23:10.491731+08:00 oradb12 kernel: [9519285.192448] ixgbe 0000:02:00.1 eth5: initiating reset due to tx timeout
2019-03-03T09:23:10.491754+08:00 oradb12 kernel: [9519285.192464] ixgbe 0000:02:00.1 eth5: Reset adapter
2019-03-03T09:23:16.995739+08:00 oradb12 kernel: [9519291.696952] ixgbe 0000:02:00.1 eth5: speed changed to 0 for port eth5
2019-03-03T09:23:16.995763+08:00 oradb12 kernel: [9519291.697438] bond1: link status definitely down for interface eth5, disabling it

The system kernel and OS versions are as follows:

Linux oradb12 4.4.74-92.35-default #1 SMP Mon Aug 7 18:24:48 UTC 2017 (c0fdc47) x86_64 x86_64 x86_64 GNU/Linux

oradb12:/etc/sysconfig/network # cat /etc/SuSE-release
SUSE Linux Enterprise Server 12 (x86_64)
VERSION = 12
PATCHLEVEL = 2
# This file is deprecated and will be removed in a future service pack or release.
# Please check /etc/os-release for details about this release.

The bonding network interface configuration is as follows:

oradb12:/etc/sysconfig/network # cat ifcfg-bond1
BOOTPROTO='static'
STARTMODE='onboot'
BONDING_MASTER='yes'
BONDING_SLAVE0='eth3'
BONDING_SLAVE1='eth5'
IPADDR=10.252.128.2
GATEWAY=10.252.128.1
NETMASK=255.255.255.0
USERCONTROL='no'
BONDING_MODULE_OPTS='mode=4 miimon=100 use_carrier=1'

oradb12:/etc/sysconfig/network # cat ifcfg-eth3
NAME='bond1-slave-eth3'
TYPE='Ethernet'
BOOTPROTO='none'
STARTMODE='onboot'
MASTER='bond1'
SLAVE='yes'
USERCONTROL='no'

oradb12:/etc/sysconfig/network # cat ifcfg-eth5
NAME='bond1-slave-eth5'
TYPE='Ethernet'
BOOTPROTO='none'
STARTMODE='onboot'
MASTER='bond1'
SLAVE='yes'
USERCONTROL='no'

The bonding interface status is as follows:

oradb12:/etc/sysconfig/network # cat /proc/net/bonding/bond1
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: IEEE 802.3ad Dynamic link aggregation
Transmit Hash Policy: layer2 (0)
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

802.3ad info
LACP rate: slow
Min links: 0
Aggregator selection policy (ad_select): stable
System priority: 65535
System MAC address: 48:fd:8e:c9:21:64
Active Aggregator Info:
        Aggregator ID: 1
        Number of ports: 2
        Actor Key: 13
        Partner Key: 10273
        Partner Mac Address: 74:4a:a4:08:ea:14

Slave Interface: eth3
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 1
Permanent HW addr: 48:fd:8e:c9:21:64
Slave queue ID: 0
Aggregator ID: 1
Actor Churn State: none
Partner Churn State: none
Actor Churned Count: 0
Partner Churned Count: 0
details actor lacp pdu:
    system priority: 65535
    system mac address: 48:fd:8e:c9:21:64
    port key: 13
    port priority: 255
    port number: 1
    port state: 61
details partner lacp pdu:
    system priority: 32768
    system mac address: 74:4a:a4:08:ea:14
    oper key: 10273
    port priority: 32768
    port number: 33
    port state: 61

Slave Interface: eth5
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 24
Permanent HW addr: 48:fd:8e:c9:21:65
Slave queue ID: 0
Aggregator ID: 1
Actor Churn State: none
Partner Churn State: none
Actor Churned Count: 0
Partner Churned Count: 0
details actor lacp pdu:
    system priority: 65535
    system mac address: 48:fd:8e:c9:21:64
    port key: 13
    port priority: 255
    port number: 2
    port state: 61
details partner lacp pdu:
    system priority: 32768
    system mac address: 74:4a:a4:08:ea:14
    oper key: 10273
    port priority: 32768
    port number: 87
    port state: 61

The network interface driver information is as follows:

oradb12:/etc/sysconfig/network # ethtool -i eth3
driver: ixgbe
version: 4.2.1-k
firmware-version: 0x800003df
expansion-rom-version:
bus-info: 0000:02:00.0
supports-statistics: yes
supports-test: yes
supports-eeprom-access: yes
supports-register-dump: yes
supports-priv-flags: no

oradb12:/etc/sysconfig/network # ethtool -i eth5
driver: ixgbe
version: 4.2.1-k
firmware-version: 0x800003df
expansion-rom-version:
bus-info: 0000:02:00.1
supports-statistics: yes
supports-test: yes
supports-eeprom-access: yes
supports-register-dump: yes
supports-priv-flags: no

When the network interface goes down, restarting the networking service on the server by running service networking restart seems to remedy the issue.

I was wondering if anyone has experienced similar issues before and/or has any suggestions for debugging the cause of something like this?
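
For reference, a few commands that could help capture more detail the next time the link flaps (a suggestion, not part of the original report; eth5 and bond1 are the names used above):

# per-slave MII status and link-failure counters
cat /proc/net/bonding/bond1

# negotiated speed/duplex and link detection for the flapping port
ethtool eth5

# driver statistics that often accompany tx timeouts
ethtool -S eth5 | grep -iE 'err|drop|timeout'

# kernel messages around the reset, with human-readable timestamps
dmesg -T | grep -iE 'ixgbe|bond1'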

NIC Bonding Doesn’t Drop IP Addresses

I have a pretty standard 4 NIC bonding config:

/etc/network/interfaces:

# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

source /etc/network/interfaces.d/*

# The loopback network interface
auto lo
iface lo inet loopback

# eth0
auto eth0
iface eth0 inet manual
    bond-master bond0

# eth1
auto eth1
iface eth1 inet manual
    bond-master bond0

# eth2
auto eth2
iface eth2 inet manual
    bond-master bond0

# eth3
auto eth3
iface eth3 inet manual
    bond-master bond0

# bond
auto bond0
iface bond0 inet static
        address 192.168.1.145
        netmask 255.255.255.0
        gateway 192.168.1.1
        dns-nameservers 8.8.8.8 8.8.4.4

        # bond 5 settings
        bond-mode 5
        bond-miimon 100
        bond-slaves none
        bond-downdelay 0
        bond-updelay 0

But upon reboot, the network is down and all the slave interfaces still have IP addresses assigned, despite working as slaves:

ifconfig:

bond0: flags=5187<UP,BROADCAST,RUNNING,MASTER,MULTICAST>  mtu 1500
        inet 192.168.1.145  netmask 255.255.255.0  broadcast 192.168.1.255
        inet6 fe80::215:5dff:fe3c:6a03  prefixlen 64  scopeid 0x20<link>
        ether 00:15:5d:3c:6a:03  txqueuelen 1000  (Ethernet)
        RX packets 33470  bytes 16441811 (16.4 MB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 5570  bytes 2466881 (2.4 MB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

eth0: flags=6211<UP,BROADCAST,RUNNING,SLAVE,MULTICAST>  mtu 1500
        inet 192.168.1.162  netmask 255.255.255.0  broadcast 192.168.1.255
        inet6 fe80::1284:e8d5:9b0:1f11  prefixlen 64  scopeid 0x20<link>
        ether 00:15:5d:3c:6a:01  txqueuelen 1000  (Ethernet)
        RX packets 6022  bytes 681072 (681.0 KB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 1054  bytes 53372 (53.3 KB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

eth1: flags=6211<UP,BROADCAST,RUNNING,SLAVE,MULTICAST>  mtu 1500
        inet 192.168.1.163  netmask 255.255.255.0  broadcast 192.168.1.255
        inet6 fe80::3f0c:a610:4c6e:57da  prefixlen 64  scopeid 0x20<link>
        ether 00:15:5d:3c:6a:02  txqueuelen 1000  (Ethernet)
        RX packets 5404  bytes 519892 (519.8 KB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 335  bytes 25284 (25.2 KB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

eth2: flags=6211<UP,BROADCAST,RUNNING,SLAVE,MULTICAST>  mtu 1500
        inet 192.168.1.164  netmask 255.255.255.0  broadcast 192.168.1.255
        inet6 fe80::725d:2a46:a4c6:c287  prefixlen 64  scopeid 0x20<link>
        ether 00:15:5d:3c:6a:03  txqueuelen 1000  (Ethernet)
        RX packets 16645  bytes 14721805 (14.7 MB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 3908  bytes 2368250 (2.3 MB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

eth3: flags=6211<UP,BROADCAST,RUNNING,SLAVE,MULTICAST>  mtu 1500
        inet 192.168.1.165  netmask 255.255.255.0  broadcast 192.168.1.255
        inet6 fe80::345:b686:a3df:4b0c  prefixlen 64  scopeid 0x20<link>
        ether 00:15:5d:3c:6a:04  txqueuelen 1000  (Ethernet)
        RX packets 5399  bytes 519042 (519.0 KB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 273  bytes 19975 (19.9 KB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 1879  bytes 432110 (432.1 KB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 1879  bytes 432110 (432.1 KB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

If I flush the IP addresses from the interfaces using:

sudo ip addr flush eth0 && sudo ip addr flush eth1 && sudo ip addr flush eth2 && sudo ip addr flush eth3 

Then everything starts working okay. Any ideas why, when the NICs are bonded, the slaves aren't dropping their IP addresses?
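
As a side note, one way to check whether something other than ifupdown (for example NetworkManager or systemd-networkd) is also configuring the slave NICs would be something like the following; this is only a suggestion and not part of the original post:

# which network services are active?
systemctl is-active networking NetworkManager systemd-networkd

# is NetworkManager claiming eth0-eth3? (only meaningful if it is installed)
nmcli device status | grep -E '^eth[0-3]'

# inspect a slave's addresses and their lifetimes
ip -o addr show dev eth0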

Configuring Bonding of Two 10Gb NICs Using Netplan

I'm running Ubuntu 18.04.2 LTS and I want to bond two 10Gb NICs using LACP up to a Cisco 3850 switch. This is the configuration I'm using, but it is not working. I'm editing the YAML file within Netplan:

YAML File Configuration

network:
    version: 2
    renderer: networkd
    bonds:
        bond0:
            interfaces:
            - ens1f0
            - ens1f1
            addresses: [192.168.1.49/24]
            gateway4: 192.168.1.1
            nameservers:
                addresses: [8.8.8.8, 8.8.4.4]
            parameters:
                mode: 802.3ad
                lacp-rate: fast
                primary: ens1f0
                mii-monitor-interval: 100
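
For comparison, the br0/bond0 example at the top of this thread also declares the member NICs under an ethernets: stanza. Applying the same pattern here, and leaving out primary (which the kernel bonding documentation says only applies to active-backup-style modes), gives a sketch like the following; it is untested and only restates the values above:

network:
    version: 2
    renderer: networkd
    ethernets:
        ens1f0:
            dhcp4: false
        ens1f1:
            dhcp4: false
    bonds:
        bond0:
            interfaces: [ens1f0, ens1f1]
            addresses: [192.168.1.49/24]
            gateway4: 192.168.1.1
            nameservers:
                addresses: [8.8.8.8, 8.8.4.4]
            parameters:
                mode: 802.3ad
                lacp-rate: fast
                mii-monitor-interval: 100

For 802.3ad to come up, the two switch ports on the Cisco side would also need to be members of a port-channel negotiating LACP (channel-group ... mode active).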