veth down after reboot on Bionic Beaver (netplan / systemd)

First, please forgive my lame markdown skills.

Running Bionic Beaver:

```
host:~# lsb_release -a
Distributor ID: Ubuntu
Description:    Ubuntu 18.04.2 LTS
Release:        18.04
Codename:       bionic
```

Can't seem to figure out the best way to make sure my veth devices automatically "link up" after a reboot.

My use case for the veths: I use them to attach local network bridges to containers. I do this because attaching a Docker macvlan directly to the bridge prevents communication between the containers and their host.

Now that that is out of the way:
I've tried putting:

```
ip link set veth1a up
ip link set veth5a up
```

in /etc/rc.local (I had to create the file and add execute permissions), but it did nothing upon reboot.

I have the interfaces listed in netplan, but this only successfully brings up the bridge side of the veth, e.g. veth1b:

```
network:
  ethernets:
    enp131s0f0:
      dhcp4: false
    enp131s0f1:
      dhcp4: false
    enp6s0:
      dhcp4: false
    enp7s0:
      dhcp4: false
    veth1a:
      dhcp4: false
    veth1b:
      dhcp4: false
    veth5a:
      dhcp4: false
    veth5b:
      dhcp4: false
  bridges:
    br0:
      dhcp4: true
      interfaces:
        - enp6s0
        - enp7s0
        - veth1b
    br5:
      dhcp4: false
      interfaces:
        - vlan5
        - veth5b
  vlans:
    vlan5:
      id: 5
      link: br0
      dhcp4: false
  version: 2
```

I also have some systemd configs to create the veths in the first place, but I don't know how to tell systemd to "admin up" veth1a and veth5a. This is what I need help with.

```
host:~# cat /etc/systemd/network/25-veth-*
[NetDev]
Name=veth1a
Kind=veth

[Peer]
Name=veth1b

[NetDev]
Name=veth5a
Kind=veth

[Peer]
Name=veth5b
```
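From reading the systemd.network man page, my guess is that networkd only brings a link up when some .network file matches it, so I wondered whether a minimal matching file like the sketch below is what's missing (the file name is my own invention, and I'm not sure an empty [Network] section is really enough on 18.04):

```
# /etc/systemd/network/25-veth-up.network  (hypothetical file name)
[Match]
Name=veth1a veth5a

[Network]
# no addressing wanted on these ends; matching alone should
# make systemd-networkd set the links administratively up
```

Is something like this the right approach, or is there a better way to tell systemd to admin-up the "a" side of each veth pair?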

How to automate the mounting of a Samba shared directory with each reboot (fstab already created)

I have two Linux/Ubuntu boxes.

  • Box A: works as a file server, with Samba installed. It’s always switched on.
  • Box B: workstation with my office tools, which I reboot each time I need to work with it.

In Box B, I have modified ‘/etc/fstab’:

```
// /mnt/SambaFiles cifs username=tom,password=foo,rw,iocharset=utf8,file_mode=0777,dir_mode=0777 0 0
```

However, each time I reboot Box B, I have to run ‘sudo mount -a’ to mount the shared directory from Box A.

Is it possible to automate this so I don't have to mount it manually after every reboot? Thank you very much.
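My suspicion is that the mount is attempted before the network is up at boot. I wondered whether adding network-related options to the fstab entry, along these lines, would help (a sketch only; the server path is written as in my fstab, and I haven't verified these options fix it):

```
// /mnt/SambaFiles cifs username=tom,password=foo,rw,iocharset=utf8,file_mode=0777,dir_mode=0777,_netdev,x-systemd.automount 0 0
```

Here `_netdev` marks the filesystem as requiring the network, and `x-systemd.automount` defers the actual mount until first access.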

udev doesn’t load rule automatically after reboot

I have a rule that should apply to /dev/ipmi0 to change its group and mode. When I reboot, that node shows as root:root, but if I run `udevadm test` it successfully applies the rule:

```
$ cat /etc/udev/rules.d/99-ipmi-nonroot.rules
SUBSYSTEM=="ipmi", KERNEL=="ipmi0", GROUP="adm", MODE="0660"

$ ls -l /dev/ipmi*
crw------- 1 root root 244, 0 Jul  6 16:57 /dev/ipmi0

$ sudo udevadm test /sys/class/ipmi/ipmi0
....
Reading rules file: /lib/udev/rules.d/97-dmraid.rules
Reading rules file: /etc/udev/rules.d/99-ipmi-nonroot.rules
Reading rules file: /lib/udev/rules.d/99-systemd.rules
rules contain 49152 bytes tokens (4096 * 12 bytes), 14763 bytes strings
2054 strings (26612 bytes), 1334 de-duplicated (12570 bytes), 721 trie nodes used
GROUP 4 /etc/udev/rules.d/99-ipmi-nonroot.rules:1
MODE 0660 /etc/udev/rules.d/99-ipmi-nonroot.rules:1
handling device node '/dev/ipmi0', devnum=c244:0, mode=0660, uid=0, gid=4
preserve permissions /dev/ipmi0, 020660, uid=0, gid=4
preserve already existing symlink '/dev/char/244:0' to '../ipmi0'
ACTION=add
DEVNAME=/dev/ipmi0
...

$ ls -l /dev/ipmi*
crw-rw---- 1 root adm 244, 0 Jul  6 17:00 /dev/ipmi0
```

I also tried `udevadm trigger` as seen in this answer. It also changes the group to adm, but it doesn't "stick" after a reboot either. Should I be placing my rule under /lib instead of /etc? That feels wrong.
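For reference, the trigger sequence I tried was along these lines (a sketch of the commands, not my exact shell history):

```
sudo udevadm control --reload-rules
sudo udevadm trigger --subsystem-match=ipmi
```

As said, this does apply the rule immediately, but the permissions revert to root:root on the next boot.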

How to make an immediate change to the Azure agent's temporary disk format and mount point without a reboot?

I tried editing these lines in /etc/waagent.conf:

```
ResourceDisk.MountPoint=/tempo   # instead of /mnt
ResourceDisk.Format=y            # instead of n
```

and even removed this line from /etc/fstab:

```
/dev/disk/cloud/azure_resource-part1    /mnt    auto    defaults,nofail,x-systemd.requires=cloud-init.service,comment=cloudconfig       0       2
```

and restarted the service via

```
sudo service walinuxagent restart
```

but the lsblk output still shows:

```
sdb      8:16   0    4G  0 disk
└─sdb1   8:17   0    4G  0 part /mnt
```

Only after a reboot (sudo reboot) does lsblk show that the mount has changed:

```
sdb      8:16   0    4G  0 disk
└─sdb1   8:17   0    4G  0 part /tempo
```

I am using an Ubuntu 16.04 LTS image. How can I apply this mount change without a reboot?
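I assumed something along these lines might move the mount by hand (a sketch only; I haven't verified that the agent then leaves the new mount point alone):

```
sudo umount /mnt
sudo mkdir -p /tempo
sudo mount /dev/disk/cloud/azure_resource-part1 /tempo
```

Is that safe to do while walinuxagent is running, or is there a supported way to make the agent re-apply its ResourceDisk settings immediately?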

Changes to xorg.conf not taking effect after reboot

I’ve been trying to create a new xorg.conf from scratch, both as a learning experience, and as a way to solve an issue in another question. However, upon saving the file to /etc/X11/xorg.conf and rebooting, I’ve noticed that the changes are not taking effect.

Below are the contents of my xorg.conf file:

```
Section "InputDevice"
    Identifier  "Keyboard"
    Driver      "kbd"
EndSection

Section "InputDevice"
    Identifier  "Mouse"
    Driver      "mouse"
    Option      "Protocol"          "auto"
    Option      "Device"            "/dev/psaux"
    Option      "Emulate3Buttons"   "false"
    Option      "ZAxisMapping"      "4 5"
EndSection

Section "Monitor"
    Identifier  "MonitorLeft"
    Option      "LeftOf"            "MonitorCentre"
    Option      "PreferredMode"     "1680x1050"
    Option      "Position"          "0 36"
EndSection

Section "Monitor"
    Identifier  "MonitorCentre"
    Option      "Primary"           "true"
    Option      "LeftOf"            "MonitorRight"
    Option      "PreferredMode"     "1920x1080"
    Option      "Position"          "1680 0"
EndSection

Section "Monitor"
    Identifier  "MonitorRight"
    Option      "RightOf"           "MonitorCentre"
    Option      "PreferredMode"     "1440x900"
    Option      "Position"          "3600 180"
EndSection

Section "Device"
    Identifier  "VideoCard"
    Driver      "nvidia"
    VendorName  "NVIDIA Corporation"
    BoardName   "GeForce GTX 1080"
    Option      "TripleBuffer"      "true"
    Option      "Monitor-DP-3"      "MonitorLeft"
    Option      "Monitor-HDMI-0"    "MonitorCentre"
    Option      "Monitor-DVI-D-0"   "MonitorRight"
EndSection

Section "Screen"
    Identifier  "Screen"
    Device      "VideoCard"
    Monitor     "MonitorLeft"
    Monitor     "MonitorCentre"
    Monitor     "MonitorRight"
    DefaultDepth    24
    Option      "Coolbits"          "31"
EndSection
```

However, when I open nvidia-settings, the values for different options do not match what is in xorg.conf.

Doing some research, I found a possible cause of this problem: a file at ~/.config/monitors.xml might be overriding xorg.conf. If I delete this file and reboot, I see some changes to my monitor layout, but they still do not reflect the contents of xorg.conf. nvidia-settings also shows different setting values, which do match the monitor/desktop layout I actually see.

With monitors.xml deleted, upon boot, the desktop would have the left-most part on the centre monitor, then the centre part on the right monitor, and the right-most part on the left monitor.

So now the question is, what is the cause, and what is the solution? Is something else overriding xorg.conf? Are the contents of xorg.conf incorrect? Something else?

Honor 4x mobile problem - continuous reboot

I have a Huawei Honor 4x Android mobile. When I switch it on, it reboots continuously without ever finishing booting: I can see the Honor logo and then it just reboots again. The battery is fine, but I am unable to recover the phone from this state. I don't need any data; I just want to reset it completely or bring it back to a normal state. Any ideas or suggestions would be really helpful.


Can I trust Boot Repair when it says “There was an error. You can now reboot”?

My laptop used to have three operating systems on the same hard disk: Windows, Ubuntu 14, Ubuntu 18. I tried removing Windows with OS-Uninstaller and got some error messages. I tried recovering with Boot-Repair, twice, and both times ended up with this message:

An error occurred during the repair.

You can now reboot your computer.
Please do not forget to make your BIOS boot on sda1 file!

Should I really reboot my computer? Or should I rather try to repair the error?

My intuition is to rather try to fix the error, but I don’t know how to go about it. The main error in the log file that Boot-Repair produced appears to be:

```
grub-install: error: /boot/efi doesn't look like an EFI partition.
grub-install --efi-directory=/boot/efi --target=x86_64-efi --uefi-secure-boot
exit code of grub-install: 1
Error: no grub*.efi generated. Please report this message to
```

This is the same error as discussed in this thread. It is claimed there that the EFI system partition needs to be formatted as a FAT file system. GParted shows that, while my EFI system partition /dev/sda1 indeed used to be formatted as FAT, after running OS-Uninstaller and Boot-Repair it is now formatted as ext3. Could this be part of the problem?
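If the filesystem type really is the problem, I imagine the fix would be something like reformatting the ESP as FAT32 and re-running Boot-Repair (a sketch only; this would wipe /dev/sda1, and I'm not sure it is safe or sufficient in my situation):

```
sudo mkfs.fat -F32 /dev/sda1
```

Before trying anything destructive like this, I would rather understand why OS-Uninstaller/Boot-Repair changed the partition's filesystem in the first place.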

Ubuntu 18.04 and Timeshift - what happens if I reboot while Timeshift is taking a snapshot?

I have Timeshift scheduled to take snapshots at regular intervals using rsync. What happens if I reboot during a snapshot: will the incomplete snapshot be corrupt, will it be deleted, or will it re-commence after the reboot? Also, is it OK to delete the initial and earlier snapshots: are they incremental, or is each one complete? I want to be sure I have reliable restore points, but not too many. Thanks guys.