Why do NIC names always change after reboot?

I’m managing lots of servers running Ubuntu 18.04 Server, each with at least 4 NICs. The servers reboot every weekend, and I always find NIC names changed after the reboot. Sometimes it’s the onboard NIC, sometimes the PCI-e NIC, sometimes all of the NICs, sometimes none of them.
I know I can fix it by configuring /etc/udev/rules.d/70-persistent-net.rules, specifying which MAC address corresponds to which name. But I still want to know why the NIC names keep changing, and how I can fix it in an easier way (editing the rules file could cost me a whole day or worse).
Thanks in advance.
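On 18.04, one lower-effort alternative to hand-writing udev rules is to pin each name in netplan itself, matching on the MAC address. A sketch, where the names and MAC addresses are placeholders to substitute with your own:

```yaml
# /etc/netplan/01-nic-names.yaml — sketch; replace the MACs and names
network:
  version: 2
  ethernets:
    lan0:
      match:
        macaddress: "aa:bb:cc:dd:ee:01"   # placeholder MAC
      set-name: lan0
      dhcp4: true
    lan1:
      match:
        macaddress: "aa:bb:cc:dd:ee:02"   # placeholder MAC
      set-name: lan1
      dhcp4: true
```

`sudo netplan apply` picks the file up, though a reboot is the more faithful test, since renaming happens when the device is created.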

External HD mounts to a different /media location on every reboot, and partially at that

The Files graphical app consistently shows my external drive under “Other Locations”. The external drive is called dropbox.


From the terminal, I get 4 mounts for this hd.

dir /media/john
dropbox  dropbox1  dropbox2  dropbox3

And even more strange, I get different views on each of these 4 mounts.

john@john-ubuntu:~$ dir /media/john/dropbox3/transcription_db
tensorboard_logs
john@john-ubuntu:~$ dir /media/john/dropbox2/transcription_db
run_results  tensorboard_logs
john@john-ubuntu:~$ dir /media/john/dropbox1/transcription_db
crops_sets
john@john-ubuntu:~$ dir /media/john/dropbox/transcription_db
crops_sets

Most of the time, one of these contains the full list of files, but sometimes each captures only a partial view.

Is there a way to fix where the external hard drive gets mounted?
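One way to pin it down, assuming the partition has a stable filesystem UUID: give it a fixed entry in /etc/fstab so it always lands on the same mount point (the UUID below is a placeholder; `sudo blkid` prints the real one). The numbered dropbox1/dropbox2/... directories are typically leftover mount-point directories from unclean unplugs; removing the empty ones under /media/john while the drive is detached clears them.

```
# /etc/fstab — sketch; replace the UUID and filesystem type with what
# blkid reports for the drive
UUID=0000-0000  /media/john/dropbox  auto  defaults,nofail  0  2
```

The nofail option keeps boot from hanging when the drive happens to be unplugged.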

Touchpad randomly disabled upon reboot on lenovo x1c7

Just switched from Apple to Lenovo (X1 Carbon 7th gen) and installed Ubuntu 19.04 (kernel 5.0.0-25-generic). However, the touchpad is randomly disabled upon reboot (it is enabled in the system settings). After rebooting, the touchpad sometimes works and sometimes doesn’t. This happens only with the Ubuntu partition, not with Windows. Can somebody help?

Ubuntu Server 18.04 LVM shrinks on reboot

I have installed Ubuntu 18.04 Server on a thin-provisioned VMware virtual machine, and after running the updates and rebooting, the 50G disk now shows as 3.9G and is 78% used. I can resize the LVM, but I would like to know what is going on! Any ideas? Am I doing something wrong, or is there a bug in Ubuntu / VMware causing the thin-provisioned disk to shrink the LVM to the used size?
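To narrow down which layer changed, it helps to compare what LVM and the filesystem each report; the LV/VG names assumed below are the 18.04 live-server installer defaults and may differ on your VM:

```
sudo pvs    # does the physical volume still span the ~50G disk?
sudo vgs    # allocated vs. free space in the volume group
sudo lvs    # actual size of the root logical volume
df -h /     # what the mounted filesystem reports
```

If vgs shows most of the group as free, nothing actually shrank: the 18.04 live-server installer is known to create only a ~4G root LV by default, leaving the rest of the volume group unallocated, which first becomes visible as a "full" disk after updates. In that case `lvextend -l +100%FREE /dev/ubuntu-vg/ubuntu-lv` followed by `resize2fs` on the same LV grows it back.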

Thanks in advance

Ubuntu 18.04 black screen on reboot after upgrading to Linux Kernel 5.0?

My Lubuntu 18.04 (running as a VirtualBox 6.0.10 virtual machine) was just upgraded to

Linux pbox 5.0.0-23-generic #24~18.04.1-Ubuntu SMP Mon Jul 29 16:12:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux 

with an apt upgrade. After the upgrade, the VM shows a black screen with a cursor in the upper left corner, and seems to hang.

Does anyone have similar experience with the new 5.0 Linux kernel, and know how to fix the problem?

FYI, I can get to the GUI at any time by hitting Ctrl+Alt+F1 and then Ctrl+Alt+F7 without a problem. systemd-analyze blame didn’t show anything taking an excessive amount of time (3 seconds max) during reboot. But if I don’t switch manually, the black screen never disappears.
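For what it’s worth, one standard experiment for a black screen that only affects graphical boot is disabling kernel modesetting, which virtual-machine graphics drivers have tripped over across kernel upgrades; this is a guess, not a confirmed fix for 5.0:

```
# /etc/default/grub — add nomodeset to the default kernel command line
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash nomodeset"
```

followed by `sudo update-grub` and a reboot; if it changes nothing, remove it again.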

veth down after reboot on Bionic Beaver (netplan / systemd)

First, please forgive my lame markdown skills.

Running Bionic Beaver:

host:~# lsb_release -a
Distributor ID: Ubuntu
Description:    Ubuntu 18.04.2 LTS
Release:        18.04
Codename:       bionic

Can’t seem to figure out the best way to make sure my veth devices automatically “link up” after reboot.

My use case for the veth: I use them for attaching local network bridges to containers. I do this because attaching docker macvlan directly to the bridge inhibits communication between the containers and their host.

Now that that is out of the way:
I’ve tried putting:

ip link set veth1a up
ip link set veth5a up

in /etc/rc.local. I had to create the file and add execute permissions, but it did nothing upon reboot.

I have the interfaces listed in netplan, but this only successfully brings up the bridge side of the veth, e.g. veth1b:

network:
  ethernets:
    enp131s0f0:
      dhcp4: false
    enp131s0f1:
      dhcp4: false
    enp6s0:
      dhcp4: false
    enp7s0:
      dhcp4: false
    veth1a:
      dhcp4: false
    veth1b:
      dhcp4: false
    veth5a:
      dhcp4: false
    veth5b:
      dhcp4: false
  bridges:
    br0:
      dhcp4: true
      interfaces:
        - enp6s0
        - enp7s0
        - veth1b
    br5:
      dhcp4: false
      interfaces:
        - vlan5
        - veth5b
  vlans:
    vlan5:
      id: 5
      link: br0
      dhcp4: false
  version: 2

I also have some systemd configs to create the veths in the first place, but I don’t know how to tell systemd to “admin up” veth1a and veth5a. This is what I need help with.

host:~# cat /etc/systemd/network/25-veth-*
[NetDev]
Name=veth1a
Kind=veth

[Peer]
Name=veth1b

[NetDev]
Name=veth5a
Kind=veth

[Peer]
Name=veth5b
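Since the veth pairs are already created by systemd-networkd .netdev files, the missing piece may simply be a matching .network file for each a-side: networkd brings a link administratively up as soon as some .network file matches it, even with no addresses configured. A sketch (the file name is arbitrary):

```
# /etc/systemd/network/26-veth1a.network — sketch; one such file per
# veth end that should come up with no address
[Match]
Name=veth1a

[Network]
# no address needed; being matched is enough for networkd to set the link up
LinkLocalAddressing=no
```

plus an equivalent file for veth5a, then `systemctl restart systemd-networkd` or a reboot to verify.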

How to automate the mounting of a Samba shared directory with each reboot (fstab already created)

I have two Linux/Ubuntu boxes.

  • Box A: works as a file server, with Samba installed. It’s always switched on.
  • Box B: workstation with my office tools, which I reboot each time I need to work with it.

On Box B, I have modified /etc/fstab:

// /mnt/SambaFiles cifs username=tom,password=foo,rw,iocharset=utf8,file_mode=0777,dir_mode=0777 0 0 

However, each time I reboot Box B, I have to run ‘sudo mount -a’ to mount the shared directory from Box A.

Is it possible to automate this, so the share is mounted after every reboot without manual intervention? Thank you very much.
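Assuming the usual cause, namely that the mount is attempted before the network is up at boot, appending systemd-aware options to the existing fstab entry is the common fix; the //server/share path below is a placeholder for your actual share:

```
# /etc/fstab — sketch; _netdev defers the mount until the network is up,
# x-systemd.automount mounts on first access instead of at boot
//server/share /mnt/SambaFiles cifs username=tom,password=foo,rw,iocharset=utf8,file_mode=0777,dir_mode=0777,_netdev,x-systemd.automount 0 0
```

With x-systemd.automount the directory is mounted lazily the first time something touches /mnt/SambaFiles, so a slow-starting network no longer matters.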

udev doesn’t load rule automatically after reboot

I have a rule that should apply to /dev/ipmi0 to change its group and mode. After a reboot, the node shows as root:root, but if I run udevadm test, the rule is applied successfully:

$ cat /etc/udev/rules.d/99-ipmi-nonroot.rules
SUBSYSTEM=="ipmi", KERNEL=="ipmi0", GROUP="adm", MODE="0660"

$ ls -l /dev/ipmi*
crw------- 1 root root 244, 0 Jul  6 16:57 /dev/ipmi0

$ sudo udevadm test /sys/class/ipmi/ipmi0
....
Reading rules file: /lib/udev/rules.d/97-dmraid.rules
Reading rules file: /etc/udev/rules.d/99-ipmi-nonroot.rules
Reading rules file: /lib/udev/rules.d/99-systemd.rules
rules contain 49152 bytes tokens (4096 * 12 bytes), 14763 bytes strings
2054 strings (26612 bytes), 1334 de-duplicated (12570 bytes), 721 trie nodes used
GROUP 4 /etc/udev/rules.d/99-ipmi-nonroot.rules:1
MODE 0660 /etc/udev/rules.d/99-ipmi-nonroot.rules:1
handling device node '/dev/ipmi0', devnum=c244:0, mode=0660, uid=0, gid=4
preserve permissions /dev/ipmi0, 020660, uid=0, gid=4
preserve already existing symlink '/dev/char/244:0' to '../ipmi0'
ACTION=add
DEVNAME=/dev/ipmi0
...

$ ls -l /dev/ipmi*
crw-rw---- 1 root adm 244, 0 Jul  6 17:00 /dev/ipmi0

I also tried udevadm trigger as seen in this answer. It also changes the group to adm, but the change doesn’t “stick” across a reboot either. Should I be placing my rule under /lib instead of /etc? That feels wrong.
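Purely a guess at a workaround rather than a root cause: if the node’s permissions are being set before the rules under /etc are consulted, systemd-tmpfiles can re-apply ownership and mode at boot, independently of udev:

```
# /etc/tmpfiles.d/ipmi.conf — hypothetical workaround; the z type adjusts
# the mode/owner of an existing node without creating it
z /dev/ipmi0 0660 root adm -
```

This only helps if /dev/ipmi0 already exists when systemd-tmpfiles runs at boot, so it is worth checking when the ipmi modules actually get loaded.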