Lubuntu, broken fstab and non-existent swap

I did a fresh install of Lubuntu desktop (mostly meant to be used as a headless machine), formatting the disk via the installer.

Apparently it didn't partition the disk properly: no GPT, just msdos.

The machine wouldn't boot from the main drive unless I unplugged the second SATA drive (used for storage); it remained stuck on the HP BIOS screen.

The system is currently running and I have SSH access, but unfortunately I won't have physical access to the machine for a while.

After hours of crawling the web for tutorials and discussions, no success. I've only managed to trash the fstab, and obviously I didn't make a backup (not that it was any good to begin with).

Here is the output of a few commands:
blkid

/dev/sda1: UUID="1124c007-a2cd-4e2b-8942-06fca94f5f88" TYPE="ext4" PARTUUID="4ac38d5d-01" 

lsblk

    NAME     MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
    sda        8:0    0 596,2G  0 disk
    └─sda1     8:1    0 596,2G  0 part /

df

    Filesystem     1K-blocks    Used Available Use% Mounted on
    udev             1959572       0   1959572   0% /dev
    tmpfs             403696    1220    402476   1% /run
    /dev/sda1      614267120 5072576 577921748   1% /
    tmpfs            2018476       0   2018476   0% /dev/shm
    tmpfs               5120       4      5116   1% /run/lock
    tmpfs            2018476       0   2018476   0% /sys/fs/cgroup
    tmpfs             403692       4    403688   1% /run/user/1000
    tmpfs             403692       0    403692   0% /run/user/117

sudo parted -l

    Model: ATA WDC WD6400BEVT-8 (scsi)
    Disk /dev/sda: 640GB
    Sector size (logical/physical): 512B/512B
    Partition Table: msdos
    Disk Flags:

    Number  Start   End    Size   Type     File system  Flags
     1      1049kB  640GB  640GB  primary  ext4

swapon -s returns nothing

How do I properly create a swap file with the current configuration? (Do I need one, and why didn't the installer create it?)
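The generic recipe I keep running into looks roughly like this; I'd just like to confirm it's appropriate for my setup (the 2G size is only an example):

    sudo fallocate -l 2G /swapfile    # or: sudo dd if=/dev/zero of=/swapfile bs=1M count=2048
    sudo chmod 600 /swapfile
    sudo mkswap /swapfile
    sudo swapon /swapfile
    # and, as far as I understand, a matching fstab line:
    #   /swapfile  none  swap  sw  0  0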

And most importantly, what should the fstab look like, please? I want to be able to safely reboot the machine via SSH without fear of it failing to boot and then sitting idle for weeks.
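For what it's worth, my best guess at a minimal /etc/fstab, based on the blkid output above, is the following; corrections welcome (the mount options are just my understanding of the usual Ubuntu defaults):

    UUID=1124c007-a2cd-4e2b-8942-06fca94f5f88  /  ext4  errors=remount-ro  0  1
    # plus the swap-file line mentioned above, if I do create a swap file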

Thanks in advance!

(What will I need to run to be safe, regarding GRUB or anything else?)

PS: after the install I had the following warning (FWIW):

    cryptsetup: WARNING: The initramfs image may not contain cryptsetup binaries nor crypto modules. If that's on purpose, you may want to uninstall the 'cryptsetup-initramfs' package in order to disable the cryptsetup initramfs integration and avoid this warning.

I removed cryptsetup and then ran update-initramfs -u.

CIFS share on fstab via krb5

I have followed this guide: https://warlord0blog.wordpress.com/2018/03/27/access-dfs-shares-from-linux/

Following this, I can mount the CIFS share manually; however, when I try to mount it from fstab via Kerberos:

//windows/share/filepath /home/Drive cifs user,uid=me,gid=metoo,vers=3.0,rw,sec=krb5 0 0

I get:

    ➜ ~ sudo mount -a
    mount error(2): No such file or directory
    Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)
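For reference, the manual mount that does work is roughly the following (same placeholder paths and options as the fstab line above):

    sudo mount -t cifs //windows/share/filepath /home/Drive -o sec=krb5,vers=3.0,uid=me,gid=metoo,rw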

How to automate the mounting of a Samba shared directory with each reboot (fstab already created)

I have two Linux/Ubuntu boxes.

  • Box A (192.168.1.10): works as a file server, with Samba installed. It’s always switched on.
  • Box B: workstation with my office tools, which I reboot each time I need to work with it.

In Box B, I have modified /etc/fstab:

//192.168.1.10/SambaSharedDirectory /mnt/SambaFiles cifs username=tom,password=foo,rw,iocharset=utf8,file_mode=0777,dir_mode=0777 0 0 

However, each time I reboot Box B, I have to run 'sudo mount -a' to mount the shared directory from Box A.
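In case it helps to know what I was considering: I've seen network-mount options such as _netdev and x-systemd.automount suggested for this kind of "wait for the network" situation, along the lines of the entry below, but I haven't verified that this is the right approach.

    //192.168.1.10/SambaSharedDirectory /mnt/SambaFiles cifs username=tom,password=foo,rw,iocharset=utf8,file_mode=0777,dir_mode=0777,_netdev,x-systemd.automount 0 0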

Is it possible to automate this so I don't have to mount it manually after every reboot? Thank you very much.

sshfs with fstab: wrong uid, gid and Input/output error

I want to use sshfs from my Linux machine (Ubuntu 18.04.2 LTS) to my MacBook. I could do it at the command line with:

sshfs jczhang@10.0.2.2:/Users/jczhang/mysharedfolder /home/jczhang/mysharedfolder

It worked perfectly. Since I wanted to mount the shared folder automatically at boot time, I put this in /etc/fstab:

jczhang@10.0.2.2:/Users/jczhang/mysharedfolder /home/jczhang/mysharedfolder fuse.sshfs delay_connect,_netdev,user,uid=1000,gid=1000,IdentityFile=/home/jczhang/.ssh/id_rsa,allow_other 0 0

Here, 1000 is my uid and gid in Linux. After reboot, I found the directory was mounted but I could not access the directory.

    ls -l
    d?????????  ? ?       ?              ?            ? mysharedfolder/

    cd mysharedfolder
    -bash: cd: mysharedfolder: Input/output error

and the mount status showed:

    jczhang@10.0.2.2:/Users/jczhang/mysharedfolder on /home/jczhang/mysharedfolder type fuse.sshfs (rw,nosuid,nodev,noexec,relatime,user_id=0,group_id=0,allow_other,_netdev,user)

I do not know why sshfs keeps using user_id=0,group_id=0. I tried different combinations of uid=1000,gid=1000 and idmap=user; none worked. I unmounted the shared folder and ran "mount -a" again, but that did not solve the problem either.
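For example, one of the variants I tried was along these lines (the exact line may have differed slightly):

    jczhang@10.0.2.2:/Users/jczhang/mysharedfolder /home/jczhang/mysharedfolder fuse.sshfs delay_connect,_netdev,user,idmap=user,IdentityFile=/home/jczhang/.ssh/id_rsa,allow_other 0 0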

I used the default SSHFS version 2.10. Later, I upgraded it to version 3.5.2. Nothing changed.

Does anyone know a solution? Thanks.

Update fstab / mount information in High Sierra

There are some Q&As about updating /etc/fstab, but they are all for much older versions of macOS.

  • Is /etc/fstab in fact still supported?

  • If so, a pointer to what options are supported in High Sierra, and an example command line, would be appreciated.

I tried putting the following in place, and the associated volume did not mount at all. When /etc/fstab was moved away, the volume mounted again, so it is clearly the offender. Can it be fixed?

UUID=C051CBF1-62E1-3C45-8312-7060BB339EC3 /d hfs rw 0 2 
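In case it's relevant: I'm assuming vifs is still the intended way to edit the file and diskutil info the way to look up the UUID; a sketch of what I mean (the volume name is a placeholder):

    sudo vifs                                        # edit /etc/fstab with locking
    diskutil info /Volumes/SomeVolume | grep UUID    # where the volume UUID can be read from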

Cannot run .sh in fstab-mounted partition

So, I’m dual booting and I’ve created a shared partition to use between my two OSs. This is formatted as FAT.

For access from my Ubuntu OS, my fstab entry to automatically mount this is as follows:

UUID=<PARTITION_UUID> /mnt/storage vfat rw,exec,auto,user,uid=1000,gid=1000,umask=000 0 2

On boot, the partition is available, and the user, group and permissions all look correct according to the fstab entry.

However, when I run a .sh file, I get the error:

    bash: ./my_script.sh: Permission denied
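To be precise, this is what I'm doing (the script name is just an example):

    cd /mnt/storage
    ./my_script.sh
    # bash: ./my_script.sh: Permission denied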

This even fails to run with sudo. Any ideas?

Custom fstab + grub menu in bootable persistent USB created with mkusb

I have created a custom live CD image (erm… live USB, I suppose, would be more accurate) through a combination of Cubic (to generate a custom .iso) and mkusb (to provide it with persistence through a casper-rw partition).

It’s working really well, but mkusb seems to be creating its own fstab and its own grub menu.

One of the things mkusb does is create a “regular” NTFS partition so the USB stick can be used as ordinary storage (to save pictures, docs or whatever in it), yet that partition doesn’t seem to be mounted on boot.

It would be great if it could be, because I have a pretty specific use in mind: Docker images, which currently only seem to work properly if I specify devicemapper as the storage driver. It would really, really help if I could have that NTFS partition mounted at /var/lib/docker/[storage], but even if I change /etc/fstab while in Cubic, those changes are not reflected in the image that is written to the USB stick.
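For reference, the kind of entry I've been adding inside Cubic looks roughly like this (the UUID is a placeholder, and ntfs-3g/nofail are just my guesses at sensible options):

    UUID=<NTFS_PARTITION_UUID>  /var/lib/docker/[storage]  ntfs-3g  defaults,nofail  0  0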

Something similar happens with the GRUB menu. Cubic allows you to specify your own, but this seems to be overwritten by mkusb, and because of issues with the computers where the stick is going to be used, it would be great if I could add a nolapic flag to the boot line.
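In other words, I'd like the kernel line in the stick's grub.cfg to end up looking something like this (paths and other parameters are approximate, taken from a typical casper-based live USB; the only part I actually care about is the added nolapic):

    linux /casper/vmlinuz boot=casper persistent nolapic quiet splash ---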

Is there any way of doing this?

PS 01: I’m not married to mkusb… I do like how easy it is to get a persistent USB with it that boots on both BIOS and UEFI machines, but maybe another tool would give me more control?

PS 02: I don’t know much about… anything, really, but for this specific use case, let’s say I don’t know much about persistent partitions on bootable USB sticks.

Continue boot with offline fstab disk (linux/systemd)

During boot, with pre-systemd versions of Ubuntu Server (e.g. 14.04), if a non-critical fstab disk was offline, the system would wait to mount the disk (30 s, IIRC), time out and continue booting.

Since upgrading through 16.04 to Ubuntu 18.04, thanks to systemd’s dependencies I presume, a missing fstab disk stops the boot process, resulting in the “Emergency mode… Press Enter for Maintenance” prompt at boot time.

  1. Is there a way to change this behaviour by default? I.e. simply continue booting, or an option to flag individual disks as non-critical (see the fstab sketch after this list)?
  2. Failing that, is there a straightforward systemctl command to ‘continue booting, ignoring the missing disk’ from maintenance mode?
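To make question 1 concrete: would marking the disk with nofail (and perhaps a device timeout) in fstab be the intended way to do this? Something like the following, where the UUID is a placeholder and my understanding of the options is unverified:

    # nofail: don't drop to emergency mode if the disk is missing;
    # x-systemd.device-timeout: cap how long boot waits for it
    UUID=<DATA_DISK_UUID>  /mnt/data  ext4  defaults,nofail,x-systemd.device-timeout=30s  0  2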

A question about fstab

All the information I find about the fstab file shows the devices listed in sequential order:

sda1 sda2 sdb1 etc…

My fstab file is listed as follows:

sda2 (root) sda1 (efi) sda3 (swap)
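In other words, the entries appear in roughly this order (UUIDs and options replaced with typical placeholders):

    UUID=<sda2-uuid>  /          ext4  errors=remount-ro  0  1
    UUID=<sda1-uuid>  /boot/efi  vfat  umask=0077         0  1
    UUID=<sda3-uuid>  none       swap  sw                 0  0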

The computer works, but it starts a little slowly; it takes a long time to mount sda2.

Therefore, I wonder whether the order in which the devices are listed in the fstab file has any significance.