CIFS share on fstab via krb5

I have followed this:

Following it, I can mount the CIFS share manually. However, when I try to mount it from fstab via Kerberos:

//windows/sahre/filepath /home/Drive cifs user,uid=me,gid=metoo,vers=3.0,rw,sec=krb5 0 0

I get:

➜ ~ sudo mount -a
mount error(2): No such file or directory
Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)
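A note and a sketch, not a confirmed fix: mount error(2) usually means a path does not exist, either the local mount point or the remote share path (the spelling of the share component in the fstab line is worth double-checking). With sec=krb5 at boot, mount.cifs runs as root, so it may also need cruid= to find the ticket cache of the user who ran kinit. Assuming the mount point has been created and "me" holds the ticket:

```
# /etc/fstab — sketch only; share path and user names are the question's placeholders
//windows/share/filepath /home/Drive cifs user,uid=me,gid=metoo,cruid=me,vers=3.0,rw,sec=krb5 0 0
```

Creating the mount point first (sudo mkdir -p /home/Drive) rules out the most common cause of error(2).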

How to automate the mounting of a Samba shared directory with each reboot (fstab already created)

I have two Linux/Ubuntu boxes.

  • Box A: a file server with Samba installed. It’s always switched on.
  • Box B: workstation with my office tools, which I reboot each time I need to work with it.

In Box B, I have modified ‘/etc/fstab’:

// /mnt/SambaFiles cifs username=tom,password=foo,rw,iocharset=utf8,file_mode=0777,dir_mode=0777 0 0 

However, each time I reboot Box B, I have to do ‘sudo mount -a’ to mount the file directory of Box A.

Is it possible to automate it to avoid mounting it with every reboot? Thank you very much.
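A hedged sketch for this situation: the usual reason a network share misses the boot-time mount is that the network isn’t up yet when fstab is processed. The standard systemd options _netdev and x-systemd.automount defer the mount until the network is available or until first access, e.g. (share path left elided as in the question):

```
// /mnt/SambaFiles cifs username=tom,password=foo,rw,iocharset=utf8,file_mode=0777,dir_mode=0777,_netdev,x-systemd.automount 0 0
```

With x-systemd.automount, the share is mounted transparently the first time something touches /mnt/SambaFiles, so no manual ‘sudo mount -a’ should be needed.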

sshfs with fstab: wrong uid, gid and Input/output error

I want to sshfs from my Linux machine (Ubuntu 18.04.2 LTS) to my MacBook. I could do it on the command line with

sshfs jczhang@ /home/jczhang/mysharedfolder

It worked perfectly. Since I wanted to mount the shared folder automatically at boot time, I put this in /etc/fstab.

jczhang@ /home/jczhang/mysharedfolder fuse.sshfs delay_connect,_netdev,user,uid=1000,gid=1000,IdentityFile=/home/jczhang/.ssh/id_rsa,allow_other 0 0

Here, 1000 is my uid and gid in Linux. After reboot, I found the directory was mounted but I could not access the directory.

ls -l
d?????????  ? ?       ?              ?            ? mysharedfolder/

cd mysharedfolder
-bash: cd: mysharedfolder: Input/output error

and mount showed:

jczhang@ on /home/jczhang/mysharedfolder type fuse.sshfs (rw,nosuid,nodev,noexec,relatime,user_id=0,group_id=0,allow_other,_netdev,user)

I did not know why sshfs kept using user_id=0,group_id=0. I tried different combinations of uid=1000,gid=1000 and idmap=user; none worked. I unmounted the shared folder and ran “mount -a”, but that did not solve the problem either.

I used the default SSHFS version 2.10. Later, I upgraded it to version 3.5.2. Nothing changed.

Does anyone know a solution? Thanks.
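One hedged observation: when fstab entries are processed at boot, sshfs runs as root, and an Input/output error on the mounted directory often means the underlying SSH connection failed (for example, root cannot use the IdentityFile, or the Mac’s host key is not in root’s known_hosts). A sketch of an entry that addresses both the permission mapping and the root-side SSH settings (host elided as in the question; the extra options are assumptions to test, not a confirmed fix):

```
jczhang@ /home/jczhang/mysharedfolder fuse.sshfs _netdev,user,allow_other,default_permissions,uid=1000,gid=1000,IdentityFile=/home/jczhang/.ssh/id_rsa,reconnect,StrictHostKeyChecking=accept-new 0 0
```

Unrecognized sshfs options such as StrictHostKeyChecking are passed through to ssh, which is how root can be told to accept the host key on first connect.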

Update fstab / mount information in High Sierra

There are some Q&As about updating /etc/fstab, but they are all for much older versions of macOS.

  • Is /etc/fstab in fact still supported?

  • A pointer to the options supported in High Sierra, and the command line to use

I tried putting the following in place, and the associated volume did not mount at all. When /etc/fstab was moved away, the volume mounted again, so it is clearly the offender. Can it be fixed?

UUID=C051CBF1-62E1-3C45-8312-7060BB339EC3 /d hfs rw 0 2 
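One assumption worth checking: the High Sierra upgrade converts most boot and data volumes from HFS+ to APFS, and an entry declaring the old hfs type will then silently fail to mount. A sketch, assuming the UUID is right and diskutil info confirms the volume is now APFS (on macOS, /etc/fstab should be edited via sudo vifs so it is locked correctly):

```
# /etc/fstab (edit with vifs) — sketch; verify the actual type with `diskutil info`
UUID=C051CBF1-62E1-3C45-8312-7060BB339EC3 /d apfs rw 0 2
```

If diskutil still reports HFS+, the original entry’s type is fine and the problem lies elsewhere.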

Cannot run .sh in fstab-mounted partition

So, I’m dual booting and I’ve created a shared partition to use between my two OSs. This is formatted as FAT.

For access via my ubuntu OS, my fstab entry to automatically mount this is as follows:

UUID=<PARTITION_UUID> /mnt/storage vfat rw,exec,auto,user,uid=1000,gid=1000,umask=000 0 2

On boot, the partition is available and all user, group and permissions look in line according to the fstab entry.

However, when I run a .sh file, I get the error:

bash: ./ Permission denied

This even fails to run with sudo. Any ideas?
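A likely cause, offered as an assumption to test: fstab options are applied left to right, and the user option implies noexec (along with nosuid and nodev), so an exec that appears before user is cancelled. Listing exec after user keeps it in effect:

```
# sketch: `user` implies noexec, so exec must come after it to survive
UUID=<PARTITION_UUID> /mnt/storage vfat rw,auto,user,exec,uid=1000,gid=1000,umask=000 0 2
```

This also explains why sudo doesn’t help: the mount itself carries noexec, regardless of who invokes the script.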

Custom fstab + grub menu in bootable persistent USB created with mkusb

I have created a custom live CD image (erm… live USB, I suppose, would be more accurate) through a combination of Cubic (to generate a custom .iso) and mkusb to provide it with persistence through a casper-rw partition.

It’s working really well, but mkusb seems to be creating its own fstab and its own grub menu.

One of the things mkusb does is create a “regular” NTFS partition so the USB stick can be used as a “regular” storage stick (to save pictures, docs or whatever in it), yet that partition doesn’t seem to be mounted on boot.

It would be great if it could be, because I have a pretty specific use I’d like to give to it (specifically, Docker images which now only seem to work properly if I specify devicemapper as the storage-driver). It would really, really help if I could have that NTFS partition mounted in /var/lib/docker/[storage], but even if I change /etc/fstab while in Cubic, those changes are not reflected in the image that is written to the USB stick.

Something similar happens with the Grub menu. Cubic allows you to specify your own, but this seems to be overwritten by mkusb, and because of issues with the computers where the stick is going to be used, it would be great if I could add a nolapic flag to the boot line.

Is there any way of doing this?

PS 01: I’m not married to mkusb… I do like how easy it is to get a persistent USB with it, and that it works on BIOS as well as UEFI boot, but maybe another tool would give me more control?

PS 02: I don’t know much about… anything, really but for this specific use case, let’s say I don’t know much about persistent partitions on bootable USB sticks.
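For the NTFS-under-Docker part specifically, a hedged sketch: since Cubic’s fstab changes don’t survive, the entry could instead be added to /etc/fstab inside the casper-rw overlay after the stick is written (mount the casper-rw partition from another system and edit its etc/fstab). The UUID below is a placeholder for whatever blkid reports for the mkusb-created NTFS partition:

```
# sketch; replace the UUID with the one blkid reports for the NTFS partition
UUID=XXXX-XXXX /var/lib/docker ntfs-3g defaults,nofail 0 0
```

The nofail option keeps the stick bootable on machines where the partition layout differs.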

Continue boot with offline fstab disk (linux/systemd)

During boot, with pre-systemd versions of Ubuntu Server (e.g. 14.04), if a non-critical fstab disk was offline, the system would wait for the disk (30 s, IIRC), time out and continue booting.

Since upgrading through 16.04 to Ubuntu 18.04, thanks to systemd’s dependencies I presume, a missing fstab disk stops the boot process, resulting in the “Emergency mode… Press Enter for Maintenance” prompt at boot time.

  1. Is there a way to change this behaviour by default? I.e. simply continue booting, or an option to flag disks as non-critical?
  2. Failing that, is there a straightforward systemctl command to ‘continue booting, ignoring the missing disk’ from maintenance mode?
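For the first question, systemd’s standard way to flag a disk as non-critical is the nofail mount option, optionally with a bounded x-systemd.device-timeout, which restores the old “wait, then continue” behaviour. A sketch with placeholder values:

```
# sketch: non-critical data disk — boot continues if it is absent,
# and systemd waits at most 30 s for the device to appear
UUID=<DATA_DISK_UUID> /data ext4 defaults,nofail,x-systemd.device-timeout=30s 0 2
```

For the second question, from the emergency shell the usual route is to comment out (or nofail-tag) the offending fstab line, then run systemctl default to resume booting into the normal target.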

A question about fstab

All the information I find about the fstab file shows the entries listed in device order:

sda1 sda2 sdb1 etc…

My fstab file is listed as follows:

sda2 (root) sda1 (efi) sda3 (swap)

The computer works, but it starts a little slowly; it takes a long time to mount sda2.

Therefore, I wonder whether the order in which the devices are listed in the fstab file has any significance.
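For what it’s worth, line order in fstab only matters when one mount point lives inside another: the parent must be listed before the child. It has no effect on how long any single filesystem takes to mount, so it would not explain a slow sda2. A minimal illustration with placeholder UUIDs:

```
# order matters only for nested mount points: /home must precede /home/data
UUID=<ROOT_UUID>  /          ext4  defaults  0 1
UUID=<HOME_UUID>  /home      ext4  defaults  0 2
UUID=<DATA_UUID>  /home/data ext4  defaults  0 2
```

On systemd-based distributions even this is relaxed, since mount ordering is computed from path dependencies rather than file position.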

How do I change the behaviour of the defaults flag in fstab?

I want to add nofail to the defaults flag in my fstab so my system doesn’t boot into recovery mode every time I plug in a new device and reboot. I know that I could just go and edit my fstab manually each time, but that’s a massive hassle.

EDIT: I have figured out a stopgap solution: writing the output of

cat /etc/fstab | sed 's/nofail,//' | sed 's/defaults/nofail,defaults/'

to /etc/fstab. But it is ugly, and I have to remember to run the script manually after every (EDIT: new) drive gets attached.
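The stopgap above can be collapsed into a single, repeat-safe sed invocation. A minimal sketch (the helper name is mine, not a standard tool, and it prints the rewritten table rather than overwriting /etc/fstab):

```shell
#!/bin/sh
# add_nofail: rewrite fstab text from stdin so every "defaults" entry also
# carries "nofail". Stripping any existing "nofail," first makes the filter
# idempotent, i.e. safe to run more than once.
add_nofail() {
  sed 's/nofail,//g; s/defaults/nofail,defaults/g'
}

# Example with a sample entry (not from a real system):
echo 'UUID=abcd / ext4 defaults 0 1' | add_nofail
```

Reviewing the output before replacing the real file avoids trashing fstab, e.g. add_nofail </etc/fstab >/tmp/fstab.new, inspect, then copy into place.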