Why doesn’t `btrfs send | btrfs receive` preserve the “no dump” file attribute?

Btrfs supports the “no dump” file attribute, and it is preserved when taking snapshots. But after sending such a snapshot to another Btrfs filesystem with btrfs send and btrfs receive, the “no dump” attribute values are lost. By contrast, the “compress data” attribute values are preserved through the same procedure. Why?
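For reference, this is roughly how I reproduce it (the paths and subvolume names below are just placeholders):

    chattr +d /mnt/src/subvol/file                        # set the “no dump” (d) attribute
    lsattr /mnt/src/subvol/file                           # d flag is shown
    btrfs subvolume snapshot -r /mnt/src/subvol /mnt/src/snap
    lsattr /mnt/src/snap/file                             # d flag still present in the snapshot
    btrfs send /mnt/src/snap | btrfs receive /mnt/dst/
    lsattr /mnt/dst/snap/file                             # d flag is gone, while +c (compress) survives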

Connect USB Btrfs HDD to Ubuntu Hyper-V virtual machine

I have an Ubuntu Hyper-V virtual machine running on Windows 10 Pro. I also have a Btrfs formatted HDD (taken from a Synology NAS) connected through USB to the Windows 10 machine.

The Win10 machine sees the HDD: it appears in Storage Management, but no drive letter is assigned to it (I assume because Windows cannot read the Btrfs partition). The HDD is marked as offline (required for USB passthrough).
On the Ubuntu VM I have enabled enhanced session mode. I also created a new hard drive for the VM and mapped it to the USB HDD. The hard drive was created in the virtual machine’s settings after the VM had been started (if I create the hard drive before starting the VM, the VM does not start, because Hyper-V cannot create a checkpoint with the USB hard drive attached).

Where can I see the USB hard drive in Ubuntu, or do I need to mount it manually?

Or does USB passthrough still not work properly in Hyper-V in 2019, so that I have to switch to VMware or VirtualBox?
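For what it’s worth, this is how I was expecting to find and mount the disk inside the VM once passthrough works (the device name and mount point are just guesses):

    lsblk -f                              # list block devices and their filesystems
    sudo mkdir -p /mnt/synology
    sudo mount /dev/sdb1 /mnt/synology    # assuming the passed-through disk shows up as /dev/sdb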

How to reduce log spew when a btrfs drive fails?

I have a btrfs RAID-1 array. When one of the drives failed, I ran out of disk space.

The disk space was being used up by:

  • /var/log/messages
  • /var/log/kern.log
  • /var/log/syslog

All of these contained errors that look like this:

Mar 31 15:15:55 b2 kernel: [ 3816.592556] BTRFS warning (device mmcblk0p4): i/o error at logical 217488986112 on dev /dev/sda, physical 182894366720, root 5, inode 2943, offset 18087936, length 4096, links 1 (path: bitcoin/bitcoin-data/blocks/rev00740.dat)
Mar 31 15:15:55 b2 kernel: [ 3816.604975] scrub_handle_errored_block: 7743 callbacks suppressed
Mar 31 15:15:55 b2 kernel: [ 3816.604996] BTRFS error (device mmcblk0p4): unable to fixup (regular) error at logical 217488797696 on dev /dev/sda

also

Mar 31 15:20:45 b2 kernel: [ 4107.205771] btrfs_dev_stat_print_on_error: 28954 callbacks suppressed
Mar 31 15:20:45 b2 kernel: [ 4107.205792] BTRFS error (device mmcblk0p4): bdev /dev/sda errs: wr 1087192, rd 543622, flush 2, corrupt 0, gen 0
Mar 31 15:20:45 b2 kernel: [ 4107.206125] BTRFS error (device mmcblk0p4): bdev /dev/sda errs: wr 1087193, rd 543622, flush 2, corrupt 0, gen 0
Mar 31 15:20:45 b2 kernel: [ 4107.206155] BTRFS error (device mmcblk0p4): bdev /dev/sda errs: wr 1087194, rd 543622, flush 2, corrupt 0, gen 0
Mar 31 15:20:45 b2 kernel: [ 4107.206165] scrub_handle_errored_block: 9645 callbacks suppressed

I have

Linux l2 4.19.25+ #1205 Mon Feb 25 17:52:12 GMT 2019 armv6l GNU/Linux
btrfs-progs v4.7.3

$ btrfs fi show
Label: 'bitcoind'  uuid: 8848dbdc-bf70-4de0-b06f-312ce71b396a
        Total devices 2 FS bytes used 203.44GiB
        devid    1 size 225.00GiB used 206.03GiB path /dev/sda
        devid    2 size 225.00GiB used 206.03GiB path /dev/mmcblk0p4

$ btrfs fi df /mnt/btrfs_bitcoind/
Data, RAID1: total=204.00GiB, used=202.88GiB
System, RAID1: total=32.00MiB, used=48.00KiB
Metadata, RAID1: total=2.00GiB, used=581.30MiB
GlobalReserve, single: total=421.27MiB, used=0.00B
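One idea I have is to filter the flood in rsyslog before it reaches the log files. A sketch of what I mean (the drop-in file name and the match strings are my own guesses, not something I have verified):

    # /etc/rsyslog.d/10-btrfs-flood.conf  (hypothetical drop-in file)
    # discard the repeating scrub / device-stat messages before they are
    # written to /var/log/syslog, /var/log/kern.log and /var/log/messages
    :msg, contains, "unable to fixup (regular) error" stop
    :msg, contains, "bdev /dev/sda errs:" stop

followed by a restart of rsyslog (systemctl restart rsyslog). I would still like to know whether there is a cleaner, btrfs-side way to do this.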

btrfs on top of mdadm raid – calculating stripes for corrupt sectors for use with raid6check

I’ve got a setup with btrfs running on top of mdadm raid6, since btrfs’s own RAID5/6 code isn’t stable yet. I figured this way I’d get the benefits of snapshotting and checksumming with a few extra hoops to jump through; now that I actually have to jump through those hoops, I’m running into some problems.

This morning dmesg reported this issue:

BTRFS error (device md2): bad tree block start, want 28789209759744 have 7611175298055105740
BTRFS info (device md2): read error corrected: ino 0 off 28789209759744 (dev /dev/md2 sector 55198191488)
BTRFS info (device md2): read error corrected: ino 0 off 28789209763840 (dev /dev/md2 sector 55198191496)
BTRFS info (device md2): read error corrected: ino 0 off 28789209767936 (dev /dev/md2 sector 55198191504)
BTRFS info (device md2): read error corrected: ino 0 off 28789209772032 (dev /dev/md2 sector 55198191512)

This is the kind of thing that could have slipped by silently had I not used btrfs, so at least it did me some good… So now I should be able to figure out which disk has the issue and replace it, right?

Well, mdadm seems to support identifying the failing disk only through the raid6check tool. I had to build it from source to get it working on Debian, but after I did so, it seems like I’m in business.

The only catch is that this tool seems to be extremely slow: scanning 1000 stripes takes a good 3 minutes, so scanning the 15261512 stripes that make up my array would take over 31 days. I’d like to avoid that if possible. The mdadm check/repair is much faster, around 3 days, but it doesn’t produce any useful information about which disk is responsible, so I don’t really want to use it.

The raid6check tool accepts a starting stripe number – I’m wondering whether it’s possible to calculate which stripe number to pass so that it directly checks the relevant portion of the disk.

Here’s the raid6check information for reference purposes if it helps:

layout: 2
disks: 8
component size: 8001427603456
total stripes: 15261512
chunk size: 524288
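My rough attempt at the calculation, based on the numbers above. I’m assuming the sector reported by btrfs is in 512-byte units and that each full stripe holds (disks − 2) data chunks; please correct me if either assumption is wrong:

    # sector on /dev/md2 reported in the btrfs messages above
    SECTOR=55198191488
    CHUNK=524288            # chunk size in bytes
    DATA_DISKS=$((8 - 2))   # 8 devices, 2 of them parity in raid6
    # stripe index = byte offset into the array / data bytes per full stripe
    STRIPE=$(( SECTOR * 512 / (DATA_DISKS * CHUNK) ))
    echo "$STRIPE"          # ~8984080 with the numbers above
    # and then, if I understand raid6check's arguments correctly, something like:
    # raid6check /dev/md2 "$STRIPE" 16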

Thanks, any ideas are appreciated.

BTRFS unmountable after cold reboot (total_rw_bytes is twice too big)

One of my users in a research environment triggered an out-of-memory condition on a server that mounts a 52TB btrfs partition. I had to power-cycle the server. After the reboot the btrfs partition cannot be mounted in read-write mode.

mount /mnt/storage/
mount: /mnt/storage: wrong fs type, bad option, bad superblock on /dev/mapper/fc_trunk-part3, missing codepage or helper program, or other error.

Kernel logs show a problem with device size:

Mar 19 15:10:52 mamut kernel: BTRFS error (device dm-5): open_ctree failed
Mar 19 15:10:52 mamut kernel: BTRFS info (device dm-5): use lzo compression, level 0
Mar 19 15:10:52 mamut kernel: BTRFS info (device dm-5): disk space caching is enabled
Mar 19 15:10:52 mamut kernel: BTRFS info (device dm-5): has skinny extents
Mar 19 15:10:52 mamut systemd[1]: mnt-storage.mount: Mount process exited, code=killed, status=15/TERM
Mar 19 15:10:52 mamut systemd[1]: mnt-storage.mount: Failed with result 'timeout'.
Mar 19 15:10:52 mamut systemd[1]: Failed to mount /mnt/storage.
Mar 19 15:10:52 mamut kernel: BTRFS error (device dm-5): super_total_bytes 52798547820544 mismatch with fs_devices total_rw_bytes 105597095641088
Mar 19 15:10:52 mamut kernel: BTRFS error (device dm-5): failed to read chunk tree: -22
Mar 19 15:10:52 mamut kernel: BTRFS error (device dm-5): open_ctree failed
[...]
Mar 19 15:15:52 mamut systemd-helper[9798]: IO Error (subvolume is not a btrfs subvolume).
Mar 19 15:15:52 mamut systemd-helper[9798]: number cleanup for 'storage' failed.
Mar 19 15:15:52 mamut systemd-helper[9798]: running timeline cleanup for 'storage'.
Mar 19 15:15:52 mamut systemd-helper[9798]: IO Error (subvolume is not a btrfs subvolume).
Mar 19 15:15:52 mamut systemd-helper[9798]: timeline cleanup for 'storage' failed.
Mar 19 15:15:52 mamut systemd-helper[9798]: running empty-pre-post cleanup for 'storage'.
Mar 19 15:15:52 mamut systemd-helper[9798]: IO Error (subvolume is not a btrfs subvolume).
Mar 19 15:15:52 mamut systemd-helper[9798]: empty-pre-post cleanup for storage failed.
Mar 19 15:15:52 mamut systemd[1]: snapper-cleanup.service: Main process exited, code=exited, status=1/FAILURE
Mar 19 15:15:52 mamut systemd[1]: snapper-cleanup.service: Failed with result 'exit-code'.

The super_total_bytes=52798547820544 is the correct size of the partition in bytes, as reported by fdisk. fs_devices total_rw_bytes=105597095641088 is exactly twice that.

I tried running btrfs check but got this error:

btrfs check /dev/mapper/fc_trunk-part3
Opening filesystem to check...
Checking filesystem on /dev/mapper/fc_trunk-part3
UUID: 40a2e65b-f34a-4d33-946d-055d93fe7ffa
[1/7] checking root items
ERROR: failed to repair root items: Input/output error

Now, I know about btrfs rescue fix-device-size, but I have never run it before. The man page says:

fix-device-size
    fix device size and super block total bytes values that do not match

    Kernel 4.11 starts to check the device size more strictly and this might
    mismatch the stored value of total bytes. See the exact error message
    below. Newer kernel will refuse to mount the filesystem where the values
    do not match. This error is not fatal and can be fixed. This command will
    fix the device size values if possible.

        BTRFS error (device sdb): super_total_bytes 92017859088384 mismatch with fs_devices total_rw_bytes 92017859094528

    The mismatch may also exhibit as a kernel warning:

        WARNING: CPU: 3 PID: 439 at fs/btrfs/ctree.h:1559 btrfs_update_device+0x1c5/0x1d0 [btrfs]

The kernel version did change after the reboot, but both versions are > 4.11, and previously I had no problems mounting this partition.

The partition:

  • is big and would take a lot of time, and space I don’t have, to back up
  • has critical data for my research
  • has snapshots
  • can be mounted with -o rescue,ro

Is it safe to call btrfs rescue fix-device-size?

Can I fix it in some other safe way?
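In case it matters, my plan was to first double-check the stored values with a read-only dump of the superblock, and only then decide on fix-device-size (the grep is just to narrow the output):

    btrfs inspect-internal dump-super /dev/mapper/fc_trunk-part3 | grep -i total_bytes
    # and only if that confirms the doubled value:
    # btrfs rescue fix-device-size /dev/mapper/fc_trunk-part3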

NFS export an overlay of ext4 and btrfs

I have two data sources: one is btrfs (RAID) and one is a simple ext4 partition. They should be transparently presented as one tree. Below is a simple read-only example, but the lower/upper/workdir variant produces the same problem, with btrfs as upper and ext4 as lower.

manual mount:

mount -t overlay overlay -o lowerdir=/mnt/raid/folder1/:/mnt/ext4/folder1 -o comment=merge  -o nfs_export=on /data/merged 

fstab mount:

overlay /data/merged overlay defaults,lowerdir=/mnt/raid/folder1/:/mnt/ext4/folder1,comment=merge,nfs_export=on 0 0 

this is my NFS export:

/data/merged 192.168.0.0/255.255.255.0(ro,fsid=1,async,insecure,crossmnt) 

exportfs -ra produces: exportfs: /data/merged does not support NFS export

My configuration: Ubuntu 18.04 LTS with HWE kernel 4.18.0-13-generic. This is my main source for the config: https://www.kernel.org/doc/Documentation/filesystems/overlayfs.txt

Maybe I am missing some NFSv4 stuff (which is needed for NFS-exporting an overlayfs)?
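From my reading of the overlayfs document linked above, nfs_export=on is supposed to require index=on, which in turn needs upper and work directories. If that is right, I imagine the mount would have to look something like this (the workdir path is made up; I have not confirmed this works):

    mount -t overlay overlay \
      -o lowerdir=/mnt/ext4/folder1,upperdir=/mnt/raid/folder1,workdir=/mnt/raid/work \
      -o index=on,nfs_export=on \
      /data/merged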

Btrfs check --repair gives “device busy”. How can I see what keeps it busy?

I am in a chicken-and-egg situation with my secondary hard drive (non-boot):

  • My BTRFS mount has gone read-only because there is no space left.
  • btrfs filesystem resize only works on mounted volumes.
  • I had to force-unmount this read-only mount (because of “device busy”).
  • If I try to mount read-write, it fails because of errors (most likely caused by the lack of space).
  • If I try to run btrfs check --repair, it reports “device busy”.

What can I do to find out what keeps /dev/sda busy? My HDD is listed in /etc/fstab; does that matter? (UUID=262a8d86-279a-4f6b-8968-32e200c32255 /mnt/hdd btrfs defaults,compress=zlib 0 1)

I tried:

  • lsof | grep /dev/s -> nothing
  • lsof | grep /mnt/hdd -> nothing
  • The same for fuser -> nothing

So:

mount -o recovery /dev/sda /mnt/hdd

[63035.539792] BTRFS error (device sda): Remounting read-write after error is not allowed

If I try to run:

root@myhost:/mnt# btrfs check --repair /dev/sda
enabling repair mode
ERROR: cannot open device '/dev/sda': Device or resource busy
ERROR: cannot open file system
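For completeness, these are the additional checks I can think of to see what still holds the device (nothing here should modify anything, as far as I know):

    findmnt /dev/sda                 # is it still mounted somewhere?
    grep sda /proc/mounts            # same information, straight from the kernel
    ls /sys/block/sda/holders/       # device-mapper or md devices stacked on top of sda
    cat /proc/swaps                  # a swap area on the disk would also keep it busy
    dmsetup ls                       # any dm mappings that might reference it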