Does FileVault encrypt the disk instantly?

I recently found that FileVault is disabled on my machine, although I remember selecting the “encrypt” option when I was setting the machine up.

I opened System Preferences and enabled FileVault. To my surprise, it took no time at all: I didn’t have to wait while the data was encrypted. It just got enabled instantly.

Questions:

  1. How is it possible that turning on FileVault doesn’t take any time?
  2. What does the encryption option in the initial setup mean if it doesn’t turn on FileVault? Is it a bug?

How can the group of homeomorphisms of the unit disk be characterized in terms of the group of homeomorphisms of the plane?

Let $G$ be the group of homeomorphisms of the unit disk $D$ fixing the boundary pointwise, and let $P$ be the group of homeomorphisms of the plane $\mathbb{R}^2$.

Can we characterize $G$ in terms of $P$? Speaking precisely, can we have a result of the form: $\gamma$ is in $G$ if and only if $F(\gamma)$ is in $P$, where $F$ is some condition or some function (for example, it could be stated in terms of the Euclidean norm on $\mathbb{R}^2$)?

We can easily embed $G$ in $P$ as follows. Define a homeomorphism $g$ from the open disk to $\mathbb{R}^2$ in polar coordinates by $g(r,\theta)=\left(\frac{r}{1-r},\theta\right)$. Then the map $f:G\rightarrow P$ defined by $f(\gamma)=g\gamma g^{-1}$ is an embedding (a boundary-fixing homeomorphism of $D$ restricts to a homeomorphism of the open disk). Through $f$ we can study the image of $G$ in $P$ rather than $G$ itself.
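As a quick sanity check (a sketch, away from the degenerate point $r=0$): the radial stretch $g$ carries $r\in[0,1)$ homeomorphically onto $s\in[0,\infty)$, with explicit continuous inverse

```latex
g(r,\theta)=\left(\frac{r}{1-r},\,\theta\right),
\qquad
g^{-1}(s,\theta)=\left(\frac{s}{1+s},\,\theta\right),
```

since $s=\frac{r}{1-r}$ rearranges to $r=\frac{s}{1+s}$.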

This question could probably be phrased better; I may not have expressed exactly what I want to ask.

Disk apparently full despite only ~25% being reported in baobab


The problem

I am using Ubuntu 18.04.1 and have an encrypted home folder.

I am having this weird problem where baobab only reports ~95 GB of data, whereas df -h tells me my 480 GB Ubuntu partition is at 100% usage.

The 100% usage is something I cannot explain, but it bothers me a lot and causes problems.

My home directory, at ~78 GB (as reported by baobab), makes up most of the ~95 GB mentioned above.

I don’t really know how to proceed from here. Please help me find out what is going on and where the ~75% of disk usage that I cannot account for comes from.

Appendix

df -x squashfs -x tmpfs -h -T

Filesystem               Type      Size  Used Avail Use% Mounted on
udev                     devtmpfs   16G     0   16G   0% /dev
/dev/nvme0n1p7           ext4      480G  447G  8.7G  99% /
/dev/nvme0n1p1           vfat      646M   77M  570M  12% /boot/efi
/home/sebastian/.Private ecryptfs  480G  447G  8.7G  99% /home/sebastian

sudo du -hs /* | sort -h

0       /initrd.img
0       /initrd.img.old
0       /proc
0       /sys
0       /vmlinuz
0       /vmlinuz.old
4.0K    /cdrom
4.0K    /lib64
4.0K    /srv
16K     /lost+found
40K     /media
48K     /dev
176K    /root
3.0M    /tmp
3.2M    /run
5.8M    /lib32
6.5M    /libx32
13M     /sbin
14M     /bin
21M     /etc
234M    /boot
516M    /mnt
848M    /lib
1.2G    /opt
6.5G    /usr
12G     /snap
146G    /home
712G    /var
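For context, one classic reason df and du disagree (a sketch of the mechanism, not necessarily the cause here) is a deleted file that some process still holds open: the space stays allocated at the filesystem level, which df counts, but the file is gone from the directory tree, so du cannot see it.

```shell
# Demonstrate the df-vs-du discrepancy: a deleted-but-still-open file
# occupies disk space (visible to df) but is invisible to du.
tmpdir=$(mktemp -d)
dd if=/dev/zero of="$tmpdir/big" bs=1M count=10 status=none
exec 3<"$tmpdir/big"   # keep a file descriptor open on the file
rm "$tmpdir/big"       # delete it: du no longer sees the 10 MB...
du -sk "$tmpdir"       # ...but df still counts it until fd 3 closes
exec 3<&-              # close the descriptor; the space is truly freed
rmdir "$tmpdir"
```

`sudo lsof +L1` lists files with a link count of zero that are still held open, which is a quick way to check for this situation.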

Should I create APFS encrypted disk image on hard disk drives?


I’m about to create an encrypted sparse disk image on my HFS+-formatted hard drive, but I wonder whether APFS’s drawbacks on HDDs matter in this case, since the underlying hardware is on HFS+ anyway. Since the disk image is purely in software, I would think APFS should do faster logical copying and decrypting because it’s newer, but I don’t really know how this works.
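For reference, a sketch of the kind of command in question (the size and volume name are placeholders, and macOS only accepts `-fs APFS` on recent versions):

```shell
# Sketch: create an AES-256-encrypted sparse disk image formatted as APFS.
# Size and names are illustrative placeholders.
hdiutil create -size 10g -type SPARSE \
    -fs APFS -encryption AES-256 \
    -volname SecureData SecureData.sparseimage
```

Swapping `-fs APFS` for `-fs HFS+J` would give the HFS+ variant being compared.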

RAID: Comparison of load on disks while expanding disk space or upgrading RAID level

I have an Areca ARC-1880IX-24 RAID Controller (FW 1.51) with several different Seagate 3TB disks (SAS and SATA, and of different ages) attached to it, forming a RAID5. When I tried to expand it by two disks, too many of them failed during data migration and the data was lost. This was “just” the backup, so the data is still safe on the other, main RAID, which is level 6.

I had one more spare disk and a colleague lent me another one (which I have to give back), so I have enough to set up a new RAID5 with the original disk space. Of course I want my backup up and running again as soon as possible, and I have also decided to upgrade the backup RAID to level 6. Unfortunately, it will take up to two weeks for new disks to be delivered.

I now have the following possibilities:

  1. Create the original setup and copy the data. After two weeks: switch the borrowed disk for a new one, then add enough disks to expand the disk space and upgrade the RAID level.
  2. Create a RAID6 with less space now and copy the data. In two weeks, switch out the borrowed disk and then expand the RAID to the disk space I was aiming for in the first place.
  3. My colleague can lend me 2 more disks, so I could set up a RAID6 with the final disk space now, copy the data, and in two weeks simply switch out the borrowed disks.
  4. Some idea you might have that I didn’t think of.

My question now is: Is it possible to know / can you tell me which of these scenarios would put the least stress on my disks? I would like to minimize the risk of more / too many disks failing in two weeks, because then I might have to push my 30 TB through the local network again.

If my question can’t be answered, I’m leaning toward option 3 right now.

Thank you for your time and help,

noes

VeraCrypt system encryption on an SSD: do I have to TRIM the disk right after finishing? How?

I want to encrypt my OS, which is on an SSD (Windows 7, x64, MBR partition style).
From what I know, VeraCrypt will encrypt the whole disk, not only the actual data.
Not only that: it will “fill” the empty space (inside the decrypted disk) with random data to allow for hidden OSes/containers.
This means the SSD will think it is full, and wear leveling will be limited, thus decreasing the life and speed of the SSD.
Could a solution be to write a very big file inside the encrypted disk that fills it, and then delete it, so that TRIM runs and marks most of the disk as empty again? Would that be one TRIM operation, or more likely many, many TRIM operations (one per sector) that might fill the TRIM queue and fail?
Does the Windows 7 defragmenter also re-TRIM the disk by sending TRIM operations slowly enough for the disk to process them?
Or do I have to do nothing, because VeraCrypt will notice that it’s an SSD and encrypt only the data and not the whole disk?
Can someone point me to a solution in the VeraCrypt documentation?
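To make the “fill then delete” idea concrete, here is a rough sketch of the principle, shown with Linux-style shell commands purely for illustration (on the Windows 7 setup described, the filler file would live on the encrypted system volume; the path and size here are placeholders):

```shell
# Hypothetical sketch of "fill the free space, then delete the filler"
# so the OS can issue TRIM for the freed blocks. The path and size are
# placeholders, not a recommendation for filling a production disk.
target=$(mktemp -d)            # stand-in for a directory on the encrypted volume
dd if=/dev/zero of="$target/filler.bin" bs=1M count=50 status=none
sync                           # make sure the data actually hits the disk
rm "$target/filler.bin"        # deletion lets the OS send TRIM commands,
                               # typically batched per freed extent, not per sector
rmdir "$target"
```

Whether TRIM actually reaches the device through the encryption layer depends on the encryption software’s settings, which is exactly the part the question asks VeraCrypt’s documentation to clarify.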

How does full disk encryption cater for over-provisioned disk space in flash devices, and can this result in data leakage?

My understanding is that flash-based devices such as SSDs are over-provisioned and do not advertise the additional blocks of storage to the operating system. The over-provisioned blocks exist to support effective distribution of data via wear leveling.

Assuming my understanding is correct, how does full disk encryption cater for over-provisioned storage if the additional blocks aren’t advertised or accessible to the operating system?

If the distribution of data is handled entirely by the drive’s controller, is there a risk of data flowing from encrypted blocks to unencrypted blocks, e.g. into over-provisioned storage?