From my ceph (Mimic release) administrative node I ran
ceph-deploy osd create --data /dev/vdb ceph0
against a bare-metal ceph node, and it completed without error.
Now I am running the same command against a virtual ceph node (with an unpartitioned qcow2 disk attached), and the command times out. I see no evidence of any progress having been made on the virtual machine itself. The target VM sees its vdb disk, and virsh shows the disk as properly attached.
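For context, the checks behind that last statement were along these lines (exact invocations are approximate; `ceph0` is both my node name and the libvirt domain name here):

```shell
# On the hypervisor: confirm the qcow2 disk is attached to the guest
virsh domblklist ceph0

# On the VM itself: confirm the kernel sees an empty, unpartitioned vdb
lsblk /dev/vdb

# On the admin node: see what ceph-deploy reports for the target's disks
ceph-deploy disk list ceph0

# On the VM: ceph-volume's own log may show where the create stalled
tail -n 50 /var/log/ceph/ceph-volume.log
```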
The data disk on the bare-metal machine is 500 GB, while the virtual disk on the VM node is only 30 GB. Could this large size difference be the problem? Should I try using a raw .img disk instead of qcow2?