Hyper-V replication setup fails, can’t resolve hostname [Windows Server 2016 Datacenter edition]

OK, here I have something nasty.

  • 3 Windows Server 2016 Datacenter servers, running since early 2018
  • Replication has been up and running since early 2018
  • In January 2019 the private certificates for replication expired and we didn’t notice it
  • Yesterday we renewed all certificates on the 3 machines
  • We have done this before and there were no problems

The 3 machines are called

  1. HV2016-1 (192.168.140.90)
  2. HV2016-2 (192.168.140.92)
  3. HV2016-3 (192.168.140.91)

To prevent things from going bad (the replicated data on the backup server was at least 3 months old), we removed the replication configuration from the running VMs and removed the replicated VMs from the backup server. (HV2016-3 is running on older hardware and has the lowest number of running VMs, but it is sufficient for replication.)

But now…

  • Replicating data from HV2016-1 to both HV2016-2 and HV2016-3 works great and without problems.
  • Replicating data from HV2016-3 to both HV2016-2 and HV2016-1 works great and without problems.
  • Replicating data from HV2016-2 works great and without problems ONLY to HV2016-3.
  • Replicating data from HV2016-2 to HV2016-1 results in a flickering pop-up as soon as we enter the hostname and press Next when setting up the replication information. We can’t get past this screen; it flickers forever.

[hostnames] The names HV2016-1, -2 and -3 are specified in the hosts file on every server. The hosts files are identical on all 3 servers.
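For reference, the entries look roughly like this (reconstructed from the addresses above):

    # C:\Windows\System32\drivers\etc\hosts — identical on all 3 servers
    192.168.140.90  HV2016-1
    192.168.140.92  HV2016-2
    192.168.140.91  HV2016-3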

What we tried (the name-resolution checks are consolidated as commands after the list):

  • Rebooted the servers
  • ping of the IP address 192.168.140.90 from HV2016-2: ping good
  • ping of the hostname HV2016-1 from HV2016-2: ping good
  • tracert HV2016-1: resolves fine
  • ipconfig /flushdns
  • nbtstat -R
  • Added other hostnames for the same IP address to the hosts file on HV2016-2, but every name that resolves to 192.168.140.90 results in the flickering pop-up
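The name-resolution checks from HV2016-2, roughly as we ran them, all with the expected results:

    ping 192.168.140.90
    ping HV2016-1
    tracert HV2016-1
    ipconfig /flushdns
    nbtstat -R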

So I am stuck at this and can’t explain why it happens.
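In case it helps anyone reproduce this: the PowerShell equivalent of the wizard step should be something along these lines (VM name and thumbprint are placeholders, not our real values), and we hope it surfaces a real error message instead of the flickering dialog:

    # Run on HV2016-2, elevated PowerShell
    Enable-VMReplication -VMName 'SomeVM' `
        -ReplicaServerName 'HV2016-1' -ReplicaServerPort 443 `
        -AuthenticationType Certificate `
        -CertificateThumbprint '<thumbprint of the renewed cert>'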

Connect USB Btrfs HDD to Ubuntu Hyper-V virtual machine

I have an Ubuntu Hyper-V virtual machine running on Windows 10 Pro. I also have a Btrfs formatted HDD (taken from a Synology NAS) connected through USB to the Windows 10 machine.

The Windows 10 machine sees the HDD: it appears in Disk Management, but no drive letter is assigned (I assume because Windows cannot read the Btrfs partition). The HDD is marked as offline (required for USB passthrough).
On the Ubuntu VM I have enabled enhanced session mode. I also added a new hard drive to the VM that I mapped to the USB HDD. The hard drive had to be added in the virtual machine’s settings after the virtual machine was started (if I add the hard drive before starting the VM, the VM does not start, because Hyper-V cannot create a checkpoint due to the attached USB hard drive).
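For reference, my understanding of the passthrough attach as PowerShell (I actually did this through the GUI; the VM name and disk number are assumptions):

    # Windows 10 host, elevated PowerShell
    Get-Disk                                # find the USB disk's number, e.g. 2
    Set-Disk -Number 2 -IsOffline $true     # passthrough requires the disk to be offline
    Add-VMHardDiskDrive -VMName 'ubuntu' -ControllerType SCSI -DiskNumber 2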

Where can I see the USB hard drive in Ubuntu, or do I need to mount it manually?
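If manual mounting is the way, this is roughly what I would expect to run (assuming the disk shows up as /dev/sdb with the Btrfs data on the first partition; a disk taken from a Synology NAS may well have a different layout):

    lsblk                                   # look for the passed-through disk
    sudo mkdir -p /mnt/nas
    sudo mount -t btrfs /dev/sdb1 /mnt/nas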

Or does USB passthrough still not work properly in Hyper-V in 2019, so that I have to switch to VMware or VirtualBox?

OpenStack Neutron Hyper-V agent – unable to bind port

I have a fully working OpenStack Rocky system that I built using OpenStack-Ansible on Ubuntu 18.04. I am using it as a PoC to evaluate it for my datacenter, which has a large Hyper-V investment.

I am using the ML2 Linuxbridge agent with my KVM hosts, with VLAN, VXLAN, and FLAT networking all working great.

While adding some Hyper-V nodes into the mix, most things work fine: I can provision new instances from an image, Cinder volumes mount via iSCSI, and the instances boot successfully. The issue is that the vNIC on the Hyper-V virtual switch remains in a disconnected state and does not get tagged with a VLAN when it should. The port also shows as “down” in the Horizon web UI and on the command line.

On my neutron server containers, I have installed the “networking-hyperv” driver. My /etc/neutron/plugins/ml2/ml2_conf.ini file has the following:

    [ml2]
    extension_drivers = port_security,qos
    mechanism_drivers = linuxbridge,hyperv
    tenant_network_types = vxlan,flat,vlan
    type_drivers = flat,vlan,vxlan,local

Here are my Neutron Server Container Logs:

I think the key line is: Device fbb9010c-3326-4689-b46c-c3393e43f23b has no active binding in host None

2019-04-14 02:10:59.141 805 DEBUG neutron.db.db_base_plugin_common [req-c40d2349-2ade-4367-a017-181e6644c1fb 202a012dda274d0a8e974e22690e340f fde3db651e3a424a95a34bef449949a3 - default default] Allocated IP 172.31.60.62 (7559f5d9-bf1e-431a-b947-af8cc85bcf91/0c59baf1-3880-4b6c-8198-037923ec8cfb/fbb9010c-3326-4689-b46c-c3393e43f23b) _store_ip_allocation /openstack/venvs/neutron-18.1.4/lib/python2.7/site-packages/neutron/db/db_base_plugin_common.py:121
2019-04-14 02:10:59.151 805 DEBUG neutron.db.db_base_plugin_common [req-c40d2349-2ade-4367-a017-181e6644c1fb 202a012dda274d0a8e974e22690e340f fde3db651e3a424a95a34bef449949a3 - default default] Allocated IP 2001:470:3ab1:ff3c::11 (7559f5d9-bf1e-431a-b947-af8cc85bcf91/d6704a54-bdd2-4ced-a1c6-ea425e87f772/fbb9010c-3326-4689-b46c-c3393e43f23b) _store_ip_allocation /openstack/venvs/neutron-18.1.4/lib/python2.7/site-packages/neutron/db/db_base_plugin_common.py:121
2019-04-14 02:10:59.826 805 DEBUG neutron.db.provisioning_blocks [req-c40d2349-2ade-4367-a017-181e6644c1fb 202a012dda274d0a8e974e22690e340f fde3db651e3a424a95a34bef449949a3 - default default] Transition to ACTIVE for port object fbb9010c-3326-4689-b46c-c3393e43f23b will not be triggered until provisioned by entity DHCP. add_provisioning_component /openstack/venvs/neutron-18.1.4/lib/python2.7/site-packages/neutron/db/provisioning_blocks.py:73
2019-04-14 02:10:59.964 805 DEBUG neutron.api.rpc.handlers.resources_rpc [req-c40d2349-2ade-4367-a017-181e6644c1fb 202a012dda274d0a8e974e22690e340f fde3db651e3a424a95a34bef449949a3 - - -] Pushing event updated for resources: {'Port': ['ID=fbb9010c-3326-4689-b46c-c3393e43f23b,revision_number=2']} push /openstack/venvs/neutron-18.1.4/lib/python2.7/site-packages/neutron/api/rpc/handlers/resources_rpc.py:241
2019-04-14 02:11:02.847 1024 INFO neutron.wsgi [req-f5b3d393-7550-477d-9dac-118149b09680 343681e3a838455796d46e4449625438 1868e5b19c6443c7a0ec010ece16447d - default default] 192.168.41.6,192.168.41.58 "PUT /v2.0/ports/fbb9010c-3326-4689-b46c-c3393e43f23b HTTP/1.1" status: 200  len: 1169 time: 2.3884900
2019-04-14 02:11:03.366 1024 INFO neutron.notifiers.nova [-] Nova event response: {u'status': u'completed', u'tag': u'fbb9010c-3326-4689-b46c-c3393e43f23b', u'name': u'network-changed', u'server_uuid': u'4286e04d-84f7-4e1d-9ad6-d150debafca1', u'code': 200}
2019-04-14 02:11:03.438 788 INFO neutron.wsgi [req-e491113e-0eec-4688-bbb4-835fa2473c0f 343681e3a838455796d46e4449625438 1868e5b19c6443c7a0ec010ece16447d - default default] 192.168.41.6,192.168.41.58 "GET /v2.0/floatingips?fixed_ip_address=2001%3A470%3A3ab1%3Aff3c%3A%3A11&port_id=fbb9010c-3326-4689-b46c-c3393e43f23b HTTP/1.1" status: 200  len: 212 time: 0.0544460
2019-04-14 02:11:04.616 1024 INFO neutron.wsgi [req-c3b90556-cd97-40d8-90d5-ac7fbf19247b 202a012dda274d0a8e974e22690e340f fde3db651e3a424a95a34bef449949a3 - default default] 192.168.41.151,192.168.41.58 "GET /v2.0/floatingips?tenant_id=fde3db651e3a424a95a34bef449949a3&port_id=fbb9010c-3326-4689-b46c-c3393e43f23b HTTP/1.1" status: 200  len: 212 time: 0.0501890
2019-04-14 02:11:21.842 1024 INFO neutron.wsgi [req-b7271e74-bffe-40fc-b7c5-9f50c2496626 202a012dda274d0a8e974e22690e340f fde3db651e3a424a95a34bef449949a3 - default default] 192.168.41.151,192.168.41.58 "GET /v2.0/floatingips?tenant_id=fde3db651e3a424a95a34bef449949a3&port_id=fbb9010c-3326-4689-b46c-c3393e43f23b HTTP/1.1" status: 200  len: 212 time: 0.0771079
2019-04-14 02:11:45.530 790 DEBUG neutron.plugins.ml2.rpc [req-feb90349-b6da-4c5f-9d6b-027bc4b6e36b - - - - -] Device fbb9010c-3326-4689-b46c-c3393e43f23b details requested by agent hyperv_HOST-H with host None get_device_details /openstack/venvs/neutron-18.1.4/lib/python2.7/site-packages/neutron/plugins/ml2/rpc.py:79
2019-04-14 02:11:45.731 790 DEBUG neutron.plugins.ml2.db [req-feb90349-b6da-4c5f-9d6b-027bc4b6e36b - - - - -] For port fbb9010c-3326-4689-b46c-c3393e43f23b, host HOST-H, got binding levels [<neutron.plugins.ml2.models.PortBindingLevel[object at 7f505617b910] {port_id=u'fbb9010c-3326-4689-b46c-c3393e43f23b', host=u'HOST-H', level=0, driver=u'hyperv', segment_id=u'598c754b-1b2a-4865-9211-ef3a1e5ed95c'}>] get_binding_levels /openstack/venvs/neutron-18.1.4/lib/python2.7/site-packages/neutron/plugins/ml2/db.py:77
2019-04-14 02:11:45.753 790 DEBUG neutron.plugins.ml2.rpc [req-feb90349-b6da-4c5f-9d6b-027bc4b6e36b - - - - -] Device fbb9010c-3326-4689-b46c-c3393e43f23b has no active binding in host None _get_device_details /openstack/venvs/neutron-18.1.4/lib/python2.7/site-packages/neutron/plugins/ml2/rpc.py:133
2019-04-14 02:11:49.538 1024 INFO neutron.wsgi [req-4213915d-3fee-40ff-8921-439c9ada3a30 202a012dda274d0a8e974e22690e340f fde3db651e3a424a95a34bef449949a3 - default default] 192.168.41.151,192.168.41.58 "GET /v2.0/floatingips?tenant_id=fde3db651e3a424a95a34bef449949a3&port_id=fbb9010c-3326-4689-b46c-c3393e43f23b HTTP/1.1" status: 200  len: 212 time: 0.0534980
2019-04-14 02:12:07.359 788 INFO neutron.wsgi [req-6d6e94e6-9100-4923-894a-341a9b4be439 202a012dda274d0a8e974e22690e340f fde3db651e3a424a95a34bef449949a3 - default default] 192.168.41.151,192.168.41.58 "GET /v2.0/floatingips?tenant_id=fde3db651e3a424a95a34bef449949a3&port_id=fbb9010c-3326-4689-b46c-c3393e43f23b HTTP/1.1" status: 200  len: 212 time: 0.0527611
2019-04-14 02:12:29.106 789 INFO neutron.wsgi [req-507d6cf9-a93e-4988-948c-e409dc226de7 202a012dda274d0a8e974e22690e340f fde3db651e3a424a95a34bef449949a3 - default default] 192.168.41.151,192.168.41.58 "GET /v2.0/floatingips?tenant_id=fde3db651e3a424a95a34bef449949a3&port_id=fbb9010c-3326-4689-b46c-c3393e43f23b HTTP/1.1" status: 200  len: 212 time: 0.0640750

Here is my Neutron Hyper-V Agent Log:

I believe the key line here is: No port fbb9010c-3326-4689-b46c-c3393e43f23b defined on agent

2019-04-13 22:11:02.830 4220 DEBUG networking_hyperv.neutron.agent.layer2 [req-f5b3d393-7550-477d-9dac-118149b09680 343681e3a838455796d46e4449625438 1868e5b19c6443c7a0ec010ece16447d - - -] port_update received: fbb9010c-3326-4689-b46c-c3393e43f23b port_update C:\Program Files\Cloudbase Solutions\OpenStack\Nova\Python\lib\site-packages\networking_hyperv\neutron\agent\layer2.py:436
2019-04-13 22:11:02.846 4220 DEBUG networking_hyperv.neutron.agent.layer2 [req-f5b3d393-7550-477d-9dac-118149b09680 343681e3a838455796d46e4449625438 1868e5b19c6443c7a0ec010ece16447d - - -] No port fbb9010c-3326-4689-b46c-c3393e43f23b defined on agent. port_update C:\Program Files\Cloudbase Solutions\OpenStack\Nova\Python\lib\site-packages\networking_hyperv\neutron\agent\layer2.py:449
2019-04-13 22:11:44.890 4220 INFO networking_hyperv.neutron.agent.layer2 [-] Hyper-V VM vNIC added: fbb9010c-3326-4689-b46c-c3393e43f23b
2019-04-13 22:11:45.530 4220 DEBUG networking_hyperv.neutron.agent.layer2 [req-feb90349-b6da-4c5f-9d6b-027bc4b6e36b - - - - -] Agent loop has new devices! _work C:\Program Files\Cloudbase Solutions\OpenStack\Nova\Python\lib\site-packages\networking_hyperv\neutron\agent\layer2.py:427
2019-04-13 22:11:45.765 4220 INFO networking_hyperv.neutron.agent.layer2 [req-feb90349-b6da-4c5f-9d6b-027bc4b6e36b - - - - -] Adding port fbb9010c-3326-4689-b46c-c3393e43f23b
2019-04-13 22:11:45.765 4220 DEBUG networking_hyperv.neutron.agent.layer2 [req-feb90349-b6da-4c5f-9d6b-027bc4b6e36b - - - - -] Missing port_id from device details: fbb9010c-3326-4689-b46c-c3393e43f23b. Details: {'device': 'fbb9010c-3326-4689-b46c-c3393e43f23b', 'no_active_binding': True} _treat_devices_added C:\Program Files\Cloudbase Solutions\OpenStack\Nova\Python\lib\site-packages\networking_hyperv\neutron\agent\layer2.py:374
2019-04-13 22:11:45.765 4220 DEBUG networking_hyperv.neutron.agent.layer2 [req-feb90349-b6da-4c5f-9d6b-027bc4b6e36b - - - - -] Remove the port from added ports set, so it doesn't get reprocessed. _treat_devices_added C:\Program Files\Cloudbase Solutions\OpenStack\Nova\Python\lib\site-packages\networking_hyperv\neutron\agent\layer2.py:376

If I happen to “catch” the VM in the Hyper-V console while it’s being provisioned, I can attach the vNIC to the Hyper-V switch, tick the box and tag the VLAN, and the system will get an IP from the DHCP agent, pull its hostname and SSH key from the metadata agent, and everything else – although OpenStack will still show the port status as “down”.
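For reference, I have been checking the binding state from the API side along these lines (standard openstack CLI; the column names are the ones port show normally reports):

    openstack port show fbb9010c-3326-4689-b46c-c3393e43f23b \
        -c status -c binding_host_id -c binding_vif_type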

There’s not a lot of info out there on the specific nuances of Nova with Hyper-V. If I need to look into the Open vSwitch agent instead, I’m not opposed to going that direction either. I also realize that I may need to reach out to Cloudbase directly for better support.

Got 3 HP ProLiant DL365 G1 for really, really cheap. Can I run Hyper-V or ESXi?

Title says it all. I got them really cheap, including the rack, and I was thinking of setting them up to virtualize stuff for home use and for some labs. First I tried to install Windows Server 2019 (I have an MSDN subscription, so licenses are OK) and got an error. Then every single image I tried to boot from (Windows Server from 2003 to 2019, the latest Ubuntu Server, ESXi from 5.5 upwards) gave the same error: Illegal Opcode.

Since these servers are really old, does anyone remember what I can run on them? Do I need to do anything special that a guy like me, who has only ever dealt with PCs, has never done?

Ubuntu VM will not boot with a kernel newer than 4.15.0-43 under the Microsoft Hyper-V hypervisor on Windows Server 2012 R2

I have two Ubuntu Server 18.04 LTS VMs running on Hyper-V on Windows Server 2012 R2. Additionally, I have two Windows VMs (one Windows 10, one Windows 7) on the same Hyper-V server. The Windows VMs have not experienced this problem.

Both Ubuntu VMs exhibit the following symptoms when booted with a kernel newer than 4.15.0-43 (specifically -46 or -47):

The kernel boots very, very slowly compared to normal. It typically hangs for a while around the 6-second mark of the kernel boot and eventually continues.

The next line it hangs on for a while is crng init done, followed by a third pause.

After a long time, the virtual machine fails to find the virtual hard drive and drops into a BusyBox (initramfs) recovery shell.

I can boot from a Linux live CD (Ubuntu 18.04 LTS), mount the partition of the virtual hard drive and access all the files. Additionally, I can confirm that the UUID shown in the recovery shell is correct.
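Roughly what that check looks like from the live CD (the device name /dev/sda1 is an assumption; it depends on how the disk enumerates):

    lsblk                                  # find the virtual hard drive, e.g. /dev/sda
    sudo blkid /dev/sda1                   # this UUID matches the one the recovery shell reports
    sudo mount /dev/sda1 /mnt && ls /mnt   # all files are accessible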

This started after the Windows Server updates KB4493451, KB890830 (April), KB890830 (March) and KB4489891. However, I cannot confirm whether the Linux VMs were rebooted between the application of those updates and now. I can confirm that the problem first occurred after the reboot that followed the installation of KB890830. (We haven’t rebooted since the installation of KB4493451.)

What might be the problem, and how can we ensure that when the machine (either physical or virtual) reboots, all the Linux VMs start up properly?

Thank you very much.

Hyper-V vhdx is much larger than the contents

I have a virtual machine that has a dynamically expanding vhdx as its C: drive, or ‘thin provisioned’ to speak in terms of other hypervisors.

I’m running Windows Server 2016 on it, as a DC to be exact. Over the course of two years the vhdx file grew: the disk inside the virtual machine lists as 18 GB, while the vhdx file itself is around 128 GB.

What caused this, and how can I shrink the vhdx file?
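As far as I understand, a dynamic vhdx grows with writes but never shrinks on its own when files are deleted, so I expect some compaction step is needed. A sketch of what I assume that looks like (PowerShell on the host, with the VM shut down first; the path is hypothetical):

    # Compact the dynamic disk; it must be detached or mounted read-only
    Optimize-VHD -Path 'C:\VMs\dc01.vhdx' -Mode Full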

Hyper-V port-forwarding a Minikube service sitting on nested VirtualBox 6 virtualization

The setup is: Windows 10 host with Hyper-V | Vagrant | CentOS 7 guest | nested virtualization with VirtualBox 6 | Minikube on top | port-forwarding the application to the host.

I am targeting the Default Switch in Hyper-V. No matter what I do, I can’t successfully expose the Tomcat service that I can curl on the VirtualBox CentOS 7 host (e.g. curl 127.0.0.1:8001). How can I achieve that? Vagrant can’t do port forwarding with Hyper-V.
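For the record, the direction I have been sketching (the service name, ports and guest IP are assumptions for illustration, not my real values):

    # Inside the CentOS 7 guest: rebind the forward so it listens beyond loopback
    kubectl port-forward --address 0.0.0.0 svc/tomcat 8001:8080
    sudo firewall-cmd --add-port=8001/tcp

    # From the Windows 10 host, using the guest's IP on the Default Switch
    curl http://192.168.137.50:8001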

Not able to open a port between Windows 10 and a locally hosted Ubuntu VM created on Hyper-V

This started when I was trying to use docker swarm join from the Ubuntu VM (created using Hyper-V) to my Windows 10 Docker engine. I am using the default port 2377, and I keep getting a connection refused error.

Now I am just trying to telnet over port 2377 from the Ubuntu VM to Windows 10, or from Windows to Ubuntu, and it fails both ways, while telnet works fine on port 80.

I added inbound and outbound rules for both TCP and UDP on Windows 10, used “ufw allow 2377/tcp” on the Ubuntu VM, and also opened the port there for both TCP and UDP. It is still not working.
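Roughly what I have in place, plus the connectivity test I have been running (the VM’s IP is a placeholder):

    # Windows 10, elevated PowerShell: the firewall rule and a direct port test
    New-NetFirewallRule -DisplayName 'Swarm 2377' -Direction Inbound -Protocol TCP -LocalPort 2377 -Action Allow
    Test-NetConnection -ComputerName 192.168.0.50 -Port 2377

    # Ubuntu VM: allow the port and check whether anything is even listening
    sudo ufw allow 2377/tcp
    sudo ss -tlnp | grep 2377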

Any ideas?

Sluggish Linux VMs on Hyper-V

I am experiencing very sluggish responsiveness from Linux VMs (Debian) running on Hyper-V on a Windows host (Server 2012 R2).

The VMs themselves are not running at 100% CPU and the host itself reports very low CPU usage (10%), yet each new VM I create seems to run slower and slower.

I’d understand if the host CPU was running high, or the VM itself, but that is not the case.

It seems that with each VM I create, responsiveness gets worse, which I understand points to CPU resources, but I cannot see any indicators that this is the case.

Responsiveness of the host seems fine, no issues. Memory usage is OK, disk I/O isn’t high… I’m not sure where to look.
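For completeness, these are the host-side checks I have been going by, all of which look healthy (PowerShell on the host; the counter path is the standard Hyper-V one):

    # Per-VM CPU and memory as the hypervisor sees them
    Get-VM | Select-Object Name, State, CPUUsage, MemoryAssigned, MemoryDemand

    # Actual load on the logical processors (host Task Manager does not include guest CPU time)
    Get-Counter '\Hyper-V Hypervisor Logical Processor(_Total)\% Total Run Time'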

Any pointers would be appreciated, thanks.