Problems with conjure-up OpenStack to ESXi 6.7 hosts, neutron-api is not deploying to LXD

I have an ESXi 6.7 with 6 VMs.
One of the VMs is configured with Ubuntu 18.04 updated.
I am following these instructions to the letter.
I use virsh as the power type:

Power type: Virsh (virtual system)
Virsh Address: esx://root@>
Virsh password: xxxxx
Virsh VM ID: “VM name on ESXi”

MAAS is controlling the VMs perfectly, I can deploy Ubuntu18.04 with no problems and SSH to it.
All machines are in “Ready state”

I have tried with these versions of conjure-up:

sudo snap install conjure-up --classic
sudo snap install conjure-up --classic --beta
sudo snap install conjure-up --classic --edge
sudo snap refresh conjure-up --classic --edge
sudo snap refresh conjure-up --classic --beta

It always fails when Juju is launching the neutron-api unit in the LXD container.

I am also trying with this guide:

I have tried all of these Juju versions:

sudo snap install juju --classic
sudo snap install juju --beta --classic
sudo snap install juju --edge --classic

After following this page:
The moment I launch this command, Juju stops and I lose connectivity to the host where neutron-api should be launched.

juju deploy --to lxd:1 --config neutron.yaml neutron-api

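In case it helps, these are the kinds of diagnostics I run from the Juju client while this happens (a sketch saved to a file here, since it only makes sense against my controller; machine 1 is the --to lxd:1 target from the deploy command):

```shell
# Sketch: diagnostics for a stuck neutron-api deploy to an LXD container.
cat > /tmp/debug-neutron.sh <<'EOF'
#!/bin/sh
# Overall model state, including the stuck neutron-api unit.
juju status --format=yaml
# Replay the log lines from the failed deployment.
juju debug-log --replay --include neutron-api
# Inspect the LXD containers on machine 1 directly.
juju ssh 1 -- lxc list
EOF
chmod +x /tmp/debug-neutron.sh
echo "debug script saved"
```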

Help, I need somebody. Help.

Setting up a new Ubuntu server through ESXi (netplan not working)

No doubt I am doing something wrong, but I am not that familiar with Ubuntu, or even Linux.

I have deployed Ubuntu Server via ESXi on an OVH dedicated server.

I’ve done this a million times with Windows machines, but it is a lot easier there…

[image: this is what the settings would be on Windows]

How would I format this on netplan now?
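From what I can tell, the Windows-style static settings would look roughly like this in netplan (all addresses hypothetical; substitute the IP, gateway, and DNS from the OVH panel, and check the interface name with `ip link`; VMware guests usually show up as `ens160` or `ens192`):

```yaml
# /etc/netplan/01-netcfg.yaml  (apply with: sudo netplan apply)
network:
  version: 2
  ethernets:
    ens160:                      # assumed interface name
      dhcp4: false
      addresses:
        - 203.0.113.10/24        # static IP with netmask as a prefix length
      gateway4: 203.0.113.254    # default gateway
      nameservers:
        addresses: [213.186.33.99, 8.8.8.8]   # DNS servers
```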

Shutdown ESXi 6.7 with a script via SSH without entering maintenance mode

I’m trying to write a script that connects over SSH to an ESXi 6.7 host, shuts down the host, and also shuts down the VMs according to the current system shutdown policy.

I’m running Dell customized image ESXi 6.7 in a Dell R710 with a dual Xeon X5650 and 144GB RAM.

In fact, what I want is the same behavior I get with:

Shutdown via GUI

Shutdown via console

I have ssh enabled in the server.

I have already tried:

1) (it just sits there indefinitely).

2) /bin/ (it also sits there indefinitely).

3) halt (shuts down the server, but does not shut down the VMs)

I also tried:

esxcli system shutdown poweroff --reason I_want_IT 

but the system must be in maintenance mode, and I want to do it without entering maintenance mode.

I then discovered this thread here on Server Fault, but it does not work on my server (I suppose it only works on ESXi 5):

How do I shutdown the Host over ssh on ESXi 5 so it shuts down the guests properly?

I think I’m too dumb to discover on my own how to do it, because I presume it must be a simple thing to do.
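For reference, this is the kind of script I'm aiming at, pieced together from the vim-cmd/esxcli commands I've found (a sketch, assuming VMware Tools is installed in the guests so a clean guest shutdown works; written to a file here since it only makes sense on the ESXi host itself):

```shell
# Sketch of a host shutdown script for the ESXi busybox shell.
cat > /tmp/shutdown-host.sh <<'EOF'
#!/bin/sh
# Ask every registered VM to shut its guest OS down cleanly
# (vim-cmd vmsvc/getallvms prints a header row, then one row per VM ID).
for vmid in $(vim-cmd vmsvc/getallvms | awk 'NR>1 {print $1}'); do
    vim-cmd vmsvc/power.shutdown "$vmid"
done
# Wait until no VM worlds are left running (cap at ~5 minutes).
tries=0
while [ -n "$(esxcli vm process list)" ] && [ "$tries" -lt 60 ]; do
    sleep 5
    tries=$((tries + 1))
done
# Power the host off; this does not require maintenance mode.
poweroff
EOF
chmod +x /tmp/shutdown-host.sh
echo "script written"
```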

Ubuntu 18.04 on VMware ESXi 6.7 and Nvidia: device available, unable to install drivers

I have ESXi 6.7 installed, with my Nvidia Quadro P3200 set as passthrough on the host. The guest runs Ubuntu Desktop 18.04, and lspci shows the Nvidia device. I installed the latest driver (version 430) without issues, and the CUDA toolkit 10.0 also installed without errors. However, nvidia-smi reports that no devices were found, and nvcc --version tells me to install the CUDA toolkit.

When I do sudo prime-select nvidia, it tells me nvidia is already prime.

I’m unable to start NVIDIA X Server Settings either.

I’ve installed the Nvidia driver and CUDA on Ubuntu outside of a VM many times, so I'm not sure why this is such a problem. I have tried over a dozen times, following various help articles. I must be missing some step.

I have installed and reinstalled Ubuntu on the VM several times trying to get this to work.
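In case it's relevant, one tweak that keeps coming up in Nvidia passthrough threads (I'm not sure it applies to my card) is hiding the hypervisor from the guest and enlarging the 64-bit MMIO window in the VM's .vmx file, edited while the VM is powered off; the size value is a guess to adjust per card:

```
hypervisor.cpuid.v0 = "FALSE"
pciPassthru.use64bitMMIO = "TRUE"
pciPassthru.64bitMMIOSizeGB = "64"
```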

Any help would be appreciated.


Best scenario for using storage with ESXi and a Linux VM

I’m looking forward to discussing my file server architecture. Today I have the following scenario: one Windows 2008 R2 server as AD, one Windows 2012 R2 server as a file server, and 27TB of storage connected via iSCSI to the Windows 2012 R2 server (NTFS file system). Both Windows servers are VMs hosted on ESXi 6.0 U3. Today this file server holds around 12TB of my company’s daily-use files.

We are migrating our infrastructure from Windows to Linux, and this is a good time to rethink the architecture. So I set up two possible scenarios and would like to share them with you and discuss which gives the best performance:

Scenario 1: storage connected via iSCSI directly to Linux (the ESXi VM) and formatted with ext4;

Scenario 2: storage connected via iSCSI to ESXi, with a 27TB VMDK created on it (using VMFS-5) and attached to the Linux VM.

In both cases the contents of the storage will be shared over SMB to roughly 150 Windows workstations.
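To make Scenario 1 concrete, this is roughly what I imagine on the Linux VM's side (a sketch with a hypothetical portal address and device name, written to a file here; it assumes the open-iscsi package is installed, and the mkfs step is destructive, run once only):

```shell
# Sketch of the Linux-side iSCSI attach for Scenario 1.
cat > /tmp/attach-iscsi.sh <<'EOF'
#!/bin/sh
# Discover targets on the storage array and log in (hypothetical portal IP).
iscsiadm -m discovery -t sendtargets -p 192.0.2.50
iscsiadm -m node -p 192.0.2.50 --login
# Format the new block device once (destroys data!) and mount it;
# /dev/sdb is an assumption, check dmesg/lsblk for the real name.
mkfs.ext4 /dev/sdb
mkdir -p /srv/files
mount /dev/sdb /srv/files
EOF
chmod +x /tmp/attach-iscsi.sh
echo "sketch saved"
```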

Migrate VMware workstation Virtual Network settings to VMware ESXi

I’m trying to migrate my VM guests from VMware Workstation Pro to a physical machine running VMware ESXi 6.7.

Now I’m struggling to find out what the equivalent virtual network settings are for my ESXi.

The ESXi machine has one physical NIC at the moment, which is connected to my home router in DHCP mode and has this IP:

Other settings are pretty much at defaults.

These are my VMware Workstation Pro settings in the Virtual Network Editor: [image: my settings]

As you can see, I have four VMnets, each with a different subnet. NAT (VMnet8) and VMnet3 have DHCP enabled.

For more clarification, here is an example of what I’m trying to accomplish; I hope it isn't more confusing:

Right now in VMware Workstation Pro I have a pfSense guest with two network adapters: the first is connected to NAT (VMnet8) and gets Internet from it, and the second is on VMnet1, where pfSense has DHCP enabled. My Win7 guest, with its adapter connected to VMnet1, gets Internet from pfSense.

So in ESXi, instead of my old NAT, I want to have a vSwitch (am I correct?) that can pass IPs from my home router through to connected guests and act as a DHCP relay. (I don’t know if ESXi’s default vSwitch0 can do this?)

And I want three more subnets to use for internal network design. Two of them should be exactly like my old VMnet1 and VMnet2, with simple IP settings, so guests like pfSense can enable a DHCP server on them; the third should be a DHCP server itself and give IPs to its guests.
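As far as I can tell, the internal-only networks map to standard vSwitches with no uplink attached, each with a port group; here is my sketch of the esxcli side (names hypothetical, saved to a file since it only runs on the ESXi host). Note that, as far as I know, ESXi itself provides no NAT or DHCP service, so pfSense would still have to supply those:

```shell
# Sketch: recreate host-only VMnet-style networks as uplink-less vSwitches.
cat > /tmp/make-vswitches.sh <<'EOF'
#!/bin/sh
# Internal-only switch (no uplink attached), analogous to host-only VMnet1.
esxcli network vswitch standard add --vswitch-name=vSwitch1
esxcli network vswitch standard portgroup add --portgroup-name=VMnet1 --vswitch-name=vSwitch1
# Repeat for the second internal subnet.
esxcli network vswitch standard add --vswitch-name=vSwitch2
esxcli network vswitch standard portgroup add --portgroup-name=VMnet2 --vswitch-name=vSwitch2
EOF
chmod +x /tmp/make-vswitches.sh
echo "sketch saved"
```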

Again, I’m sorry if I couldn't be clearer.

Thank you in advance.

ESXi 6.7: How to make the management interface accessible only on the host machine’s LAN?

I have an ESXi 6.7 host and 6 public IPs from my colocation provider. Currently one of the IPs is used to access the ESXi web client and another to access the pfSense router that runs as a VM on the machine. Is there a way to make it so that I can’t access the web UI from the internet, only from the VPN that is set up through pfSense?
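One idea I'm considering is ESXi's own firewall: restrict the web UI ruleset to the VPN subnet only. A sketch (the subnet is hypothetical, saved to a file since it runs on the host; I'd be careful not to lock myself out):

```shell
# Sketch: limit the ESXi web UI firewall ruleset to one subnet.
cat > /tmp/restrict-webui.sh <<'EOF'
#!/bin/sh
# Stop allowing the web UI ruleset from all addresses...
esxcli network firewall ruleset set --ruleset-id=webAccess --allowed-all=false
# ...and allow only the VPN subnet behind pfSense (hypothetical range).
esxcli network firewall ruleset allowedip add --ruleset-id=webAccess --ip-address=10.8.0.0/24
EOF
chmod +x /tmp/restrict-webui.sh
echo "sketch saved"
```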

Thank you. I’m sorry if this seems too complicated.

ESXi 6.7 – 3D Support only available with Red Hat or Ubuntu

I have a server running ESXi 6.7U2 which serves 3 VMs on my local network.

I have installed Arch Linux on two of them (using the "Other 3.x or higher Linux" option), and the Enable 3D Support option is greyed out. I tried creating a new VM and found that if I choose the Guest OS to be one of the following, the option becomes available in the VM settings:

  1. CentOS 7 (64-bit)
  2. CentOS 8 (64-bit)
  3. Red Hat Enterprise Linux 8 (64-bit)
  4. Red Hat Fedora (64-bit)
  5. Oracle Linux 8 (64-bit)
  6. Ubuntu Linux (64-bit)

All the other options, including Asianux, CoreOS, Debian, Novell, Other X.x or higher kernel Linux, Other Linux, SUSE Linux Enterprise, openSUSE, and VMware Photon OS, have the 3D option greyed out.

Of the list that does support it, all except Ubuntu are based on Red Hat, so it makes sense that if one supports it, the others do as well.

It’s also strange that Ubuntu supports it but Debian doesn’t, given that Ubuntu is based on Debian and packages from one usually work without issues on the other.

Is there a way to enable 3D support for these other distros?
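One workaround I'm experimenting with, since the checkbox seems keyed to the guest-OS identifier rather than the installed OS: set the Arch VMs' Guest OS to Ubuntu Linux (64-bit), or add the 3D flag directly to the .vmx with the VM powered off (this is a guess on my part, not a documented path):

```
mks.enable3d = "TRUE"
```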