{"id":177,"date":"2020-11-16T18:12:53","date_gmt":"2020-11-16T10:12:53","guid":{"rendered":"https:\/\/blog.swineson.me\/en\/?p=177"},"modified":"2020-11-30T19:15:46","modified_gmt":"2020-11-30T11:15:46","slug":"vsan-7-0u1-cluster-rebuild","status":"publish","type":"post","link":"https:\/\/blog.swineson.me\/en\/vsan-7-0u1-cluster-rebuild\/","title":{"rendered":"vSAN 7.0U1 Cluster Rebuild: A Firsthand Experience"},"content":{"rendered":"<h1>How It Started<\/h1>\n<p>I screwed up a vCenter instance. Actually, it is pretty easy to screw up the state-of-the-art hypervisor controller from its beautifully designed web UI, using the appealing buttons that have always been there. The process requires only two simple steps:<\/p>\n<p><!--more--><\/p>\n<ol>\n<li>Enable vCenter HA<\/li>\n<li>Replace the machine SSL certificate<\/li>\n<\/ol>\n<p><a href=\"https:\/\/docs.vmware.com\/en\/VMware-vSphere\/7.0\/com.vmware.vsphere.avail.doc\/GUID-CDC20BD4-E0CE-45D9-B73B-9AA795DA5FDD.html\" target=\"_blank\" rel=\"noopener noreferrer\">The vCenter HA documentation<\/a> does state &#8220;if you want to use custom certificates, you have to remove the vCenter HA configuration&#8221; using the smallest font size possible, but the warning is mentioned nowhere in the documentation on replacing SSL certificates, where it actually belongs. The UI won&#8217;t stop you from playing with fire, either.<\/p>\n<p>If you have enough time and a lab environment, give it a try. The vCenter VM will reboot a few times before it completely stops working. It will still spin up, but you won&#8217;t be able to log in anymore. 
You&#8217;ll see a very unhelpful error message on the login screen:<\/p>\n<blockquote><p>An error occurred when processing the metadata during vCenter Single Sign-On setup &#8211; Failed to connect to VMware Lookup Service<\/p><\/blockquote>\n<p>By the way, don&#8217;t bother trying <a href=\"https:\/\/kb.vmware.com\/s\/article\/2097936\" target=\"_blank\" rel=\"noopener noreferrer\">the vSphere Certificate Manager command-line tool<\/a> to unscrew the situation; that tool will refuse to do anything if it detects itself running in an HA vCenter cluster. So, if you don&#8217;t have any backup or snapshot to revert to, your vCenter is dead.<\/p>\n<p>Things were a little more complicated in my case: the dead vCenter VM ran on a 3-node hyperconverged cluster with HA, DRS and vSAN. With vCenter down, I now had a problem.<\/p>\n<h1>How It&#8217;s Going<\/h1>\n<p>Luckily, the ESXi hypervisor is largely independent from vCenter, so I could still log in and do things on the individual hypervisors. Now I had to do something to (hopefully) make the situation better.<\/p>\n<h2>Preparing<\/h2>\n<p>The first obvious thing I did was to shut down the old vCenter VMs. They did not work anymore and might have interfered with the recovery process.<\/p>\n<p>The next thing I did was back up all important data on the cluster. Backing up an ESXi hypervisor is easy: mount some NFS storage on each hypervisor, and manually move\/copy the VMs over. vMotion was not available, so everything had to be done by hand while the VMs were shut down.<\/p>\n<p>Then I shut down as many VMs as I could. 
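<\/p>\n<p>(The NFS mount mentioned above can be done from each host&#8217;s ESXi shell; a sketch, where the server address, export path and datastore name are placeholders for your environment:)<\/p>\n<pre class=\"lang:sh decode:true\">esxcli storage nfs add -H nfs.corp.contoso.com -s \/export\/backup -v backup-nfs<\/pre>\n<p>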
Although it might be possible to rebuild the cluster while keeping some VMs running, I recommend against it.<\/p>\n<p>Prepare a vCenter installer ISO on your workstation, and let&#8217;s get into the recovery process.<\/p>\n<h2>The First (Unsuccessful) Attempt<\/h2>\n<p>Being rather unfamiliar with the new vSphere 7.0, my initial strategy was to just <a href=\"https:\/\/blog.swineson.me\/en\/setting-up-an-esxi-cluster\/\" target=\"_blank\" rel=\"noopener noreferrer\">reinstall the vCenter<\/a> directly onto the vSAN storage, <a href=\"https:\/\/kb.vmware.com\/s\/article\/1004775\" target=\"_blank\" rel=\"noopener noreferrer\">take over the hosts<\/a>, rebuild the distributed switch by hand, and simply <a href=\"https:\/\/kb.vmware.com\/s\/article\/2059194\" target=\"_blank\" rel=\"noopener noreferrer\">re-configure the cluster<\/a>. The process did not work: while adding the first host, vCenter reported &#8220;Found host(s) esxi02.corp.contoso.com, esxi03.corp.contoso.com participating in the vSAN service which is not a member of this host&#8217;s vCenter cluster&#8221;, and after a few seconds, vCenter froze. Later investigation showed that vCenter detected the host had vSAN configured, so it overwrote a single-node vSAN configuration onto that host, breaking the storage it was running on.<\/p>\n<p>Now I had two problems: a dead vCenter, and a 3-node vSAN cluster in a split-brain situation.<\/p>\n<h2>The Second (Successful) Attempt<\/h2>\n<p>Knowing that vSAN won&#8217;t automatically delete any inaccessible\/broken object, I was confident that all my data was still there; it was just the vSAN configuration that needed to be fixed to at least keep the storage running. 
After some searching on the Internet, I found out that you can actually <a href=\"https:\/\/www.virtuallyghetto.com\/2014\/07\/does-reinstalling-esxi-with-an-existing-vsan-datastore-wipe-your-data.html\" target=\"_blank\" rel=\"noopener noreferrer\">manage all vSAN configuration on the ESXi hypervisor host<\/a>! There is <a href=\"https:\/\/docs.vmware.com\/en\/VMware-vSphere\/7.0\/com.vmware.vsphere.vsan-monitoring.doc\/GUID-7799D2D7-2513-4372-92EA-4A1FB510E012.html\" target=\"_blank\" rel=\"noopener noreferrer\">some not-very-helpful official documentation<\/a> on the <span class=\"lang:sh highlight:0 decode:true crayon-inline \">esxcli vsan<\/span> subcommand, but it was enough to get me on the right track.<\/p>\n<p>I enabled SSH on all the hosts and issued this command on every host:<\/p>\n<pre class=\"lang:sh decode:true \">esxcfg-advcfg -s 1 \/VSAN\/IgnoreClusterMemberListUpdates<\/pre>\n<p>This essentially told the vSAN agent running on every host to ignore everything sent by any vCenter. 
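<\/p>\n<p>(To double-check that the flag stuck on a host, you can read it back with the getter form of the same command; a quick sanity check, not part of my original session:)<\/p>\n<pre class=\"lang:sh decode:true\">esxcfg-advcfg -g \/VSAN\/IgnoreClusterMemberListUpdates<\/pre>\n<p>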
Now that the &#8220;manual transmission&#8221; mode is engaged, I started to recover the vSAN.<\/p>\n<p>First let&#8217;s confirm the status:<\/p>\n<pre class=\"lang:sh decode:true \">[root@esxi01:~] esxcli vsan cluster list\r\nCluster Information of 3a02d572-728d-482b-a94d-2245a6ec99d1\r\n   Enabled: true\r\n   Current Local Time: 2020-10-29T07:05:18Z\r\n   Local Node UUID: 9f7326ad-f815-45b1-a809-ece25fddc7ec\r\n   Local Node Type: NORMAL\r\n   Local Node State: MASTER\r\n   Local Node Health State: HEALTHY\r\n   Sub-Cluster Master UUID: 9f7326ad-f815-45b1-a809-ece25fddc7ec\r\n   Sub-Cluster Backup UUID:\r\n   Sub-Cluster UUID: 3a02d572-728d-482b-a94d-2245a6ec99d1\r\n   Sub-Cluster Membership Entry Revision: 0\r\n   Sub-Cluster Member Count: 1\r\n   Sub-Cluster Member UUIDs: 9f7326ad-f815-45b1-a809-ece25fddc7ec\r\n   Sub-Cluster Member HostNames: esxi01.corp.contoso.com\r\n   Sub-Cluster Membership UUID: 665dbc18-5bde-4cb6-a510-7c5185c78f3d\r\n   Unicast Mode Enabled: true\r\n   Maintenance Mode State: OFF\r\n   Config Generation: dadf3e7c-8162-4815-9d02-08af4d8c4c7b 2 2020-10-29T06:29:11.652\r\n\r\n[root@esxi03:~] esxcli vsan cluster list\r\nCluster Information of 3a02d572-728d-482b-a94d-2245a6ec99d1\r\n   Enabled: true\r\n   Current Local Time: 2020-10-29T07:09:48Z\r\n   Local Node UUID: 67874ba3-8fd5-463f-80fb-6a82910c5ff2\r\n   Local Node Type: NORMAL\r\n   Local Node State: MASTER\r\n   Local Node Health State: HEALTHY\r\n   Sub-Cluster Master UUID: 67874ba3-8fd5-463f-80fb-6a82910c5ff2\r\n   Sub-Cluster Backup UUID: 04e3bd93-2846-4474-bae7-e16b602e316f\r\n   Sub-Cluster UUID: 3a02d572-728d-482b-a94d-2245a6ec99d1\r\n   Sub-Cluster Membership Entry Revision: 2\r\n   Sub-Cluster Member Count: 2\r\n   Sub-Cluster Member UUIDs: 67874ba3-8fd5-463f-80fb-6a82910c5ff2, 04e3bd93-2846-4474-bae7-e16b602e316f\r\n   Sub-Cluster Member HostNames: esxi03.corp.contoso.com, esxi02.corp.contoso.com\r\n   Sub-Cluster Membership UUID: 3b5c9a5f-3063-68bb-eafc-0c42a1719576\r\n   
Unicast Mode Enabled: true\r\n   Maintenance Mode State: OFF\r\n   Config Generation: dd0af2e3-d7e0-4407-9a50-d87be61513b3 9 2020-10-22T08:59:00.661<\/pre>\n<p>We indeed had a split brain. Then we kick esxi01 out of the imaginary one-node cluster (a very slow process; have some patience) and re-join it with the correct <strong>sub-cluster UUID<\/strong> from the other hosts&#8217; config:<\/p>\n<pre class=\"lang:sh decode:true\">[root@esxi01:~] esxcli vsan cluster leave\r\n[root@esxi01:~] esxcli vsan cluster join -u 3a02d572-728d-482b-a94d-2245a6ec99d1\r\n[root@esxi01:~] esxcli vsan cluster list\r\nCluster Information of 3a02d572-728d-482b-a94d-2245a6ec99d1\r\n   Enabled: true\r\n   Current Local Time: 2020-10-29T07:09:55Z\r\n   Local Node UUID: 9f7326ad-f815-45b1-a809-ece25fddc7ec\r\n   Local Node Type: NORMAL\r\n   Local Node State: MASTER\r\n   Local Node Health State: HEALTHY\r\n   Sub-Cluster Master UUID: 9f7326ad-f815-45b1-a809-ece25fddc7ec\r\n   Sub-Cluster Backup UUID: \r\n   Sub-Cluster UUID: 3a02d572-728d-482b-a94d-2245a6ec99d1\r\n   Sub-Cluster Membership Entry Revision: 0\r\n   Sub-Cluster Member Count: 1\r\n   Sub-Cluster Member UUIDs: 9f7326ad-f815-45b1-a809-ece25fddc7ec\r\n   Sub-Cluster Member HostNames: esxi01.corp.contoso.com\r\n   Sub-Cluster Membership UUID: ab6a9a5f-2401-89af-99aa-0c42a171e24e\r\n   Unicast Mode Enabled: true\r\n   Maintenance Mode State: OFF\r\n   Config Generation: None 0 0.0<\/pre>\n<p>A vCenter-configured vSAN cluster runs in unicast mode (i.e. peer discovery depends on the IP list sent by the control plane), so we also need to synchronize the cluster&#8217;s IP address list on every host. 
Verify the VMKernel adapter for vSAN is set up on esxi01:<\/p>\n<pre class=\"lang:sh decode:true\">[root@esxi01:~] esxcli vsan network list\r\nInterface\r\n   VmkNic Name: vmk2\r\n   IP Protocol: IP\r\n   Interface UUID: 699fe1e6-eaba-49db-9d04-8859ed2b066f\r\n   Agent Group Multicast Address: 224.2.3.4\r\n   Agent Group IPv6 Multicast Address: ff19::2:3:4\r\n   Agent Group Multicast Port: 23451\r\n   Master Group Multicast Address: 224.1.2.3\r\n   Master Group IPv6 Multicast Address: ff19::1:2:3\r\n   Master Group Multicast Port: 12345\r\n   Host Unicast Channel Bound Port: 12321\r\n   Data-in-Transit Encryption Key Exchange Port: 0\r\n   Multicast TTL: 5\r\n   Traffic Type: vsan<\/pre>\n<p>If you don&#8217;t see the &#8220;vsan&#8221; traffic type in the output, reconfigure your VMKernel adapter. Since esxi02 and esxi03 already know each other, we can consolidate the list from the two hosts\u2026<\/p>\n<pre class=\"lang:sh decode:true\">[root@esxi02:~] esxcli vsan cluster unicastagent list\r\nNodeUuid                              IsWitness  Supports Unicast  IP Address       Port  Iface Name  Cert Thumbprint                                              SubClusterUuid\r\n------------------------------------  ---------  ----------------  --------------  -----  ----------  -----------------------------------------------------------  --------------\r\n67874ba3-8fd5-463f-80fb-6a82910c5ff2          0              true  192.168.1.201  12321              73:F4:93:D8:D8:2A:C0:D3:4F:A6:DF:4D:3D:BE:34:8C:15:D9:45:52  3a02d572-728d-482b-a94d-2245a6ec99d1\r\n9f7326ad-f815-45b1-a809-ece25fddc7ec          0              true  192.168.1.215  12321              05:B1:CF:D5:09:6A:05:7C:D7:C4:69:69:7A:85:04:90:51:D4:9A:D6  3a02d572-728d-482b-a94d-2245a6ec99d1\r\n\r\n[root@esxi03:~] esxcli vsan cluster unicastagent list\r\nNodeUuid                              IsWitness  Supports Unicast  IP Address       Port  Iface Name  Cert Thumbprint                                              SubClusterUuid\r\n------------------------------------  ---------  ----------------  --------------  -----  ----------  -----------------------------------------------------------  --------------\r\n9f7326ad-f815-45b1-a809-ece25fddc7ec          0              true  192.168.1.215  12321              05:B1:CF:D5:09:6A:05:7C:D7:C4:69:69:7A:85:04:90:51:D4:9A:D6  3a02d572-728d-482b-a94d-2245a6ec99d1\r\n04e3bd93-2846-4474-bae7-e16b602e316f          0              true  192.168.1.160  12321              6D:E4:62:CA:FB:17:96:41:97:F4:22:B9:8F:D8:B2:5E:93:0F:79:0D  3a02d572-728d-482b-a94d-2245a6ec99d1<\/pre>\n<p>then play them back onto esxi01 (if you have vSAN witness appliances, you need to change the arguments slightly here):<\/p>\n<pre class=\"lang:sh decode:true\">[root@esxi01:~] esxcli vsan cluster unicastagent add -a 192.168.1.201 -U true -u 67874ba3-8fd5-463f-80fb-6a82910c5ff2 -t node\r\n[root@esxi01:~] esxcli vsan cluster unicastagent add -a 192.168.1.160 -U true -u 04e3bd93-2846-4474-bae7-e16b602e316f -t node\r\n[root@esxi01:~] esxcli vsan cluster unicastagent list\r\nNodeUuid                              IsWitness  Supports Unicast  IP Address       Port  Iface Name  Cert Thumbprint  SubClusterUuid\r\n------------------------------------  ---------  ----------------  --------------  -----  ----------  ---------------  --------------\r\n67874ba3-8fd5-463f-80fb-6a82910c5ff2          0              true  192.168.1.201  12321                               3a02d572-728d-482b-a94d-2245a6ec99d1\r\n04e3bd93-2846-4474-bae7-e16b602e316f          0              true  192.168.1.160  12321                               3a02d572-728d-482b-a94d-2245a6ec99d1<\/pre>\n<p>As esxi01&#8217;s IP address did not change, no changes are needed on the other two hosts. 
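<\/p>\n<p>(If esxi01&#8217;s node UUID or IP address had changed, for example after a host reinstall, the other two hosts&#8217; entries for it would need refreshing as well. A hedged sketch with placeholder values, which I did not need to run myself:)<\/p>\n<pre class=\"lang:sh decode:true\">esxcli vsan cluster unicastagent remove -a 192.168.1.215\r\nesxcli vsan cluster unicastagent add -a 192.168.1.215 -U true -u &lt;new_node_uuid&gt; -t node<\/pre>\n<p>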
Let&#8217;s verify if vSAN is up and running again.<\/p>\n<pre class=\"lang:sh decode:true \">[root@esxi01:~] esxcli vsan cluster get\r\nCluster Information\r\n   Enabled: true\r\n   Current Local Time: 2020-10-29T07:15:40Z\r\n   Local Node UUID: 9f7326ad-f815-45b1-a809-ece25fddc7ec\r\n   Local Node Type: NORMAL\r\n   Local Node State: AGENT\r\n   Local Node Health State: HEALTHY\r\n   Sub-Cluster Master UUID: 67874ba3-8fd5-463f-80fb-6a82910c5ff2\r\n   Sub-Cluster Backup UUID: 04e3bd93-2846-4474-bae7-e16b602e316f\r\n   Sub-Cluster UUID: 3a02d572-728d-482b-a94d-2245a6ec99d1\r\n   Sub-Cluster Membership Entry Revision: 3\r\n   Sub-Cluster Member Count: 3\r\n   Sub-Cluster Member UUIDs: 67874ba3-8fd5-463f-80fb-6a82910c5ff2, 04e3bd93-2846-4474-bae7-e16b602e316f, 9f7326ad-f815-45b1-a809-ece25fddc7ec\r\n   Sub-Cluster Member HostNames: esxi03.corp.contoso.com, esxi02.corp.contoso.com, esxi01.corp.contoso.com\r\n   Sub-Cluster Membership UUID: 3b5c9a5f-3063-68bb-eafc-0c42a1719576\r\n   Unicast Mode Enabled: true\r\n   Maintenance Mode State: OFF\r\n   Config Generation: 9f7326ad-f815-45b1-a809-ece25fddc7ec 2 2020-10-29T07:15:25.0<\/pre>\n<p>Yay!<\/p>\n<p>The rest of the steps are pretty straightforward. The key takeaway here is: to join a host to a cluster, it must either be in maintenance mode (i.e. all VMs shut off) or have only vCenter running on it. 
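<\/p>\n<p>(For reference, entering maintenance mode from the ESXi shell without evacuating vSAN data should look like this:)<\/p>\n<pre class=\"lang:sh decode:true\">esxcli system maintenanceMode set --enable true --vsanmode noAction<\/pre>\n<p>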
All other steps are essential to solve the chicken-and-egg problem.<\/p>\n<ol>\n<li>Shut down all the VMs running on the hosts if you haven&#8217;t already done so<\/li>\n<li>Find the node with the oldest CPU (assume it is esxi01), and if possible, connect a temporary non-vSAN datastore (NFS or a local storage device)<\/li>\n<li>Install vCenter onto esxi01 using the temporary datastore<\/li>\n<li>Set up vCenter (networking, admin user, certificate)<\/li>\n<li>Add esxi01 to the vCenter and put it in a new cluster; you can use the cluster quickstart wizard, but do not let it configure networking for you<\/li>\n<li>Enable VMware EVC on the new cluster<\/li>\n<li>If you have a backup of the distributed switch config, restore it; otherwise configure a new distributed switch<\/li>\n<li>Add another host (say, esxi02) to the vCenter; do not add it to a cluster yet<\/li>\n<li>Add esxi02 to the distributed switch and migrate all adapters<\/li>\n<li>vMotion the vCenter VM to esxi02<\/li>\n<li>Add esxi01 and esxi03 to the distributed switch and migrate all adapters<\/li>\n<li>Go to the web portal of esxi02 and esxi03, put them into maintenance mode, and set the vSAN migration mode to &#8220;no data migration&#8221; (do not use vCenter to put them into maintenance mode, as this will cause vSAN to evict data; also, this will temporarily block all requests to the vSAN datastore, so make sure nothing is running on it)<\/li>\n<li>Add esxi02 and esxi03 to the cluster and configure the cluster in the quickstart wizard<\/li>\n<li>If this caused vSAN to move some data back and forth, wait for the migration to finish<\/li>\n<li>Verify all objects in vSAN are readable, and try restarting the VMs<\/li>\n<li>vMotion the vCenter back onto the vSAN datastore<\/li>\n<\/ol>\n<p>Now we have a new vCenter server and a new cluster good to go.<\/p>\n<h2>Cleaning Up<\/h2>\n<p>If you still want to configure vSAN from vCenter later, first execute the following command on every ESXi host:<\/p>\n<pre class=\"lang:sh decode:true\">esxcfg-advcfg -d \/VSAN\/IgnoreClusterMemberListUpdates<\/pre>\n<p>This allows the vSAN agent to receive further configuration from vCenter. Then let vCenter synchronize once with all the hosts: Cluster -&gt; Monitor -&gt; Skyline Health -&gt; vCenter state is authoritative -&gt; click on &#8220;UPDATE ESXI CONFIGURATION&#8221;.<\/p>\n<p>If you have custom storage policies, you can restore them with the following command in the vCenter Ruby console (RVC):<\/p>\n<pre class=\"lang:ruby decode:true\">Command&gt; rvc administrator@vsphere.local@localhost\r\nvsan.recover_spbm \/localhost\/&lt;datacenter_name&gt;\/computers\/&lt;cluster_name&gt;<\/pre>\n<p>The vSAN default policy will be created automatically.<\/p>\n<p>If you have any inaccessible objects, SSH into one of the hosts containing the object, then delete it manually:<\/p>\n<pre class=\"lang:sh decode:true\">\/usr\/lib\/vmware\/osfs\/bin\/objtool delete -f -v 10 -u &lt;object_uuid&gt;<\/pre>\n<p>The following things will need to be rebuilt by hand in the new vCenter:<\/p>\n<ul>\n<li>users, groups, permissions<\/li>\n<li>content libraries<\/li>\n<li>host profiles<\/li>\n<li>HA &amp; DRS<\/li>\n<li>VM rules<\/li>\n<\/ul>\n<p>If you have vSAN file services configured, you might need to re-enable them from vCenter. You will need to re-upload the OVAs, and you won&#8217;t be able to change the configuration. 
Note that vSAN file services in 7.0U1 is extremely buggy and locked itself up on my cluster (I couldn&#8217;t enable, disable, configure, or use it), so I currently do not recommend using it in production.<\/p>\n<p>If you get an &#8220;Unable to connect to MKS&#8221; error when connecting to VM consoles on the new vCenter, see\u00a0<a href=\"https:\/\/kb.vmware.com\/s\/article\/2115126\" target=\"_blank\" rel=\"noopener noreferrer\">&#8220;Unable to connect to MKS&#8221; error in vSphere Web Client (2115126)<\/a><\/p>\n<h1>Final Thoughts<\/h1>\n<p>One thing I like about vSphere is its ability to continue functioning without a centralized control plane. HA, multiple-access datastores, and vSAN are all designed around this basic assumption, and it has saved me many times.\u00a0On the other hand, vCenter is a fragile thing, and vCenter 7.0, with a lot of legacy Java components rewritten in Python, is more fragile than ever.<\/p>\n<p>Always export and back up your distributed switch config, even if you have automated backups of vCenter. This will save you a lot of time in case you must set up a new vCenter. If you have vSAN file services configured, failing to restore the old distributed switch after a vCenter rebuild might render the entire service inaccessible. 
(If you can&#8217;t re-enable it from the vSphere UI, try calling the vCenter API <span class=\"lang:default highlight:0 decode:true crayon-inline \">vim.vsan.ReconfigSpec<\/span>\u00a0with a different port group; there is a chance it will work, but your mileage may vary.)<\/p>\n<h1>References<\/h1>\n<ul>\n<li><a href=\"https:\/\/www.thehumblelab.com\/lesson-in-vsan-resiliency\/\" target=\"_blank\" rel=\"noopener noreferrer\">The Resiliency of vSAN &#8211; Recovering my 2-Node Direct Connect While Preserving vSAN Datastore<\/a><\/li>\n<li><a href=\"https:\/\/blog.rylander.io\/2017\/01\/19\/configure-2-node-vsan-on-esxi-free-using-cli-without-vcenter\/\" target=\"_blank\" rel=\"noopener noreferrer\">Configure 2-Node VSAN on ESXi Free Using CLI Without VCenter<\/a><\/li>\n<li><a href=\"https:\/\/www.driftar.ch\/2018\/08\/18\/vmware-vsan-cache-disk-failed-and-how-to-recover-from-it\/\" target=\"_blank\" rel=\"noopener noreferrer\">VMware vSAN cache disk failed and how to recover from it<\/a><\/li>\n<li><a href=\"https:\/\/www.reddit.com\/r\/vmware\/comments\/72oeyj\/vsan_question_restore_vcsa_on_vsan\/\" target=\"_blank\" rel=\"noopener noreferrer\">vSAN question: Restore VCSA on vSAN<\/a><\/li>\n<li><a href=\"https:\/\/docs.vmware.com\/en\/VMware-vSphere\/7.0\/vsan-70-administration-guide.pdf\" target=\"_blank\" rel=\"noopener noreferrer\">Administering VMware vSAN<\/a> (PDF)<\/li>\n<li><a href=\"https:\/\/vinfrastructure.it\/2019\/11\/purge-inaccessible-objects-in-vmware-vsan\/\" target=\"_blank\" rel=\"noopener noreferrer\">Purge inaccessible objects in VMware vSAN<\/a><\/li>\n<li><a href=\"https:\/\/virtually2cents.com\/fixing-these-dratted-unknown-vsan-objects\/\" target=\"_blank\" rel=\"noopener noreferrer\">Fixing these dratted Unknown vSAN Objects<\/a><\/li>\n<li><a href=\"https:\/\/www.ivobeerens.nl\/2017\/03\/21\/fix-orphaned-vsan-objects\/\" target=\"_blank\" rel=\"noopener noreferrer\">Fix orphaned vSAN objects<\/a><\/li>\n<li><a
href=\"https:\/\/www.steffr.ch\/vmware-vsan-deletepurge-inaccessible-objects\/\" target=\"_blank\" rel=\"noopener noreferrer\">VMware VSAN delete\/purge inaccessible objects<\/a><\/li>\n<li><a href=\"https:\/\/www.vmware.com\/content\/dam\/digitalmarketing\/vmware\/en\/pdf\/products\/vsan\/vmware-ruby-vsphere-console-command-reference-for-virtual-san.pdf\" target=\"_blank\" rel=\"noopener noreferrer\">VMware\u00aeRuby vSphere Console Command Reference for Virtual SAN<\/a> (PDF)<\/li>\n<\/ul>\n","protected":false},"excerpt":{"rendered":"<p>How It Started I screwed up a vCenter instance. Actually it is pretty easy to screw up the state-of-the-art hypervisor controller from its beautifully designed web UI, using the appealing buttons that always have been there. The process only requires 2 simple steps:<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[11],"tags":[],"class_list":["post-177","post","type-post","status-publish","format-standard","hentry","category-vmware-vsphere"],"acf":[],"_links":{"self":[{"href":"https:\/\/blog.swineson.me\/en\/wp-json\/wp\/v2\/posts\/177"}],"collection":[{"href":"https:\/\/blog.swineson.me\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/blog.swineson.me\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/blog.swineson.me\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/blog.swineson.me\/en\/wp-json\/wp\/v2\/comments?post=177"}],"version-history":[{"count":18,"href":"https:\/\/blog.swineson.me\/en\/wp-json\/wp\/v2\/posts\/177\/revisions"}],"predecessor-version":[{"id":233,"href":"https:\/\/blog.swineson.me\/en\/wp-json\/wp\/v2\/posts\/177\/revisions\/233"}],"wp:attachment":[{"href":"https:\/\/blog.swineson.me\/en\/wp-json\/wp\/v2\/media?parent=177"}],"wp:term":[{"taxonomy":"category","embeddable":true,"
href":"https:\/\/blog.swineson.me\/en\/wp-json\/wp\/v2\/categories?post=177"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/blog.swineson.me\/en\/wp-json\/wp\/v2\/tags?post=177"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}