
VCSA DNS Error Troubleshooting

I set up BIND, but the vCenter appliance couldn't see it. This is how you troubleshoot that.

To resolve this issue from vCenter:

  1. Open the console of the vCenter Server Appliance, press CTRL+ALT+F3, and log in with the root credentials that you specified during the install phase.
  2. To enable the shell, run this command:

    shell.set --enabled true

  3. Enter shell with this command:

    shell

  4. Ping the DNS servers to confirm communication with this command:

    ping yourdnsfqdn

  5. Use nslookup to make sure the vCenter Server Appliance can be resolved:

    nslookup vcenterFQDN

  6. Use the nslookup command to resolve the short name as well (a combined sketch of these checks follows this list).
  7. After the underlying networking issue is resolved, redeploy the vCenter Server Appliance.
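
To run steps 4 through 6 back to back, here is a minimal sketch you can paste into the appliance shell. The names dns01.lab.local and vcsa01.lab.local are placeholders I made up; substitute your own DNS server and vCenter FQDN/short name.

    # Step 4: confirm basic reachability to the DNS server
    ping -c 4 dns01.lab.local        # or ping its IP directly if the name does not resolve
    # Step 5: confirm the vCenter FQDN resolves
    nslookup vcsa01.lab.local
    # Step 6: confirm the short name resolves as well
    nslookup vcsa01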

My issue was that firewalld was blocking DNS, and once I took it out, name resolution worked. Stop yelling, I will add the rules back once vCenter is fully installed.
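
For reference, rather than disabling firewalld entirely on the BIND host, you can open just the DNS service. This is a sketch using standard firewalld commands and assumes BIND is listening on the default port 53.

    # Allow DNS (TCP/UDP 53) through firewalld permanently, then reload
    firewall-cmd --permanent --add-service=dns
    firewall-cmd --reload
    # Verify the dns service is now in the allowed list
    firewall-cmd --list-services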

Once you are able to reach DNS, it is recommended that you redeploy vCenter.

Hope this helped.

OUCH – DISABLING DRS IN VCENTER DESTROYS ALL RESOURCE POOLS LEAVING VCLOUD DIRECTOR INOPERABLE!

I haven't tried this yet, but it turns out disabling DRS literally destroys all resource pools and leaves vCloud Director inoperable. Sounds nasty, but that's what VMware is telling us.

So is there a fix? It seems like there isn't! You have to recreate the entire environment in vCloud Director, which can be a lot of work. On top of that, you have to clone the VMs in vCloud Director, as the existing VMs will all be deleted with the workaround suggested by VMware.

Disabling DRS in vCenter Server destroys all resource pools and renders vCloud Director inoperable. It is recommended that you contact VMware technical support for assistance with recovering from this issue.

Here is the full KB article, which also has the workaround for the issue.

Bottom line: do not disable DRS in vCenter. You will need it to allow VMs to move around to satisfy their resource requirements. If you do not want a VM to move around, I recommend using DRS affinity rules to pin the virtual machine to a specific hypervisor.

Alternatively, you can deploy that virtual machine on a hypervisor's local disk, which will prevent it from moving around.

TIME OUT ERRORS ON AN SRM SHARED RECOVERY SITE

vQuicky

> SRM will report "operation timed out" errors when trying to power on virtual machines on a busy shared DR vCenter site.

> Increasing the timeout from the default 900 seconds will help prevent the issue.

inDepth

VMware came out with a KB article that talks about timeout errors occurring while powering on virtual machines on a shared recovery site. With SRM 5.1, you can use one shared recovery site for up to 10 production sites.

However, you might see that SRM reports "operation timed out" errors when powering on the virtual machines.

The error message is: Error: Operation timed out: 900 seconds.

VMware recommends changing the default timeout to more than 900 seconds. The timeout occurs when the vCenter Server is running too many virtual machines and is too busy to respond to the SRM server; we are talking thousands of virtual machines here.

  1. Go to C:\Program Files\VMware\VMware vCenter Site Recovery Manager\config on the SRM Server host machine on the recovery site.
  2. Open the vmware-dr.xml in a text editor.
  3. Increase the default RemoteManager timeout value from 900 to a larger number, for example 1200 (the edited section is shown after this list). By default, the section looks like this:

    <RemoteManager>
    <DefaultTimeout>900</DefaultTimeout>
    </RemoteManager>

  4. Restart the SRM Server service.
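
After the edit, the RemoteManager section of vmware-dr.xml would look something like this (only the relevant element is shown, and 1200 is just an example value):

    <RemoteManager>
    <DefaultTimeout>1200</DefaultTimeout>
    </RemoteManager>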

This should take care of the error. You could also do this if you had a high-latency network; however, you would not want to run a DR site over a high-latency link in the first place.

Here is the KB Article.

A workaround would be to split a busy vCenter into multiple instances. This could incur additional licensing costs, but it could also prevent such timeouts.

Hope this helps 🙂

VCENTER APPLIANCE 10443 INVENTORY SERVICE ISSUE

vQuicky

> After deploying the vCenter appliance, while using the web client you see a message that the Inventory Service failed to connect on port 10443.

> Rebooting the vCenter server won't fix it.

> You will have to re-run the vCenter appliance setup wizard with the defaults.

> Once done, re-add Active Directory, re-add the vCenter permissions, and you should be good.

> vCenter hosts and other inventory info should remain intact.

inDepth

I did not really deep-dive into this issue, and VMware isn't clear in any of its KB articles either. But when you see the issue mentioned above in the vQuicky, simply re-run the vCenter appliance setup wizard with the defaults in place. Remember to stop the vCenter and Inventory services before you run it. Once done, go ahead and add the Active Directory credentials and then reboot the vCenter.
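
For reference, here is roughly how I would stop the two services from an SSH session before re-running the wizard. The service names below are what I would expect on a 5.x VCSA, so treat them as an assumption and double-check them on your build.

    # Stop vCenter Server and the Inventory Service before re-running the setup wizard
    # (service names assumed for the 5.x appliance; verify with "chkconfig --list" or similar)
    service vmware-vpxd stop
    service vmware-inventoryservice stop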

Once it is up, log in to the web client using root/vmware and assign the vCenter permissions to the domain user group in case you have one (you should, it's good practice).

Once that's done, you should be good to go!

Do comment/post 🙂

RUNNING VCENTER 5.1? KEEP THIS KB ARTICLE HANDY! – UPDATED


If I had a nickel for every time my home lab broke, I would be filthy rich!

As always, my vCenter 5.1 broke. It started throwing an "unable to connect to vCenter /sdk" error. When I logged in as the SSO admin, I did not see vCenter registered; for some reason it had disappeared.

I have yet to fix the issue, but I came across this KB, which helps you re-register the vCenter components with each other. It is by no means intuitive and is all command line.

This will be very handy now that vCenter has all of its services separated! You have to make sure all the moving parts (SSO, Inventory Service, the web service, and vCenter) are connected and aware of each other, though not necessarily in that order.

Here is the KB

Update: I only have a vague update on my issue with vCenter. I gave up and went ahead to reinstall just the vCenter VM, and found that it kept giving me an Inventory Service error. vCenter talks to the Inventory Service on https://inventory-server-url:10443, however it kept failing.
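
A quick way to check whether the Inventory Service is even answering on that port is to hit it with curl from any machine that can reach it (a sketch; inventory-server-url is a placeholder for your actual Inventory Service host):

    # -k skips certificate validation, -v prints the TLS handshake and HTTP response
    curl -vk https://inventory-server-url:10443/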

I re-installed the Inventory Service and then it worked fine, so the above issue was possibly due to something messy in the Inventory Service install.

More as I know it.