Saturday 5 February 2022

Step-by-step VMware Cloud Foundation 4.3 design and install vROPS

In my previous post Step-by-step VMware Cloud Foundation 4.3 design and install WSA we completed the deployment of VMware Workspace ONE Access; now we are ready to start the deployment of vRealize Operations Manager, aka vROps.

It's very important to understand the flow: first we installed vRealize Suite Lifecycle Manager, performed a repository sync, and created a global environment for Workspace ONE Access. Workspace ONE Access delivers identity and access management in a VMware Cloud Foundation environment, and because it is installed on a cross-region NSX segment we can leverage it on every site that is part of the same VCF deployment.

But before we discuss how we are going to consume these services, we should complete the deployment of the vRealize Suite components. Next in line is vRealize Operations Manager.

Sunday 9 January 2022

Step-by-step VMware Cloud Foundation 4.3 design and install WSA

In my previous post Step-by-step VMware Cloud Foundation 4.3 design and install VRLCM we completed the deployment of vRealize Suite Lifecycle Manager. As explained in that post, in order to install vRealize components in a VMware Cloud Foundation environment we need vRLCM.

If you go to the vRealize tab in SDDC Manager before installing vRLCM, you will see a clear message that you need to deploy it before you can deploy the other vRealize Suite products, as shown in the image below.

Once vRLCM is deployed, we start the vRealize Suite deployment with Workspace ONE Access.

Thursday 9 December 2021

Step-by-step VMware Cloud Foundation 4.3 design and install VRLCM

In my previous post Step-by-step VMware Cloud Foundation 4.3 design and install AVNs we completed the deployment of application virtual networks, where we will be hosting our vRealize Suite products. As VMware Cloud Foundation is a private cloud, it would be incomplete without automation, log management, and operations management. That's where the vRealize Suite comes into the picture.

Before we talk about the vRealize Suite, we need to be aware of lifecycle management. Lifecycle management plays an important role not only in IT; knowingly or unknowingly, we do it in our day-to-day lives.

Step-by-step VMware Cloud Foundation 4.3 design and install AVNs

In my previous post Step-by-step VMware Cloud Foundation 4.3 design and install Edge Nodes we completed the deployment of the edge node cluster in the VMware Cloud Foundation management domain and successfully deployed Tier-0 and Tier-1 router instances for dynamic routing. We used BGP for dynamic routing; in a real-world deployment your network team will enable BGP on the uplink devices, but if you are doing it in a lab you need to enable BGP on your uplink router yourself, which in our case is the CSR1000v. Please refer to my post where I enabled it on our virtual router.
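For orientation, the uplink side of that lab setup might look like the following sketch of an IOS-XE BGP configuration on the CSR1000v. All ASNs, neighbor addresses, and descriptions here are hypothetical lab values; use the ones from your own VCF deployment parameters:

```
! Hypothetical lab values: router ASN 65001 peering with the Tier-0 (ASN 65003)
router bgp 65001
 bgp log-neighbor-changes
 neighbor 172.27.11.2 remote-as 65003
 neighbor 172.27.11.2 description T0-Uplink-1
 neighbor 172.27.12.2 remote-as 65003
 neighbor 172.27.12.2 description T0-Uplink-2
 address-family ipv4
  redistribute connected
  neighbor 172.27.11.2 activate
  neighbor 172.27.12.2 activate
 exit-address-family
```

Once the edge nodes are up, `show ip bgp summary` on the router should show both neighbors in the Established state.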

Now, as we advance with the deployment, it's time to create the AVNs that will host the vRealize components. Each component is placed on an AVN based on its usage, in line with the VMware Validated Designs.

I have already talked about VMware Validated Designs in my post Home Lab Step-by-Step-NSX-T 3.1 design and Install-P1; I would recommend having a look at it.

Monday 22 November 2021

Step-by-step VMware Cloud Foundation 4.3 design and install Edge Nodes

In my previous post "Step-by-step VMware Cloud Foundation 4.3 design and install MGMT domain" we completed the VCF management domain deployment.

As we are using version 4.3, which gives us the flexibility of deploying with either static routing or dynamic routing using BGP, bring-up no longer deploys edge nodes automatically and no AVNs get created. This actually allows us "architects" to design and deploy to the customer's needs without manual intervention or workarounds.

In a brownfield deployment, the underlay network may be configured with static routes; a customer with limited subnets and no requirement for rapid network provisioning would not want to enable BGP only to accommodate VCF. Previously we used workarounds to design around this requirement, but now there is a straightforward solution.

Saturday 20 November 2021

Step-by-Step-NSX-T 3.1 design and Install-P3

In my previous post Step-by-Step-NSX-T 3.1 design and Install-P2 we covered configuration of the IP pools for host and edge transport nodes, transport zones, uplink profiles, the distributed switch for NSX, adding hosts to the VDS, the host transport node profile, and finally the configuration of NSX on the host transport nodes.

Now our ESXi hosts are ready to participate in NSX-T Data Center; however, until we have a working connectivity model it would be of no use. Hence, in this post we will take care of the edge nodes.

To start with, I would urge you all to configure backups of your NSX-T Manager cluster; making it a habit will save you from many unfortunate situations.

Thursday 28 October 2021

Step-by-step VMware Cloud Foundation 4.3 design and install MGMT domain

As VMware Cloud Foundation 4.3 has been around for a while and 4.3.1 is already available, I thought I should write this piece on how to design and deploy it step by step. So, without wasting any time, let's jump straight to the product.

VMware Cloud Foundation has been available for some time now, and many enterprises are adopting it because of the ease of management it provides: a complete suite that includes all the products necessary for a true software-defined datacenter. If you are new to VMware Cloud Foundation, be aware that it is a VMware-validated suite of products such as vSphere for compute virtualization, vSAN for storage virtualization, and NSX for network virtualization, along with other products to ease day-2 operations. Interoperability of these products is extensively tested by VMware before being made available for general use. It is based on VMware Validated Designs, so all solution design principles are accounted for.

Whether you are installing it fresh or upgrading from a previous version of VCF, I would recommend reading the release notes. Below are a few sections I focus on.

Friday 1 October 2021

Step-by-Step-NSX-T 3.1 design and Install-P2

In my previous post Home Lab Step-by-Step-NSX-T 3.1 design and Install-P1, we completed the installation of the NSX-T Managers and added a compute manager. In this post we will start discussing the different deployment options and how to ascertain which deployment model best suits our customers.

But before jumping to deployment options, I would like to talk about architecting/designing a solution, for the folks who are new to the solution architecture field. To summarize, a solution is built up of five components: requirements, risks, constraints, assumptions, and, based on these, design decisions.

Tuesday 21 September 2021

Step-by-Step server 2022 installation on VMware platform

Here I am again with a new post. This time it's not a VMware product but the recently announced Microsoft Windows Server 2022: on 1 June 2021 it was announced as available in the Evaluation Center, having been in preview since March 2021. In addition to multi-layer security and hybrid capabilities with Azure, it features a flexible platform to modernize applications with containers.

We are going to install Server 2022 on the VMware platform with ESXi version 7.

Sunday 12 September 2021

Are you testing NSX-T ECMP link failover or network convergence correctly?

Hello friends!! In this post I am going to talk about NSX-T link failover testing, which we all do at the end of a deployment.

I recently got to know I was doing it all wrong. Though it always gave correct results, that's still not the recommended way of checking link failover or testing convergence in an NSX-T deployment.

Well, you must be wondering what I was doing and what you should be avoiding.

So let me spill the beans: I was testing convergence by disconnecting the edge node's virtual NICs one by one, which brings that path down so traffic converges to the other available path. However, it is not the correct way of doing it.

So if you are also using the same test to bring a path down, I would suggest not to. Instead, bring down the physical NIC of the host, and for a complete edge node failure, put the edge node in maintenance mode.
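The recommended link-failure test above can be driven from the ESXi host's CLI. This is only a sketch: `vmnic2` is a hypothetical NIC name, so substitute the physical NIC that actually carries your edge node's uplink traffic:

```
# List physical NICs and their link state to identify the uplink carrying edge traffic
esxcli network nic list

# Bring the physical uplink down to simulate a real link failure (vmnic2 is hypothetical)
esxcli network nic down -n vmnic2

# ...observe BGP convergence on the peer router, then restore the link
esxcli network nic up -n vmnic2
```

For the complete edge node failure scenario, place the edge transport node into maintenance mode from NSX Manager rather than disconnecting its virtual NICs.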
