In my previous post, Home Lab Step-by-Step Part-6-Nested-ESXi, we completed the installation of the nested ESXi servers, and now we are ready to create shared storage for our nested SDDC environment.
To provide shared storage to our ESXi servers, we already installed the iSCSI Target Server role on ADDC in our previous post, Home Lab Step-by-Step Part-5-Infrastructure Services. Now it is time to configure the shared volume, and for that we added one extra disk to the ADDC server. You can ignore any extra disks other than the one highlighted (I am using them for keeping ISO files).
Now log in to the server, right-click the Start icon, and select "Run". In the Run prompt, type diskmgmt.msc; it will open the Disk Management console.
In the Disk Management console you will see that the newly added disk is in an offline state.
Right-click Disk 2 and bring it online.
Now right-click the disk again and select Initialize Disk.
Choose a partition style for the disk (I prefer GPT), click OK, and the disk is ready to have a volume created on it.
Right-click the disk body and select "New Simple Volume".
Let's complete the wizard to create the volume.
Select the volume size and click Next.
Assign a drive letter and click Next.
Assign a label to the volume and click Next.
Finish the wizard. Once the disk is formatted and ready for use, close the Disk Management console.
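If you prefer the command line, the same disk preparation can be done with the Windows Storage PowerShell cmdlets. A minimal sketch, assuming the new disk shows up as Disk 2 and drive letter E and the label are free choices for your lab:

```powershell
# Bring the new disk online and clear the read-only flag
Set-Disk -Number 2 -IsOffline $false
Set-Disk -Number 2 -IsReadOnly $false

# Initialize the disk with a GPT partition table
Initialize-Disk -Number 2 -PartitionStyle GPT

# Create a single partition spanning the disk and format it as NTFS
New-Partition -DiskNumber 2 -UseMaximumSize -DriveLetter E
Format-Volume -DriveLetter E -FileSystem NTFS -NewFileSystemLabel "iSCSI-Store"
```

Either way, the result is the same: an online, GPT-initialized, NTFS-formatted volume ready to host the iSCSI virtual disk.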
Now open Server Manager and select File and Storage Services.
Once you are in File and Storage Services, select iSCSI. Under iSCSI, click the option to create an iSCSI virtual disk.
You will now be presented with the New iSCSI Virtual Disk Wizard.
Select the disk from the list and click Next. This should be the disk you added to host the shared iSCSI LUN for the nested environment. On the next screen, name the disk and click Next.
Here we specify the size of the disk that we will present to our ESXi servers.
Now define the name of the target to which the disks will be mapped.
Next we add the members of the target; to add members, click Add.
In the Add initiator ID dialog, select the type of ID; I am using IP address. Once both IP addresses are added, click Next.
As it is a lab environment, I will leave authentication disabled.
On the confirmation screen, validate all the settings and click Create.
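The wizard steps above can also be scripted with the iSCSI Target PowerShell cmdlets. A sketch under assumed values: the VHDX path, size, target name, and the two initiator IPs are all examples, so substitute your own lab values:

```powershell
# Create the backing VHDX for the shared LUN (path and size are examples)
New-IscsiVirtualDisk -Path "E:\iSCSIVirtualDisks\nested-lun1.vhdx" -SizeBytes 200GB

# Create the target and allow both nested ESXi hosts by initiator IP address
New-IscsiServerTarget -TargetName "nested-esxi" `
    -InitiatorIds "IPAddress:192.168.110.101","IPAddress:192.168.110.102"

# Map the virtual disk to the target
Add-IscsiVirtualDiskTargetMapping -TargetName "nested-esxi" `
    -Path "E:\iSCSIVirtualDisks\nested-lun1.vhdx"
```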
Once all the tasks are complete, we need to wait for the disk to be completely zeroed out, so that the disk status does not show any warning or error; then we will start the ESXi configuration to use the iSCSI disk.
Log in to ESXi with the root credentials and click Storage.
Add the iSCSI target server IP address as a dynamic target using "Add dynamic target", then save the configuration. Follow the same steps on the other ESXi host. A new disk should now be listed on the Devices tab of the ESXi host; do not worry about the degraded status, as it appears because there is no multi-pathing. We will create one datastore from the shared disk, and this action needs to be performed on only one ESXi host. Select the disk and click the New datastore option.
You will now be presented with the New Datastore wizard; we will create the datastore using the same steps we used in our post Home Lab Step-by-Step Part-3-Networking.
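The iSCSI client side can equally be configured from the ESXi shell. A sketch under assumptions: the software iSCSI adapter name (vmhba64) varies per host, and the target server address 192.168.110.10 is an example for the ADDC server:

```shell
# Enable the software iSCSI initiator
esxcli iscsi software set --enabled=true

# Add the iSCSI target server as a dynamic (send targets) discovery address
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba64 --address=192.168.110.10

# Rescan all adapters so the new LUN shows up under Devices
esxcli storage core adapter rescan --all
```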
Now you should have a shared datastore available on both ESXi servers.
The next step is to add an additional NIC on both ESXi hosts and connect it to the nested trunk port group, to avoid the warning displayed on the ESXi host once we add it to vCenter.
As we are already in the networking section of the nested ESXi, let's rename the default VM Network to Management-VMs network.
Navigate to the Port groups tab, select VM Network, and click Edit settings.
Next, assign a VMkernel adapter for the vMotion network. Move to the VMkernel NICs tab and add a VMkernel NIC.
Fill in the details for the vMotion adapter. Make sure there are no spelling mistakes or upper/lower-case mismatches, as these values are case sensitive.
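The vMotion VMkernel setup above maps to a few esxcli commands. A sketch with assumed values: vSwitch0, the port group name, and the 172.16.12.0/24 subnet are examples, and each host needs its own unique IP (for example .101 on the first host and .102 on the second):

```shell
# Create a port group for vMotion on the standard switch
esxcli network vswitch standard portgroup add --portgroup-name=vMotion --vswitch-name=vSwitch0

# Add a VMkernel interface on that port group with a host-unique static IP
esxcli network ip interface add --interface-name=vmk1 --portgroup-name=vMotion
esxcli network ip interface ipv4 set --interface-name=vmk1 --ipv4=172.16.12.101 --netmask=255.255.255.0 --type=static

# Tag the interface for vMotion traffic
esxcli network ip interface tag add --interface-name=vmk1 --tagname=VMotion
```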
Next we will create the DNS entries; for that, open the DNS Management console: open the Run command and type "dnsmgmt.msc".
Navigate to Reverse Lookup Zones and right-click to create a reverse lookup zone for the management subnet.
Now select the option for dynamic updates; in our lab I am selecting both secure and non-secure dynamic updates.
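The same reverse lookup zone can be created with one DNS Server PowerShell cmdlet on the domain controller. A sketch, where the management subnet 192.168.110.0/24 is an example:

```powershell
# Create an AD-integrated reverse lookup zone for the management subnet
# allowing both secure and non-secure dynamic updates, as in the lab setup
Add-DnsServerPrimaryZone -NetworkId "192.168.110.0/24" -ReplicationScope "Forest" `
    -DynamicUpdate NonsecureAndSecure
```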
Now configure NTP on both ESXi hosts and we will be ready to deploy the vCenter Server. Navigate to Manage > System > Time & date and select "Edit NTP Settings".
Change from manual to "Use Network Time Protocol (enable NTP client)", change the startup policy to "Start and stop with host", and add the NTP server IP (in our case we are using the domain controller as the NTP source).
Now navigate to Services and start the ntpd service.
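On ESXi 7.0 and later, the NTP client can also be configured from the shell instead of the host client UI. A sketch, with the domain controller IP as an example NTP source:

```shell
# Point the host at the NTP server and enable the NTP client (ESXi 7.0+)
esxcli system ntp set --server=192.168.110.10 --enabled=true

# Verify the resulting configuration
esxcli system ntp get
```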
In my next post, Home Lab Step-by-Step Part-8-vCenter, we will install and configure vCenter for centralized management.
I hope I was able to add value; if your answer is yes, then don't forget to share and subscribe. 😊
If you want me to write on specific content or you have any feedback on this post, kindly comment below.
If you want, you can connect with me on LinkedIn, and please like and subscribe to my YouTube channel VMwareNSXCloud for step-by-step technical videos.
Pradhuman - thanks so much for all your ESX instruction blogs. I have now finished steps 1-6 successfully. My goal is to run NSX-T, so I look forward to following the rest of your blogs.
Dear slogo, I am glad to know it is helping, and I will surely move forward with our NSX-T series. I got caught up with multiple projects, but I will try to start writing on this topic soon.
Hello, great tutorial! A question about the IP address on the VMkernel for vMotion: should we use the same IP for all the hosts, or do we need a different IP for each host, like ESXi1 172.16.12.101 and ESXi2 172.16.12.102?
Dear tekservices4u, we cannot use the same IP on different machines unless it is a virtual IP, so you will have to use a separate IP for each host's vMotion interface. You can also use vMotion on the management VMkernel interface.