vCloud Foundation fully automated deployment with bring-up using VLC
VLC (VCF Lab Constructor) is a PowerCLI (VMware PowerShell module for managing ESXi hosts, vCenter, etc.) script for configuring a nested setup for vCloud Foundation. Since vCloud Foundation is not set up step by step manually (install ESXi, install vCenter, etc.) but is instead built automatically by Cloud Builder from an input parameter file, it is not easy to get it running in a lab or nested lab unless you have validated hardware nodes available as spares. With the help of VLC we can set up a nested (ESXi installed in a VM on top of an existing ESXi host) VCF for experiments.
This document is based on VLC 4.2.
The following prerequisites are required:
- An ESXi host running ESXi 6.7+ with 12 cores, 128 GB RAM and 800 GB of disk space, ideally SSD-backed
- A Windows 10 VM (an Enterprise evaluation license can be used if required)
- VMware licenses for
- ESXi
- 4 socket license
- vSAN
- 4 socket license
- vCenter
- 1 standard license
- NSX
- 4 socket license
- SDDC Manager
- 4 socket license. This is not required during the deployment, as it is not asked for in the VLC configuration input JSON, but it can be added manually later
If the above-mentioned prerequisites are available, use the following steps to get a fully automated VCF deployment, including bring-up, using VLC:
- Configure the ESXi host switch with MTU 9000
- Create at least one standard port-group with all VLANs trunked to the VMs
- On this all-VLAN-trunk port-group make sure all three security settings are set to accept (see the PowerCLI sketch below):
- Promiscuous Mode = Accept
- Allow Forged Transmits = Accept
- Allow MAC Address Changes = Accept
- This is required even if the settings are already set to accept at the switch level: override them at the port-group level and set them to accept again.
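A minimal PowerCLI sketch of this host networking preparation, assuming a standard vSwitch named vSwitch0 and the port-group name ALL-VLAN-TRUNK (host address and credentials are placeholders):

```powershell
# Connect to the physical ESXi host (hypothetical address and credentials).
Connect-VIServer -Server 192.168.1.50 -User root -Password 'VMware1!'

# Raise the vSwitch MTU to 9000.
$vss = Get-VirtualSwitch -Name 'vSwitch0' -Standard
Set-VirtualSwitch -VirtualSwitch $vss -Mtu 9000 -Confirm:$false

# Create the all-VLAN trunk port-group (VLAN ID 4095 trunks all VLANs on a
# standard vSwitch) and override the three security settings to accept.
$pg = New-VirtualPortGroup -VirtualSwitch $vss -Name 'ALL-VLAN-TRUNK' -VLanId 4095
Get-SecurityPolicy -VirtualPortGroup $pg |
    Set-SecurityPolicy -AllowPromiscuous $true -ForgedTransmits $true -MacChanges $true
```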
- Disable HA and DRS at the cluster level. Optionally also disable the vMotion service on the vmkernel port
- Build a Windows 10 VM with one NIC connected to the current LAN.
- Install VMware Tools on the Windows VM and reboot.
- Add a second NIC of type "vmxnet3" connected to the ALL-VLAN-TRUNK port-group
- Install the following software on the Windows host:
- .NET Framework 4.8 offline installer for Windows
- VMware OVF Tool 4.4.1
- Download the offline bundle of VMware PowerCLI 12.1 and install it using the following steps (a consolidated sketch appears after them):
- Open powershell with administrator privileges (Right click on powershell launch icon and choose "Run as administrator")
- Execute command
- $env:PSModulePath
- Based on the output of the above command, go to one of the module directories (create it if required) and extract the downloaded PowerCLI 12.1 zip file contents into this location. After extracting you should have 10+ subfolders named VMware.* inside the Modules folder.
- In the administrator PowerShell, cd to the Modules folder where you extracted the offline bundle.
- Execute the following to unblock the extracted modules
- Get-ChildItem * -Recurse | Unblock-File
- Check the version of PowerCLI that got installed using:
- Get-Module -Name VMware.PowerCLI -ListAvailable
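The whole offline install can also be scripted. A minimal sketch, assuming the bundle was downloaded to C:\Temp\VMware-PowerCLI-12.1.0.zip (hypothetical path):

```powershell
# Use the first directory from PSModulePath; create it if it does not exist.
$modulePath = ($env:PSModulePath -split ';')[0]
New-Item -ItemType Directory -Path $modulePath -Force | Out-Null

# Extract the offline bundle there and unblock the extracted files.
Expand-Archive -Path 'C:\Temp\VMware-PowerCLI-12.1.0.zip' -DestinationPath $modulePath
Get-ChildItem $modulePath -Recurse | Unblock-File

# Verify the installed version.
Get-Module -Name VMware.PowerCLI -ListAvailable
```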
- On the second NIC of the Windows host (connected to the all-VLAN trunk port-group) set IP 10.0.0.220/24 with DNS 10.0.0.221 and no gateway
- On the second NIC of the Windows host configure the VLAN as 10: right-click the adapter, go to Properties -> Configure -> Advanced tab -> set the "VLAN ID" property to 10 (a scripted alternative follows)
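The same NIC settings can be applied from an elevated PowerShell. A sketch, assuming the second adapter's alias is Ethernet1 (check with Get-NetAdapter) and that the vmxnet3 driver exposes a "VLAN ID" advanced property:

```powershell
# Static IP 10.0.0.220/24 with no gateway; DNS points at Cloud Builder.
New-NetIPAddress -InterfaceAlias 'Ethernet1' -IPAddress 10.0.0.220 -PrefixLength 24
Set-DnsClientServerAddress -InterfaceAlias 'Ethernet1' -ServerAddresses 10.0.0.221

# Tag VLAN 10 on the adapter. The property display name is driver-dependent,
# so confirm it first with Get-NetAdapterAdvancedProperty -Name 'Ethernet1'.
Set-NetAdapterAdvancedProperty -Name 'Ethernet1' -DisplayName 'VLAN ID' -DisplayValue '10'
```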
- Download VLC by filling in the Google form ( https://docs.google.com/forms/u/0/d/e/1FAIpQLScU_X8LpdC6FHpANGBkdY87GoBkuuWcIiVh7dalFqJQJAOLpw/formResponse ) or using the direct download link https://www.dropbox.com/s/wsvsgf37wb65448/VLC_4.2_021021.zip?dl=0
- Extract the Zip file at C:\VLC
- Download the latest Cloud Builder appliance (VMware Cloud Builder 4.2.0 at the time of writing) and copy the OVA to the C:\VLC folder
- Edit the AUTOMATED_AVN_VCF_VLAN_10-13_NOLIC_v42 file and set the four licenses (ESXi, vSAN, vCenter and NSX) at the appropriate places. Do not modify anything else.
- Run "VLCGui.ps1" in PowerShell.
- Choose the automated option for the deployment.
- Enter the vCenter or ESXi host connection details in the fields on the right side and connect
- Once connected, choose the cluster, network (the ALL-VLAN-TRUNK port-group created earlier) and datastore for VCF node deployment
- Select the cloud builder ova
- Click Validate
- Once validation is successful, click Construct to build the nested VCF setup.
- On a server with 32 cores, 256 GB RAM and a datastore backed by six 2 TB magnetic disks in RAID 5 (10 TB usable), the deployment took about 4 hours to complete.
Study of the automated deployment
Once the automated deployment is complete, note the following components to understand it better:
Physical lab deployment
On the actual physical lab infrastructure we can see the following:
- VMs: CB-01a (Cloud builder), esxi-1, esxi-2, esxi-3, esxi-4
- In the datastore, an ISO folder with the VLC_vsphere.iso file
- On the jumpbox under C:\VLC, the Logs and cb_esx_iso folders
To start fresh, delete all of these and launch a new deployment (a cleanup sketch follows).
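A minimal PowerCLI cleanup sketch, assuming you are connected to the physical host (Connect-VIServer) and the VM names match the list above:

```powershell
# Power off and permanently delete the nested VMs created by VLC.
'CB-01a','esxi-1','esxi-2','esxi-3','esxi-4' | ForEach-Object {
    $vm = Get-VM -Name $_ -ErrorAction SilentlyContinue
    if ($vm) {
        if ($vm.PowerState -eq 'PoweredOn') { Stop-VM -VM $vm -Confirm:$false }
        Remove-VM -VM $vm -DeletePermanently -Confirm:$false
    }
}
# The ISO folder on the datastore and the C:\VLC\Logs / cb_esx_iso folders
# on the jumpbox still need to be removed separately.
```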
Cloud Builder services
Access Cloud Builder from the jump box using SSH to IP 10.0.0.221. Log in with username admin and password VMware123!
The following services are deployed on Cloud Builder at IP 10.0.0.221 in the case of a fully automated deployment (a sanity-check sketch follows the list):
- DNS
- /etc/maradns/db.vcf.sddc.lab; /etc/dwood3rc; systemctl restart maradns; systemctl restart maradns.deadwood
- DHCP
- /etc/dhcp/dhcpd.conf
- NTP
- /etc/ntp.conf
- L3 inter-VLAN routing
- ip addr show; sysctl net.ipv4.ip_forward; iptables -L
- BGP
- /usr/bin/gobgpd.conf
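These can be sanity-checked from the jump box. A sketch, assuming the Windows 10 OpenSSH client is installed and the systemd unit names match the config files listed above:

```powershell
# Status of the DNS, DHCP, NTP and BGP daemons on Cloud Builder.
ssh admin@10.0.0.221 "systemctl status maradns maradns.deadwood dhcpd ntpd gobgpd --no-pager"

# Confirm Cloud Builder answers DNS for the lab domain (the sddc-manager
# hostname is an assumption based on the sddcManagerSpec below).
Resolve-DnsName -Name sddc-manager.vcf.sddc.lab -Server 10.0.0.221
```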
AVN and NO_AVN json differences
The only difference between the AVN and NO_AVN JSON files is in the last line, where the NO_AVN file additionally excludes AVN and EBGP. The 'diff' output of the two JSON files is:
< "excludedComponents": ["NSX-V"]
---
> "excludedComponents": ["NSX-V", "AVN", "EBGP"]
Values in the JSON file and where their impact can be seen in the final deployment
The following values are worth noting in the automated deployment JSON config files. The corresponding place in the deployment where each value is reflected is also mentioned.
- sddcManagerSpec:
- ipAddress
- 10.0.0.4
- sddcId
- mgmt-domain
- dnsSpec.subdomain
- vcf.sddc.lab
- dnsSpec.domain
- vcf.sddc.lab
- networkSpecs:
- We can see the various networks at the vCenter (10.0.0.12) level via the distributed switch (sddc-vds01); a PowerCLI sketch to list them follows this networkSpecs block
- MANAGEMENT subnet
- 10.0.0.0/24
- Gateway
- 10.0.0.221 (Cloud Builder appliance)
- There is an option in the fully automated deployment to specify EXT_GW. If we leave it blank, the default route in the Cloud Builder appliance is:
- default via 10.0.0.221 dev eth0.10 proto static
- Presumably, once we specify an EXT_GW value in the automated form, this gets updated to the specified value.
- MANAGEMENT vlanId
- 10 -- the VLAN ID for all other port-groups also seems to be 10. Hence the various subnets are shared within a single VLAN.
- MANAGEMENT mtu
- 1500 -- the VDS MTU is 9000; the vmkernel ports in the respective VLANs/networks then get their MTU set per this configuration. Hence vmkernel ports in the management network have MTU 1500.
- VMOTION subnet
- 10.0.4.0/24
- VMOTION mtu
- 8940 (Same for other later subnets)
- vSAN subnet
- 10.0.8.0/24
- vMotion and vSAN networks are defined as part of mgmt-network-pool as part of SDDC manager network settings
- UPLINK01 - vlanId
- 11
- UPLINK01 - subnet
- 172.27.11.0/24
- UPLINK02 - vlanId
- 12
- UPLINK02 - subnet
- 172.27.12.0/24
- NSXT_EDGE_TEP - vlanId
- 13
- NSXT_EDGE_TEP - subnet
- 172.27.13.0/24
- Note that the sddc-edge-uplink01 and sddc-edge-uplink02 networks are all-VLAN trunks and not limited to VLAN IDs 11 and 12. This is expected, so that the edge can do L2 bridging between a VLAN and a segment if required. We can see VCF-edge_mgmt-edge-cluster_segment_11 and VCF-edge_mgmt-edge-cluster_segment_12 in NSX Manager (10.0.0.21 admin:VMware123!VMware123!) under Networking -> Segments
- X_REGION - subnet
- 10.60.0.0/24
- X_REGION - vlanId
- 0
- REGION_SPECIFIC - subnet
- 10.50.0.0/24
- REGION_SPECIFIC - vlanId
- 0
- We can see two segments in NSX Manager (10.0.0.21 admin:VMware123!VMware123!):
- Mgmt-RegionA01-VXLAN - 10.50.0.1/24 - Connected to Tier1 (mgmt-domain-tier1-gateway)
- Mgmt-xRegion01-VXLAN - 10.60.0.1/24 - Connected to Tier1 (mgmt-domain-tier1-gateway)
- Both are of type overlay, connected to the mgmt-domain-m01-overlay-tz transport zone. The purpose of creating two different segments with the chosen naming is not clear.
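These networkSpecs values can be cross-checked against the deployed VDS. A sketch, assuming the nested vCenter SSO credentials are administrator@vsphere.local / VMware123! (verify against your JSON file):

```powershell
# List the sddc-vds01 port-groups and their VLAN configuration on the nested vCenter.
Connect-VIServer -Server 10.0.0.12 -User 'administrator@vsphere.local' -Password 'VMware123!'
Get-VDSwitch -Name 'sddc-vds01' | Get-VDPortgroup |
    Select-Object Name, VlanConfiguration, NumPorts | Format-Table -AutoSize
```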
- nsxtSpec:
- nsxtManagers.ip
- 10.0.0.21
- nsxtManagers.hostname
- nsx-mgmt-1
- vip
- 10.0.0.20
- vipFqdn
- nsx-mgmt
- transportVlanId
- 10
- asn
- 65003
- We can see the VIP by going to System -> Appliances in NSX Manager, which can be accessed at both the 10.0.0.21 and 10.0.0.20 IPs.
- We can see the use of the transport VLAN ID by going to System -> Fabric -> Profiles and looking at the host-uplink-mgmt-cluster profile with Transport VLAN 10
- The ASN can be seen at Networking -> Tier-0 Gateways -> mgmt-domain-tier0-gateway -> BGP
- edgeNodeSpecs:
- edge01-mgmt managementCidr
- 10.0.0.23/24
- edge01-mgmt edgeVtep1Cidr
- 172.27.13.2/24
- edge01-mgmt edgeVtep2Cidr
- 172.27.13.3/24
- edge01-mgmt uplink-edge1-tor1 interfaceCidr
- 172.27.11.2/24
- edge01-mgmt uplink-edge1-tor2 interfaceCidr
- 172.27.12.2/24
- edge02-mgmt managementCidr
- 10.0.0.24/24
- edge02-mgmt edgeVtep1Cidr
- 172.27.13.4/24
- edge02-mgmt edgeVtep2Cidr
- 172.27.13.5/24
- edge02-mgmt uplink-edge1-tor1 interfaceCidr
- 172.27.11.3/24
- edge02-mgmt uplink-edge1-tor2 interfaceCidr
- 172.27.12.3/24
- We can see the edge management IP and VTEP IPs at System -> Fabric -> Nodes -> Edge Transport Nodes (a PowerCLI sketch to pull the same list follows the BGP notes below).
- The ToR IPs are visible at Networking -> Tier-0 Gateway -> GW -> Interfaces
- bgpNeighbours.neighbourIp
- 172.27.11.1 (AS 65001)
- bgpNeighbours.neighbourIp
- 172.27.12.1 (AS 65001)
- BGP neighbors are visible at Networking -> Tier-0 Gateway -> GW -> BGP -> BGP Neighbours
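The same edge node entries can be pulled through PowerCLI's NSX-T module. A sketch using the transport-nodes REST binding (VIP and credentials taken from the values above):

```powershell
# Connect to the NSX Manager VIP and list transport nodes (hosts and edges).
Connect-NsxtServer -Server 10.0.0.20 -User admin -Password 'VMware123!VMware123!'
$svc = Get-NsxtService -Name 'com.vmware.nsx.transport_nodes'
$svc.list().results | Select-Object display_name, id | Format-Table -AutoSize
```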
- logicalSegments Mgmt-RegionA01-VXLAN
- REGION_SPECIFIC
- logicalSegments Mgmt-xRegion01-VXLAN
- X_REGION
- networks
- MANAGEMENT, VMOTION, VSAN, UPLINK01, UPLINK02, NSXT_EDGE_TEP
- hostSpecs:
- esxi-1 ipAddressPrivate.ipAddress
- 10.0.0.101
- esxi-2 ipAddressPrivate.ipAddress
- 10.0.0.102
- esxi-3 ipAddressPrivate.ipAddress
- 10.0.0.103
- esxi-4 ipAddressPrivate.ipAddress
- 10.0.0.104
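A quick reachability check of the nested hosts from the jump box, using the hostSpecs IPs above:

```powershell
# Ping each nested ESXi management IP once.
101..104 | ForEach-Object { Test-Connection -ComputerName "10.0.0.$_" -Count 1 }
```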
Refer:
- VLC Download link
- http://tiny.cc/getVLC
- Blog on VLC part 1
- https://blogs.vmware.com/cloud-foundation/2020/01/31/deep-dive-into-vmware-cloud-foundation-part-1-building-a-nested-lab/
- Blog on VLC part 2
- https://blogs.vmware.com/cloud-foundation/2020/02/06/deep-dive-into-vmware-cloud-foundation-part-2-nested-lab-deployment/
- Google form to get link for downloading VLC
- https://docs.google.com/forms/u/0/d/e/1FAIpQLScU_X8LpdC6FHpANGBkdY87GoBkuuWcIiVh7dalFqJQJAOLpw/formResponse
- Direct VLC 4.2 download link
- https://www.dropbox.com/s/wsvsgf37wb65448/VLC_4.2_021021.zip?dl=0
- VLC slack community
- https://vlc-support.slack.com