Configure NSX-T 3.0 from scratch with edge cluster and tier gateways
NSX configuration is highly specific to individual needs, so a fixed set of steps is not practical. For lab / learning purposes, however, the steps below produce a working NSX-T 3.0 setup with an edge cluster and Tier gateways (both T0 and T1) that peer with the ToR using BGP.
Pre-requisites
- The following values / pre-requisites are required before starting the NSX-T 3.0 setup with BGP peering to the ToR switches:
- NTP for the setup
- DNS to resolve various FQDN for the setup
- NSX-T, vCenter, ESXi licenses
- ESXi hosts IP, FQDN, Management VLAN, vMotion VLAN, Shared storage/vSAN
- vCenter IP, FQDN, VLAN
- NSX-T manager management IP
- VLAN for Host VTEP communication - Needs DHCP
- VLAN for Edge VTEP communication
- VLAN for left ToR Edge Uplink (North-South)
- VLAN for right ToR Edge Uplink (North-South)
- Two IPs for edge in management VLAN with DNS entries
- Two IPs per edge (4 for 2 edges) in edge VTEP communication VLAN
- Two IPs per edge (one in left ToR Edge uplink VLAN and other in right ToR Edge uplink VLAN)
- BGP configuration on both left and right ToR in respective uplink VLANs
Example values used in the steps below
- The following example values are used throughout the steps below
- NTP
- time.google.com
- DNS
- 10.1.1.2 - A BIND DNS server set up specifically for rnd.com lab experiments at IP 10.1.1.2. Refer: Configuring basic DNS service with bind. Its named.conf and rnd.com.forward files are listed later in this article.
- ESXi hosts IPs
- 10.1.1.11 to 10.1.1.14
- ESXi hosts FQDN
- esxi01.rnd.com to esxi04.rnd.com
- Management VLAN
- 201 - 10.1.1.0/24
- vMotion VLAN
- 202 - 10.1.2.0/24. ESXi hosts will use IPs 10.1.2.11 to 10.1.2.14 in this VLAN for vMotion.
- Shared storage
- NFS created at 10.1.1.41. Refer: CentOS 7.x Cloudstack 4.11 Setup NFS for secondary and primary (if no central storage) storage
- vCenter Details
- IP: 10.1.1.3, FQDN: vcenter.rnd.com, VLAN:201 Management VLAN
- NSX-T manager management IP
- 10.1.1.4, FQDN: nsxt.rnd.com
- VLAN for Host VTEP communication
- VLAN 203 - 10.1.3.0/24. DHCP can be provided by adding a Linux machine with a DHCP service to this network, or by configuring a DHCP service on the ToR switches. Refer: Configure DHCP server on Cisco router
- VLAN for Edge VTEP communication
- VLAN 204 - 10.1.4.0/24
- VLAN for left ToR Edge Uplink (North-South)
- VLAN 205 - 10.1.5.0/24
- VLAN for right ToR Edge Uplink (North-South)
- VLAN 206 - 10.1.6.0/24
- Two IPs for edge in management VLAN with DNS entries
- edge01.rnd.com (10.1.1.21) and edge02.rnd.com (10.1.1.22)
- Two IPs per edge (4 for 2 edges) in edge VTEP communication VLAN
- 10.1.4.11/24 to 10.1.4.14/24
- Two IPs per edge (one in left ToR Edge uplink VLAN and other in right ToR Edge uplink VLAN)
- 10.1.5.2/24, 10.1.5.3/24, 10.1.6.2/24 and 10.1.6.3/24
- BGP configuration on both left and right ToR in respective uplink VLANs
- For this, refer to the sample single-router CSRV1000 configuration given below.
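Before proceeding, it helps to confirm the DNS and NTP prerequisites from any Linux machine in the management VLAN. A minimal sketch using standard tools (the FQDNs and IPs are the example values above; chronyc assumes chrony is the NTP client on that machine):

# Verify forward DNS resolution of the lab FQDNs against the lab DNS server
for h in vcenter nsxt esxi01 esxi02 esxi03 esxi04 edge01 edge02; do
  dig @10.1.1.2 +short ${h}.rnd.com
done

# Verify the NTP source is reachable
ping -c 2 time.google.com
chronyc sources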
Setup ESXi, NFS, DNS and vCenter
- After this, set up four ESXi hosts (esxi01.rnd.com to esxi04.rnd.com) for the nested setup using all-trunk VLAN ports, with management IPs in VLAN 201 (10.1.1.11 to 10.1.1.14). Note the point about the VMkernel MAC address at Install nested ESXi on top of ESXi in a VM
- Create DNS machine with IP 10.1.1.2 connected to VLAN 201 for DNS resolution. Refer Configuring basic DNS service with bind.
- Create NFS machine with IP 10.1.1.41 for shared storage among all four ESXi hosts. Refer CentOS 7.x Cloudstack 4.11 Setup NFS for secondary and primary (if no central storage) storage
- Change "VM Network" VLAN to 201 in all four ESXi hosts. This will help in reachbility to vCenter when it is deployed on 'VM Network' later.
- Add NFS datastore from 10.1.1.41 to esxi01.rnd.com so that we can deploy vCenter on this shared NFS datastore
- Create a few management stations 10.1.1.31 (Linux) and 10.1.1.32 (Windows) in VLAN 201 for nested lab operations. For both these machines we need an interface in normal LAN (eg 172.31.1.0/24 suggested here) without gateway for connecting to these admin machines from LAN machines in 172.31.1.0/24 subnet. Gateway for these management stations should be 10.1.1.1 so that these machines can access all subnets via L3 switch / router.
- Deploy vCenter using a Windows management station at IP 10.1.1.3 / FQDN-vcenter.rnd.com on top of nested ESXi (esxi01.rnd.com) using NFS datastore and VM network.
- Once vCenter for the nested lab is available, do the following:
- Add all four ESXi hosts across two clusters - Compute cluster (esxi01, esxi02) and Edge cluster (esxi03, esxi04)
- Add ESXi and vCenter licenses to vCenter. Assign these licenses.
- Create a distributed switch using two of the four uplinks of each of the four nested ESXi VMs.
- Create distributed port group for management using VLAN 201
- Migrate vCenter and vmk0 kernel port to this distributed switch.
- Later migrate the remaining two uplinks also to distributed switch.
- Delete standard switch from all four hosts
- Configure an MTU of at least 1700 on this distributed switch in the nested lab. The MTU on the external ESXi host switch hosting these nested ESXi VMs should be higher (ideally 9000+). If multiple ESXi hosts are used for the lab setup, those hosts need VLAN connectivity for all required VLANs (including 201-206 listed above) and MTU 9000+
- Create a distributed port-group ALL-VLAN-TRUNK on the distributed switch using "VLAN Trunking" with range 0-4094.
- Change the security settings of this distributed port-group to allow all three - Promiscuous mode, MAC address changes, Forged transmits.
- Create distributed port-group for vMotion (VLAN 202) and add VMKernel ports (10.1.2.11 to 10.1.2.14) on all four ESXi hosts for vMotion
- Mount the NFS datastore from 10.1.1.41 on the other three hosts as well.
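With the vMotion VMkernel ports and distributed switch MTU in place, connectivity and MTU can be validated from an ESXi shell with vmkping. A sketch, assuming vmk1 is the vMotion VMkernel port on esxi01 (adjust the vmk number to your setup):

# Basic vMotion VMkernel reachability from esxi01 to esxi02
vmkping -I vmk1 10.1.2.12

# Don't-fragment ping with a large payload to validate MTU end to end
# (1672 = 1700 minus 28 bytes of IP/ICMP headers; use 8972 for MTU 9000)
vmkping -I vmk1 -d -s 1672 10.1.2.12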
Setup NSX-T Manager and Edge cluster
- Deploy NSX-T manager with IP 10.1.1.4 - FQDN nsxt.rnd.com on the management port group, working from the admin stations (10.1.1.31 or 10.1.1.32). This NSX manager can be deployed on top of esxi02.rnd.com
- Don't choose size Extra Small; that size is meant for a different purpose altogether.
- Enable SSH to the NSX-T manager
- Add the NSX-T license to the NSX-T manager, then go to System -> Fabric -> Compute Managers and integrate the NSX manager with vCenter.
- Go to System -> Fabric -> Transport Zones and create two transport zones: one for edge communication with Traffic Type VLAN (edge-vlan-tz) and one for overlay communication with Traffic Type Overlay (mg-overlay-tz)
- Go to System -> Fabric -> Profiles. Create an uplink profile using load-balance source teaming with 4 active uplinks (u1, u2, u3, u4) and transport VLAN 203 for host-VTEP communication
- Go to System -> Fabric -> Nodes -> Host Transport Nodes. Select Managed by vcenter.rnd.com. Configure NSX on the compute cluster hosts (esxi01 and esxi02) using the overlay transport zone (mg-overlay-tz) and the uplink profile with 4 uplinks in VLAN 203 (host-VTEP) with DHCP-based IPs. Assign all four uplinks of the existing vDS to each host. There is no need to use N-VDS or enhanced data path
- Go to System -> Fabric -> Nodes -> Edge Clusters. Create Edge cluster with name edgecluster01
- Go to System -> Fabric -> Profiles. Create an uplink profile for the edges with transport VLAN 204 (Edge VTEP) and two uplinks using load-balance source teaming
- Go to Networking -> Segments and create segments for the edge uplinks: a left-uplink segment in transport zone edge-vlan-tz with VLAN 205
- Similarly create a right-uplink segment with edge-vlan-tz and VLAN ID 206
- Go to System -> Fabric -> Nodes -> Edge Transport Nodes. Create an edge (edge01) with management IP 10.1.1.21 - FQDN edge01.rnd.com with both uplinks on ALL-VLANS-Trunk-DPG. Use the edge uplink profile created in the previous steps. Use 10.1.4.11 and 10.1.4.12 as the edge VTEP IPs in VLAN 204. Both the mg-overlay-tz and edge-vlan-tz transport zones should be selected for the edge.
- Create this without resource (memory) reservation. Deploy it on the Edge cluster (esxi03, esxi04) created above.
- Similarly create an edge (edge02) with management IP 10.1.1.22 - FQDN edge02.rnd.com with both uplinks on ALL-VLANS-Trunk-DPG. Use the edge uplink profile created in the previous steps. Use 10.1.4.13 and 10.1.4.14 as the edge VTEP IPs in VLAN 204.
- Once both edges are ready, add them to edgecluster01
- SSH to each edge as admin and run "get logical-routers". Enter VRF 0 and check the interfaces with "get interfaces". Ping the gateway 10.1.4.1 to validate edge VTEP connectivity to the VTEP VLAN gateway, as in the sample session below.
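A rough sketch of that validation session on the edge (prompts and VRF numbers may differ slightly in your deployment):

edge01> get logical-routers
edge01> vrf 0
edge01(vrf)> get interfaces
edge01(vrf)> ping 10.1.4.1
edge01(vrf)> exit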
Setup T0 and T1 gateway routers with overlay segments
- Go to Networking -> T0-Gateways
- Create t0-gw as active-standby on edgecluster01. Click save
- Add interface edge01-left-uplink with IP 10.1.5.2/24 connected to the left-uplink segment. Select edge01 as the edge node.
- Similarly create interface edge01-right-uplink with IP 10.1.6.2/24 connected to right-uplink segment. Select edge01 as edge node.
- Add interface edge02-left-uplink with IP 10.1.5.3/24 connected to left-uplink segment. Select edge02 as edge node.
- Similarly create interface edge02-right-uplink with IP 10.1.6.3/24 connected to the right-uplink segment. Select edge02 as the edge node.
- Go to BGP and configure AS 65000. Enable BGP.
- Add BGP neighbor 10.1.5.1 with remote AS 65000 and source IPs 10.1.5.2 and 10.1.5.3
- Similarly add BGP neighbor 10.1.6.1 with remote AS 65000 and source IPs 10.1.6.2 and 10.1.6.3
- Add route re-distribution for all routes. Select at least NAT IP and "Connected Interfaces and Segments (including subtree)" under the T0 gateway
- Select "Connected Interfaces and Segments" and NAT IP for Tier-1 gateways
- SSH to an edge and run "get logical-routers". There should be a service router for the T0 gateway at VRF 1. Enter VRF 1
- Validate that the BGP sessions to the ToR have come up using "get bgp neighbor summary"
- Look at the BGP routes using "get bgp"
- Go to Networking -> T1-Gateways
- Add a Tier-1 gateway such as t1-prod with connectivity to the t0-gw created before. Select edge cluster edgecluster01. Selecting an edge cluster creates a service router for the T1 gateway, which is useful for NAT / L2 bridging etc. Click save.
- Go to Route Advertisement and advertise routes for "All connected segments and service ports"
- Go to Networking -> Segments. Add a segment such as app-seg with connectivity to t1-prod and transport zone mg-overlay-tz. Give the subnet as 10.1.7.1/24. This becomes the default gateway for this overlay on the distributed router.
- Add another segment web-seg with connectivity to t1-prod and transport zone mg-overlay-tz. Give the subnet as 10.1.8.1/24.
- Now these two new overlay segments (10.1.7.0/24 and 10.1.8.0/24) should be reachable from the admin stations 10.1.1.31 and 10.1.1.32, and their routes should appear on the T0 service router and the ToR, as in the validation sketch below.
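A sketch of that validation, combining the edge CLI checks above with reachability tests from an admin station (the VRF number comes from the "get logical-routers" output; "get route" is assumed to be available as in the standard NSX edge CLI):

# On edge01, inside the Tier-0 service router VRF
get logical-routers
vrf 1
get bgp neighbor summary
get route

# From the Linux admin station 10.1.1.31, the overlay gateways should respond
ping -c 2 10.1.7.1
ping -c 2 10.1.8.1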
Note that in nested environments ping responses through NSX might appear duplicated. This only happens in nested environments where the nested VMs are on the same physical host.
See: https://communities.vmware.com/t5/VMware-NSX-Discussions/Duplicated-Ping-responses-in-NSX/td-p/969774
Input files to help with test setup
rnd.com zone in named.conf for dns.rnd.com
Only rnd.com zone related lines of named.conf are captured below:
zone "rnd.com" IN { type master; file "rnd.com.forward"; };
For full steps refer Configuring basic DNS service with bind
rnd.com.forward for dns.rnd.com
rnd.com.forward file for rnd.com zone is:
$TTL 3600
@                       SOA     ns.rnd.com. root.rnd.com. (1 15m 5m 30d 1h)
                        NS      dns.rnd.com.
                        A       10.1.1.2
l3switch                IN A    10.1.1.1
dns                     IN A    10.1.1.2
vcenter                 IN A    10.1.1.3
nsxt                    IN A    10.1.1.4
nsxt1                   IN A    10.1.1.5
nsxt2                   IN A    10.1.1.6
nsxt3                   IN A    10.1.1.7
esxi01                  IN A    10.1.1.11
esxi02                  IN A    10.1.1.12
esxi03                  IN A    10.1.1.13
esxi04                  IN A    10.1.1.14
edge01                  IN A    10.1.1.21
edge02                  IN A    10.1.1.22
admin-machine           IN A    10.1.1.31
admin-machine-windows   IN A    10.1.1.32
nfs                     IN A    10.1.1.41
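The configuration and zone file can be sanity-checked on the DNS machine before testing resolution. A sketch, assuming the zone file is placed in /var/named/ (adjust the path to your bind layout):

# Validate bind configuration and the rnd.com zone file
named-checkconf /etc/named.conf
named-checkzone rnd.com /var/named/rnd.com.forward

# Spot-check records against the running server
dig @10.1.1.2 +short edge01.rnd.com
dig @10.1.1.2 +short nsxt.rnd.com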
Configure pfsense as router with BGP and DHCP functionality
To provide the required inter-VLAN routing, NAT, DHCP, BGP, etc., a pfSense machine can be used as the router. It can be set up as follows:
- Download pfsense ISO
- Create VM with two interfaces. One interface connected to LAN / Internet network. Other interface connected to "ALL VLAN Trunk" portgroup.
- Install pfsense on a VM with default options
- Give the first interface a local LAN IP address as WAN. You can identify which interface is first from its MAC address.
- Access the pfSense web UI via the configured WAN interface (the name is WAN but it is configured with a local LAN IP). Log in with admin:pfsense.
- Go through setup wizard and configure various settings such as admin password.
- After wizard is complete go to Interfaces -> Interface Assignments -> VLANs. Add VLANs 201 to 206 here.
- Go to Interfaces -> Interface Assignments. Add the second parent interface and all child VLAN interfaces. Overall there should be 8 interfaces: one for LAN access (WAN), six for VLANs 201-206, and one for the entire second (parent) interface.
- Set MTU as 9000 on second interface and enable it
- Configure the VLAN interfaces 201-206 one by one with descriptive names. For VLAN 201 use MTU 1500; for the others use MTU 9000. Configure static IPs 10.1.1.1/24 through 10.1.6.1/24 in the respective VLANs.
- Go to Interfaces -> Interface Assignments. Create an Interface Group "LANGroup" with all six VLAN 201-206 interfaces
- Go to Firewall -> Rules. At this point pfSense might stop responding on the WAN interface. In that case proceed from the jump box 10.1.1.2 and access pfSense at the 10.1.1.1 IP configured in VLAN 201
- Add firewall rules in LANGroup and WAN for allowing any traffic.
- Go to Services -> DHCP Server. Enable DHCP on the VLAN 203 interface with range 10.1.3.50 - 10.1.3.100, DNS 10.1.1.2, and an appropriate domain name and domain search list.
- Go to System -> package manager and install FRR package
- Go to Services -> FRR BGP and configure BGP settings as:
- Enable BGP routing
- Enable "Preserve FW State"
- In "Redistribute Local" set all three to IPv4
- Go to "Neighbors" tab and add four neighbours 10.1.5.2, 10.1.5.3, 10.1.6.2, 10.1.6.3 with AS 65000
- Go to Services -> FRR Global/Zebra. Enable FRR and configure master password
- Go to Prefix lists under "Global Settings" and create a prefix list with:
- IP Type
- IPv4
- Name
- bgp-any
- Prefix List Entries
- Sequence:0; Action:Permit; Any: Enabled
- Click Save
- Go back to Services -> FRR BGP -> Neighbors. For all neighbors (10.1.5.2, 10.1.5.3, 10.1.6.2, 10.1.6.3) select the bgp-any prefix list created above under "Prefix List Filter"
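Once the neighbors come up, FRR state can be checked from Diagnostics -> Command Prompt or an SSH shell on pfSense using standard FRR vtysh commands (a sketch):

# BGP session state for the four edge uplink neighbors
vtysh -c "show ip bgp summary"

# Prefixes learned from the NSX T0 gateway
vtysh -c "show ip bgp"
vtysh -c "show ip route bgp"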
Sample CSRV1000 configuration with single router
Ideally there should be two ToRs for redundancy, but in a lab a single router can be used. A single router can also act as a "router on a stick" for inter-VLAN routing among the VLANs 201-206 listed above, and as the DHCP server for VLAN 203 (Host-VTEP). A sample CSRV1000 configuration which does all this, and also NATs the management network (10.1.1.0/24) out of the external interface for Internet access, is:
CSRV1000 routers have been found to have very low throughput (< 2 Mbps). Ideally consider using pfSense as the router, as described earlier.
version 15.4
service timestamps debug datetime msec
service timestamps log datetime msec
no platform punt-keepalive disable-kernel-core
platform console virtual
!
hostname l3-switch
!
boot-start-marker
boot-end-marker
!
enable secret 5 $1$ynrj$KFnQs1u7Xb/szNkdzw9RP1
!
no aaa new-model
!
ip dhcp pool host-overlay
 network 10.1.3.0 255.255.255.0
 domain-name rnd.com
 dns-server 10.1.1.2
 default-router 10.1.3.1
 lease 7
!
subscriber templating
multilink bundle-name authenticated
!
license udi pid CSR1000V sn 91E1JKLD4F3
!
username admin privilege 15 secret 5 $1$QUXO$iQWimYJ8a4Ah1JwZmIyLp0
!
redundancy
 mode none
!
ip ssh rsa keypair-name ssh-key
ip ssh version 2
ip scp server enable
!
interface VirtualPortGroup0
 ip unnumbered GigabitEthernet1
!
interface GigabitEthernet1
 ip address 172.31.1.173 255.255.255.0
 ip nat outside
 negotiation auto
!
interface GigabitEthernet2
 no ip address
 negotiation auto
 mtu 9216
!
interface GigabitEthernet2.1
 encapsulation dot1Q 201
 ip address 10.1.1.1 255.255.255.0
 ip nat inside
!
interface GigabitEthernet2.2
 encapsulation dot1Q 202
 ip address 10.1.2.1 255.255.255.0
!
interface GigabitEthernet2.3
 encapsulation dot1Q 203
 ip address 10.1.3.1 255.255.255.0
!
interface GigabitEthernet2.4
 encapsulation dot1Q 204
 ip address 10.1.4.1 255.255.255.0
!
interface GigabitEthernet2.5
 encapsulation dot1Q 205
 ip address 10.1.5.1 255.255.255.0
!
interface GigabitEthernet2.6
 encapsulation dot1Q 206
 ip address 10.1.6.1 255.255.255.0
!
interface GigabitEthernet3
 no ip address
 shutdown
 negotiation auto
!
router bgp 65000
 bgp router-id 10.1.6.1
 bgp log-neighbor-changes
 neighbor 10.1.5.2 remote-as 65000
 neighbor 10.1.5.3 remote-as 65000
 neighbor 10.1.6.2 remote-as 65000
 neighbor 10.1.6.3 remote-as 65000
 !
 address-family ipv4
  network 0.0.0.0
  redistribute connected
  redistribute static
  neighbor 10.1.5.2 activate
  neighbor 10.1.5.3 activate
  neighbor 10.1.6.2 activate
  neighbor 10.1.6.3 activate
 exit-address-family
!
virtual-service csr_mgmt
 vnic gateway VirtualPortGroup0
!
ip nat inside source list 1 interface GigabitEthernet1 overload
ip forward-protocol nd
!
no ip http server
ip http secure-server
ip route 0.0.0.0 0.0.0.0 GigabitEthernet1 172.31.1.1
ip route 0.0.0.0 0.0.0.0 172.31.1.1
ip route 172.31.1.162 255.255.255.255 VirtualPortGroup0
!
access-list 1 permit 10.0.0.0 0.255.255.255
!
control-plane
!
line con 0
 stopbits 1
line aux 0
 stopbits 1
line vty 0 4
 login local
 transport input ssh
!
end
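Once the edges peer with this router, standard IOS show commands can confirm the BGP sessions, the overlay routes learned from the T0 gateway (e.g. 10.1.7.0/24 and 10.1.8.0/24), and the host-VTEP DHCP leases in VLAN 203 (a sketch):

show ip bgp summary
show ip bgp
show ip route bgp
show ip dhcp binding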
Note:
- Without "mtu 9216" on GigabitEthernet 2 the ping might work but any data transfer eg running command with large output after ssh or opening https:// site will not work.
Here:
- GigabitEthernet1 is connected to the external LAN network (172.31.1.0/24) with IP 172.31.1.173 for Internet connectivity / NAT
- GigabitEthernet2 is the all-VLAN trunk port. Refer: Create standard port-group for trunking all VLANs to VM
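The jumbo-frame note above can be verified with don't-fragment pings from a Linux machine in one of the jumbo VLANs, for example toward the edge VTEP VLAN gateway (payload size = MTU minus 28 bytes of IP/ICMP headers):

# Fits within a standard 1500-byte MTU
ping -M do -s 1472 -c 2 10.1.4.1

# Requires jumbo frames along the entire path
ping -M do -s 8972 -c 2 10.1.4.1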
Sample CSRV1000 configuration with two routers
CSRV1000 routers have been found to have very low throughput (< 2 Mbps). Ideally consider using pfSense as the router, as described earlier.
Router1 configuration
!
! Last configuration change at 17:58:46 UTC Wed Apr 28 2021
!
version 15.4
service timestamps debug datetime msec
service timestamps log datetime msec
no platform punt-keepalive disable-kernel-core
platform console virtual
!
hostname l3-switch
!
boot-start-marker
boot-end-marker
!
enable secret 5 $1$ynrj$KFnQs1u7Xb/szNkdzw9RP1
!
no aaa new-model
!
ip dhcp pool host-overlay
 network 10.1.3.0 255.255.255.0
 domain-name rnd.com
 dns-server 10.1.1.2
 default-router 10.1.3.1
 lease 7
!
subscriber templating
multilink bundle-name authenticated
!
license udi pid CSR1000V sn 9VA2HK6J7QZ
!
username admin privilege 15 secret 5 $1$QUXO$iQWimYJ8a4Ah1JwZmIyLp0
!
redundancy
 mode none
!
ip ssh rsa keypair-name ssh-key
ip ssh version 2
ip scp server enable
!
interface VirtualPortGroup0
 ip unnumbered GigabitEthernet1
!
interface GigabitEthernet1
 ip address 172.31.1.179 255.255.255.0
 ip nat outside
 negotiation auto
!
interface GigabitEthernet2
 mtu 9216
 no ip address
 negotiation auto
!
interface GigabitEthernet2.1
 ip nat inside
!
interface GigabitEthernet2.2
 encapsulation dot1Q 202
 ip address 10.1.2.1 255.255.255.0
!
interface GigabitEthernet2.3
 encapsulation dot1Q 203
 ip address 10.1.3.1 255.255.255.0
!
interface GigabitEthernet2.4
 encapsulation dot1Q 204
 ip address 10.1.4.1 255.255.255.0
!
interface GigabitEthernet2.5
 encapsulation dot1Q 205
 ip address 10.1.5.1 255.255.255.0
 ip nat inside
!
interface GigabitEthernet2.201
 encapsulation dot1Q 201
 ip address 10.1.1.1 255.255.255.0
 ip nat inside
!
interface GigabitEthernet3
 no ip address
 shutdown
 negotiation auto
!
router bgp 65001
 bgp router-id 10.1.5.1
 bgp log-neighbor-changes
 neighbor 10.1.5.2 remote-as 65000
 neighbor 10.1.5.3 remote-as 65000
 !
 address-family ipv4
  network 0.0.0.0
  redistribute connected
  redistribute static
  neighbor 10.1.5.2 activate
  neighbor 10.1.5.3 activate
 exit-address-family
!
virtual-service csr_mgmt
 vnic gateway VirtualPortGroup0
!
ip nat inside source list 1 interface GigabitEthernet1 overload
ip forward-protocol nd
!
no ip http server
ip http secure-server
ip route 0.0.0.0 0.0.0.0 GigabitEthernet1 172.31.1.1
ip route 0.0.0.0 0.0.0.0 172.31.1.1
ip route 172.31.1.162 255.255.255.255 VirtualPortGroup0
!
access-list 1 permit 10.0.0.0 0.255.255.255
!
control-plane
!
line con 0
 logging synchronous
 stopbits 1
line aux 0
 stopbits 1
line vty 0 4
 login local
 transport input ssh
!
end
Router2 configuration
!
! Last configuration change at 18:18:36 UTC Wed Apr 28 2021 by admin
!
version 15.4
service timestamps debug datetime msec
service timestamps log datetime msec
no platform punt-keepalive disable-kernel-core
platform console virtual
!
hostname l3-switch-right
!
boot-start-marker
boot-end-marker
!
enable secret 5 $1$ynrj$KFnQs1u7Xb/szNkdzw9RP1
!
no aaa new-model
!
subscriber templating
multilink bundle-name authenticated
!
license udi pid CSR1000V sn 94Z2LDZJH7F
!
username admin privilege 15 secret 5 $1$QUXO$iQWimYJ8a4Ah1JwZmIyLp0
!
redundancy
 mode none
!
ip ssh rsa keypair-name ssh-key
ip ssh version 2
ip scp server enable
!
interface VirtualPortGroup0
 ip unnumbered GigabitEthernet1
!
interface GigabitEthernet1
 ip address 172.31.1.180 255.255.255.0
 ip nat outside
 negotiation auto
!
interface GigabitEthernet2
 mtu 9216
 no ip address
 negotiation auto
!
interface GigabitEthernet2.1
 ip nat inside
!
interface GigabitEthernet2.2
 encapsulation dot1Q 202
 ip address 10.1.2.251 255.255.255.0
!
interface GigabitEthernet2.3
 encapsulation dot1Q 203
 ip address 10.1.3.251 255.255.255.0
!
interface GigabitEthernet2.4
 encapsulation dot1Q 204
 ip address 10.1.4.251 255.255.255.0
!
interface GigabitEthernet2.6
 encapsulation dot1Q 206
 ip address 10.1.6.1 255.255.255.0
 ip nat inside
!
interface GigabitEthernet2.201
 encapsulation dot1Q 201
 ip address 10.1.1.251 255.255.255.0
 ip nat inside
!
interface GigabitEthernet3
 no ip address
 shutdown
 negotiation auto
!
router bgp 65001
 bgp router-id 10.1.6.1
 bgp log-neighbor-changes
 neighbor 10.1.6.2 remote-as 65000
 neighbor 10.1.6.3 remote-as 65000
 !
 address-family ipv4
  network 0.0.0.0
  redistribute connected
  redistribute static
  neighbor 10.1.6.2 activate
  neighbor 10.1.6.3 activate
 exit-address-family
!
virtual-service csr_mgmt
 vnic gateway VirtualPortGroup0
!
ip nat inside source list 1 interface GigabitEthernet1 overload
ip forward-protocol nd
!
no ip http server
ip http secure-server
ip route 0.0.0.0 0.0.0.0 GigabitEthernet1 172.31.1.1
ip route 0.0.0.0 0.0.0.0 172.31.1.1
ip route 172.31.1.162 255.255.255.255 VirtualPortGroup0
!
access-list 1 permit 10.0.0.0 0.255.255.255
!
control-plane
!
line con 0
 logging synchronous
 stopbits 1
line aux 0
 stopbits 1
line vty 0 4
 login local
 transport input ssh
!
end
Using Linux machine for BGP, DHCP and inter-VLAN routing
Since CSRV1000 routers have been found to have very low throughput (< 2 Mbps), it makes sense to use a Linux machine for NAT, inter-VLAN routing, etc.
The ToR switch functionality required by NSX can be built on a Linux machine as follows:
- Create a CentOS 8 Stream Linux machine with 7 interfaces: six interfaces in VLANs 201-206 and the last interface in a VLAN where the router can get LAN / Internet connectivity
- Log in to the machine and run the following command to note MAC addresses and interface names:
- ip addr show
- Note the VLAN ID and MAC address using vSphere Web UI so that we know which interface is for which VLAN
- Create interface configuration files for VLANs 201-206 at /etc/sysconfig/network-scripts/ifcfg-<interface-name>, similar to:
- TYPE=Ethernet
- BOOTPROTO=static
- DEFROUTE=no
- IPV4_FAILURE_FATAL=no
- IPV6INIT=no
- NAME=<interface-name>
- DEVICE=<interface-name>
- ONBOOT=yes
- IPADDR=10.1.<X>.1
- NETMASK=255.255.255.0
- ZONE=internal
- MTU=9000
- Here replace interface-name and X with appropriate values based on the interface name and VLAN subnet (e.g. for VLAN 201 X is 1, for VLAN 202 X is 2, etc.)
- Refer: https://www.cyberciti.biz/faq/centos-rhel-redhat-fedora-debian-linux-mtu-size/
- For the remaining 7th interface which gives connectivity to LAN/Internet use:
- TYPE=Ethernet
- BOOTPROTO=static
- DEFROUTE=yes
- IPV4_FAILURE_FATAL=no
- IPV6INIT=no
- NAME=<interface-name>
- DEVICE=<interface-name>
- ONBOOT=yes
- IPADDR=<lan-ip>
- NETMASK=<lan-subnet-mask>
- GATEWAY=<lan-gateway>
- DNS1=4.2.2.2
- ZONE=external
- Here replace interface-name, the lan-* values, DNS etc. as required. We avoid using the 10.1.1.2 DNS on this machine to prevent a cyclic dependency: the 10.1.1.2 jump box depends on this machine for Internet access.
- Enable forwarding of packets using:
- sysctl net.ipv4.ip_forward=1
- Edit '/etc/sysctl.d/99-sysctl.conf' and append:
- net.ipv4.ip_forward=1
- Install and configure frr on machine using:
- Install frr using:
- dnf -y install epel-release
- dnf -y install frr
- Edit /etc/frr/daemons and set
- bgpd=yes
- Start and enable frr using:
- systemctl start frr
- systemctl enable frr
- Open terminal using:
- vtysh
- Go to "config t" and paste below to configure frr
- !
- frr defaults traditional
- hostname l3router.vcfrnd.com
- password secret
- service integrated-vtysh-config
- !
- router bgp 65001
- bgp router-id 10.1.6.1
- bgp graceful-restart preserve-fw-state
- no bgp network import-check
- neighbor 10.1.5.2 remote-as 65000
- neighbor 10.1.5.2 description 10.1.5.2
- neighbor 10.1.5.2 update-source 10.1.5.1
- neighbor 10.1.5.3 remote-as 65000
- neighbor 10.1.5.3 description 10.1.5.3
- neighbor 10.1.5.3 update-source 10.1.5.1
- neighbor 10.1.6.2 remote-as 65000
- neighbor 10.1.6.2 description 10.1.6.2
- neighbor 10.1.6.2 update-source 10.1.6.1
- neighbor 10.1.6.3 remote-as 65000
- neighbor 10.1.6.3 description 10.1.6.3
- neighbor 10.1.6.3 update-source 10.1.6.1
- !
- address-family ipv4 unicast
- redistribute connected
- redistribute static
- redistribute kernel
- neighbor 10.1.5.2 activate
- neighbor 10.1.5.3 activate
- neighbor 10.1.6.2 activate
- neighbor 10.1.6.3 activate
- no neighbor 10.1.5.2 send-community
- neighbor 10.1.5.2 prefix-list bgp-any in
- neighbor 10.1.5.2 prefix-list bgp-any out
- no neighbor 10.1.5.3 send-community
- neighbor 10.1.5.3 prefix-list bgp-any in
- neighbor 10.1.5.3 prefix-list bgp-any out
- no neighbor 10.1.6.2 send-community
- neighbor 10.1.6.2 prefix-list bgp-any in
- neighbor 10.1.6.2 prefix-list bgp-any out
- no neighbor 10.1.6.3 send-community
- neighbor 10.1.6.3 prefix-list bgp-any in
- neighbor 10.1.6.3 prefix-list bgp-any out
- exit-address-family
- !
- !
- ip prefix-list bgp-any permit any
- !
- line vty
- !
- end
- Configure DHCP as explained at CentOS 8.x Setup basic DHCP server, using this config file:
- subnet 10.1.1.0 netmask 255.255.255.0
- {
- }
- subnet 10.1.2.0 netmask 255.255.255.0
- {
- }
- #Example subnet where DHCP will give IP, domain name, DNS IPs, Netmask, Gateway IP, etc. to DHCP clients
- subnet 10.1.3.0 netmask 255.255.255.0
- {
- option domain-name "rnd.com";
- option domain-name-servers 10.1.1.2;
- option routers 10.1.3.1;
- range 10.1.3.10 10.1.3.100;
- }
- subnet 10.1.4.0 netmask 255.255.255.0
- {
- }
- subnet 10.1.5.0 netmask 255.255.255.0
- {
- }
- subnet 10.1.6.0 netmask 255.255.255.0
- {
- }
- subnet 172.31.1.0 netmask 255.255.255.0
- {
- }
- where 172.31.1.0/255.255.255.0 should be replaced with the subnet giving LAN/Internet access
- Enable firewalld communication between various interfaces in internal zone using:
- firewall-cmd --zone=internal --set-target=ACCEPT --permanent
- firewall-cmd --reload
- firewall-cmd --zone=internal --list-all
- Enable DHCP requests on internal zone using
- firewall-cmd --zone=internal --add-port=67/udp --permanent
- firewall-cmd --reload
- firewall-cmd --zone=internal --list-all
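With frr, DHCP and firewalld configured, the Linux ToR can be verified end to end. A sketch using the tools set up above:

# BGP sessions to the NSX edge uplink IPs and learned overlay routes
vtysh -c "show bgp summary"
vtysh -c "show ip route bgp"

# DHCP service for the host-VTEP VLAN 203
systemctl status dhcpd

# Confirm the internal zone accepts traffic and allows DHCP
firewall-cmd --zone=internal --list-all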
Refer:
- https://www.youtube.com/playlist?list=PLvjREERAnGxJctJOLLTXN9Z77_9g9at7o - An excellent YouTube playlist covering many of the above concepts