CentOS 8.x Cloudstack 4.15 Initial zone setup with VLANs
From Notes_Wiki
Home > CentOS > CentOS 8.x > Virtualization > Cloudstack 4.15 > Initial zone setup with VLANs
After setting up a new management server as described at CentOS 8.x Cloudstack 4.15 Setup Management server, it can be configured for initial zone with VLANs using:
- Open the CloudStack manager in a browser at http://<cloudstack-manager-ip-or-fqdn>:8080/
- Default username is 'admin' and default password is 'password'
- Choose "Continue with installation"
- Change default password to something more secure
- Do not enable "Security groups". Leave them disabled, keep the default "Advanced" zone type, and click Next on the zone type window
- Security-group-like features can be achieved via guest networks. Refer: https://svn.apache.org/repos/asf/cloudstack/docsite/html/docs/en-US/Apache_CloudStack/4.1.1/html/Admin_Guide/security-groups.html
- Enter details for initial zone to be created. Example values are:
- Name
- Zone1
- IPv4 DNS1
- 4.2.2.2 -- This should be a public DNS server that can resolve the public IPs of your servers.
- Internal DNS1
- 172.31.1.160
- Hypervisor
- KVM
- Guest CIDR
- 10.100.0.0/16 -- Completely isolated network not used anywhere in the organization and not planned for future use either
- Note that all members of a zone (pods, clusters, etc.) share the same secondary storage
- Add more DNS IPs if available. Leave other values at their defaults and choose Next
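As a quick sanity check on the guest CIDR size, shell arithmetic shows how many guest IPs the example /16 provides (a sketch; the prefix length is the example value above):

```shell
# Usable guest addresses in the example guest CIDR 10.100.0.0/16:
# 2^(32 - prefix) minus the network and broadcast addresses.
prefix=16
echo $(( (1 << (32 - prefix)) - 2 ))   # prints 65534
```

A /16 is generous on purpose: guest networks for all accounts in the zone are carved out of this range, so it should not collide with any organization network.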
- In the Physical Network section make the following changes:
- Leave isolation method as "VLAN"
- If there is more than one bridge (e.g. cloudbr1) backed by a different physical NIC, then add one more physical network
- Add storage traffic to one of the physical networks; it can be assigned to only one physical network.
- Assign the other traffic types to the available physical networks as appropriate.
- Only guest traffic can be part of multiple physical networks.
- Public, management, and storage traffic each belong to exactly one physical network.
- Click "Edit" against each traffic type (Storage, Management, etc.) and enter the network label, e.g. cloudbr0 or cloudbr1. This must be done for all four traffic types.
- The best option is to create two bridges, cloudbr0 and cloudbr1. Keep management and storage on cloudbr0, and public and guest on cloudbr1. The NIC behind cloudbr0 can be on an access VLAN for management/storage; the NIC behind cloudbr1 should be a trunk port carrying all the other VLANs.
- Click next to continue
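For reference, the two-bridge layout described above might look roughly like this in ifcfg files on the KVM hosts (a sketch only; the NIC name eth1 and the use of ifcfg-style files are assumptions, adjust for your hardware and network tooling):

```
# /etc/sysconfig/network-scripts/ifcfg-cloudbr1  (bridge for public + guest traffic)
DEVICE=cloudbr1
TYPE=Bridge
ONBOOT=yes
BOOTPROTO=none

# /etc/sysconfig/network-scripts/ifcfg-eth1  (assumed trunk NIC, enslaved to cloudbr1)
DEVICE=eth1
TYPE=Ethernet
ONBOOT=yes
BRIDGE=cloudbr1
```

cloudbr0 (management/storage) is set up the same way on the access-VLAN NIC, with the host's management IP configured on the bridge rather than on the NIC.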
- In the Public Network section enter appropriate values. Example:
- Gateway
- 172.31.1.1
- Netmask
- 255.255.255.0 (Note: "/24" passes form validation but leads to an error during zone launch; use the dotted format)
- VLAN/VNI
- (Leave blank for untagged public network)
- Start IP
- 172.31.1.191
- End IP
- 172.31.1.200
- Click "Add" and then click "Next"
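A quick way to sanity-check that the public start/end IPs fall inside the gateway's subnet (a sketch; this simple string comparison is only valid for the 255.255.255.0 netmask used in this example):

```shell
# Compare the first three octets (the /24 network part) of two IPv4 addresses.
same_subnet_24() { [ "${1%.*}" = "${2%.*}" ]; }

same_subnet_24 172.31.1.1 172.31.1.191 && \
same_subnet_24 172.31.1.1 172.31.1.200 && \
echo "public range is inside the gateway's /24"
```

If either end of the range is outside the gateway's subnet, the system VMs will get public IPs they cannot route through that gateway.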
- In the Pod section enter the desired name and the IP range reserved for the secondary storage VM, console proxy VM, etc. Example values are given below. Refer: http://docs.cloudstack.apache.org/en/latest/installguide/configuration.html#adding-a-pod (Ideally, for an internal CloudStack deployment keep the pod (management) network the same as the storage network.)
- Pod name
- Pod1
- Reserved system gateway
- 172.31.1.1
- Reserved system netmask
- 255.255.255.0 (Note: "/24" passes form validation but leads to an error during zone launch; use the dotted format)
- Start Reserved system IP
- 172.31.1.201
- End Reserved system IP
- 172.31.1.210
- A pod can have one or more clusters. Each cluster has many hosts sharing the same primary storage
- Choose "Add" and click Next
- In the VLAN/VNI range section enter VLANs that are tagged (trunked) to all CloudStack hosts at the physical switch level. Example values:
- VLANs Range
- 50-70
- Note that these VLANs should be tagged to the CloudStack hosts but should ideally be unused -- no devices outside CloudStack in these VLANs.
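Since each isolated guest network consumes one VLAN from this range, the range size caps how many isolated guest networks the zone can run at once (a sketch using the example range above):

```shell
# Example VLAN range 50-70: each isolated guest network uses one VLAN,
# so the zone supports at most (end - start + 1) such networks at a time.
start=50
end=70
echo $(( end - start + 1 ))   # prints 21
```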
- In the Storage traffic section add at least one network that has access to the secondary storage VM IPs (the Pod1 IPs) as well as to the primary storage (if it is NFS)
- (Ideally for internal cloudstack keep storage on same network as pod (management))
- Gateway
- 172.31.1.1
- Netmask
- 255.255.255.0 (Note: "/24" passes form validation but leads to an error during zone launch; use the dotted format)
- VLAN/VNI
- (Leave blank if storage traffic is coming untagged to the physical network (cloudbr0, cloudbr1, etc.) specified for storage traffic)
- Start IP
- 172.31.1.211
- End IP
- 172.31.1.220
- Click add and then click Next
- Click Next and enter the cluster name. Example:
- Cluster name
- Cluster1
- Under Add Resources -> IP address add at least one KVM host. Enter the details of a KVM host prepared using CentOS 8.x Cloudstack 4.15 Setup KVM host. Example values:
- Hostname
- 172.31.1.161
- Username
- root
- Password
- <secret>
- Tags
- (Leave blank)
- Click next
- Under Primary Storage enter details. Example values are:
- Name
- Primary1
- Scope
- Cluster
- Protocol
- NFS
- Server
- 172.31.1.165
- Path
- /mnt/primary
- Storage tags
- (Leave blank)
- Click "Next"
- Having an NFS primary storage for the initial wizard makes life easy. Once CloudStack is up and running, other types of primary storage (e.g. Gluster) can be added later on.
- It was observed that system VMs did not come up unless at least one primary storage of type NFS was added.
- Under Secondary Storage enter details. This assumes NFS is available via a storage server or was set up via CentOS 8.x Cloudstack 4.15 Setup NFS server. Example values:
- Provider
- NFS
- Name
- Secondary1
- Server
- 172.31.1.165
- Path
- /mnt/secondary -- This was already mounted from the storage server via NFS while importing the system VM templates
- Click "Next"
- Click "Launch Zone" to start zone deployment
- After this the launch may appear to hang. If it does, after a while open the CloudStack manager UI and add both storages directly via the UI; this time the cluster should come up with both primary and secondary storage. The hang may have been due to the use of SharedMountPoint storage as primary storage; with NFS or Gluster it should not happen.
- Click "Enable Zone" after successful "Launch zone" operation.
- If "Launch zone" failed and primary and secondary storage were added manually, then go to Zones and enable the created zone
- During "Launch zone", when the KVM host is being added, go to the KVM host and start the CloudStack agent using:
- systemctl start cloudstack-agent
- systemctl status cloudstack-agent
- Go to the /etc/cloudstack/agent folder and check whether the below values are correct:
- public.network.device=cloudbr1
- guest.network.device=cloudbr1
- Restart the CloudStack agent to use the correct device names, if they were updated:
- systemctl restart cloudstack-agent
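These settings live in /etc/cloudstack/agent/agent.properties. A small helper to print just the bridge-related keys (a sketch; the file path is from this page, while the helper name is made up for illustration):

```shell
# check_agent_bridges [FILE]: print the public/guest bridge settings from an
# agent.properties file (defaults to the standard CloudStack agent path).
check_agent_bridges() {
  grep -E '^(public|guest)\.network\.device=' "${1:-/etc/cloudstack/agent/agent.properties}"
}

# On a real KVM host:
# check_agent_bridges
```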
- After this wait for system VMs to come up
- If you have created a Gluster filesystem on all KVM hosts using CentOS 8.x Glusterfs basic setup of distributed volume, then it can be mounted using a server value of localhost and a path such as /primary1 (the path of the distributed volume). This way all KVM hosts will access Gluster via their own local IP (127.0.0.1).