VMWare vSAN storage policy configuration
Creating a vSAN storage policy
To create a vSAN storage policy, use the following steps:
- Go to "Policies and Profiles" from Menu and then go to "VM Storage Policy"
- Choose to clone an existing policy that closely matches the requirements, or create a new policy
- Write an appropriate name and description for the new policy
- On the Policy structure page, select Enable rules for "vSAN" storage, and click Next.
- On the vSAN page, the following options are used most often:
- Availability - Failures to tolerate
- Should be set to at least 1 for anything useful, to ensure that a single disk or host failure does not lead to data loss.
- Availability - Fault tolerance method
- In case of hybrid vSAN (flash for cache, magnetic disks for capacity), the only option is RAID-1 (Mirroring). In all-flash clusters with enough nodes there is also the option of RAID-5/6 erasure coding, which is more space efficient than RAID-1 but also more disk intensive (more I/O).
- Note that at least up to ESXi 7.x, if we go for RAID-5/RAID-6 erasure coding, putting hosts into maintenance mode with either "Full data evacuation" or "Ensure accessibility" becomes tricky: there is no RAID-1 mirroring, so there are no replica blocks to transfer to other nodes. With RAID-5 erasure coding, putting more than one host into maintenance mode at a time will create serious issues. Here CLOM_TIMEOUT also may not have much significance. Hence, if there is enough storage, use RAID-1 and avoid RAID-5/RAID-6, as we do not get the same flexibility in tolerating failures and evacuating hosts with RAID-5 / RAID-6 erasure coding. See the capacity sketch after these steps for the space trade-off.
- Storage rules - Space efficiency
- Deduplication and compression are available only on all-flash clusters
- Advanced policy rules - Number of disk stripes per object
- This can be used to split an object into stripes before storing it, so that a single object can be read and written simultaneously across multiple disk groups. For this we should have many disk groups across many hosts. For example, with 3 disk groups per host and 5 hosts, i.e. 15 disk groups in the cluster, a stripe width of 3 makes more sense than a stripe width of 1. This also depends a lot on application usage patterns; see the sketch after these steps.
- Advanced policy rules - IOPS limit
- This is a good feature for limiting the IOPS of test VMs, especially if we expect them to generate heavy I/O and want to ensure they do not affect other production VMs on the same cluster.
- Advanced policy rules - Object space reservation
- In case of vSAN there is no performance benefit to thick provisioning at all. Hence, unless we want the disk usage to show up in capacity calculations / summary pages, there is no point in going for anything other than "Thin provisioning".
- Advanced policy rules - Flash read cache reservation
- Reserves a portion of the flash read cache for the VM (applicable to hybrid clusters). This can be used to improve the performance of selected VMs / disks by applying this policy and ensuring they get a higher than usual share of the cache.
- Advanced policy rules - Force provisioning
- This can be useful to ensure that vSAN allows new objects to be created on the vSAN datastore even when the storage policy cannot be complied with.
- For example, in a three-node cluster with FTT=1 where one node has failed, partial repair ( https://core.vmware.com/resource/intelligent-rebuilds-vsan# ) may use up all the remaining space on the two surviving nodes. A new request to create an object (using additional vSAN storage) would then be accepted only if force provisioning is set to true. The sketch after these steps walks through this scenario.
- Create the policy as per requirements
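The capacity trade-offs behind the fault tolerance method, stripe width, and force provisioning options above can be made concrete with simple arithmetic. The following Python sketch is plain illustration, not VMware code; the VMDK size is made up, and the multipliers are the standard vSAN ones (RAID-1 keeps FTT+1 full copies, RAID-5 is 3 data + 1 parity, RAID-6 is 4 data + 2 parity):

 # Minimal sketch (plain arithmetic, not VMware code) of the capacity
 # trade-offs discussed above.
 
 def raid1_factor(ftt):
     # RAID-1 mirroring stores FTT + 1 complete replicas of each object
     return ftt + 1
 
 def erasure_coding_factor(ftt):
     # RAID-5 (FTT=1): 3 data + 1 parity components -> 4/3 overhead
     # RAID-6 (FTT=2): 4 data + 2 parity components -> 6/4 overhead
     if ftt == 1:
         return 4 / 3
     if ftt == 2:
         return 6 / 4
     raise ValueError("erasure coding supports only FTT=1 or FTT=2")
 
 vm_gb = 100  # hypothetical VMDK size
 for ftt in (1, 2):
     print(f"FTT={ftt}: RAID-1 needs {vm_gb * raid1_factor(ftt):.0f} GB, "
           f"erasure coding needs {vm_gb * erasure_coding_factor(ftt):.0f} GB")
 # FTT=1: RAID-1 needs 200 GB, erasure coding needs 133 GB
 # FTT=2: RAID-1 needs 300 GB, erasure coding needs 150 GB
 
 # Stripe width: 5 hosts x 3 disk groups each = 15 disk groups, so a
 # stripe width of 3 can spread each replica over 3 distinct disk groups.
 hosts, disk_groups_per_host = 5, 3
 print("disk groups available for striping:", hosts * disk_groups_per_host)
 
 # Force provisioning: in a 3-node RAID-1/FTT=1 cluster, one failed node
 # leaves 2 nodes, which is too few for 2 replicas + 1 witness on
 # distinct hosts, so new objects can only be created if force
 # provisioning lets vSAN place them non-compliantly (effectively FTT=0).

The same component-placement rule is why RAID-5 needs at least 4 hosts and RAID-6 at least 6, while RAID-1 with FTT=1 needs only 3 (two replicas plus a witness).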
Refer:
- https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.vsan.doc/GUID-9A3650CE-36AA-459F-BC9F-D6D6DAAA9EB9.html
- https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.vsan.doc/GUID-08911FD3-2462-4C1C-AE81-0D4DBC8F7990.html
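If policy creation is later scripted instead of done through the GUI, the UI options above map to capabilities in the SPBM "VSAN" namespace. The mapping below is an illustrative sketch using commonly seen capability ids; verify the exact ids and value formats against your own vCenter (for example through the pbm API) before relying on them:

 # Illustrative only: UI option -> vSAN SPBM capability id. Verify the
 # exact ids and value units on your own vCenter before scripting.
 policy_rules = {
     "VSAN.hostFailuresToTolerate": 1,  # Availability - Failures to tolerate
     "VSAN.replicaPreference": "RAID-1 (Mirroring) - Performance",  # Fault tolerance method
     "VSAN.stripeWidth": 1,             # Number of disk stripes per object
     "VSAN.iopsLimit": 0,               # IOPS limit (0 = unlimited)
     "VSAN.proportionalCapacity": 0,    # Object space reservation in percent (0 = thin)
     "VSAN.cacheReservation": 0,        # Flash read cache reservation (check value units)
     "VSAN.forceProvisioning": False,   # Force provisioning
 }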
Assigning vSAN storage policy
To assign a vSAN policy to a VM or to individual disks of a VM, use the following steps:
Avoid changing a VM's vSAN disk policy from the "Edit Settings" page; it has been seen to not work properly in a few cases.
- Click on the VM
- Go to "Configure" -> "Policies"
- Choose the "Edit VM storage policy" option
- Select the desired policy. Quite often it is assigned to all disks and the VM home folder; in rare cases a particular disk can be configured with a different policy. For this you may have to enable the per-disk policy option from the top right corner.
- Note that if you select two different types of policies, the VM summary page may randomly show the VM as compliant or non-compliant, even if all its disks are compliant with their assigned policies.
- After the policy is applied we can see the compliance status against "VM home folder" and the other VM disks on the same Configure -> Policies page. A scripted alternative is sketched below.
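The same assignment can be scripted. Below is a minimal pyVmomi sketch, assuming an already connected session, a resolved vm managed object, and the target policy's profile ID in policy_id (looked up separately through the SPBM/pbm API); treat it as an outline rather than a finished tool:

 from pyVmomi import vim
 
 def assign_vsan_policy(vm, policy_id):
     """Apply one storage policy to the VM home folder and all its disks."""
     profile = [vim.vm.DefinedProfileSpec(profileId=policy_id)]
     spec = vim.vm.ConfigSpec()
     spec.vmProfile = profile  # covers the "VM home folder" object
     spec.deviceChange = [
         vim.vm.device.VirtualDeviceSpec(
             operation=vim.vm.device.VirtualDeviceSpec.Operation.edit,
             device=device,
             profile=profile,
         )
         for device in vm.config.hardware.device
         if isinstance(device, vim.vm.device.VirtualDisk)
     ]
     # ReconfigVM_Task returns a vim.Task; wait for it to finish, then
     # check compliance on the Configure -> Policies page as above.
     return vm.ReconfigVM_Task(spec)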