Replicating volume using global mirror
Detailed steps for replicating a volume using Global Mirror over iSCSI, when a partnership between the two storage systems is already present, are given below. In these steps one of the two storage systems is called sandr (the DR site) and the other sanpr (the primary site).
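Since the steps assume the Global Mirror partnership already exists, it is worth verifying its state from the CLI first. A minimal sketch, assuming SSH CLI access as superuser (lspartnership is the standard Storwize CLI listing command):
    # Check from either system that the partnership is in place;
    # the remote system should be listed with state fully_configured
    ssh superuser@sanpr 'lspartnership'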
Create volume on sandr
- Open sandr web management console and login using superuser.
- Go to Volumes -> Volumes
- Choose "Create Volume" option from top and click "Custom" under Advanced
- Enter exactly the same capacity as given for the volume on sanpr
- Enter the desired volume name. It is recommended to use names that indicate the relationship between the PR and DR volumes.
- Click on "Volume Location"
- Select appropriate Pool where volume should be created
- Go to Summary and verify all the details before creating the volume. (A CLI alternative to these steps is sketched below.)
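The volume can also be created from the sandr CLI over SSH. A minimal sketch, where the pool name Pool0, the volume name voldr and the 100GB size are hypothetical placeholders; the size must exactly match the sanpr volume:
    # Create the DR-side volume with exactly the same capacity as on sanpr
    ssh superuser@sandr 'mkvdisk -mdiskgrp Pool0 -size 100 -unit gb -name voldr'
    # Verify the name and exact capacity of the new volume
    ssh superuser@sandr 'lsvdisk voldr'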
Create remote copy relationship between two volumes on primary
- Open sanpr web management console and login using superuser.
- Go to "Copy services" -> "Remote Copy"
- Click on "Create consistency Group"
- Enter desired consistency group name
- For "Where are the auxillary volumes located" choose "On another system" and click next. This is for one way copy from sanpr to sandr.
- Select "Yes, add relationships to this group" and click next
- Select "Global Mirror" and click next
- On "Create Consistency Group" click next and do not worry about "No items found." message shown.
- On "Master" dropdown select appropriate volume created earlier.
- On "Auxillary" dropdown select appropriate volume created earlier. It will only show volumes of same size on Auxillary. Non qualified (Different sizes) volumes will be hidden automatically.
- Click "Add"
- Click next
- Click "Yes, the volumes are already synchronized". ***DO THIS ONLY FOR NEW VOLUMES WHICH ARE FORMATTED JUST NOW AND HAVE NEVER BEEN USED. DO NOT DO THIS WHEN SELECTED VOLUMES ON PRIMARY AND DR SIDE DIFFER"***.
- Click "Yes, start copying now" and click next.
- In case you chose "No, the volumes are not synchronized" in the step above, wait for the initial sync to finish before proceeding further. (A CLI sketch of this whole list is given below.)
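The consistency group and relationship can also be created from the sanpr CLI. A minimal sketch, where cg_pr_dr, rel_pr_dr, volpr and voldr are hypothetical names, -global selects Global Mirror, and -sync should be added only in the brand-new-volumes case described above:
    # Consistency group whose auxiliary volumes live on sandr
    ssh superuser@sanpr 'mkrcconsistgrp -name cg_pr_dr -cluster sandr'
    # Global Mirror relationship from volpr (master) to voldr (auxiliary)
    ssh superuser@sanpr 'mkrcrelationship -master volpr -aux voldr -cluster sandr -global -consistgrp cg_pr_dr -name rel_pr_dr'
    # Start copying and check the relationship state and progress
    ssh superuser@sanpr 'startrcconsistgrp cg_pr_dr'
    ssh superuser@sanpr 'lsrcrelationship rel_pr_dr'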
Mounting primary (sanpr) volume on primary side host with multipath over iSCSI
- "cat /etc/iscsi/initiatorname.iscsi" to get primary side host iqn number
- Open sanpr web management console and login using superuser.
- Go to "Hosts" -> "Hosts"
- Select "Add Host"
- Enter desired name
- Select "iSCSI" as host connection
- For "iSCSI port" copy the iqn number seen in step 1
- Leave the other settings as they are and click "Add". An example IQN is iqn.1994-05.com.redhat:63f1504c1272
- Right click on the created host and choose the "Modify Volume Mappings" option
- Click on "Add Volume Mappings"
- Click on appropriate volume created in previous steps on sanpr.
- Leave "System Assign" selected and click next
- On next screen click "Map Volumes"
- Click "Close" on "Modify Mappings" popup window
- On the host on the primary side, discover and log in to any one of the iSCSI IPs of the first controller (node1 on primary) using:
    iscsiadm -m discovery -t st -p <iscsi-ip> -l
- Do the same for the second controller (node2 on primary)
- Check whether the multipath device is discovered and set up automatically using "multipath -ll"
- If "multipath -ll" shows nothing, run "fdisk -l" to see whether the individual path disks (6 in this setup, one per path) are visible. If the disks are not visible, check the IQN and the volume mapping at the storage end (the "Add Host" and "Modify Volume Mappings" steps above). After correcting the IQN or mapping, rescan using:
    echo "- - -" > /sys/class/scsi_host/host0/scan
- Alternatively, log out of all sessions using "iscsiadm -m session -r <session-no> -u", then do "service iscsid stop" followed by "service iscsid start" and log in again
- Now "multipath -ll" must show all the disks and paths.
- Do "chkconfig multipathd on" and "chkconfig iscsid on" if not done already. For CentOS 7.x use "systemctl enable multipathd" and "systemctl start multipathd"
- Install parted if required: "yum -y install parted"
- Partition the device: assuming it is smaller than 2TB use fdisk with an msdos partition table; if it is larger than 2TB use parted with a gpt partition table. After creating the partition, check for partition device names under /dev/mapper, such as /dev/mapper/mpathap1. If the partition device names are not visible, run "partprobe /dev/mapper/mpatha"
- Create filesystem on the discovered/created partitions
- Mount the partition via /etc/fstab as required, so that it is remounted automatically on reboot. Use the blkid-based UUID and not device names, as device names might change across reboots.
- Use the filesystem as required. (A consolidated sketch of the host-side commands in this list is given below.)
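The host-side part of the list above can be condensed into the following sketch, assuming CentOS 7.x, hypothetical controller IPs 10.0.0.1 and 10.0.0.2, a volume smaller than 2TB that shows up as /dev/mapper/mpatha, and /data as the mount point:
    # Discover and log in to one iSCSI IP on each controller
    iscsiadm -m discovery -t st -p 10.0.0.1 -l
    iscsiadm -m discovery -t st -p 10.0.0.2 -l
    # Make multipath and iSCSI persistent across reboots and verify the paths
    systemctl enable multipathd iscsid
    systemctl start multipathd iscsid
    multipath -ll
    # msdos partition table with one full-size partition (use gpt for >2TB)
    parted -s /dev/mapper/mpatha mklabel msdos mkpart primary ext4 1MiB 100%
    partprobe /dev/mapper/mpatha
    # The partition appears as /dev/mapper/mpatha1 (mpathap1 on older releases)
    mkfs.ext4 /dev/mapper/mpatha1
    # Mount by UUID via /etc/fstab; _netdev waits for the network before mounting
    echo "UUID=$(blkid -s UUID -o value /dev/mapper/mpatha1) /data ext4 _netdev 0 0" >> /etc/fstab
    mkdir -p /data
    mount /data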
In case of emergency, or for general verification, steps for using the remote copy on sandr from a DR machine
- Open sandr web management console and login using superuser.
- Go to "Copy service" -> "Flash copy"
- Choose the target volume to which the sanpr volume is being copied by remote copy. Right click and choose "Advanced FlashCopy". Based on the requirement choose between "Create new target volume" and "Use existing target volume"
- In the create flash-copy mapping step choose "Clone". Click on "Advanced Settings" and select the maximum rate for both "Background Copy" (2Gbps) and "Cleaning Rate" (2Gbps). Click next.
- In "Create FlashCopy Mapping" select "No, do not add the mappings to a consistency group" option and click next
- In the next screen choose the "Create a generic volume" option
- Choose an appropriate pool, based on requirement, wherever the flash copy needs to be stored. Click finish.
- Use the plus (+) sign in front of the original volume to get the list of flash-copy mappings. Right click the mapping and choose "Start" to start copying. Note that the mapping will be deleted automatically after the copy is finished.
- Use the plus (+) sign in front of the original volume to check the status of the flash copy (progress bar). Once the flash copy is finished, the steps above (Mounting primary (sanpr) volume on primary side host with multipath over iSCSI) can be used to mount the volume created by the flash copy on a desired host at the DR site. (The same clone can also be created from the CLI, as sketched below.)
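A minimal CLI sketch of the same clone operation, where voldr is the remote-copy target volume, voldr_clone is a hypothetical pre-created target volume of identical size (the "Use existing target volume" path), fcmap_clone is a hypothetical mapping name, and the rate values are assumptions to be matched to the GUI maximum supported by the firmware:
    # Clone-style FlashCopy mapping with high background copy and cleaning rates
    ssh superuser@sandr 'mkfcmap -source voldr -target voldr_clone -copyrate 100 -cleanrate 100 -autodelete -name fcmap_clone'
    # Prepare and start the copy, then watch its progress
    ssh superuser@sandr 'startfcmap -prep fcmap_clone'
    ssh superuser@sandr 'lsfcmapprogress fcmap_clone'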