Removing disk from VIOS server
To remove a disk from a VIOS server, use the below steps:
- Login to the VIOS server (padmin)
- Find the hdisk name of the disk to be removed using:
      lspv
  Example output:
      $ lspv
      NAME          PVID                VG      STATUS
      hdisk120      00c2eae02d3989c8    None
      hdisk121      00c2eae095ba1018    None
      hdisk122      none                None
      hdisk123      none                None
- Note that if the disk has not been initialized at the VIOS level using:
      bootinfo -s hdisk124
      chdev -l hdisk124 -a algorithm=shortest_queue -a reserve_policy=no_reserve
      chdev -l hdisk124 -a pv=yes
  then the PVID column may show none instead of a hexadecimal value. We cannot assume that a particular hdisk is unused just based on the PVID column.
- Thus, we should first check the output of:
      lsmap -all | more
  to see which physical volumes (lspv hdisks) are mapped to which partition. Against each partition, for each PV there is a VTD value, which is a unique ID identifying the mapping of that PV (hdisk) to the LPAR / partition.
- If we want to remove this mapping between an hdisk and a partition, we can use the below as the root user (via oem_setup_env):
      rmdev -dev <VTD of mapping between hdisk and partition>
- Additionally, if we want to validate (map) a particular hdisk against a LUN at the storage level, we can find the volume serial (UID) number and other details for the hdisk using:
      lsmpio -ql <device-name>
  For example:
      lsmpio -ql hdisk4
  This can help in validating that we have the correct hdisk against the storage LUN IDs.
- After validating that the disk is not mapped to any partition and that we are removing the correct disk as seen at the storage end, for the actual removal use the below (also see the consolidated sketch after this list):
  - Switch to elevation (root) mode via:
        oem_setup_env
  - Remove the disk which is not mapped to any partition via:
        rmdev -Rdl <device-name>
    For example:
        rmdev -Rdl hdisk4
- Then finally remove the disk from storage via:
  - Login to the storage and go to Volumes
  - Unmap the volume:
    - Select the volume by validating its name and UID number
    - Right click on the volume and select "Unmap All Hosts"
    - Enter the mappings number in "Verify the number of mappings that this operation affects" and click on "Unmap"
  - Delete the volume:
    - Right click on the volume again and select "Delete"
    - Enter the number of volumes in "Verify the number of volumes that you are deleting", then click on "Delete"
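Putting the above checks and removal together, below is a minimal hedged sketch of the command sequence as run from the VIOS shell. The names hdisk4 and vtscsi3 are hypothetical examples, and exact output formats may vary by VIOS level:

    # padmin: list all mappings and note the VTD (if any) behind the disk
    lsmap -all | more

    # padmin: if a stale mapping must be removed first, delete its VTD
    # (vtscsi3 is a hypothetical VTD name taken from the lsmap output)
    rmdev -dev vtscsi3

    # cross-check the volume serial (UID) against the storage LUN
    lsmpio -ql hdisk4

    # switch to the root shell and remove the now-unmapped disk
    oem_setup_env
    rmdev -Rdl hdisk4
    exit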
Vhost without any mapping
If 'lsmap -all | more' shows a vhost with no target, such as:

    SVSA            Physloc                                       Client Partition ID
    --------------- --------------------------------------------- ------------------
    vhost42         U9223.22H.782EAE0-V1-C16                      0x00000006

    VTD                   NO VIRTUAL TARGET DEVICE FOUND

    SVSA            Physloc                                       Client Partition ID
    --------------- --------------------------------------------- ------------------
    vhost43         U9223.22H.782EAE0-V1-C17                      0x00000006

    VTD                   NO VIRTUAL TARGET DEVICE FOUND
and the vhost devices also appear in the output of:

    lsdev

then in one case using either of the below commands did not delete the vhost device:

    rmdev -Rdl vhost42
    rmdev -dev vhost42
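Whether the vhost device actually went away after the above attempts can be re-checked before concluding; a small hedged sketch from the padmin shell, assuming the adapter is named vhost42:

    # list only virtual devices and look for the adapter
    lsdev -virtual | grep vhost42

    # show the (now empty) mapping for that one adapter
    lsmap -vadapter vhost42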
In the other case we were able to delete the device via 'rmdev -dev <vhost-device-name>' without any problem. Having such stale devices led to the attention (alert) LED lighting up on the Power server. After deleting the vhost device, the LED was cleared using any one option from below:

Using diag menu
- diag
- Task Selection (Diagnostics, Advanced Diagnostics, Service Aids, etc.)
- Identify and Attention Indicators
- Set System Attention Indicator to NORMAL
- Enter and press F7 to commit
Using command
/usr/lpp/diagnostics/bin/usysfault -s normal
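As a hedged aside, running the same diagnostics utility with no arguments should simply report the current state of the system attention indicator, which can be used to confirm the change took effect:

    # query (rather than set) the current attention indicator state
    /usr/lpp/diagnostics/bin/usysfault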
From ASMI
- Login to ASMI
- Service indicators
- System Attention Indicator
- Turn off the system attention indicators.
From HMC
- Login to HMC
- Systems Management
- Servers
- Select the problematic server.
- In the pop-up menu, click on Operations > LED status > Deactivate attention LED.