Ubuntu HPC Install Openmpi from master node
We can install OpenMPI from source as follows:
- Create a temporary directory for compiling the sources, then download the OpenMPI tarball and extract it there:
- mkdir -p /export/apps/temp
- cd /export/apps/temp
- wget https://www.open-mpi.org/software/ompi/v4.1/downloads/openmpi-4.1.1.tar.gz
- tar xzf openmpi-4.1.1.tar.gz
- Create a directory for the OpenMPI installation:
- mkdir -p /export/apps/mpi/openmpi-4.1.1
- Configure, build, and install OpenMPI:
- cd /export/apps/temp/openmpi-4.1.1
- ./configure --enable-orterun-prefix-by-default --prefix=/export/apps/mpi/openmpi-4.1.1
- make -j 10
- make install
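- Optionally, verify that the installation directory was populated; the 'bin' directory should now contain mpirun, mpicc, and related tools:
- ls /export/apps/mpi/openmpi-4.1.1/bin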
- In place of 'make -j 10' we can run 'nproc' to see how many CPU cores are available and pass that (or a higher) value to '-j'. A higher value allows more compile jobs to run in parallel, provided the sources contain enough independent compilation units.
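- For example, the following passes the core count reported by 'nproc' directly to make:
- make -j "$(nproc)"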
- Disable StrictHostKeyChecking by editing the '/etc/ssh/ssh_config' file and, under 'Host *', setting:
- StrictHostKeyChecking no
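- After the edit, the relevant part of '/etc/ssh/ssh_config' should look like this (indentation is conventional, not required):
- Host *
-     StrictHostKeyChecking no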
- Set up password-less SSH authentication for HPC users (e.g. user1 in the example below):
- ssh user1@<master> #Or su - user1, if already on master node
- ssh-keygen #Leave all prompts blank, just press enter-enter-enter
- cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys; chmod 600 ~/.ssh/authorized_keys #OR ssh-copy-id user@<compute1>
- Since home folders are shared between nodes, the above should give user1 password-less SSH access to all nodes in the cluster.
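- To verify, a quick loop over the node names (using the same 'master', '<node-1>', '<node-2>' placeholders as in the mpirun example further below) should print each hostname without any password or host-key prompt:
- for h in master <node-1> <node-2>; do ssh "$h" hostname; done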
- Set the MPI path for the user1 account:
- su - user1
- export PATH=/export/apps/mpi/openmpi-4.1.1/bin:$PATH
- export LD_LIBRARY_PATH=/export/apps/mpi/openmpi-4.1.1/lib:$LD_LIBRARY_PATH
- The above 'export PATH' and 'export LD_LIBRARY_PATH' approach is used only for testing MPI. Ideally we will set up environment modules (see Ubuntu HPC Install and Configure Modules) to configure these settings for each user before they run or compile a program that depends on a particular version of MPI.
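- As a stop-gap until modules are configured (a suggestion here, not part of the modules setup), the exports can be appended to the user's '~/.bashrc' so they persist across logins; the single quotes keep $PATH and $LD_LIBRARY_PATH unexpanded in the file:
- echo 'export PATH=/export/apps/mpi/openmpi-4.1.1/bin:$PATH' >> ~/.bashrc
- echo 'export LD_LIBRARY_PATH=/export/apps/mpi/openmpi-4.1.1/lib:$LD_LIBRARY_PATH' >> ~/.bashrc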
- Check the MPI version and path as user1:
- mpirun --version
- which mpirun
- echo $PATH
- echo $LD_LIBRARY_PATH
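- Optionally, compile and run a minimal MPI test program to confirm the toolchain works end to end. This is a sketch using a hypothetical 'hello.c' (not part of the original setup); typing the lines below in sequence creates, builds, and runs it:
- cat > ~/hello.c <<'EOF'
- #include <mpi.h>
- #include <stdio.h>
- /* Prints one line per MPI rank */
- int main(int argc, char **argv) {
-     int rank, size;
-     MPI_Init(&argc, &argv);
-     MPI_Comm_rank(MPI_COMM_WORLD, &rank);
-     MPI_Comm_size(MPI_COMM_WORLD, &size);
-     printf("Hello from rank %d of %d\n", rank, size);
-     MPI_Finalize();
-     return 0;
- }
- EOF
- mpicc ~/hello.c -o ~/hello
- mpirun -n 3 ~/hello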
- Run a parallel job:
- mpirun -host master,<node-1>,<node-2> -n 3 /usr/bin/stress --cpu 4 --vm 2 --vm-bytes 300M --timeout 60s
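- Note that 'stress' is not an MPI program; mpirun here simply starts one copy on each listed host, so '/usr/bin/stress' must be present on every node. On Ubuntu it can be installed with:
- sudo apt install stress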
- While the above command is executing, open a terminal/shell on each node in parallel and run
- htop #or top
- and validate that at least 4 CPUs on each node are utilized up to 100% for 60 seconds during the above mpirun.
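- If opening a shell on every node is inconvenient, a rough alternative (same node-name placeholders as above) is to sample the CPU usage remotely while the job runs:
- for h in master <node-1> <node-2>; do ssh "$h" "top -bn1 | grep '%Cpu'"; done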