Initial Setup

From HPC Wiki

Revision as of 16:49, 8 February 2018

In High-Performance Computing (HPC) there are several compilers and message passing interface (MPI) implementations to choose from. Selecting a compiler/MPI combination for a particular computational job is an involved process that depends on the intricacies of the calculation and the specifications of the HPC system. This initial setup tutorial guides the user through configuring a new account to use the most common combination: the GNU Compiler Collection (GCC) and Open MPI.

Startup Scripts

When logging into the cluster, the bash shell starts and executes a series of dot files. To set up aliases and functions, or to run specific commands every time a new shell starts, use the .bashrc dot file located in the home directory ( ${HOME}/.bashrc, or, equivalently, ~/.bashrc ).
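For instance, a couple of lines one might add to .bashrc look like this (the alias and function names below are only illustrative examples, not cluster defaults):

```shell
# Illustrative ~/.bashrc additions (names are examples only):
alias ll='ls -l'                   # shorthand for a long directory listing
greet() { echo "Hello, $USER"; }   # a simple function available in every new shell
```

Aliases and functions defined here are re-created each time a new shell starts, which is exactly why .bashrc is the right place for them.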

The .bashrc File

This file will be used in subsequent sections of this article to set up an environment suitable for computational calculations on the cluster. To see the contents of your .bashrc file, use the following command:

user $cat ${HOME}/.bashrc

Your file should look similar to this:

FILE ${HOME}/.bashrc
# .bashrc

# Test for an interactive shell.  There is no need to set anything              
# past this point for scp and rcp, and it's important to refrain from           
# outputting anything in those cases.                                           
if [[ $- != *i* ]] ; then                                                       
        # Shell is non-interactive.  Be done now!                               
        return                                                                  
fi                                                                              

# Source global definitions
if [ -f /etc/bashrc ]; then
	. /etc/bashrc
fi

# User specific aliases and functions
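The interactive-shell test at the top works because the special parameter $- holds the shell's current option flags, and those flags include the letter i only in an interactive shell. A minimal sketch of the same check:

```shell
# $- lists the shell's option flags; interactive shells include the letter "i".
if [[ $- == *i* ]]; then
    mode="interactive"
else
    mode="non-interactive"      # e.g. when scp/rcp runs a shell to copy files
fi
echo "$mode"
```

When run as a script (as scp effectively does), this prints "non-interactive", which is why the real .bashrc returns early in that case.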

If you'd like to make changes to your .bashrc file, use the nano editor:

user $nano -w ${HOME}/.bashrc

To save any change you make, use Ctrl+O, then hit Enter to accept the file name. Use Ctrl+X to exit nano.

Important
To avoid problems copying files to and from the cluster, make sure that the test for an interactive shell is included at the top of the ${HOME}/.bashrc file.


Set Defaults

Most of the scripts and programs on the cluster are activated with the module command. Because many modules are installed on Magnolia, it is recommended to use the keyword option of module to search for a specific module. For example, to show a list of all MPI modules, use

user $module keyword mpi
impi                 : intel MPI impi
mpich/gcc            : MPI mpich
mpich/intel          : MPI mpich
openmpi-1.6/gcc      : MPI openmpi
openmpi-1.6/intel    : MPI openmpi
openmpi-1.8/gcc      : MPI openmpi
openmpi-1.8/intel    : MPI openmpi
openmpi-2.0/gcc      : MPI openmpi
openmpi-2.0/intel    : MPI openmpi

The first column of output is the name of the individual modules. Therefore, to load the newest version of Open MPI with the GNU Compiler Collection (GCC) use

user $module load openmpi-2.0/gcc

and to show all loaded modules:

user $module list
Currently Loaded Modulefiles:
  1) openmpi-2.0/gcc

This module loads Open MPI 2.0 and the GNU Compiler Collection environment; to verify this, use the --version option with gcc and mpirun:

user $gcc --version
gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-11)
Copyright (C) 2015 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
user $mpirun --version
mpirun (Open MPI) 2.0.1

Report bugs to http://www.open-mpi.org/community/help/
Note
The settings so far affect only the currently running shell and will be lost when logging back in at a later date. To make changes persistent across logins, see the next section.

Make Changes Persistent and Global

Before a job can be successfully submitted to the cluster, two things are required:

  • Settings that persist across logins
  • The same settings on all nodes of the cluster

To make the compiler and message passing system settings persistent across logins and on all nodes, we need to edit the .bashrc file in the home directory. The directions below use the nano editor, but you are free to use whichever text editor you are most comfortable with.

Use the following to start editing the file:

user $nano -w ${HOME}/.bashrc

If this is your first time editing the .bashrc file, it should look just like the one shown in the .bashrc section.

We need to add the module command we used earlier to the bottom of this file. It should look like this:

FILE ${HOME}/.bashrc(after editing)
# .bashrc

# Test for an interactive shell.  There is no need to set anything              
# past this point for scp and rcp, and it's important to refrain from           
# outputting anything in those cases.                                           
if [[ $- != *i* ]] ; then                                                       
        # Shell is non-interactive.  Be done now!                               
        return                                                                  
fi                                                                              

# Source global definitions
if [ -f /etc/bashrc ]; then
	. /etc/bashrc
fi

# User specific aliases and functions
module load openmpi-2.0/gcc

To save the changes, use Ctrl+O, then hit Enter to accept the file name. Use Ctrl+X to exit nano.
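Alternatively, the same line can be appended non-interactively from the command line. This sketch guards against adding a duplicate if it is run more than once:

```shell
# Append the module load line to ~/.bashrc only if it is not already there.
LINE='module load openmpi-2.0/gcc'
grep -qxF "$LINE" "${HOME}/.bashrc" 2>/dev/null || echo "$LINE" >> "${HOME}/.bashrc"
```

The -x flag matches whole lines only and -F treats the pattern as a fixed string, so the check cannot be fooled by similar module names.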

Now every time you login, the GNU compiler and Open MPI compiled with the GNU compiler will be the default.

Next Step / Examples

To get a set of examples showing how to run jobs on the cluster, go to the update examples wiki page.