Initial Setup
In high-performance computing (HPC) there are several compilers and message passing interface (MPI) systems for users to choose from. Selecting a combination of compiler and MPI system for a particular computational job is an involved process that depends upon the intricacies of the calculation and the specifications of the HPC system. This initial setup tutorial guides the user in setting up a new account to use the most common combination: the GNU Compiler Collection (GCC) and Open MPI.
Startup Scripts
When logging into the cluster, the bash shell starts, and a series of dot files is executed. To set up aliases, functions, or specific commands that run every time a new shell starts, use the .bashrc dot file, located in the home directory (${HOME}/.bashrc or, equivalently, ~/.bashrc).
The .bashrc File
This file will be used in subsequent sections of this article to set up an environment suitable for computational calculations on the cluster. To see the contents of your .bashrc file, use the following command:
user $
cat ${HOME}/.bashrc
Your file should look similar to this:
# .bashrc

# Test for an interactive shell.  There is no need to set anything
# past this point for scp and rcp, and it's important to refrain from
# outputting anything in those cases.
if [[ $- != *i* ]] ; then
	# Shell is non-interactive.  Be done now!
	return
fi

# Source global definitions
if [ -f /etc/bashrc ]; then
	. /etc/bashrc
fi

# User specific aliases and functions
If you'd like to make changes to your .bashrc file, use the nano editor:
user $
nano -w ${HOME}/.bashrc
To save any change you make, use Ctrl+O, then hit Enter to accept the file name. Use Ctrl+X to exit nano.
To avoid problems copying files to and from the cluster, make sure that the test for an interactive shell is included at the top of the ${HOME}/.bashrc file.
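As an illustration, aliases and functions go below the "User specific aliases and functions" comment near the bottom of the file; the names and paths below are only hypothetical examples, not settings required by the cluster:

# User specific aliases and functions
alias ll='ls -lh'                           # hypothetical alias: long, human-readable listings
projects() { cd "${HOME}/projects/$1" ; }   # hypothetical function: jump to a project sub-directory

Both take effect the next time a new interactive shell starts.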
Installed Programs and Software
Most scripts and programs on the cluster are activated with the module command. As there are many modules installed on Magnolia, it is recommended to use the keyword option of module to search for a specific module. For example, to show a list of all programming languages installed as modules, use
user $
module keyword programming
perl/5.30.0          : programming language perl
cuda-toolkit/10.1.243: cuda toolkit programming cuda-toolkit
cuda-toolkit/8.0.61  : cuda toolkit programming cuda-toolkit
gcc/6.4.0            : programming language c++ cpp fortran gcc
gcc/7.3.0            : programming language c++ cpp fortran gcc
gcc/8.3.0            : programming language c++ cpp fortran gcc
gcc/9.2.0            : programming language c++ cpp fortran gcc
go/1.11.1            : programming language go
go/1.13.4            : programming language go
go/1.9.3             : programming language go
intel/2017.4.196     : programming language c++ fortran intel
nasm/2.13.02         : programming language assembler assembly nasm
python/2.7.14        : programming language python
python/3.5.4         : programming language python
python/3.6.2         : programming language python
python/3.8.6         : programming language python
and to show a list of all MPI modules, use
user $
module keyword mpi
impi             : intel MPI impi
mpich/gcc        : MPI mpich
mpich/intel      : MPI mpich
openmpi-1.6/gcc  : MPI openmpi
openmpi-1.6/intel: MPI openmpi
openmpi-1.8/gcc  : MPI openmpi
openmpi-1.8/intel: MPI openmpi
openmpi-2.0/gcc  : MPI openmpi
openmpi-2.0/intel: MPI openmpi
The first column of output gives the names of the individual modules. Therefore, to load the newest version of Open MPI built with the GNU Compiler Collection (GCC), use
user $
module load openmpi-2.0/gcc
and to show all loaded modules:
user $
module list
Currently Loaded Modulefiles:
  1) openmpi-2.0/gcc
This module loads Open MPI 2.0 and the GNU Compiler Collection environment; to check this, use the --version option with gcc and mpirun:
user $
gcc --version
gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-11)
Copyright (C) 2015 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
user $
mpirun --version
mpirun (Open MPI) 2.0.1

Report bugs to http://www.open-mpi.org/community/help/
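With the module loaded, a short test program can confirm that the compiler and the MPI stack work together. The sketch below uses mpicc, the C compiler wrapper shipped with Open MPI; the file name hello_mpi.c is arbitrary, and the program is only a minimal example, not something provided by the cluster:

user $
cat > hello_mpi.c <<'EOF'
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);               /* start the MPI runtime       */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* rank of this process        */
    MPI_Comm_size(MPI_COMM_WORLD, &size); /* total number of processes   */
    printf("Hello from rank %d of %d\n", rank, size);
    MPI_Finalize();                       /* shut the MPI runtime down   */
    return 0;
}
EOF

user $
mpicc -o hello_mpi hello_mpi.c

user $
mpirun -np 4 ./hello_mpi

Each of the four processes should print one line. Small tests like this are fine on the command line, but production runs should be submitted through the cluster's job scheduler (see the examples page linked at the end of this tutorial).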
The settings so far affect only the currently running shell and will be lost the next time you log in. To make changes persistent across logins, see the next section.
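Before moving on, note that the loaded environment can also be undone within the current session: module unload removes a single module and module purge removes all loaded modules. Both are standard subcommands of the module system:

user $
module unload openmpi-2.0/gcc

user $
module purge

After either command, module list should show an empty (or reduced) set of modules.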
Make Changes Persistent and Global
Before a job can be successfully submitted to the cluster, the same settings must be available on all nodes of the cluster. Two common methods to accomplish this are:
- Persistent settings across logins
- Custom job submission scripts
Custom job submission scripts are useful if you plan to use a variety of programs and/or software packages, while persistent settings across logins are better suited when only one program or software package will be used.
Persistent settings across logins are explained below, while custom job submission scripts can be found on the update examples wiki page.
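For orientation only, a custom job submission script typically combines the module load line with directives for the cluster's scheduler. The sketch below assumes a Slurm-style scheduler and uses placeholder names (my_job, hello_mpi); the scheduler actually used on Magnolia and tested script templates are documented on the update examples page:

#!/bin/bash
#SBATCH --job-name=my_job     # placeholder job name
#SBATCH --nodes=1             # number of nodes requested
#SBATCH --ntasks=4            # total number of MPI processes
#SBATCH --time=00:10:00       # wall-clock limit (hh:mm:ss)

# Load the same environment the program was built with
module load openmpi-2.0/gcc

# Launch the MPI program inside the allocation
mpirun ./hello_mpi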
To make the compiler and message passing system settings persistent across logins and all nodes, we need to edit the .bashrc file in the home directory. The directions below use the nano editor, but you are free to use whichever text editor you are most comfortable with.
Use the following to start editing the file:
user $
nano -w ${HOME}/.bashrc
If this is your first time editing the .bashrc file, it should look just like the one shown in the .bashrc section.
We need to add the module load command we used earlier to the bottom of this file. After the edit, it should look like this:
# .bashrc

# Test for an interactive shell.  There is no need to set anything
# past this point for scp and rcp, and it's important to refrain from
# outputting anything in those cases.
if [[ $- != *i* ]] ; then
	# Shell is non-interactive.  Be done now!
	return
fi

# Source global definitions
if [ -f /etc/bashrc ]; then
	. /etc/bashrc
fi

# User specific aliases and functions
module load openmpi-2.0/gcc
To save the changes, use Ctrl+O, then hit Enter to accept the file name. Use Ctrl+X to exit nano.
Now every time you log in, the GNU compiler and Open MPI compiled with the GNU compiler will be the default.
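To check the change without logging out, re-read the edited file in the current shell and list the loaded modules again (if the module was already loaded earlier in the session, the module system simply keeps it):

user $
source ${HOME}/.bashrc

user $
module list

After your next login, module list should show openmpi-2.0/gcc without any manual loading.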
Next Step / Examples
To get a set of examples showing how to run jobs on the cluster, go to the update examples wiki page.