=Capabilities=

The HPC, fully operational since September 5, 2017, includes the following capabilities:

==Hardware==

* 1896 compute cores
* 7168 CUDA cores
* 4 high-memory nodes (512 GB RAM per node)
* 20 cores and 6 GB RAM per core on standard compute nodes
* Fast temporary filesystem (Lustre) for application execution
* Network-attached storage for data archiving
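In concrete terms, a standard compute node therefore provides roughly 20 × 6 GB = 120 GB of RAM in total, compared with 512 GB on each high-memory node.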

==Software==

(See the Software page for detailed information.)
* Intel and GNU Fortran, C, and C++ compilers (see the build example below)
* Other software can be installed upon request
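As a quick check that the toolchains work, a minimal C program such as the one below can be compiled with either compiler suite. The exact compiler commands, versions, and any required <code>module load</code> steps on USM HPC are assumptions here; consult the Software page for the specifics.

<pre>
/* hello.c - minimal program for verifying the cluster's C compilers.
 *
 * GNU toolchain:    gcc -O2 -o hello hello.c
 * Intel toolchain:  icc -O2 -o hello hello.c
 *
 * (Compiler names and versions may differ on USM HPC, and an
 *  environment-module step may be needed before the compilers
 *  appear on your PATH.)
 */
#include <stdio.h>

int main(void)
{
    printf("Hello from USM HPC\n");
    return 0;
}
</pre>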

==DataSpace digital archive==

(See DataSpace for more information.)
* Archive for raw data, documents, theses, and dissertations
* Front end for searching and processing data

==Funding==

Support for this project was provided by the National Science Foundation under the Major Research Instrumentation (MRI) program via Grant # ACI 1626217.

=How to Cite the HPC Infrastructure=

Please reference USM HPC in any research report, journal article, or other publication. Acknowledging the resources used to perform your research is important for acquiring funding for next-generation hardware, support services, and our research and development activities in HPC, visualization, data storage, and grid infrastructure.

The suggested content of a citation is:

:''The authors acknowledge HPC at The University of Southern Mississippi''
:''supported by the National Science Foundation under the Major Research''
:''Instrumentation (MRI) program via Grant # ACI 1626217.''