Introduction

NAMD is a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems. Based on Charm++ parallel objects, NAMD scales to hundreds of cores for typical simulations and beyond 500,000 cores for the largest simulations. In 2006, NAMD was the first application to perform a full all-atom simulation of a virus, and in 2012 it was used for a molecular dynamics flexible fitting (MDFF) simulation of the HIV virus capsid in its tubular form.

Official website for NAMD: http://www.ks.uiuc.edu/Research/namd/

NAMD 2.15a2 with AVXTiles Support

  • NAMD 2.15 alpha 2 has AVX-512 kernel support for AMD Zen 4 CPUs. Visit the download page and follow the link to the AVX-512 version for more information.

Getting NAMD Source Files

Spack does not currently support automatically downloading the NAMD source tar files. Please refer to the NAMD download page to download the source tar files manually. After downloading, store the files in the Spack parent directory.

For NAMD version 2.15alpha*, rename the downloaded source tar file to match the naming convention used in the Spack recipe.

  • Spack expects a name like "NAMD_2.15a2_Source.tar.gz" instead of the default downloaded file name "NAMD_2.15alpha2_Source-AVX512.tar.gz"; see the example below.
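
The rename is a simple mv, using the file names given above:

# Rename the downloaded tar file to the name the Spack recipe expects
$ mv NAMD_2.15alpha2_Source-AVX512.tar.gz NAMD_2.15a2_Source.tar.gz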

Build NAMD using Spack

Please refer to this link for getting started with Spack using AMD Zen Software Studio.

# Example for building NAMD 2.15alpha2 with AVXTiles support using AOCC and AOCL
$ spack install namd@2.15a2 %aocc +avxtiles fftw=amdfftw interface=tcl ^amdfftw ^charmpp backend=mpi build-target=charm++ ^openmpi fabrics=auto

# Example for building NAMD 2.14 with AOCC and AOCL
$ spack install namd@2.14 %aocc fftw=amdfftw interface=tcl ^amdfftw ^charmpp backend=mpi build-target=charm++ ^openmpi fabrics=auto
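
The fully resolved spec can be previewed before installing with spack spec, which shows the compilers and dependencies Spack will actually use (shown here for the 2.14 variant):

# Preview the concretized spec before installing
$ spack spec namd@2.14 %aocc fftw=amdfftw interface=tcl ^amdfftw ^charmpp backend=mpi build-target=charm++ ^openmpi fabrics=auto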

Explanation of the command options:

%aocc: Build NAMD with the AOCC compiler.
+avxtiles: Enable the AVXTiles algorithm; valid for NAMD v2.15a1 and v2.15a2 on systems supporting the AVX-512 instruction set.
fftw=amdfftw: Use amdfftw as the FFTW implementation.
interface=tcl: Use Tcl as the interface.
^charmpp backend=mpi build-target=charm++: Build NAMD with Charm++, where Charm++ uses the MPI backend and the charm++ build target.
^openmpi fabrics=auto: Use Open MPI as the MPI provider and autodetect the network fabric.

Note: It is advised to explicitly set the appropriate fabric for the host system if possible.
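
For example, on a cluster whose high-speed interconnect is driven by UCX, the fabric could be pinned explicitly rather than autodetected (illustrative; substitute the fabric your system actually provides):

# Pin the Open MPI fabric explicitly instead of relying on autodetection
$ spack install namd@2.14 %aocc fftw=amdfftw interface=tcl ^amdfftw ^charmpp backend=mpi build-target=charm++ ^openmpi fabrics=ucx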

Running NAMD

The STMV benchmark used in this example can be found at: https://www.ks.uiuc.edu/Research/namd/utilities/

Obtaining Benchmarks

# Download the STMV dataset
$ wget https://www.ks.uiuc.edu/Research/namd/utilities/stmv/par_all27_prot_na.inp
$ wget https://www.ks.uiuc.edu/Research/namd/utilities/stmv/stmv.namd
$ wget https://www.ks.uiuc.edu/Research/namd/utilities/stmv/stmv.pdb.gz
$ wget https://www.ks.uiuc.edu/Research/namd/utilities/stmv/stmv.psf.gz

# Uncompress the structure files, then edit stmv.namd so that temporary
# files are written to the current directory instead of under /usr
$ gunzip stmv.psf.gz
$ gunzip stmv.pdb.gz
$ sed -i 's/\/usr/./g' stmv.namd
$ mkdir -p ./tmp
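
The stock stmv.namd points its output at paths under /usr/tmp; the sed command above rewrites those to ./tmp. A quick grep confirms the edit took effect (assuming the unmodified benchmark input):

# Verify that the configured paths now point at the local ./tmp directory
$ grep "tmp" stmv.namd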

Process layout for the NAMD run:

NAMD needs one communication thread per set of worker threads. For example, when running on a dual-socket system with AMD 4th Gen EPYC™ processors and 192 (2x96) cores:

  • Run with one communication thread and 191 worker threads (+ppn specifies the number of worker threads per rank).
  • Place the communication thread on core 0 (+commap specifies the mapping of communication threads).
  • Pin the worker threads to cores 1-191 (+pemap specifies the mapping of worker threads).
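
The +commap/+pemap arguments follow Charm++'s lower-upper:stride.run pattern; under that reading (worth verifying against the Charm++ manual), the maps used in the run script below resolve as:

# +commap 0-192:192    -> core 0 for the single communication thread
# +pemap 1-191:192.191 -> cores 1-191 for the 191 worker threads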

Run Script for AMD EPYC™ Processors

# Load the NAMD 2.15a2 build with AOCC
$ spack load namd@2.15a2

# Run command for the STMV dataset with the AVXTiles-enabled NAMD 2.15alpha2 build (AOCC)
# Uses 1 MPI rank and 192 processors (191 worker threads and 1 communication thread)
$ mpirun -np 1 --bind-to core namd2 +ppn 191 +commap 0-192:192 +pemap 1-191:192.191 stmv.namd
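
To capture the performance figure, the run output can be saved and searched for NAMD's benchmark summary lines (stmv.log is an arbitrary file name):

# Capture the run output and extract the reported performance
$ mpirun -np 1 --bind-to core namd2 +ppn 191 +commap 0-192:192 +pemap 1-191:192.191 stmv.namd | tee stmv.log
$ grep "Benchmark time" stmv.log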

Note: The above build and run steps were tested with NAMD-2.14/NAMD-2.15alpha2, AOCC-4.2.0, AOCL-4.2.0, and OpenMPI-5.0.2 on Red Hat Enterprise Linux release 8.6 (Ootpa) using Spack v0.22.0.dev0 (commit id: a9d294c).