Introduction

The NWChem software contains computational chemistry tools that are scalable both in their ability to efficiently treat large scientific problems, and in their use of available computing resources from high-performance parallel supercomputers to conventional workstation clusters.

NWChem can handle:

  • Biomolecules, nanostructures, and solid-state systems
  • From quantum to classical, and all combinations
  • Ground and excited states
  • Gaussian basis functions or plane-waves
  • Scaling from one to thousands of processors
  • Properties and relativistic effects

Official website for NWChem: https://nwchemgit.github.io/

Build NWChem using Spack

Please refer to Getting Started with Spack using AMD Zen Software Studio before building NWChem.
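
If Spack is not yet set up on the system, a minimal bootstrap along the following lines is usually sufficient. This is a sketch only; the clone location is an example, and it assumes AOCC is already installed and discoverable on the host.

    # Clone Spack and activate it in the current shell (path is illustrative)
    git clone https://github.com/spack/spack.git
    . spack/share/spack/setup-env.sh

    # Register available compilers (including AOCC) with Spack
    spack compiler find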

    # Example for building NWChem with AOCC and AOCL
    $ spack install nwchem armci=mpi-pr +openmp %aocc ^amdblis threads=openmp ^amdlibflame ^amdscalapack ^openmpi fabrics=cma,ucx

Explanation of the command options:

Symbol                        Meaning
%aocc                         Build NWChem with the AOCC compiler.
armci=mpi-pr                  Build the ARMCI variant with a progress rank.
+openmp                       Build NWChem with OpenMP support.
^amdblis threads=openmp       Use AMD BLIS as the BLAS implementation, with OpenMP threading enabled.
^amdlibflame                  Use AMD libFLAME as the LAPACK implementation.
^amdscalapack                 Use AMD ScaLAPACK as the ScaLAPACK implementation.
^openmpi fabrics=cma,ucx      Use Open MPI as the MPI provider, using CMA for efficient intra-node communication and falling back to the UCX fabric if required.
Note: It is advised to explicitly set the fabric appropriate for the host system where possible. Refer to Open MPI with AMD Zen Software Studio for more guidance.
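
Before starting the build, the fully concretized spec can be previewed, and the installation can be verified afterwards. The commands below are a sketch using the same spec as the install command above.

    # Preview the concretized spec without installing (-I marks already-installed dependencies)
    spack spec -I nwchem armci=mpi-pr +openmp %aocc ^amdblis threads=openmp ^amdlibflame ^amdscalapack ^openmpi fabrics=cma,ucx

    # After the install completes, confirm the package and its variants
    spack find -lv nwchem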

Running NWChem

Obtaining Benchmarks

To obtain a suite of benchmarks performed with NWChem, please visit https://nwchemgit.github.io/Benchmarks.html.

The C240 Buckyball workload is used for the example run below.

Sample script for running NWChem with the C240 Buckyball dataset:

Run Script for AMD EPYC™ Processors

    #!/bin/bash
    # Load the NWChem build
    spack load nwchem %aocc

    # Download the input file for the C240 Buckyball workload
    wget https://nwchemgit.github.io/c240_631gs.nw

    # MPI and OpenMP options
    # MPI_RANKS = number of cores available on the system
    MPI_RANKS=$(nproc)
    export OMP_NUM_THREADS=1
    MPI_OPTS="-np $MPI_RANKS --bind-to core"

    # Run command for NWChem
    mpirun $MPI_OPTS -x OMP_STACKSIZE="32M" nwchem c240_631gs.nw
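
The script above runs one MPI rank per core with a single OpenMP thread per rank. A hybrid MPI+OpenMP configuration can also be used by trading ranks for threads; the variant below is an illustrative sketch, and the thread count and mapping should be tuned to the host topology.

    # Hybrid MPI + OpenMP sketch (values are illustrative, not tuned)
    export OMP_NUM_THREADS=4
    MPI_RANKS=$(( $(nproc) / OMP_NUM_THREADS ))
    # pe=N binds each rank to N cores, one per OpenMP thread
    MPI_OPTS="-np $MPI_RANKS --map-by slot:pe=$OMP_NUM_THREADS"

    # Forward the OpenMP settings to all ranks
    mpirun $MPI_OPTS -x OMP_NUM_THREADS -x OMP_STACKSIZE="32M" nwchem c240_631gs.nw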

Note: The above build and run steps were tested with NWChem-7.2.3, AOCC-5.0.0, AOCL-5.0.0, and OpenMPI-5.0.5 on Red Hat Enterprise Linux release 8.9 (Ootpa) using Spack v0.23.0.dev0 (commit id: 2da812cbad).

For technical support on the tools, benchmarks, and applications that AMD offers on this page, and for related inquiries, reach out to us at toolchainsupport@amd.com.