Introduction

The Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS) is a classical molecular dynamics code. LAMMPS can be used to simulate solid-state materials (metals, semiconductors), soft matter (biomolecules, polymers), and coarse-grained or mesoscopic systems. LAMMPS runs on single processors or in parallel using message-passing techniques with a spatial decomposition of the simulation domain. The code is designed to be easy to modify or extend with new functionality.

Official website for LAMMPS: https://www.lammps.org

Build LAMMPS using Spack

Please refer to the Getting Started guide for Spack with AMD Zen Software Studio before building.
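If Spack is not yet available on the system, a minimal bootstrap looks like the following. This is a sketch with an illustrative clone location; the AMD-specific configuration is covered in the guide above.

    # Clone Spack and activate it in the current shell (illustrative location)
    $ git clone https://github.com/spack/spack.git
    $ . spack/share/spack/setup-env.sh
    # Register compilers already installed on the host (e.g., AOCC) with Spack
    $ spack compiler find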

    # Example for building LAMMPS with AOCC and AOCL.
    $ spack install lammps +intel +asphere +class2 +extra-dump +opt +replica +granular +openmp-package %aocc ^amdfftw ^openmpi fabrics=cma,ucx
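Before installing, it can be useful to preview how Spack will concretize the spec. The following optional check is a sketch; output varies by system.

    # Preview the resolved dependency tree; -I marks specs already installed
    $ spack spec -I lammps +intel +asphere +class2 +extra-dump +opt +replica +granular +openmp-package %aocc ^amdfftw ^openmpi fabrics=cma,ucx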

Explanation of the command options:

%aocc
    Build LAMMPS with the AOCC compiler.

^amdfftw
    Use amdfftw (AMD-optimized FFTW) as the FFTW implementation.

+asphere, +class2, +extra-dump, +opt, +replica, +granular
    Optional LAMMPS feature packages; add or remove these as per user requirements.

+intel
    Build LAMMPS with the INTEL package, which enables performance improvements through vectorization support for single, mixed, and double precision on CPUs and accelerators. Details of the INTEL package: https://docs.lammps.org/Speed_intel.html#intel-package. This option is compatible with AOCC 4.0+ and LAMMPS 20220324+.

+openmp-package
    Build LAMMPS with the OPENMP package, which provides optimized, multi-threaded versions of many pair styles, nearly all bonded styles (bond, angle, dihedral, improper), several Kspace styles, and a few fix styles. Details of the OPENMP package: https://docs.lammps.org/Speed_omp.html#openmp-package.

^openmpi fabrics=cma,ucx
    Use Open MPI as the MPI provider, with CMA (Cross Memory Attach) for efficient intra-node communication and the UCX fabric as a fallback when required.

Note: It is advised to explicitly set the appropriate fabric for the host system where possible; the optional check after this list shows one way to inspect the available UCX transports. Refer to Open MPI with AMD Zen Software Studio for more guidance.
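To see which transports are actually available on a host before pinning the fabric list, UCX ships a diagnostic tool. This optional check assumes UCX is installed on the system.

    # List the transports UCX detects on this host (e.g., cma, rc, dc, shm)
    $ ucx_info -d | grep -i transport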

Running LAMMPS

LAMMPS can be used for a wide variety of workloads. Below are the steps to download and run a sample dataset from the LAMMPS benchmark directory.

Run Script for AMD EPYC™ Processors

    #!/bin/bash
    # Load LAMMPS built with AOCC
    spack load lammps %aocc

    # Obtain benchmarks:
    # download the Rhodopsin dataset.
    wget https://raw.githubusercontent.com/lammps/lammps/develop/bench/in.rhodo.scaled
    wget https://raw.githubusercontent.com/lammps/lammps/develop/bench/data.rhodo

    # MPI and OpenMP settings
    # MPI_RANKS: number of cores available on the system (as reported by nproc).
    MPI_RANKS=$(nproc)
    export OMP_NUM_THREADS=1
    MPI_OPTS="-np $MPI_RANKS --map-by core --bind-to core"

    # Run the benchmark with the INTEL package
    # -sf intel automatically appends "intel" to styles that support it
    # -pk intel 0 runs the INTEL package on the CPU (zero coprocessors)
    mpirun $MPI_OPTS lmp -var x 8 -var y 8 -var z 8 -in in.rhodo.scaled -sf intel -pk intel 0

    # Run the benchmark with the OPENMP package
    # -sf omp automatically appends "omp" to styles that support it
    # -pk omp 1 runs the OPENMP package with 1 OpenMP thread per rank
    mpirun $MPI_OPTS lmp -var x 8 -var y 8 -var z 8 -in in.rhodo.scaled -sf omp -pk omp 1
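The script above runs one MPI rank per core with a single thread each. Ranks and threads can also be traded off; the following hybrid variant is a sketch that assumes a 128-core node (adjust the rank count, pe value, and thread count to your system):

    # Hybrid run: 32 MPI ranks x 4 OpenMP threads each (assumes 128 cores)
    export OMP_NUM_THREADS=4
    mpirun -np 32 --map-by slot:pe=4 --bind-to core lmp -var x 8 -var y 8 -var z 8 -in in.rhodo.scaled -sf omp -pk omp 4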

Note: The above build and run steps apply to the LAMMPS 29 Aug 2024 release, AOCC 5.0.0, AOCL 5.1.0, and Open MPI 5.0.8 on Rocky Linux 9.5 (Blue Onyx), using Spack v1.1.0.dev0 and the builtin repo from spack-packages (commit id: 7824c23443).
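To confirm which build and version are active after spack load, a quick check such as the following can help (illustrative):

    # List installed LAMMPS specs with hashes and variants
    $ spack find -lv lammps
    # Print the version banner of the loaded binary
    $ lmp -h | head -n 5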

For technical support on the tools, benchmarks, and applications that AMD offers on this page, and for related inquiries, reach out to us at toolchainsupport@amd.com.