Introduction
CP2K is a quantum chemistry and solid-state physics software package that can perform atomistic simulations of solid-state, liquid, molecular, periodic, material, crystal, and biological systems. CP2K provides a general framework for different modeling methods, such as density functional theory (DFT) using the mixed Gaussian and plane waves approaches.
CP2K is written in Fortran 2008 and can be run efficiently in parallel using a combination of OpenMP multi-threading and MPI when built with AOCC and AOCL. The Spack framework, together with the instructions below, provides a convenient way to build a CP2K binary that is optimized for your platform and package versions.
Official website for CP2K: https://www.cp2k.org
Build CP2K using Spack
Please refer to the Getting Started guide for setting up Spack with AMD Zen Software Studio.
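If Spack is not yet set up on the system, a minimal bootstrap is sketched below (repository path and shell are illustrative; follow the getting-started guide above for the full AMD Zen Software Studio configuration):

# Minimal Spack bootstrap (illustrative)
$ git clone https://github.com/spack/spack.git
$ . spack/share/spack/setup-env.sh

# Confirm that compilers (including AOCC, if installed) are visible to Spack
$ spack compiler list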
# Example for building CP2K with AOCC and AOCL
$ spack install cp2k@2025.1 +elpa %aocc ^amdfftw ^amdscalapack ^amdblis ^amdlibflame ^openmpi fabrics=cma,ucx
Explanation of the command options:
| Symbol | Meaning |
|---|---|
| %aocc | Build CP2K with the AOCC compiler. |
| +elpa | Enable optimized diagonalization routines from ELPA. |
| ^amdfftw | Use amdfftw as the FFTW implementation. |
| ^amdscalapack | Use amdscalapack as the ScaLAPACK implementation. |
| ^amdblis | Use amdblis as the BLAS implementation. |
| ^amdlibflame | Use amdlibflame as the LAPACK implementation. |
| ^openmpi fabrics=cma,ucx | Use Open MPI as the MPI provider, with CMA for efficient intra-node communication, falling back to the UCX fabric if required. Note: it is advisable to explicitly set the fabric appropriate for the host system where possible; refer to Open MPI with AMD Zen Software Studio for more guidance. |
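Before committing to a build, it can be helpful to preview how Spack will concretize the spec. A quick check (same spec as above; the -I flag marks dependencies that are already installed) is:

# Preview the concretized dependency tree without building
$ spack spec -I cp2k@2025.1 +elpa %aocc ^amdfftw ^amdscalapack ^amdblis ^amdlibflame ^openmpi fabrics=cma,ucx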
Running CP2K
CP2K includes a benchmark suite within its source folder. The benchmark suite is designed to provide performance metrics to help users identify the optimal configuration (e.g., number of MPI processes and number of OpenMP threads) for a particular problem. It also demonstrates the code’s parallel performance across different algorithms.
Runtime optimization: Process binding (pinning) at runtime can have a significant impact on CP2K performance, positive or negative, depending on problem type and size. It is recommended that users experiment with different binding options, such as --bind-to core, --bind-to socket, or no binding at all, to identify the optimal configuration for their specific workload. For example, in the benchmark case H2O-dft-ls.NREP2.inp, the best performance was observed when MPI processes were mapped by core but not bound to cores. A simple sweep over these options is sketched below.
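One way to run such an experiment is the loop below; the rank count, binding policies, and log file names are illustrative and should be adapted to the system under test:

# Compare Open MPI binding policies on the same input (illustrative sketch)
for BIND in none core socket; do
    echo "=== --bind-to $BIND ==="
    time mpirun -np $(nproc) --bind-to $BIND --map-by core \
        cp2k.psmp H2O-dft-ls.NREP2.inp > h2o-dft-ls.bind-$BIND.log
done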
Provided here is a sample script for running CP2K with the H2O-dft-ls.NREP2.inp input from the QS_DM_LS benchmark.
Run Script for AMD EPYC™ Processors
# Load CP2K built with AOCC
spack load cp2k %aocc

# Download the H2O-dft-ls.NREP2 input from the CP2K benchmark suite
wget https://raw.githubusercontent.com/cp2k/cp2k/refs/heads/master/benchmarks/QS_DM_LS/H2O-dft-ls.NREP2.inp

# MPI and OpenMP settings
# MPI_RANKS = number of cores available in the system
MPI_RANKS=$(nproc)
export OMP_NUM_THREADS=1
MPI_OPTS="-np $MPI_RANKS --bind-to core --map-by core"

# Run the benchmark
mpirun $MPI_OPTS cp2k.psmp H2O-dft-ls.NREP2.inp
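Since cp2k.psmp is a hybrid MPI + OpenMP binary, a variant of the run above with 2 OpenMP threads per rank may also be worth trying; the rank/thread split below is an assumption and should be tuned to the node topology:

# Hybrid MPI + OpenMP variant (illustrative; tune ranks/threads per node)
MPI_RANKS=$(( $(nproc) / 2 ))
export OMP_NUM_THREADS=2
export OMP_PLACES=cores
export OMP_PROC_BIND=close
mpirun -np $MPI_RANKS --map-by slot:PE=$OMP_NUM_THREADS --bind-to core \
    cp2k.psmp H2O-dft-ls.NREP2.inp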
Note: The above build and run steps apply to CP2K 2025.1, AOCC 5.0.0, AOCL 5.1.0, and Open MPI 5.0.8 on Rocky Linux 9.5 (Blue Onyx), using Spack v1.1.0.dev0 and the builtin repo from spack-packages (commit id: 7824c23443).
For technical support on the tools, benchmarks, and applications that AMD offers on this page, and for related inquiries, reach out to us at toolchainsupport@amd.com.