Introduction

The Weather Research and Forecasting (WRF) Model is a next-generation mesoscale numerical weather prediction system designed for both atmospheric research and operational forecasting applications. WRF features two dynamical cores, a data assimilation system, and a software architecture supporting parallel computation and system extensibility. The model serves a wide range of meteorological applications across scales from tens of meters to thousands of kilometers.

Official website for WRF: https://www.mmm.ucar.edu/weather-research-and-forecasting-model

Build WRF using Spack

Please refer to Getting Started with Spack using AMD Zen Software Studio for instructions on setting up Spack before building.
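
The sketch below shows a typical first-time Spack setup, assuming Spack has already been cloned and an AOCC installation is available on the system (the path is a placeholder):

    # Make the spack command available in the current shell
    . /path/to/spack/share/spack/setup-env.sh

    # Detect installed compilers (AOCC must already be installed and on PATH)
    spack compiler find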

    # Example for building WRF with AOCC
$ spack install wrf build_type=dm+sm %aocc ^hdf5 +fortran ^openmpi fabrics=cma,ucx

Explanation of the command options:

Symbol                      Meaning
build_type=dm+sm            Install the dm+sm (distributed-memory MPI + shared-memory OpenMP) build type
^hdf5 +fortran              Build the HDF5 library with Fortran bindings
^openmpi fabrics=cma,ucx    Use Open MPI as the MPI provider, with CMA for efficient intra-node communication, falling back to the UCX fabric if required
Note: It is advised to explicitly set the fabric appropriate for the host system where possible. Refer to Open MPI with AMD Zen Software Studio for more guidance.
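
For example, on a system with a UCX-capable high-speed interconnect, the fabric can be pinned explicitly in the spec (a sketch; adjust the fabrics value to match the actual hardware):

    # Example: build Open MPI with only the UCX fabric enabled
    $ spack install wrf build_type=dm+sm %aocc ^hdf5 +fortran ^openmpi fabrics=ucx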

Obtaining Benchmarks

Please refer to the link below to download the WRF 3.9.1.1, 4.2.2, and 4.4 benchmark cases:

https://www2.mmm.ucar.edu/wrf/users/benchmark/

For example, to download the CONUS 2.5 km dataset for WRF v4.4, use https://www2.mmm.ucar.edu/wrf/users/benchmark/v44/v4.4_bench_conus2.5km.tar.gz
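
The dataset can also be fetched and unpacked manually as shown below (the run script in the next section automates the same step):

    # Download (~34 GB) and extract the v4.4 CONUS 2.5 km benchmark input
    $ wget https://www2.mmm.ucar.edu/wrf/users/benchmark/v44/v4.4_bench_conus2.5km.tar.gz
    $ tar -xvf v4.4_bench_conus2.5km.tar.gz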

Running WRF on EPYC Processors

WRF supports a variety of workloads, but it is most commonly run as a benchmark using the CONUS 2.5 km and CONUS 12 km datasets.

The following example illustrates running the WRF CONUS 2.5 km benchmark on a dual-socket 5th Gen AMD EPYC™ processor system with 256 cores (128 cores per socket).
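
The run script below derives its MPI/OpenMP layout from the core count and L3 cache topology of the host. These values vary across EPYC™ SKUs and can be checked with lscpu before adjusting CORES_PER_L3CACHE and OMP_NUM_THREADS (example command; output differs per system):

    # Inspect socket, core, and L3 cache topology
    $ lscpu | grep -E "^CPU\(s\)|Core\(s\) per socket|Socket\(s\)|L3 cache"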

Run Script for AMD EPYC™ Processors

    #!/bin/bash
WRF_VERS=4.4
CONUS_INP=conus_2-5KM  

# Load WRF built with AOCC 
spack load wrf %aocc

# Setup the input and work directory
WORK_DIR=${PWD}/wrf                     # To keep WRF related input and runtime logs
WRF_INP_2_5KM=${WORK_DIR}/input         # CONUS input directory containing wrfinput_d01, wrfbdy_d01, etc.
WRF_RUN_2_5KM=${WORK_DIR}/rundir        # CONUS run directory where WRF output is generated

mkdir -p ${WORK_DIR} ${WRF_INP_2_5KM} ${WRF_RUN_2_5KM}

# Download WRF4.4 input and untar input
cd ${WRF_INP_2_5KM}
wget https://www2.mmm.ucar.edu/wrf/users/benchmark/v44/v4.4_bench_conus2.5km.tar.gz  # Download size ~34 GB
tar -xvf v4.4_bench_conus2.5km.tar.gz -C ${WRF_INP_2_5KM} --strip-components=1
cd ${WRF_RUN_2_5KM}

# Softlink the contents of the WRF run directory into the working run directory
ln -sfn ${WRF_HOME}/run/* .				# spack load sets the WRF_HOME environment variable
cp ${WRF_HOME}/configure.wrf .

# Remove old logs and namelist
rm -rf namelist.input rsl.* wrfout* 1node1tile

# Softlink Conus input to run directory
ln -sfn ${WRF_INP_2_5KM}/* .

# Runtime settings for AMD EPYC™ processors with NPS (NUMA nodes per socket) set to 4
export CORES_PER_L3CACHE=8
export NUM_CORES=$(nproc)
export OMP_NUM_THREADS=4
MPI_RANKS=$(( $NUM_CORES / $OMP_NUM_THREADS ))
RANKS_PER_L3CACHE=$(( $CORES_PER_L3CACHE / $OMP_NUM_THREADS ))
MPI_OPTS=" -np $MPI_RANKS --bind-to core --map-by ppr:$RANKS_PER_L3CACHE:l3cache:pe=${OMP_NUM_THREADS} "
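# For the example system above (256 cores, OMP_NUM_THREADS=4), this yields 64 MPI ranks,
# placed 2 per 8-core L3 cache, with each rank bound to 4 cores (pe=4)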

export OMP_STACKSIZE="64M"
export OMP_PROC_BIND=TRUE
export OMP_PLACES=threads

export WRFIO_NCD_NO_LARGE_FILE_SUPPORT=1
export WRF_EXE=${WRF_HOME}/main/wrf.exe

echo "Running time mpirun $MPI_OPTS ${WRF_EXE}"

mpirun $MPI_OPTS ${WRF_EXE}
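
After the run completes, a quick sanity check (run from the run directory) is to confirm that the rank-0 log reports a successful finish:

    # A clean run prints "SUCCESS COMPLETE WRF" in the rank-0 log
    $ grep "SUCCESS COMPLETE WRF" rsl.out.0000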

Calculating Benchmark Performance Numbers

Please refer to the link below for instructions on how to calculate benchmark performance values:

https://www2.mmm.ucar.edu/wrf/WG2/benchv3/#_Toc212961287
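
As a quick local check along the lines of the instructions above, the mean wall-clock time per model time step can be computed from the per-step "Timing for main" lines in the rank-0 log. The sketch below assumes the usual rsl line layout, where the elapsed seconds is the ninth whitespace-separated field, and skips the first step, which includes initialization cost:

    # Average elapsed seconds per time step from the rank-0 log
    $ grep "Timing for main" rsl.error.0000 | tail -n +2 | \
        awk '{sum += $9; n++} END {if (n) printf "%.3f s/step over %d steps\n", sum/n, n}'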

Note: The above build and run steps apply to WRF-4.6.1, AOCC-5.0.0, and OpenMPI-5.0.8 on Rocky Linux 9.5 (Blue Onyx) using Spack v1.1.0.dev0 and the builtin repo from spack-packages (commit id: 7824c23443).

For technical support on the tools, benchmarks, and applications that AMD offers on this page, and for related inquiries, reach out to us at toolchainsupport@amd.com.