Instructions for using NAMD software
NAMD is a parallel molecular dynamics code developed for UNIX platforms by the Theoretical and Computational Biophysics Group at the Beckman Institute of the University of Illinois. Its development builds on existing molecular dynamics packages, mostly X-PLOR and CHARMM, so it accepts input files in both formats. NAMD is integrated with the VMD visualisation and analysis program.
Licences
NAMD is a licensed program. You can download it free of charge after prior registration.
Installation
On Vega, NAMD is available either as a module or through a Singularity container. The NAMD/2.14-foss-2020a-mpi module is available on CVMFS. However, you can also use your own container to run jobs that require the NAMD software.
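If you use the module, a typical session looks like this (the exact module name may change as the software stack is updated):
module avail NAMD                      # list the NAMD modules provided on CVMFS
module load NAMD/2.14-foss-2020a-mpi   # load the MPI build used in the examples below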
Example of creating a NAMD container:
For GPU
NVIDIA provides an optimised container image for NAMD on GPUs, which can be converted to a Singularity image with the following command:
singularity build namd-gpu.sif docker://nvcr.io/hpc/namd:<tag>
If you do not have administrator rights on the system, use the --fakeroot switch to create the container. Permission to use this switch must be granted by the system administrator.
Example of creating a container for NAMD GPU:
singularity build --fakeroot namd-gpu.sif docker://nvcr.io/hpc/namd:2.13-singlenode
For CPU
Below is an example of a container definition for building NAMD with MPI and UCX:
Bootstrap: docker
From: centos:8.3.2011
%labels
Maintainer barbara.krasovec@ijs.si
%runscript
namd2 "$@"
charmrun "$@"
%environment
export PATH=/opt/NAMD_2.14_Source/Linux-x86_64-g++:/opt/NAMD_2.14_Source/charm-6.10.2/ucx-linux-x86_64-ompipmix/bin:$PATH
export LD_LIBRARY_PATH=/opt/NAMD_2.14_Source/lib:/opt/NAMD_2.14_Source/charm-6.10.2/ucx-linux-x86_64-ompipmix/bin:$LD_LIBRARY_PATH
export LD_LIBRARY_PATH=/opt/openmpi-4.0.3/build/lib:$LD_LIBRARY_PATH
export PATH=/opt/openmpi-4.0.3/build/bin:$PATH
%files
/ceph/hpc/software/containers/singularity/sw/NAMD_2.14_Source.tar.gz /opt/NAMD_2.14_Source.tar.gz
%post
dnf -y groupinstall "Development Tools"
dnf -y install epel-release
dnf -y install dnf-plugins-core
dnf config-manager --set-enabled powertools
dnf -y install wget vim git csh openssl-devel hwloc-devel pmix-devel libevent-devel
git clone https://github.com/openucx/ucx.git
cd ucx
./autogen.sh
./contrib/configure-release --prefix=/opt/ucx/build
make -j16
make install
wget https://download.open-mpi.org/release/open-mpi/v4.0/openmpi-4.0.3.tar.gz
tar -xf openmpi-4.0.3.tar.gz
cd openmpi-4.0.3
./configure --enable-mca-no-build=btl-uct --with-ucx=/opt/ucx/build --with-slurm --with-pmix --prefix=/opt/openmpi-4.0.3/build
make -j16
make install
export PATH=/opt/openmpi-4.0.3/build/bin:$PATH
export LD_LIBRARY_PATH=/opt/openmpi-4.0.3/build/lib:$LD_LIBRARY_PATH
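# Note: the Charm++ build (ucx-linux-x86_64 ompipmix target) and the NAMD build
# from /opt/NAMD_2.14_Source.tar.gz are expected to follow here; those steps are
# omitted in this example definition.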
Save the definition as namd-2.14.def and build the image with the following command (the --fakeroot switch is needed only if you are not building as root):
singularity build --fakeroot namd-2.14.sif namd-2.14.def
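As a quick sanity check of the built image (assuming the Charm++/NAMD build steps were completed inside the container, as noted in the %post section above), you can inspect it and verify that the namd2 binary and its libraries resolve:
singularity inspect namd-2.14.sif
singularity exec namd-2.14.sif ldd /opt/NAMD_2.14_Source/Linux-x86_64-g++/namd2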
Examples of Slurm jobs
Example using CPU with MPI and UCX
OpenMPI version 4 introduces UCX as the default communication layer between processes. Check with your cluster administrator how the nodes in the cluster are interconnected and adapt the way you run your job accordingly. NAMD must be compiled with support for the interconnect in use (e.g. OFI, VERBS, UCX, etc.).
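To check which transports UCX detects on a node and which PMI launch plugins srun supports on your cluster, the standard UCX and Slurm utilities can be used (ucx_info must be on your PATH, e.g. provided by a UCX module):
ucx_info -d        # list network devices and transports available to UCX
srun --mpi=list    # list the MPI/PMI plugin types supported by srun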
The following example shows a job that runs on 32 nodes and 256 cores. The job requests 2000 MB of memory per node and a walltime of 5 minutes on the cpu partition. One core per node is reserved for communication, so NAMD will report 224 cores in use rather than 256.
#!/bin/bash
#
#SBATCH --ntasks 256 # No. of cores
#SBATCH --nodes=32 # No. of nodes
#SBATCH --mem=2000MB # memory per node
#SBATCH -o slurm.%N.%j.out # STDOUT
#SBATCH -t 0:05:00 # walltime (HH:MM:SS)
#SBATCH --partition=cpu # partition name
module purge
module load NAMD/2.14-foss-2020a-mpi
srun --mpi=pmix_v3 namd2 ./apoa1.namd
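Assuming the script above is saved as namd-cpu.sh (the file name is arbitrary), it can be submitted and followed like this:
sbatch namd-cpu.sh                 # submit the job; Slurm prints the job ID
squeue -u $USER                    # check the state of your jobs
tail -f slurm.<node>.<jobid>.out   # follow the NAMD output once the job starts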
Example using GPUs
The following example requests one node with 2 GPUs and 2 cores. A separate file system is used for the work directory (jobs are not run from the home directory).
#!/bin/bash
#SBATCH -J NAMD_test
#SBATCH --time=01:00:00 # Duration
#SBATCH --nodes=1 # No. of nodes
#SBATCH --mem-per-cpu=2G # RAM per core
#SBATCH --cpus-per-task=2 # 2 OpenMP threads
#SBATCH --gres=gpu:2 # No. of GPU cards
module load NAMD/...
### Define work directory (SCRATCH)
cp -pr /d/hpc/home/user/apoa1 $SCRATCH_DIR
cd $SCRATCH_DIR
### Run program
export OMP_NUM_THREADS=1
srun namd2 apoa1.namd
### Other mode:
#namd2 +p$SLURM_CPUS_PER_TASK +setcpuaffinity +idlepoll apoa1.namd
### Copy the results back to the home directory ($HOME)
cp -pr $SCRATCH_DIR $HOME/apoa1
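The script above assumes that $SCRATCH_DIR is already defined in your environment. If it is not, a definition such as the following can be added after the module load line (the path is only an illustration; use the scratch file system of your cluster):
export SCRATCH_DIR=/ceph/hpc/scratch/$USER/$SLURM_JOB_ID   # illustrative scratch path
mkdir -p $SCRATCH_DIR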
Example using a Singularity container on CPU
If you wish to run a job across several nodes through a container (which does not have access to the Slurm functionality on the host), you must collect the names of the assigned nodes and create a node list for Charm++ (nodelist). The file must look as follows:
host {hostname_1} ++cpus {cores_per_node}
host {hostname_2} ++cpus {cores_per_node}
...
host {hostname_n} ++cpus {cores_per_node}
To automatically generate this file based on the assigned nodes in Slurm, do the following:
NODELIST=$(pwd)/nodelist.${SLURM_JOBID}
for host in $(scontrol show hostnames); do
echo "host ${host} ++cpus ${SLURM_CPUS_ON_NODE}" >> ${NODELIST}
done
Also define the SSH command that charmrun uses to connect to the nodes:
SSH="ssh -o PubkeyAcceptedKeyTypes=+ssh-dss -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o LogLevel=ERROR"
To run it, use (X is the total number of processes and input.namd stands for your NAMD configuration file):
singularity exec -B $(pwd):/host_pwd namd-2.14.sif charmrun ++remote-shell "${SSH}" ++nodelist ${NODELIST} +p X namd2 /host_pwd/input.namd
Or run namd2 with a given number of cores per node (+ppn) through charmrun:
singularity exec -B $(pwd):/host_pwd namd-2.14.sif charmrun ++remote-shell "${SSH}" ++nodelist ${NODELIST} namd2 +ppn <cores_per_node> /host_pwd/input.namd
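Putting the pieces together, a minimal batch script for this container-based run might look like this (the node counts, script name and input.namd are illustrative, and passwordless SSH between the assigned nodes is assumed):
#!/bin/bash
#SBATCH --job-name=namd-container
#SBATCH --nodes=2                # illustrative node count
#SBATCH --ntasks-per-node=8      # illustrative cores per node
#SBATCH --time=00:30:00
#SBATCH --partition=cpu

# build the Charm++ nodelist from the nodes assigned to this job
NODELIST=$(pwd)/nodelist.${SLURM_JOBID}
for host in $(scontrol show hostnames); do
    echo "host ${host} ++cpus ${SLURM_CPUS_ON_NODE}" >> ${NODELIST}
done

# SSH command that charmrun uses to start processes on the other nodes
SSH="ssh -o PubkeyAcceptedKeyTypes=+ssh-dss -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o LogLevel=ERROR"

# run NAMD from the container on all 16 assigned cores
singularity exec -B $(pwd):/host_pwd namd-2.14.sif \
    charmrun ++remote-shell "${SSH}" ++nodelist ${NODELIST} +p16 namd2 /host_pwd/input.namd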
Example using a Singularity container on GPU
export NAMD_EXE=namd2
singularity exec --nv -B $(pwd):/host_pwd namd-gpu.sif ${NAMD_EXE} +ppn <nproc> +setcpuaffinity +idlepoll /host_pwd/input.namd
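A minimal Slurm wrapper for the command above might look like this (the resource numbers, partition name and input file are illustrative):
#!/bin/bash
#SBATCH --job-name=namd-gpu-container
#SBATCH --nodes=1
#SBATCH --cpus-per-task=8      # illustrative core count
#SBATCH --gres=gpu:1           # illustrative GPU count
#SBATCH --time=01:00:00
#SBATCH --partition=gpu        # adjust to your cluster's GPU partition name

export NAMD_EXE=namd2

# --nv exposes the host GPU driver inside the container
singularity exec --nv -B $(pwd):/host_pwd namd-gpu.sif \
    ${NAMD_EXE} +ppn ${SLURM_CPUS_PER_TASK} +setcpuaffinity +idlepoll /host_pwd/input.namd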
Example of job submission with ARC
First, you need an .xrsl file in which you specify the resources required to run the job.
&
(rsl_substitution=("JOBID" "test-namd-08"))
(jobname = $(JOBID))
(inputfiles =
($(JOBID).tar.gz "")
($(JOBID).sh "")
)
(executable = $(JOBID).sh)
(outputfiles = ($(JOBID).out.tar.bz2 ""))
(stdout=$(JOBID).log)
(join=yes)
(gmlog=log)
(count=256)
(countpernode=16)
(memory=2000)
(walltime="4 hours")
And the executable file:
#!/bin/bash
# CHARMRUN, CHARMARGS and NAMD2 are expected to point to the charmrun binary,
# its arguments and the namd2 binary available on the execution cluster.
JOBID="test-namd-08"
tar -xf ${JOBID}.tar.gz
cd ${JOBID}
${CHARMRUN} ${CHARMARGS} ${NAMD2} +idlepoll $PWD/${JOBID}.conf > ${JOBID}.log
echo Computing finished!
tar cvjf ../${JOBID}.out.tar.bz2 ${JOBID}.restart.{coor,vel,xsc} ${JOBID}.{dcd,coor,vel,xsc,log}
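The job can then be submitted and retrieved with the standard ARC client tools, for example (the endpoint is illustrative; the tarball must contain a test-namd-08 directory with the NAMD input files):
tar -czf test-namd-08.tar.gz test-namd-08/        # package the input files referenced in the .xrsl
arcsub -c <cluster_endpoint> test-namd-08.xrsl    # submit the job description
arcstat <job_id>                                  # check the job status
arcget <job_id>                                   # download the results once the job has finished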
Citing NAMD in articles
The instructions are available at http://www.ks.uiuc.edu/Research/namd/papers.html
Documentation
- Downloading NAMD software: http://www.ks.uiuc.edu/Development/Download/download.cgi?PackageName=NAMD
- NAMD Tutorials: http://www.ks.uiuc.edu/Training/Tutorials/
- NAMD User Guide: http://www.ks.uiuc.edu/Research/namd/2.14/ug/
- NAMD 2.14 Release notes: http://www.ks.uiuc.edu/Research/namd/2.14/notes.html