Pre-compiled Charm++ 6.8.2 for NAMD 2.13 nightly version compilation for multiple GPU node simulations

From: Aravinda Munasinghe (
Date: Tue Jan 29 2019 - 10:37:15 CST

Dear NAMD users and developers,
I have recently attempted to compile the NAMD 2.13 nightly build to run
multi-node GPU replica exchange simulations using the REST2 methodology.
First, I was able to run the current version of the NAMD 2.13
Linux-x86_64-verbs-smp-CUDA (multi-copy algorithms on InfiniBand) binaries
with charmrun on our university cluster in a multiple node/GPU setup.
Then, I tried compiling the NAMD 2.13 nightly version to use REST2 (since the
current version has a bug with selecting solute atom IDs, as described
here), following the information on the NVIDIA site as well as what is
mentioned in the release notes. But I failed miserably, as several others
have (as I can see from the mailing list archive). Since the precompiled
binaries of the current version work perfectly, I cannot think of a reason
why my attempts failed other than some issue with the library files and
compilers I am loading when building Charm++ for a multiple node GPU setup.
I used the following flags to build Charm++:

./build charm++ verbs-linux-x86_64 icc smp --with-production
I used the ifort and Intel/2018 compilers.
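In case it helps to compare, here is a minimal sketch of the full sequence I am attempting; the module name, source-tree name, and CUDA prefix are placeholders for my cluster's setup, not official build instructions:

```shell
# Sketch only: module names, directory names, and paths below are
# cluster-specific placeholders and will differ on your system.
module load intel/2018              # provides icc/ifort
tar xf charm-6.8.2.tar
cd charm-6.8.2

# Build Charm++ for InfiniBand (verbs) with SMP and the Intel compiler.
# The resulting directory name reflects the options passed to ./build.
./build charm++ verbs-linux-x86_64 icc smp --with-production

# Configure and build NAMD against that Charm++ tree (nightly source
# directory name is a placeholder; adjust --cuda-prefix to your install).
cd ../NAMD_nightly_Source
./config Linux-x86_64-icc --charm-arch verbs-linux-x86_64-icc-smp \
    --with-cuda --cuda-prefix /usr/local/cuda
cd Linux-x86_64-icc
make -j8
```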
One thing I have noticed is that when I use the precompiled NAMD 2.13
binaries I do not have to set LD_LIBRARY_PATH, but I have to do so when I
compile it myself (otherwise I keep getting missing-library errors).
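For reference, this is roughly the workaround I am using; the build prefix is a placeholder for wherever your compiled binaries and the Intel runtime libraries actually live:

```shell
# Placeholder prefix: point this at the directory that holds the namd2
# binary and the shared libraries the self-compiled build links against.
NAMD_BUILD="$HOME/NAMD_2.13_Source/Linux-x86_64-icc"

# Prepend it so the dynamic linker can find those libraries at run time.
export LD_LIBRARY_PATH="$NAMD_BUILD:${LD_LIBRARY_PATH:-}"
```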

It would be a great help if any of you who have successfully compiled
NAMD 2.13 for multiple GPU nodes could share your charm-6.8.2 build files
along with information on the compilers you used, so I could compile NAMD
myself. Any other advice on how to solve this, or precompiled binaries of
the nightly version itself, would be highly appreciated.
Thank you,

Aravinda Munasinghe,

This archive was generated by hypermail 2.1.6 : Tue Dec 10 2019 - 23:20:26 CST