NAMD Wiki: Namd28b1Release
Issues with the 2.8b1 release of NAMD.
Please see the release notes at http://www.ks.uiuc.edu/Research/namd/2.8b1/notes.html
For bugs fixed since the 2.7 release, see Namd27Release; for all changes, see http://www.ks.uiuc.edu/Research/namd/cvs2html/chronological.html
There is a memory leak in position output (both trajectory and restart files). Fixed in CVS; the workaround is to restart the simulation before it runs out of memory.
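In practice the workaround amounts to pointing a fresh run at the restart files of the aborted one. A minimal sketch, assuming the previous run used outputName myjob and reached step 500000 (the file names and step count are placeholders for your own output):

  # continue from the binary restart files written by the aborted run
  bincoordinates   myjob.restart.coor   ;# last saved positions
  binvelocities    myjob.restart.vel    ;# last saved velocities (so no temperature line)
  extendedsystem   myjob.restart.xsc    ;# periodic cell / extended system state
  firsttimestep    500000               ;# step count reached before the restart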
The limit on the number of VDW types in the CUDA build is eliminated as of the March 29 nightly build.
Spherical and cylindrical boundary conditions are broken, producing NaN forces and energies. Fixed in 2.8b2.
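For reference, a configuration block along these lines exercises the affected code path in 2.8b1 (a minimal sketch; the center, radius, and force constant are made up):

  sphericalBC        on
  sphericalBCcenter  0.0 0.0 0.0   ;# center of the restraining sphere (x y z)
  sphericalBCr1      24.0          ;# radius (A) at which the restraint begins
  sphericalBCk1      10.0          ;# force constant (kcal/mol/A^2)
  sphericalBCexp1    2             ;# exponent of the boundary potential

Cylindrical boundary conditions (cylindricalBC and related keywords) are affected the same way.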
SMP and multicore builds complain "FixedAtoms may not be enabled in a script." when a script attempts to turn fixed atoms off. Fixed in 2.8b2.
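The failing pattern is a config that holds atoms fixed for part of a scripted run and then releases them. A minimal sketch (fixed.pdb and the step counts are placeholders); note that fixedAtomsForces on is what permits changing fixedAtoms from a script at all:

  fixedAtoms        on
  fixedAtomsForces  on          ;# required to allow toggling fixedAtoms in a script
  fixedAtomsFile    fixed.pdb   ;# hypothetical PDB flagging the fixed atoms
  fixedAtomsCol     B           ;# beta column marks which atoms are fixed

  minimize 1000                 ;# relax the free atoms first
  fixedAtoms off                ;# 2.8b1 SMP/multicore builds abort here
  run 50000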
Various I/O calls that return interrupted-system-call (EINTR) errors cause an exit rather than a retry. Fixed in 2.8b2.
The charm-6.3.0 shipped with NAMD 2.8b1 includes some patches from charm-6.3.1.
The experimental memory-optimized build option with parallel I/O is documented at NamdMemoryReduction.
On Abe and Lincoln at NCSA:
/u/ac/jphillip/NAMD_scripts/runbatch_2.8b1
uses the ibverbs build.
/u/ac/jphillip/NAMD_scripts/runbatch_2.8b1_smp
uses one process per node for increased maximum available memory, but is slightly slower since one core per node is reserved for the communication thread.
/u/ac/jphillip/NAMD_scripts/runbatch_2.8b1_cuda
runs on GPU-accelerated Lincoln nodes. See the CUDA section of the release notes for details.
On the Ember Altix UV at NCSA:
/gpfs1/u/ac/jphillip/NAMD_scripts/runbatch_2.8b1
On Ranger at TACC:
/share/home/00288/tg455591/NAMD_scripts/runbatch_2.8b1
uses ibverbs.
/share/home/00288/tg455591/NAMD_scripts/runbatch_2.8b1_smp
can run 1, 2, or 4 processes per node (1way, 2way, or 4way) with 15, 7, or 3 compute threads per process, respectively; since each process reserves one core of the 16-core node for its communication thread, performance drops progressively as more cores go to communication.
On Lonestar at TACC:
/home1/00288/tg455591/NAMD_scripts/runbatch_2.8b1
and
/home1/00288/tg455591/NAMD_scripts/runbatch_2.8b1_smp
On Kraken at NICS:
/nics/b/home/jphillip/NAMD_scripts/runbatch_2.8b1
still uses MPI, but the binary is now built with g++, so it should be noticeably faster.
/nics/b/home/jphillip/NAMD_scripts/runbatch_2.8b1_smp
will use 11 compute threads per process, leaving one core of each 12-core node for the communication thread.