Building NAMD for Opteron Cluster with ICC

From: Chris Share (
Date: Fri Mar 06 2009 - 21:53:02 CST


I'd like to build NAMD on an AMD Opteron cluster using InfiniBand and running 64-bit CentOS 5. I've managed to build NAMD successfully on my local machine (not the cluster) using several different configurations (GCC, PGI, CUDA). However, I'm unclear how to proceed with the cluster build. I'd like to use the Intel compilers, as these give the best performance, and I'd also like to use MPI.

Firstly, how should I build Charm++?

Am I right in thinking that the Charm++ build configuration should be:

./build charm++ mpi-linux-amd64 icc smp
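
Before running that, a quick pre-flight check that the Intel and MPI tools are actually visible might save a failed build. This is only a sketch: the tool names icc/icpc/mpicxx are assumptions about the cluster's environment-module setup, so adjust them to whatever your MPI stack provides.

```shell
# Sketch: check that the Intel compilers and the MPI C++ wrapper are on PATH.
# Tool names (icc, icpc, mpicxx) are assumptions; adjust to your cluster.
report=""
for tool in icc icpc mpicxx; do
    if command -v "$tool" >/dev/null 2>&1; then
        report="$report$tool: $(command -v "$tool")\n"
    else
        report="$report$tool: NOT FOUND (load the compiler/MPI modules first)\n"
    fi
done
printf "%b" "$report"
```

If mpicxx is found, `mpicxx -show` (supported by Open MPI and MVAPICH wrappers) prints the underlying compiler command, which should name the Intel compiler rather than g++ when the modules are loaded in the right order.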

When I run this I get the following error:

Selected Compiler: icc
Selected Options: smp
Copying src/scripts/Makefile to mpi-linux-amd64-smp-icc/tmp
Soft-linking over bin
Soft-linking over lib
Soft-linking over lib_so
Soft-linking over include
Soft-linking over tmp
Generating mpi-linux-amd64-smp-icc/tmp/
Performing '/usr/bin/gmake charm++ OPTS=' in mpi-linux-amd64-smp-icc/tmp
/usr/bin/gmake headerlinks
gmake[1]: Entering directory `/nfs/user2/cpsmusic/NAMD_INTEL/namd2/charm-6.0/mpi-linux-amd64-smp-icc/tmp'
checking machine name... mpi-linux-amd64-smp-icc
checking "C++ compiler as"... "icpc -fpic -cxxlib-icc -D_REENTRANT "
checking "whether C++ compiler works"... "no"
Cannot compile C++ programs with icpc -fpic -cxxlib-icc -D_REENTRANT
 (check your charm++ version)
gmake[1]: *** [conv-autoconfig.h] Error 1
gmake[1]: Leaving directory `/nfs/user2/cpsmusic/NAMD_INTEL/namd2/charm-6.0/mpi-linux-amd64-smp-icc/tmp'
gmake: *** [headers] Error 2
Charm++ NOT BUILT. Either cd into mpi-linux-amd64-smp-icc/tmp and try
to resolve the problems yourself, visit
for more information. Otherwise, email the developers at
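
For what it's worth, the probe that fails above is just a trivial compile with `icpc -fpic -cxxlib-icc -D_REENTRANT`. Below is a sketch of reproducing that probe by hand, outside the Charm++ build; the file path and variable names are mine, and by default it uses the system `c++` so it runs anywhere (override CXX/CXXFLAGS to match the command from the log). One thing worth checking: newer Intel compilers (roughly 10.0 onward) no longer accept `-cxxlib-icc`, which is a common cause of exactly this configure failure.

```shell
# Reproduce Charm++'s "whether C++ compiler works" probe by hand.
# Defaults are placeholders; to match the log, run with:
#   CXX=icpc CXXFLAGS="-fpic -cxxlib-icc -D_REENTRANT"
CXX=${CXX:-c++}
CXXFLAGS=${CXXFLAGS:-}
cat > /tmp/charm_probe.cpp <<'EOF'
#include <iostream>
int main() { std::cout << "C++ compiler works\n"; return 0; }
EOF
# Deliberately unquoted so CXXFLAGS splits into separate arguments.
if $CXX $CXXFLAGS /tmp/charm_probe.cpp -o /tmp/charm_probe 2>/tmp/charm_probe.err
then
    probe_result=ok
else
    probe_result=failed
fi
echo "compile $probe_result"
[ "$probe_result" = failed ] && cat /tmp/charm_probe.err
```

If the hand-run probe fails with an "unknown option" complaint about `-cxxlib-icc`, that points at a compiler-version mismatch with the charm-6.0 machine configuration rather than a broken compiler install.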

Am I on the right track here?

Any help would be appreciated.


This archive was generated by hypermail 2.1.6 : Wed Feb 29 2012 - 15:52:27 CST