From: Vani Krishna (vakri2002_at_yahoo.com)
Date: Tue Sep 26 2006 - 22:36:21 CDT
Thanks for the reply. I was able to run pgm successfully on multiple processors using mpiexec. But this time I have a problem with 'make' in the 'Linux-amd64-MPI' directory. The machine is a Cray XD1 single chassis with 6 nodes and 4 processors/node, running SuSE Linux, and the version of gcc is 3.3.3. The steps I followed from the source directory are:
tar -xvf charm-5.9.tar
./build charm++ mpi-linux-amd64
(then ran the megatest/pgm with mpiexec, runs successfully)
edit Make.charm, arch/Linux-amd64.fftw, and arch/Linux-amd64.tcl appropriately to reflect the paths to those directories
./config tcl fftw Linux-amd64-MPI
Here 'make' fails with the following error. Do I need a higher version of gcc, or are there any other files that I need to modify?
g++ -I/home/vani/NAMD-2.6-Source/NAMD_2.6_Source/charm-5.9 /mpi-linux-amd64/include -DCMK_OPTIMIZE=1 -Isrc -Iinc -Iplugins/include -I/home/vani/NAMD-2.6-Source/tcl/linux-amd64/include -I/home/vani/tcl/include -DNAMD_TCL -I/home/vani/NAMD-2.6-Source/fftw/linux-amd64/include -I/home/vani/fftw/include -DNAMD_FFTW -DNAMD_VERSION=\"2.6\" -DNAMD_PLATFORM=\"Linux-amd64-MPI\" -O3 -m64 -fexpensive-optimizations -ffast-math -o obj/common.o -c src/common.C
g++: cannot specify -o with -c or -S and multiple compilations
make: *** [obj/common.o] Error 1
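[For what it's worth: this is the error g++ emits when it is handed more than one input file together with -c and -o. The stray space in the -I path on the command line above ("charm-5.9 /mpi-linux-amd64/include"), if it is present in the actual command and not just a line-wrap in this archive, would have exactly that effect, since g++ would then treat "/mpi-linux-amd64/include" as a second compilation unit. A minimal sketch reproducing the complaint; file names are illustrative:]

```shell
# Illustrative only: g++ refuses -c together with -o when given
# more than one input file (the situation the build log shows).
echo 'int main(void){return 0;}' > /tmp/one.c
echo '' > /tmp/two.c

# One input file: compiles fine.
g++ -c /tmp/one.c -o /tmp/one.o && echo "single file compiles"

# Two input files with -c and -o: g++ aborts, as in the build log.
g++ -c /tmp/one.c /tmp/two.c -o /tmp/one.o \
  || echo "g++ rejects multiple files with -c -o"
```

If that space is real, the fix would be in the CHARMBASE path set in Make.charm rather than in gcc itself.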
Jim Phillips <jim_at_ks.uiuc.edu> wrote:
The ++local option isn't available for MPI builds. The charmrun script is
just a hack that calls mpirun, so forget it exists as well. Just run the
MPI-based pgm binary as you would any other MPI binary.
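Concretely, a sketch (process counts and the hostfile name are illustrative, and the exact launcher depends on your MPI stack):

```shell
# Run the MPI-built megatest binary like any MPI program; no charmrun.
cd mpi-linux-amd64/tests/charm++/megatest
make pgm

# With a generic mpiexec/mpirun:
mpiexec -n 4 ./pgm

# With MVAPICH's mpirun_rsh, hosts must be given explicitly
# (./hosts lists one hostname per line):
mpirun_rsh -np 4 -hostfile ./hosts ./pgm
```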
On Wed, 20 Sep 2006, Vani Krishna wrote:
> I am trying to install NAMD 2.6 on a Cray XD1 machine with 64-bit AMD Opteron 2.4 GHz processors. The configuration consists of 6 nodes with 4 processors/node in a single chassis. I have been able to install the 'net-linux' version by compiling the source code. To get better performance, I am trying to install the 'mpi-linux' version of NAMD. I am having some issues doing that, and I have searched the archive but couldn't find a similar issue in other AMD64 threads.
> I am getting stuck in testing the charmrun after building it. I have tried the following to build charm:
> ./build charm++ mpi-linux-amd64
> ./build charm++ mpi-linux-amd64 -O -DCMK_OPTIMIZE=1
> In both cases, when I try to run the 'megatest':
> cd mpi-linux-amd64/tests/charm++/megatest/
> make pgm
> ./charmrun ++local +p4 ./pgm
> I run into the following problem:
> ./charmrun ++local +p4 ./pgm
> Running on 4 processors: ++local ./pgm
> Unrecognized argument ++local ignored.
> Without hostfile option, hostnames must be specified on command line.
> Usage: mpirun_rsh -np N [-debug] [-paramfile pfile] [-show] [-tv] [-xterm]
> (-hostfile hfile | h1 h2 ... hN)
> np => specify the number of processes
> debug => run each process under the control of gdb
> paramfile => file containing the run-time MVICH parameters
> show => show the commands but don't execute them
> tv => run each process under the control of totalview
> xterm => run the remote processes under xterm
> hostfile => name of the file containing the hosts on which
> to run the job, one per line
> h1 h2 ... => names of the hosts on which to run the job
> progname => name of the MPI binary
> options => arguments for the MPI binary
> Are there any solutions to this?
This archive was generated by hypermail 2.1.6 : Wed Feb 29 2012 - 15:42:37 CST