VMD-L Mailing List
From: Axel Kohlmeyer (akohlmey_at_gmail.com)
Date: Wed Jul 01 2020 - 04:52:33 CDT
VMD is not an MPI-parallel program by default, so trying to run it on a
cluster as if it were one is a pointless and wasteful operation.
VMD *can* be compiled with MPI support (this is non-trivial and has to be
done locally), which adds extra functionality, but that functionality must
be used *explicitly* to take advantage of parallel computing. You can also
add MPI support after the fact by compiling/loading a Tcl MPI wrapper
package (like this one: https://sites.google.com/site/akohlmey/software/tclmpi ),
but this, again, won't get you any advantage unless you write explicitly
parallel VMD/Tcl scripts using those MPI wrapper commands (e.g. to split
per-frame analysis across multiple VMD processes by letting each instance
load a different range of frames).
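For illustration, a minimal sketch of such a script (file names, total
frame count, and selection text are placeholders to adapt; check the
TclMPI documentation for the exact command names and signatures):

  package require tclmpi

  # start MPI and find out who we are (names as in the TclMPI docs)
  ::tclmpi::init
  set comm tclmpi::comm_world
  set size [::tclmpi::comm_size $comm]
  set rank [::tclmpi::comm_rank $comm]

  # placeholder: total number of frames, assumed known in advance
  set nframes 1000
  set chunk [expr {($nframes + $size - 1) / $size}]
  set first [expr {$rank * $chunk}]
  set last  [expr {min($first + $chunk - 1, $nframes - 1)}]

  # each VMD instance loads only its own slice of the trajectory
  mol new system.psf
  mol addfile traj.dcd first $first last $last waitfor all

  set sel [atomselect top "protein"]
  set n [molinfo top get numframes]
  for {set f 0} {$f < $n} {incr f} {
      $sel frame $f
      $sel update
      puts "rank $rank: frame [expr {$first + $f}] rgyr [measure rgyr $sel]"
  }
  $sel delete

  ::tclmpi::finalize

Each instance can then write out its partial results for combining
afterwards, or the results can be gathered with the TclMPI collective
commands.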
Finally, a few of the built-in VMD commands (e.g. measure gofr) have
multi-threading support included. Those will run in parallel if multiple
cores are available, but not across multiple nodes or MPI ranks, so using
srun to create multiple instances of VMD makes no sense for them.
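For example, something like the following inside a single VMD instance
will already use all detected cores (the selection text and histogram
parameters here are only placeholders):

  # pair correlation between two selections; adapt the selection text
  set sel1 [atomselect top "name OH2"]
  set sel2 [atomselect top "name OH2"]
  # measure gofr is multi-threaded, but only within one node
  set gofr [measure gofr $sel1 $sel2 delta 0.1 rmax 10.0 usepbc 1]
  # the result is a list of lists; the first two are the r values and g(r)
  set rvals [lindex $gofr 0]
  set grval [lindex $gofr 1]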
axel.
On Wed, Jul 1, 2020 at 5:36 AM Adupa Vasista <adupavasista_at_gmail.com> wrote:
> I don't think that is the case, because I use the same command to run
> NAMD and have never stumbled on this issue before.
>
> Moreover, this morning I saw a similar mail on the LAMMPS mailing list,
> where the problem is said to be caused by *VMD being compiled without MPI
> support* or by using an *mpirun command from a different MPI library*. I
> have messaged the system administrator about this and will update as soon
> as he replies.
>
> Thank you.
>
>
>
> On Wed, Jul 1, 2020 at 1:49 PM Ashar Malik <asharjm_at_gmail.com> wrote:
>
>> It prints the message N times because you start "vmd" N times.
>> If you want to use multiple cores, you should start VMD only once; as
>> you can see, every VMD screen dump already reports that it detected 24
>> CPUs.
>>
>> Given that, if the contents of poly.tcl are capable of scaling to many
>> CPUs, they will.
>> It will also depend on how the particular HPC node your job ends up
>> running on is set up, and whether it allows a program to scale up.
>>
>> On Wed, Jul 1, 2020 at 1:33 PM Adupa Vasista <adupavasista_at_gmail.com>
>> wrote:
>>
>>> Dear VMD users,
>>>
>>> I am running VMD on an HPC cluster using the command
>>>
>>> srun -N 5 vmd -dispdev text -e poly.tcl
>>>
>>> But when I execute it, the VMD startup message is printed N times, like
>>> the output below.
>>>
>>>
>>>
>>>> Info) VMD for LINUXAMD64, version 1.9.3 (November 30, 2016)
>>>> Info) http://www.ks.uiuc.edu/Research/vmd/
>>>> Info) Email questions and bug reports to vmd_at_ks.uiuc.edu
>>>> Info) Please include this reference in published work using VMD:
>>>> Info) Humphrey, W., Dalke, A. and Schulten, K., `VMD - Visual
>>>> Info) Molecular Dynamics', J. Molec. Graphics 1996, 14.1, 33-38.
>>>> Info) -------------------------------------------------------------
>>>> Info) Multithreading available, 24 CPUs detected.
>>>> Info) CPU features: SSE2 AVX AVX2 FMA
>>>> Info) VMD for LINUXAMD64, version 1.9.3 (November 30, 2016)
>>>> Info) http://www.ks.uiuc.edu/Research/vmd/
>>>> Info) Email questions and bug reports to vmd_at_ks.uiuc.edu
>>>> Info) Please include this reference in published work using VMD:
>>>> Info) Humphrey, W., Dalke, A. and Schulten, K., `VMD - Visual
>>>> Info) Molecular Dynamics', J. Molec. Graphics 1996, 14.1, 33-38.
>>>> Info) -------------------------------------------------------------
>>>> Info) Multithreading available, 24 CPUs detected.
>>>> Info) CPU features: SSE2 AVX AVX2 FMA
>>>> Info) Free system memory: 123GB (97%)
>>>> Info) Free system memory: 123GB (97%)
>>>> Info) No CUDA accelerator devices available.
>>>> Info) No CUDA accelerator devices available.
>>>> Info) VMD for LINUXAMD64, version 1.9.3 (November 30, 2016)
>>>> Info) http://www.ks.uiuc.edu/Research/vmd/
>>>> Info) Email questions and bug reports to vmd_at_ks.uiuc.edu
>>>> Info) Please include this reference in published work using VMD:
>>>> Info) Humphrey, W., Dalke, A. and Schulten, K., `VMD - Visual
>>>> Info) Molecular Dynamics', J. Molec. Graphics 1996, 14.1, 33-38.
>>>
>>>
>>> Any insights on why this happens?
>>>
>>> Thank you.
>>>
>>
>>
>> --
>> Best,
>> /A
>>
>
>
> --
>
>
--
Dr. Axel Kohlmeyer  akohlmey_at_gmail.com  http://goo.gl/1wk0
College of Science & Technology, Temple University, Philadelphia PA, USA
International Centre for Theoretical Physics, Trieste, Italy.