From: Sadhu, Shubho (NIH/NCI) [F] (sadhusj_at_mail.nih.gov)
Date: Thu Jul 23 2009 - 09:43:34 CDT
Just to clarify: does this mean individual NAMD threads are competing for GPU resources when the number of processors requested exceeds the number of GPUs? For me, NAMD runs faster as more processors are added (beyond the number of GPUs), but the scaling is much worse.
From: owner-namd-l_at_ks.uiuc.edu [owner-namd-l_at_ks.uiuc.edu] On Behalf Of Axel Kohlmeyer [akohlmey_at_cmm.chem.upenn.edu]
Sent: Thursday, July 23, 2009 9:04 AM
To: David Chalmers
Subject: Re: namd-l: NAMD CUDA on dual Nvidia 295 GPUs
On Thu, 2009-07-23 at 22:20 +1000, David Chalmers wrote:
> Can I use all four GPUs?
> Should I be using one, four (or some other number) of cores?
you need at least one cpu core per GPU. NAMD can oversubscribe
GPUs, and then you can hope to utilize the individual GPUs better
by running something else on a cpu core while another task is
accessing the GPUs. there are some issues with this due to limitations
of the nvidia drivers.
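[As a concrete sketch (not from the original thread): assuming a networked CUDA build of NAMD run via charmrun on a single box with four visible GPUs; the config file name and core count are placeholders. The CUDA builds accept a `+devices` flag listing CUDA device IDs, and worker processes are mapped onto the listed devices, so requesting more processes than listed devices oversubscribes the GPUs.]

```shell
# hypothetical launch: 8 NAMD processes sharing 4 GPUs (roughly 2 per GPU);
# ++local keeps all processes on this machine, +p8 asks for 8 processes,
# and +devices gives the comma-separated CUDA device IDs to cycle through
charmrun ++local +p8 namd2 +devices 0,1,2,3 sim.conf > sim.log
```

With `+p4 +devices 0,1,2,3` instead, each GPU gets exactly one process and there is no oversubscription.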
> Which version of NAMD should I be using, multicore or the charmrun version?
i've been able to use the charmrun version. the multi-core version often
gave me segfaults (even without using GPUs).
> Is there some way that I can better understand how many GPUs NAMD is using?
there should be a message in the output telling you exactly
how the individual GPUs are enumerated and bound to tasks.
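[A quick way to pull those lines out of a finished run, assuming the log was saved to a file such as `sim.log` (a placeholder name); the exact wording of the message varies between NAMD versions, so the pattern below is an assumption rather than the literal string.]

```shell
# CUDA builds of NAMD print one line per process reporting which
# device it was bound to; filter the log for those binding messages
grep -i "device" sim.log | grep -i "bind"
```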
> Thanks for any advice,
> David Chalmers
> Faculty of Pharmacy, Monash University
> 381 Royal Pde, Parkville, Vic 3053. Australia
--
=======================================================================
Axel Kohlmeyer   akohlmey_at_cmm.chem.upenn.edu   http://www.cmm.upenn.edu
Center for Molecular Modeling -- University of Pennsylvania
Department of Chemistry, 231 S.34th Street, Philadelphia, PA 19104-6323
tel: 1-215-898-1582, fax: 1-215-573-6233, office-tel: 1-215-898-5425
=======================================================================
If you make something idiot-proof, the universe creates a better idiot.
This archive was generated by hypermail 2.1.6 : Wed Feb 29 2012 - 15:53:04 CST