Re: GaMD is slower on GPU compared to cMD

From: Josh Vermaas
Date: Tue Sep 07 2021 - 09:08:00 CDT

Hi Venkat,

Welcome to the wonderful world of alpha software. :D The performance you
see for conventional MD on normal GPUs is because it follows a new code
path that has been GPU optimized, and the simulation data doesn't leave
the GPU. Not everything in NAMD works that way, and so sometimes you get
to use the old code path, where the GPU computes only some of the terms
needed, and timestep integration has to happen on the CPU. Even if you
use more than 1 CPU to help accelerate the integration steps, shuffling
data back and forth still limits simulation performance on modern
hardware. So you aren't doing anything wrong per se (you are using more
than 1 CPU, right?), but your performance is going to be much worse
unless your method is supported by the CUDASOAintegrate code path.
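To make the two code paths concrete, here is a minimal config sketch, assuming NAMD 3.0 alpha keyword syntax; the comments are my reading of the situation, not official documentation:

```tcl
# Fast path: whole timestep integrated on the GPU; data stays GPU-resident.
# In the 3.0 alphas this works for conventional MD, not for GaMD.
CUDASOAintegrate    on

# GaMD runs fall back to the old path: the GPU computes (some) forces,
# the CPU does the integration, and data shuttles back and forth.
accelMD             on
accelMDG            on
# CUDASOAintegrate must stay off (its default) for these runs.
```

The practical consequence is the utilization gap Venkat reports below: ~90% GPU busy on the fast path versus ~13% when the CPU is in the integration loop.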


On 9/6/21 12:15 PM, Venkatareddy Dadireddy wrote:
> Hi,
> I am new to NAMD and want to use the GaMD module in NAMD v3.0 alpha 9.
> I am following the GaMD protocol.
> I am using a tutorial test file (PDB) to get hands-on experience with GaMD in NAMD.
> When I run conventional MD (cMD) on my test system, it runs quite fast
> (160 ns/day) on a single GPU. But when I use the same system for GaMD,
> it takes >3 days with the following preparatory steps:
> accelMDGcMDPrepSteps    200000
> accelMDGcMDSteps       1000000
> accelMDGEquiPrepSteps   200000
> accelMDGEquiSteps     25000000
> timestep 2.0 # fs
> What I found is that 'CUDASOAintegrate on' accelerates the simulations
> but in case of GaMD equilibration and production steps the
> 'CUDASOAintegrate on' is not supported.
> In the cMD case, >90% of the GPU is used, but in the GaMD case only
> 13% of the GPU is utilized.
> Please help me solve this problem.
> Thank you,
> Venkat
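For scale, a quick back-of-envelope check (my arithmetic, using the step counts and timestep quoted above) shows how large the slowdown is:

```python
# Total simulated time implied by the quoted GaMD config.
steps = 200_000 + 1_000_000 + 200_000 + 25_000_000  # prep + cMD + equi-prep + equi
timestep_fs = 2.0

total_ns = steps * timestep_fs * 1e-6   # 1 ns = 1e6 fs
print(total_ns)                          # 52.8 ns of simulated time

# At the cMD rate of 160 ns/day this would finish in under 8 hours...
print(total_ns / 160 * 24)               # ~7.9 hours

# ...so ">3 days" implies the slow code path is running below ~18 ns/day.
print(total_ns / 3)                      # <17.6 ns/day
```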

Josh Vermaas
Assistant Professor, Plant Research Laboratory and Biochemistry and Molecular Biology
Michigan State University

This archive was generated by hypermail 2.1.6 : Fri Dec 31 2021 - 23:17:11 CST