From: Aron Broom (broomsday_at_gmail.com)
Date: Fri Feb 24 2012 - 14:49:28 CST
I've been running simulations in NAMD using AMBER FF03 and GLYCAM06 on GPUs
(mostly M2070s). I dug out some old files from a 60 ns run and compared the
temperature at 10 ns with that at 60 ns by computing it directly from the
velocity files written at the end of each 10 ns segment. With Langevin
dynamics set to 300 K, I get 297 K at 10 ns and 296 K at 60 ns. So I'm not
seeing what you see.
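In case it helps, here is a rough sketch of how one can get that temperature
from a binary restart .vel file. It is not the exact script I used; it assumes
the usual binary layout (a 4-byte atom count followed by three doubles per
atom in NAMD internal velocity units, little-endian here), and the per-atom
masses and the degree-of-freedom count have to be supplied separately from
your topology:

    # temp_from_vel.py -- rough sketch, check the assumptions against your files
    import struct
    import numpy as np

    PDBVELFACTOR = 20.45482706          # NAMD internal velocity unit -> A/ps
    KCAL_PER_AMU_A2_PS2 = 0.0023900574  # 1 amu*A^2/ps^2 in kcal/mol
    BOLTZMANN = 0.001987191             # kcal/(mol K)

    def read_namd_vel(path):
        """Return an (natoms, 3) array of velocities in A/ps.

        Assumes a 4-byte atom count followed by natoms*3 doubles in NAMD
        internal units, little-endian (native order on x86).
        """
        with open(path, "rb") as fh:
            natoms = struct.unpack("<i", fh.read(4))[0]
            vel = np.frombuffer(fh.read(8 * 3 * natoms), dtype="<f8")
        return vel.reshape(natoms, 3) * PDBVELFACTOR

    def temperature(vel, masses, ndof):
        """Kinetic temperature from velocities (A/ps) and masses (amu).

        ndof must account for removed COM motion and constraints, e.g.
        roughly 3*N - 3 - 3*N_waters when only the waters are rigid.
        """
        ke = 0.5 * np.sum(masses[:, None] * vel**2) * KCAL_PER_AMU_A2_PS2
        return 2.0 * ke / (ndof * BOLTZMANN)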
For reference, I was using 1 fs / 2 fs / 4 fs multiple time stepping, with
only the waters kept rigid (SETTLE). The system size was ~101,000 atoms, and I
was also using pressure control at 1 atm.
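For concreteness, the relevant part of the configuration looked roughly like
the sketch below; the damping and piston period/decay values are illustrative
placeholders, not copied from my actual input file:

    # 1 fs / 2 fs / 4 fs multiple time stepping
    timestep             1.0
    nonbondedFreq        2
    fullElectFrequency   4

    # only the waters rigid, via SETTLE
    rigidBonds           water
    useSettle            on

    # Langevin thermostat at 300 K
    langevin             on
    langevinTemp         300
    # damping value below is an illustrative placeholder
    langevinDamping      1

    # Langevin piston barostat at 1 atm
    langevinPiston       on
    langevinPistonTarget 1.01325
    langevinPistonTemp   300
    # period/decay below are illustrative placeholders
    langevinPistonPeriod 100
    langevinPistonDecay  50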
Now, this being said, the system was restarted every 10 ns, but the velocities
were not rescaled: they were read from the restart velocity file along with
the coordinates and the extended-system information, and I think the random
seed was even the same at every restart (which was probably unwise, but not
relevant to this discussion). So I imagine this should have been equivalent to
just running one long 60 ns simulation.
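Each segment was continued with something along these lines (file names, step
count, and the seed value are illustrative); the point is that with
binvelocities there is no "temperature" line, so NAMD neither regenerates nor
rescales the velocities:

    # continue from the previous 10 ns segment
    bincoordinates   prev.restart.coor
    binvelocities    prev.restart.vel
    extendedSystem   prev.restart.xsc
    # 10,000,000 steps = 10 ns already completed at 1 fs/step
    firsttimestep    10000000
    # explicit seed, kept (probably unwisely) identical at each restart
    seed             12345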
Is your system considerably smaller than mine? Perhaps the error creeps up
more slowly with more particles.
On Fri, Feb 24, 2012 at 1:51 AM, Norman Geist wrote:
> Hi experts,
>
> we have a little issue here. We use NAMD on CPU and GPU with the AMBER FF.
> The systems run fine on CPU, but show a consistent increase in temperature
> when running on GPUs. Is that a known problem? What can be done about it?
> The rise is ca. 25 K over 40 ns, so long simulations cannot be done without
> many temperature rescalings.
>
> Any ideas?
>
> PS: The system does not contain fixed atoms.
--
Aron Broom M.Sc
PhD Student
Department of Chemistry
University of Waterloo