From: Vermaas, Josh (vermaasj_at_msu.edu)
Date: Mon Nov 08 2021 - 19:57:14 CST
This is a biggish system being run only on CPUs on a single node, so it is going to be pretty slow. If you get CUDA operational, the 960 might help, but the 960 is also a bit old, so it may actually be a bottleneck itself. The ultimate solution is to get more hardware to throw at the problem, either by buying time or by proposing your science to a supercomputing site.
From: <owner-namd-l_at_ks.uiuc.edu> on behalf of Amir Zeb <zebamir85_at_gmail.com>
Reply-To: "namd-l_at_ks.uiuc.edu" <namd-l_at_ks.uiuc.edu>, Amir Zeb <zebamir85_at_gmail.com>
Date: Monday, November 8, 2021 at 8:41 PM
To: "namd-l_at_ks.uiuc.edu" <namd-l_at_ks.uiuc.edu>
Subject: namd-l: Why simulation running on linux is damn slow??
Hello NAMD users,
I have a system of ~250,000 atoms and I want to run a simulation for 100 ns. The time step is 2 fs, so I put run = 50000000 in the config file. It took 4 days to complete barely 6,400,000 steps, which means it will take >20 days to finish the 100 ns simulation at this speed. The command I used is:
namd2 +idlepoll +p10 +devices 0 config.namd > xxx.log
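The time estimate can be checked with simple arithmetic using only the figures from the message above; at the observed rate the full run actually takes closer to ~31 days, consistent with the ">20 days" estimate:

```python
# Back-of-the-envelope throughput check using the numbers from the message.
timestep_fs = 2                # integration time step in femtoseconds
steps_done = 6_400_000         # steps completed so far
days_elapsed = 4               # wall-clock time for those steps
target_ns = 100                # desired simulation length

ns_done = steps_done * timestep_fs * 1e-6   # 1 ns = 1e6 fs
ns_per_day = ns_done / days_elapsed
days_total = target_ns / ns_per_day

print(f"{ns_per_day:.1f} ns/day -> ~{days_total:.0f} days for {target_ns} ns")
# prints "3.2 ns/day -> ~31 days for 100 ns"
```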
Details of my system:
CPU(s): 40, online CPUs list: 0-39, Core(s) per socket: 10, Sockets: 2,
NVIDIA GM206 [GeForce GTX 960] (rev a1).
But when I run the command nvidia-smi, it prints 'not found' and suggests it can be installed with: sudo apt install nvidia-340 (among several others).
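A 'not found' from nvidia-smi usually means no NVIDIA driver is installed, so NAMD cannot see the GPU at all. A minimal sketch of checking for it first (the nvidia-340 package name is the distro's own suggestion quoted above; note that the legacy 340 branch likely predates the GTX 960, so a current driver branch would probably be needed for CUDA builds of NAMD):

```shell
# Check whether the NVIDIA driver utility is installed before retrying a GPU run.
if command -v nvidia-smi >/dev/null 2>&1; then
  status="driver present"
  nvidia-smi                      # should list the GTX 960 once the driver is loaded
else
  status="driver missing"
  # sudo apt install nvidia-340   # distro suggestion; a newer driver branch is
                                  # likely required for a GTX 960 and CUDA NAMD
fi
echo "$status"
```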
Please let me know what I should do to speed up the simulation.
This archive was generated by hypermail 2.1.6 : Fri Dec 31 2021 - 23:17:12 CST