Is GPU double-precision floating point performance important for NAMD?

From: Mert Gür (
Date: Thu Jun 02 2016 - 16:48:07 CDT

Dear all,

When I run GPU-accelerated MD simulations in NAMD, are any
double-precision floating-point calculations performed on the GPU? In
other words, how much should I care about a GPU's double-precision
performance if I am running NAMD?

Are there any sources/documents that explain this extensively?

What double-precision floating-point performance should I be looking for in a GPU?

For example, the K80 has:
Peak double-precision floating-point performance: 1.87 Tflops
Peak single-precision floating-point performance: 5.6 Tflops
CUDA cores: 4,992
Memory size per board (GDDR5): 24 GB

whereas the Titan X has (if I am not mistaken):

Peak double-precision floating-point performance: 200 Gflops
Peak single-precision floating-point performance: 7 Tflops
CUDA cores: 3,072

The new GTX1080 is said to give 10.7 TFLOPs of single precision performance.
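For comparison, a quick sketch that computes the single-to-double precision throughput ratios from the figures quoted above (the GTX 1080 double-precision number is my assumption, based on consumer Pascal's nominal 1/32 FP64 rate; treat all values as vendor peak specs, not measured NAMD performance):

```python
# Peak throughput in TFLOPS, taken from the figures quoted in this message.
# The GTX 1080 DP value is assumed (1/32 of SP, the consumer Pascal ratio).
gpus = {
    "Tesla K80": {"sp": 5.6, "dp": 1.87},
    "Titan X":   {"sp": 7.0, "dp": 0.2},
    "GTX 1080":  {"sp": 10.7, "dp": 10.7 / 32},
}

for name, t in gpus.items():
    ratio = t["sp"] / t["dp"]
    print(f"{name}: SP {t['sp']:.2f} TFLOPS, "
          f"DP {t['dp']:.3f} TFLOPS, SP/DP ratio {ratio:.0f}:1")
```

The point of the ratio is that the K80 sacrifices relatively little going to double precision (about 3:1), while the consumer cards drop by roughly 32:1, so the answer to "which card is faster" depends entirely on how much double-precision work NAMD actually offloads to the GPU.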

So, if GPU double-precision performance is not that important for NAMD:

Obviously the GTX 1080 and Titan X would give me better single-precision
performance than the K80. Doesn't that mean I would get faster MD simulations
on the GTX 1080 than on the others?

If that is the case, why should I select a K80?

Does double-precision performance of the GPU matter if I apply any type of
bias or perform accelerated MD?

My understanding (from past emails on the list) is that the "ECC error
correction" GPU feature is also not (that) important for NAMD. Can someone
elaborate on this, or point me to a link/document which I can read?



This archive was generated by hypermail 2.1.6 : Tue Dec 27 2016 - 23:22:14 CST