From: Giacomo Fiorin (giacomo.fiorin_at_gmail.com)
Date: Fri Nov 02 2018 - 06:48:30 CDT
Hi Sesha, it is generally not possible to use all of the cores of a GPU
simultaneously, since they implement very different types of calculations.
This is especially true for specialized circuits such as the tensor cores
of Volta-generation GPUs and the ray-tracing cores of the Turing ones. You
should look only at the cores that the MD simulation programs use: single-
and double-precision floating-point cores.
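To see how busy those cores are in aggregate, nvidia-smi reports overall GPU and memory-controller utilization (it does not break usage down by core type; for that you would need a profiler such as Nsight). A rough sketch, assuming a Linux box with the NVIDIA driver installed — the sample CSV line is made up for illustration:

```shell
# On a real machine with a GPU, uncomment to sample utilization once per second:
#   nvidia-smi --query-gpu=utilization.gpu,utilization.memory --format=csv -l 1
# It emits CSV lines like the hypothetical sample below; awk reformats them:
sample='87 %, 45 %'
echo "$sample" | awk -F', ' '{print "GPU busy:", $1, "| memory busy:", $2}'
# prints: GPU busy: 87 % | memory busy: 45 %
```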
It is also nearly impossible to keep all cores of one type working all the
time, because moving data on and off the device takes a significant chunk of
time. If you compare a server-grade CPU to a heavy truck, a GPU would be
closer to a freight train: it hauls much more, but also takes much longer to
load and unload.
The general principle is maximizing the results you get vs. what you have
to pay. If you can use a V100 for free (e.g. because the local AI experts
can only use it sporadically), great for you! If it costs you anything,
run benchmarks for your specific system on different hardware you have
access to, and decide for yourself.
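A minimal benchmarking pass might look like the sketch below. The +p and +devices flags come from NAMD's CUDA builds; the input file name apo.namd is hypothetical, and the sample log line is just an example of the format NAMD prints:

```shell
# On real hardware, uncomment to run NAMD on GPU 0 with 8 CPU threads:
#   namd2 +p8 +devices 0 apo.namd > apo.log
# NAMD's log then contains benchmark lines like this hypothetical sample:
sample='Info: Benchmark time: 8 CPUs 0.0482824 s/step 0.279181 days/ns'
echo "$sample" | awk '{print $6, "s/step;", $8, "days/ns"}'
# prints: 0.0482824 s/step; 0.279181 days/ns
```

Comparing the days/ns figure across the machines you have access to is usually the most direct way to decide where your money (or queue time) is best spent.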
One note about the specific software: although there are many codes that
perform well on single-GPU setups, you will find that NAMD is especially
competitive in multi-GPU ones. If your system is large, this will help you
for sure at some point.
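A quick way to judge whether a second GPU is worth it is to compare s/step timings and compute the parallel efficiency; the sketch below uses made-up numbers, and the namd2 invocation (commented out) assumes the SMP CUDA build:

```shell
# A two-GPU run would look like this (uncomment on real hardware):
#   namd2 +p16 +devices 0,1 system.namd > two_gpu.log
one_gpu=0.0480   # s/step with one GPU (hypothetical)
two_gpu=0.0290   # s/step with two GPUs (hypothetical)
# Parallel efficiency = t1 / (2 * t2); 100% would be perfect scaling.
awk -v a="$one_gpu" -v b="$two_gpu" \
    'BEGIN {printf "efficiency: %.0f%%\n", 100 * a / (2 * b)}'
# prints: efficiency: 83%
```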
On Fri, Nov 2, 2018 at 6:59 AM sesha surya vara prasad reddy karri <
> Hello friends,
> I am using an NVIDIA GV100 card for NAMD studies. It accelerates my MD
> system calculations, but I want to know whether all the cores of this card
> are used efficiently or not. Please give me an idea about this. Thank you
--
Giacomo Fiorin
Associate Professor of Research, Temple University, Philadelphia, PA
Contractor, National Institutes of Health, Bethesda, MD
http://goo.gl/Q3TBQU
https://github.com/giacomofiorin
This archive was generated by hypermail 2.1.6 : Sat Dec 07 2019 - 23:20:14 CST