From: Vermaas, Josh (vermaasj_at_msu.edu)
Date: Thu Apr 01 2021 - 12:52:30 CDT
The price is unlikely to be justified for the desktop/workstation/ad-hoc
computing you are describing. The big benefit of the A6000 and other
cards in what used to be called the Quadro line is that they have gobs
of memory. So while my older 8GB GTX 980 card will sometimes throw
out-of-memory exceptions if I try rendering too large a structure, the
RTX 8000 in the lab with its 48GB of RAM will render multi-million-atom
structures just fine.
In terms of CUDA performance, I believe the two
are pretty comparable for NAMD3 on smaller systems, although I've never
had the hardware in hand to try. They both basically use the same
chip, the GA102. I think the real trick will be getting your hands on
either card, though, as there have been relatively widespread shortages
of consumer GPU hardware.
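For a quick back-of-envelope feasibility check along these lines, a sketch like the following can help decide whether a structure is likely to fit on a given card. The per-atom memory figure here is purely a hypothetical placeholder (it is not from this thread and will vary with the renderer and representation), so treat it as a tunable parameter, not a measurement.

```python
def fits_in_gpu(n_atoms, gpu_mem_gb, bytes_per_atom=1024):
    """Rough check: does a structure of n_atoms plausibly fit in GPU memory?

    bytes_per_atom is a hypothetical ballpark covering coordinates, radii,
    colors, and renderer overhead -- adjust it for your own workload.
    """
    needed_gb = n_atoms * bytes_per_atom / 1024**3
    return needed_gb <= gpu_mem_gb

# A 25k-atom system is tiny next to a 10GB RTX 3080,
# while a 50-million-atom system would need a 48GB-class card.
print(fits_in_gpu(25_000, 10))       # small system, 10GB card
print(fits_in_gpu(50_000_000, 10))   # huge system, 10GB card
print(fits_in_gpu(50_000_000, 48))   # huge system, 48GB card
```

The point of the sketch is just that at the 10k-25k atom scale in question, memory is nowhere near the limiting factor on either card.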
On 4/1/21 12:18 PM, Goedde, Chris wrote:
> Hi all,
> I’m thinking of upgrading the GPU in the Linux box I use for NAMD to take advantage of NAMD 3.0. Given the constraints of my current hardware, I’m trying to choose between a 10 GB GeForce RTX 3080 and a 48 GB A6000. The A6000 is about $4k more than the 3080, and I’m wondering if it’s worth it.
> My systems are generally relatively small, let’s say 10k to 25k atoms. I’m wondering how much difference I might expect between these two cards on systems of this size and if the difference in price can be justified.
> Thanks for any insight.
> Chris Goedde
> Department of Physics
> DePaul University
--
Josh Vermaas
Assistant Professor, MSU-DOE Plant Research Lab and Department of Biochemistry and Molecular Biology
vermaasj_at_msu.edu
https://prl.natsci.msu.edu/people/faculty/josh-vermaas/
This archive was generated by hypermail 2.1.6 : Fri Dec 31 2021 - 23:17:11 CST