From: Josh Vermaas (
Date: Tue May 06 2014 - 12:48:05 CDT

Hi Erik,

I'm sure others have opinions too, but these are my opinions/experiences.

For the most part, even the lower-end offerings from NVIDIA and AMD will
handle 100-200k atoms just fine. If it is just for a workstation (no
fancy 3-D screens, one monitor), your run-of-the-mill discrete card will
do the job, and the choice mostly comes down to what resolution you
expect the card to drive at a given refresh rate (VMD does just fine on
laptops without a discrete graphics card, after all). In a workstation
environment, that generally means an NVIDIA GTX or an AMD Radeon will
work just fine (so long as you check that it supports the number of
monitors you intend to drive). For pure visualization purposes, the more
professionally oriented cards (NVIDIA Tesla and AMD FirePro) don't have
any features you need.

In terms of which manufacturer you should choose, that really comes down
to whether your workflow needs the CUDA features. VMD has CPU-based
implementations of every analysis algorithm that is also implemented in
CUDA, and for 100-200k-atom systems both versions finish fast enough
that CUDA support is not a big deal either way. There are other
considerations, though, that are external to VMD. If you are running
Linux, the binary-blob drivers NVIDIA puts out have had a better
reputation than those from AMD (although AMD has better open-source
drivers). High-end Radeons tend to be slightly pricier than their GTX
equivalents because, for whatever reason, bitcoin-mining codes run
faster on them, creating additional demand.

For you, the MD code you run probably has a bigger impact. NAMD can only
take advantage of CUDA-capable (and therefore NVIDIA) GPUs, while other
codes may use OpenCL and thus can use either vendor. The main reason my
machine currently has a GTX 580 is that it was the best GPU I could
afford at the time that NAMD could take advantage of. Nowadays there
are a lot of choices, but they are mostly dictated by your budget. If I
were to update my graphics card today, I'd weigh a GTX 780, 780 Ti, or
Titan against one another, based primarily on their NAMD performance
relative to my current card. If I didn't have to consider running
simulations on my workstation, I'd be very happy with a 750 or 760
(which cost less than half as much).

Those are the general considerations, but sometimes these hardware
choices are complicated by mundane and dumb cabling issues. For
instance, does your computer have a spare cable coming out of the power
supply to feed a graphics card? The specific cards I've mentioned all
require one, and sometimes two, supplemental power cables from the power
supply, as they can't draw enough from the PCIe slot alone. The NVS 295
you currently have doesn't look like it needs them (it draws only 23 W,
while the GTX 750 draws something like 100-200 W). Since Dell tends not
to include extra cables your hardware doesn't need, that would limit the
options you have for an upgraded graphics card considerably.
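The cabling question above boils down to simple arithmetic: compare a card's typical draw against what the slot alone can supply. A minimal sketch, using the wattage figures from this thread and assuming the nominal ~75 W that a PCIe x16 slot provides (the 75 W figure is my addition, not something stated above):

```shell
#!/bin/sh
# Rough check: can a card run off slot power alone, or does it need
# supplemental cables from the PSU?  The ~75 W slot limit is the PCIe
# spec's nominal figure (an assumption, not from this email).
slot_limit_w=75

# "name:watts" pairs -- draws are the figures quoted in the discussion.
for card in "NVS-295:23" "GTX-750:150"; do
    name=${card%%:*}
    watts=${card##*:}
    if [ "$watts" -gt "$slot_limit_w" ]; then
        echo "$name (${watts} W): needs supplemental power cable(s)"
    else
        echo "$name (${watts} W): slot power alone is enough"
    fi
done
```

So before picking a card, check both the card's rated draw and how many 6/8-pin connectors your power supply actually has free.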

-Josh Vermaas

On 05/06/2014 11:08 AM, Erik Nordgren wrote:
> Hi folks,
> I have what must be a very common question, although I tried searching
> the list archives and somehow didn't come up with anything very recent
> & relevant, so figured I'd post.
> Basically, I'm just wondering if folks who are accustomed to
> purchasing new hardware regularly could comment with thoughts on the
> "optimum" choice (in terms of power vs. cost) of a GPU to put in a
> desktop workstation today, for smooth visualization of VMD structures
> with, say, 100-200 K atoms. (I assume that the "sweet spot" for
> choosing a GPU is a moving target, with the ever-improving
> capabilities of cards, which is why posts on this subject from over a
> year ago are probably not very relevant anymore.) I should add that
> I'm not in the market for an entire brand-new workstation, but rather
> considering just upgrading the GPU in the linux box I already have (a
> Dell Precision T3500, few years old already), which at the moment has
> an NVIDIA Quadro NVS 295.
> As a related question, is it true that the only GPU manufacturer worth
> seriously considering for VMD is NVIDIA (due to the CUDA optimizations)?
> Many thanks in advance for any & all suggestions!
> Erik
> --
> C. Erik Nordgren, Ph.D.
> Department of Chemistry
> University of Pennsylvania