From: Josh Vermaas
Date: Sat Mar 14 2020 - 23:14:06 CDT

Hi Morgan,

I'm assuming these are laptop cards you'd be looking at to maximize
portability? For molecular dynamics, the RAM used is basically
non-existent in the grand scheme of things, especially for a system that
small. GPU vs. CPU on laptop systems is a bit outside my typical
experience, but here are the results I get on my own laptop running the
DHFR benchmark (23k atoms). My laptop has a P1000 (a GPU about 2 years
older than either of those options) with an i7-8850H (6 cores), which
was what I could put together around Black Friday last year for about
$1k. This is how it performs with NAMD 2.13.

2 CPUs + 1 GPU: 17 ns/day

4 CPUs + 1 GPU: 19.3 ns/day

6 CPUs + 1 GPU: 19.7 ns/day

6 CPUs alone: 5.5 ns/day
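(In case it's useful for reproducing these numbers: with a NAMD 2.13 multicore-CUDA build, the CPU thread count is set with +p and the GPU is selected with +devices. The config and log filenames below are placeholders, not the actual benchmark files.)

```shell
# Vary the CPU thread count with +p; +devices 0 pins the run to the first GPU.
# "dhfr.namd" stands in for whatever config file the DHFR benchmark uses.
namd2 +p2 +devices 0 dhfr.namd > dhfr_2cpu_1gpu.log
namd2 +p6 +devices 0 dhfr.namd > dhfr_6cpu_1gpu.log
# CPU-only comparison: same build, just don't hand it a GPU device.
namd2 +p6 dhfr.namd > dhfr_6cpu_only.log
```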

Clearly I'm limited by how good or bad the GPU is, since adding cores
doesn't materially improve performance. So from that perspective, I'd
get the 1660 Ti. But these results highlight that getting a microsecond
of trajectory with a laptop is going to be a bear. The laptop is going
to be noisy and hot the whole time it's calculating (2 months nonstop!).
If it were me, I'd get an older desktop from surplus, upgrade its GPU
and possibly its power supply, and let it calculate off in the corner
for 2 or 3 months. You'll be a lot happier.
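For a rough sense of where the "2 months nonstop" comes from, the arithmetic is just trajectory length divided by throughput. This little sketch uses my P1000 numbers from above; a 1660 Ti should come in somewhat faster, but the order of magnitude is the point:

```python
# Wall-time estimate for a fixed-length trajectory.
# Throughput numbers are from the DHFR benchmarks above, not the
# 1660 Ti / 1650 being considered.

def days_needed(target_ns, ns_per_day):
    """Days of nonstop computation to produce target_ns of trajectory."""
    return target_ns / ns_per_day

target = 1000.0  # 1 microsecond = 1000 ns

for label, rate in [("P1000 + 6 cores", 19.7), ("6 CPUs alone", 5.5)]:
    print(f"{label}: about {days_needed(target, rate):.0f} days")
# P1000 + GPU works out to ~51 days; CPU-only to ~182 days.
```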


On 3/14/20 8:56 PM, Morgan Hoffman wrote:
> I am helping a graduate student spec out some hardware to run this.
> She has a limited budget and needs to be mobile.
> I see this app is GPU accelerated, but we are weighing between CPU and
> GPU.
> She is running an average 25k-particle simulation for 1 microsecond,
> with output file sizes looking to be around 88 GB.
> I was debating between:
> 4 cores (hyperthreaded) with a 1660 Ti
> 6 cores (hyperthreaded) with a 1650
> Storage and RAM are similar and flexible, and will likely be NVMe and 16-32 GB.
> I see a few things mentioned here about GPU acceleration; does that
> alleviate system memory requirements, since the computation takes
> place on the GPU using video memory?
> Sorry to pollute the list with this but we lack the expertise.