From: Axel Kohlmeyer (
Date: Wed Jul 21 2010 - 09:09:19 CDT

> job on such a node would allow me to use it as it was here; my concern was
> especially which graphics card is used in this case, the super mega one on
> the node or my onboard one locally. But as was explained to me, I can use the
> whole power of the node, as long as there is enough bandwidth.

as far as graphics is concerned: with OpenGL, the _local_ GPU determines
the render speed, unless you use something like VirtualGL. otherwise,
OpenGL commands will be relayed via GLX to the GPU on your desktop. if the
OpenGL performance of your desktop is poor, rendering will be slow,
regardless of how powerful the remote GPU is.

with VMD in general, it pays to have a powerful local GPU. specifically
with GLSL enabled (which is not the default!), you get very high graphics
quality and can offload work from the CPU to the GPU.

the fact that your file loading takes that long is due to the nature
of parsing text files: it is _extremely_ slow. text files are also huge,
and in the case of .pdb you get a high loss of detail, since coordinates
are truncated to fixed-width fields.
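to illustrate the loss of detail: .pdb ATOM records store each coordinate
in a fixed-width field of 8 columns with 3 decimals. a minimal sketch
(this is not VMD's parser, just the round-trip through such a field):

```python
# sketch of the precision loss in a fixed-width .pdb coordinate
# field: 8 columns, 3 decimal places (the "%8.3f" format).
x = 12.3456789
pdb_field = "%8.3f" % x          # what a .pdb ATOM record can store
restored = float(pdb_field)
print(repr(pdb_field))           # '  12.346'
print(abs(restored - x))         # ~3.2e-4 silently lost
```

anything below about 0.001 Angstrom is gone for good once the file is written.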

> @ Axel: thank you also for the explanations, your email came after I started
> replying to Ian, so I am simply adding some lines.
> My questions is: what other file format instead of .pdb should I use? I saw
> something about .xyz files, which should be more simple, but could not

.xyz is also a text file format and thus slow to read.

the traditional way of dealing with this in NAMD/VMD is to use
a .psf and .dcd file combination. the .psf file is a text file, but
needs to be read only once and has all the topology and atom
name/type and other property information. the .dcd file is binary
single-precision floating point and contains only the coordinate
data. for kicks, try saving your trajectory as .dcd and then reading it
back. you will be surprised. the problem with .dcd is that it _only_
contains coordinate and cell information, but no topology data.
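a rough sketch of why a binary coordinate format like .dcd is compact and
fast to read: the coordinates become a flat block of single-precision
floats that can be read in bulk instead of parsed character by character.
this is NOT the actual dcd file layout (which adds headers and per-frame
record markers), just the underlying idea with the python stdlib:

```python
import struct

# pack 4 atoms' x/y/z coordinates as single-precision (4-byte) floats,
# the way binary trajectory formats store them.
natoms = 4
coords = [1.0, 2.5, -3.25] * natoms             # x/y/z per atom
blob = struct.pack("%df" % len(coords), *coords)
print(len(blob))                                # 12 floats * 4 bytes = 48
# reading back is a bulk unpack, not per-character parsing:
back = struct.unpack("%df" % len(coords), blob)
assert back == tuple(coords)                    # exact for these values
```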

within the molfile plugins of VMD, there is support for an experimental
format called .js that is optimized for speed, flexibility and large systems.
that would be a good starting point.

> explain to myself yet what a trajectory file is, which is also mentioned in
> relation with VMD. I have a so-called checkpoint file, that means for each
> simulation time step I get a file containing all atom coordinates, velocities
> and potential energies. The form is

> “atom_number  atom_type  atom_mass  x-coord  y-coord  z-coord  v_x  v_y  v_z  E_pot”
> What would you say is the best format I should use?

any text based format is impractical for large systems.
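a hypothetical parser for one line of the checkpoint format quoted above
(the field names and the sample values are made up for illustration).
every float() call here is exactly the per-character work that makes
loading large text files so slow:

```python
# parse one line of the quoted checkpoint format:
# atom_number atom_type atom_mass x y z vx vy vz E_pot
line = "1  Si  28.0855  0.10 0.20 0.30  0.01 0.02 0.03  -4.63"
f = line.split()
atom = {
    "number": int(f[0]),
    "type":   f[1],
    "mass":   float(f[2]),
    "xyz":    tuple(float(v) for v in f[3:6]),   # coordinates
    "vel":    tuple(float(v) for v in f[6:9]),   # velocities
    "e_pot":  float(f[9]),
}
print(atom["xyz"])    # (0.1, 0.2, 0.3)
```

multiply this by a million atoms and thousands of frames, and the parsing
time dominates everything else.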

and my thoughts on MD software developers making up
new, ad-hoc, inefficient and incompatible text-based file
formats with each new code are not printable.

my current preference is to use a .psf file plus a binary
trajectory, with either .dcd or .xtc as the coordinate format.

for the largest systems (up to 30,000,000 particles),
i have implemented enhanced trajectory writing modules
in, e.g., LAMMPS that allow writing out only a subset
of the system (e.g. discarding the solvent).


> Kind regards
> Alen
> Dipl.-Phys. Alen-Pilip Prskalo
> Institut für Materialprüfung, Werkstoffkunde und Festigkeitslehre IMWF
> Universität Stuttgart
> Pfaffenwaldring 32
> 70569 Stuttgart
> Tel: +49 711 685 52579
> Fax: +49 711 685 62635
> Email:
> From: Ian Stokes-Rees []
> Sent: Mittwoch, 21. Juli 2010 14:08
> To: Prskalo, Alen-Pilip
> Cc:
> Subject: Re: vmd-l: PC suitable for visualisation of large systems, maximum
> number of atoms in a .pdb file
> On 7/21/10 3:40 AM, Prskalo, Alen-Pilip wrote:
> I’m new in the VMD community, used Rasmol before. I am simulating systems of
> ~1,000,000 atoms in each frame and I intend to make nice videos later on. So
> it is obvious that I will have an enormous .pdb file containing a large number
> of atoms AND frames to make the video look fluent. My question is: what is
> the best PC to do it? I do have a bit of money on the side to upgrade my 4
> core, 4 GB RAM PC, but how? First idea is of course to buy extra RAM, to go to
> 8 GB or even 16 GB. What about the graphics card? Presently I have only
> onboard graphics; would a good graphics card make the loading and rotation
> easier, and if so, how exactly (from the technical standpoint)?
> I don't have direct experience with this, but from my indirect experience
> you would probably be well suited to purchase one of the new 8 or 12 core
> per CPU AMD systems which can get you 32 or 48 cores in a single box,
> include a recent nVidia CUDA 3.0 graphics card (a Tesla would be best, but
> an OEM model would probably work alright as well), and then buy at least 16
> GB of RAM, if not more.  From what I understand, you can put in as many
> Tesla cards as your system can handle and VMD will pick them up and use them
> all.
> Are you running the simulation on the computer, or is the simulation running
> on a separate MD cluster?  If it is running somewhere else, you could make
> do with fewer CPUs.  If it is running on the new computer you are going to
> get, you should aim for the 48 core AMD system.  Here in the US you can get
> such a system for about $7k, or "fully loaded" (fast disks, lots of RAM,
> etc. etc.) for under $10k.
> I am very interested to hear what other people recommend.
> Ian

Dr. Axel Kohlmeyer
Institute for Computational Molecular Science
Temple University, Philadelphia PA, USA.