From: John Stone (
Date: Tue Oct 26 2004 - 12:59:48 CDT

Hi Konrad,

On Tue, Oct 26, 2004 at 06:07:07PM +0200, wrote:
> The question is: why do people use Python within VMD? If most users
> just want to script VMD and happen to prefer Python to Tcl, that
> approach would help. If, on the other hand, most users are Python
> programmers who see VMD as "yet another Python module", then they
> wouldn't be interested: they would most certainly want to use the copy
> of Python they already have fine-tuned for their needs with added
> modules.

Even if I distribute a Python library with VMD, my intention is to allow
people to override it by setting the PYTHONHOME variable to their own
library, to allow them to use their own finely tuned library as they wish.
It'll still require that VMD is compiled against the same python version
that they are using, but this should help novices make use of Python-based
VMD extensions provided by others. Most VMD users just want to use
Python-based extensions and tools; they aren't writing the scripts
themselves, they want to run tools written by others. Your tools,
IED, and various other packages are fine examples of software that people
might wish to run without having to get their hands dirty with setting
up a complex Python configuration just for VMD. If I can get them 95% of
the way there by default, I think that'll help a lot of people. I don't
want to prevent the expert users from using their own library, I just
want VMD to be easy to use for novices, out of the box as it were.
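As a sketch of that override (the path here is hypothetical; the key
constraint is that your installation must match the Python version VMD
was compiled against):

```shell
# Point VMD's embedded interpreter at your own Python library tree
# instead of the one bundled with VMD (hypothetical path):
export PYTHONHOME="$HOME/my-python-2.3"
# ...then start vmd as usual; the embedded interpreter reads PYTHONHOME
# at startup, provided the interpreter versions match.
```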

> I am not going to hide the fact that I am in the second category.

I understand precisely where you're coming from. :-)

[..Python compatibility..]
> No. Compatibility is pretty good over many generations of Python, but
> at the source code level. C modules need to be recompiled with every
> change in the second digit of the version number.

Right. I was trying to draw a comparison with what I'm used to in Tcl.
In Tcl, even the C interfaces are stable across minor and sometimes
even not-so-minor version changes if you're willing to use the stubs
interface for everything. We aren't doing this for VMD yet, but it's
been on my todo list for a while, for whenever I find the time.
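For concreteness, a stubs-enabled extension looks roughly like this.
This is only a sketch using the public Tcl C API (the "hello" command
is made up); it assumes compilation with -DUSE_TCL_STUBS and linking
against the stub library rather than libtcl, which is what makes the
resulting binary work across interpreter versions:

```c
#include <tcl.h>

/* A trivial command implemented against the stubs table. */
static int HelloCmd(ClientData cd, Tcl_Interp *interp,
                    int objc, Tcl_Obj *const objv[]) {
    Tcl_SetObjResult(interp, Tcl_NewStringObj("hello from stubs", -1));
    return TCL_OK;
}

int Hello_Init(Tcl_Interp *interp) {
    /* Tcl_InitStubs fills in the stubs function table; afterwards the
     * extension runs in any interpreter >= the requested version. */
    if (Tcl_InitStubs(interp, "8.4", 0) == NULL)
        return TCL_ERROR;
    Tcl_CreateObjCommand(interp, "hello", HelloCmd, NULL, NULL);
    return TCL_OK;
}
```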

> I see Python as an integral part of a computer system. It is there, and
> everything that needs it builds on the existing interpreter. That way
> all modules can be freely used together, which is what makes Python so
> powerful. Linux and MacOS (since 10.3) follow that approach: they come
with Python preinstalled and nobody other than Python developers and
> testers would ever install another copy.

The problem with this is that if we link VMD directly to the system-provided
Python, we are subject to compatibility problems whenever the OS provider
switches versions. This would force us to distribute binaries for N platforms
where each platform might potentially use a different version of Python,
Numeric, Tk, etc. I've avoided taking this strategy in the past because
it leads to way too many build configurations, and thus it makes supporting
the software that much more difficult. This is the reason I currently
build _all_ of the VMD versions using exactly the same revisions of
Tcl, Tk, FLTK, Python, Numeric, VRPN, etc. We could be building
VMD against the system-provided Tcl, Tk, and FLTK libraries in many
cases, and it'd make the VMD distribution smaller, but the compatibility
issues would be a nightmare to resolve.

What we do now leaves a lot to be desired from a software engineering
point of view; it's just the lowest-common-denominator solution to the
compatibility issues that I've found thus far. If I can find a better
way of solving the problem that's robust and supportable by a 1-man-team,
then I'll definitely go after it :-)

Suggestions are certainly welcome here.

> >What would be ideal would be to implement text interpreters in VMD as
> >plugins that are completely dynamically linked. There are numerous
> Either that, or have interpreters in separate processes that
> communicate over sockets.

That would work fine assuming that the data being pushed around through
the interface is relatively lightweight and the additional overhead per
operation is small. Many of the analysis scripts people write locally
would perform badly with any additional overhead (some take a substantial
amount of time to run already, simply due to the massive size of the
trajectories they are analyzing. Many are numerous gigabytes in size..)
For most things though, such an implementation would probably work great.
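For the lightweight case, the socket-based split can be sketched in a
few lines of Python. Everything here (the eval_server/remote_eval
names, the newline-delimited framing) is illustrative rather than
anything in VMD; it mainly shows the per-operation round trip that
each scripted command would pay:

```python
import socket
import threading

def eval_server(host="127.0.0.1", port=0):
    """Serve expressions over a socket, one newline-delimited
    expression per request, replying with repr() of the result."""
    srv = socket.socket()
    srv.bind((host, port))
    srv.listen(1)

    def serve():
        while True:
            conn, _ = srv.accept()
            with conn:
                f = conn.makefile("rwb")
                for line in f:
                    # Evaluate in the "interpreter" process and ship
                    # the printed result back to the client.
                    f.write((repr(eval(line.decode())) + "\n").encode())
                    f.flush()

    threading.Thread(target=serve, daemon=True).start()
    return srv.getsockname()[1]   # actual port chosen by the OS

def remote_eval(port, expr):
    """Client side: one connection, one expression, one reply."""
    with socket.create_connection(("127.0.0.1", port)) as c:
        f = c.makefile("rwb")
        f.write((expr + "\n").encode())
        f.flush()
        return f.readline().decode().strip()
```

Each remote_eval() call pays for a connection plus a round trip, which
is negligible for occasional commands but adds up fast in a tight
per-frame analysis loop over a multi-gigabyte trajectory.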


NIH Resource for Macromolecular Modeling and Bioinformatics
Beckman Institute for Advanced Science and Technology
University of Illinois, 405 N. Mathews Ave, Urbana, IL 61801
Email:                 Phone: 217-244-3349              
  WWW:      Fax: 217-244-6078