From: Axel Kohlmeyer (akohlmey_at_gmail.com)
Date: Mon Oct 26 2009 - 08:26:03 CDT
On Mon, 2009-10-26 at 11:57 +0200, Nicholas M Glykos wrote:
> I should add as a side note that -at least in my experience- there is no
> single optimal way of tuning the operating system, such that it performs
> best for all possible problem sizes and protocols. If you tune the OS
> using the ApoA1 system, you shouldn't be surprised if it turns out that it
> is not optimal for, say, a peptide simulation. For large centralised
> clusters, of course, you do not have a choice (which usually is a relief ;-)
if software on your machine is _that_ sensitive to two different
molecular systems run with the same code, then there is something
wrong with the OS configuration that needs to be tweaked.
if a machine is configured to run NAMD jobs well, it should run
NAMD jobs well under all circumstances. if anything, people would
worry about different types of applications, e.g. climate modeling,
quantum chemistry, or lattice quantum chromodynamics. in those
cases, one usually has to tweak the amount of memory, the size
and speed of scratch disk and the (aggregate) bandwidth and
latency of communication.
if one operates a machine for multiple purposes, the best option
is usually to first characterize the dominant usage, tune the
machine for that, and then make sure the machine remains usable
and performance does not degrade too badly for other uses.
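one way to check that tuning for the dominant workload has not hurt the
others is to compare NAMD's own per-step benchmark figures across runs.
a minimal sketch (assuming NAMD's usual "Info: Benchmark time: ... s/step"
log lines; the exact log format and any filenames are assumptions, not
taken from this thread):

```python
import re

# NAMD logs typically contain lines like (format assumed here):
# Info: Benchmark time: 8 CPUs 0.0941 s/step 0.544 days/ns 312.4 MB memory
BENCH_RE = re.compile(r"Benchmark time:\s+(\d+)\s+CPUs\s+([\d.]+)\s+s/step")

def seconds_per_step(log_text):
    """Return the last benchmark s/step figure found in a NAMD log, or None."""
    times = [float(m.group(2)) for m in BENCH_RE.finditer(log_text)]
    return times[-1] if times else None

# usage sketch with a fabricated log fragment:
log = "Info: Benchmark time: 8 CPUs 0.0941 s/step 0.544 days/ns 312.4 MB memory\n"
print(seconds_per_step(log))  # -> 0.0941
```

running this over logs from before and after an OS tweak, for both a large
system (e.g. ApoA1) and a small peptide, gives a quick read on whether the
tuning helped one case at the expense of the other.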
> My two cents,
--
Dr. Axel Kohlmeyer  akohlmey_at_gmail.com
Institute for Computational Molecular Science
College of Science and Technology
Temple University, Philadelphia PA, USA.
This archive was generated by hypermail 2.1.6 : Wed Feb 29 2012 - 15:53:24 CST