Re: Parallel Simulation Question

From: Gerard Rowe (GerardR_at_usca.edu)
Date: Tue Mar 19 2019 - 13:39:52 CDT

For Orca, the working directory is the same as the scratch directory. Stand-alone Orca submission scripts typically copy all the necessary files to a scratch folder before running the program. In the NAMD implementation, I believe you need the input file, the point-charge file, and, if picking up from a previous job, the GBW file. If the calculation is spread across compute nodes, every node running Orca in MPI mode needs access to that directory, or to a local temp directory at the same path.
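As a rough sketch, the staging step of a stand-alone Orca submission script might look like the following. All paths here are assumptions, and the qmmm_0.* file names follow NAMD's usual QM/MM naming convention; check the base directory of your own job for the actual names.

```shell
#!/bin/sh
# Hypothetical staging step for a stand-alone Orca run (paths are assumptions).
# NAMD's QM/MM interface typically writes qmmm_0.input and qmmm_0.input.pntchrg
# into its base directory; a restart also needs the qmmm_0.gbw wavefunction file.
SCRATCH="${SCRATCH:-/tmp/orca_scratch_$$}"   # hypothetical scratch location
mkdir -p "$SCRATCH"
for f in qmmm_0.input qmmm_0.input.pntchrg qmmm_0.gbw; do
    # Stage only the files that exist (the GBW file is absent on a fresh start).
    [ -f "$f" ] && cp "$f" "$SCRATCH"/
done
cd "$SCRATCH" || exit 1
# "$ORCA_PATH"/orca qmmm_0.input > qmmm_0.out   # run Orca inside the scratch dir
```

On a multi-node MPI run, the same directory (or an identically named local one) must be reachable from every node, which is why clusters often use a shared filesystem or a node-local path like /dev/shm that the script recreates everywhere.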

-Gerard

________________________________
From: owner-namd-l_at_ks.uiuc.edu <owner-namd-l_at_ks.uiuc.edu> on behalf of McGuire, Kelly <mcg05004_at_byui.edu>
Sent: Monday, March 18, 2019 11:57 PM
To: McGuire, Kelly; namd-l_at_ks.uiuc.edu
Subject: namd-l: Re: Parallel Simulation Question

Is the SCRATCH directory necessary to get ORCA to work in parallel? If so, what files need to be in the SCRATCH directory?

Kelly L. McGuire

PhD Candidate

Biophysics

Department of Physiology and Developmental Biology

Brigham Young University

LSB 3050

Provo, UT 84602

________________________________
From: owner-namd-l_at_ks.uiuc.edu <owner-namd-l_at_ks.uiuc.edu> on behalf of McGuire, Kelly <mcg05004_at_byui.edu>
Sent: Saturday, March 16, 2019 10:30 PM
To: namd-l
Subject: namd-l: Parallel Simulation Question

I was able to get ORCA set up for running in parallel with my QMMM simulation. I specified 16 processors for ORCA to use, but it seems to be taking longer than with one processor. One processor was able to finish one step of my 100-step QMMM minimization in 35 minutes; with 16 processors in parallel, the job has been running for 1 hour and is still on the first of the 100 minimization steps. The ORCA output seems to be stopped at a certain spot and nothing is progressing. Here is what the ORCA output shows:

           ************************************************************
           * Program running with 16 parallel MPI-processes *
           * working on a common directory *
           ************************************************************
 One Electron integrals ... done
 Pre-screening matrix ... done

     It hasn't progressed from this point so far (1 hour runtime). I don't see any errors or warnings.
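(For context, the parallel request in a NAMD QM/MM run is usually made through the qmConfigLine keyword, which passes lines verbatim into the Orca input. A minimal sketch, with the method/basis line as a placeholder and the paths as assumptions:)

```tcl
# Minimal sketch of the QM/MM keywords in a NAMD config file requesting
# 16 Orca MPI processes. Paths and the method line are placeholders.
qmForces        on
qmSoftware      "orca"
qmExecPath      "/path/to/orca"          ;# full path to the Orca binary
qmBaseDir       "/dev/shm/NAMD_qmmm"     ;# fast local dir used as Orca's workdir
qmConfigLine    "! PBE0 def2-SVP EnGrad TightSCF"
qmConfigLine    "%%PAL NPROCS 16 END"    ;# doubled %% so NAMD emits a literal %
```

Note that Orca generally requires the full installation path (not just the binary name) when launched in MPI-parallel mode.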

Kelly L. McGuire

PhD Candidate

Biophysics

Department of Physiology and Developmental Biology

Brigham Young University

LSB 3050

Provo, UT 84602

This archive was generated by hypermail 2.1.6 : Wed Dec 04 2019 - 23:20:38 CST