NAMD jobs in SLURM environment, not entering queueing system

From: Prathit Chatterjee
Date: Mon Jun 28 2021 - 03:54:22 CDT

Dear Experts,
This is regarding GPU job submission in a SLURM environment with NAMD, compiled specifically for the PACE CG force field with CHARMM-GUI.
Kindly see my submit script below:

#!/bin/csh
#SBATCH -n 1

#SBATCH -p g3090 # Using a 3090 node

#SBATCH --gres=gpu:1    # Number of GPUs (per node)

#SBATCH -o output.log

#SBATCH -e output.err

# Generated by CHARMM-GUI (http://www.charmm-gui.org) v3.5


# The following shell script assumes your NAMD executable is namd2 and that

# the NAMD inputs are located in the current directory.


# Only one processor is used below. To parallelize NAMD, use this scheme:

#     charmrun namd2 +p4 input_file.inp > output_file.out

# where the "4" in "+p4" is replaced with the actual number of processors you

# intend to use.

module load compiler/gcc-7.5.0 cuda/11.2  mpi/openmpi-4.0.2-gcc-7



set equi_prefix = step6.%d_equilibration

set prod_prefix = step7.1_production

set prod_step   = step7

# Running equilibration steps

set cnt    = 1

set cntmax = 6

while ( ${cnt} <= ${cntmax} )

    set step = `printf ${equi_prefix} ${cnt}`
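    # e.g. with cnt = 1, `printf step6.%d_equilibration 1` yields step6.1_equilibration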

##    /home2/Prathit/apps/NAMD_PACE_Source/Linux-x86_64-g++/charmrun /home2/Prathit/apps/NAMD_PACE_Source/Linux-x86_64-g++/namd2 ${step}.inp > ${step}.out

    /home2/Prathit/apps/NAMD_PACE_Source/Linux-x86_64-g++/namd2 ${step}.inp > ${step}.out

    @ cnt += 1
end
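For reference, my understanding is that a CUDA-enabled NAMD 2.x binary is normally launched with explicit device binding, roughly as sketched below (the path is the same local build as above, and the +devices/+idlepoll flags are only meaningful if the binary was actually compiled with CUDA support):

    # minimal sketch, assuming a CUDA-enabled namd2 build:
    # +devices 0 binds the run to the GPU allocated by SLURM,
    # +idlepoll keeps polling the GPU while CPU threads are idle
    /home2/Prathit/apps/NAMD_PACE_Source/Linux-x86_64-g++/namd2 +p1 +idlepoll +devices 0 ${step}.inp > ${step}.out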


While the jobs are getting submitted, they do not seem to enter the queueing system properly: the PIDs of the jobs are invisible with the "nvidia-smi" command, but they do show up with the "top" command inside the GPU node.
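(As a sanity check on my side, I believe one can test whether this namd2 binary was built against CUDA at all by inspecting its linked libraries, e.g.:

    ldd /home2/Prathit/apps/NAMD_PACE_Source/Linux-x86_64-g++/namd2 | grep -i cuda

which should list libcudart for a dynamically linked CUDA build and print nothing for a CPU-only one.)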
Any suggestions for rectifying this discrepancy would be greatly appreciated.
Thank you and regards,
Prathit

This archive was generated by hypermail 2.1.6 : Fri Dec 31 2021 - 23:17:11 CST