From: Ajasja Ljubetič (ajasja.ljubetic_at_gmail.com)
Date: Sun Nov 03 2013 - 06:54:26 CST
On 3 November 2013 09:38, James Starlight <jmsstarlight_at_gmail.com> wrote:
> Update:
>
> Using namd2 +idlepoll +p4 +devices 0,1 ./restart.conf
> I've launched the simulation on both GPUs (according to thermal monitoring
> in nvidia-settings), but only half of the CPUs were fully loaded.
>
Yes, naturally. Look up what the +p4 switch does. (Also read up on
hyperthreading.)
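(A sketch of why, for illustration: the +pN switch sets the number of worker
threads Charm++/NAMD starts, so +p4 can keep at most 4 of your 12 logical
CPUs busy. To use them all you would run something like

namd2 +idlepoll +p12 +devices 0,1 ./restart.conf

though whether the hyperthreaded "cores" actually help is worth benchmarking.)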
> By the way, how could I monitor the real GPU load as well as NAMD
> performance (in ns/day or GFLOPS)?
>
Try looking in the NAMD log file for the ns/day speed.
And out of interest, do report the ns/day of
namd2 +idlepoll +p6 +devices 0 ./restart.conf
vs
namd2 +idlepoll +p6 +devices 0,1 ./restart.conf
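For the GPU load itself, nvidia-smi (which ships with the NVIDIA driver)
reports per-GPU utilization. A minimal sketch, assuming you redirect NAMD's
stdout to restart.log:

watch -n 1 nvidia-smi                 # utilization/memory per GPU, refreshed every second
grep "Benchmark time" restart.log     # NAMD's performance estimates (days/ns) early in the run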
Regards,
Ajasja
>
>
> James
>
>
> 2013/11/1 James Starlight <jmsstarlight_at_gmail.com>
>
>> OK, I'll try some simulations with this configuration. The main issue I
>> may face is a possible conflict between that older CUDA library (used by
>> VMD) and the newer development driver (version 5.5) that comes with the
>> installed cuda-5.5 toolkit.
>>
>> By the way, how could I use both GPUs simultaneously? Just use the
>> command below?
>>
>> namd2 +idlepoll +p4 +devices 0,1 ./restart.conf
>>
>> where 0 and 1 are the IDs of my GPUs? Are there additional options for
>> synchronizing the simulation across the two GPUs?
>>
>> James
>>
>>
>> 2013/10/31 Aron Broom <broomsday_at_gmail.com>
>>
>>> Don't replace anything; just point to the version of the library in your
>>> NAMD directory, as you did. It should work fine.
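>>> For example, a minimal sketch (the path below is illustrative; point it
>>> at wherever you unpacked the NAMD binaries, which bundle libcudart.so.4):
>>>
>>> export LD_LIBRARY_PATH=/path/to/NAMD-multicore-CUDA:$LD_LIBRARY_PATH
>>> namd2 +idlepoll ./restart.conf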
>>>
>>>
>>> On Thu, Oct 31, 2013 at 1:24 PM, James Starlight <jmsstarlight_at_gmail.com
>>> > wrote:
>>>
>>>> Dear Namd users,
>>>>
>>>> I've built my new workstation, consisting of two Titans and a 6-core CPU
>>>> (Linux recognizes it as a 12-core processor, but it actually has 6
>>>> physical cores with hyperthreading).
>>>>
>>>> Then I installed the latest NVIDIA CUDA 5.5 software (driver, toolkit,
>>>> and samples) and defined all the paths in my bash and library config files.
>>>>
>>>> When I tried to launch NAMD, I got an error that libcudart.so.4 was not
>>>> found (indeed, in cuda/lib and lib64 only libcudart.so.5 files are
>>>> present).
>>>>
>>>> I found libcudart.so.4 only in VMD's folder, and when I added it to the
>>>> path in bash, NAMD worked.
>>>>
>>>> Should I make some modification to replace libcudart.so.5 with
>>>> libcudart.so.4? My NAMD output is:
>>>>
>>>> CharmLB> Load balancer assumes all CPUs are same.
>>>> Charm++> Running on 1 unique compute nodes (12-way SMP).
>>>> Charm++> cpu topology info is gathered in 0.001 seconds.
>>>> Info: NAMD CVS-2013-10-31 for Linux-x86_64-multicore-CUDA
>>>> Info:
>>>> Info: Please visit http://www.ks.uiuc.edu/Research/namd/
>>>> Info: for updates, documentation, and support information.
>>>> Info:
>>>> Info: Please cite Phillips et al., J. Comp. Chem. 26:1781-1802 (2005)
>>>> Info: in all publications reporting results obtained with NAMD.
>>>> Info:
>>>> Info: Based on Charm++/Converse 60500 for multicore-linux64-iccstatic
>>>> Info: Built Thu Oct 31 02:26:47 CDT 2013 by jim on lisboa.ks.uiuc.edu
>>>> Info: 1 NAMD CVS-2013-10-31 Linux-x86_64-multicore-CUDA 1 drunk_telecaster own
>>>> Info: Running on 1 processors, 1 nodes, 1 physical nodes.
>>>> Info: CPU topology information available.
>>>> Info: Charm++/Converse parallel runtime startup completed at 0.00467801 s
>>>> Did not find +devices i,j,k,... argument, using all
>>>> Pe 0 physical rank 0 binding to CUDA device 0 on drunk_telecaster:
>>>> 'GeForce GTX TITAN' Mem: 6143MB Rev: 3.5
>>>> FATAL ERROR: No simulation config file specified on command line.
>>>>
>>>>
>>>> Does that mean both of my GPUs are ready to use?
>>>>
>>>> James
>>>>
>>>>
>>>> 2013/10/30 Norman Geist <norman.geist_at_uni-greifswald.de>
>>>>
>>>>> 1 - No.
>>>>>
>>>>> 2 - Doesn't matter here in most cases.
>>>>>
>>>>>
>>>>>
>>>>> Norman Geist.
>>>>>
>>>>>
>>>>>
>>>>> *From:* owner-namd-l_at_ks.uiuc.edu [mailto:owner-namd-l_at_ks.uiuc.edu] *On
>>>>> Behalf Of *James Starlight
>>>>>
>>>>> *Sent:* Wednesday, 30 October 2013 14:13
>>>>> *To:* Namd Mailing List
>>>>>
>>>>> *Subject:* Re: namd-l: Two GPU-based workstation
>>>>>
>>>>>
>>>>>
>>>>> Some extra questions-
>>>>>
>>>>> 1 - Do I need special drivers to optimize dual-GPU use in Debian?
>>>>>
>>>>> 2 - Should I compile NAMD from source for optimal performance?
>>>>> Previously I've used the NAMD binaries (using 1 GPU + 4 cores of an i5).
>>>>>
>>>>> James
>>>>>
>>>>>
>>>>>
>>>>> 2013/10/29 Norman Geist <norman.geist_at_uni-greifswald.de>
>>>>>
>>>>> >Just remember, NAMD is very memory bandwidth hungry.
>>>>>
>>>>>
>>>>>
>>>>> Guess you mean PCIE bandwidth hungry?
>>>>>
>>>>>
>>>>>
>>>>> Norman Geist.
>>>>>
>>>>>
>>>>>
>>>>> *From:* owner-namd-l_at_ks.uiuc.edu [mailto:owner-namd-l_at_ks.uiuc.edu] *On
>>>>> Behalf Of *Ajasja Ljubetic
>>>>> *Sent:* Tuesday, 29 October 2013 14:34
>>>>> *To:* James Starlight
>>>>> *Cc:* Norman Geist; Namd Mailing List
>>>>>
>>>>> *Subject:* Re: namd-l: Two GPU-based workstation
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> As I've said, I have a typical desktop with 6 PCIe slots, 1 Core i7
>>>>> (4 cores), 2 GPUs, and 4 RAM slots (each 4 GB or 8 GB, I don't remember
>>>>> clearly now :) ). I'd like to run simulations of water-soluble as well
>>>>> as membrane proteins (50k and 80k atoms respectively) using NAMD with
>>>>> explicit solvent, using both GPUs and the CPU simultaneously for one run.
>>>>>
>>>>> What other modifications of my desktop, or of the simulation
>>>>> parameters, should I take into account? Are any other specific drivers
>>>>> needed for a typical Linux-based multi-GPU workstation?
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> Make sure to pick a motherboard with two 16x-speed PCIe ports
>>>>> (probably PCIe 3.0?). Personally, I don't think you will see the
>>>>> scaling you desire, i.e., the GPUs will be underutilized. But then
>>>>> again, YMMV. Just remember, NAMD is very memory bandwidth hungry.
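>>>>> (To check what link width the cards actually negotiated, a rough
>>>>> sketch under Linux; exact output varies by device:
>>>>> sudo lspci -vv | grep -E 'VGA|LnkSta:'
>>>>> which prints the negotiated width, e.g. x16, next to each device.)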
>>>>>
>>>>>
>>>>>
>>>>> Best regards,
>>>>>
>>>>> Ajasja
>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>
>>>
>>> --
>>> Aron Broom M.Sc
>>> PhD Student
>>> Department of Chemistry
>>> University of Waterloo
>>>
>>
>>
>