From: Johny Telecaster (johnytelecaster_at_gmail.com)
Date: Mon Mar 17 2014 - 02:38:56 CDT
Dear NAMD users!
I have a similar workstation, equipped with a Core i7 (6 cores), 32 GB
RAM, and two Titans.
On typical systems (50-70k atoms) there is no difference in performance
between running on one GPU and on two.
Can I use both Titans (assigning 3 cores to each) to run two different
simulations at the same time, using the two NAMD flag sets below, one per system?
+p3 +devices 0
+p3 +devices 1
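That is, launching both runs concurrently from a shell, something like
this (config and log file names here are just placeholders):

namd2 +p3 +devices 0 sim1.conf > sim1.log &
namd2 +p3 +devices 1 sim2.conf > sim2.log &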
Is it possible to increase performance in the dual-GPU regime on this
workstation (e.g. by adding RAM, or by replacing the CPU with another
for the same socket)?
Johny
2014-01-14 18:05 GMT+04:00, Michael Purdy <mdp3w_at_virginia.edu>:
> James, the Asus Z9PE-D8 WS MB is working well for me with two 8-core
> Xeons (E5-2670 2.60 GHz, sandy bridge), one GTX-690 and one Titan. I'm
> adding another Titan to this system in the near future.
>
> Michael Purdy
>
> On 01/14/2014 05:45 AM, James Starlight wrote:
>> Dear NAMD users!
>>
>> This time I'd like to upgrade my GPU-based workstation equipped with 2
>> Titans. I mentioned previously that there was no increase in
>> performance when using both Titans simultaneously with the single i7
>> processor and 32 GB of RAM. Consequently, I'd like to replace the
>> motherboard and processor in this machine, and I'm thinking about a
>> server-grade motherboard with two Xeon CPUs. Could someone provide me
>> with an example of such a configuration (with exact motherboard and
>> CPU models) that would give the best performance for modelling on TWO
>> GeForce Titans? (I'm modelling water-soluble and membrane proteins in
>> explicit solvent; overall systems are ~100k atoms.) Any additional
>> advice is also welcome :)
>>
>>
>> James
>>
>>
>> 2013/11/11 James Starlight <jmsstarlight_at_gmail.com>
>>
>> Norman,
>>
>>
>> 1) I don't know why, but I get this error about the PBC vectors when
>> I change only the fullElectFrequency value (without changing the
>> barostat options, etc.)
>>
>> 2) Some benchmarks for a membrane receptor system (60k atoms):
>>
>> using +p6 +ppn3 +devices 0,1
>>
>> Info: Benchmark time: 3 CPUs 0.0472163 s/step 0.273242 days/ns
>> 290.363 MB memory
>>
>> using +ppn6 +devices 0,1
>>
>> Info: Benchmark time: 6 CPUs 0.0288685 s/step 0.167063 days/ns
>> 325.297 MB memory
>>
>> However, in the second case the performance is still poor (~1 ns per
>> 4 hours). For comparison, GROMACS with the same setup gives me twice
>> the performance (and there it may also be possible to gain from using
>> both GPUs simultaneously).
>>
>>
>> Does anybody else have experience with the GeForce Titans?
>>
>> James
>>
>>
>> 2013/11/11 Norman Geist <norman.geist_at_uni-greifswald.de>
>>
>> Read something about "Intel HT (Hyper-Threading)".
>>
>> Yes.
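>> On Linux you can check the real topology, e.g. with:
>>
>> lscpu | egrep 'Socket|Core|Thread'
>>
>> On a 6-core CPU with HT it should report something like:
>>
>> Thread(s) per core: 2
>> Core(s) per socket: 6
>> Socket(s): 1
>>
>> 6 cores x 2 hardware threads = 12 logical CPUs, which is what Debian
>> shows.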
>>
>> Norman Geist.
>>
>> From: owner-namd-l_at_ks.uiuc.edu [mailto:owner-namd-l_at_ks.uiuc.edu] On behalf of James Starlight
>> Sent: Monday, 11 November 2013 09:49
>> To: Norman Geist; Namd Mailing List
>> Subject: Re: namd-l: Two GPU-based workstation
>>
>> 2) About cores:
>>
>> So if my physical core count is 6 (I don't really know why Debian
>> recognizes 12 cores),
>>
>> I should use +p6 +ppn3 +devices 0,1 to assign 3 cores to each GPU,
>> shouldn't I?
>>
>> 2013/11/11 James Starlight <jmsstarlight_at_gmail.com>
>>
>> Norman,
>>
>> using
>>
>> pmegridspacing 1;
>> fullElectFrequency 2;
>>
>> instead of
>>
>> PMEGridSizeX $fftx; # should be close to the cell size
>> PMEGridSizeY $ffty; # corresponds to the charmm input fftx/y/z
>> PMEGridSizeZ $fftz;
>> pmegridspacing 1;
>>
>> crashed my simulation at the very beginning:
>>
>> namd2 +idlepoll +p12 +ppn6 +devices 0,1 ./aMD.conf >>
>> b2ar_p0gDiheBoostlog_20000
>> ------------- Processor 0 Exiting: Called CmiAbort ------------
>> Reason: FATAL ERROR: Periodic cell has become too small for
>> original patch grid!
>> Possible solutions are to restart from a recent checkpoint,
>> increase margin, or disable useFlexibleCell for liquid
>> simulation.
>>
>>
>> (This time I started from the checkpoint of the previous run, which
>> was performed with explicitly defined PME grid sizes in X/Y/Z.) Does
>> this mean that I should restart the simulation from the beginning of
>> the equilibration phase with fullElectFrequency > 1, or are there
>> alternative solutions?
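>> (The error message itself suggests increasing the margin, so one
>> alternative might be a config line like the following, with the value
>> purely illustrative:
>>
>> margin 2.0 ;# extra patch padding in Angstroms
>> )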
>>
>> James
>>
>> 2013/11/11 Norman Geist <norman.geist_at_uni-greifswald.de>
>>
>> Why not simply set "PMEGridSpacing 1"?
>>
>> I usually use numcpus/numgpus cores for each GPU. And really, believe
>> me, you only have 6 cores, not 12. Also, running 2 simulations on
>> only one CPU socket is inefficient.
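>> A minimal sketch of the relevant config lines (the spacing is in
>> Angstroms; NAMD then picks suitable grid sizes itself, so the
>> explicit PMEGridSizeX/Y/Z lines can simply be dropped):
>>
>> PME yes
>> PMEGridSpacing 1.0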
>>
>> Norman Geist.
>>
>> From: owner-namd-l_at_ks.uiuc.edu [mailto:owner-namd-l_at_ks.uiuc.edu] On behalf of James Starlight
>> Sent: Saturday, 9 November 2013 15:53
>> To: Namd Mailing List
>> Subject: Re: namd-l: Two GPU-based workstation
>>
>> By the way, increasing fullElectFrequency above 1 ended the
>> simulation with errors about improperly set X/Y/Z PME grid
>> dimensions. (With fullElectFrequency 1 I use 80 80 120 when
>> simulating a membrane protein and get no errors.) How should I change
>> the PME options?
>>
>> My other question is about the optimal balance of the number of CPU
>> cores per GPU. Is there some empirical relationship showing how many
>> CPU cores are needed for each GPU?
>>
>> Given that I obtained the best performance using
>> namd2 +idlepoll +p12 +devices 0 ./aMD.conf
>>
>> I'd like to split the CPU cores between the two available GPUs for
>> 2 parallel simulations.
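>> A sketch of what I have in mind, using the Charm++ affinity flags
>> +setcpuaffinity and +pemap (core IDs and config names are just
>> illustrative):
>>
>> namd2 +idlepoll +p6 +setcpuaffinity +pemap 0-5 +devices 0 run1.conf
>> namd2 +idlepoll +p6 +setcpuaffinity +pemap 6-11 +devices 1 run2.conf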
>>
>> James
>>
>> 2013/11/8 James Starlight <jmsstarlight_at_gmail.com>
>>
>> Could fullElectFrequency 4 increase performance specifically in the
>> dual-GPU regime?
>>
>> If I run two simulations, will it be enough to give each GPU 6 cores?
>> (I suspect I did not get good performance in the 2-GPU regime
>> precisely because of the small number of cores per GPU.)
>>
>> James
>>
>> 2013/11/7 Ajasja Ljubetič <ajasja.ljubetic_at_gmail.com>
>>
>> On 7 November 2013 06:32, James Starlight
>> <jmsstarlight_at_gmail.com> wrote:
>>
>> I've come to the conclusion that using 2 GPUs simultaneously gives
>> me the same performance as 1 GPU.
>>
>> Yes, this is expected: for such small systems there is too little
>> work at each step to scale efficiently. You can, however, run one
>> (or two or three) independent simulations on each GPU.
>>
>> Regards,
>>
>> Ajasja
>>
>>
>>
>
>