[torquedev] Torque ncpus command

Lenox, Billy AMRDEC/Sentient Corp. billy.lenox at us.army.mil
Fri Oct 7 05:54:51 MDT 2011


I have tried this:

> If you want 28 processors anywhere you can get them try using -l procs=28.

It still only runs on ONE NODE (seed001)
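
For reference, my full submission script looks roughly like this (the job name is a placeholder and ./prog stands in for my MPI program):

```shell
#!/bin/bash
#PBS -N mpi_test        # placeholder job name
#PBS -l procs=28        # ask for 28 processors anywhere, per the suggestion above
#PBS -j oe

cd "$PBS_O_WORKDIR"
mpirun -np 28 -machinefile "$PBS_NODEFILE" ./prog
```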

I have also done this:

> mpirun -np 28 -machinefile ${PBS_NODEFILE} ./prog

It still only runs on ONE NODE (seed001)
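
One generic diagnostic (not Torque-specific) is to count how many slots and how many distinct hosts are actually in the nodefile the scheduler handed the job; if every line is the same host, mpirun will naturally stay on one node. Inside a real job the file would be "$PBS_NODEFILE"; here a hard-coded sample stands in for it so the snippet runs anywhere:

```shell
# Sample stand-in for the contents of $PBS_NODEFILE: one line per
# allocated slot. Replace sample_nodefile with "$PBS_NODEFILE" in a job.
cat > sample_nodefile <<'EOF'
seed001
seed001
seed001
seed001
EOF

# grep -c '' counts lines without the whitespace padding wc can add.
slots=$(grep -c '' sample_nodefile)
hosts=$(sort -u sample_nodefile | grep -c '')
echo "slots=$slots hosts=$hosts"
```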

This is what is in my nodes file (/var/spool/torque/server_priv/nodes):

seed001 np=8 batch
seed002 np=8 batch
seed003 np=4 batch
seed004 np=4 batch
seed005 np=4 batch

pbs_mom is running on all nodes.
pbs_server is running on the server.

When I run pbsnodes -a, all nodes show as free and ready for use.

I know I can't specify nodes=X:ppn=Y in the script.

I know if I create a file called hosts

seed001
seed002
seed003
seed004
seed005

And run the command like this in my script:

> mpirun -np 28 -machinefile /Users/name/hosts ./prog

This works and runs on all nodes, unlike the ${PBS_NODEFILE} version.

Still, this is not the correct way for PBS to work: I would have to keep
adding nodes to that file by hand whenever I want more procs. I would like
to get this working correctly through PBS itself.
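
I could at least generate the hosts file from pbsnodes instead of maintaining it by hand, something like this sketch. (The pbsnodes output below is a hard-coded stand-in so the snippet runs anywhere; in practice it would come from pbsnodes -a, whose usual layout puts each host name at column 0 with its attributes indented below it.)

```shell
# Stand-in for: pbsnodes_output=$(pbsnodes -a)
pbsnodes_output='seed001
     state = free
     np = 8
seed002
     state = free
     np = 8'

# Host names are the only lines that start in column 0.
printf '%s\n' "$pbsnodes_output" | awk '/^[^ \t]/ {print $1}' > hosts
cat hosts
```

But that still sidesteps ${PBS_NODEFILE}, which is what I would like to have working.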

Thanks

Billy

> From: Ken Nielson <knielson at adaptivecomputing.com>
> Reply-To: Torque Developers mailing list <torquedev at supercluster.org>
> Date: Thu, 06 Oct 2011 18:04:57 -0600 (MDT)
> To: Torque Developers mailing list <torquedev at supercluster.org>
> Subject: Re: [torquedev] Torque ncpus command
> 
> ----- Original Message -----
>> From: "Christopher Samuel" <samuel at unimelb.edu.au>
>> To: torquedev at supercluster.org
>> Sent: Thursday, October 6, 2011 5:23:20 PM
>> Subject: Re: [torquedev] Torque ncpus command
>> 
>> On 06/10/11 06:01, Lenox, Billy AMRDEC/Sentient Corp. wrote:
>> 
>>> When I submit the script it only runs on one node SEED001
>> 
>> It shouldn't run at all - ncpus=28 means you're asking
>> for 1 node with 28 cores, which you don't have.
>> 
>> --
>>     Christopher Samuel - Senior Systems Administrator
>>  VLSCI - Victorian Life Sciences Computation Initiative
>>  Email: samuel at unimelb.edu.au Phone: +61 (0)3 903 55545
>>          http://www.vlsci.unimelb.edu.au/
>> 
> 
> As far as TORQUE itself is concerned, it does not know what to do with
> ncpus; that is up to the scheduler to decide. With Moab, ncpus means give me X
> processors on a single node, as Chris has said.
> 
> If you want 28 processors anywhere you can get them try using -l procs=28.
> 
> Ken Nielson
> Adaptive Computing
> _______________________________________________
> torquedev mailing list
> torquedev at supercluster.org
> http://www.supercluster.org/mailman/listinfo/torquedev
