[torqueusers] How to get assigned cpu count with torque?

Steve Young chemadm at hamilton.edu
Thu May 14 12:35:05 MDT 2009


Hmm... well, it looks like you're close: resources_assigned.nodect is  
showing up. On our server I see these:

     resources_assigned.mem = 0b
     resources_assigned.ncpus = 3
     resources_assigned.nodect = 7

This is listed in the qstat -Qf output for one of our execution queues.  
However, some of the other queues have no memory line, only nodect and  
ncpus, and a third queue had only the ncpus entry, so it doesn't appear  
to be consistent. Hopefully someone here knows how to make this  
information show up.

Here's my qmgr output:

[root@host]# qmgr -c "list server"
Server torque-server-name
	server_state = Scheduling
	scheduling = True
	total_jobs = 8279
	state_count = Transit:0 Queued:8274 Held:0 Waiting:0 Running:5 Exiting:0
	default_queue = default
	log_events = 511
	mail_from = adm
	query_other_jobs = True
	resources_default.ncpus = 1
	resources_default.walltime = 24:00:00
	resources_assigned.mem = 956301312b
	resources_assigned.ncpus = 5
	resources_assigned.nodect = 9
	scheduler_iteration = 60
	node_check_rate = 150
	tcp_timeout = 6
	log_level = 2
	job_nanny = True
	pbs_version = 2.2.1
	net_counter = 4 2 2

[root@host]#
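
By the way, once resources_assigned.ncpus does show up, pulling the  
bare number out for scripting is easy enough. A one-liner against the  
"attribute = value" layout above:

     # print just the assigned cpu count from the server attributes
     qmgr -c "list server" | awk '/resources_assigned.ncpus/ { print $NF }'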

Keep in mind, though, that I'm still on version 2.2.1, so something may  
have changed between our versions. I think it should show you this  
information the same way you'd expect from OpenPBS; the key will be  
figuring out what triggers that information to be displayed. Sorry I'm  
not more help.
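
One more thought: if 2.4.0b1 simply never populates the server  
attribute, you could total it up from the running jobs yourself. Here's  
a rough, untested sketch; it assumes qstat -f emits "Job Id:",  
"job_state = R", and "Resource_List.ncpus = N" lines for each job  
(jobs that request nodes rather than ncpus would need  
Resource_List.nodect, or a walk of exec_host, instead):

     qstat -f | awk '
         /^Job Id:/               { state = ""; ncpus = 0 }  # new job record
         /job_state = /           { state = $NF }            # R, Q, H, ...
         /Resource_List.ncpus = / { ncpus = $NF }
         /^$/ { if (state == "R") total += ncpus; state = "" }  # record ends
         END  { if (state == "R") total += ncpus; print total + 0 }
     '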

-Steve


On May 14, 2009, at 1:55 PM, Edsall, William (WJ) wrote:

> I'm not seeing the assigned ncpus you're seeing, but that appears to  
> be the data I need.
> If I can get resources_assigned.ncpus = 4 to show up, I'll be a  
> happy camper.
>
> # qstat -Qf
> Queue: batch
>     queue_type = Execution
>     total_jobs = 2
>     state_count = Transit:0 Queued:1 Held:0 Waiting:0 Running:1 Exiting:0
>     resources_default.nodes = 1
>     resources_default.walltime = 01:00:00
>     mtime = 1233765351
>     resources_assigned.nodect = 1
>     enabled = True
>     started = True
>
> # qmgr -c "list server"
> Server <deleted>
>         server_state = Scheduling
>         scheduling = True
>         total_jobs = 0
>         state_count = Transit:0 Queued:0 Held:0 Waiting:0 Running:0 Exiting:0
>         acl_hosts = <deleted>
>         managers = <deleted>
>         operators = <deleted>
>         default_queue = batch
>         log_events = 511
>         mail_from = adm
>         resources_assigned.nodect = 0
>         scheduler_iteration = 600
>         node_check_rate = 150
>         tcp_timeout = 6
>         pbs_version = 2.4.0b1
>         next_job_number = 968
>         net_counter = 2 5 5
>
> From: chemadm at hamilton.edu [mailto:chemadm at hamilton.edu]
> Sent: Monday, May 11, 2009 12:14 PM
> To: Edsall, William (WJ)
> Cc: torqueusers at supercluster.org
> Subject: Re: [torqueusers] How to get assigned cpu count with torque?
>
> Hi,
> On our grid I did a "list server" in qmgr and saw the following:
>
> resources_assigned.mem = 419430400b
> resources_assigned.ncpus = 4
> resources_assigned.nodect = 8
>
>
> However, I believe this is the amount of resources assigned to jobs,  
> not the total amount available. I'd expect the same behavior in PBS  
> Pro, but I'm just guessing here. So are you looking for the total  
> amount of resources across the whole cluster, or just the resources  
> assigned to currently running jobs? Hope this helps,
>
> -Steve
>
>
> On May 11, 2009, at 10:57 AM, Edsall, William (WJ) wrote:
>
>> Hello List,
>>  I'm trying to parse out the assigned cpu count. In PBS Pro this is  
>> available under "list server" as resources_assigned.ncpus.
>>
>> Is this variable hiding somewhere in torque?
>>
>> William
>>
>> _______________________________________________
>> torqueusers mailing list
>> torqueusers at supercluster.org
>> http://www.supercluster.org/mailman/listinfo/torqueusers
>
