[torqueusers] Does anyone use #shared?

Eva Hocks hocks at sdsc.edu
Fri Apr 13 10:57:40 MDT 2012

We are using cpusets on the vSMP nodes in non-shared mode for compute jobs. Each
job gets its own cpuset/cores assigned.
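For readers unfamiliar with the mechanism: a per-job cpuset of the kind described above can be sketched through the legacy Linux cpuset filesystem interface. This is illustrative only (the mount point, job name, and core range are assumptions, and it requires root); TORQUE's cpuset integration performs the equivalent steps automatically.

```shell
# Illustrative sketch, assuming the legacy cpuset filesystem is mounted
# at /dev/cpuset (paths and values are hypothetical).
mkdir /dev/cpuset/job.1234                # one cpuset per job
echo 0-3 > /dev/cpuset/job.1234/cpus      # dedicate cores 0-3 to this job
echo 0   > /dev/cpuset/job.1234/mems      # memory node(s) the job may use
echo $$  > /dev/cpuset/job.1234/tasks     # move this shell (and children) in
```

Processes placed in the cpuset, and their descendants, are then confined to the listed cores.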

Shared mode would come in handy, though, when using the cores on the IO nodes to
access the large local filesystem from a user job. The Lustre processes are
dedicated to those cores for filesystem access through the IO node interfaces.

We are still testing.


On Fri, 13 Apr 2012 Gareth.Williams at csiro.au wrote:

> I have no objection.
> We are keen on using cpusets and allocating/dedicating cores.  We did run a custom setup for many years with an 'overload' queue, which put jobs in a special shared cpuset and faked the number of CPUs to make the scheduler work compatibly. The facility was not valued by the user base. We've more or less abandoned that idea now that we're using the cpuset integration in TORQUE, which would not easily support such a model - and we don't care much.
> In any case, we never used the #shared feature.
> Gareth
> From: David Beer [mailto:dbeer at adaptivecomputing.com]
> Sent: Tuesday, 10 April 2012 7:35 AM
> To: Torque Users Mailing List
> Subject: [torqueusers] Does anyone use #shared?
> All,
> Does anyone out there use shared execution slots? This feature allows any number of jobs to be assigned to the same execution slot, as long as each job requests only a shared processor. I don't know of any customer that uses these, and I'd like to remove the code supporting this from post-4.0 TORQUE (trunk in Subversion). This would simplify a number of routines and get rid of quite a bit of spaghetti code, so it'd be great if nobody uses it. Does anyone have objections to removing this 'feature'?
> --
> David Beer | Software Engineer
> Adaptive Computing
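For context on the feature under discussion: as I understand it, a shared execution slot is requested by appending #shared to a job's nodes specification, along the lines of the (illustrative, unverified) example below; the node and script names are hypothetical.

```shell
# Hypothetical request for a shared slot on node001 (syntax assumed,
# not verified against current TORQUE documentation).
qsub -l nodes=node001#shared job.sh
```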
