[torqueusers] Torque Monthly Usage Accounting
Ole Holm Nielsen
Ole.H.Nielsen at fysik.dtu.dk
Fri Jan 6 08:43:13 MST 2006
Thanks for your comments. You're right that multiplying cput
by nodect is incorrect. The point is that with PBSPro we didn't have
the TM interface, so parallel MPI jobs would only count the master
node's CPU time (PBSPro), whereas with Torque+TM you get the correct
total CPU time on all nodes *provided* that your application actually
uses the TM interface! Your modification is only valid under this
assumption, which will not always be satisfied.
That's why I've never bothered with accounting for CPU time, but
only with wallclock time! IMHO, it's fair to charge users for the
walltime during which they reserve a certain number of nodes, rather
than for their CPU time, which may be the result of terribly
inefficient use of the resources.
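The walltime-based charging described above can be sketched as follows. This is a minimal illustration, not the actual pbsacct code: it assumes Torque-style "E" (job end) accounting records of the form "timestamp;E;jobid;key=value key=value ...", and the sample record and job id are made up.

```python
# Sketch: charge a job by walltime * nodect (nodes reserved), not by cput.
# Assumes a Torque accounting "E" record; field names follow Torque
# conventions (resources_used.walltime, Resource_List.nodect), but the
# parsing here is illustrative only.

def hms_to_seconds(hms):
    """Convert a 'HH:MM:SS' time string to seconds."""
    h, m, s = (int(x) for x in hms.split(":"))
    return h * 3600 + m * 60 + s

def node_seconds(record):
    """Return walltime (seconds) multiplied by the number of nodes reserved."""
    # The fourth ';'-separated field holds space-separated key=value pairs.
    fields = dict(kv.split("=", 1) for kv in record.split(";")[3].split())
    walltime = hms_to_seconds(fields["resources_used.walltime"])
    nodect = int(fields["Resource_List.nodect"])
    return walltime * nodect

# Hypothetical record: 4 nodes reserved for 2 hours of walltime.
record = ("01/06/2006 08:43:13;E;123.server;"
          "user=alice Resource_List.nodect=4 "
          "resources_used.walltime=02:00:00 Exit_status=0")
print(node_seconds(record))  # 4 nodes * 7200 s = 28800 node-seconds
```

The key point is that the charge depends only on what the user reserved, so an inefficient job costs the same as an efficient one occupying the same nodes for the same time.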
Your second point about Exit_status is a good one. Perhaps you
could propose a nice, compact and useful output format for
pbsacct that includes failed jobs? If we agree on a good
format, I could release a new version of the pbsacct tools.
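One possible shape for such a format could be a per-user summary with a failed-job count based on Exit_status. The sketch below is purely hypothetical (it is not the pbsacct output format, and the sample jobs are invented); it only illustrates tallying failures alongside walltime.

```python
# Hypothetical per-user summary including failed jobs (Exit_status != 0).
# The job tuples (user, exit_status, walltime_seconds) stand in for
# records parsed from the accounting log; the data here is made up.
from collections import defaultdict

jobs = [("alice", 0, 7200), ("alice", 1, 60), ("bob", 0, 3600)]

stats = defaultdict(lambda: {"jobs": 0, "failed": 0, "wallsecs": 0})
for user, exit_status, wallsecs in jobs:
    stats[user]["jobs"] += 1
    stats[user]["wallsecs"] += wallsecs
    if exit_status != 0:
        stats[user]["failed"] += 1

print("%-10s %6s %6s %10s" % ("User", "Jobs", "Fail", "Wallsecs"))
for user in sorted(stats):
    s = stats[user]
    print("%-10s %6d %6d %10d" % (user, s["jobs"], s["failed"], s["wallsecs"]))
```

A compact fixed-width table like this would keep the output easy to scan while still flagging which users have many failing jobs.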
etienne gondet wrote:
> I just gave pbsacct a try. It's exactly the easy tool I was looking for.
> I tried to add the total accumulated CPU time, and I believe there is a
> mistake in the CPU computation.
> In pbsjobs, cput is computed from the value of resources_used.cput,
> which is the total CPU time over all nodes and ppn? Can anybody
> confirm this point.
> #ETG modif for SBU = walltime*NCPUS
> # nodect = total_ncpus
> nodect = total_ncpus
> #ETG cpunodes[user] += nodect*cput
> cpunodes[user] += cput
> #ETG cpunodesecs += nodect*cput
> cpunodesecs += cput
Ole Holm Nielsen
Department of Physics, Technical University of Denmark