[Mauiusers] 2 feasible tasks found for job 104:0 in partition DEFAULT (10 Needed) inadequate feasible tasks found for job 104:0 in partition DEFAULT (2 < 10)

Daniel Boone daniel.boone at kahosl.be
Thu May 24 01:53:51 MDT 2007


Output of qmgr "print server":
----------------
create queue batch
set queue batch queue_type = Execution
set queue batch resources_default.mem = 2000mb
set queue batch resources_default.nodes = 1
set queue batch resources_default.pvmem = 16000mb
set queue batch resources_default.walltime = 06:00:00
set queue batch enabled = True
set queue batch started = True
#
# Set server attributes.
#
set server scheduling = True
set server managers = abaqus at em-research00
set server operators = abaqus at em-research00
set server default_queue = batch
set server log_events = 511
set server mail_from = adm
set server scheduler_iteration = 600
set server node_check_rate = 150
set server tcp_timeout = 6
set server pbs_version = 2.1.8
----------------------
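
(For reference, a dump like the one above can be reproduced with the standard
TORQUE qmgr command; the exact output of course depends on the server
configuration.)

  qmgr -c "print server"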

-------------------
pbs-script:
-------------------

#!/bin/bash
#PBS -l nodes=10:ppn=2
#PBS -l walltime=05:00:00
#PBS -l mem=1900mb
#PBS -l vmem=15000mb
#PBS -j oe
#PBS -M daniel.boone at kahosl.be
#PBS -m bae
# Recreate the submission directory on the execution host and fetch the input file
mkdir -p $PBS_O_WORKDIR
string="$PBS_O_WORKDIR/plus2gb.inp"

scp 10.1.0.52:$string $PBS_O_WORKDIR

cd $PBS_O_WORKDIR
#module load abaqus
#
/Apps/abaqus/Commands/abaqus job=plus2gb queue=abaqus10cpu input=Standard_plus2gbyte.inp cpus=10
---------------------------
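
(To see exactly which resources the server has recorded for the job, the full
job status can be queried; job id 104 is assumed here because it is the one
that shows up in the maui.log below.)

  qstat -f 104 | grep Resource_List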

abaqus environment file:
--------------------------
import os
os.environ['LAMRSH'] = 'ssh'

max_cpus=10

mp_host_list=[['em-research00',2],['10.1.0.97',2],['node1',2],['node2',2],['node3',2]]


run_mode = BATCH
scratch  = "/home/abaqus"

queue_name=["cpu","abaqus10cpu"]
queue_cmd="qsub -r n -q batch -S /bin/bash -V -l nodes=1:ppn=1 %S"
cpu="qsub -r n -q batch -S /bin/bash -V -l nodes=1:ppn=2 %S"
abaqus10cpu="qsub -r n -q batch -S /bin/bash -V -l nodes=5:ppn=2 %S"

pre_memory = "5000 mb"
standard_memory = "15000 mb"

---------------------------
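
(As far as I understand the Abaqus queue mechanism, the %S in the queue
definitions above is replaced by the command file that Abaqus generates, so
the abaqus10cpu queue ends up doing a qsub with nodes=5:ppn=2. To check
whether such a request is accepted outside of Abaqus, a dummy job can be
submitted by hand; the sleep command is only a placeholder.)

  echo "sleep 60" | qsub -q batch -l nodes=5:ppn=2
  showq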




I have 5 nodes, each with 2 CPUs.
Before I submitted the job, I checked with pbsnodes -a and they all showed
the state free.
Memory should be fine: each host has 2GB of physical memory and 17GB of
swap, so I think that should be sufficient.
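
(Concretely, the node states and Maui's own view of job 104 can be checked
with the commands below; pbsnodes is the TORQUE command already mentioned,
checkjob and diagnose are the Maui client commands.)

  pbsnodes -a
  checkjob 104
  diagnose -n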


Lennart Karlsson wrote:
> The two log lines are only informational, i.e. no error is implied;
> they tell you that the job needs ten tasks but at the moment only two
> were found.
>  
> Your job was defined to need ten tasks; my best guess is that this means
> ten nodes in your environment.
>
> Maui could not find more than two free nodes. The situation is even
> worse: in your environment Maui could not find ten nodes that would
> be free at any time in the future, so it blocks (defers) the job for a
> while.
>
> So, please tell us something about your cluster. Are there 10 nodes?
> What resources do you specify in your submit command? Perhaps you are
> asking for large memory allocations that cannot be found?
>
> -- Lennart Karlsson <Lennart.Karlsson at nsc.liu.se>
>    National Supercomputer Centre in Linkoping, Sweden
>    http://www.nsc.liu.se
>
>
> Daniel Boone wrote:
>   
>> Does nobody know what this means?
>>
>> Daniel Boone wrote:
>>     
>>> Hi
>>>
>>> I have the following message in my maui.log
>>>
>>> 05/16 16:02:19 INFO:     2 feasible tasks found for job 104:0 in
>>> partition DEFAULT (10 Needed)
>>> 05/16 16:02:19 INFO:     inadequate feasible tasks found for job 104:0
>>> in partition DEFAULT (2 < 10)
>>>
>>> Does anybody have an idea?
>>>
>>> maui.log:
>>>
>>> ------------------
>>>
>>> /usr/local/maui/log/maui.log
>>> -------------------------
>>> 05/16 16:02:19 MStatClearUsage([NONE],Idle)
>>> 05/16 16:02:19 MPolicyAdjustUsage(NULL,104,NULL,idle,PU,[ALL],1,NULL)
>>> 05/16 16:02:19 MPolicyAdjustUsage(NULL,104,NULL,idle,NULL,[ALL],1,NULL)
>>> 05/16 16:02:19 INFO:     total jobs selected (ALL): 1/1
>>> 05/16 16:02:19 INFO:     jobs selected:
>>> [000:   1]
>>> 05/16 16:02:19
>>> MQueueSelectJobs(SrcQ,DstQ,HARD,5120,4096,2140000000,EVERY,FReason,FALSE)
>>> 05/16 16:02:19 INFO:     total jobs selected in partition ALL: 1/1
>>> 05/16 16:02:19 MQueueScheduleRJobs(Q)
>>> 05/16 16:02:19
>>> MQueueSelectJobs(SrcQ,DstQ,SOFT,5120,4096,2140000000,EVERY,FReason,TRUE)
>>> 05/16 16:02:19 INFO:     total jobs selected in partition ALL: 1/1
>>> 05/16 16:02:19
>>> MQueueSelectJobs(SrcQ,DstQ,SOFT,5120,4096,2140000000,DEFAULT,FReason,TRUE)
>>> 05/16 16:02:19 INFO:     total jobs selected in partition DEFAULT: 1/1
>>> 05/16 16:02:19 MQueueScheduleIJobs(Q,DEFAULT)
>>> 05/16 16:02:19 INFO:     checking job 104(1)  state: Idle (ex: Idle)
>>> 05/16 16:02:19 MJobSelectMNL(104,DEFAULT,NULL,MNodeList,NodeMap,MaxSpeed,2)
>>>
>>>
>>> 05/16 16:02:19 MReqGetFNL(104,0,DEFAULT,NULL,DstNL,NC,TC,2140000000,0)
>>> 05/16 16:02:19 INFO:     2 feasible tasks found for job 104:0 in
>>> partition DEFAULT (10 Needed)
>>> 05/16 16:02:19 INFO:     inadequate feasible tasks found for job 104:0
>>> in partition DEFAULT (2 < 10)
>>> 05/16 16:02:19 INFO:  5/16 16:02:19
>>> MJobPReserve(104,DEFAULT,ResCount,ResCountRej)
>>>
>>>
>>> 05/16 16:02:19 MJobReserve(104,Priority)
>>> 05/16 16:02:19 MPolicyGetEStartTime(104,ALL,SOFT,Time)
>>> 05/16 16:02:19 INFO:     policy start time found for job 104 in 00:00:00
>>> 05/16 16:02:19
>>> MJobGetEStartTime(104,NULL,NodeCount,TaskCount,MNodeList,1179324139)
>>> 05/16 16:02:19 ALERT:    job 104 cannot run in any partition
>>> 05/16 16:02:19 ALERT:    cannot create new reservation for job 104
>>> (shape[1] 10)
>>> 05/16 16:02:19 ALERT:    cannot create new reservation for job 104
>>> 05/16 16:02:19 MJobSetHold(104,16,1:00:00,NoResources,cannot create
>>> reservation for job '104' (intital reservation attempt)
>>> )
>>> 05/16 16:02:19 ALERT:    job '104' cannot run (deferring job for 3600
>>> seconds)
>>> 05/16 16:02:19 WARNING:  cannot reserve priority job '104'
>>>    cannot locate adequate feasible tasks for job 104:0
>>> ------------------------
>>> ---------------------------------
>>>
>>>
>>>
>>>
>>>
>>>
>
>
>
>   

