[Mauiusers] maui hangs/segfaults in 3.3.1

Paul Raines raines at nmr.mgh.harvard.edu
Wed Jul 25 10:30:53 MDT 2012


maui does still segfault even though I took out the CLASSCFG lines.

I noticed that the patches my maui RPM applies include one that sets
MAX_MCLASS to 64 but leaves MMAX_CLASS at 16.  I still don't understand
why there are these two separate definitions, but I only see the latter
used in two places in the whole code, in ways I am pretty sure would not
lead to memory corruption.  Anyway, I increased MMAX_CLASS to 64 but it
still segfaults.
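
(For what it's worth, the failure mode I was checking for is the usual one:
a fixed-size table gets indexed past its end and silently clobbers whatever
global happens to sit after it, e.g. a FILE pointer like MSched.statfp.
Below is a made-up sketch of that pattern, not maui's actual code -- all
names and sizes here are hypothetical.)

    /* Hypothetical sketch of how exceeding a compile-time class limit can
     * corrupt adjacent memory; nothing here is real maui code.            */

    #include <stdio.h>

    #define MAX_CLASS_SLOTS 16            /* stand-in for MMAX_CLASS */

    struct classcfg {
      char name[64];
      int  max_proc_per_user;
    };

    static struct classcfg ClassTable[MAX_CLASS_SLOTS];
    static FILE            *StatFP;       /* stand-in for MSched.statfp */

    int main(void)
      {
      StatFP = stdout;

      /* 17 configured classes, but no bounds check against the table size,
       * so entry 16 is written past the end of ClassTable                 */
      for (int i = 0; i < 17; i++)
        {
        snprintf(ClassTable[i].name, sizeof(ClassTable[i].name), "class%02d", i);
        ClassTable[i].max_proc_per_user = 100;
        }

      /* if the stray write happened to land on StatFP, this would crash the
       * same way the fflush(MSched.statfp) in MJobWriteStats does          */
      fprintf(StatFP, "still alive, StatFP=%p\n", (void *)StatFP);

      return(0);
      }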

From Steve's email, he said he had to increase MMAX_JOBRA, so I tried
increasing that from 16 to 64.  It still crashes.

So I decided to give valgrind a try, and immediately on running it
valgrind just starts spitting out errors:

# valgrind --tool=memcheck --leak-check=yes /usr/sbin/maui -d
==17334== Memcheck, a memory error detector
==17334== Copyright (C) 2002-2010, and GNU GPL'd, by Julian Seward et al.
==17334== Using Valgrind-3.6.0 and LibVEX; rerun with -h for copyright info
==17334== Command: /usr/sbin/maui -d
==17334==
==17334== Conditional jump or move depends on uninitialised value(s)
==17334==    at 0x46617F: MUStrDup (MUtil.c:390)
==17334==    by 0x405A41: main (Server.c:136)
==17334==
==17334== Conditional jump or move depends on uninitialised value(s)
==17334==    at 0x46612F: MUFree (MUtil.c:455)
==17334==    by 0x4661CF: MUStrDup (MUtil.c:400)
==17334==    by 0x405A41: main (Server.c:136)
==17334==
==17334== Warning: client switching stacks?  SP change: 0x7feebc578 --> 0x7fec54250
==17334==          to suppress, use: --max-stackframe=2523944 or greater
==17334== Invalid write of size 8
==17334==    at 0x42E571: MJobSelectMNL (MSched.c:1403)
==17334==    by 0x46C434: MQueueScheduleIJobs (MQueue.c:857)
==17334==    by 0x4273AA: m_schedule_on_partitions (MSched.c:6889)
==17334==    by 0x42DEAD: MSchedProcessJobs (MSched.c:7038)
==17334==    by 0x405C45: main (Server.c:192)
==17334==  Address 0x7fec54288 is on thread 1's stack
==17334==
==17334== Invalid write of size 8
==17334==    at 0x42E576: MJobSelectMNL (MSched.c:1403)
==17334==    by 0x46C434: MQueueScheduleIJobs (MQueue.c:857)
==17334==    by 0x4273AA: m_schedule_on_partitions (MSched.c:6889)
==17334==    by 0x42DEAD: MSchedProcessJobs (MSched.c:7038)
==17334==    by 0x405C45: main (Server.c:192)
==17334==  Address 0x7fec542a8 is on thread 1's stack
==17334==
==17334== Invalid write of size 4
==17334==    at 0x42E57B: MJobSelectMNL (MSched.c:1403)
==17334==    by 0x46C434: MQueueScheduleIJobs (MQueue.c:857)
==17334==    by 0x4273AA: m_schedule_on_partitions (MSched.c:6889)
==17334==    by 0x42DEAD: MSchedProcessJobs (MSched.c:7038)
==17334==    by 0x405C45: main (Server.c:192)
==17334==  Address 0x7fec54294 is on thread 1's stack
==17334==
==17334== Invalid write of size 8
==17334==    at 0x42E5BF: MJobSelectMNL (MSched.c:1483)
==17334==    by 0x7FEF5C86F: ???
==17334==    by 0x46C434: MQueueScheduleIJobs (MQueue.c:857)
==17334==    by 0x4273AA: m_schedule_on_partitions (MSched.c:6889)
==17334==    by 0x42DEAD: MSchedProcessJobs (MSched.c:7038)
==17334==    by 0x405C45: main (Server.c:192)
==17334==  Address 0x7fec54248 is on thread 1's stack
==17334==
==17334== Invalid read of size 8
==17334==    at 0x8714200: strstr (mc_replace_strmem.c:1037)
==17334==    by 0x42E5C3: MJobSelectMNL (MSched.c:1483)
==17334==    by 0x46C434: MQueueScheduleIJobs (MQueue.c:857)
==17334==    by 0x4273AA: m_schedule_on_partitions (MSched.c:6889)
==17334==    by 0x42DEAD: MSchedProcessJobs (MSched.c:7038)
==17334==    by 0x405C45: main (Server.c:192)
==17334==  Address 0x7fec54248 is on thread 1's stack
==17334==

until eventually valgrind gives up with:

==27020== More than 10000000 total errors detected.  I'm not reporting any more.
==27020== Final error counts will be inaccurate.  Go fix your program!
==27020== Rerun with --error-limit=no to disable this cutoff.  Note
==27020== that errors may occur in your program without prior warning from
==27020== Valgrind, because errors are no longer being displayed.


Then eventually maui crashes with a bunch of these errors

==27020== Warning: invalid file descriptor -1 in syscall write()

and then

==27020== Process terminating with default action of signal 11 (SIGSEGV)
==27020==  Access not within mapped region at address 0x68
==27020==    at 0x36CD271693: _IO_file_sync@@GLIBC_2.2.5 (fileops.c:897)
==27020==    by 0x36CD265EE9: fflush (iofflush.c:43)
==27020==    by 0x47C08A: MJobWriteStats (MJob.c:7815)
==27020==    by 0x48643D: MJobProcessCompleted (MJob.c:9562)
==27020==    by 0x4A6EB7: MPBSWorkloadQuery (MPBSI.c:871)
==27020==    by 0x45F935: __MUTFunc (MUtil.c:4718)
==27020==    by 0x462396: MUThread (MUtil.c:4691)
==27020==    by 0x498ED3: MRMWorkloadQuery (MRM.c:595)
==27020==    by 0x49CB18: MRMGetInfo (MRM.c:364)
==27020==    by 0x42DC41: MSchedProcessJobs (MSched.c:6930)
==27020==    by 0x405C45: main (Server.c:192)
==27020==  If you believe this happened as a result of a stack
==27020==  overflow in your program's main thread (unlikely but
==27020==  possible), you can try to increase the size of the
==27020==  main thread stack using the --main-stacksize= flag.
==27020==  The main thread stack size used in this run was 10485760.
==27020==
==27020== 11 bytes in 1 blocks are definitely lost in loss record 4 of 90
==27020==    at 0x866AFDE: malloc (vg_replace_malloc.c:236)
==27020==    by 0x36CD27FB41: strdup (strdup.c:43)
==27020==    by 0x8C9D647: ???
==27020==    by 0x8C9DA9E: ???
==27020==    by 0x8C9E0F5: ???
==27020==    by 0x36CD2AA28C: getpwnam_r@@GLIBC_2.2.5 (getXXbyYY_r.c:253)
==27020==    by 0x36CD2A9C6F: getpwnam (getXXbyYY.c:117)
==27020==    by 0x4642C7: MUUIDFromName (MUtil.c:3664)
==27020==    by 0x44A2ED: MUserAdd (MUser.c:322)
==27020==    by 0x4247E5: MCredSetDefaults (MCred.c:2327)
==27020==    by 0x4AEF34: MSysInitialize (MSys.c:323)
==27020==    by 0x4059C6: main (Server.c:125)
==27020==
==27020== 18 bytes in 2 blocks are definitely lost in loss record 10 of 90
==27020==    at 0x866AFDE: malloc (vg_replace_malloc.c:236)
==27020==    by 0x36CD27FB41: strdup (strdup.c:43)
==27020==    by 0x4661C7: MUStrDup (MUtil.c:403)
==27020==    by 0x405A41: main (Server.c:136)
==27020==
==27020== 292 (52 direct, 240 indirect) bytes in 1 blocks are definitely lost in loss record 25 of 90
==27020==    at 0x866AFDE: malloc (vg_replace_malloc.c:236)
==27020==    by 0x36CD2F97DA: nss_parse_service_list (nsswitch.c:540)
==27020==    by 0x36CD2FA0D1: __nss_database_lookup (nsswitch.c:134)
==27020==    by 0x8C9D3BF: ???
==27020==    by 0x8C9E174: ???
==27020==    by 0x36CD2AA28C: getpwnam_r@@GLIBC_2.2.5 (getXXbyYY_r.c:253)
==27020==    by 0x36CD2A9C6F: getpwnam (getXXbyYY.c:117)
==27020==    by 0x4642C7: MUUIDFromName (MUtil.c:3664)
==27020==    by 0x44A2ED: MUserAdd (MUser.c:322)
==27020==    by 0x4247E5: MCredSetDefaults (MCred.c:2327)
==27020==    by 0x4AEF34: MSysInitialize (MSys.c:323)
==27020==    by 0x4059C6: main (Server.c:125)
==27020==
==27020== LEAK SUMMARY:
==27020==    definitely lost: 81 bytes in 4 blocks
==27020==    indirectly lost: 240 bytes in 10 blocks
==27020==      possibly lost: 0 bytes in 0 blocks
==27020==    still reachable: 29,542,182 bytes in 34,621 blocks
==27020==         suppressed: 0 bytes in 0 blocks
==27020== Reachable blocks (those to which a pointer was found) are not shown.
==27020== To see them, rerun with: --leak-check=full --show-reachable=yes
==27020==
==27020== For counts of detected and suppressed errors, rerun with: -v
==27020== Use --track-origins=yes to see where uninitialised values come from
==27020== ERROR SUMMARY: 10000003 errors from 174 contexts (suppressed: 2003 from 7)
Segmentation fault


Those uninitialised value errors are easy to trace.  There is a bug at
line 134 in Server.c where

     tmpArgV[0] = NULL;

should be

     tmpArgV[aindex] = NULL;
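
For context, the surrounding code follows roughly the pattern below (a
from-memory sketch with made-up details such as the array size, not the
literal Server.c code).  The point is that terminating slot 0 instead of
slot aindex clobbers the copied argv[0] and leaves the tail of tmpArgV
uninitialised for anything that later walks or dups the array:

    /* Minimal sketch of the bug pattern around Server.c:134; the array
     * size and the printing loop are hypothetical.                       */

    #include <stdio.h>

    int main(int ArgC, char *ArgV[])
      {
      char *tmpArgV[64];
      int   aindex;

      for (aindex = 0; aindex < ArgC && aindex < 63; aindex++)
        tmpArgV[aindex] = ArgV[aindex];

      /* buggy version: never marks the end of the copied list, so a walker
       * reads uninitialised stack slots (the "uninitialised value" hits in
       * MUStrDup/MUFree above)                                            */
      /* tmpArgV[0] = NULL; */

      /* fixed version: NULL-terminate after the last copied element */
      tmpArgV[aindex] = NULL;

      for (aindex = 0; tmpArgV[aindex] != NULL; aindex++)
        printf("arg %d: %s\n", aindex, tmpArgV[aindex]);

      return(0);
      }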


Trying to track the invalid write errors, I am just getting lost.  Some
of the lines pointed to as having invalid writes don't make sense to me
as there appears to be no "writing" going on in them. MSched.c:1483 is
the line 'MTRAPJOB(J,FName);' for instance.  MSched.c:1403 is the first
curly bracket in the MJobSelectMNL function.

I don't know how accurate the line numbers valgrind reports are, though it
was certainly accurate on the uninitialised value errors.
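
One thought on those invalid writes: given the "client switching stacks?"
warning right before them, they may just be valgrind misinterpreting the
frame setup for MJobSelectMNL.  If a function's locals push the stack
pointer down by more than --max-stackframe (valgrind suggested 2523944
bytes here), memcheck assumes the program has switched stacks and then
reports the writes that initialise those locals as invalid, pointing at
the opening brace.  A tiny standalone reproducer of that effect (nothing
to do with maui itself):

    /* A function whose locals exceed valgrind's --max-stackframe default
     * triggers the "client switching stacks?" warning, and the writes
     * that initialise the locals are reported as invalid at the opening
     * brace.                                                             */

    #include <string.h>

    static int big_frame(void)
      {                                  /* valgrind points here */
      char nodelist[4 * 1024 * 1024];    /* ~4MB of locals */

      memset(nodelist, 1, sizeof(nodelist));

      return((int)nodelist[123]);
      }

    int main(void)
      {
      return(big_frame() & 0xff);
      }

If that is what is happening here, rerunning with
--max-stackframe=2523944 (or larger) should make those particular reports
go away and leave only the real errors.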

Does the above give anyone any clue as to what might be going on?


-- Paul Raines (http://help.nmr.mgh.harvard.edu)



On Wed, 18 Jul 2012 9:07am, Paul Raines wrote:

>
> I tried putting a watch on MSched.statfp to see if I could catch it getting
> corrupted, but I just ended up with a segfault in a different location, this
> time in the fprintf right before the fflush it segfaulted in last time, which
> you can see in the backtrace below.
>
> So I went in and commented out all the CLASSCFG lines in my maui.cfg and 
> restarted.  So far maui has been running longer than it ever has before 
> without hanging or crashing.  However, the whole reason for the CLASSCFG 
> lines was that maui seemed in the past to be ignoring the max_user_run set 
> for each of my queues.  I will need to monitor things to see if that is still 
> the case.
>
> One related question.  What I really want to limit on a per-queue basis is
> not the number of jobs but the number of CPUs a user has running.  Is there
> any way to do that?
>
> -- Paul Raines (http://help.nmr.mgh.harvard.edu)
>
>
>
> On Tue, 17 Jul 2012 4:03pm, Steve Johnson wrote:
>
>> On 07/17/2012 02:05 PM, Paul Raines wrote:
>>> No, I know nothing about that.  I think I can remove most of those 
>>> CLASSCFG
>>> lines as I was having problems in a previous torque getting max_user_run
>>> to actually work.  Or will just the fact that I have more than 16 queues
>>> defined in torque still be a problem?
>>> 
>>> Seems like maui should then give an error at startup saying there are too
>>> many CLASSCFG lines in the config if MMAX_CLASS is exceeded.
>> 
>> IIRC, maui will ignore any classes > 16, so it probably isn't clobbering 
>> memory elsewhere.  But if you notice queues not getting scheduled, that 
>> limit will be the problem unless you have a CLASSCFG[DEFAULT] defined.
>> 
>>> Where is this documented?  What is the difference between MAX_MCLASS
>>> (default 64) and MMAX_CLASS (default 16)?
>> 
>> Documented? Heh...good one. ;)
>> 
>> It looks like MMAX_CLASS is used in src/moab/MUtil.c and src/mcom/MS3I.c, 
>> whereas MAX_MCLASS is more widely used throughout the code.  Not sure if 
>> they're directly related.
>> 
>> You might check if there's a particular job that's triggering the 
>> segfault/hang and see if there's anything abnormal in its characteristics 
>> in Torque (uid, gid, super long or "strange" strings/paths, etc).  Try 
>> setting a break in MJobWriteStats and examine variables. If you find a 
>> bogus address, work backward to see where it got clobbered. Sorry I can't 
>> offer more help.
>> 
>> I had a crashing problem a couple weeks ago, but it appears to be 
>> unrelated. I followed the same path as you with gdb and also inserted some 
>> conditional printf's in the source to finally track it down to MMAX_JOBRA 
>> set too low. Sadly, the process took several hours.  Why such limits are 
>> hardcoded is beyond me.
>> 
>> // Steve
>> 
>> 
>>> 
>>> Thanks
>>> 
>>> -- Paul Raines (http://help.nmr.mgh.harvard.edu)
>>> 
>>> 
>>> 
>>> On Tue, 17 Jul 2012 2:56pm, Steve Johnson wrote:
>>> 
>>>> It looks like you have 17 CLASSCFG lines.  Have you increased MAX_MCLASS 
>>>> and
>>>> MMAX_CLASS in include/msched-common.h?
>>>> 
>>>> // Steve
>>>> 
>>>> 
>>>> On 07/17/2012 12:42 PM, Paul Raines wrote:
>>>>> 
>>>>> We have two separate clusters.  One is an ancient cluster with nodes that
>>>>> have dual Opterons and 4G RAM.  The other is newer, with dual quad-core
>>>>> Xeon E5472's and 32G RAM.  Recently we updated both clusters to CentOS6,
>>>>> torque-2.5.11 and maui 3.3.1, so OS/software/config-wise they are
>>>>> identical.  I built the torque/maui RPMs myself on an old Opteron node to
>>>>> install on both clusters.
>>>>> 
>>>>> The older cluster has been running without any problems.  On the new one,
>>>>> though, maui keeps hanging or segfaulting within 1-8 hours of starting.
>>>>> I installed the debuginfo RPMs and ran maui in the debugger.
>>>>> 
>>>>> When it just hangs (doesn't crash but doesn't respond to any tools such
>>>>> as showq), this is what I see:
>>>>> 
>>>>> =========================================================================
>>>>> (gdb) run -d
>>>>> Starting program: /usr/sbin/maui -d
>>>>> *** glibc detected *** /usr/sbin/maui: corrupted double-linked list:
>>>>> 0x0000000007f106a0 ***
>>>>> 
>>>>> 
>>>>> ^C
>>>>> Program received signal SIGINT, Interrupt.
>>>>> 0x00000036cd2f542e in __lll_lock_wait_private () from /lib64/libc.so.6
>>>>> (gdb) bt
>>>>> #0  0x00000036cd2f542e in __lll_lock_wait_private () from 
>>>>> /lib64/libc.so.6
>>>>> #1  0x00000036cd27bed5 in _L_lock_9323 () from /lib64/libc.so.6
>>>>> #2  0x00000036cd2797c6 in malloc () from /lib64/libc.so.6
>>>>> #3  0x00000036cca04c72 in local_strdup () from 
>>>>> /lib64/ld-linux-x86-64.so.2
>>>>> #4  0x00000036cca08636 in _dl_map_object () from 
>>>>> /lib64/ld-linux-x86-64.so.2
>>>>> #5  0x00000036cca12994 in dl_open_worker () from 
>>>>> /lib64/ld-linux-x86-64.so.2
>>>>> #6  0x00000036cca0e176 in _dl_catch_error () from 
>>>>> /lib64/ld-linux-x86-64.so.2
>>>>> #7  0x00000036cca1244a in _dl_open () from /lib64/ld-linux-x86-64.so.2
>>>>> #8  0x00000036cd323520 in do_dlopen () from /lib64/libc.so.6
>>>>> #9  0x00000036cca0e176 in _dl_catch_error () from 
>>>>> /lib64/ld-linux-x86-64.so.2
>>>>> #10 0x00000036cd323677 in __libc_dlopen_mode () from /lib64/libc.so.6
>>>>> #11 0x00000036cd2fbd51 in backtrace () from /lib64/libc.so.6
>>>>> #12 0x00000036cd26f98b in __libc_message () from /lib64/libc.so.6
>>>>> #13 0x00000036cd275296 in malloc_printerr () from /lib64/libc.so.6
>>>>> #14 0x00000036cd277efa in _int_free () from /lib64/libc.so.6
>>>>> #15 0x0000000000466136 in MUFree (Ptr=0x46bfbd0) at MUtil.c:460
>>>>> #16 0x00000000004499a5 in MUserDestroy (UP=0x46bfbd0) at MUser.c:682
>>>>> #17 0x00000000004499de in MUserFreeTable () at MUser.c:700
>>>>> #18 0x00000000004ac48f in MSysShutdown (Signo=0) at MSys.c:2540
>>>>> #19 0x0000000000418361 in UIProcessClients (SS=0x774d270,
>>>>>       TimeLimit=<value optimized out>) at UserI.c:527
>>>>> #20 0x0000000000405bb8 in main (ArgC=2, ArgV=<value optimized out>)
>>>>>       at Server.c:240
>>>>> (gdb) quit
>>>>> =========================================================================
>>>>> 
>>>>> 
>>>>> When it crashes, this is what I see:
>>>>> 
>>>>> =========================================================================
>>>>> (gdb) run -d
>>>>> Starting program: /usr/sbin/maui -d
>>>>> 
>>>>> 
>>>>> Program received signal SIGSEGV, Segmentation fault.
>>>>> 0x00000036cd265ee7 in _IO_fflush (fp=0x7f0d010) at iofflush.c:43
>>>>> 43            result = _IO_SYNC (fp) ? EOF : 0;
>>>>> (gdb)
>>>>> (gdb) bt
>>>>> #0  0x00000036cd265ee7 in _IO_fflush (fp=0x7f0d010) at iofflush.c:43
>>>>> #1  0x000000000047c07b in MJobWriteStats (J=0x9b61080) at MJob.c:7815
>>>>> #2  0x000000000048643e in MJobProcessCompleted (J=0x9b61080) at 
>>>>> MJob.c:9562
>>>>> #3  0x00000000004a6eb8 in MPBSWorkloadQuery (R=0x6a4b2e0,
>>>>>       JCount=0x7ffffff7b938, SC=<value optimized out>) at MPBSI.c:871
>>>>> #4  0x000000000045f926 in __MUTFunc (V=0x7ffffff7b830) at MUtil.c:4718
>>>>> #5  0x0000000000462387 in MUThread (F=<value optimized out>,
>>>>>       TimeOut=<value optimized out>, RC=<value optimized out>,
>>>>>       ACount=<value optimized out>, Lock=<value optimized out>) at
>>>>> MUtil.c:4691
>>>>> #6  0x0000000000498ed4 in MRMWorkloadQuery (WCount=0x7ffffff7b98c, 
>>>>> SC=0x0)
>>>>>       at MRM.c:595
>>>>> #7  0x000000000049cb19 in MRMGetInfo () at MRM.c:364
>>>>> #8  0x000000000042dc42 in MSchedProcessJobs (OldDay=0x7fffffffde40 
>>>>> "Tue",
>>>>>       GlobalSQ=0x7ffffffdbe30, GlobalHQ=0x7ffffffbbe30) at MSched.c:6930
>>>>> #9  0x0000000000405c46 in main (ArgC=2, ArgV=<value optimized out>)
>>>>>       at Server.c:192
>>>>> (gdb) frame
>>>>> #0  0x00000036cd265ee7 in _IO_fflush (fp=0x7f0d010) at iofflush.c:43
>>>>> 43            result = _IO_SYNC (fp) ? EOF : 0;
>>>>> (gdb) frame 1
>>>>> #1  0x000000000047c07b in MJobWriteStats (J=0x9b61080) at MJob.c:7815
>>>>> 7815        fflush(MSched.statfp);
>>>>> (gdb) list MJob.c:7815
>>>>> 7810
>>>>> 7811      if 
>>>>> (MJobToTString(J,DEFAULT_WORKLOAD_TRACE_VERSION,Buf,sizeof(Buf))
>>>>> == SUCCESS)
>>>>> 7812        {
>>>>> 7813        fprintf(MSched.statfp,"%s",Buf);
>>>>> 7814
>>>>> 7815        fflush(MSched.statfp);
>>>>> 7816
>>>>> 7817        DBG(4,fSTAT) DPrint("INFO:     job stats written for 
>>>>> '%s'\n",
>>>>> 7818          J->Name);
>>>>> 7819        }
>>>>> (gdb) p Buf
>>>>> $3 = "16828", ' ' <repeats 18 times>, "0   1    coutu     coutu  345600
>>>>> Completed  [max100:1] 1342534818 1342534819 1342534819 1342535999 [NONE]
>>>>> [NONE] [NONE] >=    0M >=      0M   [nonGPU] 1342534818   1    1
>>>>> [NONE]:DEFA"...
>>>>> (gdb)
>>>>> =========================================================================
>>>>> 
>>>>> My guess is that some memory corruption has overwritten MSched.statfp,
>>>>> which is just a file handle, and thus fflush crashes when it actually
>>>>> tries to write to it.  Where that overwrite is occurring, though, is
>>>>> anyone's guess.
>>>>> 
>>>>> I am hoping someone on this list might have a clue.  It is really a
>>>>> mystery to me why I only see this on one cluster; they have exactly the
>>>>> same config except for the host name.  Here is my maui.cfg:
>>>>> 
>>>>> =========================================================================
>>>>> ADMIN1                maui root
>>>>> ADMIN3                ALL
>>>>> ADMINHOST               launchpad.nmr.mgh.harvard.edu
>>>>> BACKFILLPOLICY        FIRSTFIT
>>>>> CLASSCFG[default] MAXPROCPERUSER=150
>>>>> CLASSCFG[extended] MAXPROCPERUSER=50 MAXPROC=250
>>>>> CLASSCFG[GPU] MAXPROCPERUSER=5000
>>>>> CLASSCFG[matlab] MAXPROCPERUSER=60
>>>>> CLASSCFG[max100] MAXPROCPERUSER=100
>>>>> CLASSCFG[max10] MAXPROCPERUSER=10
>>>>> CLASSCFG[max200] MAXPROCPERUSER=200
>>>>> CLASSCFG[max20] MAXPROCPERUSER=20
>>>>> CLASSCFG[max50] MAXPROCPERUSER=50
>>>>> CLASSCFG[max75] MAXPROCPERUSER=75
>>>>> CLASSCFG[p10] MAXPROCPERUSER=5000
>>>>> CLASSCFG[p20] MAXPROCPERUSER=5000
>>>>> CLASSCFG[p30] MAXPROCPERUSER=5000
>>>>> CLASSCFG[p40] MAXPROCPERUSER=5000
>>>>> CLASSCFG[p50] MAXPROCPERUSER=30
>>>>> CLASSCFG[p5] MAXPROCPERUSER=5000
>>>>> CLASSCFG[p60] MAXPROCPERUSER=20
>>>>> CLASSWEIGHT           10
>>>>> ENABLEMULTIREQJOBS TRUE
>>>>> ENFORCERESOURCELIMITS   OFF
>>>>> LOGFILEMAXSIZE        1000000000
>>>>> LOGFILE               /var/spool/maui/log/maui.log
>>>>> LOGLEVEL              2
>>>>> NODEALLOCATIONPOLICY  PRIORITY
>>>>> NODECFG[DEFAULT] PRIORITY=1000 PRIORITYF='PRIORITY + 3 * JOBCOUNT'
>>>>> QUEUETIMEWEIGHT       1
>>>>> RESERVATIONPOLICY     CURRENTHIGHEST
>>>>> RMCFG[base]             TYPE=PBS
>>>>> RMPOLLINTERVAL          00:00:30
>>>>> SERVERHOST              launchpad.nmr.mgh.harvard.edu
>>>>> SERVERMODE              NORMAL
>>>>> SERVERPORT              40559
>>>>> USERCFG[DEFAULT] MAXIPROC=8
>>>>> USERCFG[jonghwan] MAXPROC=300
>>>>> USERCFG[shafee] MAXPROC=300
>>>>> 
>>>>> I actually changed the LOGLEVEL from 3 to 2 at one point, thinking the
>>>>> error was happening when writing to the log and that lowering the amount
>>>>> it writes might affect things, but it didn't help.
>>>>> 
>>>>> ---------------------------------------------------------------
>>>>> Paul Raines                     http://help.nmr.mgh.harvard.edu
>>>>> MGH/MIT/HMS Athinoula A. Martinos Center for Biomedical Imaging
>>>>> 149 (2301) 13th Street     Charlestown, MA 02129        USA
>>>>> 
>>>>> 
>>>>> 
>>>>> 
>>>>> 
>>>>> 
>>>> 
>>>> 
>>>> 
>> 
>> 
>> 
>

