[petsc-users] Debugging hints welcome

Matthew Knepley knepley at gmail.com
Thu Jul 14 06:21:50 CDT 2011


On Thu, Jul 14, 2011 at 1:44 AM, Clemens Domanig <clemens.domanig at uibk.ac.at
> wrote:

> I know it is lots of values I send but this will only run on a shared
> memory system. And to me it is strange that it is just ~20s with 3+
> MPI-proc.
>

You could have hit a level in the memory hierarchy where it starts swapping
to disk. Saying that definitively would take a lot of work. However,
it's very easy to say definitively whether there is deadlock. As both Barry and
Jed said, please connect with the debugger and look at the
stack trace.
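One way to get that stack trace (a sketch; `./myapp` stands in for the actual executable, and the gdb attach details vary by system):

```shell
# Run under the debugger: PETSc opens one debugger per rank, so a
# hung rank can be interrupted and its stack inspected directly.
mpiexec -n 2 ./myapp -start_in_debugger

# Alternatively, attach to the already-hung process from another
# terminal and print its stack trace:
gdb -p <pid>     # then type: bt
```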

    Matt


> Matthew Knepley wrote:
>
>  On Wed, Jul 13, 2011 at 4:56 PM, Clemens Domanig <
>> clemens.domanig at uibk.ac.at> wrote:
>>
>>    I tried with -mat_no_inode - no effect. That's the output:
>>
>>    [1] MatAssemblyBegin_MPIAIJ(): Stash has 0 entries, uses 0 mallocs.
>>    [0] MatStashScatterBegin_Private(): No of messages: 1
>>    [0] MatStashScatterBegin_Private(): Mesg_to: 1: size: 704692232
>>
>>
>>                 ^^^^^^^^^^ Do you really mean to set 700M of off-process
>> values?
>>
>> I think Barry is correct that it is just taking forever to send this.
>>
>>   Matt
>>
>>    [0] MatAssemblyBegin_MPIAIJ(): Stash has 88086528 entries, uses 13
>>    mallocs.
>>    [0] MatAssemblyEnd_SeqAIJ(): Matrix size: 553824 X 553824; storage
>>    space: 24984360 unneeded,19875384 used
>>    [0] MatAssemblyEnd_SeqAIJ(): Number of mallocs during MatSetValues()
>>    is 0
>>    [0] MatAssemblyEnd_SeqAIJ(): Maximum nonzeros in any row is 42
>>
>>
>>    Am 2011-07-13 22:10, schrieb Matthew Knepley:
>>
>>        On Wed, Jul 13, 2011 at 4:08 PM, Clemens Domanig
>>        <clemens.domanig at uibk.ac.at> wrote:
>>
>>           Hi everyone,
>>
>>           maybe someone can offer some debugging hints for my problem.
>>
>>
>>        It's possible that there is a bug in the inode routines. Please try
>>        running with -mat_no_inode
>>
>>          Thanks,
>>
>>             Matt
>>
>>           My FEM program uses a shell element that has, depending on the
>>           geometry, 5 or 6 dof per node.
>>
>>           The program uses MPI for parallel solving (LU, MUMPS).
>>           It works fine with all examples that have only 5 dof per node and
>>           with all that have a mixture of 5 and 6 dof per node.
>>           When doing examples that have 6 dof per node this happens:
>>           * when using more than 2 MPI processes everything seems to be
>>        fine.
>>           * when using 1 or 2 MPI processes MatAssemblyBegin() never
>>        finishes
>>
>>           This is the last output of -info, -mat_view_info, -vec_view_info
>>           (with 2 MPI processes, matrix size 1107648x1107648)
>>
>>           [1] MatAssemblyBegin_MPIAIJ(): Stash has 0 entries, uses 0
>>        mallocs.
>>           [0] MatStashScatterBegin_Private(): No of messages: 1
>>           [0] MatStashScatterBegin_Private(): Mesg_to: 1: size: 704692232
>>           [0] MatAssemblyBegin_MPIAIJ(): Stash has 88086528 entries,
>>        uses 13
>>           mallocs.
>>           [0] MatAssemblyEnd_SeqAIJ(): Matrix size: 553824 X 553824;
>>        storage
>>           space: 24984360 unneeded,19875384 used
>>           [0] MatAssemblyEnd_SeqAIJ(): Number of mallocs during
>>        MatSetValues()
>>           is 0
>>           [0] MatAssemblyEnd_SeqAIJ(): Maximum nonzeros in any row is 42
>>           [0] Mat_CheckInode(): Found 184608 nodes of 553824. Limit
>>        used: 5.
>>           Using Inode routines
>>
>>           Thx for your help - respectfully C. Domanig
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>
>


-- 
What most experimenters take for granted before they begin their experiments
is infinitely more interesting than any results to which their experiments
lead.
-- Norbert Wiener