[petsc-users] Debugging hints welcome

Clemens Domanig clemens.domanig at uibk.ac.at
Thu Jul 14 00:44:07 CDT 2011


I know I am sending a lot of values, but this will only run on a shared-memory
system. And it seems strange to me that it takes only ~20 s with 3 or more
MPI processes.
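
One way to avoid a single huge message at MAT_FINAL_ASSEMBLY is to flush the
stash periodically with MAT_FLUSH_ASSEMBLY while still adding values with
ADD_VALUES. Below is a minimal sketch of that idea, not the actual program:
it assumes every rank loops over the same number of elements (the flushes are
collective), and nel, chunk, idx and ke are placeholder names.

#include <petscmat.h>

/* Sketch only: assemble element contributions with ADD_VALUES and flush
   the stash every 'chunk' elements, so off-process entries are sent in
   smaller batches.  Assumes every rank calls the (collective) flushes
   the same number of times; nel, chunk, idx[], ke[] are placeholders. */
PetscErrorCode AssembleWithFlushes(Mat A, PetscInt nel, PetscInt chunk)
{
  PetscErrorCode ierr;
  PetscInt       e, i;

  for (e = 0; e < nel; e++) {
    PetscInt    idx[36];      /* placeholder: 6 nodes x 6 dof         */
    PetscScalar ke[36 * 36];  /* placeholder element stiffness matrix */
    for (i = 0; i < 36; i++)      idx[i] = 0;   /* real code: global dof numbers */
    for (i = 0; i < 36 * 36; i++) ke[i]  = 0.0; /* real code: element matrix     */

    ierr = MatSetValues(A, 36, idx, 36, idx, ke, ADD_VALUES); CHKERRQ(ierr);

    if ((e + 1) % chunk == 0) {   /* ship stashed off-process values now */
      ierr = MatAssemblyBegin(A, MAT_FLUSH_ASSEMBLY); CHKERRQ(ierr);
      ierr = MatAssemblyEnd(A, MAT_FLUSH_ASSEMBLY); CHKERRQ(ierr);
    }
  }
  ierr = MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY); CHKERRQ(ierr);
  ierr = MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY); CHKERRQ(ierr);
  return 0;
}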

Matthew Knepley wrote:
> On Wed, Jul 13, 2011 at 4:56 PM, Clemens Domanig
> <clemens.domanig at uibk.ac.at> wrote:
> 
>     I tried with -mat_no_inode - no effect. That's the output:
> 
>     [1] MatAssemblyBegin_MPIAIJ(): Stash has 0 entries, uses 0 mallocs.
>     [0] MatStashScatterBegin_Private(): No of messages: 1
>     [0] MatStashScatterBegin_Private(): Mesg_to: 1: size: 704692232
> 
>                                                            ^^^^^^^^^
> Do you really mean to set 700M of off-process values?
> 
> I think Barry is correct that it is just taking forever to send this.
> 
>    Matt
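
For reference, every row index passed to MatSetValues() that lies outside a
rank's ownership range [rstart, rend) ends up in the stash reported above and
is only communicated during MatAssemblyBegin/End(). A minimal sketch of
counting such entries follows; the element data (nel, idx, ke) are
placeholders, not the original program.

#include <petscmat.h>

/* Sketch only: count how many row indices this rank passes to
   MatSetValues() that it does not own; exactly those entries go into
   the stash and are sent during MatAssemblyBegin/End().  nel, idx[],
   ke[] are placeholders for the real element data. */
PetscErrorCode CountOffProcessRows(Mat A, PetscInt nel)
{
  PetscErrorCode ierr;
  PetscInt       rstart, rend, e, i, offproc = 0;

  ierr = MatGetOwnershipRange(A, &rstart, &rend); CHKERRQ(ierr);
  for (e = 0; e < nel; e++) {
    PetscInt    idx[36];      /* placeholder: 6 nodes x 6 dof         */
    PetscScalar ke[36 * 36];  /* placeholder element stiffness matrix */
    for (i = 0; i < 36; i++)      idx[i] = 0;   /* real code: global dof numbers */
    for (i = 0; i < 36 * 36; i++) ke[i]  = 0.0; /* real code: element matrix     */

    for (i = 0; i < 36; i++)
      if (idx[i] < rstart || idx[i] >= rend) offproc++;  /* will be stashed */
    ierr = MatSetValues(A, 36, idx, 36, idx, ke, ADD_VALUES); CHKERRQ(ierr);
  }
  ierr = PetscPrintf(PETSC_COMM_SELF,
                     "off-process rows passed to MatSetValues on this rank: %d\n",
                     (int)offproc); CHKERRQ(ierr);
  return 0;
}

If that count is very large on one rank, redistributing the element loop (or
the row ownership) so that each rank mostly sets its own rows avoids the big
send at assembly time.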
> 
>     [0] MatAssemblyBegin_MPIAIJ(): Stash has 88086528 entries, uses 13 mallocs.
>     [0] MatAssemblyEnd_SeqAIJ(): Matrix size: 553824 X 553824; storage space: 24984360 unneeded,19875384 used
>     [0] MatAssemblyEnd_SeqAIJ(): Number of mallocs during MatSetValues() is 0
>     [0] MatAssemblyEnd_SeqAIJ(): Maximum nonzeros in any row is 42
> 
> 
>         On 2011-07-13 22:10, Matthew Knepley wrote:
> 
>         On Wed, Jul 13, 2011 at 4:08 PM, Clemens Domanig
>         <clemens.domanig at uibk.ac.at> wrote:
> 
>            Hi everyone,
> 
>            Maybe someone can offer some debugging hints for my problem.
> 
> 
>         It's possible that there is a bug in the inode routines. Please try
>         running with -mat_no_inode.
> 
>           Thanks,
> 
>              Matt
> 
>            My FEM program uses a shell element that has, depending on the
>            geometry, 5 or 6 dof per node.
> 
>            The program uses MPI for parallel solving (LU with MUMPS).
>            It works fine with all examples that have only 5 dof per node and
>            with those that have a mixture of 5 and 6 dof per node.
>            With examples that have 6 dof per node this happens:
>            * when using more than 2 MPI processes everything seems to be fine
>            * when using 1 or 2 MPI processes MatAssemblyBegin() never finishes
> 
>            This is the last output of -info, -mat_view_info, -vec_view_info
>            (with 2 MPI processes, matrix size 1107648x1107648)
> 
>            [1] MatAssemblyBegin_MPIAIJ(): Stash has 0 entries, uses 0 mallocs.
>            [0] MatStashScatterBegin_Private(): No of messages: 1
>            [0] MatStashScatterBegin_Private(): Mesg_to: 1: size: 704692232
>            [0] MatAssemblyBegin_MPIAIJ(): Stash has 88086528 entries, uses 13 mallocs.
>            [0] MatAssemblyEnd_SeqAIJ(): Matrix size: 553824 X 553824; storage space: 24984360 unneeded,19875384 used
>            [0] MatAssemblyEnd_SeqAIJ(): Number of mallocs during MatSetValues() is 0
>            [0] MatAssemblyEnd_SeqAIJ(): Maximum nonzeros in any row is 42
>            [0] Mat_CheckInode(): Found 184608 nodes of 553824. Limit used: 5. Using Inode routines
> 
>            Thx for your help - respectfully C. Domanig
> 
> 
> 
> -- 
> What most experimenters take for granted before they begin their 
> experiments is infinitely more interesting than any results to which 
> their experiments lead.
> -- Norbert Wiener


