[petsc-users] Debugging hints welcome

Clemens Domanig clemens.domanig at uibk.ac.at
Wed Jul 13 16:08:17 CDT 2011


I preallocate a sparse 1107648x1107648 matrix at the beginning. I checked 
all the indices passed to MatSetValues while filling the matrix, and they 
are all smaller than 1107648.
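
For context, here is a minimal sketch of the preallocate/fill/assemble pattern in question (not the actual program: the global size and the 42-nonzeros-per-row bound come from the -info output quoted below, the diagonal-only fill just stands in for the real element loop, and error checking is omitted):

    #include <petscmat.h>

    int main(int argc, char **argv)
    {
      Mat         A;
      PetscInt    N = 1107648;          /* global size from above */
      PetscInt    i, rstart, rend;
      PetscScalar one = 1.0;

      PetscInitialize(&argc, &argv, NULL, NULL);
      MatCreate(PETSC_COMM_WORLD, &A);
      MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, N, N);
      MatSetType(A, MATAIJ);
      /* -info reported at most 42 nonzeros per row, so 42 is used as a per-row bound */
      MatSeqAIJSetPreallocation(A, 42, NULL);
      MatMPIAIJSetPreallocation(A, 42, NULL, 42, NULL);

      MatGetOwnershipRange(A, &rstart, &rend);
      for (i = rstart; i < rend; i++) {
        /* placeholder fill: one diagonal entry per locally owned row */
        MatSetValues(A, 1, &i, 1, &i, &one, ADD_VALUES);
      }
      MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);
      MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);

      MatDestroy(&A);
      PetscFinalize();
      return 0;
    }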
Just to give you some time measurements:
1 MPI - "hangs"
2 MPI - "hangs", no change even after 20 min
3 MPI - 21s for assembly
4 MPI - 19s for assembly

I will test with -start_in_debugger tomorrow morning.
thx

Am 2011-07-13 22:59, schrieb Barry Smith:
>
>    If it really hangs on one process, then just run with the option -start_in_debugger noxterm and type cont in the debugger; when you think it is hanging (after a few minutes, I guess), hit control-c and then type where to see where it is "hanging".  My guess is that it is not hanging, but that with this problem size and dof count your preallocation is way off and it is just taking a huge amount of time to set the values. I suggest revisiting where you determine the preallocation.
>
>
>      Barry
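
One way to test this preallocation theory is sketched below (a fragment, not a complete program: it assumes an AIJ matrix A that has already been created and preallocated, and error checking is again omitted). It turns any insertion outside the preallocated pattern into an immediate error instead of a silent malloc, and prints the malloc count after assembly:

    MatInfo info;

    /* fail immediately on any MatSetValues() outside the preallocated pattern */
    MatSetOption(A, MAT_NEW_NONZERO_ALLOCATION_ERR, PETSC_TRUE);

    /* ... MatSetValues() loop and MatAssemblyBegin/End as usual ... */

    /* a nonzero 'mallocs' value after assembly means the preallocation was too small */
    MatGetInfo(A, MAT_LOCAL, &info);
    PetscPrintf(PETSC_COMM_SELF, "mallocs %g  nz_allocated %g  nz_used %g\n",
                info.mallocs, info.nz_allocated, info.nz_used);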
>
> On Jul 13, 2011, at 3:56 PM, Clemens Domanig wrote:
>
>> I tried with -mat_no_inode - no effect. That's the output:
>>
>> [1] MatAssemblyBegin_MPIAIJ(): Stash has 0 entries, uses 0 mallocs.
>> [0] MatStashScatterBegin_Private(): No of messages: 1
>> [0] MatStashScatterBegin_Private(): Mesg_to: 1: size: 704692232
>> [0] MatAssemblyBegin_MPIAIJ(): Stash has 88086528 entries, uses 13 mallocs.
>> [0] MatAssemblyEnd_SeqAIJ(): Matrix size: 553824 X 553824; storage space: 24984360 unneeded,19875384 used
>> [0] MatAssemblyEnd_SeqAIJ(): Number of mallocs during MatSetValues() is 0
>> [0] MatAssemblyEnd_SeqAIJ(): Maximum nonzeros in any row is 42
>>
>>
>> Am 2011-07-13 22:10, schrieb Matthew Knepley:
>>> On Wed, Jul 13, 2011 at 4:08 PM, Clemens Domanig
>>> <clemens.domanig at uibk.ac.at> wrote:
>>>
>>>     Hi everyone,
>>>
>>>     maybe someone can offer some debugging hints for my problem.
>>>
>>>
>>> It's possible that there is a bug in the inode routines. Please try
>>> running with -mat_no_inode
>>>
>>>    Thanks,
>>>
>>>       Matt
>>>
>>>     My FEM program uses a shell element that has, depending on the
>>>     geometry, 5 or 6 dof per node.
>>>
>>>     The program uses MPI for parallel solving (LU, mumps).
>>>     It works fine with all examples that have only 5 dof per node and
>>>     with those that have a mixture of 5 and 6 dof per node.
>>>     When doing examples that have 6 dof per node this happens:
>>>     * when using more than 2 MPI processes everything seems to be fine.
>>>     * when using 1 or 2 MPI processes MatAssemblyBegin() never finishes
>>>
>>>     This is the last output of -info, -mat_view_info, -vec_view_info
>>>     (with 2 MPI processes, matrix size 1107648x1107648)
>>>
>>>     [1] MatAssemblyBegin_MPIAIJ(): Stash has 0 entries, uses 0 mallocs.
>>>     [0] MatStashScatterBegin_Private(): No of messages: 1
>>>     [0] MatStashScatterBegin_Private(): Mesg_to: 1: size: 704692232
>>>     [0] MatAssemblyBegin_MPIAIJ(): Stash has 88086528 entries, uses 13
>>>     mallocs.
>>>     [0] MatAssemblyEnd_SeqAIJ(): Matrix size: 553824 X 553824; storage
>>>     space: 24984360 unneeded,19875384 used
>>>     [0] MatAssemblyEnd_SeqAIJ(): Number of mallocs during MatSetValues()
>>>     is 0
>>>     [0] MatAssemblyEnd_SeqAIJ(): Maximum nonzeros in any row is 42
>>>     [0] Mat_CheckInode(): Found 184608 nodes of 553824. Limit used: 5.
>>>     Using Inode routines
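
As an aside on the stash lines above: rank 0 is holding about 88 million entries (a single message of roughly 700 MB) for the other process, which suggests that a large share of the values computed on rank 0 belongs to rows owned by rank 1. One way to keep that stash bounded is to interleave flush assemblies into the element loop, as in the fragment below (nelem, nrow, rows, ncol, cols and vals are hypothetical placeholders for the real element data; this is only an illustrative fragment, not the program in question):

    PetscInt e;
    for (e = 0; e < nelem; e++) {
      /* compute the element contribution here, then: */
      MatSetValues(A, nrow, rows, ncol, cols, vals, ADD_VALUES);
      /* a flush assembly ships stashed off-process values now instead of all at
         the end; it is collective, so every rank must flush the same number of
         times or the run will deadlock */
      if (e % 10000 == 9999) {
        MatAssemblyBegin(A, MAT_FLUSH_ASSEMBLY);
        MatAssemblyEnd(A, MAT_FLUSH_ASSEMBLY);
      }
    }
    MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);
    MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);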
>>>
>>>     Thx for your help - respectfully C. Domanig
>>>
>>>
>>>
>>>
>>> --
>>> What most experimenters take for granted before they begin their
>>> experiments is infinitely more interesting than any results to which
>>> their experiments lead.
>>> -- Norbert Wiener
>>
>


