[petsc-users] Debugging hints welcome

Clemens Domanig clemens.domanig at uibk.ac.at
Wed Jul 13 15:08:36 CDT 2011


Hi everyone,

maybe someone can offer some debugging hints for my problem.

My FEM program uses a shell element that has, depending on the geometry,
5 or 6 dof per node.

The program uses MPI for parallel solving (LU factorization via MUMPS).
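For reference, I select the direct solver through the usual runtime
options; the exact option names are from memory and may differ slightly
between PETSc versions:

    -ksp_type preonly -pc_type lu -pc_factor_mat_solver_package mumps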
It works fine with all examples that have only 5 dof per node, and also
with those that have a mixture of 5 and 6 dof per node.
With examples that have 6 dof per node everywhere, this happens:
* when using more than 2 MPI processes, everything seems to be fine.
* when using 1 or 2 MPI processes, MatAssemblyBegin() never finishes 
(a sketch of my assembly loop follows below).
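
The assembly follows the standard MatSetValues() / MatAssemblyBegin() /
MatAssemblyEnd() pattern. Here is a minimal sketch; the function name,
array sizes, and element data layout are placeholders, not my actual code:

    #include <petscmat.h>

    /* Minimal sketch of the element assembly loop. Each rank inserts the
       element matrices of its local elements with ADD_VALUES, then the
       matrix is assembled. */
    PetscErrorCode AssembleStiffness(Mat K, PetscInt nLocalElems)
    {
      PetscErrorCode ierr;
      PetscInt       e;

      for (e = 0; e < nLocalElems; e++) {
        PetscInt    ndof;       /* total element dof: nodes x (5 or 6) */
        PetscInt    rows[36];   /* global dof indices of this element */
        PetscScalar ke[36*36];  /* dense element stiffness matrix */
        /* ... fill ndof, rows and ke from the element data ... */
        ierr = MatSetValues(K, ndof, rows, ndof, rows, ke, ADD_VALUES);CHKERRQ(ierr);
      }
      ierr = MatAssemblyBegin(K, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
      ierr = MatAssemblyEnd(K, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
      return 0;
    }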

This is the last output of -info, -mat_view_info, -vec_view_info (with 2 
MPI processes, matrix size 1107648x1107648):

[1] MatAssemblyBegin_MPIAIJ(): Stash has 0 entries, uses 0 mallocs.
[0] MatStashScatterBegin_Private(): No of messages: 1
[0] MatStashScatterBegin_Private(): Mesg_to: 1: size: 704692232
[0] MatAssemblyBegin_MPIAIJ(): Stash has 88086528 entries, uses 13 mallocs.
[0] MatAssemblyEnd_SeqAIJ(): Matrix size: 553824 X 553824; storage space: 24984360 unneeded,19875384 used
[0] MatAssemblyEnd_SeqAIJ(): Number of mallocs during MatSetValues() is 0
[0] MatAssemblyEnd_SeqAIJ(): Maximum nonzeros in any row is 42
[0] Mat_CheckInode(): Found 184608 nodes of 553824. Limit used: 5. Using Inode routines
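
What strikes me is the stash size on rank 0 (88086528 entries), which as
far as I understand means rank 0 sets almost all of those values into rows
owned by the other rank. To check where my element rows land, I could add
something like the following (a sketch, assuming the standard
MatGetOwnershipRange() API):

    #include <petscmat.h>

    /* Diagnostic sketch: print each rank's row ownership range, to see
       whether MatSetValues() mostly targets locally owned rows. */
    PetscErrorCode PrintOwnership(Mat K)
    {
      PetscErrorCode ierr;
      PetscMPIInt    rank;
      PetscInt       rstart, rend;

      ierr = MPI_Comm_rank(PETSC_COMM_WORLD, &rank);CHKERRQ(ierr);
      ierr = MatGetOwnershipRange(K, &rstart, &rend);CHKERRQ(ierr);
      ierr = PetscPrintf(PETSC_COMM_SELF, "[%d] owns rows %D to %D\n",
                         (int)rank, rstart, rend);CHKERRQ(ierr);
      return 0;
    }

If most element rows on rank 0 fall outside [rstart, rend), that would
explain the huge stash, though not yet why MatAssemblyBegin() hangs.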

Thanks for your help - respectfully, C. Domanig

