Preallocation for external direct solvers
Barry Smith
bsmith at mcs.anl.gov
Tue Oct 2 19:53:29 CDT 2007
Paul,
Unfortunately there is no good fix when you use MatCreateSeqAIJ() directly: the preallocation information is lost when the matrix type is later changed to the external solver's type.
You will need to use
MatCreate()
MatSetSizes()
MatSetFromOptions() or MatSetType(mat,MATSUPERLU_DIST,ierr)
MatSeqAIJSetPreallocation(....)
MatMPIAIJSetPreallocation(....)
Note that I call both preallocation routines so the code works properly on one or more processes; whichever call does not match the matrix's actual type is simply ignored.
You MUST call MatSeqAIJSetPreallocation() AFTER the matrix type has
been set, because setting the type rebuilds the matrix's internal data
structures and discards any previously supplied preallocation.
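For example, in Fortran the sequence might look something like this (a minimal sketch reusing the variable names from your code below; I am assuming a square matrix and letting PETSc choose the local sizes with PETSC_DECIDE):

call MatCreate(PETSC_COMM_WORLD,J,ierr)
call MatSetSizes(J,PETSC_DECIDE,PETSC_DECIDE,total_num_dofs,total_num_dofs,ierr)
! set the type here, e.g. from the -mat_type run time option
call MatSetFromOptions(J,ierr)
! preallocate only AFTER the type is set; the routine that does not
! match the final type is ignored
call MatSeqAIJSetPreallocation(J,max_number_nonzeros_row,PETSC_NULL_INTEGER,ierr)
! for the parallel case I simply reuse the same per-row bound for the
! off-diagonal part
call MatMPIAIJSetPreallocation(J,max_number_nonzeros_row,PETSC_NULL_INTEGER, &
     max_number_nonzeros_row,PETSC_NULL_INTEGER,ierr)

With MatSetFromOptions() you can then select the external solver's matrix type at run time, for example with -mat_type superlu_dist.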
I have tried to update the manual pages for the matrix types to clarify
this for future users.
Thank you for pointing out the problem.
Barry
On Sat, 14 Apr 2007, Paul T. Bauman wrote:
> Hello,
>
> I'm developing some code and am using direct solvers (for the moment), in
> particular MUMPS and SuperLU. I noticed that PETSc is not preallocating the
> memory for the matrix even though I specify the maximum number of entries per row.
>
> call MatCreateSeqAIJ(PETSC_COMM_WORLD,total_num_dofs,total_num_dofs, &
>      max_number_nonzeros_row,PETSC_NULL_INTEGER,J,ierr)
>
> call MatSetFromOptions(J,ierr)
>
> call SNESSetJacobian(snes,J,J,FormJacobian,PETSC_NULL_OBJECT,ierr)
>
> When I run with -info, I see:
>
> [0] User provided function(): (Fortran):PETSc successfully started: procs 1
> [0] User provided function(): Running on machine: dhcp-67-30.ices.utexas.edu
> [0] PetscCommDuplicate(): Duplicating a communicator 1140850688 -2080374784 max tags = 2147483647
> [0] PetscCommDuplicate(): Using internal PETSc communicator 1140850688 -2080374784
> [0] PetscCommDuplicate(): Using internal PETSc communicator 1140850688 -2080374784
> [0] PetscCommDuplicate(): Using internal PETSc communicator 1140850688 -2080374784
> [0] PetscCommDuplicate(): Using internal PETSc communicator 1140850688 -2080374784
> [0] MatConvert_AIJ_AIJMUMPS(): Using MUMPS for LU factorization and solves.
> [0] PetscCommDuplicate(): Using internal PETSc communicator 1140850688 -2080374784
> 0 SNES Function norm 8.500000000000e-01
> [0] MatSetUpPreallocation(): Warning not preallocating matrix storage
>
> So, of course, this really slows down the code even for small problems (20000
> dofs). Same story for SuperLU. Should I use a different call to do the
> preallocation?
>
> Thanks for any suggestions.
>
> Best,
>
> Paul
>
>