matrix assembling time
Barry Smith
bsmith at mcs.anl.gov
Fri Mar 13 19:55:00 CDT 2009
On Mar 13, 2009, at 12:48 PM, Ravi Kannan wrote:
> Hi,
> This is Ravi Kannan from CFD Research Corporation. One basic question
> on the ordering of linear solvers in PETSc: if my A matrix (in AX=B)
> is a sparse matrix and the bandwidth of A (i.e., the maximum distance
> of the nonzero elements from the diagonal) is high, does PETSc reorder
> the matrix/matrix-equations so as to solve more efficiently?
Depends on what you mean. All the direct solvers use reorderings
automatically to reduce fill and hence limit memory and flop usage.
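
The ordering used by the factorization can also be selected explicitly,
either with the runtime option -pc_factor_mat_ordering_type <nd,rcm,qmd,...>
or from code. A minimal sketch, assuming ksp is your already-created KSP
for the system (current PETSc calling conventions, which may differ
slightly from the 2009 release):

    PC pc;
    KSPGetPC(ksp, &pc);
    PCSetType(pc, PCLU);                           /* direct LU factorization */
    /* nested dissection; MATORDERINGRCM, MATORDERINGQMD are alternatives */
    PCFactorSetMatOrderingType(pc, MATORDERINGND);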
The iterative solvers do not. There is much less to gain by reordering
for iterative solvers (no memory gain, and only a relatively small
improvement in cache performance).
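
If you did want to reorder a matrix yourself before an iterative solve,
a rough sketch would be the following (A is assumed to be an assembled
AIJ matrix and b the right-hand side; again, current calling conventions):

    IS rperm, cperm;
    Mat Aperm;
    MatGetOrdering(A, MATORDERINGRCM, &rperm, &cperm); /* reverse Cuthill-McKee */
    MatPermute(A, rperm, cperm, &Aperm);
    VecPermute(b, rperm, PETSC_FALSE);  /* reorder the right-hand side to match */
    /* ... solve with Aperm, then un-permute the solution with cperm ... */
    ISDestroy(&rperm);
    ISDestroy(&cperm);
    MatDestroy(&Aperm);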
The "PETSc approach" is that one does the following
1) partitions the grid across processors (using a mesh partitioner)
and then
2) numbers the grid on each process in a reasonable ordering
BEFORE generating the linear system. Thus the sparse matrix
automatically gets
a good layout from the layout of the grid. So if you do 1) and 2) then
no additional
reordering is needed.
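
Step 1) can be done with PETSc's MatPartitioning interface. A rough
sketch, where nlocal, nglobal, ia, and ja are hypothetical names for the
local and global vertex counts and the mesh connectivity in CSR form
(ia/ja must be PetscMalloc'd; the adjacency matrix takes ownership of
them and frees them when destroyed):

    Mat adj;
    MatPartitioning part;
    IS newproc;
    MatCreateMPIAdj(PETSC_COMM_WORLD, nlocal, nglobal, ia, ja, NULL, &adj);
    MatPartitioningCreate(PETSC_COMM_WORLD, &part);
    MatPartitioningSetAdjacency(part, adj);
    MatPartitioningSetFromOptions(part);  /* e.g. -mat_partitioning_type parmetis */
    MatPartitioningApply(part, &newproc); /* entry i = rank that should own vertex i */
    /* ... migrate the vertices accordingly, then number them locally (step 2) ... */
    ISDestroy(&newproc);
    MatPartitioningDestroy(&part);
    MatDestroy(&adj);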
Barry
> If yes, is there any specific command to do the above?
>
> Thanks
> Ravi
>
>
>
> -----Original Message-----
> From: petsc-users-bounces at mcs.anl.gov
> [mailto:petsc-users-bounces at mcs.anl.gov] On Behalf Of Yixun Liu
> Sent: Friday, March 06, 2009 12:50 PM
> To: PETSC
> Subject: matrix assembling time
>
>
> Hi,
> Using PETSc, the assembly time for a mesh with 6000 vertices is about
> 14 seconds parallelized on 4 processors, but another sequential program
> based on the gmm library takes about 0.6 seconds. PETSc's solver is much
> faster than gmm's, but I don't know why its assembly is so slow, although
> I have preallocated enough space for the matrix.
>
> MatMPIAIJSetPreallocation(sparseMeshMechanicalStiffnessMatrix, 1000,
> PETSC_NULL, 1000, PETSC_NULL);
>
> Yixun
>
>
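
A note on the preallocation in the quoted message: requesting 1000
nonzeros in both the diagonal and off-diagonal blocks of every row
allocates far more memory than a 6000-vertex mesh needs. A minimal
sketch of exact per-row preallocation instead, where d_nnz and o_nnz
are hypothetical arrays computed from the mesh connectivity (A is the
Mat being preallocated):

    /* one entry per locally owned row: the number of nonzeros in the
       diagonal block and in the off-diagonal block, respectively */
    PetscInt *d_nnz, *o_nnz;
    /* ... fill d_nnz[i] and o_nnz[i] from the mesh ... */
    MatMPIAIJSetPreallocation(A, 0, d_nnz, 0, o_nnz);

Running with -info reports, among other things, how many mallocs
occurred during assembly; a nonzero malloc count is a common cause of
slow MatSetValues().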