[petsc-users] Customizing MatSetValuesBlocked(...)

Jinquan Zhong jzhong at scsolutions.com
Wed Aug 8 13:47:48 CDT 2012


Thanks, Satish.

Are you referring to preallocating memory for A before calling MatSetValues(A, ...)?  I didn't do that.

Jinquan

-----Original Message-----
From: petsc-users-bounces at mcs.anl.gov [mailto:petsc-users-bounces at mcs.anl.gov] On Behalf Of Satish Balay
Sent: Wednesday, August 08, 2012 11:45 AM
To: PETSc users list
Subject: Re: [petsc-users] Customizing MatSetValuesBlocked(...)

First - can you confirm that you've preallocated the matrix correctly?

Run with '-info', look for 'mallocs', and make sure it's zero.
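
For reference, a minimal preallocation sketch for an MPIAIJ matrix (the sizes and nonzero counts below are illustrative placeholders, not values from your problem):

  Mat      A;
  PetscInt m = 100, N = 1000;   /* placeholder local rows / global columns */

  MatCreate(PETSC_COMM_WORLD, &A);
  MatSetSizes(A, m, PETSC_DECIDE, PETSC_DETERMINE, N);
  MatSetType(A, MATMPIAIJ);
  /* 10 and 5 are placeholder upper bounds on nonzeros per row in the
     diagonal and off-diagonal blocks; exact per-row counts passed via
     the d_nnz/o_nnz array arguments are better still. */
  MatMPIAIJSetPreallocation(A, 10, NULL, 5, NULL);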

The second cost is the data movement between processes.  To speed this up, try a few intermediate calls to MatAssemblyBegin/End() with MAT_FLUSH_ASSEMBLY.

i.e.:

insert
<flush>
insert
<flush>
insert
<final-assembly>
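
In code, that pattern looks roughly like this (the batch count and the batch boundaries are up to you; variable names are illustrative):

  PetscInt batch, nbatches = 4;  /* placeholder batch count */

  for (batch = 0; batch < nbatches; batch++) {
    /* ... MatSetValues() calls for this batch of entries ... */
    MatAssemblyBegin(A, MAT_FLUSH_ASSEMBLY);
    MatAssemblyEnd(A, MAT_FLUSH_ASSEMBLY);
  }
  MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);
  MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);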

Satish

On Wed, 8 Aug 2012, Jinquan Zhong wrote:

> Dear PETSc folks,
> 
> I have a pressing need to resolve a performance issue in building a sparse matrix in PETSc.
> 
> I am trying to input the inverted matrix from ScaLAPACK into PETSc.  This matrix is partitioned into a series of submatrices whose indices follow a block-cyclic distribution pattern.
> 
> Here are the options I have tested for building it in PETSc, and their limitations:
> 
> 
> 1.       MatSetValue(s)(...) is very easy to use.  However, the assembly process is extremely slow and expensive;
> 
> 2.       MatSetValuesBlocked(...) can speed up the assembly.  However, it is not well suited to block-cyclically distributed matrices;
> 
> 3.       MatCreateMPIAIJWithArrays(...) seems promising.  However, it is not straightforward to use for block-cyclically distributed matrices; and
> 
> 4.       Customizing MatSetValuesBlocked(...) so that we can specify the indices for (*mat->ops->setvaluesblocked)(...).  However, I have trouble locating the proper place to set
> 
> mat->ops->setvaluesblocked=&unknownFunctionName
> 
> The question is: what other options do you have to resolve this performance issue?  From your experience, do you have a worked example that builds a PETSc sparse matrix with good performance from block-cyclically distributed matrices obtained from ScaLAPACK?

> 
> I appreciate your inputs on this issue.
> 
> Thanks,
> 
> Jinquan
> 
> 
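
For the block-cyclic indexing asked about above, here is a minimal sketch of mapping a local entry of a ScaLAPACK-distributed array to global indices for MatSetValues().  IndxL2G is a 0-based analogue of ScaLAPACK's INDXL2G; the function names, the zero source-process assumption, and the entry-by-entry insertion are all illustrative:

  #include <petscmat.h>

  /* 0-based analogue of ScaLAPACK's INDXL2G, assuming the distribution
     starts on process 0: local index l, block size nb, process
     coordinate p out of np processes in this grid dimension. */
  static PetscInt IndxL2G(PetscInt l, PetscInt nb, PetscInt p, PetscInt np)
  {
    return np * nb * (l / nb) + p * nb + (l % nb);
  }

  /* Insert a local block-cyclic array a_local (column-major, leading
     dimension lld) into a preallocated PETSc matrix A. */
  static PetscErrorCode InsertLocalArray(Mat A, const PetscScalar *a_local,
                                         PetscInt mloc, PetscInt nloc,
                                         PetscInt lld, PetscInt mb, PetscInt nb,
                                         PetscInt myrow, PetscInt mycol,
                                         PetscInt nprow, PetscInt npcol)
  {
    PetscInt il, jl;

    for (jl = 0; jl < nloc; jl++) {
      PetscInt jg = IndxL2G(jl, nb, mycol, npcol);    /* global column */
      for (il = 0; il < mloc; il++) {
        PetscInt ig = IndxL2G(il, mb, myrow, nprow);  /* global row */
        MatSetValues(A, 1, &ig, 1, &jg, &a_local[il + jl*lld], INSERT_VALUES);
      }
    }
    return 0;
  }

Batching whole local columns (or blocks) into each MatSetValues() call, combined with the periodic MAT_FLUSH_ASSEMBLY shown earlier, should cut the per-call overhead substantially.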


