[petsc-users] Assembly of local Mat's to a parallel Mat
Hui Zhang
mike.hui.zhang at hotmail.com
Fri Apr 12 12:44:06 CDT 2013
On Apr 12, 2013, at 7:10 PM, Jed Brown wrote:
> Hui Zhang <mike.hui.zhang at hotmail.com> writes:
>
>> On Apr 12, 2013, at 5:22 PM, Jed Brown wrote:
>>
>> I can understand this method. A further question: since Ai itself is shared
>> by many processors, MatSetValues to A should be called by only one of the
>> processors sharing Ai. Is that right?
>
> If your Ai are already shared, why can't you just start by creating the
> big block-diagonal system containing all the Ai along the diagonal?
> Then MatGetSubMatrix() will give you the part if you really need to do
> something separate with it.
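The extraction step Jed describes might look roughly like the following. This is a hedged sketch, not code from the thread: the helper name `ExtractBlock`, and the parameters `offset` (the global index in A where the block starts) and `n` (the block size) are illustrative assumptions.

```c
/* Hypothetical sketch: pull one diagonal block out of the assembled
 * block-diagonal matrix A with MatGetSubMatrix().  `offset` and `n`
 * (the block's first global index and its size) are assumptions made
 * for illustration. */
#include <petscmat.h>

PetscErrorCode ExtractBlock(Mat A, PetscInt offset, PetscInt n, Mat *Ai)
{
  IS             is;
  PetscInt       rstart, rend, first, len;
  PetscErrorCode ierr;

  PetscFunctionBegin;
  /* each rank requests the part of [offset, offset+n) that it owns */
  ierr  = MatGetOwnershipRange(A, &rstart, &rend);CHKERRQ(ierr);
  first = PetscMax(rstart, offset);
  len   = PetscMax(0, PetscMin(rend, offset + n) - first);
  ierr  = ISCreateStride(PetscObjectComm((PetscObject)A), len, first, 1, &is);CHKERRQ(ierr);
  /* using the same index set for rows and columns selects the
   * diagonal block */
  ierr = MatGetSubMatrix(A, is, is, MAT_INITIAL_MATRIX, Ai);CHKERRQ(ierr);
  ierr = ISDestroy(&is);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}
```

MatGetSubMatrix() is collective on A's communicator, so every rank of A must call it, even ranks that want an empty part of the block.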
Thanks! My difficulty lies in that Ai is the result of matrix computations on a
sub-communicator (of A's communicator), so I do not know the non-zero
structure of Ai a priori. Following your instructions, I think I first need to
call MatGetRow() on Ai and then MatSetValues() into A. Is there a better way?
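The MatGetRow()/MatSetValues() copy loop described above could be sketched as follows. This is a guess at the intended usage, not code from the thread; the helper name `CopyBlockIntoA` and the parameter `offset` (the global index in A at which this block's rows and columns start) are assumptions.

```c
/* Hypothetical sketch: copy the locally owned rows of Ai (which lives
 * on a sub-communicator) into the parallel matrix A, shifting all
 * indices by `offset`.  Only the ranks owning rows of Ai run the loop,
 * so each entry is inserted exactly once. */
#include <petscmat.h>

PetscErrorCode CopyBlockIntoA(Mat Ai, Mat A, PetscInt offset)
{
  PetscInt          rstart, rend, row, ncols, j, *gcols;
  const PetscInt    *cols;
  const PetscScalar *vals;
  PetscErrorCode    ierr;

  PetscFunctionBegin;
  ierr = MatGetOwnershipRange(Ai, &rstart, &rend);CHKERRQ(ierr);
  for (row = rstart; row < rend; row++) {
    PetscInt grow = row + offset;                /* row index in A */
    ierr = MatGetRow(Ai, row, &ncols, &cols, &vals);CHKERRQ(ierr);
    /* shift the column indices into A's global numbering */
    ierr = PetscMalloc(ncols*sizeof(PetscInt), &gcols);CHKERRQ(ierr);
    for (j = 0; j < ncols; j++) gcols[j] = cols[j] + offset;
    ierr = MatSetValues(A, 1, &grow, ncols, gcols, vals, INSERT_VALUES);CHKERRQ(ierr);
    ierr = PetscFree(gcols);CHKERRQ(ierr);
    ierr = MatRestoreRow(Ai, row, &ncols, &cols, &vals);CHKERRQ(ierr);
  }
  /* assembly is collective on A's communicator: ranks that own no
   * rows of Ai must still call MatAssemblyBegin/End on A */
  ierr = MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}
```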
>
>>> or you can create a block diagonal parallel
>>> matrix A_i constructed by joining together all the diagonal blocks.
>>
>> Which function can do this? I only found diagonal block Mat with each block
>> of the same size and dense. Thanks!
>
> There is no function to do it in-place, but you can just create the big
> matrix and loop through the small matrix inserting rows. It's usually
> better to start with the big matrix.
>