[petsc-users] MatSetSizes for MATMPIBAIJ
NovA
av.nova at gmail.com
Fri Feb 5 17:28:05 CST 2010
Thanks for the quick response!
2010/2/6 Matthew Knepley <knepley at gmail.com>:
> 1) It will distribute block rows, so you will get 8 and 4 as you want
How can it distribute the rows that way if it doesn't know the block size yet?
Let's continue the example:
MatCreate(comm, &A);
MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, 12, 12);
MatSetType(A, MATMPIBAIJ);
MatGetOwnershipRange(A, &rowS, &rowE);
Then loop over the local block rows, from rowS/4 to rowE/4, to preallocate
storage with MatMPIBAIJSetPreallocation(A, 4, ...).
What will rowS and rowE be here? I think they may not be divisible
by 4 (the block size)...
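If it turns out that PETSC_DECIDE splits plain rows rather than block rows, my
fallback would be to compute the block-row split myself and pass explicit local
sizes. A rough sketch of what I mean (the preallocation counts are just
placeholder upper bounds):
  PetscInt bs = 4, Nb = 3;              /* block size, global number of block rows */
  PetscInt nb = PETSC_DECIDE;           /* local number of block rows */
  PetscSplitOwnership(comm, &nb, &Nb);  /* should give 2 and 1 block rows on 2 processes */
  Mat A;
  MatCreate(comm, &A);
  MatSetSizes(A, nb*bs, nb*bs, Nb*bs, Nb*bs);   /* 8 and 4 local rows, 12 global */
  MatSetType(A, MATMPIBAIJ);
  MatMPIBAIJSetPreallocation(A, bs, 3, NULL, 3, NULL);  /* placeholder block counts */
  PetscInt rowS, rowE, ib;
  MatGetOwnershipRange(A, &rowS, &rowE);  /* multiples of bs by construction */
  for (ib = rowS/bs; ib < rowE/bs; ib++) {
    /* fill block row ib, e.g. with MatSetValuesBlocked() */
  }
That way rowS/4 and rowE/4 are whole block indices. But I'd rather avoid this
if PETSc can do the block-aware split by itself.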
>
> 2) Warning: This is an incredibly small problem. Using 2 processors
> might not show any speedup at all.
Sure! It's just an example. I suppose it'll actually be much slower on two
processors.
Regards!
Andrey
>
> Matt
>
> On Fri, Feb 5, 2010 at 5:03 PM, NovA <av.nova at gmail.com> wrote:
>>
>> Hi everybody!
>>
>> I'm looking for the best way to distribute an MPIBAIJ matrix among
>> several processors.
>>
>> For example, I have a square matrix of 3x3 blocks, each block 4x4 (i.e.
>> 12x12 values), and I need to distribute it among 2 processors. The generic
>> way to create such a matrix would be:
>> Mat A;
>> MatCreate(comm, &A);
>> MatSetSizes(A, loc_rows, loc_rows, 12, 12);
>> MatSetType(A, MATMPIBAIJ);
>>
>> What is the optimal value for loc_rows here?
>>
>> Can I use PETSC_DECIDE for it? I suppose this would lead to a split of 6
>> rows per processor, which is not consistent with the block size. How will
>> MatMPIBAIJSetPreallocation(A, 4, ...) deal with that?
>>
>> Most likely, I have to manually set loc_rows to 8 and 4 according to
>> the processor rank (roughly as sketched below). But perhaps I'm missing
>> some cleaner way, where PETSc really decides. :)
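>> Just to illustrate the manual variant I have in mind (a rough sketch for
>> exactly two processes):
>>   PetscMPIInt rank;
>>   MPI_Comm_rank(comm, &rank);
>>   PetscInt loc_rows = (rank == 0) ? 8 : 4;   /* block-aligned split by hand */
>>   MatSetSizes(A, loc_rows, loc_rows, 12, 12);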
>>
>> Thanks in advance for any comments.
>>
>> Regards!
>> Andrey
>
>
>
> --
> What most experimenters take for granted before they begin their experiments
> is infinitely more interesting than any results to which their experiments
> lead.
> -- Norbert Wiener
>