<div dir="ltr"><div class="gmail_extra"><div class="gmail_quote">On Mon, Mar 30, 2015 at 5:59 AM, Florian Lindner <span dir="ltr"><<a href="mailto:mailinglists@xgm.de" target="_blank">mailinglists@xgm.de</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><br>
<br>
> On Friday, 27 March 2015 at 07:34:56, Matthew Knepley wrote:
> > On Fri, Mar 27, 2015 at 7:31 AM, Florian Lindner <mailinglists@xgm.de> wrote:
> > >
> > > On Friday, 27 March 2015 at 07:26:11, Matthew Knepley wrote:
> > > > On Fri, Mar 27, 2015 at 4:28 AM, Florian Lindner <mailinglists@xgm.de> wrote:
> > > >
> > > > > On Thursday, 26 March 2015 at 07:34:27, Jed Brown wrote:
> > > > > > Florian Lindner <mailinglists@xgm.de> writes:
> > > > > >
> > > > > > > Hello,
> > > > > > >
> > > > > > > I'm using PETSc with petsc4py.
> > > > > > >
> > > > > > > A matrix is created like this:
> > > > > > >
> > > > > > > MPIrank = MPI.COMM_WORLD.Get_rank()
> > > > > > > MPIsize = MPI.COMM_WORLD.Get_size()
> > > > > > > print("MPI Rank = ", MPIrank)
> > > > > > > print("MPI Size = ", MPIsize)
> > > > > > > parts = partitions()
> > > > > > >
> > > > > > > print("Dimension= ", nSupport + dimension, "bsize = ", len(parts[MPIrank]))
> > > > > > >
> > > > > > > MPI.COMM_WORLD.Barrier() # Just to keep the output together
> > > > > > > A = PETSc.Mat(); A.createDense( (nSupport + dimension, nSupport + dimension), bsize = len(parts[MPIrank]) ) # <-- crash here
> > > > > >
> > > > > > bsize is collective (must be the same on all processes). It is used for vector-valued problems (like elasticity -- bs=3 in 3 dimensions).
> > > > >
> > > > > It seems I'm still misunderstanding the bsize parameter.
> > > > >
> > > > > If I distribute a 10x10 matrix on three ranks, I need a non-homogeneous distribution, and that's what PETSc does itself:
> > > > >
> > > >
> > > > blockSize really means the uniform block size of the matrix, thus it HAS to divide the global size. If it does not, you do not have a uniform block size, you have a bunch of different-sized blocks.
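
As an illustration, a minimal petsc4py sketch of the vector-valued case that bsize is meant for (the numbers are made up; each process owns whole blocks, so the block size divides every local and global size):

from petsc4py import PETSc

bs = 3                    # block size: 3 unknowns per node, e.g. 3D elasticity (assumed)
nLocalNodes = 4           # nodes owned by this process (assumed example value)
A = PETSc.Mat().createAIJ(
    size=((nLocalNodes * bs, PETSc.DETERMINE), (nLocalNodes * bs, PETSc.DETERMINE)),
    bsize=bs, comm=PETSc.COMM_WORLD)
A.setUp()
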
> > >
> > > But how can I set a parallel layout when the size of the matrix is not divisible by the number of ranks? When I omit bsize, PETSc does that for me, by using block sizes of 4, 3 and 3 on the three different ranks. How can I set such a parallel layout manually?
> > >
> >
> > I am going to reply in C because it is my native language:
> >
> > MatCreate(comm, &A);
> > MatSetSizes(A, m, n, PETSC_DETERMINE, PETSC_DETERMINE);
> > MatSetFromOptions(A);
> > <Preallocation stuff here>
> >
> > You have each proc give its local size.
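
A rough petsc4py equivalent of that pattern might look like this (untested sketch; the local sizes and the preallocation value are placeholders):

from petsc4py import PETSc

m = n = 4                                    # this process's local row/column counts (placeholder)
A = PETSc.Mat().create(comm=PETSc.COMM_WORLD)
A.setSizes([(m, PETSc.DETERMINE), (n, PETSc.DETERMINE)])
A.setFromOptions()
A.setPreallocationNNZ(10)                    # "<Preallocation stuff here>"
A.setUp()
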
>
> OK, that seems to be what I'm looking for...
>
> I've experienced some things where my understanding of PETSc still seems to be far off regarding the parallel layout:
>
> I have this code:
>
> ierr = MatSetSizes(matrix, 10, 10, PETSC_DECIDE, PETSC_DECIDE); CHKERRQ(ierr);
>
> MatGetOwnershipRange(matrix, &ownerBegin, &ownerEnd);
> cout << "Rank = " << MPIrank << " Begin = " << ownerBegin << " End = " << ownerEnd << endl;
>
> Complete test code: http://pastebin.com/xFM1fJnQ
>
> If started with mpirun -n 3, it prints
>
> Rank = 2 Begin = 0 End = 10
> Rank = 1 Begin = 0 End = 10
> Rank = 0 Begin = 0 End = 10

You created three serial matrices since you used PETSC_COMM_SELF in MatCreate().

  Thanks,

     Matt

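For reference, a minimal petsc4py sketch of the same check with the matrix created on the parallel communicator instead of PETSC_COMM_SELF (it assumes a run with mpirun -n 3 and the 4/3/3 split used above):

from petsc4py import PETSc

comm = PETSc.COMM_WORLD                      # parallel communicator, not COMM_SELF
sizes = [4, 3, 3]                            # local rows/columns per rank (3 ranks assumed)
m = sizes[comm.getRank()]

A = PETSc.Mat().create(comm=comm)
A.setSizes([(m, PETSc.DETERMINE), (m, PETSc.DETERMINE)])
A.setUp()
begin, end = A.getOwnershipRange()
print("Rank =", comm.getRank(), "Begin =", begin, "End =", end)
# expected: rank 0 -> 0..4, rank 1 -> 4..7, rank 2 -> 7..10
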
> The same happens when I manually set the sizes per processor, as you suggested.
>
> int sizes[] = {4, 3, 3};
> MatSetSizes(matrix, sizes[MPIrank], sizes[MPIrank], PETSC_DECIDE, PETSC_DECIDE);
>
> I wonder why the range always starts from 0; I was rather expecting something like
>
> Rank = 2 Begin = 7 End = 10
> Rank = 1 Begin = 4 End = 7
> Rank = 0 Begin = 0 End = 4
>
> Petsc4py prints what I expect:
>
> Rank = 1 Range = (4, 7) Size = 3
> Rank = 2 Range = (7, 10) Size = 3
> Rank = 0 Range = (0, 4) Size = 4
>
> Is this the way it should be?
>
> petsc4py's Mat::setSizes combines MatSetSizes and MatSetBlockSizes. I had some trouble figuring out what the correct datatype for size was, but I have it now:
>
> size = [ (m, M), (n, N) ]
> size = [ (sizes[rank], PETSc.DETERMINE), (sizes[rank], PETSc.DETERMINE) ]
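
A short sketch of how that size value can be combined with the optional bsize keyword (illustrative only; as discussed above, a block size has to divide the local and global sizes, and it defaults to 1 when omitted):

from petsc4py import PETSc

rank = PETSc.COMM_WORLD.getRank()
sizes = [4, 3, 3]                            # per-rank local sizes from the example above (3 ranks assumed)
size = [(sizes[rank], PETSc.DETERMINE), (sizes[rank], PETSc.DETERMINE)]

A = PETSc.Mat().create(comm=PETSc.COMM_WORLD)
A.setSizes(size, bsize=1)                    # bsize=1 keeps the uneven 4/3/3 layout valid
A.setFromOptions()
A.setUp()
print("Rank =", rank, "Range =", A.getOwnershipRange())
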
>
> The documentation on the Python bindings is rather sparse...
>
> Thanks,
> Florian

--
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener