<div dir="ltr">Ok, i created a tiny testcase just for this,<div><br></div><div>The output from n# calls are as follows:</div><div><br></div><div><font face="monospace, monospace">n1:</font></div><div><div><font face="monospace, monospace">Mat Object: 1 MPI processes</font></div><div><font face="monospace, monospace"> type: mpiaij</font></div><div><font face="monospace, monospace">row 0: (0, 1.) (1, 2.) (2, 4.) (3, 3.) </font></div><div><font face="monospace, monospace">row 1: (0, 2.) (1, 1.) (2, 3.) (3, 4.) </font></div><div><font face="monospace, monospace">row 2: (0, 4.) (1, 3.) (2, 1.) (3, 2.) </font></div><div><font face="monospace, monospace">row 3: (0, 3.) (1, 4.) (2, 2.) (3, 1.) </font></div></div><div><font face="monospace, monospace"><br></font></div><div><span style="font-family:monospace,monospace">n2:</span><br></div><div><div><font face="monospace, monospace">Mat Object: 2 MPI processes</font></div><div><font face="monospace, monospace"> type: mpiaij</font></div><div><font face="monospace, monospace">row 0: (0, 1.) (1, 2.) (2, 4.) (3, 3.) </font></div><div><font face="monospace, monospace">row 1: (0, 2.) (1, 1.) (2, 3.) (3, 4.) </font></div><div><font face="monospace, monospace">row 2: (0, 1.) (1, 2.) (2, 4.) (3, 3.) </font></div><div><font face="monospace, monospace">row 3: (0, 2.) (1, 1.) (2, 3.) (3, 4.) </font></div></div><div><font face="monospace, monospace"><br></font></div><div><font face="monospace, monospace">n4:</font></div><div><div><font face="monospace, monospace">Mat Object: 4 MPI processes</font></div><div><font face="monospace, monospace"> type: mpiaij</font></div><div><font face="monospace, monospace">row 0: (0, 1.) (1, 2.) (2, 4.) (3, 3.) </font></div><div><font face="monospace, monospace">row 1: (0, 1.) (1, 2.) (2, 4.) (3, 3.) </font></div><div><font face="monospace, monospace">row 2: (0, 1.) (1, 2.) (2, 4.) (3, 3.) </font></div><div><font face="monospace, monospace">row 3: (0, 1.) (1, 2.) (2, 4.) (3, 3.) </font></div></div><div><font face="monospace, monospace"><br></font></div><div><font face="monospace, monospace"><br></font></div><div><font face="monospace, monospace"><br></font></div><div><font face="arial, helvetica, sans-serif">It really gets messed, no idea what's happening.</font></div><div><br></div><div><br></div><div><br></div></div><div class="gmail_extra"><br><div class="gmail_quote">On Mon, Sep 26, 2016 at 3:12 PM, Barry Smith <span dir="ltr"><<a href="mailto:bsmith@mcs.anl.gov" target="_blank">bsmith@mcs.anl.gov</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><span class=""><br>
> On Sep 26, 2016, at 5:07 PM, Manuel Valera <mvalera@mail.sdsu.edu> wrote:
>
> Ok, I was using a big matrix before; from a smaller test case I got the output, and effectively it looks like it is not read in correctly at all. Results are attached for the DRAW viewer; the output is too big for STDOUT even in the small test case. n# is the number of processors requested.

   You need to construct a very small test case so you can determine why the values do not end up where you expect them. There is no way around it.

>
> Is there a way to create the matrix on one node and then distribute it as needed to the rest? Maybe that would work.

   No, that is not scalable. You become limited by the memory of the one node.
<div><div class="h5"><br>
><br>
> Thanks<br>
><br>
> On Mon, Sep 26, 2016 at 2:40 PM, Barry Smith <<a href="mailto:bsmith@mcs.anl.gov">bsmith@mcs.anl.gov</a>> wrote:<br>
><br>
> How large is the matrix? It will take a very long time if the matrix is large. Debug with a very small matrix.<br>
><br>
> Barry<br>
><br>
> > On Sep 26, 2016, at 4:34 PM, Manuel Valera <<a href="mailto:mvalera@mail.sdsu.edu">mvalera@mail.sdsu.edu</a>> wrote:<br>
> ><br>
> > Indeed there is something wrong with that call; it hangs indefinitely, showing only:
> >
> > Mat Object: 1 MPI processes
> > type: mpiaij
> >
> > It draws my attention that this program works for 1 processor but not for more, yet it doesn't show anything for that viewer in either case.
> >
> > Thanks for the insight on the redundant calls; it is not very clear in the documentation which calls are included in others.
> >
> >
> >
> > On Mon, Sep 26, 2016 at 2:02 PM, Barry Smith <bsmith@mcs.anl.gov> wrote:
> >
> > The call to MatCreateMPIAIJWithArrays() is likely interpreting the values you pass in differently than you expect.
> >
> > Put a call to MatView(Ap,PETSC_VIEWER_STDOUT_WORLD,ierr) after the MatCreateMPIAIJWithArrays() to see what PETSc thinks the matrix is.
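
A rough sketch of that check in Fortran, reusing the names Ap and ierr from the code quoted further down (an illustration only, not the actual file):

    ! right after MatCreateMPIAIJWithArrays() has built Ap:
    call MatView(Ap,PETSC_VIEWER_STDOUT_WORLD,ierr)   ! print the parallel matrix as PETSc stores it

The same information can also be obtained without touching the code by running with the option -mat_view.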
> >
> >
> > > On Sep 26, 2016, at 3:42 PM, Manuel Valera <mvalera@mail.sdsu.edu> wrote:
> > >
> > > Hello,
> > >
> > > I'm working on solving a linear system in parallel. Following ex12 of the KSP tutorials I don't see any major complication in doing so, so starting from a working linear system solver with PCJACOBI and KSPGCR I made only the following changes:
> > >
> > > call MatCreate(PETSC_COMM_WORLD,Ap,ierr)
> > > ! call MatSetType(Ap,MATSEQAIJ,ierr)
> > > call MatSetType(Ap,MATMPIAIJ,ierr) ! parallelization
> > >
> > > call MatSetSizes(Ap,PETSC_DECIDE,PETSC_DECIDE,nbdp,nbdp,ierr);
> > >
> > > ! call MatSeqAIJSetPreallocationCSR(Ap,iapi,japi,app,ierr)
> > > call MatSetFromOptions(Ap,ierr)
> >
> > Note that none of the lines above are needed (or do anything) because MatCreateMPIAIJWithArrays() creates the matrix from scratch itself.
> >
> > Barry
> >
> > > ! call MatCreateSeqAIJWithArrays(PETSC_COMM_WORLD,nbdp,nbdp,iapi,japi,app,Ap,ierr)
> > > call MatCreateMPIAIJWithArrays(PETSC_COMM_WORLD,floor(real(nbdp)/sizel),PETSC_DECIDE,nbdp,nbdp,iapi,japi,app,Ap,ierr)
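
One thing worth checking here: MatCreateMPIAIJWithArrays() expects each rank to pass the CSR arrays for its own local rows only (row pointer starting at 0, 0-based global column indices), not the CSR of the whole matrix. If iapi, japi and app hold the full global CSR on every rank, then every rank contributes the same first block of rows, which would match the repeated rows in the n2/n4 output at the top of the thread. Below is a minimal sketch of extracting the local block on each rank, under that assumption; mloc, rstart, nzloc, rank and the *_loc work arrays (assumed allocated large enough) are hypothetical names, not from the original file:

    ! each rank keeps only the rows it owns; row pointer re-based to start at 0
    mloc   = nbdp/sizel                        ! local row count
    rstart = rank*mloc                         ! first owned global row (0-based)
    if (rank == sizel-1) mloc = nbdp - rstart  ! last rank picks up the remainder
    ia_loc(1:mloc+1) = iapi(rstart+1:rstart+mloc+1) - iapi(rstart+1)
    nzloc = ia_loc(mloc+1)                     ! nonzeros in the local rows
    ja_loc(1:nzloc) = japi(iapi(rstart+1)+1:iapi(rstart+1)+nzloc)   ! global column indices
    a_loc(1:nzloc)  = app(iapi(rstart+1)+1:iapi(rstart+1)+nzloc)
    call MatCreateMPIAIJWithArrays(PETSC_COMM_WORLD,mloc,PETSC_DECIDE, &
         nbdp,nbdp,ia_loc,ja_loc,a_loc,Ap,ierr)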
> > >
> > >
> > > The commented-out lines are the ones from the sequential implementation.
> > >
> > > So, it does not complain at runtime until it reaches KSPSolve(), where it fails with the following error:
> > >
> > >
> > > [1]PETSC ERROR: --------------------- Error Message --------------------------------------------------------------
> > > [1]PETSC ERROR: Object is in wrong state
> > > [1]PETSC ERROR: Matrix is missing diagonal entry 0
> > > [1]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting.
> > > [1]PETSC ERROR: Petsc Release Version 3.7.3, unknown
> > > [1]PETSC ERROR: ./solvelinearmgPETSc on a arch-linux2-c-debug named valera-HP-xw4600-Workstation by valera Mon Sep 26 13:35:15 2016
> > > [1]PETSC ERROR: Configure options --with-cc=gcc --with-cxx=g++ --with-fc=gfortran --download-fblaslapack=1 --download-mpich=1 --download-ml=1
> > > [1]PETSC ERROR: #1 MatILUFactorSymbolic_SeqAIJ() line 1733 in /home/valera/v5PETSc/petsc/petsc/src/mat/impls/aij/seq/aijfact.c
> > > [1]PETSC ERROR: #2 MatILUFactorSymbolic() line 6579 in /home/valera/v5PETSc/petsc/petsc/src/mat/interface/matrix.c
> > > [1]PETSC ERROR: #3 PCSetUp_ILU() line 212 in /home/valera/v5PETSc/petsc/petsc/src/ksp/pc/impls/factor/ilu/ilu.c
> > > [1]PETSC ERROR: #4 PCSetUp() line 968 in /home/valera/v5PETSc/petsc/petsc/src/ksp/pc/interface/precon.c
> > > [1]PETSC ERROR: #5 KSPSetUp() line 390 in /home/valera/v5PETSc/petsc/petsc/src/ksp/ksp/interface/itfunc.c
> > > [1]PETSC ERROR: #6 PCSetUpOnBlocks_BJacobi_Singleblock() line 650 in /home/valera/v5PETSc/petsc/petsc/src/ksp/pc/impls/bjacobi/bjacobi.c
> > > [1]PETSC ERROR: #7 PCSetUpOnBlocks() line 1001 in /home/valera/v5PETSc/petsc/petsc/src/ksp/pc/interface/precon.c
> > > [1]PETSC ERROR: #8 KSPSetUpOnBlocks() line 220 in /home/valera/v5PETSc/petsc/petsc/src/ksp/ksp/interface/itfunc.c
> > > [1]PETSC ERROR: #9 KSPSolve() line 600 in /home/valera/v5PETSc/petsc/petsc/src/ksp/ksp/interface/itfunc.c
> > > At line 333 of file solvelinearmgPETSc.f90
> > > Fortran runtime error: Array bound mismatch for dimension 1 of array 'sol' (213120/106560)
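
The last two lines hint at a second, separate issue: 213120/106560 is exactly a factor of two, which is consistent with a 'sol' array dimensioned for the global problem size being paired with the local part of a distributed Vec (or the other way around) on two processes. A small sketch of sizing such an array from the Vec itself; x, nlocal and sol_local are hypothetical names, not from the original file:

    PetscInt nlocal
    PetscScalar, pointer :: sol_local(:)
    call VecGetLocalSize(x,nlocal,ierr)     ! only the rows owned by this rank
    call VecGetArrayF90(x,sol_local,ierr)   ! sol_local spans exactly those nlocal entries
    ! ... work with sol_local(1:nlocal) ...
    call VecRestoreArrayF90(x,sol_local,ierr)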
> > >
> > >
> > > This code works with -n 1 (a single core), but it gives this error when using more than one core.
> > >
> > > What am I missing?
> > >
> > > Regards,
> > >
> > > Manuel.
> > >
> > > <solvelinearmgPETSc.f90>
> >
> >
>
>
> <n4.png><n2.png><n1.png>