[petsc-users] Solve KSP in parallel.
Manuel Valera
mvalera at mail.sdsu.edu
Mon Sep 26 17:51:17 CDT 2016
Ok, I created a tiny test case just for this.
The output from the n# runs is as follows:
n1:
Mat Object: 1 MPI processes
type: mpiaij
row 0: (0, 1.) (1, 2.) (2, 4.) (3, 3.)
row 1: (0, 2.) (1, 1.) (2, 3.) (3, 4.)
row 2: (0, 4.) (1, 3.) (2, 1.) (3, 2.)
row 3: (0, 3.) (1, 4.) (2, 2.) (3, 1.)
n2:
Mat Object: 2 MPI processes
type: mpiaij
row 0: (0, 1.) (1, 2.) (2, 4.) (3, 3.)
row 1: (0, 2.) (1, 1.) (2, 3.) (3, 4.)
row 2: (0, 1.) (1, 2.) (2, 4.) (3, 3.)
row 3: (0, 2.) (1, 1.) (2, 3.) (3, 4.)
n4:
Mat Object: 4 MPI processes
type: mpiaij
row 0: (0, 1.) (1, 2.) (2, 4.) (3, 3.)
row 1: (0, 1.) (1, 2.) (2, 4.) (3, 3.)
row 2: (0, 1.) (1, 2.) (2, 4.) (3, 3.)
row 3: (0, 1.) (1, 2.) (2, 4.) (3, 3.)
It really gets messed up; I have no idea what's happening.
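For reference, MatCreateMPIAIJWithArrays() expects each rank to pass only the
CSR rows it owns, so if every rank hands in the same full arrays, each rank
contributes the first few rows of that data and repeated rows like the above
would be the result. A minimal sketch, assuming iapi/japi/app hold the full
0-based CSR arrays replicated on every rank and that rank/sizel come from
MPI_Comm_rank/MPI_Comm_size (the names m_local, rstart, off, ia_loc, ja_loc,
aa_loc are placeholders), of passing only the locally owned rows:

  PetscInt :: m_local, rstart, nnz_loc, k, off
  PetscInt, allocatable :: ia_loc(:), ja_loc(:)
  PetscScalar, allocatable :: aa_loc(:)

  m_local = nbdp / sizel                       ! simple block split of the rows
  rstart  = rank * m_local                     ! first global row owned by this rank
  if (rank == sizel - 1) m_local = nbdp - rstart

  off = iapi(rstart + 1)                       ! nonzeros before the local block
  allocate(ia_loc(m_local + 1))
  do k = 0, m_local
     ia_loc(k + 1) = iapi(rstart + k + 1) - off    ! local row pointers start at 0
  end do
  nnz_loc = ia_loc(m_local + 1)

  allocate(ja_loc(nnz_loc), aa_loc(nnz_loc))
  ja_loc = japi(off + 1 : off + nnz_loc)       ! column indices of the local rows
  aa_loc = app(off + 1 : off + nnz_loc)        ! values of the local rows

  call MatCreateMPIAIJWithArrays(PETSC_COMM_WORLD, m_local, PETSC_DECIDE, &
       nbdp, nbdp, ia_loc, ja_loc, aa_loc, Ap, ierr)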
On Mon, Sep 26, 2016 at 3:12 PM, Barry Smith <bsmith at mcs.anl.gov> wrote:
>
> > On Sep 26, 2016, at 5:07 PM, Manuel Valera <mvalera at mail.sdsu.edu> wrote:
> >
> > Ok, I was using a big matrix before; from a smaller test case I got the
> > output and, effectively, it looks like it is not read correctly at all. Results
> > are attached for the DRAW viewer; the output is too big for STDOUT even in the
> > small test case. n# is the number of processors requested.
>
> You need to construct a very small test case so you can determine why
> the values do not end up where you expect them. There is no way around it.
> >
> > Is there a way to create the matrix on one node and then distribute it as
> > needed to the rest? Maybe that would work.
>
> No, that is not scalable. You become limited by the memory of the one
> node.
>
> >
> > Thanks
> >
> > On Mon, Sep 26, 2016 at 2:40 PM, Barry Smith <bsmith at mcs.anl.gov> wrote:
> >
> > How large is the matrix? It will take a very long time if the matrix
> > is large. Debug with a very small matrix.
> >
> > Barry
> >
> > > On Sep 26, 2016, at 4:34 PM, Manuel Valera <mvalera at mail.sdsu.edu> wrote:
> > >
> > > Indeed there is something wrong with that call; it hangs
> > > indefinitely, showing only:
> > >
> > > Mat Object: 1 MPI processes
> > > type: mpiaij
> > >
> > > It draws my attention that this program works for 1 processor but not
> > > for more, but it doesn't show anything for that viewer in either case.
> > >
> > > Thanks for the insight on the redundant calls; it is not very clear
> > > from the documentation which calls are included in others.
> > >
> > >
> > >
> > > On Mon, Sep 26, 2016 at 2:02 PM, Barry Smith <bsmith at mcs.anl.gov> wrote:
> > >
> > > The call to MatCreateMPIAIJWithArrays() is likely interpreting the
> > > values you pass in differently than you expect.
> > >
> > > Put a call to MatView(Ap,PETSC_VIEWER_STDOUT_WORLD,ierr) after
> > > the MatCreateMPIAIJWithArrays() to see what PETSc thinks the matrix is.
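> > > For example, a minimal placement sketch (m_local, ia, ja, aa here just
> > > stand for whatever local size and arrays are being passed):
> > >
> > >    call MatCreateMPIAIJWithArrays(PETSC_COMM_WORLD, m_local, PETSC_DECIDE, nbdp, nbdp, ia, ja, aa, Ap, ierr)
> > >    call MatView(Ap, PETSC_VIEWER_STDOUT_WORLD, ierr)   ! print what PETSc actually assembled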
> > >
> > >
> > > > On Sep 26, 2016, at 3:42 PM, Manuel Valera <mvalera at mail.sdsu.edu> wrote:
> > > >
> > > > Hello,
> > > >
> > > > I'm working on solving a linear system in parallel. Following ex12 of
> > > > the KSP tutorials, I don't see any major complication in doing so, so for a
> > > > working linear system solver with PCJACOBI and KSPGCR I made only the
> > > > following changes:
> > > >
> > > > call MatCreate(PETSC_COMM_WORLD,Ap,ierr)
> > > > ! call MatSetType(Ap,MATSEQAIJ,ierr)
> > > > call MatSetType(Ap,MATMPIAIJ,ierr) ! parallelization
> > > >
> > > > call MatSetSizes(Ap,PETSC_DECIDE,PETSC_DECIDE,nbdp,nbdp,ierr);
> > > >
> > > > ! call MatSeqAIJSetPreallocationCSR(Ap,iapi,japi,app,ierr)
> > > > call MatSetFromOptions(Ap,ierr)
> > >
> > > Note that none of the lines above are needed (or do anything)
> > > because MatCreateMPIAIJWithArrays() creates the matrix from scratch
> > > itself.
> > >
> > > Barry
> > >
> > > > ! call MatCreateSeqAIJWithArrays(PETSC_COMM_WORLD,nbdp,nbdp,iapi,japi,app,Ap,ierr)
> > > > call MatCreateMPIAIJWithArrays(PETSC_COMM_WORLD,floor(real(nbdp)/sizel),PETSC_DECIDE,nbdp,nbdp,iapi,japi,app,Ap,ierr)
> > > >
> > > >
> > > > I grayed out the changes from the sequential implementation.
> > > >
> > > > So, it does not complain at runtime until it reaches KSPSolve(),
> with the following error:
> > > >
> > > >
> > > > [1]PETSC ERROR: --------------------- Error Message --------------------------------------------------------------
> > > > [1]PETSC ERROR: Object is in wrong state
> > > > [1]PETSC ERROR: Matrix is missing diagonal entry 0
> > > > [1]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting.
> > > > [1]PETSC ERROR: Petsc Release Version 3.7.3, unknown
> > > > [1]PETSC ERROR: ./solvelinearmgPETSc on a arch-linux2-c-debug named valera-HP-xw4600-Workstation by valera Mon Sep 26 13:35:15 2016
> > > > [1]PETSC ERROR: Configure options --with-cc=gcc --with-cxx=g++ --with-fc=gfortran --download-fblaslapack=1 --download-mpich=1 --download-ml=1
> > > > [1]PETSC ERROR: #1 MatILUFactorSymbolic_SeqAIJ() line 1733 in /home/valera/v5PETSc/petsc/petsc/src/mat/impls/aij/seq/aijfact.c
> > > > [1]PETSC ERROR: #2 MatILUFactorSymbolic() line 6579 in /home/valera/v5PETSc/petsc/petsc/src/mat/interface/matrix.c
> > > > [1]PETSC ERROR: #3 PCSetUp_ILU() line 212 in /home/valera/v5PETSc/petsc/petsc/src/ksp/pc/impls/factor/ilu/ilu.c
> > > > [1]PETSC ERROR: #4 PCSetUp() line 968 in /home/valera/v5PETSc/petsc/petsc/src/ksp/pc/interface/precon.c
> > > > [1]PETSC ERROR: #5 KSPSetUp() line 390 in /home/valera/v5PETSc/petsc/petsc/src/ksp/ksp/interface/itfunc.c
> > > > [1]PETSC ERROR: #6 PCSetUpOnBlocks_BJacobi_Singleblock() line 650 in /home/valera/v5PETSc/petsc/petsc/src/ksp/pc/impls/bjacobi/bjacobi.c
> > > > [1]PETSC ERROR: #7 PCSetUpOnBlocks() line 1001 in /home/valera/v5PETSc/petsc/petsc/src/ksp/pc/interface/precon.c
> > > > [1]PETSC ERROR: #8 KSPSetUpOnBlocks() line 220 in /home/valera/v5PETSc/petsc/petsc/src/ksp/ksp/interface/itfunc.c
> > > > [1]PETSC ERROR: #9 KSPSolve() line 600 in /home/valera/v5PETSc/petsc/petsc/src/ksp/ksp/interface/itfunc.c
> > > > At line 333 of file solvelinearmgPETSc.f90
> > > > Fortran runtime error: Array bound mismatch for dimension 1 of array 'sol' (213120/106560)
> > > >
> > > >
> > > > This code works with -n 1 (one core), but it gives this error when using
> > > > more than one core.
> > > >
> > > > What am I missing?
> > > >
> > > > Regards,
> > > >
> > > > Manuel.
> > > >
> > > > <solvelinearmgPETSc.f90>
> > >
> > >
> >
> >
> > <n4.png><n2.png><n1.png>
>
>