Schur system + MatShell
Satish Balay
balay at mcs.anl.gov
Tue Apr 22 09:52:07 CDT 2008
On Tue, 22 Apr 2008, Matthew Knepley wrote:
> On 4/22/08, tribur at vision.ee.ethz.ch <tribur at vision.ee.ethz.ch> wrote:
> > Dear Matt,
> >
> > > This does not make sense to me. You decide how PETSc partitions things
> > > (if you want). And I really do not understand what you want in parallel.
> > > If you mean that you solve the local Schur complements independently,
> > > then use a local matrix for each one. The important thing is to work
> > > out the linear algebra prior to coding. Then wrapping it with PETSc
> > > Mat/Vec is easy.
> > >
> >
> > The linear algebra is completely clear. Again: I have the local Schur
> > systems given (and NOT their solutions), and I would like to solve the
> > global Schur complement system in parallel. The global Schur complement
> > system is, in theory, constructed by inserting and adding entries of the
> > local systems at certain locations of a global matrix. Wrapping this
> > with PETSc Mat/Vec, without the time-intensive assembly, is not easy for
> > me as a PETSc beginner. But I'm curious about the solution you
> > propose...
>
> Did you verify that the Schur complement matrix was properly preallocated
> before assembly? That is the likely source of the extra time. You can run
> with -info and search for "malloc" in the output.
That said - isn't this using MATDENSE? If that's the case, then I think
the problem is due to wrong partitioning, causing communication during
MatAssemblyBegin/End().
-info should clearly show the communication part as well.
The fix would be to specify the local partition sizes for this matrix
- and not use PETSC_DECIDE.
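For a dense Schur matrix that would look something like this - a sketch,
with m_local standing for however many Schur rows this process is supposed
to own:

Mat S;
/* each process owns m_local rows; with this layout, MatSetValues() on
   locally owned rows needs no communication in MatAssemblyBegin/End() */
MatCreateMPIDense(PETSC_COMM_WORLD, m_local, m_local,
                  PETSC_DETERMINE, PETSC_DETERMINE, PETSC_NULL, &S);

Make sure the vectors you use with it are created with the same local
sizes.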
Satish