[petsc-users] generate entries on 'wrong' process

Matthew Knepley knepley at gmail.com
Fri Jan 20 13:54:45 CST 2012


On Fri, Jan 20, 2012 at 1:52 PM, Wen Jiang <jiangwen84 at gmail.com> wrote:

> Hi Jed,
>
> Could you explain in a bit more detail why it will deadlock unless the
> number of elements is *exactly* the same on every process? Thanks.
>

The flush call is collective; every process has to call it the same number
of times.
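As a minimal, self-contained sketch of that point (illustrative code only, not
code from this thread: the matrix size, the fake 2x2 "element" matrices, and
the deliberately unequal element counts are all made up):

/* Each rank owns a different number of "elements" and flushes after every
 * one.  Because the flush is collective, the collective calls stop matching
 * across ranks once the shorter loop ends, and the run typically hangs.
 * Build as a normal PETSc program and run with, e.g., mpiexec -n 2.        */
#include <petscmat.h>

int main(int argc, char **argv)
{
  Mat            A;
  PetscMPIInt    rank;
  PetscInt       e, nelem_local, N = 16;
  PetscErrorCode ierr;

  ierr = PetscInitialize(&argc, &argv, NULL, NULL);if (ierr) return ierr;
  ierr = MPI_Comm_rank(PETSC_COMM_WORLD, &rank);CHKERRQ(ierr);

  ierr = MatCreate(PETSC_COMM_WORLD, &A);CHKERRQ(ierr);
  ierr = MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, N, N);CHKERRQ(ierr);
  ierr = MatSetFromOptions(A);CHKERRQ(ierr);
  ierr = MatSetUp(A);CHKERRQ(ierr);

  nelem_local = (rank == 0) ? 3 : 2;               /* unequal on purpose   */

  for (e = 0; e < nelem_local; e++) {
    PetscInt    idx[2] = {e, e + 1};               /* fake element dofs    */
    PetscScalar ke[4]  = {1.0, -1.0, -1.0, 1.0};   /* fake element matrix  */
    ierr = MatSetValues(A, 2, idx, 2, idx, ke, ADD_VALUES);CHKERRQ(ierr);

    /* Collective: the rank with fewer elements exits its loop while the
     * other rank is still waiting inside its extra flush.                 */
    ierr = MatAssemblyBegin(A, MAT_FLUSH_ASSEMBLY);CHKERRQ(ierr);
    ierr = MatAssemblyEnd(A, MAT_FLUSH_ASSEMBLY);CHKERRQ(ierr);
  }

  /* A single final assembly, reached once by every rank, is safe.         */
  ierr = MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);

  ierr = MatDestroy(&A);CHKERRQ(ierr);
  ierr = PetscFinalize();
  return ierr;
}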

   Matt


> Regards,
> Wen
>
> Message: 5
> Date: Fri, 20 Jan 2012 11:36:17 -0600
> From: Jed Brown <jedbrown at mcs.anl.gov>
> Subject: Re: [petsc-users] generate entries on 'wrong' process
> To: PETSc users list <petsc-users at mcs.anl.gov>
>
> On Fri, Jan 20, 2012 at 11:31, Wen Jiang <jiangwen84 at gmail.com> wrote:
>
> > The serial job runs without any problems and never stalls. The parallel
> > job also runs successfully on a distributed-memory desktop or on a single
> > node of the cluster. It only gets stuck when running on more than one
> > compute node (right now it is running on two nodes). Both the serial job
> > and the parallel job (on the desktop or the cluster) that I mentioned
> > before have the same size (dofs). However, if I run a smaller job on the
> > cluster with two nodes, it may not get stuck and can work fine.
> >
> > As you said before, I added a MAT_FLUSH_ASSEMBLY call after inserting
> > each element stiffness matrix.
> >
>
> This will deadlock unless the number of elements is *exactly* the same on
> every process.
>
>
> > I got the output shown below, and it gets stuck too.
> >
>
>
>
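For what it's worth, one way to keep a per-element flush collective when the
local element counts differ (a sketch only; A, nelem_local, and the
commented-out insertion step are placeholders carried over from the example
above, not code from this thread) is to agree on the loop length first:

PetscInt       e, nelem_max;
PetscErrorCode ierr;

/* Every rank learns the largest local element count, so all ranks run the
 * same number of loop iterations and therefore the same number of flushes. */
ierr = MPI_Allreduce(&nelem_local, &nelem_max, 1, MPIU_INT, MPI_MAX,
                     PETSC_COMM_WORLD);CHKERRQ(ierr);

for (e = 0; e < nelem_max; e++) {
  if (e < nelem_local) {
    /* ... compute idx[] and ke[] for local element e and insert them with
       MatSetValues(A, m, idx, n, idx, ke, ADD_VALUES) ... */
  }
  /* Everybody flushes on every iteration, so the collective calls match.  */
  ierr = MatAssemblyBegin(A, MAT_FLUSH_ASSEMBLY);CHKERRQ(ierr);
  ierr = MatAssemblyEnd(A, MAT_FLUSH_ASSEMBLY);CHKERRQ(ierr);
}
ierr = MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
ierr = MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);

In ordinary assembly one would simply insert all the values and call a single
MAT_FINAL_ASSEMBLY at the end; the per-element flush here is only a debugging
aid for localizing the hang.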


-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

