[petsc-users] Partition of parallel AIJ sparse matrix
Matthew Knepley
knepley at gmail.com
Wed Jul 30 15:50:17 CDT 2014
On Wed, Jul 30, 2014 at 3:30 PM, Qin Lu <lu_qin_2000 at yahoo.com> wrote:
> In the context of domain decomposition, if the unknowns are ordered (to
> reduce fill-in, for instance) such that a subdomain may not own a
> consecutive range of unknown indices, does this mean the partition of the
> domain will differ from the partition of the matrix?
>
> For example, if subdomain 1 (assigned to process 1) owns unknowns 1 and 3
> (associated with equations 1 and 3), and subdomain 2 (assigned to process 2)
> owns unknowns 2 and 4 (associated with equations 2 and 4), how can I make
> each process own consecutive rows?
>
You renumber the rows once they are partitioned.
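
For example, here is a minimal sketch (not part of the original message) of one
way to do that renumbering with PETSc's application ordering (AO), using the
4x4, two-process layout from the question below; the variable names are
illustrative only:

/* Sketch, assuming 2 MPI ranks and the 4x4 example (0-based indices). */
#include <petscao.h>

int main(int argc, char **argv)
{
  AO             ao;
  PetscInt       app_idx[2], rows[2];
  PetscMPIInt    rank;
  PetscErrorCode ierr;

  ierr = PetscInitialize(&argc, &argv, NULL, NULL);if (ierr) return ierr;
  ierr = MPI_Comm_rank(PETSC_COMM_WORLD, &rank);CHKERRQ(ierr);

  /* Rank 0 owns application unknowns 0 and 2, rank 1 owns 1 and 3.
     Passing NULL for the PETSc indices lets AOCreateBasic assign the
     natural contiguous numbering: rank 0 -> rows 0,1; rank 1 -> rows 2,3. */
  app_idx[0] = rank ? 1 : 0;
  app_idx[1] = rank ? 3 : 2;
  ierr = AOCreateBasic(PETSC_COMM_WORLD, 2, app_idx, NULL, &ao);CHKERRQ(ierr);

  /* Translate application indices (e.g. before calling MatSetValues)
     into the contiguous PETSc row numbering, in place. */
  rows[0] = app_idx[0]; rows[1] = app_idx[1];
  ierr = AOApplicationToPetsc(ao, 2, rows);CHKERRQ(ierr);

  ierr = AODestroy(&ao);CHKERRQ(ierr);
  ierr = PetscFinalize();
  return ierr;
}
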
Matt
> Thanks,
> Qin
>
> *From:* Barry Smith <bsmith at mcs.anl.gov>
> *To:* Qin Lu <lu_qin_2000 at yahoo.com>
> *Cc:* petsc-users <petsc-users at mcs.anl.gov>
> *Sent:* Wednesday, July 30, 2014 2:49 PM
> *Subject:* Re: [petsc-users] Partition of parallel AIJ sparse matrix
>
>
> On Jul 30, 2014, at 11:08 AM, Qin Lu <lu_qin_2000 at yahoo.com> wrote:
>
> > Hello,
> >
> > Does a process have to own consecutive rows of the matrix? For example,
> suppose the global AIJ matrix is 4x4, partitioned across 2 processes. Does
> process 1 have to own rows 1 and 2, and process 2 rows 3 and 4?
>
> Yes
>
> > Or may process 1 own rows 1 and 3, and process 2 rows 2 and 4?
>
> However, the numbering of degrees of freedom is arbitrary. Just renumber
> your degrees of freedom so the first set is on process 0, the next on
> process 1, etc. (see the sketch after the quoted thread below).
>
> Barry
>
>
> >
> > Thanks a lot for your help!
> >
> > Regards,
> > Qin
>
>
>
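Following up on Barry's point above: once the degrees of freedom are
renumbered so that each process's set is contiguous, the local row count
passed to MatCreateAIJ determines the ownership. A hedged sketch, again
assuming the 4x4, two-process example (the preallocation counts are
placeholders only):

/* Sketch, assuming 2 MPI ranks: after renumbering, each process simply
   declares how many consecutive global rows it owns. */
#include <petscmat.h>

int main(int argc, char **argv)
{
  Mat            A;
  PetscInt       rstart, rend;
  PetscErrorCode ierr;

  ierr = PetscInitialize(&argc, &argv, NULL, NULL);if (ierr) return ierr;

  /* 4x4 global matrix, 2 local rows and columns per process; the
     preallocation of 2 nonzeros per row in the diagonal and off-diagonal
     blocks is just a placeholder. */
  ierr = MatCreateAIJ(PETSC_COMM_WORLD, 2, 2, 4, 4, 2, NULL, 2, NULL, &A);CHKERRQ(ierr);

  /* PETSc assigns rows 0-1 to rank 0 and rows 2-3 to rank 1. */
  ierr = MatGetOwnershipRange(A, &rstart, &rend);CHKERRQ(ierr);
  ierr = PetscPrintf(PETSC_COMM_SELF, "owns rows %d to %d\n",
                     (int)rstart, (int)(rend - 1));CHKERRQ(ierr);

  ierr = MatDestroy(&A);CHKERRQ(ierr);
  ierr = PetscFinalize();
  return ierr;
}
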
--
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener