[petsc-dev] [petsc-users] How to compute RARt with A and R as distributed (MPI) matrices ?
Hong
hzhang at mcs.anl.gov
Sun Jun 25 11:04:35 CDT 2017
This is merged into the master branch. The help text for MatRARt now reads:

This routine is currently only implemented for pairs of AIJ matrices and
classes which inherit from AIJ. Due to PETSc's sparse matrix block-row
distribution among processes, parallel MatRARt is implemented via an
explicit transpose of R, which could be very expensive.
We recommend using MatPtAP().
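
For reference, a minimal sketch of the recommended MatPtAP() path (the matrix
names A and P, the fill value, and the surrounding code are illustrative
placeholders; A and P are assumed to be already assembled AIJ/MPIAIJ matrices):

  Mat C;
  /* C = P^t * A * P; fill is a preallocation estimate, see the MatPtAP() man page */
  ierr = MatPtAP(A,P,MAT_INITIAL_MATRIX,2.0,&C);CHKERRQ(ierr);
  /* ... use C ... */
  ierr = MatDestroy(&C);CHKERRQ(ierr);
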
Thanks for your request and suggestion!
Hong
On Sun, Jun 25, 2017 at 7:36 AM, Franck Houssen <franck.houssen at inria.fr>
wrote:
> Great!
>
> ------------------------------
>
> *From: *"Hong" <hzhang at mcs.anl.gov>
> *To: *"Franck Houssen" <franck.houssen at inria.fr>
> *Cc: *"Stefano Zampini" <stefano.zampini at gmail.com>, "petsc-dev" <petsc-dev at mcs.anl.gov>
> *Sent: *Friday, June 23, 2017 18:08:08
>
> *Subject: *Re: [petsc-dev] [petsc-users] How to compute RARt with A and R
> as distributed (MPI) matrices ?
>
> Franck:
> See
> https://bitbucket.org/petsc/petsc/commits/f3dcd100076caffe0af33a424f0d06cb26a76989
>
> Hong
>
>>
>> ------------------------------
>>
>> *From: *"Hong" <hzhang at mcs.anl.gov>
>> *To: *"Stefano Zampini" <stefano.zampini at gmail.com>
>> *Cc: *"Franck Houssen" <franck.houssen at inria.fr>, "petsc-dev" <petsc-dev at mcs.anl.gov>
>> *Sent: *Friday, June 23, 2017 16:59:42
>> *Subject: *Re: [petsc-dev] [petsc-users] How to compute RARt with A and R
>> as distributed (MPI) matrices ?
>>
>> Due to PETSc's sparse matrix block-row distribution among processes,
>> we do not have an efficient way to implement parallel R * A * R^T
>> (an explicit transpose of R is very expensive).
>>
>>
>> This sentence above is a perfect candidate to add to the docs: one line,
>> one sentence that lets the user understand WHY MatRARt must not be used
>> when a distributed matrix is in hand (i.e., WHY this is not implemented
>> and will throw an error).
>>
>>
>>
>> For P^t * A * P = P^t * (A*P), we do a local transpose of P and sum the
>> partial products P^t_local * (A*P).
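
A rough user-level sketch of that composition (for illustration only, not the
internal kernel; A, P, fill, and the variable names are placeholders):

  Mat AP, C;
  ierr = MatMatMult(A,P,MAT_INITIAL_MATRIX,fill,&AP);CHKERRQ(ierr);          /* AP = A * P       */
  ierr = MatTransposeMatMult(P,AP,MAT_INITIAL_MATRIX,fill,&C);CHKERRQ(ierr); /* C  = P^t * (A*P) */
  ierr = MatDestroy(&AP);CHKERRQ(ierr);

In practice MatPtAP() should be called directly; the snippet only shows the composition.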
>>
>> I agree that more complete documentation would help users; but in reality,
>> with so few developers, so many PETSc functions collected over the years,
>> each function having various implementations, and implementations changing
>> frequently ...
>>
>> We keep object operation tables in PETSc. When a user invokes an
>> unsupported implementation, the execution throws an error message. The user
>> is informed by testing, instead of by reading the user manual, which
>> detailed implementations are supported. We also provide prompt email support
>> for user requests.
>>
>> We'll try to maintain an up-to-date user manual and welcome bug/error reports.
>> I'll check the manual on MatRARt/PtAP to make it clear to users.
>>
>> Hong
>>
>> On Fri, Jun 23, 2017 at 7:13 AM, Stefano Zampini <
>> stefano.zampini at gmail.com> wrote:
>>
>>>
>>> > On Jun 23, 2017, at 10:10 AM, Franck Houssen <franck.houssen at inria.fr>
>>> wrote:
>>> >
>>> > By the way, I noticed this is exactly the same for
>>> > MatRARt/PtAPNumeric/Symbolic: the documentation pages are identical, so
>>> > the user cannot tell why to use RARt rather than PtAP.
>>>
>>> MatRARt computes: R * A * R^T
>>> MatPtAP computes: P^t * A * P
>>>
>>> they perform different operations
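
For illustration only (Mat C1, C2 and the fill value are placeholder names):
when P = R^t the two calls produce the same product,

  Mat C1, C2;
  ierr = MatRARt(A,R,MAT_INITIAL_MATRIX,fill,&C1);CHKERRQ(ierr); /* C1 = R * A * R^t            */
  ierr = MatPtAP(A,P,MAT_INITIAL_MATRIX,fill,&C2);CHKERRQ(ierr); /* C2 = P^t * A * P, = C1 here */

so the practical difference is which operator, R or its transpose P, the caller
already has assembled.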
>>>
>>> > I believe adding a short note to the MatRARt/PtAP and
>>> > MatRARt/PtAPNumeric/Symbolic pages would be welcome, to say "this method
>>> > is meant to do/support this but not that".
>>>
>>> An operation like RARt (or PtAP) may have a specialized implementation,
>>> or be available through composition of operations, i.e. PtAP = P^t * (A * P),
>>> or by computing the transpose of P explicitly and then calling the triple
>>> matrix product.
>>> It's not possible to keep track of all the possible cases.
>>>
>>> >
>>>
>>> > Franck
>>> >
>>> > ----- Original Message -----
>>> >> From: "Jed Brown" <jed at jedbrown.org>
>>> >> To: "Hong" <hzhang at mcs.anl.gov>
>>> >> Cc: "petsc-dev" <petsc-dev at mcs.anl.gov>, "PETSc users list" <petsc-users at mcs.anl.gov>
>>> >> Sent: Thursday, June 22, 2017 18:17:33
>>> >> Subject: Re: [petsc-dev] [petsc-users] How to compute RARt with A and R
>>> as distributed (MPI) matrices ?
>>> >>
>>> >> Hong <hzhang at mcs.anl.gov> writes:
>>> >>
>>> >>> Jed:
>>> >>>>
>>> >>>>>> Is it better this way or as a fallback when !A->ops->rart? MatPtAP
>>> >>>>>> handles other combinations like MAIJ.
>>> >>>>>>
>>> >>>>>
>>> >>>>> Do you mean
>>> >>>>> if (!A->ops->rart) {
>>> >>>>>   Mat Rt;
>>> >>>>>   ierr = MatTranspose(R,MAT_INITIAL_MATRIX,&Rt);CHKERRQ(ierr);
>>> >>>>>   ierr = MatMatMatMult(R,A,Rt,scall,fill,C);CHKERRQ(ierr);
>>> >>>>>   ierr = MatDestroy(&Rt);CHKERRQ(ierr);
>>> >>>>> }
>>> >>>>> This does NOT work for most matrix formats because we do not have
>>> >>>>> fallbacks for MatTranspose() and MatMatMult().
>>> >>>>
>>> >>>> That's fine; they'll trigger an error and we'll be able to see from the
>>> >>>> stack that it can be made to work by either implementing the appropriate
>>> >>>> MatRARt or MatTranspose and MatMatMatMult.
>>> >>>>
>>> >>>
>>> >>> You prefer adding this default, even though it gives an error in either
>>> >>> MatTranspose() or MatMatMatMult(), depending on the input matrix format?
>>> >>
>>> >> Yeah, in the sense that it gives more opportunities to succeed.
>>> >>
>>> >>> If so, we need to add this type of 'default' to all mat operations --
>>> >>> currently, all routines do
>>> >>> if (!mat->ops-> )
>>> >>>   SETERRQ1(PetscObjectComm((PetscObject)mat),PETSC_ERR_SUP,"Mat type %s",((PetscObject)mat)->type_name);
>>> >>
>>> >> Probably.
>>> >>
>>>
>>>
>>
>>
>
>