[petsc-dev] [petsc-users] How to compute RARt with A and R as distributed (MPI) matrices ?

Hong hzhang at mcs.anl.gov
Thu Jun 22 10:12:17 CDT 2017


Jed:
>
> >> Is it better this way or as a fallback when !A->ops->rart?  MatPtAP
> >> handles other combinations like MAIJ.
> >>
> >
> > Do you mean
> > if ( !A->ops->rart) {
> >     Mat Rt;
> >     ierr = MatTranspose(R,MAT_INITIAL_MATRIX,&Rt);CHKERRQ(ierr);
> >     ierr = MatMatMatMult(R,A,Rt,scall,fill,C);CHKERRQ(ierr);
> >     ierr = MatDestroy(&Rt);CHKERRQ(ierr);
> > }
> > This does NOT work for most matrix formats because we do not have
> > fallbacks for MatTranspose() and MatMatMult().
>
> That's fine; they'll trigger an error and we'll be able to see from the
> stack that it can be made to work by either implementing the appropriate
> MatRARt or MatTranspose and MatMatMatMult.
>

You prefer adding this default, even though it gives an error in either
MatTranspose() or MatMatMatMult(), depending on the input matrix format?
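For concreteness, a sketch of how such a default would behave inside the
type-independent MatRARt() wrapper (the dispatch through A->ops->rart is
assumed to match the public MatRARt() signature; this is an illustration,
not committed code): when the type provides ops->rart we call it, and
otherwise the fallback runs and the unsupported operation errors deeper
in the stack, as Jed describes.

  if (A->ops->rart) {
    ierr = (*A->ops->rart)(A,R,scall,fill,C);CHKERRQ(ierr);
  } else {
    Mat Rt;
    /* errors here if the type has no MatTranspose() */
    ierr = MatTranspose(R,MAT_INITIAL_MATRIX,&Rt);CHKERRQ(ierr);
    /* C = R*A*R^T; errors here if the type has no MatMatMatMult() */
    ierr = MatMatMatMult(R,A,Rt,scall,fill,C);CHKERRQ(ierr);
    ierr = MatDestroy(&Rt);CHKERRQ(ierr);
  }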

If so, we would need to add this type of 'default' to all Mat operations --
currently, all routines do

  if (!mat->ops-> ) SETERRQ1(PetscObjectComm((PetscObject)mat),PETSC_ERR_SUP,"Mat type %s",((PetscObject)mat)->type_name);

Hong