[petsc-users] Scaling a vector with an entry from a DMRedundant

Kevin Green kevin.richard.green at gmail.com
Tue Aug 21 11:45:24 CDT 2012


Hi all,

I've created a MatShell multiplication that acts on a vector derived from a
DMComposite containing a DMDA and a DMRedundant of size 1 living on process
0.  It does its job on a single process, but on multiple processes the
VecAXPY (see below) fails with the error:

[0]PETSC ERROR: --------------------- Error Message ------------------------------------
[0]PETSC ERROR: Invalid argument!
[0]PETSC ERROR: Scalar value must be same on all processes, argument # 2!
[0]PETSC ERROR: ------------------------------------------------------------------------
[0]PETSC ERROR: Petsc Development HG revision: 9cac76b7c349ad267a1605750f02b8ef9d350ff1  HG Date: Tue Aug 21 10:07:43 2012 -0500
[0]PETSC ERROR: See docs/changes/index.html for recent updates.
[0]PETSC ERROR: See docs/faq.html for hints about trouble shooting.
[0]PETSC ERROR: See docs/index.html for manual pages.
[0]PETSC ERROR: ------------------------------------------------------------------------
[0]PETSC ERROR: ./ex3 on a arch-linu named lester by kg Tue Aug 21 12:36:10 2012
[0]PETSC ERROR: Libraries linked from /home/kg/lib/petsc-dev/arch-linux2-c-debug/lib
[0]PETSC ERROR: Configure run at Tue Aug 21 11:54:03 2012
[0]PETSC ERROR: Configure options PETSC_ARCH=arch-linux2-c-debug
[0]PETSC ERROR: ------------------------------------------------------------------------
[0]PETSC ERROR: VecAXPY() line 550 in src/vec/vec/interface/rvector.c
[0]PETSC ERROR: ArcContShellMult() line 1049 in ex3.c
[0]PETSC ERROR: MatMult() line 2146 in src/mat/interface/matrix.c
[0]PETSC ERROR: PCApplyBAorAB() line 621 in src/ksp/pc/interface/precon.c
[0]PETSC ERROR: KSPGMRESCycle() line 155 in src/ksp/ksp/impls/gmres/gmres.c
[0]PETSC ERROR: KSPSolve_GMRES() line 231 in src/ksp/ksp/impls/gmres/gmres.c
[0]PETSC ERROR: KSPSolve() line 446 in src/ksp/ksp/interface/itfunc.c
[0]PETSC ERROR: main() line 425 in ex3.c
--------------------------------------------------------------------------
MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD
with errorcode 1.
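
For context, the composite is built roughly like this (a sketch with
placeholder sizes, not the actual ex3.c code):

  DM da, red, dmc;

  ierr = DMDACreate1d(PETSC_COMM_WORLD,DMDA_BOUNDARY_NONE,128,1,1,PETSC_NULL,&da);CHKERRQ(ierr);
  ierr = DMRedundantCreate(PETSC_COMM_WORLD,0,1,&red);CHKERRQ(ierr); /* size 1, lives on rank 0 */
  ierr = DMCompositeCreate(PETSC_COMM_WORLD,&dmc);CHKERRQ(ierr);
  ierr = DMCompositeAddDM(dmc,da);CHKERRQ(ierr);
  ierr = DMCompositeAddDM(dmc,red);CHKERRQ(ierr);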

Prodding inside the MatShell shows that the redundant sub-vector has local
size 0 on any process the DMRedundant doesn't "live" on.  The behaviour I
expected is that every process would have space for the redundant entry,
and that all processes would be synchronized with the owning process upon
access (or at least on a scatter, broadcast, ... something).
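
A standalone illustration of the local sizes I'm seeing (a hypothetical
test, not my actual code):

  #include <petscdmredundant.h>

  int main(int argc,char **argv)
  {
    PetscErrorCode ierr;
    DM             red;
    Vec            v;
    PetscInt       nloc;
    PetscMPIInt    rank;

    ierr = PetscInitialize(&argc,&argv,PETSC_NULL,PETSC_NULL);CHKERRQ(ierr);
    ierr = MPI_Comm_rank(PETSC_COMM_WORLD,&rank);CHKERRQ(ierr);
    ierr = DMRedundantCreate(PETSC_COMM_WORLD,0,1,&red);CHKERRQ(ierr);
    ierr = DMCreateGlobalVector(red,&v);CHKERRQ(ierr);
    ierr = VecGetLocalSize(v,&nloc);CHKERRQ(ierr);
    /* prints local size 1 on rank 0 and local size 0 on every other rank */
    ierr = PetscSynchronizedPrintf(PETSC_COMM_WORLD,"[%d] local size %D\n",rank,nloc);CHKERRQ(ierr);
    ierr = PetscSynchronizedFlush(PETSC_COMM_WORLD);CHKERRQ(ierr);
    ierr = VecDestroy(&v);CHKERRQ(ierr);
    ierr = DMDestroy(&red);CHKERRQ(ierr);
    ierr = PetscFinalize();
    return 0;
  }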

The examples that use DMRedundant are cases where only a single process
needs to know the "redundant" part (so why is it called redundant?), and
they don't demonstrate this behaviour.

Do I have to broadcast the value myself before I can use it to scale a
global vector?  Is there something that would work better than DMRedundant?
Any suggestions would be welcome.
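
Concretely, I imagine replacing the failing VecAXPY with something like
this (just a sketch, reusing the names from the routine below):

  PetscScalar lam = 0.0;
  PetscMPIInt rank;
  MPI_Comm    comm;

  ierr = PetscObjectGetComm((PetscObject)arc_ctx->dmc,&comm);CHKERRQ(ierr);
  ierr = MPI_Comm_rank(comm,&rank);CHKERRQ(ierr);
  if (!rank) lam = del_lam_arr[0];  /* only the owning rank has a local entry */
  ierr = MPI_Bcast(&lam,1,MPIU_SCALAR,0,comm);CHKERRQ(ierr);
  ierr = VecAXPY(delta_U_post,lam,arc_ctx->dFdlam);CHKERRQ(ierr);

That works, but it feels like a synchronization the DM ought to be doing
for me, hence the question.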

Cheers,
Kevin

MatShell multiplication routine:

/*
  ArcContShellMult - MatMult overload for the linear system of a
  pseudo-arclength continuation:

    [du_post; dlam_post] = [dFdu dFdlam; du0 dlam0] * [du; dlam]

  Input Parameters:
.  A - the MatShell
.  x - the state to operate on

  Output Parameter:
.  y - the result
*/
PetscErrorCode ArcContShellMult(Mat A,Vec x,Vec y)
{
  PetscErrorCode ierr;
  void           *ctx;
  ArcContCtx     *arc_ctx;
  Vec            delta_U, delta_lam, delta_U_post, delta_lam_post;
  PetscScalar    *del_lam_arr, lam_on_entry;

  PetscFunctionBegin;
  ierr = MatShellGetContext(A,&ctx);CHKERRQ(ierr);
  arc_ctx = (ArcContCtx*)ctx;

  /* Get the components of the composite input and output vectors */
  ierr = DMCompositeGetAccess(arc_ctx->dmc,x,&delta_U,&delta_lam);CHKERRQ(ierr);
  ierr = DMCompositeGetAccess(arc_ctx->dmc,y,&delta_U_post,&delta_lam_post);CHKERRQ(ierr);

  /* Get the lambda component as a usable array; delta_lam has local
     length 0 on every process the DMRedundant doesn't live on */
  ierr = VecGetArray(delta_lam,&del_lam_arr);CHKERRQ(ierr);
  lam_on_entry = del_lam_arr[0];

  /* Multiply and add each of the components; the VecAXPY below is the
     call that fails in parallel, since del_lam_arr[0] is only valid on
     the owning process */
  ierr = MatMult(arc_ctx->dFdu,delta_U,delta_U_post);CHKERRQ(ierr);
  ierr = VecAXPY(delta_U_post,del_lam_arr[0],arc_ctx->dFdlam);CHKERRQ(ierr);

  ierr = VecDot(arc_ctx->du0,delta_U,&del_lam_arr[0]);CHKERRQ(ierr);
  del_lam_arr[0] += arc_ctx->dlam0*lam_on_entry;

  /* Gather the components back into the y vector */
  ierr = VecRestoreArray(delta_lam,&del_lam_arr);CHKERRQ(ierr);
  ierr = VecCopy(delta_lam,delta_lam_post);CHKERRQ(ierr);
  ierr = DMCompositeRestoreAccess(arc_ctx->dmc,x,&delta_U,&delta_lam);CHKERRQ(ierr);
  ierr = DMCompositeRestoreAccess(arc_ctx->dmc,y,&delta_U_post,&delta_lam_post);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}