[petsc-users] ERROR: Argument out of range!; Local index XX too large XX (max) at 0! with MatSetValuesStencil
Matthew Knepley
knepley at gmail.com
Thu Jul 4 11:43:49 CDT 2013
On Thu, Jul 4, 2013 at 10:39 AM, Bishesh Khanal <bisheshkh at gmail.com> wrote:
>
>
>
> On Thu, Jul 4, 2013 at 3:11 PM, Bishesh Khanal <bisheshkh at gmail.com>wrote:
>
>>
>> On Thu, Jul 4, 2013 at 2:59 PM, Matthew Knepley <knepley at gmail.com>wrote:
>>
>>> On Thu, Jul 4, 2013 at 7:32 AM, Bishesh Khanal <bisheshkh at gmail.com>wrote:
>>>
>>>> Hi all,
>>>> I'm trying to use DMCreateMatrix and MatStencil to fill up a matrix
>>>> that results from a finite difference discretization of a PDE (objective is
>>>> to solve the resulting linear system in the form of Ax=b).
>>>>
>>>> However, I get some errors and I'm not sure what the issue is!
>>>> A short demo:
>>>> A 2D m x n grid, with two variables at each node (dof = 2),
>>>> so the resulting A would be 2mn x 2mn.
>>>> Let's say the two variables are vx and vy, and the two associated
>>>> equations discretized are x-eq and y-eq.
>>>>
>>>> Here is the relevant part of the code: (mat1 variable for the A matrix):
>>>>
>>>
>>> Just sending the whole code is better. I suspect the logic is wrong for
>>> selecting the boundary.
>>>
>>> Matt
>>>
>>
>> Thanks, here is the complete code:
>>
>> #include <petscsys.h>
>> #include <petscdmda.h>
>>
>> static char help[] = "Tests matrix assembly with DMCreateMatrix and MatSetValuesStencil.\n";
>>
>> #undef __FUNCT__
>> #define __FUNCT__ "main"
>> int main(int argc,char **argv)
>> {
>> PetscErrorCode ierr;
>> DM da;
>> PetscInt m=10,n=10;
>>
>> ierr = PetscInitialize(&argc,&argv,(char*)0,help);CHKERRQ(ierr);
>>
>> /* Create a DMDA and an associated vector */
>>
>> ierr = DMDACreate2d(PETSC_COMM_WORLD, DMDA_BOUNDARY_NONE,
>> DMDA_BOUNDARY_NONE,DMDA_STENCIL_BOX,m,n,
>> PETSC_DECIDE,PETSC_DECIDE,2,1,NULL,NULL,&da);CHKERRQ(ierr);
>>
>> Mat mat1;
>> ierr = DMCreateMatrix(da,MATMPIAIJ,&mat1); CHKERRQ(ierr);
>> MatStencil row, col[4]; //let's test 4 non-zeros in one row.
>> PetscScalar val[4];
>> PetscScalar coeff = 2.; //just a constant coefficient for testing.
>> PetscInt i,j;
>> DMDALocalInfo info;
>>
>> ierr = DMDAGetLocalInfo(da,&info); CHKERRQ(ierr);
>>
>> for(i = info.ys; i < info.ys+info.ym; ++i){
>> for(j = info.xs; j < info.xs+info.xm; ++j){
>>
>> row.i = i; row.j = j; //one node at a time
>> if (j == 0 || j == info.mx-1){ //left and right borders
>> //vx(i,j) = 0;
>> row.c = 0;
>> col[0].c = 0; val[0] = 1;
>> col[0].i = i; col[0].j = j;
>> ierr = MatSetValuesStencil(mat1,1,&row,1,col,val,INSERT_VALUES);CHKERRQ(ierr);
>>
>> //vy: //vy(i,j) - c*vy(i,j+-1) = 0;
>> row.c = 1;
>> col[0].c = 1; val[1] = coeff;
>> col[1].c = 1;
>> col[1].i = i;
>> if(j == 0) //vy(i,j) - c*vy(i,j+1) = 0;
>> col[1].j = j+1;
>> else //vy(i,j) - c*vy(i,j-1) = 0;
>> col[1].j = j-1;
>>
>> ierr = MatSetValuesStencil(mat1,1,&row,2,col,val,INSERT_VALUES);CHKERRQ(ierr);
>> }
>> else if (i == 0 || i == info.my-1){ //top and bottom borders
>> //vx: vx(i,j) - c* vx(i+-1,j) = 0;
>> row.c = 0;
>> col[0].c = 0; val[0] = 1;
>> col[0].i = i; col[0].j = j;
>> col[1].c = 0; val[1] = coeff;
>> if (i == 0) //vx(i,j) - c*vx(i+1,j) = 0;
>> col[1].i = i+1;
>> else //vx(i,j) - c*vx(i-1,j) = 0;
>> col[1].i = i-1;
>> col[1].j = j;
>> ierr = MatSetValuesStencil(mat1,1,&row,2,col,val,INSERT_VALUES);CHKERRQ(ierr);
>>
>> //vy(i,j) = 0;
>> row.c = 1;
>> col[0].c = 1;
>> ierr = MatSetValuesStencil(mat1,1,&row,1,col,val,INSERT_VALUES);CHKERRQ(ierr);
>> }
>> else { //Interior points:
>> row.c = 0;//x-eq
>> col[0].c = 0; val[0] = 2*coeff;
>> col[0].i = i; col[0].j = j+1;
>>
>
> It seems to me that the problem is caused by this kind of assignment,
> col[1].j = j+1, when j takes the value info.xs+info.xm-1, and by similar
> places below with i+1 when i = info.ys+info.ym-1.
> My guess is that each processor contains only the local portion of the
> matrix mat1 corresponding to the grid area it is taking care of, and that
> it probably does not store the ghost values. But when, e.g.,
> j = info.xs+info.xm-1, we have col[0].j = j+1, and MatSetValuesStencil
> needs to access the region of another processor. Am I thinking correctly?
> Is this the case where I need each processor to allocate space for the
> ghost values too?
> If so, how do I do it?
>
If you want to step outside the mesh, you either need
DMDA_BOUNDARY_PERIODIC or DMDA_BOUNDARY_GHOSTED.
Matt
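
A minimal sketch of that change, assuming the PETSc 3.4 spelling of the
boundary types (DMDA_BOUNDARY_GHOSTED adds a layer of ghost points just
outside the physical boundary, so a stencil at the domain edge can reference
one point past it without going out of range):

ierr = DMDACreate2d(PETSC_COMM_WORLD, DMDA_BOUNDARY_GHOSTED,
                    DMDA_BOUNDARY_GHOSTED, DMDA_STENCIL_BOX, m, n,
                    PETSC_DECIDE, PETSC_DECIDE, 2, 1, NULL, NULL,
                    &da); CHKERRQ(ierr); /* ghosted instead of DMDA_BOUNDARY_NONE */
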
>> col[1].c = 0; val[1] = -val[0] - coeff;
>> col[1].i = i; col[1].j = j;
>> col[2].c = 1; val[2] = 4*coeff;
>> col[2].i = i+1; col[2].j = j;
>> col[3].c = 1; val[3] = 4*coeff;
>> col[3].i = i; col[3].j = j;
>> ierr = MatSetValuesStencil(mat1,1,&row,4,col,val,INSERT_VALUES);CHKERRQ(ierr);
>>
>> row.c = 1; //y-eq
>> col[0].c = 1; val[0] = 2*coeff;
>> col[0].i = i; col[0].j = j+1;
>> col[1].c = 1; val[1] = -val[0] - coeff;
>> col[1].i = i; col[1].j = j;
>> col[2].c = 0; val[2] = 4*coeff;
>> col[2].i = i+1; col[2].j = j;
>> col[3].c = 0; val[3] = 4*coeff;
>> col[3].i = i; col[3].j = j;
>> ierr = MatSetValuesStencil(mat1,1,&row,4,col,val,INSERT_VALUES);CHKERRQ(ierr);
>> }
>> }
>> }
>>
>> ierr = MatAssemblyBegin(mat1,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
>> ierr = MatAssemblyEnd(mat1,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
>> ierr = MatSetOption(mat1,MAT_NEW_NONZERO_LOCATION_ERR,PETSC_TRUE);CHKERRQ(ierr);
>> ierr = PetscObjectSetName((PetscObject)mat1,"mat1");CHKERRQ(ierr);
>>
>> /* clean up and exit */
>> ierr = DMDestroy(&da);CHKERRQ(ierr);
>> ierr = MatDestroy(&mat1);CHKERRQ(ierr);
>> ierr = PetscFinalize();
>> return 0;
>> }
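
A related debugging aid, given as a sketch (the helper name is hypothetical,
not part of PETSc or of the code above): MatSetValuesStencil may only
reference grid points inside a rank's ghosted local region, whose corners and
extents DMDALocalInfo records as gxs/gys and gxm/gym. Note that in the
MatStencil convention .i is the x index (the one bounded by gxs/gxm) and .j
the y index (bounded by gys/gym).

static PetscBool StencilInGhostRegion(const DMDALocalInfo *info, MatStencil s)
{
  /* PETSC_TRUE iff stencil point s lies inside this rank's ghosted local
     region, i.e. among the grid points it may reference when inserting. */
  return (PetscBool)(s.i >= info->gxs && s.i < info->gxs + info->gxm &&
                     s.j >= info->gys && s.j < info->gys + info->gym);
}

Printing (i, j, c) whenever this returns PETSC_FALSE pinpoints exactly which
insertion triggers the "Local index too large" error shown below.
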
>>
>>
>>
>>
>>>
>>>
>>>> PetscInt m = 10, n=10;
>>>> ierr = DMDACreate2d(PETSC_COMM_WORLD, DMDA_BOUNDARY_NONE,
>>>> DMDA_BOUNDARY_NONE,DMDA_STENCIL_BOX,m,n,
>>>> PETSC_DECIDE,PETSC_DECIDE,2,1,NULL,NULL,&da);CHKERRQ(ierr);
>>>>
>>>> Mat mat1;
>>>> ierr = DMCreateMatrix(da,MATMPIAIJ,&mat1); CHKERRQ(ierr);
>>>> MatStencil row, col[4]; //let's test 4 non-zeros in one row.
>>>> PetscScalar val[4];
>>>> PetscScalar coeff = 2.; //just a constant coefficient for testing.
>>>> PetscInt i,j;
>>>> DMDALocalInfo info;
>>>> ierr = DMDAGetLocalInfo(da,&info); CHKERRQ(ierr);
>>>> //Now fill up the matrix:
>>>> for(i = info.ys; i < info.ys+info.ym; ++i){
>>>> for(j = info.xs; j < info.xs+info.xm; ++j){
>>>> row.i = i; row.j = j; //one node at a time
>>>> if (j == 0 || j == info.mx-1){ //left and right borders
>>>> //vx(i,j) = 0;
>>>> row.c = 0;
>>>> col[0].c = 0; val[0] = 1;
>>>> col[0].i = i; col[0].j = j;
>>>> ierr = MatSetValuesStencil(mat1,1,&row,1,col,val,INSERT_VALUES);CHKERRQ(ierr);
>>>>
>>>> //vy: //vy(i,j) - c*vy(i,j+-1) = 0;
>>>> row.c = 1;
>>>> col[0].c = 1; val[1] = coeff;
>>>> col[1].c = 1;
>>>> col[1].i = i;
>>>> if(j == 0) //vy(i,j) - c*vy(i,j+1) = 0;
>>>> col[1].j = j+1;
>>>> else //vy(i,j) - c*vy(i,j-1) = 0;
>>>> col[1].j = j-1;
>>>>
>>>> ierr = MatSetValuesStencil(mat1,1,&row,2,col,val,INSERT_VALUES);CHKERRQ(ierr);
>>>> }
>>>> else if (i == 0 || i == info.my-1){ //top and bottom borders
>>>> //vx: vx(i,j) - c* vx(i+-1,j) = 0;
>>>> row.c = 0;
>>>> col[0].c = 0; val[0] = 1;
>>>> col[0].i = i; col[0].j = j;
>>>> col[1].c = 0; val[1] = coeff;
>>>> if (i == 0) //vx(i,j) - c*vx(i+1,j) = 0;
>>>> col[1].i = i+1;
>>>> else //vx(i,j) - c*vx(i-1,j) = 0;
>>>> col[1].i = i-1;
>>>> col[1].j = j;
>>>> ierr = MatSetValuesStencil(mat1,1,&row,2,col,val,INSERT_VALUES);CHKERRQ(ierr);
>>>>
>>>> //vy(i,j) = 0;
>>>> row.c = 1;
>>>> col[0].c = 1;
>>>> ierr = MatSetValuesStencil(mat1,1,&row,1,col,val,INSERT_VALUES);CHKERRQ(ierr);
>>>> }
>>>> else { //Interior points:
>>>> row.c = 0;//x-eq
>>>> col[0].c = 0; val[0] = 2*coeff;
>>>> col[0].i = i; col[0].j = j+1;
>>>> col[1].c = 0; val[1] = -val[0] - coeff;
>>>> col[1].i = i; col[1].j = j;
>>>> col[2].c = 1; val[2] = 4*coeff;
>>>> col[2].i = i+1; col[2].j = j;
>>>> col[3].c = 1; val[3] = 4*coeff;
>>>> col[3].i = i; col[3].j = j;
>>>> ierr = MatSetValuesStencil(mat1,1,&row,4,col,val,INSERT_VALUES);CHKERRQ(ierr);
>>>>
>>>> row.c = 1; //y-eq
>>>> col[0].c = 1; val[0] = 2*coeff;
>>>> col[0].i = i; col[0].j = j+1;
>>>> col[1].c = 1; val[1] = -val[0] - coeff;
>>>> col[1].i = i; col[1].j = j;
>>>> col[2].c = 0; val[2] = 4*coeff;
>>>> col[2].i = i+1; col[2].j = j;
>>>> col[3].c = 0; val[3] = 4*coeff;
>>>> col[3].i = i; col[3].j = j;
>>>> ierr = MatSetValuesStencil(mat1,1,&row,4,col,val,INSERT_VALUES);CHKERRQ(ierr);
>>>> }
>>>> }
>>>> }
>>>>
>>>> ierr = MatAssemblyBegin(mat1,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
>>>> ierr = MatAssemblyEnd(mat1,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
>>>>
>>>> However, I get the following error when running with 2 processors.
>>>> [0]PETSC ERROR: --------------------- Error Message
>>>> ------------------------------------
>>>> [0]PETSC ERROR: Argument out of range!
>>>> [0]PETSC ERROR: Local index 120 too large 120 (max) at 0!
>>>> [0]PETSC ERROR:
>>>> ------------------------------------------------------------------------
>>>> [0]PETSC ERROR: Petsc Release Version 3.4.1, Jun, 10, 2013
>>>> [0]PETSC ERROR: See docs/changes/index.html for recent updates.
>>>> [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting.
>>>> [0]PETSC ERROR: See docs/index.html for manual pages.
>>>> [0]PETSC ERROR:
>>>> ------------------------------------------------------------------------
>>>> [0]PETSC ERROR: examples/FDdmda_test on a arch-linux2-cxx-debug named
>>>> edwards by bkhanal Thu Jul 4 14:20:11 2013
>>>> [0]PETSC ERROR: Libraries linked from
>>>> /home/bkhanal/Documents/softwares/petsc-3.4.1/arch-linux2-cxx-debug/lib
>>>> [0]PETSC ERROR: Configure run at Wed Jun 19 11:04:51 2013
>>>> [0]PETSC ERROR: Configure options --with-cc=gcc --with-fc=g77
>>>> --with-cxx=g++ --download-f-blas-lapack=1 --download-mpich=1
>>>> -with-clanguage=cxx --download-hypre=1
>>>> [0]PETSC ERROR:
>>>> ------------------------------------------------------------------------
>>>> [0]PETSC ERROR: ISLocalToGlobalMappingApply() line 444 in
>>>> /home/bkhanal/Documents/softwares/petsc-3.4.1/src/vec/is/utils/isltog.c
>>>> [0]PETSC ERROR: MatSetValuesLocal() line 1967 in
>>>> /home/bkhanal/Documents/softwares/petsc-3.4.1/src/mat/interface/matrix.c
>>>> [0]PETSC ERROR: MatSetValuesStencil() line 1339 in
>>>> /home/bkhanal/Documents/softwares/petsc-3.4.1/src/mat/interface/matrix.c
>>>> [0]PETSC ERROR: main() line 75 in
>>>> "unknowndirectory/"/user/bkhanal/home/works/cmake_tuts/petsc_test/examples/FDdmda_test.cxx
>>>> application called MPI_Abort(MPI_COMM_WORLD, 63) - process 0
>>>> [cli_0]: aborting job:
>>>> application called MPI_Abort(MPI_COMM_WORLD, 63) - process 0
>>>>
>>>>
>>>> ===================================================================================
>>>> = BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
>>>> = EXIT CODE: 63
>>>> = CLEANING UP REMAINING PROCESSES
>>>> = YOU CAN IGNORE THE BELOW CLEANUP MESSAGES
>>>>
>>>> ===================================================================================
>>>>
>>>> I do not understand how the indices (i,j) can be out of range when they are
>>>> set to the corresponding fields of the row and col variables. What could the
>>>> problem be? And do you have any suggestions on how to debug this sort of issue?
>>>>
>>>> Thanks,
>>>> Bishesh
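
As a back-of-the-envelope check (assuming the default partition PETSc picks
here): with 2 processes the 10 x 10 grid is split into two 5 x 10 strips, the
stencil width of 1 widens each strip to 6 x 10 ghosted nodes, and with dof = 2
that gives 6 * 10 * 2 = 120 locally addressable entries, i.e. valid local
indices 0..119. A local index of 120 is thus the first entry past the ghost
region, which is what ISLocalToGlobalMappingApply() reports above.
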
>>>>
>>>>
>>>>
>>>>
>>>>
>>>
>>>
>>> --
>>> What most experimenters take for granted before they begin their
>>> experiments is infinitely more interesting than any results to which their
>>> experiments lead.
>>> -- Norbert Wiener
>>>
>>
>>
>
--
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener