question
Nguyen, Hung V ERDC-ITL-MS
Hung.V.Nguyen at usace.army.mil
Thu Oct 9 15:21:22 CDT 2008
I am having trouble using MatCreateMPIAIJWithArrays(). Do you have an
example of using this function?
Thanks
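
For context, here is a rough sketch of the call sequence I have in mind,
pieced together from the manual page and the CSR arrays in the example code
quoted below. The contiguous row split and the shift of the row pointers to
local, zero-based form are only my guesses, not tested code:

/*
   Sketch: build a parallel AIJ matrix directly from CSR arrays.
   Each process is assumed to pass the CSR data for its own rows only,
   with the row-pointer array shifted so that it starts at 0 locally.
*/
#include "petscksp.h"

int main(int argc,char **args)
{
  Mat            A;
  PetscErrorCode ierr;
  PetscMPIInt    rank,size;
  /* Global CSR data for the 5x5 example matrix used below. */
  PetscInt       row_ptr[] = {0, 2, 5, 9, 11, 12};
  PetscInt       col_ind[] = {0,3, 0,1,3, 0,2,3,4, 2,3, 4};
  PetscScalar    val[]     = {11.,14., 21.,22.,24., 31.,33.,34.,35., 43.,44., 55.};
  PetscInt       M = 5, rstart, rend, nlocal, i;
  PetscInt       local_ptr[6];   /* row pointers for the local rows only */

  PetscInitialize(&argc,&args,(char*)0,0);
  MPI_Comm_rank(PETSC_COMM_WORLD,&rank);
  MPI_Comm_size(PETSC_COMM_WORLD,&size);

  /* Simple contiguous row split (an assumption for this sketch). */
  rstart = rank*M/size;
  rend   = (rank+1)*M/size;
  nlocal = rend - rstart;

  /* Local row pointers must be zero-based, so subtract the offset. */
  for (i=0; i<=nlocal; i++) local_ptr[i] = row_ptr[rstart+i] - row_ptr[rstart];

  /* Column indices and values for the local rows start at
     row_ptr[rstart] in the global arrays. */
  ierr = MatCreateMPIAIJWithArrays(PETSC_COMM_WORLD,nlocal,PETSC_DECIDE,M,M,
                                   local_ptr,col_ind+row_ptr[rstart],
                                   val+row_ptr[rstart],&A);CHKERRQ(ierr);

  ierr = MatView(A,PETSC_VIEWER_STDOUT_WORLD);CHKERRQ(ierr);
  ierr = MatDestroy(A);CHKERRQ(ierr);
  ierr = PetscFinalize();CHKERRQ(ierr);
  return 0;
}

Is a local, zero-based row-pointer array the right thing to pass here, or
does the routine expect something else?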
-----Original Message-----
From: owner-petsc-users at mcs.anl.gov [mailto:owner-petsc-users at mcs.anl.gov] On
Behalf Of Barry Smith
Sent: Thursday, October 09, 2008 9:24 AM
To: petsc-users at mcs.anl.gov
Subject: Re: question
You can use the utilities MatCreateSeqAIJWithArrays() or
MatCreateMPIAIJWithArrays(); they handle all the details for you.
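
For example, a minimal sketch with the sequential variant, reusing the 5x5
CSR arrays from the code quoted below (note that MatCreateSeqAIJWithArrays()
does not copy the arrays, so they must stay valid until the matrix is
destroyed):

/* Sketch: wrap existing CSR arrays as a SeqAIJ matrix. */
#include "petscmat.h"

int main(int argc,char **args)
{
  Mat            A;
  PetscErrorCode ierr;
  /* CSR data: row pointers, column indices, values (5 rows, 12 nonzeros). */
  PetscInt       row_ptr[] = {0, 2, 5, 9, 11, 12};
  PetscInt       col_ind[] = {0,3, 0,1,3, 0,2,3,4, 2,3, 4};
  PetscScalar    val[]     = {11.,14., 21.,22.,24., 31.,33.,34.,35., 43.,44., 55.};

  PetscInitialize(&argc,&args,(char*)0,0);

  /* The communicator must have size one for the Seq variant. */
  ierr = MatCreateSeqAIJWithArrays(PETSC_COMM_SELF,5,5,row_ptr,col_ind,val,&A);CHKERRQ(ierr);

  ierr = MatView(A,PETSC_VIEWER_STDOUT_SELF);CHKERRQ(ierr);
  ierr = MatDestroy(A);CHKERRQ(ierr);
  ierr = PetscFinalize();CHKERRQ(ierr);
  return 0;
}

The parallel version, MatCreateMPIAIJWithArrays(), takes the same kind of
arrays but for the locally owned rows of each process.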
Barry
On Oct 9, 2008, at 9:04 AM, Nguyen, Hung V ERDC-ITL-MS wrote:
> All,
>
> I am looking for example code that reads A (in CSR format) and b,
> builds A and b in PETSc format, and solves Ax = b.
>
> I found the example below, but it does not seem to work.
>
> If you have a similar example, or can tell me where the problem is, I
> would appreciate it very much.
>
> Thanks,
>
> -Hung
> ---
> /*
> Purpose: Test a sparse matrix solver.
> */
> #include <stdio.h>
> #include "petscksp.h"
>
> int main(int argc,char **args)
> {
> /* My sample sparse matrix A */
>
> /*
> 11.0 0 0 14.0 0
> 21.0 22.0 0 24.0 0
> 31.0 0 33. 34.0 35.0
> 0 0 43. 44.0 0
> 0 0 0 0 55.
> */
>
>
> const int sizeMat=5; // Matrix is 5 by 5.
> int i,j;
> int nonZero=12;
> double val[]    ={11., 14., 21., 22., 24., 31., 33., 34., 35., 43., 44., 55.};
> int col_ind[]   ={0, 3, 0, 1, 3, 0, 2, 3, 4, 2, 3, 4};
> int row_ptr[]   ={0, 2, 5, 9, 11, 12};
> double knB[]    ={2.0, 0.0, 1.0, 1.0, 2.0};
> double answer[] ={-0.0348308, -0.152452, -0.150927, 0.170224, 0.0363636};
>
> // Number of nonzeros per row, global row index of each nonzero, and vector indices:
>
> int nZperRow[]={2, 3, 4, 2, 1};
> int row_ind[]={0,0, 1,1,1, 2,2,2,2, 3,3, 4};
> int vec_ind[]={0,1,2,3,4};
> double initX[]={9.,9.,9.,9.,9.};
>
>
>
> /*
>    PETSc code starts here.
> */
> printf("\n*** PETSc testing phase ***\n");
> /* Create variables of PetSc */
> Vec            x, b, u;      /* approx solution, RHS, exact solution of the linear system Ax = b */
> Mat            A;            /* linear system matrix */
> KSP            ksp;          /* linear solver context */
> PetscInt       Istart, Iend; /* index range of locally owned matrix rows */
> PetscInt       istart, iend; /* index range of locally owned vector entries */
> PetscViewer    viewer;
> PetscMPIInt    rank;
> PetscErrorCode ierr;
> PetscTruth     flg;
>
> static char help[] = "Parallel vector layout.\n\n";
> /* Initialization of PetSc */
> PetscInitialize(&argc,&args,(char*)0,help);
> MPI_Comm_rank(PETSC_COMM_WORLD,&rank);
>
> /*
>    Create parallel matrix, specifying only its global dimensions.
>    When using MatCreate(), the matrix format can be specified at
>    runtime.  Also, the parallel partitioning of the matrix is
>    determined by PETSc at runtime.
>
>    Performance tuning note: For problems of substantial size,
>    preallocation of matrix memory is crucial for attaining good
>    performance.  See the matrix chapter of the users manual for
>    details.
>    - Allocates memory for a sparse parallel matrix in AIJ format
>      (the default parallel PETSc format: Compressed Sparse Row).
> */
> ierr = MatCreate(PETSC_COMM_WORLD,&A);CHKERRQ(ierr);
> ierr = MatSetSizes(A,PETSC_DECIDE,PETSC_DECIDE,sizeMat,sizeMat);CHKERRQ(ierr);
> ierr = MatSetType(A, MATAIJ);CHKERRQ(ierr);
> ierr = MatSetFromOptions(A);CHKERRQ(ierr);
>
> /*
>    Currently, all PETSc parallel matrix formats are partitioned by
>    contiguous chunks of rows across the processors.  Determine which
>    rows of the matrix are locally owned.
> */
> ierr = MatGetOwnershipRange(A,&Istart,&Iend);CHKERRQ(ierr);
> printf(" Rank= %d, Istart_row= %d, Iend_row+1 = %d \n", rank, Istart, Iend);
>
> /*
>    Preallocate the matrix:
>    - MatMPIAIJSetPreallocationCSR() passes the full CSR structure
>      (standard CSR format),
>    - MatSeqAIJSetPreallocation() gives the number of nonzeros per row
>      for the sequential case.
> */
> ierr = MatMPIAIJSetPreallocationCSR(A,row_ptr,col_ind,PETSC_NULL);CHKERRQ(ierr);
> ierr = MatSeqAIJSetPreallocation(A,0,nZperRow);CHKERRQ(ierr);
>
> /*
>    Set matrix elements in parallel.
>    - Each processor needs to insert only elements that it owns
>      locally (but any non-local elements will be sent to the
>      appropriate processor during matrix assembly).
>    - Always specify global rows and columns of matrix entries.
> */
> /* Method 1: Efficient method - insert the locally owned nonzeros one at a time. */
> for (i=row_ptr[Istart]; i<row_ptr[Iend]; i++)
> {
>   //ierr = MatSetValue(A,row_ind[i],col_ind[i],val[i],INSERT_VALUES);CHKERRQ(ierr);
>   ierr = MatSetValues(A,1,&(row_ind[i]),1,&(col_ind[i]),&(val[i]),INSERT_VALUES);CHKERRQ(ierr);
> }
>
> ierr = MatAssemblyBegin(A,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
> ierr = MatAssemblyEnd(A,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
>
> /*
>    Visualize the matrix.  Set a viewer's style.
>    To see a dense matrix, use the following two lines:
>    Line1: viewer = PETSC_VIEWER_STDOUT_(PETSC_COMM_WORLD);
>    Line2: ierr = PetscViewerSetFormat(viewer,PETSC_VIEWER_ASCII_DENSE);
> */
> ierr = MatView(A,PETSC_VIEWER_STDOUT_WORLD);CHKERRQ(ierr);
>
> /*
> Create parallel vectors.
> */
> ierr = VecCreate(PETSC_COMM_WORLD,&u);CHKERRQ(ierr);
> ierr = VecSetSizes(u,PETSC_DECIDE,sizeMat);CHKERRQ(ierr);
> ierr = VecSetFromOptions(u);CHKERRQ(ierr);
> ierr = VecDuplicate(u,&b);CHKERRQ(ierr);
> ierr = VecDuplicate(b,&x);CHKERRQ(ierr);
> /*
>    PETSc parallel vectors are partitioned by contiguous chunks of
>    rows across the processors.  Determine which vector entries are
>    locally owned.
> */
> VecGetOwnershipRange(b,&istart,&iend);
> /*
> Insert vector values
> */
> VecSetValues(u,sizeMat,vec_ind,answer,INSERT_VALUES);
> VecSetValues(x,sizeMat,vec_ind,initX,INSERT_VALUES);
> VecSetValues(b,sizeMat,vec_ind,knB,INSERT_VALUES);
> /*
>    Assemble vector, using the 2-step process:
>      VecAssemblyBegin(), VecAssemblyEnd()
>    Computations can be done while messages are in transition by
>    placing code between these two statements.
> */
> VecAssemblyBegin(u); VecAssemblyEnd(u);
> VecAssemblyBegin(x); VecAssemblyEnd(x);
> VecAssemblyBegin(b); VecAssemblyEnd(b);
> /*
> View the exact solution vector if desired
> */
> if (rank==0) printf("Vector u: \n");
> flg = PETSC_TRUE;
> if (flg) {ierr = VecView(u,PETSC_VIEWER_STDOUT_WORLD);CHKERRQ(ierr);}
> if (rank==0) printf("Vector x: \n");
> if (flg) {ierr = VecView(x,PETSC_VIEWER_STDOUT_WORLD);CHKERRQ(ierr);}
> if (rank==0) printf("Vector b: \n");
> if (flg) {ierr = VecView(b,PETSC_VIEWER_STDOUT_WORLD);CHKERRQ(ierr);}
>
> /*
> Create the linear solver and set various options
> */
> KSPCreate(PETSC_COMM_WORLD,&ksp);
> KSPSetOperators(ksp,A,A,DIFFERENT_NONZERO_PATTERN);
> KSPSetInitialGuessNonzero(ksp,PETSC_TRUE);
> KSPSetFromOptions(ksp);
>
> /*
> Solve the linear system
> */
> KSPSolve(ksp,b,x);
>
> if (rank==0) printf("Solved Vector x: \n");
> if (flg) {ierr = VecView(x,PETSC_VIEWER_STDOUT_WORLD);CHKERRQ(ierr);}
>
> /*
> Free work space. All PETSc objects should be destroyed
> when they are no longer needed.
> */
> ierr = KSPDestroy(ksp);CHKERRQ(ierr);
> ierr = MatDestroy(A);CHKERRQ(ierr);
> ierr = VecDestroy(u);CHKERRQ(ierr);
> ierr = VecDestroy(x);CHKERRQ(ierr);
> ierr = VecDestroy(b);CHKERRQ(ierr);
>
> ierr = PetscFinalize();CHKERRQ(ierr);
>
> printf("\n");
> return 0;
> }
>