mat creation
Barry Smith
bsmith at mcs.anl.gov
Mon Feb 6 20:50:58 CST 2006
On Mon, 6 Feb 2006, Barry Smith wrote:
>
>
> On Mon, 6 Feb 2006, Roberto Gori wrote:
>
> > Many thanks Barry,
> >
> > MatMPIAIJSetPreallocationCSR could be a good solution for me.
> > It seems that to satisfy the MatMPIAIJSetPreallocationCSR requirements I should convert the nz array from a relative to an absolute representation of the row offsets but ...
> > I think this implies a not negligible drawback: my matrix is too large and offsets must be promoted to a long long int (64 bit) type.
> > Moreover the idxn array type also must be promoted to be compliant with a 64 bit PetscInt type. This requires the double amount of memory for nz and idxn.
> >
> > What do you think about the opportunity to provide another version of MatMPIAIJSetPreallocationCSR based on relative offsets to save memory?
> > Absolute offsets could be managed internally ...
>
There may be some confusion here: the idxn values are only relative to that process (i.e. idxn[0] == 0), so
there won't be a problem as long as you have fewer than about 2 billion nonzeros on a process (which would
take more than 28 gigabytes of memory).
Barry
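
A minimal sketch of the conversion discussed above, assuming the per-row
counts are in nz[] and the local row count is m (names taken from
Roberto's message; the PetscMalloc call is the PETSc 2.3-era form).
Because the offsets stay local to the process, plain PetscInt suffices:

    PetscInt       *I,i;     /* CSR row offsets, length m+1 */
    PetscErrorCode ierr;

    ierr = PetscMalloc((m+1)*sizeof(PetscInt),&I);CHKERRQ(ierr);
    I[0] = 0;                          /* first row starts at offset 0 */
    for (i=0; i<m; i++) {
      I[i+1] = I[i] + nz[i];           /* running sum of the row lengths */
    }
    /* I[m] is now the total number of local nonzeros */
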
> Your point is valid and noted, but this would require writing another
> whole set of AIJ routines that use relative offsets. Maybe not so
> hard, but a lot of code duplication.
>
> >
> > Finally I've looked into the body of the MatMPIAIJSetPreallocationCSR_MPIAIJ function and I found this snippet of code:
> >
> > for (i=0; i<m; i++) {
> >   ii   = i + rstart;
> >   nnz  = I[i+1] - I[i];
> >   ierr = MatSetValues_MPIAIJ(B,1,&ii,nnz,J+I[i],values+(v ? I[i] : 0),INSERT_VALUES);CHKERRQ(ierr);
> > }
> >
> > Is there no way to avoid copying the values? This doubles the memory for the values array one more time and spends time in the loop.
> > Moreover, nnz corresponds exactly to my nz[i] ...
>
> The thing is PETSc does not just store the MPIAIJ matrices in a simple
> CSR format (the matrix is actually stored in two parts, see the MATMPIAIJ
> manual page). So the copy is needed.
>
> What we do is first count the number of nonzeros per row, then
> preallocate the memory, then compute the values in the matrix; see
> MatMPIAIJSetPreallocation(). This may seem expensive but really is not;
> the counting is fast (see the sketch below). You build the CSR matrix
> for your sparse matrix anyway; using our approach is no harder and
> avoids the need for a copy. (In fact, the reason PETSc has the
> Preallocation() and MatSetValues() routines is so that users do not
> have to construct the CSR format directly.)
>
> Barry
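
As a concrete illustration of the count/preallocate/insert workflow (an
editor's toy example, not code from this thread): a distributed
tridiagonal matrix, with the per-row counts split into the
diagonal-block and off-diagonal-block parts that MATMPIAIJ stores
separately. The calls are the PETSc 2.3-era ones (MatCreateMPIAIJ was
later renamed MatCreateAIJ, and MatDestroy later took a Mat*):

    #include <petscmat.h>

    int main(int argc,char **argv)
    {
      Mat            A;
      PetscInt       m = 4,M,rstart,rend,i,ncols,cols[3];
      PetscInt       d_nnz[4],o_nnz[4];
      PetscScalar    vals[3];
      PetscErrorCode ierr;

      ierr = PetscInitialize(&argc,&argv,(char*)0,(char*)0);CHKERRQ(ierr);

      /* each process owns m rows; compute its first global row */
      ierr = MPI_Allreduce(&m,&M,1,MPIU_INT,MPI_SUM,PETSC_COMM_WORLD);CHKERRQ(ierr);
      ierr = MPI_Scan(&m,&rend,1,MPIU_INT,MPI_SUM,PETSC_COMM_WORLD);CHKERRQ(ierr);
      rstart = rend - m;

      /* pass 1: count nonzeros per row; columns in [rstart,rend) land
         in the diagonal block, all others in the off-diagonal block */
      for (i=0; i<m; i++) {
        PetscInt row = rstart + i,nd = 1,no = 0;      /* the diagonal */
        if (row > 0)   { if (row-1 >= rstart) nd++; else no++; }
        if (row < M-1) { if (row+1 <  rend)   nd++; else no++; }
        d_nnz[i] = nd; o_nnz[i] = no;
      }
      ierr = MatCreateMPIAIJ(PETSC_COMM_WORLD,m,m,M,M,0,d_nnz,0,o_nnz,&A);CHKERRQ(ierr);

      /* pass 2: compute the same entries again and insert them */
      for (i=0; i<m; i++) {
        PetscInt row = rstart + i;
        ncols = 0;
        if (row > 0)   { cols[ncols] = row-1; vals[ncols++] = -1.0; }
        cols[ncols] = row; vals[ncols++] = 2.0;
        if (row < M-1) { cols[ncols] = row+1; vals[ncols++] = -1.0; }
        ierr = MatSetValues(A,1,&row,ncols,cols,vals,INSERT_VALUES);CHKERRQ(ierr);
      }
      ierr = MatAssemblyBegin(A,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
      ierr = MatAssemblyEnd(A,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);

      ierr = MatDestroy(A);CHKERRQ(ierr);
      ierr = PetscFinalize();CHKERRQ(ierr);
      return 0;
    }
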
>
>
>
> >
> > ----- Original Message -----
> > From: Barry Smith
> > To: Roberto Gori
> > Cc: petsc-users at mcs.anl.gov
> > Sent: Monday, February 06, 2006 4:15 PM
> > Subject: Re: mat creation
> >
> >
> >
> >
> > On Mon, 6 Feb 2006, Roberto Gori wrote:
> >
> > > Hi, guys,
> > >
> > > I'm trying to solve a linear system with M equations and N unknowns (M >> N) using the LSQR method.
> > >
> > > I have a parallel sparse MxN matrix in C with NZ nonzero elements, stored this way:
> > >
> > > double values[NZ]; // the nonzero values
> > > int idxn[NZ]; // the column indices
> > > int nz[M]; // the number of nonzero values for each row
> >
> > This instead needs to hold the index of the start of each row in idxn
> > and values, rather than the length of each row.
> > >
> > > what is the best way to create a PETSc matrix?
> >
> > MatMPIAIJSetPreallocationCSR()
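
A minimal usage sketch (an editor's addition, with hypothetical
variable names): assuming the local row offsets I[] have been built
from nz[] as in the sketch further up, the global column indices are in
J[], and the matrix has local sizes m x n:

    Mat B;
    ierr = MatCreate(PETSC_COMM_WORLD,&B);CHKERRQ(ierr);
    ierr = MatSetSizes(B,m,n,PETSC_DETERMINE,PETSC_DETERMINE);CHKERRQ(ierr);
    ierr = MatSetType(B,MATMPIAIJ);CHKERRQ(ierr);
    /* I[] are local row offsets with I[0] == 0, J[] global column
       indices, values[] the nonzeros; the data is copied internally */
    ierr = MatMPIAIJSetPreallocationCSR(B,I,J,values);CHKERRQ(ierr);
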
> >
> >
> > >
> > > Does a routine like VecCreateMPIWithArray exist for already distributed vectors that could do all the work in one shot, without a copy?
> >
> > Yes
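
For the vector side, a minimal sketch of wrapping already-allocated
storage without a copy (rhs and mlocal are hypothetical names; the
signature shown is the PETSc 2.3-era one, later versions add a leading
block-size argument):

    Vec         b;
    PetscInt    mlocal = 100;    /* local length, hypothetical value */
    PetscScalar *rhs;            /* user-owned storage; must outlive b */

    ierr = PetscMalloc(mlocal*sizeof(PetscScalar),&rhs);CHKERRQ(ierr);
    /* wraps rhs directly; PETSc neither copies nor frees it */
    ierr = VecCreateMPIWithArray(PETSC_COMM_WORLD,mlocal,PETSC_DECIDE,rhs,&b);CHKERRQ(ierr);
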
> >
> > > Just another detail. Initially M is unknown. All my structures are
> > > allocated dynamically because the number of rows M (and the number
> > > of nonzeros NZ as well) is determined just before solving the system,
> > > so I think it is not feasible for me to call MatCreateMPIAIJ at the
> > > beginning of my code and then perform the MatSetValues loop.
> >
> > There is nothing saying that you must "call MatCreateMPIAIJ at the
> > beginning of your code"; you can call it anytime you have the
> > needed information.
> >
> > Barry
> >
> > >
> > > Roberto
> > >
> > > ================================================================
> > > | Roberto Gori                     CINECA
> > > | High Performance System Group    Via Magnanelli 6/3, 40033
> > > | e-mail: r.gori at cineca.it      Casalecchio di Reno
> > > | Tel: 051/6171522                 Bologna - ITALY
> > > | Fax: 051/6132198                 http://www.cineca.it
> > > ================================================================
> > >
> > >
> >
> >
>