matrix memory allocation

Matt Funk mafunk at nmsu.edu
Wed Mar 26 18:30:14 CDT 2008


Well,

It is of course possible that I am looking at this wrong.

As to your question about what it would save:

- case 1): allocating all local rows on the local process at one time:
in order to determine the nonzero counts for the local on-diagonal and
off-diagonal submatrices, I have to iterate over my entire domain and run
logic at each node to decide whether its neighbors (for the Laplacian) are
on the local processor, and whether they are even in the domain. Because I
have to run the same logic again once I get to the point where I insert
values into the matrix, this is redundant, but necessary to determine the
space to allocate.
I would like to eliminate this (see the sketch right after this list).

- case 2): do not preallocate anything:
very slow, and not really an option.
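
To make case 1 concrete, what I do now looks roughly like the sketch below
(CountStencil() and GetStencil() are placeholders for my per-node box logic,
nLocalRows comes from my decomposition, a 5-point stencil is assumed, and
error checking is omitted):

  Mat      A;
  PetscInt i, m = nLocalRows, rstart, rend, *d_nnz, *o_nnz;

  PetscMalloc(m*sizeof(PetscInt), &d_nnz);
  PetscMalloc(m*sizeof(PetscInt), &o_nnz);
  /* pass 1: walk the domain just to count on/off-diagonal nonzeros */
  for (i = 0; i < m; i++) CountStencil(i, &d_nnz[i], &o_nnz[i]);
  MatCreateMPIAIJ(PETSC_COMM_WORLD, m, m, PETSC_DETERMINE, PETSC_DETERMINE,
                  0, d_nnz, 0, o_nnz, &A);
  MatGetOwnershipRange(A, &rstart, &rend);
  /* pass 2: the same neighbor logic all over again, now for the values */
  for (i = 0; i < m; i++) {
    PetscInt    ncols, cols[5], grow = rstart + i;
    PetscScalar vals[5];
    GetStencil(i, &ncols, cols, vals);
    MatSetValues(A, 1, &grow, ncols, cols, vals, INSERT_VALUES);
  }
  MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);
  MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);
  PetscFree(d_nnz); PetscFree(o_nnz);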

So, what it would gain me is the best compromise between memory-allocation
performance and logic overhead, i.e. the logic for iterating through the
domain layout is executed only once instead of twice.
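
The closest I can get to that compromise with the current API is to run the
logic once, cache the column indices it produces, and replay the cache in
the insertion pass. A sketch, with the same declarations as above
(GetStencilCols() and ComputeVals() are again placeholders; rstart/rend is
my local row range, which I know from the decomposition):

  PetscInt i, j, *ncols, **cols, colbuf[5];

  PetscMalloc(m*sizeof(PetscInt),  &ncols);
  PetscMalloc(m*sizeof(PetscInt*), &cols);
  /* pass 1: the neighbor logic runs once; remember each row's columns */
  for (i = 0; i < m; i++) {
    GetStencilCols(i, &ncols[i], colbuf);
    d_nnz[i] = o_nnz[i] = 0;
    for (j = 0; j < ncols[i]; j++) {
      if (colbuf[j] >= rstart && colbuf[j] < rend) d_nnz[i]++;
      else                                         o_nnz[i]++;
    }
    PetscMalloc(ncols[i]*sizeof(PetscInt), &cols[i]);
    PetscMemcpy(cols[i], colbuf, ncols[i]*sizeof(PetscInt));
  }
  MatCreateMPIAIJ(PETSC_COMM_WORLD, m, m, PETSC_DETERMINE, PETSC_DETERMINE,
                  0, d_nnz, 0, o_nnz, &A);
  /* pass 2: no neighbor logic, just replay the cached columns */
  for (i = 0; i < m; i++) {
    PetscInt    grow = rstart + i;
    PetscScalar vals[5];
    ComputeVals(i, ncols[i], cols[i], vals);
    MatSetValues(A, 1, &grow, ncols[i], cols[i], vals, INSERT_VALUES);
  }

The cache costs extra memory (about the size of the matrix's own index
arrays), but the per-node logic executes only once.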

I hope this makes sense ...

But I guess it is not possible anyway. Too bad ...
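
The only other fallback I can think of is to skip the counting pass entirely
and overestimate: for a 5-point Laplacian no row can have more than 5
entries in the diagonal block or 4 in the off-diagonal block, so fixed
per-row upper bounds are safe (the exact bounds are an assumption about my
stencil):

  /* no counting pass: d_nz=5 and o_nz=4 are per-row upper bounds, so
     MatSetValues never has to allocate; some memory is wasted per row */
  MatCreateMPIAIJ(PETSC_COMM_WORLD, m, m, PETSC_DETERMINE, PETSC_DETERMINE,
                  5, PETSC_NULL, 4, PETSC_NULL, &A);

That wastes a few entries per row but eliminates the first sweep over the
domain altogether.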

thanks though
mat




On Wednesday 26 March 2008 17:07, Matthew Knepley wrote:
> This is not possible with the current implementation. I am not sure why you
> would want to do it; it seems to me that it would save nothing. Do you have
> some sort of performance problem?
>
>   Matt
>
> On Wed, Mar 26, 2008 at 5:01 PM, Matt Funk <mafunk at nmsu.edu> wrote:
> > Maybe I should clarify a little more.
> >  I guess what I am looking for is something that lets me split up the
> > memory allocation within a given processor.
> >
> >  The example you guys often show (for example, on the MatCreateMPIAIJ
> > manual page) is one where the local process owns 3 rows.
> >
> >  But consider, for example, if the local process owns 1000 rows. Right now
> > (as far as I can see) I have two options. One is to not preallocate any
> > memory and just use MatSetValues, which allocates memory row by row; this
> > is very slow, as you guys state on your manual pages.
> >  The other option is to allocate all 1000 rows at once, which means
> > extra overhead.
> >
> >  However, what I would like to be able to do (as an example) is make two
> > calls: one to allocate 200 rows, and a second to allocate the remaining
> > 800 rows.
> >
> >  I do not know if this is possible, as I don't know whether all the local
> > data is stored in one contiguous array.
> >
> >  Anyway, I hope this clarifies my question a little more.
> >
> >  thanks
> >  mat
> >
> >  On Wednesday 26 March 2008 14:31, Matt Funk wrote:
> >  > Hi,
> >  >
> >  > in order to create a sparse MPI matrix (with preallocated memory) it
> >  > is necessary to give it the number of nonzero entries per row for the
> >  > local on-diagonal and off-diagonal submatrices.
> >  >
> >  > The way I do things right now, I need to allocate the entire matrix at
> >  > once. However, my domain is decomposed into boxes, where one or more
> >  > boxes reside on the current processor.
> >  >
> >  > So, I was wondering if it is possible to allocate the matrix memory
> >  > per box. Right now what I do is this:
> >  > 1) iterate over the whole domain (every single box) and extract the
> >  > info I need from each box for memory allocation.
> >  > 2) call MatCreateMPIAIJ to create the matrix/allocate memory.
> >  > 3) iterate over the whole domain again to get the info to insert the
> >  > values into the matrix.
> >  >
> >  > Is it possible to instead do something like:
> >  > 1) get info for values and memory allocation needed per box
> >  > 2) allocate memory for x number of rows in the global matrix and
> >  > insert values for them (via MatSetValues)
> >  > 3) go back to 1) until every box in the domain is covered.
> >  >
> >  > The problem in 2) is that I can insert the values, but there is no
> >  > memory preallocated for them, so it would be slow. If I could allocate
> >  > it, I think it would solve my problem.
> >  >
> >  > I hope I am making sense. By the way, I cannot use the DA construct
> >  > because I have overlapping values.
> >  >
> >  > thanks
> >  > mat



