[petsc-users] slepc eating all my ram

Simon Burton simon at arrowtheory.com
Fri Jul 15 12:12:36 CDT 2016


Hi,

just like this?
./ex3 -eps_nev 1 -eps_type power -n 65536 -info

I still see:
[0] STSetUp(): Setting up new ST

and that's when memory usage reaches 192GB and the machine can't take it.

I don't understand why the default behaviour creates a spectral transform 
object that then needs so much memory.

thanks,

Simon.


On Fri, 15 Jul 2016 11:13:27 -0500
Hong <hzhang at mcs.anl.gov> wrote:

> Simon :
> For '-eps_hermitian -eps_largest_magnitude', why do you need 'spectral
> transform'?
> Try the SLEPc default method for ex3.c.
> 
> Hong
> 
> >
> > Hi,
> >
> > I'm running a SLEPc eigenvalue solver on a single machine with 198GB of
> > RAM, and the solution space has dimension 2^32. With double precision this
> > means each vector is 32GB. I'm using shell matrices to implement the
> > matrix-vector product. I figured the easiest way to get eigenvalues is
> > using the SLEPc power method, but it is still eating all the RAM.
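> >
> > (For reference, the shell matrix setup looks roughly like this; an
> > untested sketch, where MyMult and N are just illustrative names, and a
> > 64-bit-index PETSc build is needed for N = 2^32:)
> >
> >   /* y = A x, computed on the fly -- the matrix is never stored */
> >   PetscErrorCode MyMult(Mat A, Vec x, Vec y)
> >   {
> >     /* apply the operator to x and put the result in y */
> >     return 0;
> >   }
> >
> >   Mat A;
> >   /* single process, so local size == global size N */
> >   MatCreateShell(PETSC_COMM_WORLD, N, N, N, N, NULL, &A);
> >   MatShellSetOperation(A, MATOP_MULT, (void (*)(void))MyMult);
> >   EPSSetOperators(eps, A, NULL);   /* standard (non-generalized) problem */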
> >
> > Running in gdb I see that SLEPc is allocating a bunch of vectors in
> > the spectral transform object (in STSetUp), and by this time it has
> > consumed most of the 198GB of RAM. I don't see why a spectral transform
> > with a shift of zero needs to allocate so much memory.
> >
> > Are there other options to SLEPc that can reduce the memory footprint?
> > A barebones implementation of the power method only needs to keep two
> > vectors, so perhaps I should just try doing this with PETSc primitives.
> > It's also possible that I could spread the computation over two or more
> > machines, but that's a whole other learning curve.
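> >
> > (The kind of loop I mean, using plain PETSc calls and only two work
> > vectors; an untested sketch, with maxit chosen arbitrarily:)
> >
> >   Vec       x, y;
> >   PetscReal norm;
> >   PetscInt  i, maxit = 100;
> >
> >   MatCreateVecs(A, &x, &y);
> >   VecSetRandom(x, NULL);
> >   VecNormalize(x, NULL);
> >   for (i = 0; i < maxit; i++) {
> >     MatMult(A, x, y);           /* y = A x */
> >     VecNorm(y, NORM_2, &norm);  /* estimate of |lambda_max|, since x is a unit vector */
> >     VecScale(y, 1.0/norm);      /* normalize */
> >     VecCopy(y, x);              /* reuse the same two vectors */
> >   }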
> >
> > The code I am running is essentially the laplacian grid
> > example from slepc (src/eps/examples/tutorials/ex3.c):
> >
> > ./ex3 -eps_hermitian -eps_largest_magnitude -eps_monitor ascii -eps_nev 1
> > -eps_type power -n 65536
> >
> > I also put this line in the source (nev=1, ncv=2, mpd=1):
> > EPSSetDimensions(eps,1,2,1);
> >
> > Cheers,
> >
> > Simon.
> >
> >


More information about the petsc-users mailing list