[petsc-users] optimizing repeated calls to KSPsolve?
Luke Bloy
lbloy at seas.upenn.edu
Sat Dec 11 00:21:05 CST 2010
Matt, thanks for the response. I'll give those a try. I'm also
interested in trying the Cholesky decomposition; are there particular
external packages required to use it?
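To be concrete, what I have in mind is selecting it at run time, roughly

    -ksp_type preonly -pc_type cholesky

(option names taken from the manual pages as best I can tell, so corrections
welcome). My impression is that the built-in Cholesky needs no external
package but only runs in serial; for a parallel factorization I assume the
same -pc_factor_mat_solver_package switch Matt mentions below would also
accept a Cholesky-capable package such as mumps.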
Thanks again.
Luke
On 12/10/2010 06:22 PM, Matthew Knepley wrote:
> On Fri, Dec 10, 2010 at 11:03 PM, Luke Bloy <luke.bloy at gmail.com> wrote:
>
>
> Thanks for the response.
>
> On 12/10/2010 04:18 PM, Jed Brown wrote:
>> On Fri, Dec 10, 2010 at 22:15, Luke Bloy <luke.bloy at gmail.com> wrote:
>>
>> My problem is that I have a large number (~500,000) of b
>> vectors that I would like to find solutions for. My plan is
>> to call KSPSolve repeatedly with each b. However, I wonder if
>> there are any solvers or approaches that might benefit from
>> the fact that my A matrix does not change. Are there any
>> decompositions that would still be sparse and would offer a
>> speedup?
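(To make the intended usage pattern concrete, the loop I have in mind is
essentially the sketch below, with made-up names for the matrix, right-hand
sides, solutions and count (A, b[], x[], nrhs) and using the 3.1-style
calling sequences. The hope is that with a direct preconditioner the
factorization is computed once, at the first KSPSolve, and every later call
only does the triangular solves, since KSPSetOperators is never called again.)

    KSP      ksp;
    PetscInt i;
    KSPCreate(PETSC_COMM_WORLD, &ksp);
    KSPSetOperators(ksp, A, A, SAME_NONZERO_PATTERN); /* A is set once and never changed */
    KSPSetFromOptions(ksp);              /* e.g. -ksp_type preonly -pc_type lu */
    for (i = 0; i < nrhs; i++) {         /* nrhs is ~500,000 right-hand sides */
      KSPSolve(ksp, b[i], x[i]);         /* the factorization is reused for every b */
    }
    KSPDestroy(ksp);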
>>
>>
>> 1. What is the high-level problem you are trying to solve? There
>> might be a better way.
>>
> I'm solving a diffusion problem. Essentially, I have 2,000,000
> possible states for my system to be in. The system evolves based
> on a Markov matrix M, which describes the probability that the system
> moves from one state to another. This matrix is extremely sparse,
> with fewer than 100,000,000 nonzero elements. The problem is to pump
> mass/energy into the system at certain states. What I'm interested
> in is the steady-state behavior of the system.
>
> Basically, the dynamics can be summarized as
>
> d_{t+1} = M d_{t} + d_i
>
> where d_t is the state vector at time t and d_i represents the states I
> am pumping energy into. I want to find d_t as t goes to infinity.
>
> My current approach is to solve the following system.
>
> (I-M) d = d_i
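(Spelling out the step for my own benefit: at steady state d_{t+1} = d_t = d,
so d = M d + d_i, which rearranges to (I - M) d = d_i. This assumes the
iteration actually settles, i.e. the spectral radius of M is below 1, so
that I - M is nonsingular.)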
>
> I'm certainly open to any suggestions you might have.
>
>> 2. If you can afford the memory, a direct solve probably makes sense.
>
> My understanding is that the inverse would generally be dense. I
> certainly don't have the memory to hold a 2 million by 2 million
> dense matrix; I have about 40 GB to play with. So perhaps a
> decomposition might work? Which would you suggest?
>
>
> Try -pc_type lu -pc_factor_mat_solver_package <mumps, superlu_dist> once you
> have reconfigured using
>
> --download-superlu_dist --download-mumps
>
> They are sparse LU factorization packages that might work.
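(In code form, if I'm reading the manual pages correctly, the analogue of
those options would be roughly the following, with KSPPREONLY added so no
Krylov iterations run on top of the exact factorization; ksp is assumed to
be the KSP object created elsewhere, and the "mumps" string could equally
be "superlu_dist":

    PC pc;
    KSPSetType(ksp, KSPPREONLY);               /* apply the factorization directly */
    KSPGetPC(ksp, &pc);
    PCSetType(pc, PCLU);                       /* sparse LU factorization */
    PCFactorSetMatSolverPackage(pc, "mumps");  /* or "superlu_dist"; needs the
                                                  matching --download option at
                                                  configure time */
)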
>
> Matt
>
> Thanks
> Luke
>
>
>
>
> --
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which
> their experiments lead.
> -- Norbert Wiener