[petsc-users] optimizing repeated calls to KSPsolve?
Luke Bloy
luke.bloy at gmail.com
Sat Dec 11 00:49:49 CST 2010
On 12/10/2010 06:30 PM, Jed Brown wrote:
> On Sat, Dec 11, 2010 at 00:03, Luke Bloy <luke.bloy at gmail.com
> <mailto:luke.bloy at gmail.com>> wrote:
>
> I'm solving a diffusion problem. Essentially I have 2,000,000
> possible states for my system to be in. The system evolves based
> on a Markov matrix M, which describes the probability that the system
> moves from one state to another. This matrix is extremely sparse,
> with fewer than 100,000,000 nonzero elements. The problem is to pump
> mass/energy into the system at certain states. What I'm interested
> in is the steady-state behavior of the system.
>
> Basically, the dynamics can be summarized as
>
> d_{t+1} = M d_{t} + d_i
>
> Where d_t is the state vector at time t and d_i shows the states I
> am pumping energy into. I want to find d_t as t goes to infinity.
>
> My current approach is to solve the following system.
>
> (I-M) d = d_i
>
>
> So you want to do this for some 500,000 d_i? What problem are you
> really trying to solve? Is it really to just brute-force compute
> states for all these inputs? What are you doing with the resulting
> 500k states (all 8 terabytes of it)? Are you, for example, looking
> for some d_i that would change the steady state d in a certain way?
>
Yes, I'd like to do this for roughly 500,000 d_i. I'm solving a diffusion
problem, but I only have local measures of the diffusion process, which is
what I use to determine the matrix M. The 500,000 d_i are the
boundaries of my diffusion problem; what I need to know is how much of
what gets pumped in through a given boundary state exits the system
through the other boundaries. Do I really need to do all 500,000?
Probably not. This is the highest-resolution mesh of the boundary I can
compute from my data. A lower-res mesh would probably be sufficient, but
I won't know until I do it. Either way, I'd like to use the highest-res
mesh that I can. If you can suggest an alternative approach, I'm all ears.
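For what it's worth, the factor-once, solve-many pattern being discussed can be sketched as follows. This is not the poster's actual code, and it uses scipy's sparse LU (`splu`) on a small random stand-in matrix rather than PETSc on the real 2,000,000-state Markov matrix; the sizes, density, and scaling are made-up toy values chosen only so (I - M) is well-conditioned:

```python
# Sketch: factor (I - M) once, then reuse the factorization for many
# right-hand sides d_i. Toy-sized random M stands in for the real matrix.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 1000                                   # toy size; the real problem has n ~ 2e6
rng = np.random.default_rng(0)
M = sp.random(n, n, density=0.001, random_state=rng, format="csr") * 0.5
A = (sp.identity(n, format="csr") - M).tocsc()

lu = spla.splu(A)                          # sparse LU factorization, done once

# Solve for many boundary vectors d_i; each solve reuses the factors,
# so the per-solve cost is only the triangular back-substitutions.
for _ in range(5):
    d_i = rng.random(n)
    d = lu.solve(d_i)                      # steady state for this input
```

With PETSc the analogous approach would be a KSP configured for a direct solve (or any KSP where the preconditioner is set up once and reused across calls to KSPSolve), so the expensive factorization is amortized over all 500,000 right-hand sides.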
Luke
>> 2. If you can afford the memory, a direct solve probably makes sense.
>
> My understanding is the inverses would generally be dense. I
> certainly don't have any memory to hold a 2 million by 2 million
> dense matrix, I have about 40G to play with. So perhaps a
> decomposition might work? Which might you suggest?
>
>
> While inverses are almost always dense, sparse factorization is far
> from dense. For PDE problems factored in an optimal ordering, the
> memory asymptotics are n*log n in 2D and n^{4/3} in 3D. The time
> asymptotics are n^{3/2} and n^2 respectively. Compare to n^2 memory,
> n^3 time for dense.
>
> Jed
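To make the asymptotics above concrete, here is a back-of-envelope comparison (my arithmetic, not from the thread) of the memory estimates for n = 2,000,000; the numbers are illustrative entry counts from the quoted formulas, not exact fill predictions for any particular matrix:

```python
# Back-of-envelope fill estimates for sparse direct factorization at
# n = 2e6, using the asymptotics quoted above (2D: n log n, 3D: n^(4/3))
# versus dense n^2 storage. Entry counts only, not bytes.
import math

n = 2_000_000
mem_2d = n * math.log(n)       # ~ O(n log n) factor nonzeros in 2D
mem_3d = n ** (4 / 3)          # ~ O(n^(4/3)) factor nonzeros in 3D
mem_dense = n ** 2             # dense storage for comparison

print(f"2D sparse factor : ~{mem_2d:.2e} entries")
print(f"3D sparse factor : ~{mem_3d:.2e} entries")
print(f"dense            : ~{mem_dense:.2e} entries")
```

Even the 3D estimate (a few times 10^8 entries) fits comfortably in 40 GB, while the dense 4x10^12 entries clearly do not, which is the point of the reply above.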