On 1/1/2015 2:06 PM, Matthew Knepley wrote:
> On Wed, Dec 31, 2014 at 11:20 PM, TAY wee-beng <zonexo@gmail.com> wrote:
>
>> Hi,
>>
>> I used to run my CFD code on 96 procs, with a grid size of 231 x 461 x 368.
>>
>> I used MPI and partitioned my grid in the z direction only. Hence, with 96 procs (8 nodes with 12 procs each), each proc had a subdomain of size 231 x 461 x 3 or 231 x 461 x 4.
>>
>> It worked fine.
>>
>> I have now modified the code and added some routines which increase the fixed memory requirement per proc. The grid size is still the same, but the code aborts while solving the Poisson eqn, saying:
>>
>> Out of memory trying to allocate XXX bytes
>>
>> I'm using PETSc with HYPRE BoomerAMG to solve the linear Poisson eqn. I am guessing that the amount of memory left per proc is now smaller, because the added routines use some memory themselves, leaving less memory available for solving the Poisson eqn.
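
(For reference: a minimal sketch of how such a Poisson solve is typically wired up in PETSc so that the solver and preconditioner are chosen at run time, e.g. with -pc_type hypre -pc_hypre_type boomeramg. This is an illustration, not the poster's actual code; it assumes the matrix A and vectors b, x have already been assembled elsewhere.)

    #include <petscksp.h>

    /* Sketch: solve A x = b with a KSP whose solver/preconditioner are
       picked up from the command line via KSPSetFromOptions(), e.g.
       -pc_type hypre -pc_hypre_type boomeramg. */
    PetscErrorCode solve_poisson(Mat A, Vec b, Vec x)
    {
      KSP            ksp;
      PetscErrorCode ierr;

      ierr = KSPCreate(PETSC_COMM_WORLD, &ksp);CHKERRQ(ierr);
      ierr = KSPSetOperators(ksp, A, A);CHKERRQ(ierr);
      ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);  /* reads -pc_type etc. */
      ierr = KSPSolve(ksp, b, x);CHKERRQ(ierr);
      ierr = KSPDestroy(&ksp);CHKERRQ(ierr);
      return 0;
    }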
>
> I would try GAMG instead of Hypre. It tends to be lighter on memory.
>
>   -pc_type gamg
>
>   Thanks,
>
>      Matt
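
(For reference: with the option-driven setup sketched earlier, switching to GAMG is just the -pc_type gamg flag. Selecting it programmatically instead would look roughly like the fragment below; ksp is assumed to be the KSP from the earlier sketch.)

    /* Sketch: hard-code GAMG instead of relying on -pc_type gamg. */
    PC             pc;
    PetscErrorCode ierr;
    ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
    ierr = PCSetType(pc, PCGAMG);CHKERRQ(ierr);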
Hi Matt,

To use GAMG, must I use DMDA to partition the grid?

Also, does MPI partitioning in only the z direction hurt parallel performance? Each partition is then almost a 2D plane, only 3-4 cells thick.

Lastly, using 10 x 10 = 100 procs seems to work for now, although 20 procs are wasted, since each node has 12 procs.

Thanks!
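
(On the DMDA question above: for reference, a DMDA-managed decomposition of this grid that lets PETSc choose the process layout in all three directions, rather than slicing only in z, would look roughly like the following. A sketch only; the grid dimensions are from the thread, the remaining choices are illustrative.)

    #include <petscdmda.h>

    /* Sketch: 3D DMDA over the 231 x 461 x 368 grid. PETSC_DECIDE lets
       PETSc pick the process grid in x, y and z; stencil width 1 gives
       the one-cell halo needed to reference neighbouring values. */
    DM             da;
    PetscErrorCode ierr;
    ierr = DMDACreate3d(PETSC_COMM_WORLD,
                        DM_BOUNDARY_NONE, DM_BOUNDARY_NONE, DM_BOUNDARY_NONE,
                        DMDA_STENCIL_STAR,
                        231, 461, 368,                            /* global grid   */
                        PETSC_DECIDE, PETSC_DECIDE, PETSC_DECIDE, /* procs per dir */
                        1,                                        /* dof per node  */
                        1,                                        /* stencil width */
                        NULL, NULL, NULL, &da);CHKERRQ(ierr);
    ierr = DMSetUp(da);CHKERRQ(ierr);  /* required in newer PETSc releases */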
>> I'm now changing to KSPBCGS, but it seems to take forever. When I abort it, the error msg is:
>>
>> Out of memory. This could be due to allocating
>> [10]PETSC ERROR: too large an object or bleeding by not properly
>> [10]PETSC ERROR: destroying unneeded objects.
>> [10]PETSC ERROR: Memory allocated 0 Memory used by process 4028370944
>> [10]PETSC ERROR: Try running with -malloc_dump or -malloc_log for info.
>>
>> I can't use more procs, because some procs would then have a size of 231 x 461 x 2 (or even 1). This gives an error, since I need to reference the neighbouring values along the z direction.
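
(For reference, the diagnostics named in that error message are ordinary run-time options; the executable name below is just a placeholder:

    mpiexec -n 96 ./cfd_solver -malloc_dump -malloc_log

-malloc_dump lists memory still allocated at PetscFinalize(), which helps spot objects that were never destroyed; -malloc_log records PETSc memory usage for reporting.)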
<br>
So what options do I have? I'm thinking of these at the
moment:<br>
<br>
1. Remove as much fixed overhead memory per procs as
possible so that there's enough memory for each procs.<br>
<br>
2. Re-partition my grid in both x,y direction or x,y,z
direction so I will not encounter extremely skew grid
dimensions per procs. Btw, does having extremely skew grid
dimensions affect the performance in solving the linear
eqn?<br>
<br>
>> Are there other feasible options?
>>
>> --
>> Thank you.
>>
>> Yours sincerely,
>>
>> TAY wee-beng
>
> --
> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
>    -- Norbert Wiener