<div dir="ltr"><div class="gmail_extra"><div class="gmail_quote">On Thu, Jan 1, 2015 at 2:15 AM, TAY wee-beng <span dir="ltr"><<a href="mailto:zonexo@gmail.com" target="_blank">zonexo@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div bgcolor="#FFFFFF" text="#000000">
<div><br>
On 1/1/2015 2:06 PM, Matthew Knepley wrote:<br>
</div>
<blockquote type="cite">
<div dir="ltr">
<div class="gmail_extra">
<div class="gmail_quote">On Wed, Dec 31, 2014 at 11:20 PM, TAY
wee-beng <span dir="ltr"><<a href="mailto:zonexo@gmail.com" target="_blank">zonexo@gmail.com</a>></span>
wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Hi,<br>
<br>
I used to run my CFD code on 96 procs with a grid size
of 231 x 461 x 368.<br>
<br>
I used MPI and partitioned my grid in the z direction. Hence,
with 96 procs (8 nodes, 12 procs each), each proc has a
size of 231 x 461 x 3 or 231 x 461 x 4.<br>
<br>
It worked fine.<br>
<br>
Now I have modified the code and added some routines which
increase the fixed memory requirement per proc. The grid
size is still the same, but the code now aborts while
solving the Poisson eqn, saying:<br>
<br>
Out of memory trying to allocate XXX bytes<br>
<br>
I'm using PETSc with HYPRE BoomerAMG to solve the linear
Poisson eqn. I am guessing that the amount of memory
available per proc is now lower because the routines I
added use some memory themselves, leaving less memory for
solving the Poisson eqn.<br>
</blockquote>
<div><br>
</div>
<div>I would try GAMG instead of Hypre. It tends to use
less memory than Hypre.</div>
<div><br>
</div>
<div> -pc_type gamg</div>
<div><br>
</div>
<div> Thanks,</div>
<div><br>
</div>
<div> Matt</div>
</div>
</div>
</div>
</blockquote>
Hi Matt,<br>
<br>
To use gamg, must I use DMDA to partition the grid?<br></div></blockquote><div><br></div><div>No.</div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div bgcolor="#FFFFFF" text="#000000">
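</div></blockquote><div><br></div><div>GAMG is algebraic, so it only needs the assembled matrix; nothing about your partitioning has to change. A minimal sketch in C (untested; the function name and A, b, x are placeholders for your assembled Poisson system, and KSPSetOperators is the 3-argument PETSc 3.5 form, older versions take an extra MatStructure argument):</div>
<pre>
#include <petscksp.h>

/* Sketch: solve the assembled Poisson system with GAMG instead of Hypre. */
PetscErrorCode solve_poisson_gamg(Mat A, Vec b, Vec x)
{
  KSP            ksp;
  PC             pc;
  PetscErrorCode ierr;

  ierr = KSPCreate(PETSC_COMM_WORLD, &ksp);CHKERRQ(ierr);
  ierr = KSPSetOperators(ksp, A, A);CHKERRQ(ierr);
  ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
  ierr = PCSetType(pc, PCGAMG);CHKERRQ(ierr);   /* same effect as -pc_type gamg */
  ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);  /* command-line options can still override */
  ierr = KSPSolve(ksp, b, x);CHKERRQ(ierr);
  ierr = KSPDestroy(&ksp);CHKERRQ(ierr);
  return 0;
}
</pre>
<div>If the KSP already exists in your code, calling KSPSetFromOptions() on it and running with -pc_type gamg does the same thing without touching the source.</div>
<div> </div>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div bgcolor="#FFFFFF" text="#000000">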
Also, does MPI partitioning in only the z direction affect parallel
performance, since each MPI partition is then almost a 2D plane
with a thickness of 3-4 cells?<br></div></blockquote><div><br></div><div>It could affect load balance.</div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div bgcolor="#FFFFFF" text="#000000">
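</div></blockquote><div><br></div><div>As a rough check with your numbers: 368 planes over 96 ranks gives 368 = 3*96 + 80, so 80 ranks own 4 planes and 16 own only 3. On top of that, each interior rank has to trade ghost planes of 231 x 461 points with both neighbours while owning only 3-4 planes itself, so the thin slabs pay a high surface-to-volume price in communication.</div>
<div> </div>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div bgcolor="#FFFFFF" text="#000000">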
Lastly, using 10 x 10 = 100 procs seems to work for now, although
20 procs are wasted since each node has 12 procs.<br></div></blockquote><div><br></div><div>PETSc attempts to make square domains, but you can prescribe your own domains if that helps.</div><div><br></div><div> Thanks,</div><div><br></div><div> Matt</div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div bgcolor="#FFFFFF" text="#000000">
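</div></blockquote><div><br></div><div>P.S. If you do manage the grid through a DMDA (not required for GAMG, as above), you can prescribe the process grid yourself rather than letting PETSc pick it. A rough sketch with your global grid size and a made-up 4 x 5 x 5 layout for 100 ranks (PETSc 3.5 names; the three process counts must multiply to the communicator size):</div>
<pre>
#include <petscdmda.h>

/* Sketch: 3-D DMDA for the 231 x 461 x 368 grid with an explicitly
   prescribed process grid (hypothetical 4 x 5 x 5 layout, 100 ranks). */
PetscErrorCode create_da(MPI_Comm comm, DM *da)
{
  PetscErrorCode ierr;
  ierr = DMDACreate3d(comm,
                      DM_BOUNDARY_NONE, DM_BOUNDARY_NONE, DM_BOUNDARY_NONE,
                      DMDA_STENCIL_STAR,
                      231, 461, 368,      /* global grid size                 */
                      4, 5, 5,            /* procs in x, y, z                 */
                      1,                  /* dof per grid point               */
                      1,                  /* stencil width (adjust to scheme) */
                      NULL, NULL, NULL,   /* default ownership ranges         */
                      da);CHKERRQ(ierr);
  return 0;
}
</pre>
<div>If I remember correctly, -da_processors_x, -da_processors_y, and -da_processors_z set the same thing from the command line.</div>
<div> </div>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div bgcolor="#FFFFFF" text="#000000">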
Thanks!<br>
<blockquote type="cite">
<div dir="ltr">
<div class="gmail_extra">
<div class="gmail_quote">
<div> </div>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
I'm now changing to KSPBCGS, but it seems to take forever.
When I abort it, the error message is:<br>
<br>
Out of memory. This could be due to allocating<br>
[10]PETSC ERROR: too large an object or bleeding by not
properly<br>
[10]PETSC ERROR: destroying unneeded objects.<br>
[10]PETSC ERROR: Memory allocated 0 Memory used by process
<a href="tel:4028370944" value="+14028370944" target="_blank">4028370944</a><br>
[10]PETSC ERROR: Try running with -malloc_dump or
-malloc_log for info.<br>
<br>
I can't use more procs because some procs would then have a size
of 231 x 461 x 2 (or even 1). This will give an error since I
need to reference nearby values along the z direction.<br>
<br>
So what options do I have? I'm thinking of these at the
moment:<br>
<br>
1. Remove as much fixed overhead memory per proc as
possible so that there's enough memory for each proc.<br>
<br>
2. Re-partition my grid in the x,y directions or the x,y,z
directions so that I will not end up with extremely skewed
grid dimensions per proc. Btw, does having extremely skewed
grid dimensions affect the performance of solving the linear
eqn?<br>
<br>
Are there other feasible options?<span class="HOEnZb"><font color="#888888"><span><font color="#888888"><br>
<br>
-- <br>
Thank you.<br>
<br>
Yours sincerely,<br>
<br>
TAY wee-beng<br>
<br>
</font></span></font></span></blockquote><span class="HOEnZb"><font color="#888888">
</font></span></div><span class="HOEnZb"><font color="#888888">
<br>
<br clear="all">
<div><br>
</div>
-- <br>
<div>What most experimenters take for
granted before they begin their experiments is infinitely
more interesting than any results to which their experiments
lead.<br>
-- Norbert Wiener</div>
</font></span></div>
</div>
</blockquote>
<br>
</div>
</blockquote></div><br><br clear="all"><div><br></div>-- <br><div class="gmail_signature">What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.<br>-- Norbert Wiener</div>
</div></div>