<div dir="ltr"><br><div class="gmail_extra"><br><div class="gmail_quote">On 20 August 2015 at 11:01, TAY wee-beng <span dir="ltr"><<a href="mailto:zonexo@gmail.com" target="_blank">zonexo@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div bgcolor="#FFFFFF" text="#000000"><span class="">
<br>
<div>On 20/8/2015 3:29 PM, Dave May wrote:<br>
</div>
<blockquote type="cite">
<div dir="ltr"><br>
<div class="gmail_extra"><br>
<div class="gmail_quote">On 20 August 2015 at 05:28, TAY
wee-beng <span dir="ltr"><<a href="mailto:zonexo@gmail.com" target="_blank">zonexo@gmail.com</a>></span>
wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Hi,<br>
<br>
I run my code on 1, 2 and 3 procs. KSP is used to solve
the Poisson eqn.<br>
<br>
Using MatView and VecView, I found that my LHS matrix and
RHS vec are the same for 1,2 and 3 procs.<br>
<br>
However, my pressure (ans) output is the almost the same
(due to truncation err) for 1,2 procs.<br>
<br>
But for 3 procs, the output is the same as for the 1,2
procs for all values except:<br>
<br>
1. the last few values for procs 0<br>
<br>
2. the first and last few values for procs 1 and 2.<br>
<br>
Shouldn't the output be the same when the LHS matrix and
RHS vec are the same? How can I debug to find the err?<span><font color="#888888"><br>
<br>
</font></span></blockquote>
>>
>> It's a bit hard to say much without knowing exactly what solver
>> configuration you actually ran and without seeing the difference in the
>> solution you are referring to.
>>
>> Some preconditioners behave differently in serial and parallel, so the
>> convergence of the solver and the residual history (and thus the answer)
>> can look slightly different. This difference becomes smaller as you solve
>> the system more accurately. Do you solve the system accurately, e.g. with
>> something like -ksp_rtol 1.0e-10?
>>
>> To avoid the problem mentioned above, try using -pc_type jacobi. This PC
>> is the same in serial and parallel. Thus, if your A and b are identical
>> on 1, 2 and 3 procs, then the residuals and solution will also be
>> identical on 1, 2 and 3 procs (up to machine precision).
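
(For reference, a minimal sketch of setting those two options
programmatically instead of on the command line; the object names ksp, pc
and ierr below are assumptions, not taken from the code in this thread:)

    ! Assumed names: ksp (KSP), pc (PC), ierr (PetscErrorCode); equivalent
    ! to the command-line options -ksp_rtol 1.0e-10 -pc_type jacobi
    call KSPSetTolerances(ksp,1.0d-10,PETSC_DEFAULT_REAL, &
         PETSC_DEFAULT_REAL,PETSC_DEFAULT_INTEGER,ierr)
    call KSPGetPC(ksp,pc,ierr)
    call PCSetType(pc,PCJACOBI,ierr)

Passing the options on the command line and calling KSPSetFromOptions()
has the same effect.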

> Hi Dave,
>
> I tried using jacobi and it's the same result. I found out that the error
> is actually due to a mismatch in local sizes between DMDACreate3d and
> MatGetOwnershipRange.
>
> Using
>
>     call DMDACreate3d(MPI_COMM_WORLD,DM_BOUNDARY_NONE,DM_BOUNDARY_NONE, &
>          DM_BOUNDARY_NONE,DMDA_STENCIL_STAR,size_x,size_y, &
>          size_z,1,PETSC_DECIDE,PETSC_DECIDE,1,stencil_width, &
>          lx,PETSC_NULL_INTEGER,PETSC_NULL_INTEGER,da_w,ierr)
>
>     call DMDAGetCorners(da_u,start_ijk(1),start_ijk(2),start_ijk(3), &
>          width_ijk(1),width_ijk(2),width_ijk(3),ierr)
>
> and
>
>     call MatCreateAIJ(MPI_COMM_WORLD,PETSC_DECIDE,PETSC_DECIDE, &
>          size_x*size_y*size_z,size_x*size_y*size_z, &
>          7,PETSC_NULL_INTEGER,7,PETSC_NULL_INTEGER,A_mat,ierr)
>
>     call MatGetOwnershipRange(A_mat,ijk_sta_p,ijk_end_p,ierr)
>
> Is this possible? Or is there an error somewhere? It happens when using
> 3 procs, but not with 1 or 2.

Sure, it is possible to get a mismatch in the local sizes if you create the
matrix this way: the matrix created with MatCreateAIJ() knows nothing about
the DMDA, and in particular it does not know how the DMDA has been spatially
decomposed.

If you want to ensure consistency between the DMDA and the matrix, you
should always use DMCreateMatrix() to create the matrix. Any subsequent
calls to MatGetOwnershipRange() will then be consistent with the DMDA
parallel layout.
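
A minimal sketch, reusing the names da_w, A_mat, ijk_sta_p and ijk_end_p
from your snippets above, and assuming PETSc 3.5 or later (where the
Fortran DMCreateMatrix() takes only the DM and the Mat):

    ! Let the DMDA create (and preallocate) the matrix so that its row
    ! distribution matches the DMDA decomposition; the explicit
    ! MatCreateAIJ() call is then no longer needed
    call DMCreateMatrix(da_w,A_mat,ierr)
    call MatGetOwnershipRange(A_mat,ijk_sta_p,ijk_end_p,ierr)
    ! ijk_sta_p/ijk_end_p now agree with the range implied by DMDAGetCorners()

The preallocation arguments you pass to MatCreateAIJ() are also not needed
here, since DMCreateMatrix() preallocates based on the DMDA stencil.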

> For my size_x,size_y,size_z = 4,8,10, it was partitioned along the z
> direction with 1->4, 5->7, 8->10 using 3 procs with DMDACreate3d, which
> should give ownership (with Fortran index + 1) of:
>
>     myid,ijk_sta_p,ijk_end_p    1    129    192
>     myid,ijk_sta_p,ijk_end_p    0      1    128
>     myid,ijk_sta_p,ijk_end_p    2    193    320
>
> But with MatGetOwnershipRange, I got
>
>     myid,ijk_sta_p,ijk_end_p    1    108    214
>     myid,ijk_sta_p,ijk_end_p    0      1    107
>     myid,ijk_sta_p,ijk_end_p    2    215    320
<blockquote type="cite">
<div dir="ltr">
<div class="gmail_extra">
<div class="gmail_quote">
<div>Thanks,<br>
</div>
<div> Dave<br>
</div>
<div><br>
<br>
</div>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><span><font color="#888888">
<br>
-- <br>
Thank you<br>
<br>
Yours sincerely,<br>
<br>
TAY wee-beng<br>
<br>
</font></span></blockquote>