On Aug 18, 2010, at 12:28 AM, "Li, Zhisong (lizs)" <lizs@mail.uc.edu> wrote:
> Hi, PETSc team,
>
> For quite a while I have wondered whether two DA objects of equal
> dimensions (x, y, z) will generate the same domain decomposition in a
> parallel computation.

Yes, the logic for the default decomposition depends only on M, N, and P, not on the dof.
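For example, something along these lines (an untested sketch against the PETSc 3.1-era DA API; the global sizes 64x64x64 are made-up values) creates a velocity DA and a pressure DA that get identical per-process decompositions, because both pass the same M, N, P and let PETSc choose m, n, p:

  /* Untested sketch, PETSc 3.1-era DA API; M, N, P are made-up values.
     Both DAs share the same global sizes and let PETSc pick m, n, p,
     so their per-process decompositions are identical. */
  #include "petscda.h"

  int main(int argc, char **argv)
  {
    DA       da1, da2;               /* velocity (dof = 3) and pressure (dof = 1) */
    PetscInt M = 64, N = 64, P = 64; /* hypothetical global grid sizes */

    PetscInitialize(&argc, &argv, PETSC_NULL, PETSC_NULL);

    /* velocity DA: dof = 3 for (u, v, w), stencil width 1 */
    DACreate3d(PETSC_COMM_WORLD, DA_NONPERIODIC, DA_STENCIL_STAR,
               M, N, P, PETSC_DECIDE, PETSC_DECIDE, PETSC_DECIDE,
               3, 1, PETSC_NULL, PETSC_NULL, PETSC_NULL, &da1);

    /* pressure DA: dof = 1; same M, N, P, so each process owns the same
       (i, j, k) range as in da1 and p lines up with (u, v, w) point to point */
    DACreate3d(PETSC_COMM_WORLD, DA_NONPERIODIC, DA_STENCIL_STAR,
               M, N, P, PETSC_DECIDE, PETSC_DECIDE, PETSC_DECIDE,
               1, 1, PETSC_NULL, PETSC_NULL, PETSC_NULL, &da2);

    DADestroy(da1);
    DADestroy(da2);
    PetscFinalize();
    return 0;
  }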
> In my understanding of PETSc, we sometimes have to use different DA
> objects because the generated matrix and the dof are tied to a DA. For
> example, for a given structured domain we may use da1 to store the
> velocities (u, v, w, dof = 3) and da2 to store the pressure (p, dof = 1),
> since we want to compute them separately. Matrices from DAGetMatrix()
> must then have different sizes for da1 and da2 because of the different
> dof. On the other hand, we also want to project the pressure onto the
> velocities point to point. So will da1 and da2 distribute exactly the
> same portion of the overall domain to each compute node when we run in
> parallel? If not, how can we achieve that?
The final arguments to DACreate2d() and DACreate3d() (the lx, ly, lz arrays) let you dictate the exact decomposition, along with the m, n, and p arguments. But in your situation you need not worry about them.
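If you ever did need to force a new DA onto an existing DA's layout, one way (again an untested sketch, continuing the example above) is to read back the first DA's ownership ranges and pass them in as those final arguments:

  /* Untested continuation of the sketch above: reuse da1's layout
     explicitly by passing its ownership ranges as lx, ly, lz. */
  const PetscInt *lx, *ly, *lz;
  PetscInt        m, n, p;

  DAGetOwnershipRanges(da1, &lx, &ly, &lz);
  DAGetInfo(da1, PETSC_NULL, PETSC_NULL, PETSC_NULL, PETSC_NULL,
            &m, &n, &p, PETSC_NULL, PETSC_NULL, PETSC_NULL, PETSC_NULL);

  DACreate3d(PETSC_COMM_WORLD, DA_NONPERIODIC, DA_STENCIL_STAR,
             M, N, P, m, n, p,
             1, 1, lx, ly, lz, &da2);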
Barry

> Thank you very much!
>
> Regards,
>
> Zhisong Li