<div dir="ltr"><div class="gmail_extra"><div class="gmail_quote">On Tue, May 20, 2014 at 12:57 AM, Andrew Cramer <span dir="ltr"><<a href="mailto:andrewdalecramer@gmail.com" target="_blank">andrewdalecramer@gmail.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div class="gmail_extra"><div class="gmail_quote">On 20 May 2014 12:27, Matthew Knepley <span dir="ltr"><<a href="mailto:knepley@gmail.com" target="_blank">knepley@gmail.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">
<div dir="ltr"><div class="gmail_extra"><div class="gmail_quote"><div><div>On Mon, May 19, 2014 at 8:50 PM, Andrew Cramer <span dir="ltr"><<a href="mailto:andrewdalecramer@gmail.com" target="_blank">andrewdalecramer@gmail.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><div dir="ltr">Hi All,<div><br></div><div>I'm new to PETSc and would like to use it as my linear elasticity solver within a structural optimisation program. Originally I was using GP-GPUs and CUDA for my solver but I would like to shift to using PETSc to leverage it's breadth of trustworthy solvers. We have some SMP servers and a couple compute clusters (one with GPUs, one without). I've been digging through the docs and I'd like some feedback on my plan and perhaps some pointers if at all possible.</div>
<div><br></div><div>The plan is to keep the 6000 lines or so of current code and try as much as possible to use PETSc as a 'drop-in'. This would require giving one field (array) of densities and receiving a 3d field (array) of displacements back. Providing the density field would be easy with the usual array construction functions on one node/process but pulling the displacements back to the 'controlling' node would be difficult.<br>
<br>I understand that this goes against the ethos of PETSc which is distributed all the way. My code is highly modular with differing objective functions and optimisers (some of which are written by other research groups) that I drop in and pull out. I don't want to throw all that away. I would need to relearn object oriented programming within PETSc (currently I use c++) and rewrite my entire code base. In terms of performance the optimisers typically rely heavily on tight loops of reductions once the solve is completed so I'm not sure that the speed-up would be too great rewriting them as distributed anyway.</div>
<div><br>Sorry for the long winded post but I'm just not sure how to move forward, I'm sick of implementing every solver I want to try in CUDA especially knowing that people have done it better than I can in PETSc. But it's a framework that I don't know how to interface with, all the examples seem to have the solve as the main thing rather than one part of a broader program.</div>
>>
>> 1) PETSc can do a good job on linear elasticity. GAMG is particularly effective, and we have an example:
>>
>>    http://www.mcs.anl.gov/petsc/petsc-dev/src/ksp/ksp/examples/tutorials/ex56.c.html
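
(For concreteness, a minimal GAMG setup in C might look like the sketch below. It is not taken from ex56: A, b, u, and the nodal-coordinate Vec coords are assumed to already exist, and KSPSetOperators is the two-argument form of PETSc 3.5 and later.)

    #include <petscksp.h>

    /* Give GAMG the rigid-body modes of the mesh; this markedly
       improves multigrid convergence for elasticity. */
    MatNullSpace rbm;
    MatNullSpaceCreateRigidBody(coords, &rbm); /* coords: nodal coordinates */
    MatSetNearNullSpace(A, rbm);               /* A: assembled stiffness matrix */
    MatNullSpaceDestroy(&rbm);

    KSP ksp;
    KSPCreate(PETSC_COMM_WORLD, &ksp);
    KSPSetOperators(ksp, A, A);
    KSPSetFromOptions(ksp); /* picks up e.g. -ksp_type cg -pc_type gamg at runtime */
    KSPSolve(ksp, b, u);    /* b: load vector, u: displacement solution */
    KSPDestroy(&ksp);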
>>
>> 2) You can use this function to gather a Vec onto one process and scatter it back:
>>
>>    http://www.mcs.anl.gov/petsc/petsc-dev/docs/manualpages/Vec/VecScatterCreateToZero.html
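
(A sketch of that gather/scatter pattern, with hypothetical names: u is the distributed solution Vec, and rank comes from MPI_Comm_rank on PETSC_COMM_WORLD.)

    #include <petscvec.h>

    VecScatter ctx;
    Vec        useq; /* full-length on rank 0, zero-length elsewhere */
    VecScatterCreateToZero(u, &ctx, &useq);
    VecScatterBegin(ctx, u, useq, INSERT_VALUES, SCATTER_FORWARD);
    VecScatterEnd(ctx, u, useq, INSERT_VALUES, SCATTER_FORWARD);

    if (!rank) {
      const PetscScalar *a;
      VecGetArrayRead(useq, &a);
      /* ... hand "a" to the existing serial optimiser code ... */
      VecRestoreArrayRead(useq, &a);
    }

    /* The same scatter run in reverse pushes rank-0 data (e.g. an
       updated density field) back out to the distributed Vec. */
    VecScatterBegin(ctx, useq, u, INSERT_VALUES, SCATTER_REVERSE);
    VecScatterEnd(ctx, useq, u, INSERT_VALUES, SCATTER_REVERSE);

    VecScatterDestroy(&ctx);
    VecDestroy(&useq);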
>>
>> 3) The expense of pushing all that data to one node can be large. You might be better off just running GAMG on 1 process, which is how I would start.
>>
>>    Matt
</div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">
<div dir="ltr"><span><font color="#888888"><div>Andrew Cramer</div><div>University of Queensland, Australia</div><div>PhD Candidate<br></div></font></span></div>
>
> Thanks for your help. I was eyeing off ksp/ex29, since it uses DMDA, which I thought would simplify things. I'll take a look at ex56 instead and see what I can do.

If you have a completely structured grid, DMDA is definitely simpler, although it is a little awkward for cell-centered discretizations. We have some very new support for arbitrary discretizations on top of DMDA, but it is alpha quality.
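
(For reference, a sketch of a DMDA set up for a 3D displacement field: nx, ny, nz are placeholder grid sizes, and the DM_BOUNDARY_* enum names follow recent PETSc releases.)

    #include <petscdmda.h>

    DM  da;
    Vec u;
    DMDACreate3d(PETSC_COMM_WORLD,
                 DM_BOUNDARY_NONE, DM_BOUNDARY_NONE, DM_BOUNDARY_NONE,
                 DMDA_STENCIL_BOX,
                 nx, ny, nz,                               /* global grid points */
                 PETSC_DECIDE, PETSC_DECIDE, PETSC_DECIDE, /* processes per axis */
                 3,  /* dof per node: ux, uy, uz */
                 1,  /* stencil width */
                 NULL, NULL, NULL, &da);
    DMSetUp(da);                  /* required in recent PETSc versions */
    DMCreateGlobalVector(da, &u); /* parallel displacement field */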

  Thanks,

     Matt

> Andrew
</blockquote></div><br><br clear="all"><div><br></div>-- <br>What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.<br>
-- Norbert Wiener
</div></div>