[petsc-users] Accessing Global Vectors
Matthew Knepley
knepley at gmail.com
Tue May 20 05:22:35 CDT 2014
On Tue, May 20, 2014 at 12:57 AM, Andrew Cramer
<andrewdalecramer at gmail.com> wrote:
> On 20 May 2014 12:27, Matthew Knepley <knepley at gmail.com> wrote:
>
>> On Mon, May 19, 2014 at 8:50 PM, Andrew Cramer <
>> andrewdalecramer at gmail.com> wrote:
>>
>>> Hi All,
>>>
>>> I'm new to PETSc and would like to use it as my linear elasticity solver
>>> within a structural optimisation program. Originally I was using GP-GPUs
>>> and CUDA for my solver, but I would like to shift to PETSc to leverage
>>> its breadth of trustworthy solvers. We have some SMP servers and a couple
>>> of compute clusters (one with GPUs, one without). I've been digging through
>>> the docs, and I'd like some feedback on my plan and perhaps some pointers if
>>> at all possible.
>>>
>>> The plan is to keep the 6000 or so lines of current code and use PETSc as
>>> a 'drop-in' as much as possible. This would require handing over one
>>> field (array) of densities and receiving a 3D field (array) of
>>> displacements back. Providing the density field would be easy with the
>>> usual array-construction functions on one node/process, but pulling the
>>> displacements back to the 'controlling' node would be difficult.
>>>
>>> I understand that this goes against the ethos of PETSc, which is
>>> distributed all the way. My code is highly modular, with differing objective
>>> functions and optimisers (some of which are written by other research
>>> groups) that I drop in and pull out. I don't want to throw all that away: I
>>> would need to relearn object-oriented programming within PETSc (currently I
>>> use C++) and rewrite my entire code base. In terms of performance, the
>>> optimisers typically rely heavily on tight loops of reductions once the
>>> solve is completed, so I'm not sure the speed-up from rewriting them as
>>> distributed code would be that great anyway.
>>>
>>> Sorry for the long-winded post, but I'm just not sure how to move
>>> forward. I'm sick of implementing every solver I want to try in CUDA,
>>> especially knowing that people have done it better than I can in PETSc. But
>>> PETSc is a framework I don't know how to interface with; all the examples
>>> seem to have the solve as the main thing rather than one part of a broader
>>> program.
>>>
>>
>> 1) PETSc can do a good job on linear elasticity. GAMG is particularly
>> effective, and we have an example:
>>
>>
>> http://www.mcs.anl.gov/petsc/petsc-dev/src/ksp/ksp/examples/tutorials/ex56.c.html
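
For reference, a rough, untested sketch of the GAMG setup in code (A, b, u, and
the nodal-coordinate vector coords are placeholder names; error checking is
omitted, and older releases pass an extra MatStructure argument to
KSPSetOperators):

  KSP          ksp;
  PC           pc;
  MatNullSpace nearnull;

  /* rigid-body modes built from nodal coordinates help GAMG a lot on elasticity */
  MatNullSpaceCreateRigidBody(coords, &nearnull);
  MatSetNearNullSpace(A, nearnull);
  MatNullSpaceDestroy(&nearnull);

  KSPCreate(PETSC_COMM_WORLD, &ksp);
  KSPSetOperators(ksp, A, A);
  KSPSetType(ksp, KSPCG);
  KSPGetPC(ksp, &pc);
  PCSetType(pc, PCGAMG);
  KSPSetFromOptions(ksp);   /* so -ksp_monitor, -pc_gamg_* options still apply */
  KSPSolve(ksp, b, u);
  KSPDestroy(&ksp);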
>>
>> 2) You can use this function to gather a vector onto 1 process and push it back out again:
>>
>>
>> http://www.mcs.anl.gov/petsc/petsc-dev/docs/manualpages/Vec/VecScatterCreateToZero.html
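
For reference, a rough, untested sketch of that pattern (u is the distributed
solution vector, u_seq the sequential copy that lands on rank 0; error checking
is omitted):

  Vec        u_seq;
  VecScatter ctx;

  VecScatterCreateToZero(u, &ctx, &u_seq);
  VecScatterBegin(ctx, u, u_seq, INSERT_VALUES, SCATTER_FORWARD);
  VecScatterEnd(ctx, u, u_seq, INSERT_VALUES, SCATTER_FORWARD);
  /* on rank 0, u_seq now holds the whole vector; read it with VecGetArray() */

  /* the same context pushes data the other way (rank 0 -> distributed)
     when used with SCATTER_REVERSE */
  VecScatterBegin(ctx, u_seq, u, INSERT_VALUES, SCATTER_REVERSE);
  VecScatterEnd(ctx, u_seq, u, INSERT_VALUES, SCATTER_REVERSE);

  VecScatterDestroy(&ctx);
  VecDestroy(&u_seq);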
>>
>> 3) The expense of pushing all that data to nodes can be large. You might be
>> better off just using GAMG on 1 process, which is how I would start.
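
For example, a single-process run driven purely by options might look like this
(the executable name is a placeholder):

  mpiexec -n 1 ./my_elasticity -ksp_type cg -pc_type gamg -ksp_monitor -ksp_converged_reason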
>>
>> Matt
>>
>>
>>> Andrew Cramer
>>> University of Queensland, Australia
>>> PhD Candidate
>>>
>>
>>
>>
>> --
>> What most experimenters take for granted before they begin their
>> experiments is infinitely more interesting than any results to which their
>> experiments lead.
>> -- Norbert Wiener
>>
>
>
> Thanks for your help, I was eyeing off ksp/ex29 as it uses DMDA which I
> thought would simplify things. I'll take a look at ex56 instead and see
> what I can do.
>
If you have a completely structured grid, DMDA is definitely simpler,
although it is a little awkward for cell-centered
discretizations. We have some really new support for arbitrary
discretizations on top of DMDA, but it is alpha.
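If you do go the DMDA route, a rough, untested sketch for a 3D structured grid
carrying three displacement components per node looks like the following (nx,
ny, nz are placeholder grid sizes; pre-3.5 releases spell the boundary types
DMDA_BOUNDARY_NONE, and error checking is omitted):

  DM  da;
  Vec u;

  DMDACreate3d(PETSC_COMM_WORLD,
               DM_BOUNDARY_NONE, DM_BOUNDARY_NONE, DM_BOUNDARY_NONE,
               DMDA_STENCIL_BOX,
               nx, ny, nz,                          /* global grid size */
               PETSC_DECIDE, PETSC_DECIDE, PETSC_DECIDE,
               3,                                   /* dof: ux, uy, uz per node */
               1,                                   /* stencil width */
               NULL, NULL, NULL, &da);
  DMSetFromOptions(da);
  DMSetUp(da);
  DMCreateGlobalVector(da, &u);                     /* distributed displacement vector */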
Thanks,
Matt
> Andrew
>
>
--
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener