[petsc-dev] Jack has produced a scalable sparse direct solver
Hong Zhang
hzhang at mcs.anl.gov
Mon Jul 23 12:29:17 CDT 2012
Jack:
FYI, the PETSc examples for testing the Elemental interface are
petsc-dev/src/mat/examples/tests/ex38.c, ex39.c and ex145.c
petsc-dev/src/ksp/ksp/examples/tests/ex40.c
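
For quick orientation, the pattern those examples exercise is roughly the
following (a minimal sketch only, with error checking omitted; the matrix
type is selected at runtime, e.g. -mat_type elemental, and the examples
above remain the authoritative usage):

  #include <petscksp.h>

  int main(int argc, char **argv)
  {
    Mat      A;
    Vec      x, b;
    KSP      ksp;
    PetscInt i, n = 10, rstart, rend;

    PetscInitialize(&argc, &argv, NULL, NULL);
    MatCreate(PETSC_COMM_WORLD, &A);
    MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, n, n);
    MatSetFromOptions(A);            /* -mat_type elemental picks the new class */
    MatSetUp(A);
    MatGetOwnershipRange(A, &rstart, &rend);
    for (i = rstart; i < rend; i++)  /* trivial diagonal test matrix */
      MatSetValue(A, i, i, 2.0, INSERT_VALUES);
    MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);
    MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);

    MatCreateVecs(A, &x, &b);
    VecSet(b, 1.0);
    KSPCreate(PETSC_COMM_WORLD, &ksp);
    KSPSetOperators(ksp, A, A);
    KSPSetFromOptions(ksp);
    KSPSolve(ksp, b, x);

    KSPDestroy(&ksp);
    VecDestroy(&x);
    VecDestroy(&b);
    MatDestroy(&A);
    PetscFinalize();
    return 0;
  }
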
Hong
>
> On Mon, Jul 23, 2012 at 11:53 AM, Hong Zhang <hzhang at mcs.anl.gov> wrote:
>
>> Jack:
>>
>>>
>>> I'm getting started on creating documentation right now; it should end
>>> up looking very similar to Elemental's.
>>>
>>
>> Detailed instructions would be very helpful, especially for
>> the sparse matrix data structures. Please notify us when the
>> documentation is out.
>>
>
> Will do. I just pushed out documentation for the build system:
> http://poulson.github.com/Clique/build.html
>
> The rest should follow soon.
>
>
>> Is this a sparse Cholesky direct solver?
>>
>
> More or less. It currently supports LDL^T and LDL^H factorizations without
> pivoting, and the latter certainly works for Hermitian positive-definite
> matrices. The current implementation is slightly more general than Cholesky
> and should achieve roughly the same performance.
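>
> To make the relationship concrete: for a Hermitian positive-definite
> matrix the two factorizations differ only by a diagonal rescaling,
>
>   A = L D L^H = (L D^{1/2}) (L D^{1/2})^H,
>
> so unpivoted LDL^H recovers Cholesky whenever D > 0, while also making
> sense for indefinite and complex symmetric (A = L D L^T) matrices.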
>
>
>> Currently, only MUMPS provides a parallel sparse Cholesky direct solver.
>> Yours will be a good addition to the PETSc external solvers.
>>
>
> Thanks! I'll do my best to help with the integration when the time comes.
>
>
>>
>>> Please keep in mind that there is not _yet_ any support for pivoting, as
>>> I needed Clique's functionality for indefinite complex symmetric matrices
>>> which are nice enough to be factored accurately without pivoting.
>>
>>
>> PETSc users will likely use it as a preconditioner.
>> In case of a zero pivot, could you provide a routine/option for adding
>> a small shift?
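>>
>> For comparison, PETSc's own factorization-based preconditioners already
>> expose such a shift, so a Clique interface could mirror those knobs.
>> A sketch (these are the existing PETSc calls, not anything in Clique):
>>
>>   #include <petscksp.h>
>>
>>   /* Enable a small shift in a Cholesky preconditioner; equivalent to
>>      -pc_factor_shift_type nonzero -pc_factor_shift_amount 1e-10. */
>>   static PetscErrorCode EnableSmallShift(KSP ksp)
>>   {
>>     PC pc;
>>     KSPGetPC(ksp, &pc);
>>     PCSetType(pc, PCCHOLESKY);
>>     PCFactorSetShiftType(pc, MAT_SHIFT_NONZERO);
>>     PCFactorSetShiftAmount(pc, 1.e-10);
>>     return 0;
>>   }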
>>
>
> Yes, that is what I was thinking. For instance, for Helmholtz equations,
> one can get away with very small positive imaginary shifts (which
> physically correspond to slightly damping the waves).
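>
> Schematically, the shifted problem replaces
>
>   -Delta u - w^2 u = f    with    -Delta u - (1 + i*eps) w^2 u = f
>
> for a small eps > 0; the imaginary part moves the spectrum away from
> zero, which keeps the unpivoted factorization well behaved at the cost
> of mildly damping the waves.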
>
>
>>
>>> Clique's nested dissection is built on top of a nodal graph bisection
>>> routine that uses a custom interface to ParMETIS; I suspect that the
>>> graph partitioning is currently the least scalable part of the black-box
>>> interface, since KLFM refinement is so hard to parallelize.
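>>>
>>> If it helps to see the recursion concretely, here is a toy serial
>>> version on a 1-D path graph (not Clique's code; the real routine works
>>> on general graphs and computes each bisection in parallel via ParMETIS):
>>>
>>>   #include <stdio.h>
>>>
>>>   /* Order a path graph on vertices lo..hi by nested dissection: the
>>>      middle vertex is the separator, the two halves are ordered first
>>>      by recursion, and the separator is labeled last. */
>>>   static int next_label = 0;
>>>
>>>   static void NestedDissection(int lo, int hi, int *perm)
>>>   {
>>>     if (lo > hi) return;
>>>     int mid = (lo + hi) / 2;           /* separator of the path */
>>>     NestedDissection(lo, mid - 1, perm);
>>>     NestedDissection(mid + 1, hi, perm);
>>>     perm[mid] = next_label++;          /* separator ordered last */
>>>   }
>>>
>>>   int main(void)
>>>   {
>>>     int i, perm[7];
>>>     NestedDissection(0, 6, perm);
>>>     for (i = 0; i < 7; i++)
>>>       printf("vertex %d -> position %d\n", i, perm[i]);
>>>     return 0;
>>>   }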
>>
>>
>> Xuan has almost finished the petsc-elemental interface
>> (see petsc-dev/src/mat/impls/elemental/).
>> We would appreciate it if you could take a look at it and give us
>> your comments/suggestions.
>>
>>
> I just took a quick look and will make time to go over it in detail later
> today. I'll send out a patch if I have any concrete suggestions.
>
> Jack
>
>
>>>
>>> On Mon, Jul 23, 2012 at 10:12 AM, Hong Zhang <hzhang at mcs.anl.gov> wrote:
>>>
>>>> Xuan will take a look at this.
>>>> Hong
>>>>
>>>> On Mon, Jul 23, 2012 at 9:47 AM, Matthew Knepley <knepley at gmail.com> wrote:
>>>>
>>>>> I think we just need a small converter from AIJ:
>>>>>
>>>>>
>>>>> https://bitbucket.org/poulson/clique/src/dc417c7e9403/tests/DistSparseMatrix.cpp
>>>>>
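>>>>> Roughly, the PETSc side of such a converter just walks the locally
>>>>> owned rows; the insertion call below is a hypothetical placeholder for
>>>>> whatever DistSparseMatrix actually exposes (see the linked test):
>>>>>
>>>>>   #include <petscmat.h>
>>>>>
>>>>>   static PetscErrorCode ConvertAIJ(Mat A)
>>>>>   {
>>>>>     PetscInt           i, j, rstart, rend, ncols;
>>>>>     const PetscInt    *cols;
>>>>>     const PetscScalar *vals;
>>>>>
>>>>>     MatGetOwnershipRange(A, &rstart, &rend);
>>>>>     for (i = rstart; i < rend; i++) {
>>>>>       MatGetRow(A, i, &ncols, &cols, &vals);
>>>>>       for (j = 0; j < ncols; j++) {
>>>>>         /* CliqueUpdate(i, cols[j], vals[j]);  <- hypothetical */
>>>>>       }
>>>>>       MatRestoreRow(A, i, &ncols, &cols, &vals);
>>>>>     }
>>>>>     return 0;
>>>>>   }
>>>>>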
>>>>> Matt
>>>>>
>>>>> --
>>>>> What most experimenters take for granted before they begin their
>>>>> experiments is infinitely more interesting than any results to which their
>>>>> experiments lead.
>>>>> -- Norbert Wiener
>>>>>
>>>>
>>>>
>>>
>>
>