[petsc-dev] API changes in MatIS
Jed Brown
jedbrown at mcs.anl.gov
Tue Jun 5 06:52:36 CDT 2012
On Tue, Jun 5, 2012 at 4:40 AM, Stefano Zampini
<stefano.zampini at gmail.com> wrote:
> If I understood correctly, the "second approach" is similar to Dohrmann's
> except for a further reordering, which should reduce the computational cost
> of the factorizations? Such an approach sounds over-complicated to me. I'm
> more inclined to solve the local saddle point problems directly (either
> with a change of basis or not). Are you planning to add interfaces to
> LDL^T solvers to PETSc?
>
CHOLMOD and PaStiX do LDL^T; maybe others do as well.
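
For concreteness, a minimal sketch (mine, not from this thread) of driving an
LDL^T-capable direct solver through PETSc's existing factorization interface.
It assumes a PETSc build configured with SuiteSparse (for CHOLMOD), uses
current error-checking macros, and the function name is hypothetical:

    #include <petscksp.h>

    /* Solve a symmetric (possibly indefinite) local problem by a direct
       Cholesky/LDL^T factorization delegated to CHOLMOD. */
    PetscErrorCode SolveLocalSaddlePoint(Mat A, Vec b, Vec x)
    {
      KSP ksp;
      PC  pc;

      PetscFunctionBeginUser;
      PetscCall(KSPCreate(PETSC_COMM_SELF, &ksp));
      PetscCall(KSPSetOperators(ksp, A, A));
      PetscCall(KSPSetType(ksp, KSPPREONLY));   /* factor and solve only */
      PetscCall(KSPGetPC(ksp, &pc));
      PetscCall(PCSetType(pc, PCCHOLESKY));     /* Cholesky/LDL^T path */
      PetscCall(PCFactorSetMatSolverType(pc, MATSOLVERCHOLMOD));
      PetscCall(KSPSolve(ksp, b, x));
      PetscCall(KSPDestroy(&ksp));
      PetscFunctionReturn(0);
    }

The same choice can be made at runtime with -pc_type cholesky
-pc_factor_mat_solver_type cholmod.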
>>> You said you would have the new matrix class to support either more
>>> subdomains per core, or more cores per subdomain. In the latter case,
>>> threaded or MPI matrices (on subcomms)?
>>
>> I would plan to make it support any combination.
>
> It would be great. I'm wondering how you will accomplish the
> communications. I think there should be an intermediate step, before
> multiplying by each subdomain matrix owned by the process (either
> multiprocess or sequential), where you need to scatter values from the
> global vector to a "local" vector containing all the dofs (in the case of
> more subdomains), using information generated by analyzing what the user
> passed in when requesting matrix creation. Then a number of additional
> scatters are needed to realize each single "subdomain" multiplication. How
> do you do this with multiprocess subdomains? Does PETSc support scatters
> between two different comms (with size > 1)? I think that mixing many
> subdomains per process with multiprocess subdomains will be nasty to get
> working, but definitely an interesting challenge.
>
VecScatter should live on the "larger" comm when it needs to scatter to a
vector on a subcomm.
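
To illustrate (a sketch of mine, not code from the thread): the scatter is
built from the global vector and so lives on the larger communicator, while
the destination can be a sequential vector on PETSC_COMM_SELF. The helper
name and index inputs are hypothetical:

    #include <petscvec.h>

    /* Gather one subdomain's dofs (given in global numbering) from a
       global vector into a freshly created sequential work vector. */
    PetscErrorCode GatherSubdomainDofs(Vec xglobal, PetscInt nsub,
                                       const PetscInt gidx[], Vec *xsub)
    {
      IS         is_from, is_to;
      VecScatter scatter;

      PetscFunctionBeginUser;
      PetscCall(VecCreateSeq(PETSC_COMM_SELF, nsub, xsub));
      PetscCall(ISCreateGeneral(PETSC_COMM_SELF, nsub, gidx, PETSC_COPY_VALUES, &is_from));
      PetscCall(ISCreateStride(PETSC_COMM_SELF, nsub, 0, 1, &is_to));
      /* The scatter is created from xglobal, so it lives on the larger comm */
      PetscCall(VecScatterCreate(xglobal, is_from, *xsub, is_to, &scatter));
      PetscCall(VecScatterBegin(scatter, xglobal, *xsub, INSERT_VALUES, SCATTER_FORWARD));
      PetscCall(VecScatterEnd(scatter, xglobal, *xsub, INSERT_VALUES, SCATTER_FORWARD));
      PetscCall(VecScatterDestroy(&scatter));
      PetscCall(ISDestroy(&is_from));
      PetscCall(ISDestroy(&is_to));
      PetscFunctionReturn(0);
    }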
> Do you mean assembling the matrix on preselected vertices during
> MatAssemblyBegin/End? Note that this would imply that standard
> Neumann-Neumann methods will not work (they need the unassembled matrix to
> solve for the local Schur complements).
I'm not too concerned about that since I consider the classic N-N and
original FETI methods to be rather special-purpose compared to the newer
generation. I would like to limit the number of copies of a matrix to
control peak memory usage.
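
For context, the unassembled local matrix mentioned above is exactly what
MATIS stores; a minimal sketch (mine; the helper name, sizes, and the
assumption of identical row/column mappings are hypothetical) of how a user
reaches it:

    #include <petscmat.h>

    /* Build a square MatIS. Entries set in local numbering are kept per
       subdomain rather than summed across subdomains, so the local Neumann
       matrix remains available via MatISGetLocalMat. */
    PetscErrorCode CreateSubdomainMatIS(MPI_Comm comm, PetscInt m, PetscInt M,
                                        ISLocalToGlobalMapping l2g, Mat *A)
    {
      Mat Aloc;

      PetscFunctionBeginUser;
      PetscCall(MatCreate(comm, A));
      PetscCall(MatSetSizes(*A, m, m, M, M));
      PetscCall(MatSetType(*A, MATIS));
      PetscCall(MatSetLocalToGlobalMapping(*A, l2g, l2g));
      /* ... MatSetValuesLocal(*A, ...) for each element/subdomain block ... */
      PetscCall(MatAssemblyBegin(*A, MAT_FINAL_ASSEMBLY));
      PetscCall(MatAssemblyEnd(*A, MAT_FINAL_ASSEMBLY));
      PetscCall(MatISGetLocalMat(*A, &Aloc));   /* the unassembled local matrix */
      PetscCall(MatISRestoreLocalMat(*A, &Aloc));
      PetscFunctionReturn(0);
    }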