[petsc-dev] API changes in MatIS

Stefano Zampini stefano.zampini at gmail.com
Tue May 8 12:05:06 CDT 2012


Regarding the first question:
>> The function works by first reordering the local nodes into INTERNAL and
>> INTERFACE nodes (if the second argument LocalIS is not PETSC_NULL), so
>> that memcpy can be used in the local scatters of PCIS (e.g.
>> pcis->N_to_B). Then it reorders the global numbering (if GlobalIS is not
>> PETSC_NULL) to exploit data locality in the global vector and reduce the
>> amount of communication when scattering from global to local and
>> vice versa; in practice, a numbering of the global nodes is constructed in
>> such a way that the greatest number of subdomain INTERNAL nodes belong to
>> the local part of the global vector. The remaining INTERNAL nodes (note
>> that if the parallel problem is well balanced this will not occur) and the
>> INTERFACE nodes are then placed in the remaining global slots, keeping them
>> sorted as they were in the previous global numbering. I think the latter
>> part can be further refined. When I add it to matis.c, maybe you can take
>> a look at the code and give me your suggestions.
>>
>
> Okay, this is different from what I thought you were describing. Moving
> the global dofs is a tricky thing because the vectors we want to solve
> against and interpret results for come from a higher level. Changing the
> global ordering really produces a different matrix (MatMult is different),
> so it's not an "optimization". If the linear operator stayed the same and you
> just changed the format, perhaps we could call it an optimization.
>
> Are you aware of PCRedistribute? It could be made to perform the
> transformation you are thinking about for MATIS, solving the problem in the
> "optimized" ordering, and mapping the solution back to the original
> ordering.
>
>
Not until now. I think that is where such an approach belongs, apart from the
fact that PCREDISTRIBUTE is not very visible from a standard user's
perspective (like mine). I think I can put my code in PCSetUp_REDISTRIBUTE.
If I understood PCREDISTRIBUTE correctly, from the higher level I should do
something like

MatCreateIS(A)
KSPCreate(true_ksp)
KSPSetOperators(true_ksp,A,A)
KSPSetType(true_ksp,KSPPREONLY)
KSPGetPC(true_ksp,red_pc)
PCSetType(red_pc,PCREDISTRIBUTE)
KSPSetUp(true_ksp)

KSPSolve(true_ksp,b,x)
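
To make the sketch a bit more concrete, here is a slightly fuller (untested)
fragment. The mesh-dependent pieces (the local-to-global mapping l2g, the
global size N, and the assembly of the subdomain matrices) are placeholders,
the signatures follow current PETSc (older versions take fewer arguments in
MatCreateIS and KSPSetOperators), and whether PCREDISTRIBUTE can actually
operate on a MATIS is exactly the open point of this discussion:

Mat                    A;
Vec                    x, b;
KSP                    true_ksp;
PC                     red_pc;
ISLocalToGlobalMapping l2g;  /* assumed: built from the subdomain partition */

/* unassembled operator, filled with the local (Neumann) subdomain matrices */
MatCreateIS(PETSC_COMM_WORLD,1,PETSC_DECIDE,PETSC_DECIDE,N,N,l2g,l2g,&A);
/* ... MatSetValuesLocal(A,...) per subdomain, then MatAssemblyBegin/End ... */
MatCreateVecs(A,&x,&b);

KSPCreate(PETSC_COMM_WORLD,&true_ksp);
KSPSetOperators(true_ksp,A,A);
KSPSetType(true_ksp,KSPPREONLY);   /* the outer KSP only applies the PC */
KSPGetPC(true_ksp,&red_pc);
PCSetType(red_pc,PCREDISTRIBUTE);  /* inner solve runs in the redistributed ordering */
KSPSetUp(true_ksp);
KSPSolve(true_ksp,b,x);            /* b and x stay in the original global ordering */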

I didn't understand the "diagonal portion" stuff.



> But why is this needed in the first place? Don't people who use MATIS also
> create a reasonable global ordering at setup time? It's bad for other parts
> of the algorithm to have a global numbering that doesn't line up well with
> the partition of elements.
>
>

Load balancing is assured if you come from a mesh partitioner, which does it
for you. Regarding communication costs, optimizing the global ordering of a
MATIS object is not so simple: in principle each process has more local dofs
than are stored in the part of the global vector it owns. With my code
(I've tested it on structured grids) I have always observed that more than 80%
of the global dofs are assigned to the process that owns them in the local
representation of MATIS (mostly INTERNAL nodes). As a comparison, the usual
natural ordering for structured grids gives about 1%.
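
For reference, a quick (untested) fragment that measures this figure on an
existing MATIS: it walks the matrix's local-to-global mapping and counts how
many of the locally referenced dofs fall inside the calling process's
ownership range of the global vector. Function names follow current PETSc;
the fragment is only illustrative, not part of the code discussed above.

Mat                    A;    /* assumed: an already assembled MATIS */
ISLocalToGlobalMapping l2g;
const PetscInt        *idx;
PetscInt               nl, rstart, rend, nowned = 0, i;

MatGetLocalToGlobalMapping(A,&l2g,NULL);
MatGetOwnershipRange(A,&rstart,&rend);
ISLocalToGlobalMappingGetSize(l2g,&nl);
ISLocalToGlobalMappingGetIndices(l2g,&idx);
for (i = 0; i < nl; i++)
  if (idx[i] >= rstart && idx[i] < rend) nowned++;   /* dof also owned in the global vector */
ISLocalToGlobalMappingRestoreIndices(l2g,&idx);
PetscPrintf(PETSC_COMM_SELF,"locally owned: %d of %d local dofs\n",(int)nowned,(int)nl);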


>
>> Regarding the other question:
>> With MATIS (and in principle every subassembled matrix) you are forced to
>> use the local matrices assembled by the user or an assembled group of them
>> (as you surely know since you are developing a PA). You cannot safely (in a
>> theoretical way) move subdomain boundaries since you don't know (at least
>> in principle) how to reproduce the matrix values on the local interface
>> since they are Neumann boundaries for the local PDE. The main question is:
>> how can I split the assembled values among the nodes which were previously
>> internal to the subdomain?
>>
>
> This can't be done, at least without more information or making
> assumptions about the discretization. That's why I asked.
>
>
>> (I think this is the main question when thinking about a MatConvert
>> implementation which converts a matrix obtained from DMDAGetMatrix to a
>> MATIS).
>>
>
> There is no reason why DMCreateMatrix() cannot return a MATIS. In any
> case, that's a better model than assembling in one format and trying to
> convert, especially since conversion really can't be done in this direction (a
> MATIS can be converted to MATAIJ, not the other way).
>
>

I haven't used DM so far. How can you set up a DM for a MATIS object?
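
Just to make the question concrete, I imagine (purely hypothetically, since
DMCreateMatrix does not return a MATIS today) it could look like the following
for a DMDA; DMSetMatType and DMCreateMatrix are the existing hooks, the MATIS
support is the part that would have to be added, and the names follow current
PETSc:

DM       da;
Mat      A;
PetscInt mx = 8, my = 8;  /* hypothetical grid sizes */

DMDACreate2d(PETSC_COMM_WORLD,DM_BOUNDARY_NONE,DM_BOUNDARY_NONE,
             DMDA_STENCIL_BOX,mx,my,PETSC_DECIDE,PETSC_DECIDE,
             1,1,NULL,NULL,&da);
DMSetUp(da);
DMSetMatType(da,MATIS);   /* hypothetical here: ask the DM for an unassembled matrix */
DMCreateMatrix(da,&A);    /* the returned MATIS would carry the DM's local-to-global mapping */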


>
>>
>>>
>>> For a different, but related algorithm, I'm considering writing a MatPA
>>> (PA = partially assembled) which would choose some points to assemble
>>> internally. The reason for storing in partially assembled form is that I
>>> will want to solve a number of pinned Neumann problems to determine the
>>> coarse space and I'd rather not have to take unnecessary submatrices. Do
>>> you think this is a bad idea and I should just use MatIS? (I also want
>>> support for multiple subdomains per process and multiple processes per
>>> subdomain, though that could be added to MatIS.)
>>>
>>
>> Cool. Are you implementing FETI-DP? I'm planning to implement it in the
>> near future for isogeometric analysis. As for your question, it depends
>> on how you need to solve these "pinned Neumann problems": do you need to
>> solve unrelated local problems defined by the local matrices, or a
>> parallel Neumann problem? In the first case, MatIS is your choice. In the
>> other case, I think you need to resort to a multigrid for a new MATIS
>> object defined by the subdomains of interest.
>> However, I will be glad to help you with PA if you want.
>
>
> I'm not that interested in FETI-DP since it's basically the same as BDDC
> and doesn't fix the exponential dependence on the number of levels. Also, I
> hate that it doesn't work well for irregular coefficients on subdomain
> boundaries. So I have a different algorithm in mind, more naturally viewed
> as a multigrid or multilevel Schwarz, but with aggressive coarsening
> enabled by the subdomains. One part of the method is compatible relaxation,
> for which pinned Neumann problems are used. The idea is to start with just
> enough pinning to make the subdomains non-singular, then enrich the coarse
> space by adaptively selecting vertices or edge/face moments that are not
> well controlled (as determined by compatible relaxation). Once the coarse
> injection is found, we construct the coarse basis functions by harmonic
> extension (again using pinned Neumann problems). Once the multigrid
> hierarchy is constructed, the smoother would be based on overlapping
> subdomains or a multi-stage smoother involving both pinned Neumann problems
> and surface strips (or perhaps just point-block Jacobi). The problem with
> only using the pinned Neumann problems in the smoother seems to be that we
> can't remove the exponential dependence without allowing the smoother to
> move the injection (compatible relaxation is good for telling us whether a
> coarse space is good, but not actually the right thing for multigrid
> convergence).
>

So, for the smoother something like the hybrid approach proposed here
http://www.cs.nyu.edu/cs/faculty/widlund/ijnme_submit_revision_ijnme.pdf ?

-- 
Stefano