[petsc-users] On unknown ordering

Appel, Thibaut t.appel17 at imperial.ac.uk
Mon Nov 19 15:26:37 CST 2018


Hi Barry,

> On Nov 15, 2018, at 18:16, Smith, Barry F. <bsmith at mcs.anl.gov> wrote:
> 
> 
> 
>> On Nov 15, 2018, at 4:48 AM, Appel, Thibaut via petsc-users <petsc-users at mcs.anl.gov> wrote:
>> 
>> Good morning,
>> 
>> I would like to ask about the importance of the initial ordering of the unknowns when feeding a matrix to PETSc. 
>> 
>> I have a regular grid discretized with high-order finite differences, and I simply divide the rows of the matrix with PetscSplitOwnership, using vertex-major natural ordering for the parallelism (not using DMDA).
> 
>    So each process is getting a slice of the domain? To minimize communication it is best to use "square-ish" subdomains instead of slices; this is why the DMDA tries to use "square-ish" subdomains. I don't know the relationship between the convergence rate and the shapes of the subdomains; it will depend on the operator and possibly the "flow direction", etc. 

Yes, absolutely, that’s what it is. The MPI ownership follows the rows of the matrix, ordered with respect to something like row = idof + i*ndof + j*nx*ndof.
I’m implementing a DMDA interface and will see the effect; a rough sketch of both approaches is below.
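
For reference, here is a minimal sketch (in C, with made-up sizes nx, ny, ndof and a placeholder stencil width) of what I do now with PetscSplitOwnership next to what the DMDA version would look like; it only illustrates the two decompositions, not my actual code:

#include <petscdmda.h>

int main(int argc, char **argv)
{
  Mat            A;
  DM             da;
  PetscInt       nx = 64, ny = 64, ndof = 4;   /* hypothetical grid size and dofs per point */
  PetscInt       Nglobal, nlocal = PETSC_DECIDE, rstart, rend;
  PetscErrorCode ierr;

  ierr = PetscInitialize(&argc, &argv, NULL, NULL); if (ierr) return ierr;

  /* Current approach: vertex-major natural ordering, row = idof + i*ndof + j*nx*ndof,
     with the rows split into contiguous slices across processes */
  Nglobal = nx * ny * ndof;
  ierr = PetscSplitOwnership(PETSC_COMM_WORLD, &nlocal, &Nglobal);CHKERRQ(ierr);
  ierr = MatCreate(PETSC_COMM_WORLD, &A);CHKERRQ(ierr);
  ierr = MatSetSizes(A, nlocal, nlocal, Nglobal, Nglobal);CHKERRQ(ierr);
  ierr = MatSetFromOptions(A);CHKERRQ(ierr);
  ierr = MatSetUp(A);CHKERRQ(ierr);
  ierr = MatGetOwnershipRange(A, &rstart, &rend);CHKERRQ(ierr); /* this process owns rows [rstart, rend) */
  ierr = MatDestroy(&A);CHKERRQ(ierr);

  /* DMDA alternative: PETSc chooses "square-ish" subdomains and handles ordering/ownership */
  ierr = DMDACreate2d(PETSC_COMM_WORLD, DM_BOUNDARY_NONE, DM_BOUNDARY_NONE, DMDA_STENCIL_BOX,
                      nx, ny, PETSC_DECIDE, PETSC_DECIDE, ndof,
                      2 /* placeholder stencil width for the high-order FD stencil */,
                      NULL, NULL, &da);CHKERRQ(ierr);
  ierr = DMSetUp(da);CHKERRQ(ierr);
  ierr = DMCreateMatrix(da, &A);CHKERRQ(ierr);
  ierr = MatDestroy(&A);CHKERRQ(ierr);
  ierr = DMDestroy(&da);CHKERRQ(ierr);
  ierr = PetscFinalize();
  return ierr;
}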



>> 
>> My understanding is that when using LU with MUMPS this does not matter, because either serial or parallel analysis is performed and all the rows are reordered ‘optimally’ before the LU factorization. The quality of the reordering might suffer with parallel analysis, though.
>> 
>> But if I use the default block Jacobi with ILU, with one block per process, the initial ordering seems to have an influence, because some tightly coupled degrees of freedom might lie on different processes and the ILU becomes less powerful. You can change the ordering on each block, but this won’t necessarily make things better.
>> 
>> Are my observations accurate? Is there a recommended ordering type for a block Jacobi approach in my case? Could I expect natural improvements in fill-in or better GMRES robustness by opting for the parallel decomposition offered by DMDA?
> 
>     You might consider using -pc_type asm (additive Schwarz method) instead of block Jacobi. This "reintroduces" some of the tight coupling that is discarded when slicing up the domain for block Jacobi.
> 
>   Barry
> 
>> 
>> Thank you,
>> 
>> Thibaut
> 

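Thanks for the suggestion; I will give ASM a try. As a first attempt I would expect something like the following run-time options (the overlap, fill level and RCM block ordering below are just my guesses, not recommendations I found anywhere):

  -ksp_type gmres
  -pc_type asm
  -pc_asm_overlap 1
  -sub_ksp_type preonly
  -sub_pc_type ilu
  -sub_pc_factor_levels 1
  -sub_pc_factor_mat_ordering_type rcm
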
Thibaut

