mesh ordering and partition

Barry Smith bsmith at mcs.anl.gov
Fri May 16 12:42:56 CDT 2008


On May 16, 2008, at 11:47 AM, Gong Ding wrote:

>
> ----- Original Message ----- From: "Matthew Knepley" <knepley at gmail.com>
> To: <petsc-users at mcs.anl.gov>
> Sent: Saturday, May 17, 2008 12:19 AM
> Subject: Re: mesh ordering and partition
>
>
>>
>> I think, if you are using the serial PETSc ILU, you should just use a
>> MatOrdering,
>> which can be done from the command line:
>>
>> -pc_type ilu -pc_factor_mat_ordering_type rcm
>>
>> which I tested on KSP ex2.
>>
>> Matt
>
>
> I am developing a parallel code for 3D semiconductor device simulation.
> From my experience with the 2D code, the GMRES solver with ILU works well
> (the matrix is asymmetric).
> As a result, I'd like to use GMRES+ILU again for 3D, in parallel.
> Does -pc_type ilu -pc_factor_mat_ordering_type rcm still work?
> Since the parallel matrix requires contiguous indices in each subdomain,
> the matrix ordering seems troublesome.
> Maybe only a local ordering can be done... Am I right?
>

    PETSc does not have any parallel ILU, so when you run in parallel
you must be using either block Jacobi or the overlapping additive
Schwarz method (block Jacobi with overlap between the blocks), with
ILU on the subdomains. In this case you must use the -sub prefix on
the ilu and ordering options:

-sub_pc_type ilu -sub_pc_factor_mat_ordering_type rcm

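    For example (a sketch only; the executable name and number of
processes are placeholders), running the KSP ex2 example mentioned
above in parallel with block Jacobi, or with overlapping additive
Schwarz, might look like

   mpiexec -n 4 ./ex2 -ksp_type gmres -pc_type bjacobi -sub_pc_type ilu -sub_pc_factor_mat_ordering_type rcm
   mpiexec -n 4 ./ex2 -ksp_type gmres -pc_type asm -sub_pc_type ilu -sub_pc_factor_mat_ordering_type rcm
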
    The RCM ordering is done on the submatrix on each process; it is
not parallel.
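
    The same thing can be done programmatically. A minimal sketch,
assuming a KSP named ksp whose operators have already been set,
petscksp.h included, block Jacobi as the outer preconditioner, and
error checking omitted:

   PC       pc, subpc;
   KSP      *subksp;
   PetscInt i, nlocal, first;

   KSPGetPC(ksp, &pc);
   PCSetType(pc, PCBJACOBI);
   KSPSetUp(ksp);                              /* sub-KSPs are created during setup */
   PCBJacobiGetSubKSP(pc, &nlocal, &first, &subksp);
   for (i = 0; i < nlocal; i++) {              /* loop over this process's blocks */
     KSPGetPC(subksp[i], &subpc);
     PCSetType(subpc, PCILU);
     PCFactorSetMatOrderingType(subpc, "rcm"); /* RCM on the local block only */
   }

With the default of one block per process, nlocal is 1, so this applies
ILU with RCM to each process's own submatrix, just as the -sub options
above do.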

    It is also important to note that although RCM "may" slightly
improve the convergence rate of the ILU, using an ordering for the
factorization does require a permutation of the vectors on input to
and output from the MatSolve (which takes a little bit of time). You
really need to run both and see whether one is faster than the other
(use -log_summary as an option).
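
    For instance (again only a sketch, with placeholder executable and
process count), you could time the two variants on the same problem with

   mpiexec -n 4 ./ex2 -pc_type bjacobi -sub_pc_type ilu -log_summary
   mpiexec -n 4 ./ex2 -pc_type bjacobi -sub_pc_type ilu -sub_pc_factor_mat_ordering_type rcm -log_summary

and compare the KSPSolve times in the -log_summary output; adding
-ksp_monitor shows whether the RCM ordering changes the number of
iterations.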

   Barry




> Gong Ding



