[petsc-dev] [pflotran-dev: 976] Re: [pflotran-users: 998] Distorted structured grid

Jed Brown jedbrown at mcs.anl.gov
Fri Nov 1 21:57:03 CDT 2013


Paolo Orsini <paolo.orsini at gmail.com> writes:

> Hi Jed,
>
> I believe you are right.
> Multiplying the connectivity matrix by its transpose returns an ncell x
> ncell matrix with non-zero entries only for those cells sharing at least one
> node. Hence the dual graph we need....
>
> Following your suggestions, I changed the code as follows:
>
> !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
>   call MatCreateMPIAdj(option%mycomm,num_cells_local_old, &
>                        unstructured_grid%num_vertices_global, &
>                        local_vertex_offset, &
>                        local_vertices,PETSC_NULL_INTEGER,Adj_mat,ierr)
>
>   call MatConvert(Adj_mat,MATAIJ,MAT_INITIAL_MATRIX,AIJ_mat,ierr)
>
>   call MatMatTransposeMult(AIJ_mat,AIJ_mat,MAT_INITIAL_MATRIX, &
>                            PETSC_DEFAULT_DOUBLE_PRECISION,Dual_mat,ierr)
> !!  call MatMeshToCellGraph(Adj_mat,num_common_vertices,Dual_mat,ierr)
>
>   call MatDestroy(Adj_mat,ierr)
>   call MatDestroy(AIJ_mat,ierr)
> !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
>
> When I try to run it, the MatConvert takes ages (I suppose I can form
> the AIJ_mat directly),

I see MatConvert_Basic does not even try to preallocate, which is
certainly why it takes a long time.  That's easy to fix in PETSc (at
least for AIJ).

PETSc devs, why doesn't it make one pass counting and another pass
inserting?
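For concreteness, here is a minimal C sketch of that count-then-insert idea
applied at the user level: it assembles a CSR-style cell-to-vertex
connectivity directly as a preallocated AIJ matrix, which also sidesteps the
slow MatConvert path.  The function name ConnectivityToAIJ and the ia/ja
arguments are hypothetical stand-ins for PFLOTRAN's local_vertex_offset and
local_vertices arrays; this is only an illustration of the approach, not the
actual MatConvert_Basic fix.

  #include <petscmat.h>

  /* Sketch: assemble a cell-to-vertex connectivity matrix as AIJ from
   * CSR-style arrays (ia = per-cell offsets, ja = global vertex indices),
   * using one pass to count nonzeros per row for preallocation and a
   * second pass to insert the entries. */
  PetscErrorCode ConnectivityToAIJ(MPI_Comm comm, PetscInt ncells_local,
                                   PetscInt nvertices_global,
                                   const PetscInt *ia, const PetscInt *ja,
                                   Mat *E)
  {
    PetscErrorCode ierr;
    PetscInt       i, j, rstart, *nnz;

    ierr = PetscMalloc1(ncells_local, &nnz);CHKERRQ(ierr);
    for (i = 0; i < ncells_local; i++) nnz[i] = ia[i+1] - ia[i];  /* pass 1: count */

    ierr = MatCreate(comm, E);CHKERRQ(ierr);
    ierr = MatSetSizes(*E, ncells_local, PETSC_DECIDE, PETSC_DETERMINE,
                       nvertices_global);CHKERRQ(ierr);
    ierr = MatSetType(*E, MATAIJ);CHKERRQ(ierr);
    /* Exact preallocation sequentially; a safe per-row upper bound for the
     * diagonal and off-diagonal blocks in parallel. */
    ierr = MatSeqAIJSetPreallocation(*E, 0, nnz);CHKERRQ(ierr);
    ierr = MatMPIAIJSetPreallocation(*E, 0, nnz, 0, nnz);CHKERRQ(ierr);

    ierr = MatGetOwnershipRange(*E, &rstart, NULL);CHKERRQ(ierr);
    for (i = 0; i < ncells_local; i++) {                          /* pass 2: insert */
      for (j = ia[i]; j < ia[i+1]; j++) {
        ierr = MatSetValue(*E, rstart + i, ja[j], 1.0, INSERT_VALUES);CHKERRQ(ierr);
      }
    }
    ierr = MatAssemblyBegin(*E, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
    ierr = MatAssemblyEnd(*E, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
    ierr = PetscFree(nnz);CHKERRQ(ierr);
    return 0;
  }

In the PFLOTRAN case the per-row counts come straight from
local_vertex_offset, so the counting pass is essentially free.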

> the PTScotch partitioning seems quick, but then I get an error later
> on in the code, when attempting to compute the internal
> connections... Something is wrong...

Always send the entire error message.

> I need to double check... do you see anything wrong so far?
>
> P
>
> On Fri, Nov 1, 2013 at 10:18 PM, Jed Brown <jedbrown at mcs.anl.gov> wrote:
>
>> Please keep the discussion on at least one list so it can be archived
>> and so others can comment.
>>
>> Paolo Orsini <paolo.orsini at gmail.com> writes:
>>
>> > I really don't know how the DualGraph is going to affect the parallel
>> > computation efficiency...
>> > I am trying to use this partitioning as a black box; I'm not sure what
>> > the right thing to do is...
>> >
>> > In the code calling the partitioning (PFLOTRAN), I can see that the MPIAdj
>> > matrix is built, then the dual graph is computed, and finally the
>> > partitioning takes place; see below:
>> >
>> >
>> > !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
>> >   call MatCreateMPIAdj(option%mycomm,num_cells_local_old, &
>> >                        unstructured_grid%num_vertices_global, &
>> >                        local_vertex_offset, &
>> >                        local_vertices,PETSC_NULL_INTEGER,Adj_mat,ierr)
>> >
>> > call MatMeshToCellGraph(Adj_mat,num_common_vertices,Dual_mat,ierr)
>>
>> You can MatConvert Adj_mat to AIJ or assemble as AIJ to begin with, then
>> use MatMatTransposeMult to create Dual_mat (which will be in AIJ
>> format).
>>
>> > .........
>> >
>> >   call MatPartitioningCreate(option%mycomm,Part,ierr)
>> >   call MatPartitioningSetAdjacency(Part,Dual_mat,ierr)
>> >   call MatPartitioningSetFromOptions(Part,ierr)
>> >   call MatPartitioningApply(Part,is_new,ierr)
>> >   call MatPartitioningDestroy(Part,ierr)
>> >
>> >   allocate(cell_counts(option%mycommsize))
>> >   call ISPartitioningCount(is_new,option%mycommsize,cell_counts,ierr)
>> >   num_cells_local_new = cell_counts(option%myrank+1)
>> >   call MPI_Allreduce(num_cells_local_new,iflag,ONE_INTEGER_MPI, &
>> >                      MPIU_INTEGER,MPI_MIN,option%mycomm,ierr)
>> >
>> > !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
>> >
>> > You are telling me we can build a dual graph simply by multiplying Adj_mat
>> > by its transpose?
>>
>> Yeah, doesn't it connect all cells that share at least one vertex?  That
>> is exactly the nonzero pattern of the matrix product.
>>
>> > Why use a simple AIJ when we are pre-processing for a parallel computation?
>>
>> MPIAIJ implements MatMatTransposeMult, but we don't have an
>> implementation for MPIAdj (without calling ParMETIS).
>>
>> > Sorry, maybe what I want to do is less black-box than I thought...
>> >
>> > And thanks again for your patience and help
>> >
>> > Paolo
>> >
>> > On Fri, Nov 1, 2013 at 9:17 PM, Jed Brown <jedbrown at mcs.anl.gov> wrote:
>> >
>> >> Paolo Orsini <paolo.orsini at gmail.com> writes:
>> >>
>> >> > Hi Jed,
>> >> >
>> >> > Thanks for your prompt reply.
>> >> >
>> >> > Is it possible to do the partitioning with PTScotch, without
>> >> > using MatMeshToCellGraph?
>> >> >
>> >> > Is there any other function (not using ParMETIS) to compute the
>> >> > adjacency graph needed by MatPartitioningSetAdjacency, for parallel
>> >> > unstructured grids?
>> >>
>> >> You can either compute the dual graph yourself or we can implement that
>> >> function without calling ParMETIS.  Actually, I believe the operation is
>> >> just symbolic
>> >>
>> >>   DualGraph = E * E^T
>> >>
>> >> where E is the element connectivity matrix.  So you could build a normal
>> >> (AIJ instead of MPIAdj) matrix with the connectivity, then call
>> >> MatMatTransposeMult().
>> >>
>> >> Would that work for you?
>> >>
>>
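To make the E * E^T argument above concrete, here is a small self-contained C
sketch for a made-up mesh of two triangles sharing an edge (cell 0 uses
vertices {0,1,2}, cell 1 uses {1,2,3}); it is meant to be run on a single
process.  The product Dual = E * E^T comes out as [3 2; 2 3]: the off-diagonal
2 counts the shared vertices, so the off-diagonal nonzero pattern is exactly
the cell-adjacency (dual) graph that MatPartitioningSetAdjacency needs, as in
the PFLOTRAN code quoted above.

  #include <petscmat.h>

  int main(int argc, char **argv)
  {
    Mat               E, Dual;
    PetscErrorCode    ierr;
    /* Two triangles sharing edge (1,2): cell 0 = {0,1,2}, cell 1 = {1,2,3}. */
    const PetscInt    cell0[] = {0, 1, 2}, cell1[] = {1, 2, 3};
    const PetscScalar ones[]  = {1.0, 1.0, 1.0};
    PetscInt          row;

    ierr = PetscInitialize(&argc, &argv, NULL, NULL);if (ierr) return ierr;

    /* E is the 2 x 4 cell-to-vertex connectivity matrix, one row per cell. */
    ierr = MatCreateAIJ(PETSC_COMM_WORLD, PETSC_DECIDE, PETSC_DECIDE, 2, 4,
                        3, NULL, 3, NULL, &E);CHKERRQ(ierr);
    row = 0; ierr = MatSetValues(E, 1, &row, 3, cell0, ones, INSERT_VALUES);CHKERRQ(ierr);
    row = 1; ierr = MatSetValues(E, 1, &row, 3, cell1, ones, INSERT_VALUES);CHKERRQ(ierr);
    ierr = MatAssemblyBegin(E, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
    ierr = MatAssemblyEnd(E, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);

    /* Dual = E * E^T: entry (i,j) is the number of vertices cells i and j
       share, so the off-diagonal nonzero pattern is the dual graph. */
    ierr = MatMatTransposeMult(E, E, MAT_INITIAL_MATRIX, PETSC_DEFAULT, &Dual);CHKERRQ(ierr);
    ierr = MatView(Dual, PETSC_VIEWER_STDOUT_WORLD);CHKERRQ(ierr);

    /* Dual could now be handed to MatPartitioningSetAdjacency(), exactly as
       in the PFLOTRAN partitioning calls quoted above. */
    ierr = MatDestroy(&E);CHKERRQ(ierr);
    ierr = MatDestroy(&Dual);CHKERRQ(ierr);
    ierr = PetscFinalize();
    return ierr;
  }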

