[petsc-dev] [pflotran-dev: 1080] Re: [pflotran-users: 998] Distorted structured grid
Gautam Bisht
gbisht at lbl.gov
Sun Nov 3 11:48:32 CST 2013
On Sun, Nov 3, 2013 at 7:40 AM, Gautam Bisht <gbisht at lbl.gov> wrote:
> Hi Paolo,
>
>
> On Sun, Nov 3, 2013 at 3:53 AM, Paolo Orsini <paolo.orsini at gmail.com> wrote:
>
>> Hi Jed,
>>
>> I found out what the problem is.
>> First of all I printed all the matrices out, to be sure that MatConvert
>> was working fine. And that was ok: AIJ_mat had the same connectivity
>> table as Adj_mat.
>> Then I compared the dual graph (stored in DualMat) computed by the
>> ParMetis routine (MatMeshToCellGraph) with the one computed by
>> MatMatTransposeMult, and they were quite different.
>>
>> Computing the dual graph of the mesh is a bit more involved than
>> multiplying the adjacency matrix by its transpose, though not far off.
>> With that product alone, cells that share only one node also end up
>> connected in the dual graph. Instead, a minimum number of common nodes
>> greater than 1 should be required (2 in 2D problems, 3 in 3D problems).
>> In fact, this is an input of MatMeshToCellGraph; I should have
>> understood this before.
>>
>> This can be computed by taking the transpose of the adjacency matrix
>> (Adj_T), multiplying Adj by Adj_T row by row, and discarding the
>> non-zero entries coming from pairs of elements that share fewer nodes
>> than the imposed minimum. I have not implemented this yet; a rough,
>> untested sketch of what I have in mind is below, and any suggestion is
>> welcome.
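>>
>> (Rough, untested sketch of the filtering I have in mind. It reuses
>> AIJ_mat, num_common_vertices, num_cells_local_old and option%mycomm from
>> the PFLOTRAN snippet further down; Prod_mat, max_dual_nnz and the scratch
>> variables row, icol, ncols, cols, vals, one, rstart, rend are names I am
>> making up here, and I am assuming the AIJ-format Dual_mat is acceptable
>> to MatPartitioningSetAdjacency, as Jed indicated.)
>>
>> ! P = AIJ_mat * AIJ_mat^T, so P(i,j) = number of vertices shared by cells i and j
>> call MatMatTransposeMult(AIJ_mat,AIJ_mat,MAT_INITIAL_MATRIX, &
>>      PETSC_DEFAULT_DOUBLE_PRECISION,Prod_mat,ierr)
>>
>> ! keep only the off-diagonal entries with at least num_common_vertices
>> ! shared vertices; max_dual_nnz is an assumed bound on neighbours per cell
>> call MatCreateAIJ(option%mycomm,num_cells_local_old,num_cells_local_old, &
>>      PETSC_DETERMINE,PETSC_DETERMINE,max_dual_nnz,PETSC_NULL_INTEGER, &
>>      max_dual_nnz,PETSC_NULL_INTEGER,Dual_mat,ierr)
>> call MatGetOwnershipRange(Prod_mat,rstart,rend,ierr)
>> do row = rstart, rend-1
>>   ! cols/vals must be dimensioned to a safe maximum row length
>>   call MatGetRow(Prod_mat,row,ncols,cols,vals,ierr)
>>   do icol = 1, ncols
>>     if (cols(icol) /= row .and. nint(vals(icol)) >= num_common_vertices) then
>>       one = 1.d0
>>       call MatSetValues(Dual_mat,1,row,1,cols(icol),one,INSERT_VALUES,ierr)
>>     endif
>>   enddo
>>   call MatRestoreRow(Prod_mat,row,ncols,cols,vals,ierr)
>> enddo
>> call MatAssemblyBegin(Dual_mat,MAT_FINAL_ASSEMBLY,ierr)
>> call MatAssemblyEnd(Dual_mat,MAT_FINAL_ASSEMBLY,ierr)
>> call MatDestroy(Prod_mat,ierr)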
>>
>> I also found out that Scotch has a facility to compute a dual graph from
>> a mesh, but PTScotch does not.
>> Once the graph is computed, PTScotch can load the central dual graph,
>> and distribute it over several processors during the loading.
>> Am I right to say that PETSc is interfaced only with PTScotch and not
>> with Scotch?
>>
>> To check if the PTScotch partitioning works (within PFLOTRAN), I am
>> computing a DualMat with ParMetis and saving it into a file. Then I
>> recompile the code (using a PETSc compiled with PTScotch) and load the
>> DualMat from the file rather than forming a new one. I did a successful
>> test when running on one processor, but I am having trouble when trying
>> on more.
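>>
>> (For the file pass I am doing roughly the following; untested sketch. The
>> file name dual_graph.bin and the variables viewer/Dual_aij are just
>> examples, and I convert the MPIAdj dual graph to AIJ before writing
>> because I am not sure the binary viewer handles MPIAdj directly.)
>>
>> ! in the ParMetis-enabled build: convert and dump the dual graph
>> call MatConvert(Dual_mat,MATAIJ,MAT_INITIAL_MATRIX,Dual_aij,ierr)
>> call PetscViewerBinaryOpen(option%mycomm,'dual_graph.bin', &
>>      FILE_MODE_WRITE,viewer,ierr)
>> call MatView(Dual_aij,viewer,ierr)
>> call PetscViewerDestroy(viewer,ierr)
>>
>> ! in the PTScotch build: reload it instead of calling MatMeshToCellGraph
>> call MatCreate(option%mycomm,Dual_mat,ierr)
>> call MatSetType(Dual_mat,MATAIJ,ierr)
>> call PetscViewerBinaryOpen(option%mycomm,'dual_graph.bin', &
>>      FILE_MODE_READ,viewer,ierr)
>> call MatLoad(Dual_mat,viewer,ierr)
>> call PetscViewerDestroy(viewer,ierr)
>>
>> One thing I am not sure about: on more than one processor MatLoad uses
>> PETSc's default row layout, which may not match the cell distribution the
>> rest of the code expects.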
>>
>> I thought the dual graph was computed only once, even during an MPI
>> run, but instead it seems to be recomputed more than once. Not sure
>> why.... surely I am missing something?
>>
>
> In PFLOTRAN, MatCreateMPIAdj() is called:
> - Once, if unstructured grid is specified in IMPLICIT format [in
> unstructured_grid.F90];
>
> - Twice, if unstructured grid is specified in EXPLICIT format [in
> unstructured_grid.F90].
>
Typo on my part, EXPLICIT format is read from *unstructured_explicit.F90*
> The first call creates an adjacency matrix based on the 'CONNECTIONS' data of
> the explicit grid, while the second call is used to create a Dual matrix.
>
> In order to better understand the code, you could compile it with
> 'ugrid_debug=1' and look at the Adj*.out/Dual*.out files for the two runs:
> - regression_tests/default/discretization/mixed_explicit.in
> - regression_tests/default/discretization/mixed_implicit.in
>
> -Gautam.
>
>
>
>> See below the original call from PFLOTRAN:
>>
>> xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
>> !!! this is the pass where the adjacency matrix is filled
>> num_cells_local_old = unstructured_grid%nlmax
>> allocate(local_vertices(unstructured_grid%max_nvert_per_cell* &
>> num_cells_local_old))
>> allocate(local_vertex_offset(num_cells_local_old+1))
>> local_vertices = 0
>> local_vertex_offset = 0
>> count = 0
>> local_vertex_offset(1) = 0
>> do local_id = 1, num_cells_local_old
>> do ivertex = 1, unstructured_grid%max_nvert_per_cell
>> if (unstructured_grid%cell_vertices(ivertex,local_id) == 0) exit
>> count = count + 1
>> ! local vertices must be zero-based for MatCreateMPIAdj; thus subtract 1
>> local_vertices(count) = &
>> unstructured_grid%cell_vertices(ivertex,local_id) - 1
>> enddo
>> local_vertex_offset(local_id+1) = count
>> enddo
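>> ! e.g. (hypothetical numbers): two triangles with 1-based vertices
>> ! {1,2,3} and {2,3,4} give local_vertex_offset = (/0,3,6/) and
>> ! local_vertices = (/0,1,2,1,2,3/), i.e. the zero-based CSR-style
>> ! layout that MatCreateMPIAdj expects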
>>
>> select case (unstructured_grid%grid_type)
>> case(TWO_DIM_GRID)
>> num_common_vertices = 2 ! cells must share at least this number of vertices
>> case(THREE_DIM_GRID)
>> num_common_vertices = 3 ! cells must share at least this number of vertices
>> case default
>> option%io_buffer = 'Grid type not recognized '
>> call printErrMsg(option)
>> end select
>>
>> ! determine the global offset from 0 for cells on this rank
>> global_offset_old = 0
>> call MPI_Exscan(num_cells_local_old,global_offset_old, &
>> ONE_INTEGER_MPI,MPIU_INTEGER,MPI_SUM,option%mycomm,ierr)
>>
>> !! the adjacency matrix is computed
>> call MatCreateMPIAdj(option%mycomm,num_cells_local_old, &
>> unstructured_grid%num_vertices_global, &
>> local_vertex_offset, &
>> local_vertices,PETSC_NULL_INTEGER,Adj_mat,ierr)
>>
>> !! the mesh dual graph is computed
>> #if defined(PETSC_HAVE_PARMETIS)
>> call MatMeshToCellGraph(Adj_mat,num_common_vertices,Dual_mat,ierr)
>> #endif
>> call MatDestroy(Adj_mat,ierr)
>>
>> call UGridPartition(unstructured_grid,option,Dual_mat,is_new, &
>> num_cells_local_new)
>>
>> if (allocated(local_vertices)) deallocate(local_vertices)
>> if (allocated(local_vertex_offset)) deallocate(local_vertex_offset)
>>
>> xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
>>
>> Paolo
>>
>>
>>
>>
>> On Sat, Nov 2, 2013 at 3:57 AM, Jed Brown <jedbrown at mcs.anl.gov> wrote:
>>
>>> Paolo Orsini <paolo.orsini at gmail.com> writes:
>>>
>>> > Hi Jed,
>>> >
>>> > I believe you are right.
>>> > Multiplying the connectivity matrix by its transpose returns an ncell x
>>> > ncell matrix with non-zero entries only for those cells sharing at least
>>> > one node. Hence the Dual Graph we need....
>>> >
>>> > Following your suggestions I changed the code as follows:
>>> >
>>> >
>>> !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
>>> > call MatCreateMPIAdj(option%mycomm,num_cells_local_old, &
>>> > unstructured_grid%num_vertices_global, &
>>> > local_vertex_offset, &
>>> > local_vertices,PETSC_NULL_INTEGER,Adj_mat,ierr)
>>> >
>>> > call MatConvert(Adj_mat, MATAIJ,MAT_INITIAL_MATRIX,AIJ_mat,ierr)
>>> >
>>> > call MatMatTransposeMult(AIJ_mat,AIJ_mat,MAT_INITIAL_MATRIX, &
>>> >      PETSC_DEFAULT_DOUBLE_PRECISION,Dual_mat,ierr)
>>> > !! call MatMeshToCellGraph(Adj_mat,num_common_vertices,Dual_mat,ierr)
>>> >
>>> > call MatDestroy(Adj_mat,ierr)
>>> > call MatDestroy(AIJ_mat,ierr)
>>> >
>>> !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
>>> >
>>> > When I try to run, the MatConvert takes ages (I suppose I can form
>>> > the AIJ_mat directly),
>>>
>>> I see MatConvert_Basic does not even try to preallocate, which is
>>> certainly why it takes a long time. That's easy to fix in PETSc (at
>>> least for AIJ).
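>>>
>>> In the meantime, a user-side workaround would be to skip Adj_mat and
>>> MatConvert altogether and assemble the cell-to-vertex connectivity
>>> directly as a preallocated AIJ matrix. Untested sketch, reusing the names
>>> from the PFLOTRAN snippet you posted (row, col, and one are scratch
>>> variables introduced here):
>>>
>>>   ! each local row (cell) has at most max_nvert_per_cell nonzeros
>>>   call MatCreateAIJ(option%mycomm,num_cells_local_old,PETSC_DECIDE, &
>>>        PETSC_DETERMINE,unstructured_grid%num_vertices_global, &
>>>        unstructured_grid%max_nvert_per_cell,PETSC_NULL_INTEGER, &
>>>        unstructured_grid%max_nvert_per_cell,PETSC_NULL_INTEGER,AIJ_mat,ierr)
>>>   do local_id = 1, num_cells_local_old
>>>     ! global_offset_old from the MPI_Exscan in your code; rows are zero-based
>>>     row = global_offset_old + local_id - 1
>>>     do ivertex = 1, unstructured_grid%max_nvert_per_cell
>>>       if (unstructured_grid%cell_vertices(ivertex,local_id) == 0) exit
>>>       col = unstructured_grid%cell_vertices(ivertex,local_id) - 1
>>>       one = 1.d0
>>>       call MatSetValues(AIJ_mat,1,row,1,col,one,INSERT_VALUES,ierr)
>>>     enddo
>>>   enddo
>>>   call MatAssemblyBegin(AIJ_mat,MAT_FINAL_ASSEMBLY,ierr)
>>>   call MatAssemblyEnd(AIJ_mat,MAT_FINAL_ASSEMBLY,ierr)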
>>>
>>> PETSc devs, why doesn't it make one pass counting and another pass
>>> inserting?
>>>
>>> > the PTScotch partitioning seems quick, but then I get an error later
>>> > on in the code, when attempting to compute the internal
>>> > connection.... Something is wrong...
>>>
>>> Always send the entire error message.
>>>
>>> > I need to double check... do you see anything wrong so far?
>>> >
>>> > P
>>> >
>>> >
>>> >
>>> >
>>> >
>>> >
>>> > On Fri, Nov 1, 2013 at 10:18 PM, Jed Brown <jedbrown at mcs.anl.gov> wrote:
>>> >
>>> >> Please keep the discussion on at least one list so it can be archived
>>> >> and so others can comment.
>>> >>
>>> >> Paolo Orsini <paolo.orsini at gmail.com> writes:
>>> >>
>>> >> > I really don't know how the DualGraph is going to affect the parallel
>>> >> > computation efficiency...
>>> >> > I am trying to use this partitioning as a black box, not sure what's
>>> >> > the right thing to do..
>>> >> >
>>> >> > In the code calling the partitioning (PFLOTRAN), I can see that MPIAdj
>>> >> > is built, then the Dual graph is computed, and finally the partitioning
>>> >> > takes place, see below:
>>> >> >
>>> >> >
>>> >>
>>> >> > !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
>>> >> > call MatCreateMPIAdj(option%mycomm,num_cells_local_old, &
>>> >> > unstructured_grid%num_vertices_global, &
>>> >> > local_vertex_offset, &
>>> >> > local_vertices,PETSC_NULL_INTEGER,Adj_mat,ierr)
>>> >> >
>>> >> > call MatMeshToCellGraph(Adj_mat,num_common_vertices,Dual_mat,ierr)
>>> >>
>>> >> You can MatConvert Adj_mat to AIJ or assemble as AIJ to begin with, then
>>> >> use MatMatTransposeMult to create Dual_mat (which will be in AIJ
>>> >> format).
>>> >>
>>> >> > .........
>>> >> >
>>> >> > call MatPartitioningCreate(option%mycomm,Part,ierr)
>>> >> > call MatPartitioningSetAdjacency(Part,Dual_mat,ierr)
>>> >> > call MatPartitioningSetFromOptions(Part,ierr)
>>> >> > call MatPartitioningApply(Part,is_new,ierr)
>>> >> > call MatPartitioningDestroy(Part,ierr)
>>> >> >
>>> >> > allocate(cell_counts(option%mycommsize))
>>> >> > call ISPartitioningCount(is_new,option%mycommsize,cell_counts,ierr)
>>> >> > num_cells_local_new = cell_counts(option%myrank+1)
>>> >> > call MPI_Allreduce(num_cells_local_new,iflag,ONE_INTEGER_MPI,MPIU_INTEGER, &
>>> >> > MPI_MIN,option%mycomm,ierr)
>>> >> >
>>> >>
>>> >> > !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
>>> >> >
>>> >> > You are telling me we can build a dual graph simply by multiplying
>>> >> > Adj_mat by its transpose?
>>> >>
>>> >> Yeah, doesn't it connect all cells that share at least one vertex? That
>>> >> is exactly the nonzero pattern of the matrix product.
>>> >>
>>> >> > Why use a simple AIJ when we are pre-processing a parallel process?
>>> >>
>>> >> MPIAIJ implements MatMatTransposeMult, but we don't have an
>>> >> implementation for MPIAdj (without calling ParMETIS).
>>> >>
>>> >> > Sorry, maybe what I want to do is less black-box than I thought...
>>> >> >
>>> >> > And thanks again for your patience and help
>>> >> >
>>> >> > Paolo
>>> >> >
>>> >> >
>>> >> >
>>> >> >
>>> >> > On Fri, Nov 1, 2013 at 9:17 PM, Jed Brown <jedbrown at mcs.anl.gov> wrote:
>>> >> >
>>> >> >> Paolo Orsini <paolo.orsini at gmail.com> writes:
>>> >> >>
>>> >> >> > Hi Jed,
>>> >> >> >
>>> >> >> > Thanks for your prompt reply.
>>> >> >> >
>>> >> >> > Is it possible to do the partitioning with PTScotch, without
>>> >> >> > using MatMeshToCellGraph?
>>> >> >> >
>>> >> >> > Is there any other function (not using ParMetis) to compute the
>>> >> >> > adjacency graph needed by MatPartitioningSetAdjacency, for parallel
>>> >> >> > unstructured grids?
>>> >> >>
>>> >> >> You can either compute the dual graph yourself or we can implement that
>>> >> >> function without calling ParMETIS. Actually, I believe the operation is
>>> >> >> just symbolic
>>> >> >>
>>> >> >> DualGraph = E * E^T
>>> >> >>
>>> >> >> where E is the element connectivity matrix. So you could build a normal
>>> >> >> (AIJ instead of MPIAdj) matrix with the connectivity, then call
>>> >> >> MatMatTransposeMult().
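>>> >> >>
>>> >> >> As a toy example, take cell 0 with vertices {0,1,2} and cell 1 with
>>> >> >> vertices {1,2,3}. Then
>>> >> >>
>>> >> >>   E = [1 1 1 0]      E * E^T = [3 2]
>>> >> >>       [0 1 1 1]                [2 3]
>>> >> >>
>>> >> >> and the off-diagonal entry counts the vertices the two cells share, so
>>> >> >> the nonzero pattern of E*E^T connects every pair of cells with at least
>>> >> >> one common vertex.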
>>> >> >>
>>> >> >> Would that work for you?
>>> >> >>
>>> >>
>>>
>>
>>
>
>