[petsc-users] MAT_NEW_NONZERO_LOCATIONS working?
Hong
hzhang at mcs.anl.gov
Wed Jun 20 10:02:54 CDT 2018
Marius:
I added support for parallel sparse RHS (centralized on the host) for MatMatSolve_MUMPS():
https://bitbucket.org/petsc/petsc/commits/2b691707dd0cf456c808def006e14b6f56b364b6
It is in the branch hzhang/mumps-spRHS.
You may test it.
I'll further clean up the routine, test it, then merge it to petsc.
Hong
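
(For orientation, here is a minimal Fortran sketch of the workflow this touches: factor the matrix through the MUMPS interface and call MatMatSolve(). The matrices A, B, X and their assembly are placeholders, and whether B may be a sparse, and now parallel, matrix in this call is exactly what the branch above is about; the sketch itself only shows the standard factor-and-solve sequence.)

    ! Sketch only: standard LU factor-and-solve sequence with MUMPS.
    ! A is an assembled AIJ matrix, B the right-hand-side matrix and X a
    ! dense matrix of matching size; their assembly is omitted here.
    ! finfo is the Fortran representation of MatFactorInfo (a real array).
    Mat            :: A, B, X, F
    IS             :: rperm, cperm
    PetscReal      :: finfo(MAT_FACTORINFO_SIZE)
    PetscErrorCode :: ierr

    call MatGetFactor(A,MATSOLVERMUMPS,MAT_FACTOR_LU,F,ierr)
    call MatGetOrdering(A,MATORDERINGNATURAL,rperm,cperm,ierr)
    call MatFactorInfoInitialize(finfo,ierr)
    call MatLUFactorSymbolic(F,A,rperm,cperm,finfo,ierr)
    call MatLUFactorNumeric(F,A,finfo,ierr)
    ! The branch concerns accepting a sparse (and parallel) B here;
    ! X must be dense.
    call MatMatSolve(F,B,X,ierr)
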
On Mon, Jun 4, 2018 at 9:54 AM, Hong <hzhang at mcs.anl.gov> wrote:
>
>
> On Mon, Jun 4, 2018 at 1:03 PM, Jean-Yves L'Excellent <
> Jean-Yves.L.Excellent at ens-lyon.fr> wrote:
>
>>
>> Thanks for the details of your needs.
>>
>> For the first application, the sparse RHS feature with distributed
>> solution should effectively be fine.
>>
> I'll add parallel support for this feature in petsc once I'm back at ANL
> after June 14th.
> Hong
>
>>
>> For the second one, a future distributed RHS feature (not currently
>> available in MUMPS) might help if the centralized sparse RHS is too
>> memory consuming, depending on the size of B1.
>>
>> Regards,
>> Jean-Yves, for the MUMPS developers
>>
>>
>> I want to invert a rather large sparse matrix. For this, using a sparse
>> RHS with centralized input would be ok as long as the solution is
>> distributed.
>>
>> And the second application I have in mind is solving a system of the form
>> AX=B, where A and B are sparse and B is given by a block matrix of the form
>> B=[B1 0, 0 0], where B1 is dense but its dimension is (much) smaller than
>> that of the whole matrix B.
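
(Purely for illustration, a sketch of how such a B could be assembled on the PETSc side as an AIJ matrix: only the leading k x k block is filled. The names B, n, k, b1, and the loop variables are placeholders, and the preallocation is just a crude upper bound.)

    ! Sketch: assemble B = [B1 0; 0 0] as a sparse n x n AIJ matrix,
    ! where only the leading k x k block B1 (k much smaller than n) is
    ! nonzero.  b1(1:k,1:k) is a placeholder array holding the dense B1.
    call MatCreateAIJ(PETSC_COMM_WORLD,PETSC_DECIDE,PETSC_DECIDE,n,n, &
         k,PETSC_NULL_INTEGER,k,PETSC_NULL_INTEGER,B,ierr)
    call MatGetOwnershipRange(B,nl1,nl2,ierr)
    do i=nl1,min(nl2,k)-1              ! local rows that fall inside B1
      do j=0,k-1
        row(1)=i; col(1)=j; v(1)=b1(i+1,j+1)
        call MatSetValues(B,1,row,1,col,v,INSERT_VALUES,ierr)
      end do
    end do
    call MatAssemblyBegin(B,MAT_FINAL_ASSEMBLY,ierr)
    call MatAssemblyEnd(B,MAT_FINAL_ASSEMBLY,ierr)
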
>>
>> Marius:
>> The current PETSc interface supports sequential sparse multiple
>> right-hand sides, but not distributed ones.
>> It turns out that MUMPS does not support distributed sparse multiple
>> right-hand sides at the moment (see attached email).
>>
>> Jean-Yves invites you to communicate with him directly.
>> Let me know how we can help on this matter,
>> e.g., should we add support for a parallel implementation of sparse
>> multiple right-hand sides with centralized RHS input?
>>
>> Hong
>>
>> ----------------------
>> Jean-Yves L'Excellent <Jean-Yves.L.Excellent at ens-lyon.fr>
>> 5:14 AM
>>
>> to Hong, mumps-dev
>>
>>
>>
>>
>> Hello,
>>
>> We do not support distributed sparse multiple right-hand sides at
>> the moment. From the feedback we have from applications, the
>> right-hand sides are often very sparse, and having them distributed
>> did not seem critical.
>>
>> Since we are specifying a distributed right-hand-side feature at the
>> moment, could you let us know in more detail the need regarding
>> distributed sparse right-hand sides (e.g., do all columns have the same
>> nonzero structure in that case?) or put us in contact with the user who
>> needs this?
>>
>> Thanks,
>> Jean-Yves and Patrick
>>
>>>
>>> Thanks a lot guys, very helpful.
>>>
>>>
>>>
>>>
>>> I see MUMPS http://mumps.enseeiht.fr/
>>>
>>> Sparse multiple right-hand side, distributed solution; Exploitation of
>>> sparsity in the right-hand sides
>>>
>>> The PETSc interface computes the MUMPS distributed solution by default
>>> (this is not new) (ICNTL(21) = 1).
>>>
>>> I will add support for sparse multiple right-hand sides.
>>>
>>> Hong
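
(As a side note on the ICNTL mention above, a sketch of how MUMPS control parameters can be set and read back on the factored matrix from Fortran. F is assumed to come from MatGetFactor() with MATSOLVERMUMPS, ICNTL(4) is used here only as a harmless example, and the availability of the Fortran bindings for MatMumpsSetIcntl/MatMumpsGetIcntl is assumed; ICNTL(21) itself is managed internally by the PETSc interface.)

    ! Sketch: adjust a MUMPS control parameter before the factorization
    ! and read it back.  icntl and ival are PetscInt, ierr a PetscErrorCode.
    icntl = 4                          ! ICNTL(4): MUMPS output verbosity
    ival  = 2
    call MatMumpsSetIcntl(F,icntl,ival,ierr)
    call MatMumpsGetIcntl(F,icntl,ival,ierr)
    write(6,*) 'ICNTL(4) = ', ival
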
>>>
>>> On Thu, May 31, 2018 at 11:25 AM, Smith, Barry F. <bsmith at mcs.anl.gov>
>>> wrote:
>>> Hong,
>>>
>>> Can you see about adding support for distributed right hand side?
>>>
>>> Thanks
>>>
>>> Barry
>>>
>>> > On May 31, 2018, at 2:37 AM, Marius Buerkle <mbuerkle at web.de> wrote:
>>> >
>>> > Thanks again for the fix for MAT_NEW_NONZERO_LOCATIONS.
>>> >
>>> > I have yet another question, sorry. The recent version of MUMPS
>>> > supports distributed and sparse RHS; is there any chance that this will
>>> > be supported in PETSc in the near future?
>>> >
>>> >
>>> >
>>> >
>>> >> On May 30, 2018, at 6:55 PM, Marius Buerkle <mbuerkle at web.de> wrote:
>>> >>
>>> >> Thanks for the quick fix, I will test it and report back.
>>> >> I have another, maybe related, question: if MAT_NEW_NONZERO_LOCATIONS
>>> >> is true and, let's say, 1 new nonzero position is created, it does not
>>> >> allocate 1 but several new nonzeros, yet only uses 1.
>>> >
>>> > Correct
>>> >
>>> >> I think that is normal, right?
>>> >
>>> > Yes
>>> >
>>> >> But, at least as far as I understand the manual, a subsequent call to
>>> >> MatAssemblyBegin/End with MAT_FINAL_ASSEMBLY should compress out the
>>> >> unused allocations and release the memory; is this correct?
>>> >
>>> > It "compresses it out" (by shifting all the nonzero entries to the
>>> beginning of the internal i, j, and a arrays), but does NOT release any
>>> memory. Since the values are stored in one big contiguous array (obtained
>>> with a single malloc) it cannot just free part of the array, so the extra
>>> locations just sit harmlessly at the end if the array unused.
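
(A minimal way to observe this, sketched under the assumption of an assembled AIJ matrix A: query MatGetInfo() after the final assembly and compare the allocated, used, and unneeded nonzero counts. The info(MAT_INFO_SIZE) array is already declared, though unused, in the reproducer further down.)

    ! Sketch: info is a PetscReal array of length MAT_INFO_SIZE;
    ! MAT_LOCAL reports the per-process numbers.
    call MatGetInfo(A,MAT_LOCAL,info,ierr)
    write(6,*) 'nz_allocated =', info(MAT_INFO_NZ_ALLOCATED)
    write(6,*) 'nz_used      =', info(MAT_INFO_NZ_USED)
    write(6,*) 'nz_unneeded  =', info(MAT_INFO_NZ_UNNEEDED)
    write(6,*) 'mallocs      =', info(MAT_INFO_MALLOCS)
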
>>> >
>>> >> If so, this did not work for me; even after doing
>>> >> MAT_FINAL_ASSEMBLY the unused nonzero allocations remain. Is this
>>> >> normal?
>>> >
>>> > Yes,
>>> >
>>> > Barry
>>> >
>>> >>
>>> >>>
>>> >>> Fixed in the branch barry/fix-mat-new-nonzero-locations/maint
>>> >>>
>>> >>> Once this passes testing it will go into the maint branch and then
>>> >>> the next patch release, but you can use it now in the branch
>>> >>> barry/fix-mat-new-nonzero-locations/maint
>>> >>>
>>> >>> Thanks for the report and reproducible example
>>> >>>
>>> >>> Barry
>>> >>>
>>> >>>
>>> >>>> On May 29, 2018, at 7:51 PM, Marius Buerkle <mbuerkle at web.de> wrote:
>>> >>>>
>>> >>>> Sure, I made a small reproducer; it is Fortran, I hope that is ok.
>>> >>>> If MAT_NEW_NONZERO_LOCATIONS is set to false I get an error. If it is
>>> >>>> set to true, the new nonzero element is inserted. If
>>> >>>> MAT_NEW_NONZERO_LOCATIONS is false and either
>>> >>>> MAT_NEW_NONZERO_LOCATION_ERR or MAT_NEW_NONZERO_ALLOCATION_ERR is set
>>> >>>> to false afterwards, then the new nonzero is also created without an
>>> >>>> error. But if MAT_NEW_NONZERO_LOCATIONS is set to false after
>>> >>>> MAT_NEW_NONZERO_LOCATION_ERR/MAT_NEW_NONZERO_ALLOCATION_ERR have been
>>> >>>> set to false, I get an error again.
>>> >>>>
>>> >>>>
>>> >>>> program newnonzero
>>> >>>> #include <petsc/finclude/petscmat.h>
>>> >>>> use petscmat
>>> >>>> implicit none
>>> >>>>
>>> >>>> Mat :: A
>>> >>>> PetscInt :: dnnz,onnz,n,m,idxm(1),idxn(1),nl1,nl2
>>> >>>> PetscScalar :: v(1)
>>> >>>> PetscReal :: info(MAT_INFO_SIZE)
>>> >>>> PetscErrorCode :: ierr
>>> >>>>
>>> >>>> integer :: nproc,iproc,i
>>> >>>>
>>> >>>> call PetscInitialize(PETSC_NULL_CHARACTER,ierr)
>>> >>>>
>>> >>>> call MPI_COMM_SIZE(PETSC_COMM_WORLD, nproc,ierr)
>>> >>>>
>>> >>>> call MPI_Comm_rank( PETSC_COMM_WORLD, iproc, ierr )
>>> >>>>
>>> >>>> n=3
>>> >>>> m=n
>>> >>>> call MatCreateAIJ(PETSC_COMM_WORLD,PETSC_DECIDE,PETSC_DECIDE,n,m,1,PETSC_NULL_INTEGER,0,PETSC_NULL_INTEGER,A,ierr)
>>> >>>>
>>> >>>>
>>> >>>> call MatGetOwnershipRange(A,nl1,nl2,ierr)
>>> >>>> do i=nl1,nl2-1
>>> >>>> idxn(1)=i
>>> >>>> idxm(1)=i
>>> >>>> v(1)=1d0
>>> >>>> call MatSetValues(A,1,idxn,1,idxm, v,INSERT_VALUES,ierr)
>>> >>>> end do
>>> >>>> call MatAssemblyBegin(A,MAT_FINAL_ASSEMBLY,ierr)
>>> >>>> call MatAssemblyEnd(A,MAT_FINAL_ASSEMBLY,ierr)
>>> >>>>
>>> >>>> call MatSetOption(A,MAT_NEW_NONZERO_LOCATIONS,PETSC_FALSE,ierr)
>>> >>>> !~ call MatSetOption(A,MAT_NEW_NONZERO_LOCATION_ERR,PETSC_FALSE,ierr)
>>> >>>> !~ call MatSetOption(A,MAT_NEW_NONZERO_ALLOCATION_ERR,PETSC_FALSE,ierr)
>>> >>>> !~ call MatSetOption(A,MAT_NEW_NONZERO_LOCATIONS,PETSC_FALSE,ierr)
>>> >>>>
>>> >>>>
>>> >>>> idxn(1)=0
>>> >>>> idxm(1)=n-1
>>> >>>> if ((idxn(1).ge.nl1).and.(idxn(1).le.nl2-1)) then
>>> >>>> v(1)=2d0
>>> >>>> call MatSetValues(A,1,idxn,1,idxm, v,INSERT_VALUES,ierr)
>>> >>>> end if
>>> >>>> call MatAssemblyBegin(A,MAT_FINAL_ASSEMBLY,ierr)
>>> >>>> call MatAssemblyEnd(A,MAT_FINAL_ASSEMBLY,ierr)
>>> >>>>
>>> >>>> if ((idxn(1).ge.nl1).and.(idxn(1).le.nl2-1)) then
>>> >>>> v(1)=2d0
>>> >>>> call MatGetValues(A,1,idxn,1,idxm, v,ierr)
>>> >>>> write(6,*) v
>>> >>>> end if
>>> >>>>
>>> >>>> call PetscFinalize(ierr)
>>> >>>>
>>> >>>> end program newnonzero
>>> >>>>
>>> >>>>
>>> >>>>
>>> >>>> $ mpiexec.hydra -n 3 ./a.out
>>> >>>> [0]PETSC ERROR: --------------------- Error Message --------------------------------------------------------------
>>> >>>> [0]PETSC ERROR: Argument out of range
>>> >>>> [0]PETSC ERROR: Inserting a new nonzero at global row/column (0, 2) into matrix
>>> >>>> [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting.
>>> >>>> [0]PETSC ERROR: Petsc Release Version 3.9.2, May, 20, 2018
>>> >>>> [0]PETSC ERROR: ./a.out on a named tono-hpc1 by marius Wed May 30 09:42:40 2018
>>> >>>> [0]PETSC ERROR: Configure options --prefix=/home/marius/prog/petsc/3.9.2 --download-elemental=yes --download-metis=yes --download-parmetis=yes --download-mumps=yes --with-scalapack-lib="/home/marius/intel/compilers_and_libraries_2018.2.199/linux/mkl/lib/intel64/libmkl_scalapack_lp64.a -Wl,--start-group /home/marius/intel/compilers_and_libraries_2018.2.199/linux/mkl/lib/intel64/libmkl_intel_lp64.a /home/marius/intel/compilers_and_libraries_2018.2.199/linux/mkl/lib/intel64/libmkl_sequential.a /home/marius/intel/compilers_and_libraries_2018.2.199/linux/mkl/lib/intel64/libmkl_core.a /home/marius/intel/compilers_and_libraries_2018.2.199/linux/mkl/lib/intel64/libmkl_blacs_intelmpi_lp64.a -Wl,--end-group -lpthread -lm -ldl" --FC=mpiifort --CC=mpicc --CXX=mpicxx --with-scalar-type=complex --with-mpi-dir= --with-blaslapack-lib="/home/marius/intel/compilers_and_libraries_2018.2.199/linux/mkl/lib/intel64/libmkl_scalapack_lp64.a -Wl,--start-group /home/marius/intel/compilers_and_libraries_2018.2.199/linux/mkl/lib/intel64/libmkl_intel_lp64.a /home/marius/intel/compilers_and_libraries_2018.2.199/linux/mkl/lib/intel64/libmkl_sequential.a /home/marius/intel/compilers_and_libraries_2018.2.199/linux/mkl/lib/intel64/libmkl_core.a /home/marius/intel/compilers_and_libraries_2018.2.199/linux/mkl/lib/intel64/libmkl_blacs_intelmpi_lp64.a -Wl,--end-group -lpthread -lm -ldl" --with-cxx-dialect=C++11 --download-superlu_dist=yes --download-ptscotch=yes --with-x --with-debugging=1 --download-superlu=yes --with-mkl_cpardiso=1 --with-mkl_pardiso=1 --with-scalapack=1
>>> >>>> [0]PETSC ERROR: #1 MatSetValues_MPIAIJ() line 607 in /home/marius/prog/petsc/petsc-3.9.2/src/mat/impls/aij/mpi/mpiaij.c
>>> >>>> [0]PETSC ERROR: #2 MatSetValues() line 1312 in /home/marius/prog/petsc/petsc-3.9.2/src/mat/interface/matrix.c
>>> >>>> (0.000000000000000E+000,0.000000000000000E+000)
>>> >>>>
>>> >>>>
>>> >>>>
>>> >>>> Please send the complete error message, the type of matrix used,
>>> >>>> etc. Ideally, code that demonstrates the problem.
>>> >>>>
>>> >>>> Barry
>>> >>>>
>>> >>>>
>>> >>>>> On May 29, 2018, at 3:31 AM, Marius Buerkle <mbuerkle at web.de> wrote:
>>> >>>>>
>>> >>>>>
>>> >>>>> Hi,
>>> >>>>>
>>> >>>>> I tried to set MAT_NEW_NONZERO_LOCATIONS to false; as far as I
>>> >>>>> understood, MatSetValues should then simply ignore entries which
>>> >>>>> would give rise to new nonzero values, not creating a new entry and
>>> >>>>> not causing an error, but I get "[1]PETSC ERROR: Inserting a new
>>> >>>>> nonzero at global row/column". Is this option supposed to work or
>>> >>>>> not?
>>> >>>>
>>> >>>
>>> >>>
>>> >
>>>
>>>
>>
>>
>