[petsc-users] MatCreateMPIAIJWithSplitArrays
Alejandro Marcos Aragón
alejandro.aragon at gmail.com
Wed Mar 23 04:59:11 CDT 2011
Hi PETSc users,
My code uses a custom sparse matrix format based on a hashed container for the matrix entries. The interface can then produce a matrix in compressed row (or compressed column) storage format; that is, it can give me the three arrays needed to represent the sparse matrix in these formats. I would like to use the PETSc parallel solvers, so I thought of trying the MatCreateMPIAIJWithSplitArrays function so that I don't have to copy the values into the PETSc matrix a second time.
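To be concrete, this is a minimal sketch of the call I have in mind; the array names (d_i, d_j, d_a for the diagonal part, o_i, o_j, o_a for the off-diagonal part) and the sizes m, n, M, N are placeholders for my own data, not something I have working yet:

    Mat            A;
    PetscErrorCode ierr;

    /* m, n: rows/columns owned by this process; M, N: global sizes.
       d_i, d_j, d_a / o_i, o_j, o_a: CSR arrays of the diagonal and
       off-diagonal blocks (placeholders for my data).               */
    ierr = MatCreateMPIAIJWithSplitArrays(PETSC_COMM_WORLD, m, n, M, N,
                                          d_i, d_j, d_a,
                                          o_i, o_j, o_a, &A); CHKERRQ(ierr);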
Now my question is that I don't really get the point of splitting the sparse matrix into diagonal and off-diagonal blocks. In the compressed row storage format there is no distinction between these two blocks, and I don't see a clear way to determine where the boundary between them lies. Can someone point out how I should use this function, or whether there is a better function that can take the three arrays I have at this point?
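If I read the manual page correctly, the "diagonal" block would be the square block of columns that this process also owns as rows, so it seems I would have to split my CSR arrays myself, roughly like the sketch below (all names are placeholders, and please correct me if this reading is wrong):

    PetscInt    row, k, cstart, cend;
    PetscMPIInt rank;

    MPI_Comm_rank(PETSC_COMM_WORLD, &rank);
    /* exclusive prefix sum of the local column counts gives my first
       owned global column index                                      */
    MPI_Exscan(&n, &cstart, 1, MPIU_INT, MPI_SUM, PETSC_COMM_WORLD);
    if (!rank) cstart = 0;   /* MPI_Exscan result is undefined on rank 0 */
    cend = cstart + n;

    for (row = 0; row < m; ++row) {
      for (k = csr_i[row]; k < csr_i[row+1]; ++k) {
        if (csr_j[k] >= cstart && csr_j[k] < cend) {
          /* entry would go into the diagonal CSR arrays (d_i, d_j, d_a) */
        } else {
          /* entry would go into the off-diagonal arrays (o_i, o_j, o_a) */
        }
      }
    }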
Also, since the sparse matrix on each process is the result of a finite element assembly routine, some rows overlap between processes (several finite element nodes are shared among them). At this point, using MatSetValues with the ADD_VALUES flag works fine (see the sketch below), but I want to make sure that if I use MatCreateMPIAIJWithSplitArrays (where I need to set the number of local rows) I still get this behavior. In other words, if I sum the number of local rows over all processes, I get a total that is greater than the global number of rows because of this overlap.
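For reference, this is roughly what works for me today: each process adds its element contributions with ADD_VALUES and PETSc sums the overlapping rows during assembly (nen, idx and ke are placeholders for my element data):

    PetscInt e;
    for (e = 0; e < num_local_elements; ++e) {
      /* idx: global equation numbers of the element's nen dofs,
         ke:  the nen x nen element matrix in row-major order     */
      ierr = MatSetValues(A, nen, idx, nen, idx, ke, ADD_VALUES); CHKERRQ(ierr);
    }
    ierr = MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY); CHKERRQ(ierr);
    ierr = MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY); CHKERRQ(ierr);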
Thank you all,
Alejandro M. Aragón, Ph.D.