[petsc-users] Parallelizing a matrix-free code

Michael Werner michael.werner at dlr.de
Mon Oct 16 02:26:57 CDT 2017


Hello,

I'm having trouble parallelizing a matrix-free code with PETSc. In
this code, an external CFD code provides the matrix-vector product
for an iterative solver in PETSc. To improve the convergence rate,
I'm using an explicitly stored Jacobian matrix to precondition the
solver. This works fine for serial runs. However, when I try to use
multiple processes, I run into the problem that PETSc decomposes the
preconditioner matrix, and probably also the shell matrix, differently
than the external CFD code decomposes the grid.
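For reference, the serial setup looks roughly like the sketch below. The context type AppCtx and the callback CFDMatMult are placeholder names for my wrapper around the CFD code, and Pjac is the explicitly stored Jacobian; this is only meant to illustrate the structure, not the actual code.

#include <petscksp.h>

/* placeholder context for whatever the CFD interface needs */
typedef struct { void *cfd_handle; } AppCtx;

/* wrapper that asks the external CFD code for y = A*x (placeholder name) */
extern PetscErrorCode CFDMatMult(Mat A, Vec x, Vec y);

PetscErrorCode SetupSolver(AppCtx *ctx, Mat Pjac, PetscInt n, KSP *ksp)
{
  PetscErrorCode ierr;
  Mat            Ashell;

  /* shell matrix: PETSc never sees the entries, it only calls our MatMult */
  ierr = MatCreateShell(PETSC_COMM_WORLD, PETSC_DECIDE, PETSC_DECIDE, n, n,
                        ctx, &Ashell);CHKERRQ(ierr);
  ierr = MatShellSetOperation(Ashell, MATOP_MULT, (void (*)(void))CFDMatMult);CHKERRQ(ierr);

  ierr = KSPCreate(PETSC_COMM_WORLD, ksp);CHKERRQ(ierr);
  /* operator is the shell matrix, the preconditioner is built from the stored Jacobian */
  ierr = KSPSetOperators(*ksp, Ashell, Pjac);CHKERRQ(ierr);
  ierr = KSPSetFromOptions(*ksp);CHKERRQ(ierr);
  return 0;
}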

The Jacobian matrix is built in such a way that its rows and columns
correspond to the global IDs of the individual points in my CFD mesh.
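In other words, an entry coupling two mesh points is inserted at the row and column given by their global IDs, roughly as in this sketch (the names are placeholders, not the actual code):

#include <petscmat.h>

/* insert the Jacobian entry coupling the CFD points with global IDs gid_i and
 * gid_j at row gid_i, column gid_j of the preconditioner matrix */
PetscErrorCode InsertJacobianEntry(Mat Pjac, PetscInt gid_i, PetscInt gid_j,
                                   PetscScalar value)
{
  PetscErrorCode ierr;
  ierr = MatSetValues(Pjac, 1, &gid_i, 1, &gid_j, &value, INSERT_VALUES);CHKERRQ(ierr);
  return 0;
}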

The CFD code decomposes the domain based on the proximity of points to
each other, so that the resulting subgrids are spatially contiguous. However,
since it's an unstructured grid, those subgrids are not necessarily made up of
points with consecutive global IDs. This is a problem, since PETSc seems
to partition the matrix into contiguous slices of rows.
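This is what I mean by contiguous slices: as far as I understand, each rank ends up owning one contiguous range of rows, which can be queried as in the following sketch (the function name is just for illustration).

#include <petscmat.h>

PetscErrorCode PrintOwnership(Mat Pjac)
{
  PetscErrorCode ierr;
  PetscInt       rstart, rend;
  PetscMPIInt    rank;

  ierr = MPI_Comm_rank(PETSC_COMM_WORLD, &rank);CHKERRQ(ierr);
  /* each rank owns the contiguous rows rstart..rend-1 of the matrix */
  ierr = MatGetOwnershipRange(Pjac, &rstart, &rend);CHKERRQ(ierr);
  ierr = PetscSynchronizedPrintf(PETSC_COMM_WORLD, "[%d] rows %D to %D\n",
                                 rank, rstart, rend - 1);CHKERRQ(ierr);
  ierr = PetscSynchronizedFlush(PETSC_COMM_WORLD, PETSC_STDOUT);CHKERRQ(ierr);
  return 0;
}

The CFD points owned by a rank, on the other hand, have scattered global IDs, so the two decompositions do not line up.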

I'm not sure what the best approach to this problem might be. Is it
possible to tell PETSc exactly which rows/columns it should assign to
the individual processes?
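To make the question a bit more concrete: the closest thing I have found is setting the local sizes explicitly, roughly as in the sketch below (nlocal would be the number of CFD points owned by the calling rank; the function name is just for illustration). As far as I can tell, this only controls how many contiguous rows each rank gets, not which ones, so it does not by itself remove the mismatch.

#include <petscmat.h>

PetscErrorCode CreatePrecondMatrix(PetscInt nlocal, Mat *Pjac)
{
  PetscErrorCode ierr;

  ierr = MatCreate(PETSC_COMM_WORLD, Pjac);CHKERRQ(ierr);
  /* set how many rows/columns this rank owns; PETSc sums up the global size */
  ierr = MatSetSizes(*Pjac, nlocal, nlocal, PETSC_DETERMINE, PETSC_DETERMINE);CHKERRQ(ierr);
  ierr = MatSetFromOptions(*Pjac);CHKERRQ(ierr);
  ierr = MatSetUp(*Pjac);CHKERRQ(ierr);
  return 0;
}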
