[petsc-users] Handling dense matrices in RBF interpolation

Jed Brown jed at jedbrown.org
Fri Oct 24 09:59:32 CDT 2014


Florian Lindner <mailinglists at xgm.de> writes:

> Hello,
>
> I am reimplementing a radial basis function interpolation algorithm using PETSc in our software.
>
> Using basis functions with compact support is blazingly fast. These basis functions result in sparse matrices.
>
>   ierr = MatSetType(matrixC, MATSBAIJ); CHKERRV(ierr); // create symmetric, block sparse matrix.
>   ierr = MatSetSizes(matrixC, PETSC_DECIDE, PETSC_DECIDE, n, n); CHKERRV(ierr);
>   ierr = MatSetOption(matrixC, MAT_SYMMETRY_ETERNAL, PETSC_TRUE); CHKERRV(ierr);
>   ierr = MatSetUp(matrixC); CHKERRV(ierr);
>
>   ierr = MatSetType(matrixA, MATAIJ); CHKERRV(ierr); // create sparse matrix.
>   ierr = MatSetSizes(matrixA, PETSC_DECIDE, PETSC_DECIDE, outputSize, n); CHKERRV(ierr);
>   ierr = MatSetUp(matrixA); CHKERRV(ierr);
>
> The benchmark is a direct dense-matrix LU decomposition, a rather naive implementation. The code runs sequentially, using PETSC_COMM_SELF.
>
> Using the matrix creation code above it's about 10 to 20 times faster, still without preallocation. During matrix assembly, which involves looping over rows and then over columns, I collect all column values of a row in an array and set each row with a single MatSetValues call.
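
[For illustration, a minimal per-row assembly sketch along those lines; the variable names and the evalBasis() helper are placeholders, not the original code:

  PetscInt    *cols;
  PetscScalar *vals;
  ierr = PetscMalloc2(n, &cols, n, &vals); CHKERRV(ierr);
  for (PetscInt i = 0; i < outputSize; i++) {
    for (PetscInt j = 0; j < n; j++) {
      cols[j] = j;
      vals[j] = evalBasis(i, j);  /* placeholder: evaluate the chosen basis function */
    }
    /* hand the whole row to MatSetValues in one call */
    ierr = MatSetValues(matrixA, 1, &i, n, cols, vals, INSERT_VALUES); CHKERRV(ierr);
  }
  ierr = MatAssemblyBegin(matrixA, MAT_FINAL_ASSEMBLY); CHKERRV(ierr);
  ierr = MatAssemblyEnd(matrixA, MAT_FINAL_ASSEMBLY); CHKERRV(ierr);
  ierr = PetscFree2(cols, vals); CHKERRV(ierr);
]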
>
> Since the user can select the basis function for the RBF at runtime, my code does not know whether it has compact support, and thus whether it results in sparse or dense matrices. Basis functions that produce dense matrices make the PETSc code more than 50 times slower.

Why not analyze the basis functions so you can choose an appropriate data structure?
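
For instance, if the basis-function object can report whether it has compact support, the matrix type could be chosen at setup time (a sketch; hasCompactSupport() is a hypothetical query, not an existing interface):

  ierr = MatCreate(PETSC_COMM_SELF, &matrixC); CHKERRV(ierr);
  ierr = MatSetSizes(matrixC, PETSC_DECIDE, PETSC_DECIDE, n, n); CHKERRV(ierr);
  if (hasCompactSupport(basisFunction)) {
    ierr = MatSetType(matrixC, MATSBAIJ); CHKERRV(ierr);    /* sparse symmetric storage */
  } else {
    ierr = MatSetType(matrixC, MATSEQDENSE); CHKERRV(ierr); /* dense storage, no preallocation needed */
  }
  ierr = MatSetUp(matrixC); CHKERRV(ierr);

With a dense matrix type, the solve can still go through KSP, e.g. with -ksp_type preonly -pc_type lu, which then uses a dense factorization.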

> Running the code with n = 503 (mesh size of 500 plus polynomial terms) it takes about 16 seconds: 32% filling matrix A, 15% filling matrix C, and 51% in KSPSolve on matrix C (A is never passed to KSPSolve). (Dense matrices.)
>
> If I just preallocate a practically dense matrix:
>
>   ierr = MatSeqAIJSetPreallocation(matrixA, n, NULL); CHKERRV(ierr); 
>
> the runtime drops to 10 seconds, with 0% of the time spent filling matrix A (KSPSolve 76%, matrix C 22%). In the sparse case this is still a lot faster than before, though slower than without preallocation.
>
> However, trying to preallocate C, which is of type MATSBAIJ fails for me:
>
>   ierr = MatSeqSBAIJSetPreallocation(_matrixC.matrix, n, n, NULL);
>
> results in:
>
>   [0]PETSC ERROR: Arguments are incompatible
>   [0]PETSC ERROR: Cannot change block size 1 to 503

You probably want bs=1 instead of bs=n in the call above.
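
That is, something like this (block size 1, keeping n as an upper bound on the nonzeros per row in the upper triangle):

  ierr = MatSeqSBAIJSetPreallocation(_matrixC.matrix, 1, n, NULL); CHKERRV(ierr);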

> What would you advise me to do here? How can I make the KSPSolve faster when it is a dense matrix?

Uh, find a data-sparse representation or reformulate to expose such
sparsity.  And use appropriate data structures.

> I could guess the preallocation. 

Yes, you should definitely preallocate when assembling sparse matrices.
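
A sketch of exact preallocation for the compactly supported case (countNeighbors() is a hypothetical helper that counts the mesh vertices inside the support radius of row i):

  PetscInt *nnz;
  ierr = PetscMalloc1(outputSize, &nnz); CHKERRV(ierr);
  for (PetscInt i = 0; i < outputSize; i++) {
    nnz[i] = countNeighbors(i);  /* expected nonzeros in row i */
  }
  ierr = MatSeqAIJSetPreallocation(matrixA, 0, nnz); CHKERRV(ierr);
  ierr = PetscFree(nnz); CHKERRV(ierr);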