[petsc-users] Handling dense matrices in RBF interpolation

Florian Lindner mailinglists at xgm.de
Fri Oct 24 04:23:30 CDT 2014


Hello,

I am redoing a radial basis function interpolation algorithm using PETSc in our software.

Using basis functions with compact support is blazingly fast. These basis functions result in sparse matrices.

  ierr = MatSetType(matrixC, MATSBAIJ); CHKERRV(ierr); // create symmetric, block sparse matrix.
  ierr = MatSetSizes(matrixC, PETSC_DECIDE, PETSC_DECIDE, n, n); CHKERRV(ierr);
  ierr = MatSetOption(matrixC, MAT_SYMMETRY_ETERNAL, PETSC_TRUE); CHKERRV(ierr);
  ierr = MatSetUp(matrixC); CHKERRV(ierr);

  ierr = MatSetType(matrixA, MATAIJ); CHKERRV(ierr); // create sparse matrix.
  ierr = MatSetSizes(matrixA, PETSC_DECIDE, PETSC_DECIDE, outputSize, n); CHKERRV(ierr);
  ierr = MatSetUp(matrixA); CHKERRV(ierr);

The benchmark is a direct dense-matrix LU decomposition, a rather naive implementation. The code runs sequentially, using PETSC_COMM_SELF.

Using the matrix creation code above, it is about 10 to 20 times faster, still without preallocation. During matrix assembly, which loops over rows and then over columns, I collect all columns of a row in an array and set each row with a single MatSetValues call.
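Roughly, the assembly looks like this (a simplified sketch: basisFunction, distance, and the buffer handling are placeholders, not the actual code):

```c
/* Per-row assembly sketch: collect the nonzero columns of one row,
   then hand the whole row to PETSc in a single MatSetValues call. */
for (PetscInt row = 0; row < n; row++) {
  PetscInt nCols = 0;
  for (PetscInt col = 0; col < n; col++) {
    double v = basisFunction(distance(row, col));   /* placeholder helpers */
    if (v != 0.0) {
      colIdx[nCols]  = col;   /* preallocated index buffer of length n  */
      colVals[nCols] = v;     /* preallocated value buffer of length n  */
      nCols++;
    }
  }
  ierr = MatSetValues(matrixC, 1, &row, nCols, colIdx, colVals, INSERT_VALUES); CHKERRV(ierr);
}
ierr = MatAssemblyBegin(matrixC, MAT_FINAL_ASSEMBLY); CHKERRV(ierr);
ierr = MatAssemblyEnd(matrixC, MAT_FINAL_ASSEMBLY); CHKERRV(ierr);
```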

Since the user can select the basis function for the RBF at runtime, my code does not know whether it has compact support, and therefore whether it produces sparse or dense matrices. Basis functions that produce dense matrices make the PETSc code more than 50 times slower.

Running the code with n = 503 (mesh size of 500 plus the polynomial) takes about 16 seconds in the dense case: 32% filling matrix A, 15% filling matrix C, and 51% in KSPSolve on matrix C (A is not solved with KSP).

If I just preallocate a practically dense matrix:

  ierr = MatSeqAIJSetPreallocation(matrixA, n, NULL); CHKERRV(ierr); 

the runtime drops to 10 seconds, with 0% (zero) spent filling matrix A (KSPSolve 76%, matrix C 22%). In the sparse case this is still a lot faster than before, though slower than without preallocation.

However, trying to preallocate C, which is of type MATSBAIJ fails for me:

  ierr = MatSeqSBAIJSetPreallocation(_matrixC.matrix, n, n, NULL);

results in:

  [0]PETSC ERROR: Arguments are incompatible
  [0]PETSC ERROR: Cannot change block size 1 to 503
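If I read the man page correctly, the second argument of MatSeqSBAIJSetPreallocation is the block size bs, not a row length, so my call above asks PETSc to change the block size from 1 to n = 503. A guess at what the call should look like instead (assuming scalar entries, i.e. bs = 1; please correct me if this is wrong):

```c
/* Signature: MatSeqSBAIJSetPreallocation(Mat B, PetscInt bs, PetscInt nz, const PetscInt nnz[])
   bs = 1 keeps the scalar block size; nz = n preallocates a full
   upper-triangular row per row, i.e. effectively dense storage. */
ierr = MatSeqSBAIJSetPreallocation(_matrixC.matrix, 1, n, NULL); CHKERRV(ierr);
```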

What would you advise me to do here? How can I make KSPSolve faster when the matrix is dense?

I could guess the preallocation: keep a list of the available basis functions and whether they produce sparse or dense matrices, then either preallocate full rows or select the MATDENSE type for dense matrices.
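Something along these lines (a sketch of the guessing approach; hasCompactSupport is a hypothetical flag I would maintain per basis function, not an existing symbol in our code or PETSc):

```c
/* Choose the matrix type from the selected basis function:
   compact support -> symmetric sparse; global support -> dense. */
if (hasCompactSupport) {
  ierr = MatSetType(matrixC, MATSBAIJ); CHKERRV(ierr);
} else {
  ierr = MatSetType(matrixC, MATDENSE); CHKERRV(ierr);  /* dense storage, no preallocation guessing needed */
}
ierr = MatSetSizes(matrixC, PETSC_DECIDE, PETSC_DECIDE, n, n); CHKERRV(ierr);
ierr = MatSetUp(matrixC); CHKERRV(ierr);
```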

Any ideas?

Thanks a lot,
Florian


