[petsc-users] Patching in generalized eigenvalue problems

Barry Smith bsmith at mcs.anl.gov
Sun Apr 9 14:34:13 CDT 2017


> On Apr 9, 2017, at 2:21 PM, Daralagodu Dattatreya Jois, Sathwik Bharadw <sdaralagodudatta at wpi.edu> wrote:
> 
> Dear petsc users,
> 
> I am solving generalized eigenvalue problems using PETSc and SLEPc.
> Our equation is of the form
> 
> A x = λ B x.
> 
> I construct the A and B matrices with type MATMPIAIJ. Suppose both
> matrices are of dimension 10×10. When solving on a closed geometry, we
> need to add all the entries of the last (9th) row and column to the
> first (0th) row and column, respectively, in both matrices. On a
> high-density mesh I will have a large number of such row-to-row and
> column-to-column additions; for example, I may have to add the last 200
> rows and columns to the first 200 rows and columns, respectively. We
> then zero the folded row and column except for the diagonal element
> (the 9th row/column in the example above).

   Where is this "strange" operation coming from?

   Boundary conditions?

   Is there any way to assemble the matrices with these sums initially, instead of doing it after the fact?

   Why is it always the "last rows" and the "first rows"?

   What happens when you run in parallel, where the first and last rows are on different processes?

   How large will the matrices get?

   Are the matrices symmetric?




> 
> I understand that MatGetRow, MatGetValues, and similar MatGet-/VecGet-
> routines only return locally owned values and are not collective. Can
> you suggest an efficient algorithm or function to achieve this kind of
> patching?
> 
> One way I can think of is to obtain the last column with MatGetColumnVector
> and the last row with MatGetRow, and then scatter these vectors to all
> processes. Once every process has the entire row and column entries, we can
> add the values into the matrix by their global indices. Of course, care must
> be taken to add the value of the diagonal element only once. But this will
> be quite slow.
> Any ideas are appreciated. 
>  
> Thanks,
> Sathwik Bharadwaj


