[petsc-users] Newbie question : sequential vs. parallel and matrix creation

Barry Smith bsmith at mcs.anl.gov
Sat Feb 18 17:46:49 CST 2017


> On Feb 18, 2017, at 4:58 PM, lixin chu <lixin_chu at yahoo.com> wrote:
> 
> Hello,
> 
> Some newbie questions I have wrt matrix creation, thank you for any help:
> 
> 1. Is it correct to say that a sequential matrix is created in one process (for example, the root process), and then distributed to all processes with MatAssemblyBegin and MatAssemblyEnd? Or does a sequential matrix only work when running with a single MPI process?

   No, a sequential matrix lives on a single process. Process 0 can have a sequential matrix, process 1 can have a different sequential matrix, etc.
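   For example, here is a minimal sketch (the sizes and preallocation count are just illustrative) where every rank creates its own independent sequential matrix:

      #include <petscmat.h>

      int main(int argc,char **argv)
      {
        Mat            A;
        PetscErrorCode ierr;

        ierr = PetscInitialize(&argc,&argv,NULL,NULL);if (ierr) return ierr;
        /* PETSC_COMM_SELF: the matrix exists only on the calling process */
        ierr = MatCreateSeqAIJ(PETSC_COMM_SELF,10,10,3,NULL,&A);CHKERRQ(ierr);
        /* ... MatSetValues()/MatAssemblyBegin()/MatAssemblyEnd() as usual ... */
        ierr = MatDestroy(&A);CHKERRQ(ierr);
        ierr = PetscFinalize();
        return ierr;
      }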

> 
> 2. For a parallel matrix creation, each process will set its values, so I need to provide the data for each process on every machine?

   For a parallel matrix any process in that matrix's communicator can set any values into the matrix. For good performance each process should set mostly the values owned by that process.
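   The usual pattern looks like the sketch below (N and the diagonal entry are placeholders): each process asks which rows it owns and inserts only those.

      Mat            A;
      PetscInt       rstart,rend,i,N = 100;
      PetscScalar    v = 2.0;
      PetscErrorCode ierr;

      ierr = MatCreate(PETSC_COMM_WORLD,&A);CHKERRQ(ierr);
      ierr = MatSetSizes(A,PETSC_DECIDE,PETSC_DECIDE,N,N);CHKERRQ(ierr);
      ierr = MatSetFromOptions(A);CHKERRQ(ierr);
      ierr = MatSetUp(A);CHKERRQ(ierr);

      /* each process only touches the rows it owns */
      ierr = MatGetOwnershipRange(A,&rstart,&rend);CHKERRQ(ierr);
      for (i = rstart; i < rend; i++) {
        ierr = MatSetValues(A,1,&i,1,&i,&v,INSERT_VALUES);CHKERRQ(ierr);
      }
      ierr = MatAssemblyBegin(A,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
      ierr = MatAssemblyEnd(A,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);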

> 
> 3. MatSetValues(Mat mat,PetscInt m,const PetscInt idxm[],PetscInt n,const PetscInt idxn[],const PetscScalar v[],InsertMode addv)
>   According to the manual page #59 : This routine inserts or adds a logically dense subblock of dimension m*n into the matrix ...
> 
>     I am not sure whether extracting the non-zero elements and forming a 'dense block' of data from a large sparse matrix is efficient. My original matrix data is column major. I am thinking of creating and loading the matrix column by column, with n = 1 and MAT_ROW_ORIENTED = FALSE. Is that efficient?

   I am guessing you are currently creating a sparse matrix in some sparse data format (compressed sparse column) on one process and want to know how to reuse that data format within PETSc?

    The answer is that you don't want to do that. You want to throw away your old sparse matrix data format, keep only the code that generates your sparse matrix entries, and call MatSetValues() directly to put those entries into a PETSc Mat. What type of discretization are you using? Finite element, finite difference? Something else? Many PETSc examples show how to use MatSetValues(). You don't call MatSetValues() once with the entire matrix; you call it with the small numbers of matrix entries that are natural for your discretization. For finite elements you call it one element at a time; for finite differences you usually call it one row of the matrix at a time.
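    Continuing the sketch above (same A; the 1D Laplacian stencil is used purely for illustration and skips the boundary rows), the row-at-a-time pattern looks like:

      PetscInt    cols[3];
      PetscScalar vals[3] = {-1.0,2.0,-1.0};

      ierr = MatGetOwnershipRange(A,&rstart,&rend);CHKERRQ(ierr);
      for (i = rstart; i < rend; i++) {
        if (i == 0 || i == N-1) continue;      /* boundary rows set elsewhere */
        cols[0] = i-1; cols[1] = i; cols[2] = i+1;
        /* one row per call; never the whole matrix at once */
        ierr = MatSetValues(A,1,&i,3,cols,vals,INSERT_VALUES);CHKERRQ(ierr);
      }
      ierr = MatAssemblyBegin(A,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
      ierr = MatAssemblyEnd(A,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);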

>     I think I need to pre-allocate memory, but the API for a parallel matrix, MatMPIAIJSetPreallocation(), requires the nonzero counts for the DIAGONAL portion and the OFF-DIAGONAL portion separately. This seems to add more work when converting my sparse matrix to PETSc format...

   Yes, there is some work in setting the preallocation.
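   As a rough sketch for the tridiagonal example above (the counts are illustrative; the array arguments d_nnz/o_nnz give exact per-row counts instead), the preallocation calls go after MatSetFromOptions() and in place of MatSetUp():

      /* at most 3 nonzeros per row in the diagonal block; at most 1 in the
         off-diagonal block (the coupling to a neighbouring process) */
      ierr = MatSeqAIJSetPreallocation(A,3,NULL);CHKERRQ(ierr);
      ierr = MatMPIAIJSetPreallocation(A,3,NULL,1,NULL);CHKERRQ(ierr);

   It is safe to call both; only the one matching the actual matrix type takes effect, so the same code works on one process or many.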
>  
> 4. Ideally, I would like to load the matrix data in the main process only, then distribute it to all other processes. What is the best way to do this?

   MatLoad() is exactly what you want for this; see the answer to question 5 below.
>     
> 
> 5. MatView and MatLoad
>     MatView seems to create one file with data for all processes (I only tested with one machine) :
>         Vec Object: 4 MPI processes
>         type: mpi
>         Process [0]
>         0.
>         Process [1]
>         1.
>         Process [2]
>         2.
>         Process [3]
>         3.
> 
>     So do I have to manually distribute this file to all machines ?

   With MatLoad() only the zeroth process in the MPI_Comm needs access to the file.
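   A sketch of the round trip (the file name matrix.dat is just an example): write the assembled matrix with a binary viewer, then load it in the parallel run, where MatLoad() distributes the rows across the processes in the communicator.

      PetscViewer viewer;
      Mat         B;

      /* write side */
      ierr = PetscViewerBinaryOpen(PETSC_COMM_WORLD,"matrix.dat",FILE_MODE_WRITE,&viewer);CHKERRQ(ierr);
      ierr = MatView(A,viewer);CHKERRQ(ierr);
      ierr = PetscViewerDestroy(&viewer);CHKERRQ(ierr);

      /* read side: only rank 0 needs to be able to read the file */
      ierr = PetscViewerBinaryOpen(PETSC_COMM_WORLD,"matrix.dat",FILE_MODE_READ,&viewer);CHKERRQ(ierr);
      ierr = MatCreate(PETSC_COMM_WORLD,&B);CHKERRQ(ierr);
      ierr = MatSetType(B,MATAIJ);CHKERRQ(ierr);
      ierr = MatLoad(B,viewer);CHKERRQ(ierr);
      ierr = PetscViewerDestroy(&viewer);CHKERRQ(ierr);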

> 
> 
> Many thanks again !    
>    
>  
> rgds
> lixin


