[petsc-users] Newbie question : sequential vs. parallel and matrix creation
lixin chu
lixin_chu at yahoo.com
Sat Feb 18 19:06:40 CST 2017
Super, Barry!
Sent from Yahoo Mail on Android
On Sun, 19 Feb 2017 at 9:04, Barry Smith <bsmith at mcs.anl.gov> wrote:
> On Feb 18, 2017, at 6:55 PM, lixin chu <lixin_chu at yahoo.com> wrote:
>
> Thank you again for your super responsive reply, Barry !
>
> The matrix data I have is a file from NASTRAN, produced with a DMAP command; it is column-major, in one single file.
Write a __sequential__ program that reads in the file and then calls MatView() to write a binary file; then, in the "real" parallel program (a completely different code than the one used to convert from NASTRAN format to PETSc binary format), use MatLoad() to load the binary file in parallel. Never, never, never write a parallel program that tries to read in a NASTRAN data file!
See src/mat/examples/tests/ex32.c or ex78.c for examples of programs that read in a sequential matrix and store it with MatView.
Yes, you can MatView() a sequential matrix and then read it in parallel with MatLoad().
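A minimal sketch of such a converter (a rough outline rather than working code for your file: error checking is omitted, the sizes and the per-row nonzero estimate are placeholders, "matrix.dat" is an assumed output name, and the NASTRAN reading is left as a comment):

#include <petscmat.h>

int main(int argc, char **argv)
{
  Mat         A;
  PetscViewer viewer;
  PetscInt    M = 1000, N = 1000;          /* placeholder global sizes */

  PetscInitialize(&argc, &argv, NULL, NULL);

  MatCreate(PETSC_COMM_SELF, &A);          /* sequential matrix: PETSC_COMM_SELF */
  MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, M, N);
  MatSetType(A, MATSEQAIJ);
  MatSeqAIJSetPreallocation(A, 10, NULL);  /* rough estimate of nonzeros per row */

  /* ... read the NASTRAN/DMAP file here and insert its entries with MatSetValues() ... */

  MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);
  MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);

  PetscViewerBinaryOpen(PETSC_COMM_SELF, "matrix.dat", FILE_MODE_WRITE, &viewer);
  MatView(A, viewer);                      /* writes the PETSc binary format */
  PetscViewerDestroy(&viewer);

  MatDestroy(&A);
  PetscFinalize();
  return 0;
}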
Barry
>
> So I think MatView/MatLoad will be the best for my case. I need to develop a program for this conversion. The program will create a parallel matrix. But since the data is only available on the root process, it might take a while to populate all the values on the root process and distribute them to all processes before MatView can create the file.
>
> I do not think I can create a sequential matrix (MatView would then create a file for one process?) and MatLoad it into a parallel matrix?
>
> Really appreciate your help
> LX
>
> Sent from Yahoo Mail on Android
>
> On Sun, 19 Feb 2017 at 7:46, Barry Smith
> <bsmith at mcs.anl.gov> wrote:
>
> > On Feb 18, 2017, at 4:58 PM, lixin chu <lixin_chu at yahoo.com> wrote:
> >
> > Hello,
> >
> > Some newbie questions I have regarding matrix creation; thank you for any help:
> >
> > 1. Is it correct to say that a sequential matrix is created in one process (for example, the root process), and then distributed to all processes with MatAssemblyBegin and MatAssemblyEnd? Or does a sequential matrix only work when running with a single MPI process?
>
> No, a sequential matrix lives on a single process. Process 0 can have a sequential matrix, process 1 can have a different sequential matrix, etc.
>
> >
> > 2. For parallel matrix creation, each process will set its own values, so do I need to provide the data for each process on all the machines?
>
> For a parallel matrix any process in that matrix's communicator can set any values into the matrix. For good performance each process should set mostly the values owned by that process.
>
> >
> > 3. MatSetValues(Mat mat,PetscInt m,const PetscInt idxm[],PetscInt n,const PetscInt idxn[],const PetscScalar v[],InsertMode addv)
> > According to the manual page #59: This routine inserts or adds a logically dense subblock of dimension m*n into the matrix ...
> >
> > I am not sure if extracting the non-zero elements and forming a 'dense block' of data from a large sparse matrix is efficient. My original matrix data is column-major. I am thinking of creating and loading the matrix column by column, with n = 1 and MAT_ROW_ORIENTED set to FALSE. Is that efficient?
>
> I am guessing you are currently creating a sparse matrix, using some sparse matrix data format (compressed sparse column) on one process, and you want to know how to reuse this sparse matrix data format within PETSc?
>
> The answer is you don't want to do that. You want to throw away your old sparse matrix data format and just keep the code that generates your sparse matrix entries and call MatSetValues() directly to put these entries into a PETSc Mat. What type of discretization are you using? Finite element, finite difference? Something else? Many PETSc examples show you how to use MatSetValues(). You don't call MatSetValues() once with the entire matrix; you call it with small numbers of matrix entries that are natural for your discretization. For example, for finite elements you call it one element at a time, and for finite differences you usually call it for one row of the matrix at a time.
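> For example, here is a minimal sketch of setting a parallel AIJ matrix one locally owned row at a time (a 1d Laplacian stencil stands in for your entries; A is assumed to be already created and preallocated, and N is its global size):
>
>   PetscInt    i, rstart, rend, N, ncols;
>   PetscInt    cols[3];
>   PetscScalar vals[3];
>
>   MatGetSize(A, &N, NULL);
>   MatGetOwnershipRange(A, &rstart, &rend);      /* rows this process owns */
>   for (i = rstart; i < rend; i++) {
>     ncols = 0;
>     if (i > 0)     { cols[ncols] = i - 1; vals[ncols] = -1.0; ncols++; }
>     cols[ncols] = i; vals[ncols] = 2.0; ncols++;
>     if (i < N - 1) { cols[ncols] = i + 1; vals[ncols] = -1.0; ncols++; }
>     MatSetValues(A, 1, &i, ncols, cols, vals, INSERT_VALUES);  /* one row at a time */
>   }
>   MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);
>   MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);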
>
> > I think I need to pre-allocate memory, but the API for the parallel matrix, MatMPIAIJSetPreallocation(), requires the nonzero info for the DIAGONAL portion and the OFF-DIAGONAL portion separately. This seems to add more work when converting my sparse matrix to PETSc format...
>
> Yes, there is some work in setting the preallocation.
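> As a rough sketch of how the counting might look for a square MPIAIJ matrix whose column layout matches its row layout (d_nnz/o_nnz are per-row counts of nonzeros whose column lies inside/outside the locally owned range; the global size N is a placeholder and must match what you pass to MatSetSizes(); PetscSplitOwnership() reproduces PETSc's default row split):
>
>   PetscInt  N = 1000, n = PETSC_DECIDE, rstart, rend;
>   PetscInt  *d_nnz, *o_nnz;
>
>   PetscSplitOwnership(PETSC_COMM_WORLD, &n, &N);
>   MPI_Scan(&n, &rend, 1, MPIU_INT, MPI_SUM, PETSC_COMM_WORLD);
>   rstart = rend - n;                      /* first locally owned row */
>   PetscCalloc1(n, &d_nnz);
>   PetscCalloc1(n, &o_nnz);
>   /* for every nonzero (row, col) with rstart <= row < rend:
>        if (col >= rstart && col < rend) d_nnz[row - rstart]++;
>        else                             o_nnz[row - rstart]++;        */
>   MatMPIAIJSetPreallocation(A, 0, d_nnz, 0, o_nnz);
>   PetscFree(d_nnz);
>   PetscFree(o_nnz);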
> >
> > 4. Ideally, I would like to load the matrix data in the main process only, then distribute it to all other processes. What is the best way to do this?
>
> MatLoad() is exactly what you want for this.
> >
> >
> > 5. MatView and MatLoad
> > MatView seems to create one file with data for all processes (I only tested with one machine):
> > Vec Object: 4 MPI processes
> > type: mpi
> > Process [0]
> > 0.
> > Process [1]
> > 1.
> > Process [2]
> > 2.
> > Process [3]
> > 3.
> >
> > So do I have to manually distribute this file to all machines?
>
> With MatLoad() only the zeroth process in the MPI_Comm needs access to the file.
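> A minimal sketch of the parallel load (every rank makes these calls, but only rank 0 actually reads the file; "matrix.dat" is the assumed name of the file written by MatView()):
>
>   Mat         A;
>   PetscViewer viewer;
>
>   PetscViewerBinaryOpen(PETSC_COMM_WORLD, "matrix.dat", FILE_MODE_READ, &viewer);
>   MatCreate(PETSC_COMM_WORLD, &A);
>   MatSetType(A, MATMPIAIJ);               /* or MatSetFromOptions(A) */
>   MatLoad(A, viewer);                     /* rows get distributed across the processes */
>   PetscViewerDestroy(&viewer);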
>
>
> >
> >
> > Many thanks again !
> >
> >
> > rgds
> > lixin