[petsc-users] Storage space for symmetric (SBAIJ) matrix

Daniel Langr daniel.langr at gmail.com
Tue Sep 21 13:44:12 CDT 2010


Barry,

we do not need and do not want to use PETSc for writing a matrix into a 
file. Such a file should be independent of any particular solver. That's 
why we want to use the HDF5 library with its parallel I/O capabilities. 
I can simply store the CSR (or COO, or any other scheme) arrays for the 
upper triangular part of a matrix in the file, along with some 
supporting information such as the number of rows and nonzeros. Then, if 
I want to solve a problem with PETSc/SLEPc, I need to load the matrix 
into it efficiently. With hundreds or thousands of nodes (possibly not 
the same ones used for the matrix construction), parallel I/O again 
becomes essential. In any case, the matrix data is expected to be much 
bigger than the memory of one node. We (or our physicists) need to 
exploit all the memory available, and there is no upper bound on their 
demands :).
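
For example, a collective write of the CSR arrays with the HDF5 C API 
could look roughly like the following sketch (the dataset names and the 
per-rank partitioning variables are placeholders, and error checking is 
omitted):

  #include <hdf5.h>
  #include <mpi.h>

  /* local CSR data: this rank owns a contiguous block of rows of the
     upper triangular part; my_nnz/my_nnz_offset describe its slice of
     the global col_ind/val arrays (placeholders, computed elsewhere) */
  hsize_t global_nnz, my_nnz, my_nnz_offset;
  double *val;

  hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);
  H5Pset_fapl_mpio(fapl, MPI_COMM_WORLD, MPI_INFO_NULL);
  hid_t file = H5Fcreate("matrix.h5", H5F_ACC_TRUNC, H5P_DEFAULT, fapl);

  hsize_t gdim = global_nnz;
  hid_t fspace = H5Screate_simple(1, &gdim, NULL);
  hid_t dset = H5Dcreate2(file, "/val", H5T_NATIVE_DOUBLE, fspace,
                          H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT);

  /* select this rank's slice of the global dataset ... */
  H5Sselect_hyperslab(fspace, H5S_SELECT_SET, &my_nnz_offset, NULL,
                      &my_nnz, NULL);
  hid_t mspace = H5Screate_simple(1, &my_nnz, NULL);

  /* ... and write it collectively through MPI-I/O */
  hid_t dxpl = H5Pcreate(H5P_DATASET_XFER);
  H5Pset_dxpl_mpio(dxpl, H5FD_MPIO_COLLECTIVE);
  H5Dwrite(dset, H5T_NATIVE_DOUBLE, mspace, fspace, dxpl, val);
  /* row_ptr and col_ind are written analogously */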

As Jed mentioned, we could write only the state to the file. But parallel 
I/O is also part of our project and research, which is why we bother :). 
Also, when we want to compare different solution methods, we would 
otherwise need to construct the matrix multiple times instead of just 
reading it from the file, which can be quicker.

Daniel



> On Sep 21, 2010, at 9:35 AM, Daniel Langr wrote:
>
>> Our preliminary idea is to construct a matrix with some legacy code, store the matrix into a file (as we would do anyway for checkpointing purposes)
>
>     Is this legacy code parallel? If not, then you could only create SeqSBAIJ matrices anyway, correct? So you don't need a parallel create-from-arrays, just a sequential one?
>
>     All the MatView() and MatLoad() work would be handled by PETSc, so you don't need to worry about reading in the matrix in parallel (we do it for you).
>
>     Barry
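
(The load side of that is only a few calls; a sketch with the current 
MatLoad() calling sequence, which has changed across PETSc versions, and 
a hypothetical file name:)

  Mat         A;
  PetscViewer viewer;

  /* every rank opens the same binary file; PETSc distributes the
     matrix across PETSC_COMM_WORLD during MatLoad() */
  PetscViewerBinaryOpen(PETSC_COMM_WORLD, "matrix.dat",
                        FILE_MODE_READ, &viewer);
  MatCreate(PETSC_COMM_WORLD, &A);
  MatSetType(A, MATSBAIJ);   /* symmetric block format */
  MatLoad(A, viewer);
  PetscViewerDestroy(&viewer);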
>
>
>> and then load it into a solver. We are free to choose the matrix storage scheme for the file, so we could prepare the data in the format of arrays to be loaded into PETSc. For binary I/O we are experimenting with the parallel HDF5 capabilities, using MPI-I/O underneath. (PETSc has an HDF5 viewer but, if I am not wrong, it does not use parallel I/O.) For really big problems, parallel I/O is a must for us.
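
(For the reading side, the main subtlety is that each rank must first 
read its slice of the row-pointer dataset to learn where its nonzero 
data lives; roughly, with the same hypothetical dataset names as in the 
write sketch above:)

  /* this rank owns rows [first_row, first_row + my_rows); it first
     reads my_rows + 1 entries of /row_ptr ... */
  hsize_t start = first_row, count = my_rows + 1;
  hid_t fspace = H5Dget_space(dset_rowptr);
  H5Sselect_hyperslab(fspace, H5S_SELECT_SET, &start, NULL, &count, NULL);
  hid_t mspace = H5Screate_simple(1, &count, NULL);
  H5Dread(dset_rowptr, H5T_NATIVE_LLONG, mspace, fspace, dxpl, row_ptr);

  /* ... which determines exactly which slice of /col_ind and /val to
     fetch with a second pair of collective H5Dread() calls */
  hsize_t nz_start = row_ptr[0];
  hsize_t nz_count = row_ptr[my_rows] - row_ptr[0];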
>>
>> We are solving a nuclear structure problem, specifically symmetry-adapted no-core shell model computations of nuclei. (I do not understand much of that kind of physics; my part is the eigensolver :).
>>
>> Daniel
>>
>>
>>
>>
>> On 21.9.2010 16:24, Jed Brown wrote:
>>> On Tue, Sep 21, 2010 at 16:20, Daniel Langr <daniel.langr at gmail.com> wrote:
>>>> thanks much for your comprehensive answer, it will certainly help. I will
>>>> look at the example codes. As for the matrix assembly process, I would
>>>> prefer constructing a matrix from arrays (to avoid dynamic assembly and
>>>> additional memory costs), but there is nothing like
>>>> MatCreateMPISBAIJWithArrays() or, better, MatCreateMPISBAIJWithSplitArrays()
>>>> for symmetric matrices, as there is for unsymmetric ones in PETSc.
>>>
>>> This would be easy to add, but how would you go about building the
>>> arrays yourself?  What sort of problems are you solving?
>>>
>>> Jed
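
(Absent such a convenience routine, one fallback is to preallocate and 
then feed the locally owned CSR rows in with MatSetValues(), one row at 
a time; a sketch, where the local array names are placeholders and only 
upper-triangular entries are present, as SBAIJ expects:)

  Mat A;
  /* row_ptr/col_ind/val: local CSR arrays for rows
     [first_row, first_row + my_rows); d_nnz/o_nnz: per-row counts of
     diagonal- and off-diagonal-block nonzeros (all placeholders) */
  MatCreate(PETSC_COMM_WORLD, &A);
  MatSetSizes(A, my_rows, my_rows, PETSC_DETERMINE, PETSC_DETERMINE);
  MatSetType(A, MATMPISBAIJ);
  MatMPISBAIJSetPreallocation(A, 1, 0, d_nnz, 0, o_nnz); /* bs = 1 */

  for (PetscInt i = 0; i < my_rows; i++) {
    PetscInt row   = first_row + i;
    PetscInt ncols = row_ptr[i + 1] - row_ptr[i];
    MatSetValues(A, 1, &row, ncols, &col_ind[row_ptr[i]],
                 &val[row_ptr[i]], INSERT_VALUES);
  }
  MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);
  MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);

(With exact preallocation the row copies are the only extra cost, though 
this of course still duplicates the arrays rather than wrapping them.)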

