CSR Sparse Matrix Query

Dr. Timothy Stitt timothy.stitt at ichec.ie
Tue Apr 24 08:07:26 CDT 2007


Thanks, Barry... I am now becoming overwhelmed by the number of possible 
approaches. Incidentally, my target platform is a large shared-memory 
machine (256 GB RAM, 32 processors), so I am not overly concerned right 
now with the matrix storage size; I just want to reduce my time to 
completion. As I scale up, though, I will need to think about distributing 
the data, but a quick fix, as you say, is ideal for me now.

Thanks for that.

Tim.

Barry Smith wrote:
>    In petsc-dev (http://www-unix.mcs.anl.gov/petsc/petsc-as/developers/index.html)
> there is code that allows creating a matrix/linear system on one process and
> solving it in parallel; see the PCType PCOPENMP:
> http://www-unix.mcs.anl.gov/petsc/petsc-as/snapshots/petsc-dev/docs/manualpages/PC/PCOPENMP.html
>
> This is a quick-and-dirty stop-gap way of getting a parallel solve from a
> code that is not itself parallel. It is not a long-term solution, but it can
> get good speedup for several processes.
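>
> For illustration, a minimal Fortran sketch of selecting this preconditioner
> (a sketch only: it assumes the matrix A and vectors b, x are already set
> up, and that the runtime option name follows the usual lower-case PCType
> convention):
>
> ----------------------------------------------------------------
> ! assumes declarations: KSP ksp; PC pc; PetscErrorCode ierr
> call KSPCreate(PETSC_COMM_WORLD, ksp, ierr)
> call KSPSetOperators(ksp, A, A, SAME_NONZERO_PATTERN, ierr)
> call KSPGetPC(ksp, pc, ierr)
> call PCSetType(pc, PCOPENMP, ierr)   ! or -pc_type openmp at run time
> call KSPSetFromOptions(ksp, ierr)
> call KSPSolve(ksp, b, x, ierr)
> ----------------------------------------------------------------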
>
>    Barry
>
>  There is also a low-level routine, MatDistribute_MPIAIJ(), that takes a
> seqaij matrix and distributes it to parallel form, but PCOPENMP is much
> easier to use.
>
>
>
> On Tue, 24 Apr 2007, Dr. Timothy Stitt wrote:
>
>   
>> Matt,
>>
>> Thanks for the reply. I was hoping that there was some sort of automatic
>> distribution from the global operator matrix to a distributed form (in a
>> single call), but obviously there is more to be done. I will go back and
>> follow Randy's suggestions now that I know the conversion from global to
>> distributed matrices isn't as straightforward as I had hoped.
>>
>> Regards,
>>
>> Tim.
>>
>> Matthew Knepley wrote:
>>     
>>> On 4/24/07, Dr. Timothy Stitt <timothy.stitt at ichec.ie> wrote:
>>>       
>>>> Hi everyone... again.
>>>>
>>>> Thanks to the great support, I now have my solver working (in
>>>> Fortran), albeit as a sequential, non-distributed version. My plan was
>>>> to substitute the call to MatCreateSeqAIJWithArrays() with a call to
>>>> MatCreateMPIAIJWithArrays() (along with suitable modification of the
>>>> arguments) to obtain a parallel version.
>>>>
>>>> Anyhow, I have unfortunately come upon a few problems. First, here are
>>>> the code segments I am trying to update (my sparse matrix is square,
>>>> with global dimensions [rows, rows]; size is returned by MPI_COMM_SIZE()):
>>>>
>>>>
>>>> ----------------------------------------------------------------
>>>> call MatCreateSeqAIJWithArrays(PETSC_COMM_WORLD,rows,rows,&
>>>>         csrRowPointers,csrColumnIndices,csrElements,A,ierr)
>>>> ----------------------------------------------------------------
>>>>
>>>> TO
>>>>
>>>> ----------------------------------------------------------------
>>>> call MatCreateMPIAIJWithArrays(PETSC_COMM_WORLD,rows/size,rows/size,&
>>>>         rows,rows,csrRowPointers,csrColumnIndices,csrElements,A,ierr)
>>>> ----------------------------------------------------------------
>>>>
>>>> 1. When I compile the code, I keep getting:
>>>>
>>>>  solver.o(.text+0x202): In function `MAIN__':
>>>> : undefined reference to `matcreatempiaijwitharrays_'
>>>>         
>>> There is no Fortran interface for this function (since no one had ever
>>> asked for one).
>>>
>>>       
>>>> 2. Do the arguments to the routine look correct? I would prefer that
>>>> PETSc determine as many of the arguments as possible, but I wasn't sure
>>>> which ones to supply explicitly and which to let PETSc infer.
>>>>         
>>> No, I think you are going to have some problems here. This function is
>>> intended for people who already have a parallel CSR matrix constructed.
>>> From your discussion above, this does not appear to be the case. I see
>>> several pitfalls (a sketch for handling (a) follows the list):
>>>
>>>  a) size may not evenly divide rows
>>>
>>>  b) the arrays must contain only the rows local to the process, but must
>>>     use global indices
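>>>
>>> For pitfall (a), a minimal sketch of a remainder-aware local row count,
>>> computed by hand (rank comes from MPI_COMM_RANK; alternatively, pass
>>> PETSC_DECIDE for the local sizes and let PETSc choose the split):
>>>
>>> ----------------------------------------------------------------
>>> ! the first mod(rows,size) ranks each take one extra row
>>> mlocal = rows/size
>>> if (rank < mod(rows,size)) mlocal = mlocal + 1
>>> ----------------------------------------------------------------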
>>>
>>> I would encourage you to follow the steps that Randy outlined: create
>>> the matrix (letting PETSc partition the rows), preallocate the storage
>>> by determining the local row lengths, and then set the values with
>>> MatSetValues for each row.
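>>>
>>> For illustration, a minimal Fortran sketch of those steps (a sketch
>>> only: it assumes 0-based CSR arrays, as in Satish's example below,
>>> stored in 1-based Fortran arrays; that each process can access the rows
>>> PETSc assigns to it; and that nnzPerRow is a hypothetical local array
>>> of per-row nonzero counts):
>>>
>>> ----------------------------------------------------------------
>>> call MatCreate(PETSC_COMM_WORLD, A, ierr)
>>> call MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, rows, rows, ierr)
>>> call MatSetType(A, MATMPIAIJ, ierr)
>>> ! over-estimate: the full row length is used for both the diagonal
>>> ! and off-diagonal blocks; exact per-block counts preallocate tighter
>>> call MatMPIAIJSetPreallocation(A, 0, nnzPerRow, 0, nnzPerRow, ierr)
>>> call MatGetOwnershipRange(A, Istart, Iend, ierr)
>>> do row = Istart, Iend-1
>>>    ! global row 'row' occupies entries csrRowPointers(row+1) ..
>>>    ! csrRowPointers(row+2)-1 of the (0-based) CSR arrays
>>>    ncols = csrRowPointers(row+2) - csrRowPointers(row+1)
>>>    call MatSetValues(A, 1, row, ncols,                          &
>>>         csrColumnIndices(csrRowPointers(row+1)+1),              &
>>>         csrElements(csrRowPointers(row+1)+1),                   &
>>>         INSERT_VALUES, ierr)
>>> end do
>>> call MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY, ierr)
>>> call MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY, ierr)
>>> ----------------------------------------------------------------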
>>>
>>>  Thanks,
>>>
>>>     Matt
>>>
>>>       
>>>> 3. When constructing the operator matrix, is it OK to build it only on
>>>> the process with rank 0, or does it have to be duplicated across all
>>>> processes before the call to the MatCreateMPIxxxxx() routine? The
>>>> matrix is very large and hence wouldn't be suitable for replication.
>>>>
>>>> I hope this is the last of my worries. Thanks again for your support and
>>>> patience.
>>>>
>>>> Regards,
>>>>
>>>> Tim.
>>>>
>>>>
>>>>
>>>> Satish Balay wrote:
>>>>         
>>>>> On Mon, 23 Apr 2007, Satish Balay wrote:
>>>>>
>>>>>
>>>>>           
>>>>>> On Mon, 23 Apr 2007, Dr. Timothy Stitt wrote:
>>>>>>
>>>>>>
>>>>>>             
>>>>>>> I am using Fortran. Do I have to perform some operation to switch
>>>>>>> from column-major to row-major representation before calling the
>>>>>>> MatCreateSeqAIJWithArrays routine?
>>>>>>>
>>>>>>>               
>>>>>> MatCreateSeqAIJWithArrays expects a 1-D array for the values (and
>>>>>> for the row and column indices as well), so the issue of row-major
>>>>>> vs. column-major does not exist.
>>>>>>
>>>>>>             
>>>>> However, the CSR format used for the input is equivalent to a
>>>>> row-major ordering, i.e., for the following matrix:
>>>>>
>>>>> 1 0 0
>>>>> 2 0 3
>>>>> 0 4 5
>>>>>
>>>>> The call expects the following
>>>>>
>>>>> i =  {0,1,3,5}  [size = nrow+1  = 3+1]
>>>>> j =  {0,0,2,1,2}  [size = nz = 5]
>>>>> v =  {1,2,3,4,5}  [size = nz = 5]
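>>>>>
>>>>> In Fortran, a minimal sketch of feeding exactly these arrays to the
>>>>> sequential constructor (a sketch only: the index values stay 0-based
>>>>> even though the Fortran arrays themselves are declared 1-based):
>>>>>
>>>>> ----------------------------------------------------------------
>>>>> PetscInt       i(4), j(5)
>>>>> PetscScalar    v(5)
>>>>> Mat            A
>>>>> PetscErrorCode ierr
>>>>> data i /0,1,3,5/
>>>>> data j /0,0,2,1,2/
>>>>> data v /1.0,2.0,3.0,4.0,5.0/
>>>>> ! the Seq variant needs a one-process communicator
>>>>> call MatCreateSeqAIJWithArrays(PETSC_COMM_SELF, 3, 3, i, j, v, A, ierr)
>>>>> ----------------------------------------------------------------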
>>>>>
>>>>> Satish
>>>>>
>>>>>
>>>>>           
>>>> -- 
>>>> Dr. Timothy Stitt <timothy_dot_stitt_at_ichec.ie>
>>>> HPC Application Consultant - ICHEC (www.ichec.ie)
>>>>
>>>> Dublin Institute for Advanced Studies
>>>> 5 Merrion Square - Dublin 2 - Ireland
>>>>
>>>> +353-1-6621333 (tel) / +353-1-6621477 (fax) / +353-874195427 (mobile)
>>>>
>>>>
>>>>         
>>>       
>>
>>     
>
>   


-- 
Dr. Timothy Stitt <timothy_dot_stitt_at_ichec.ie>
HPC Application Consultant - ICHEC (www.ichec.ie)

Dublin Institute for Advanced Studies
5 Merrion Square - Dublin 2 - Ireland

+353-1-6621333 (tel) / +353-1-6621477 (fax) / +353-874195427 (mobile)



