[petsc-users] Assembling a finite element matrix in parallel

Matthew Knepley knepley at gmail.com
Sun Apr 27 06:54:55 CDT 2014


On Sun, Apr 27, 2014 at 6:25 AM, Justin Dong <jsd1 at rice.edu> wrote:

> Hi Matt,
>
> For option 1), that would be using MPI and not any special functions in
> PETSc? I ask since I've never done parallel programming before. I tried
> using OpenMP to parallelize that for loop, but it resulted in some errors
> that I didn't investigate further; I'm assuming it's because each process
> wasn't communicating properly with the others during MatSetValues?
>

Yes, using MPI, so each process owns a set of elements that it loops over.
The Mat object manages the global values as long as you use global indices
for the (row, column). There are of course many refinements for this, but I
think the thing to do is get something working fast, and then optimize it.
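
A minimal sketch of what that split loop could look like, assuming an even
division of elements across the ranks; get_index(), A_local, ie, je, nlocal,
and nelements are the names from the pseudocode below, and the
compute_local_matrix() signature here is only a placeholder:

  PetscMPIInt rank, size;
  PetscInt    k, kstart, kend, chunk, leftover;

  MPI_Comm_rank(PETSC_COMM_WORLD, &rank);
  MPI_Comm_size(PETSC_COMM_WORLD, &size);

  /* one possible choice: split the elements evenly over the processes */
  chunk    = nelements / size;
  leftover = nelements % size;
  kstart   = rank * chunk + PetscMin(rank, leftover);
  kend     = kstart + chunk + (rank < leftover ? 1 : 0);

  for (k = kstart; k < kend; ++k) {
    get_index(k, ie, je);             /* global (row, column) indices */
    compute_local_matrix(k, A_local); /* placeholder signature */
    MatSetValues(A_global, nlocal, ie, nlocal, je, A_local, ADD_VALUES);
  }

  /* values destined for rows owned by other ranks are communicated here */
  MatAssemblyBegin(A_global, MAT_FINAL_ASSEMBLY);
  MatAssemblyEnd(A_global, MAT_FINAL_ASSEMBLY);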

  Thanks,

     Matt


> Sincerely,
> Justin
>
>
> On Sun, Apr 27, 2014 at 5:57 AM, Matthew Knepley <knepley at gmail.com> wrote:
>
>> On Sun, Apr 27, 2014 at 4:14 AM, Justin Dong <jsd1 at rice.edu> wrote:
>>
>>> Hi all,
>>>
>>> I'm currently coding a finite element problem in PETSc. I'm computing
>>> all of the matrices by myself and would prefer to do it that way. I want to
>>> parallelize the assembly of the global matrices. This is a simplified
>>> version of the assembly routine (pseudocode):
>>>
>>> for (k = 0; k < nelements; ++k)
>>> {
>>>     /* ie and je are arrays that indicate where to place A_local */
>>>     get_index(k, ie, je);
>>>     compute_local_matrix(A_local, ...);
>>>
>>>     MatSetValues(A_global, nlocal, ie, nlocal, je, A_local, ADD_VALUES);
>>> }
>>>
>>> This is for DG finite elements, and I know the matrix can easily be
>>> assembled in parallel. Even on my laptop, this would allow me to
>>> significantly speed up my code. The only problem is, I'm not sure how to
>>> do this in PETSc. I'd assume I need to use MatCreateMPIAIJ instead of
>>> MatCreateSeqAIJ, but I wasn't able to find any tutorials on how I might
>>> do this.
>>>
>>
>> 1) If you just split this loop across processes, it would work
>>    immediately. However, that can be non-optimal in terms of
>>    communication.
>>
>> 2) PETSc matrices are distributed by contiguous blocks of rows (see the
>>    manual section on matrices), so you would like those row blocks to
>>    correspond roughly to your element blocks.
>>
>> 3) You will also have to preallocate, but that should be the same as you
>>    do now for the sequential case, except you check whether a column is
>>    inside the diagonal block (see the sketch after this list).
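
A minimal sketch of points 2) and 3): size the row block to the locally
owned elements, and preallocate by checking whether each column falls inside
the diagonal block [rstart, rend). The names m_local and
count_row_couplings() are placeholders, not from the thread, and in recent
PETSc releases MatCreateMPIAIJ is spelled MatCreateAIJ:

  #include <petscmat.h>

  /* Sketch: m_local is the number of rows this rank owns (set from its local
     element count); count_row_couplings() is a placeholder that fills the
     preallocation arrays from the mesh connectivity. */
  PetscErrorCode create_global_matrix(PetscInt m_local, Mat *A_global)
  {
    PetscInt  rstart, rend;
    PetscInt *d_nnz, *o_nnz;  /* per-row nonzeros: diagonal / off-diagonal block */

    /* first owned row = sum of m_local over the lower ranks */
    MPI_Scan(&m_local, &rend, 1, MPIU_INT, MPI_SUM, PETSC_COMM_WORLD);
    rstart = rend - m_local;

    PetscMalloc(m_local * sizeof(PetscInt), &d_nnz);
    PetscMalloc(m_local * sizeof(PetscInt), &o_nnz);
    /* for each owned row i and each column j it couples to:
         rstart <= j < rend  ->  d_nnz[i - rstart]++   (diagonal block)
         otherwise           ->  o_nnz[i - rstart]++   (off-diagonal block) */
    count_row_couplings(rstart, rend, d_nnz, o_nnz);  /* placeholder */

    MatCreateAIJ(PETSC_COMM_WORLD, m_local, m_local, PETSC_DETERMINE,
                 PETSC_DETERMINE, 0, d_nnz, 0, o_nnz, A_global);

    PetscFree(d_nnz);
    PetscFree(o_nnz);
    return 0;
  }

Once the matrix has been created this way, MatGetOwnershipRange() reports the
same [rstart, rend), so the assembly loop can check ownership against it
directly.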
>>
>>   Thanks,
>>
>>       Matt
>>
>>
>>> Sincerely,
>>> Justin
>>>
>>
>>
>>
>> --
>> What most experimenters take for granted before they begin their
>> experiments is infinitely more interesting than any results to which their
>> experiments lead.
>> -- Norbert Wiener
>>
>
>


-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener