[petsc-users] Assembling a finite element matrix in parallel

Justin Dong jsd1 at rice.edu
Sun Apr 27 06:25:56 CDT 2014


Hi Matt,

For option 1), that would be using MPI and not any special functions in
PETSc? I ask since I've never done parallel programming before. I tried
using OpenMP to parallelize that for loop, but it resulted in some errors
that I didn't investigate further; I'm assuming that's because the threads
weren't coordinating properly with one another during MatSetValues?

Sincerely,
Justin


On Sun, Apr 27, 2014 at 5:57 AM, Matthew Knepley <knepley at gmail.com> wrote:

> On Sun, Apr 27, 2014 at 4:14 AM, Justin Dong <jsd1 at rice.edu> wrote:
>
>> Hi all,
>>
>> I'm currently coding a finite element problem in PETSc. I'm computing all
>> of the matrices by myself and would prefer to do it that way. I want to
>> parallelize the assembly of the global matrices. This is a simplified
>> version of the assembly routine (pseudocode):
>>
>> for (k = 0; k < nelements; ++k)
>> {
>>     get_index(k,ie,je); /* ie and je are arrays that indicate where to
>> place A_local */
>>     compute_local_matrix(A_local,...);
>>
>>     MatSetValues(A_global, nlocal, ie, nlocal, je, A_local, ADD_VALUES);
>> }
>>
>> This is for DG finite elements, and I know the matrix can easily be
>> assembled in parallel. Even on my laptop, this would allow me to
>> significantly speed up my code. The only problem is, I'm not sure how to
>> do this in PETSc. I'd assume I need to use MatCreateMPIAIJ instead of
>> MatCreateSeqAIJ, but wasn't able to find any tutorials on how I might do
>> this.
>>
>
> 1) If you just split this loop across processes, it would work
> immediately. However, that can be non-optimal in terms of communication.
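>
> For instance, a minimal sketch of that naive split (kstart and kend are
> illustrative names I am introducing; get_index(), compute_local_matrix(),
> and A_global come from your pseudocode above):
>
>   PetscMPIInt rank, size;
>   MPI_Comm_rank(PETSC_COMM_WORLD, &rank);
>   MPI_Comm_size(PETSC_COMM_WORLD, &size);
>
>   /* give each rank a contiguous chunk of the element loop */
>   PetscInt kstart = (nelements * rank) / size;
>   PetscInt kend   = (nelements * (rank + 1)) / size;
>
>   for (PetscInt k = kstart; k < kend; ++k) {
>     get_index(k, ie, je);
>     compute_local_matrix(A_local, ...);
>     MatSetValues(A_global, nlocal, ie, nlocal, je, A_local, ADD_VALUES);
>   }
>
>   /* contributions to rows owned by other ranks are communicated here */
>   MatAssemblyBegin(A_global, MAT_FINAL_ASSEMBLY);
>   MatAssemblyEnd(A_global, MAT_FINAL_ASSEMBLY);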
>
> 2) PETSc matrices are distributed by contiguous blocks of rows (see the
> manual section on matrices), so you would like those row blocks to
> correspond roughly to your element blocks.
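>
> Concretely, something like the following sketch (nlocal_elements and
> ndof_per_elem are made-up names for the element count you assign to this
> rank and the rows each element contributes):
>
>   Mat A_global;
>   PetscInt nlocal_rows = nlocal_elements * ndof_per_elem;
>
>   MatCreate(PETSC_COMM_WORLD, &A_global);
>   MatSetSizes(A_global, nlocal_rows, nlocal_rows, PETSC_DETERMINE,
>               PETSC_DETERMINE);
>   MatSetType(A_global, MATMPIAIJ);
>
>   /* the contiguous block [rstart, rend) of global rows this rank owns */
>   PetscInt rstart, rend;
>   MatGetOwnershipRange(A_global, &rstart, &rend);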
>
> 3) You will also have to preallocate, but that should be the same as you
> do now for the sequential case, except you check whether a column is
> inside the diagonal block.
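>
> For example, a sketch of that counting, reusing rstart/rend from the
> MatGetOwnershipRange() call above (get_row_cols() and the bound of 64
> couplings per row are hypothetical stand-ins for however you enumerate a
> row's nonzeros in your sequential code):
>
>   PetscInt *d_nnz, *o_nnz, ncols, cols[64];
>   PetscMalloc2(nlocal_rows, &d_nnz, nlocal_rows, &o_nnz);
>
>   for (PetscInt i = 0; i < nlocal_rows; ++i) {
>     d_nnz[i] = 0;
>     o_nnz[i] = 0;
>     get_row_cols(rstart + i, &ncols, cols); /* global columns of row i */
>     for (PetscInt c = 0; c < ncols; ++c) {
>       /* is this column inside the diagonal block? */
>       if (cols[c] >= rstart && cols[c] < rend) d_nnz[i]++;
>       else                                     o_nnz[i]++;
>     }
>   }
>
>   MatMPIAIJSetPreallocation(A_global, 0, d_nnz, 0, o_nnz);
>   PetscFree2(d_nnz, o_nnz);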
>
>   Thanks,
>
>       Matt
>
>
>> Sincerely,
>> Justin
>>
>
>
>
> --
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which their
> experiments lead.
> -- Norbert Wiener
>