Schur system + MatShell
tribur at vision.ee.ethz.ch
Thu Apr 24 16:32:08 CDT 2008
Dear all,
> On Tue, 22 Apr 2008, Matthew Knepley wrote:
>> Did you verify that the Schur complement matrix was properly
>> preallocated before assembly? This is the likely source of time.
>> You can run with -info and search for "malloc" in the output.
Preallocation doesn't make sense in the case of MATDENSE, does it?
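(For reference: the preallocation Matt refers to applies to sparse
formats. For a MATMPIAIJ matrix it would look roughly like the sketch
below, where m_local/n_local and d_nz/o_nz are illustrative names for
the local sizes and per-row nonzero estimates, not values from this
thread. A dense matrix allocates its full local array up front, so
mallocs that -info reports for it point to communication rather than
storage.)

    #include <petscmat.h>

    /* Sketch: create and preallocate a sparse MATMPIAIJ matrix.
       d_nz/o_nz are per-row nonzero estimates for the diagonal and
       off-diagonal blocks; all parameter names are hypothetical. */
    PetscErrorCode CreatePreallocatedAij(PetscInt m_local, PetscInt n_local,
                                         PetscInt d_nz, PetscInt o_nz, Mat *A)
    {
      PetscErrorCode ierr;
      ierr = MatCreate(PETSC_COMM_WORLD, A);CHKERRQ(ierr);
      ierr = MatSetSizes(*A, m_local, n_local,
                         PETSC_DETERMINE, PETSC_DETERMINE);CHKERRQ(ierr);
      ierr = MatSetType(*A, MATMPIAIJ);CHKERRQ(ierr);
      ierr = MatMPIAIJSetPreallocation(*A, d_nz, PETSC_NULL,
                                       o_nz, PETSC_NULL);CHKERRQ(ierr);
      return 0;
    }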
> Isn't this using MATDENSE? If that's the case, then I think the
> problem is due to wrong partitioning, causing communication during
> MatAssembly().
>
> -info should clearly show the communication part as well.
>
> The fix would be to specify the local partition sizes for this matrix
> - and not use PETSC_DECIDE.
>
> Satish
Hm, I think communication during MatAssembly() is necessary, because
the global Schur complement is obtained by summing up contributions
from the local ones. This also means that the sum of the sizes of the
local complements is greater than the size of the global Schur
complement. Therefore, I cannot specify the local partition sizes
according to the real sizes of the local Schur complements; otherwise
the global size would be an unrealistic number (in PETSc the global
size is ALWAYS the sum of the local ones, isn't it?). Do you see what
I mean? Is there another way of partitioning?
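(Satish's fix, sketched under the assumption that one first settles on
some ownership of the interface nodes: nb_local is a hypothetical name
for the number of rows of the global Schur complement this process
owns, and, as noted above, it cannot simply be the size of the local
Schur complement, since overlapping subdomain contributions are
summed.)

    #include <petscmat.h>

    /* Sketch: create the dense Schur complement with explicit local
       sizes instead of PETSC_DECIDE; across all processes the nb_local
       must sum to the global number of interface nodes. */
    PetscErrorCode CreateSchurDense(PetscInt nb_local, Mat *S)
    {
      PetscErrorCode ierr;
      ierr = MatCreateMPIDense(PETSC_COMM_WORLD, nb_local, nb_local,
                               PETSC_DETERMINE, PETSC_DETERMINE,
                               PETSC_NULL, S);CHKERRQ(ierr);
      return 0;
    }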
Anyway, I got the MATSHELL version running, and it is indeed much
faster: for an unstructured mesh of 321493 nodes partitioned into 7
subdomains with 25577 interface nodes (= the size of the global Schur
complement), solving the Schur complement system now takes 3 minutes,
compared with 38 minutes for assembling and solving it with MATDENSE.
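(For anyone finding this in the archive, a minimal sketch of such a
matrix-free Schur complement, assuming the usual interior/interface
block splitting A = [Aii Aib; Abi Abb] with S = Abb - Abi Aii^{-1} Aib;
all names below are illustrative, not Kathrin's actual code.)

    #include <petscksp.h>

    typedef struct {
      Mat Aii, Aib, Abi, Abb; /* blocks of the interior/interface split */
      KSP inner;              /* inner solver for Aii */
      Vec t1, t2;             /* work vectors of interior size */
    } SchurCtx;

    /* y = S x = Abb x - Abi Aii^{-1} Aib x, applied without forming S */
    PetscErrorCode SchurMult(Mat S, Vec x, Vec y)
    {
      SchurCtx       *ctx;
      PetscErrorCode ierr;

      ierr = MatShellGetContext(S, (void **)&ctx);CHKERRQ(ierr);
      ierr = MatMult(ctx->Aib, x, ctx->t1);CHKERRQ(ierr);          /* t1 = Aib x     */
      ierr = KSPSolve(ctx->inner, ctx->t1, ctx->t2);CHKERRQ(ierr); /* t2 = Aii^-1 t1 */
      ierr = MatMult(ctx->Abi, ctx->t2, y);CHKERRQ(ierr);          /* y  = Abi t2    */
      ierr = VecScale(y, -1.0);CHKERRQ(ierr);                      /* y  = -y        */
      ierr = MatMultAdd(ctx->Abb, x, y, y);CHKERRQ(ierr);          /* y  = Abb x + y */
      return 0;
    }

    /* With nb_local interface rows owned locally (hypothetical), one
       would then create the shell and hand it to an outer KSP:
         MatCreateShell(PETSC_COMM_WORLD, nb_local, nb_local,
                        PETSC_DETERMINE, PETSC_DETERMINE, &ctx, &S);
         MatShellSetOperation(S, MATOP_MULT, (void (*)(void))SchurMult);
    */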
Thank you again for your help and attention,
Kathrin