question on DA's and performance
Randall Mackie
randy at geosystem.us
Thu Apr 27 00:23:37 CDT 2006
Barry Smith wrote:
>
> Randy,
>
> 1) I'd first change the scaling of the "boundary condition" rows (the
> rows with coefficient matrix values set to 1.0) to match the other
> diagonal entries in the matrix. For example, if the
> diagonal entries are usually 10,000 then multiply the "boundary condition"
> rows by 10,000. (Note the diagonal entries may be proportional to h,
> or 1/h, etc., so make sure you get the same scaling in your boundary
> condition rows.
> For example,
> if the diagonal entries double when you refine the grid make sure your
> "boundary condition" equations scale the same way.)
I suspected this might be a problem. The diagonal entries have a wide dynamic
range, all the way up to 1e10. I will try this and let you know if it helps.
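For reference, a minimal Fortran sketch of the kind of rescaling Barry
describes (assuming the usual PETSc Fortran includes; A, b, brow, and
hbc are hypothetical names for the matrix, the right-hand side, a
global boundary row index, and its boundary value):

      PetscScalar    dscale
      PetscInt       brow
      PetscErrorCode ierr

! dscale should track the interior diagonal magnitude (and its
! h-dependence); 1.0d4 is only a placeholder.
      dscale = 1.0d4
      call MatSetValues(A, 1, brow, 1, brow, dscale,
     &       INSERT_VALUES, ierr)
! Scale the matching right-hand-side entry by the same factor, so the
! boundary equation dscale*H = dscale*hbc has an unchanged solution:
      call VecSetValues(b, 1, brow, dscale*hbc, INSERT_VALUES, ierr)

      call MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY, ierr)
      call MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY, ierr)
      call VecAssemblyBegin(b, ierr)
      call VecAssemblyEnd(b, ierr)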
>
> 2) Are you using AIJ or BAIJ matrices? BAIJ is likely much better
> in your "new" code because it uses point block ILU(k) instead of point
> ILU(k).
I am using parallel AIJ matrices. Are there any examples that show how
to set up the BAIJ matrices (in Fortran)?
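For the archives, a rough Fortran sketch of creating a parallel BAIJ
matrix with 3x3 blocks (one block per grid node for the interlaced
Hx/Hy/Hz unknowns). The preallocation counts here are placeholders,
and the exact argument list of MatCreateMPIBAIJ should be checked
against your PETSc version's man page:

      Mat            A
      PetscInt       bs, mloc, Mglob, ib, jb
      PetscScalar    vblock(9)
      PetscErrorCode ierr

! bs = 3: one 3x3 point block (Hx,Hy,Hz) per grid node
      bs = 3
      call MatCreateMPIBAIJ(PETSC_COMM_WORLD, bs, mloc, mloc,
     &       Mglob, Mglob, 7, PETSC_NULL_INTEGER,
     &       3, PETSC_NULL_INTEGER, A, ierr)

! Insert one 3x3 block at a time using block (node) indices ib, jb;
! vblock holds the block entries (row-oriented by default):
      call MatSetValuesBlocked(A, 1, ib, 1, jb, vblock,
     &       INSERT_VALUES, ierr)

      call MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY, ierr)
      call MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY, ierr)

If the matrix comes from the DA, DAGetMatrix(da, MATMPIBAIJ, A, ierr)
may handle the creation and preallocation for you.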
>
> 3) Are you using -pc_type bjacobi or -pc_type asm (and what did you do
> in the previous version?).
In the DA code, I am using:
-em_ksp_type bcgs \
-em_sub_pc_type ilu \
-divh_ksp_type cr \
-divh_sub_pc_type ilu \
In my previous code, I was using:
-em_ksp_type bcgs \
-em_sub_pc_type ilu \
-em_sub_pc_factor_levels 8 \
-em_sub_pc_factor_fill 4 \
-divh_ksp_type cr \
-divh_sub_pc_type icc \
>
> 4) If you use LU on the blocks does it work better than ILU(0) in
> terms of iterations? A little, a lot? If just a little I suggest trying
> -pc_type asm
I have tried asm, and it helps a little, but not enough to be competitive
with my previous code.
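For example, something like the following (assuming the em_ options
prefix carries through to the ASM sub-solvers, per the usual PETSc
options naming):

-em_pc_type asm \
-em_pc_asm_overlap 2 \
-em_sub_pc_type lu \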
>
> Let us know how things go, you should be able to get similar convergence
> using the DAs.
I will try the scaling and the BAIJ matrix suggestions.
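On the PCFIELDSPLIT suggestion from last year: it can apparently be
driven purely from the options database, so no C coding is needed. A
sketch, with block size 3 matching the interlaced (Hx,Hy,Hz) ordering
(the option names here are assumptions from the manual pages; verify
them with -help):

-em_pc_type fieldsplit \
-em_pc_fieldsplit_block_size 3 \
-em_fieldsplit_0_pc_type ilu \
-em_fieldsplit_1_pc_type ilu \
-em_fieldsplit_2_pc_type ilu \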
Thanks, Randy
>
> Barry
>
>
>
>
> On Wed, 26 Apr 2006, Randall Mackie wrote:
>
>> I've been using PETSc for a few years with reasonably good success.
>> My application is 3D EM forward modeling and inversion.
>>
>> What has been working well is basically an adaptation of what I did
>> in serial mode, by solving the following system of equations:
>>
>> |Mxx Mxy Mxz| |Hx| |bx|
>> |Myx Myy Myz| |Hy| = |by|
>> |Mzx Mzy Mzz| |Hz| |bz|
>>
>> Because this system is very stiff and ill-conditioned, the
>> preconditioner that has been successfully used is the ILU(k) of the
>> diagonal sub-blocks of the coefficient matrix only, not the ILU(k)
>> of the entire matrix.
>>
>>
>> I've tried, with less success, converting to distributed arrays,
>> because in reality my program requires alternating between solving
>> the system above and solving another system based on the divergences
>> of the magnetic field, and so this requires a lot of message passing
>> between the nodes. I think that using DA's would require passing
>> only the ghost values instead of all the values as I am now doing.
>>
>> So I coded up DA's, and it works, but only okay, and not as well as
>> the case above. The main problem is that it seems to work best with
>> ILU(0), whereas the case above works better and better as more
>> fill-in is specified. I think I've tried every possible option, but
>> I can't get any improvement over just using ILU(0). The problem is
>> that with ILU(0) it takes too many iterations, and so the total time
>> is more than when I use my original method with, say, ILU(8).
>>
>> The differences between using DA's and the approach above are that
>> the fields are interlaced in the DA's, and the boundary values are
>> included as unknowns (with coefficient matrix values set to 1.0),
>> whereas in my original approach the boundary values are incorporated
>> into the right-hand side. Maybe that hurts the conditioning of my
>> system.
>>
>> I would really like to use DA's to improve the efficiency of the
>> code, but I can't figure out how to do that.
>>
>> It was suggested to me last year to use PCFIELDSPLIT, but there are
>> no examples, and I'm not a C programmer, so it's hard for me to look
>> at the source code and know what to do. (I'm only just now able to
>> get back to this.)
>>
>> Does anyone do any prescaling of their systems? If so, does anyone
>> have examples of how this can be done?
>>
>> Any advice is greatly appreciated.
>>
>> Thanks, Randy
>>
>>
>
--
Randall Mackie
GSY-USA, Inc.
PMB# 643
2261 Market St.,
San Francisco, CA 94114-1600
Tel (415) 469-8649
Fax (415) 469-5044
California Registered Geophysicist
License No. GP 1034