[petsc-dev] ASM for each field solve on GPUs

Barry Smith bsmith at petsc.dev
Wed Dec 30 19:12:24 CST 2020



> On Dec 30, 2020, at 6:45 PM, Mark Adams <mfadams at lbl.gov> wrote:
> 
> 
> 
> On Wed, Dec 30, 2020 at 7:12 PM Barry Smith <bsmith at petsc.dev <mailto:bsmith at petsc.dev>> wrote:
> 
>   If you are using direct solvers on each block on each GPU (several matrices on each GPU) you could pull apart, for example, MatSolve_SeqAIJCUSPARSE()
> and launch each of the matrix solves on a separate stream.   
> 
> Yes, that is what I want. The first step is to figure out the best way to get the blocks from Plex/Forest and get an exact solver working on the CPU with ASM.

  I don't think you want ASM, or at most you want it inside PCFIELDSPLIT. It is FieldSplit's job to pull out fields, not ASM's job (ASM pulls out geometrically connected regions).
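
  With PCFIELDSPLIT the whole thing can be driven from the options database. A minimal sketch, assuming two uncoupled fields addressed by index (the split names depend on how your DM defines the fields, and the last line is only needed if you want SuperLU instead of PETSc's native LU):

    -pc_type fieldsplit -pc_fieldsplit_type additive
    -fieldsplit_0_ksp_type preonly -fieldsplit_0_pc_type lu
    -fieldsplit_1_ksp_type preonly -fieldsplit_1_pc_type lu
    -fieldsplit_0_pc_factor_mat_solver_type superlu

  Since the fields are uncoupled, additive fieldsplit with exact LU on each diagonal block is an exact solve of the whole system.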

>  
> You could use a MatSolveBegin/MatSolveEnd style or, as Jed may prefer, a Wait() model. Maybe a couple of hours of coding to produce a prototype MatSolveBegin/MatSolveEnd from MatSolve_SeqAIJCUSPARSE.
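
  For concreteness, here is a minimal sketch of that split-phase pattern (hypothetical, not PETSc code; SolveFactoredBlockOnStream() is a made-up stand-in for the cuSPARSE triangular-solve calls inside MatSolve_SeqAIJCUSPARSE with cusparseSetStream() applied):

    #include <cuda_runtime.h>

    typedef struct { void *factors; double *b, *x; int n; } Block;

    /* hypothetical: asynchronously enqueues one block's factored solve */
    void SolveFactoredBlockOnStream(Block *blk, cudaStream_t s);

    void MatSolveBegin_Blocks(Block blks[], cudaStream_t streams[], int nblocks)
    {
      for (int i = 0; i < nblocks; i++) {
        cudaStreamCreate(&streams[i]);                    /* one stream per block */
        SolveFactoredBlockOnStream(&blks[i], streams[i]); /* returns immediately */
      }
    }

    void MatSolveEnd_Blocks(cudaStream_t streams[], int nblocks)
    {
      for (int i = 0; i < nblocks; i++) {
        cudaStreamSynchronize(streams[i]);                /* wait for block i */
        cudaStreamDestroy(streams[i]);
      }
    }

  Begin returns as soon as all the solves are enqueued, so the blocks run concurrently on the GPU and End only synchronizes at the end.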
> 
>   Note that pulling apart a non-coupled single MatAIJ that contains all the blocks would be hugely expensive. Better to build each matrix separately from the start, or use MatNest with only diagonal matrices.
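
  A minimal sketch of the MatNest route, assuming Afield[] already holds the per-field matrices and nfields comes from the DM (MatNest wants a row-major nfields-by-nfields array with NULL off-diagonal entries):

    Mat            A, *a;
    PetscErrorCode ierr;

    ierr = PetscCalloc1(nfields*nfields, &a);CHKERRQ(ierr); /* off-diagonals stay NULL */
    for (PetscInt f = 0; f < nfields; f++) a[f*nfields + f] = Afield[f];
    ierr = MatCreateNest(PETSC_COMM_WORLD, nfields, NULL, nfields, NULL, a, &A);CHKERRQ(ierr);
    ierr = PetscFree(a);CHKERRQ(ierr); /* MatNest keeps its own references */

  Each diagonal block then remains a separate Mat that can be factored and solved on its own.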
> 
> The problem is that it runs in TS that uses DM, so I can't reorder the matrix without breaking TS. I mimic what DM does now. 

  DM decides the ordering, not TS. You could slip in a local-to-global mapping (MatSetLocalToGlobalMapping()) that uninterlaces the variables to get your DM to build an uninterlaced matrix. For the vector it is easier, but again you will need to uninterlace it. Back in the classic Cray vector machine days interlacing was bad; with Intel CPUs it became good; now both approaches should be supported in software.
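
  A minimal sketch of such an uninterlacing mapping, assuming np local points each carrying nf interlaced fields (serial case for brevity; in parallel the field offsets would come from the DM):

    PetscInt               *idx;
    ISLocalToGlobalMapping  l2g;
    PetscErrorCode          ierr;

    ierr = PetscMalloc1(np*nf, &idx);CHKERRQ(ierr);
    for (PetscInt p = 0; p < np; p++)
      for (PetscInt f = 0; f < nf; f++)
        idx[p*nf + f] = f*np + p;  /* interlaced local dof -> field-major dof */
    ierr = ISLocalToGlobalMappingCreate(PETSC_COMM_SELF, 1, np*nf, idx,
                                        PETSC_OWN_POINTER, &l2g);CHKERRQ(ierr);
    ierr = MatSetLocalToGlobalMapping(A, l2g, l2g);CHKERRQ(ierr);

  With that mapping in place, MatSetValuesLocal() calls made in the usual interlaced ordering land in field-major blocks, so an uncoupled problem assembles into a matrix that is block diagonal by field.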

  All the DMs should support both interlaced and noninterlaced algebraic objects.

> 
> I run once on the CPU to get the metadata for GPU assembly from DMForest.  Maybe I should just get all the metadata that I need and throw the DM away after the setup solve and run TS without a DM...
>  
> 
>   Barry
> 
> 
> > On Dec 30, 2020, at 5:46 PM, Jed Brown <jed at jedbrown.org <mailto:jed at jedbrown.org>> wrote:
> > 
> > Mark Adams <mfadams at lbl.gov <mailto:mfadams at lbl.gov>> writes:
> > 
> >> I see that ASM has a DM and can get subdomains from it. I have a DMForest
> >> and I would like an ASM that has a subdomain for each field. How might I go
> >> about doing this? (the fields are not coupled in the matrix so this would
> >> give a block diagonal matrix, and thus be exact with LU sub solvers.)
> > 
> > The fields are already not coupled or you want to filter the matrix and give back a single matrix with coupling removed?
> > 
> > You can use FieldSplit to get the math of field-based block Jacobi (or ASM, but overlap with fields tends to be expensive). Neither FieldSplit nor ASM can run the (additive) solves concurrently (and most libraries would need something to drive the threads).
> > 
> >> I am then going to want to get these separate solves to be run in parallel
> >> on a GPU (I'm talking with Sherry about getting SuperLU working on these
> >> small problems). In looking at PCApply_ASM it looks like this will take
> >> some thought. KSPSolve would need to be non-blocking, etc., or a new apply
> >> op might be needed.
> >> 
> >> Thanks,
> >> Mark
> 
