[petsc-users] MemCpy (HtoD and DtoH) in Krylov solver

Xiangdong epscodes at gmail.com
Thu Jul 18 20:45:11 CDT 2019


Yes, nvprof can give the size of the data as well as the amount of time for
data movement. See the attached snapshots.

I can understand some of the numbers, but not the HtoD case.

DtoH1 is the data movement from VecMDot. The size of the transfer is
8.192 KB, which is sizeof(PetscScalar) * MDOT_WORKGROUP_NUM * 8 =
8 * 128 * 8 = 8192 bytes. My question is: instead of calling cublasDdot
nv times, why do you implement your own kernels? I guess it must be for
performance, but could you explain a little more?
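
For what it's worth, here is a minimal CUDA sketch of the kind of fused
reduction I imagine sits behind that 8 * 128 * 8 number. The names
WORKGROUP_NUM, BLOCK_SIZE, mdot_partial and everything else below are my
own, chosen only to reproduce the arithmetic above; this is not PETSc's
actual VecMDot kernel, just an illustration of why one fused kernel with a
single small DtoH copy can beat nv separate cublasDdot calls:

/* mdot_sketch.cu -- an illustration only, not PETSc's actual VecMDot kernel.
 * Assumes PetscScalar == double and a fixed grid of WORKGROUP_NUM blocks,
 * chosen to reproduce the 8 * 128 * 8 = 8192-byte DtoH copy above.
 * Build:   nvcc -O2 mdot_sketch.cu -o mdot_sketch
 * Profile: nvprof --print-gpu-trace ./mdot_sketch
 */
#include <cstdio>
#include <cuda_runtime.h>

#define WORKGROUP_NUM 128   /* stands in for MDOT_WORKGROUP_NUM */
#define BLOCK_SIZE    128   /* threads per block (power of two) */

/* One launch accumulates partial sums of x . y_j for all nv vectors.
 * Each block leaves one partial sum per vector, so one small DtoH copy of
 * nv * WORKGROUP_NUM doubles replaces nv cublasDdot calls, each of which
 * would cost its own kernel launch plus an 8-byte DtoH transfer. */
__global__ void mdot_partial(int n, int nv, const double *x,
                             const double *y, double *partial)
{
  __shared__ double s[BLOCK_SIZE];
  for (int j = 0; j < nv; ++j) {
    double sum = 0.0;
    for (int i = blockIdx.x * blockDim.x + threadIdx.x; i < n;
         i += gridDim.x * blockDim.x)
      sum += x[i] * y[(size_t)j * n + i];
    s[threadIdx.x] = sum;
    __syncthreads();
    /* tree reduction in shared memory */
    for (int stride = blockDim.x / 2; stride > 0; stride >>= 1) {
      if (threadIdx.x < stride) s[threadIdx.x] += s[threadIdx.x + stride];
      __syncthreads();
    }
    if (threadIdx.x == 0) partial[j * gridDim.x + blockIdx.x] = s[0];
    __syncthreads();
  }
}

int main(void)
{
  const int n = 1 << 20, nv = 8;
  double *hx = new double[n], *hy = new double[(size_t)nv * n];
  double *hpart = new double[nv * WORKGROUP_NUM];
  for (int i = 0; i < n; ++i) hx[i] = 1.0;
  for (size_t i = 0; i < (size_t)nv * n; ++i) hy[i] = 2.0;

  double *dx, *dy, *dpart;
  cudaMalloc((void**)&dx, n * sizeof(double));
  cudaMalloc((void**)&dy, (size_t)nv * n * sizeof(double));
  cudaMalloc((void**)&dpart, nv * WORKGROUP_NUM * sizeof(double));
  cudaMemcpy(dx, hx, n * sizeof(double), cudaMemcpyHostToDevice);
  cudaMemcpy(dy, hy, (size_t)nv * n * sizeof(double), cudaMemcpyHostToDevice);

  mdot_partial<<<WORKGROUP_NUM, BLOCK_SIZE>>>(n, nv, dx, dy, dpart);

  /* The single DtoH copy: nv * WORKGROUP_NUM * sizeof(double)
   * = 8 * 128 * 8 = 8192 bytes, the size seen in the DtoH1 trace. */
  cudaMemcpy(hpart, dpart, nv * WORKGROUP_NUM * sizeof(double),
             cudaMemcpyDeviceToHost);

  /* The final reduction of the small partial array happens on the host. */
  for (int j = 0; j < nv; ++j) {
    double dot = 0.0;
    for (int b = 0; b < WORKGROUP_NUM; ++b) dot += hpart[j * WORKGROUP_NUM + b];
    printf("dot %d = %g (expected %g)\n", j, dot, 2.0 * n);
  }
  cudaFree(dx); cudaFree(dy); cudaFree(dpart);
  delete[] hx; delete[] hy; delete[] hpart;
  return 0;
}

Running this under nvprof --print-gpu-trace should show one 8192-byte DtoH
copy per mdot_partial launch, which is the same size as in the DtoH1 trace.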

DtoH2 is the data movement from VecNorm. The size is 8 B, which is just
sizeof(PetscScalar).
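
That is the generic pattern for norms and dot products on the GPU: the
reduction runs on the device and only the final scalar comes back for the
host-side convergence check (as Karli also notes below). A minimal cuBLAS
sketch of that pattern, again just an illustration and not PETSc code:

/* norm_sketch.cu -- a generic illustration of the VecNorm pattern, not PETSc code.
 * Build: nvcc -O2 norm_sketch.cu -lcublas -o norm_sketch
 */
#include <cmath>
#include <cstdio>
#include <cublas_v2.h>
#include <cuda_runtime.h>

int main(void)
{
  const int n = 1 << 20;
  double *hx = new double[n];
  for (int i = 0; i < n; ++i) hx[i] = 3.0;

  double *dx;
  cudaMalloc((void**)&dx, n * sizeof(double));
  cudaMemcpy(dx, hx, n * sizeof(double), cudaMemcpyHostToDevice);

  cublasHandle_t handle;
  cublasCreate(&handle);
  /* Host pointer mode: the reduction runs entirely on the GPU and only the
   * single 8-byte result is returned to the host -- the sizeof(PetscScalar)
   * DtoH copy seen for VecNorm. The solver needs this scalar on the CPU
   * for the convergence test. */
  cublasSetPointerMode(handle, CUBLAS_POINTER_MODE_HOST);
  double nrm = 0.0;
  cublasDnrm2(handle, n, dx, 1, &nrm);
  printf("||x|| = %g (expected %g)\n", nrm, 3.0 * std::sqrt((double)n));

  cublasDestroy(handle);
  cudaFree(dx);
  delete[] hx;
  return 0;
}

Here the DtoH traffic per call is just the 8 bytes of the result, consistent
with the DtoH2 number.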

DtoD1 is the data movement from VecAXPY. The size is 17.952 MB, which is
exactly sizeof(PetscScalar) * length(b).

However, I do not understand the HostToDevice number in GMRES for np=1.
The size of the data movement is 1.032 KB, i.e. 129 PetscScalars if
PetscScalar is an 8-byte double. I thought this was related to the updated
upper Hessenberg matrix, but the number does not match. Can anyone help me
understand the HtoD data movement in GMRES for np=1?

Thank you.

Best,
Xiangdong

On Thu, Jul 18, 2019 at 1:14 PM Karl Rupp <rupp at iue.tuwien.ac.at> wrote:

> Hi,
>
> as you can see from the screenshot, the communication is merely for
> scalars from the dot-products and/or norms. These are needed on the host
> for the control flow and convergence checks; this is true for any
> iterative solver.
>
> Best regards,
> Karli
>
>
>
> On 7/18/19 3:11 PM, Xiangdong via petsc-users wrote:
> >
> >
> > On Thu, Jul 18, 2019 at 5:11 AM Smith, Barry F. <bsmith at mcs.anl.gov
> > <mailto:bsmith at mcs.anl.gov>> wrote:
> >
> >
> >         1) What preconditioner are you using? If any.
> >
> > Currently I am using none as I want to understand how gmres works on GPU.
> >
> >
> >         2) Where/how are you getting this information about the
> >     MemCpy(HtoD) and one call MemCpy(DtoH)? We might like to utilize
> >     this same sort of information to plan future optimizations.
> >
> > I am using nvprof and nvvp from the CUDA toolkit. It looks like there
> > are one MemCpy(HtoD) and three MemCpy(DtoH) calls per iteration for the
> > np=1 case. See the attached snapshots.
> >
> >         3) Are you using more than 1 MPI rank?
> >
> >
> > I tried both np=1 and np=2. Attached please find snapshots from nvvp for
> > both the np=1 and np=2 cases. The figures show the GPU calls during two
> > pure GMRES iterations.
> >
> > Thanks.
> > Xiangdong
> >
> >
> >        If you use the master branch (which we highly recommend for
> >     anyone using GPUs and PETSc) the -log_view option will log
> >     communication between CPU and GPU and display it in the summary
> >     table. This is useful for seeing exactly what operations are doing
> >     vector communication between the CPU/GPU.
> >
> >        We welcome all feedback on the GPU support, since it has
> >     previously only been lightly used.
> >
> >         Barry
> >
> >
> >      > On Jul 16, 2019, at 9:05 PM, Xiangdong via petsc-users
> >     <petsc-users at mcs.anl.gov <mailto:petsc-users at mcs.anl.gov>> wrote:
> >      >
> >      > Hello everyone,
> >      >
> >      > I am new to petsc gpu and have a simple question.
> >      >
> >      > When I tried to solve Ax=b, where A is MATAIJCUSPARSE and b and x
> >     are VECSEQCUDA, with GMRES (or GCR) and pcnone, I found that during
> >     each Krylov iteration there are one MemCpy(HtoD) call and one
> >     MemCpy(DtoH) call. Does that mean the Krylov solve is not 100% on the
> >     GPU and still needs some work from the CPU? What are these MemCpys
> >     for during each iteration?
> >      >
> >      > Thank you.
> >      >
> >      > Best,
> >      > Xiangdong
> >
>
-------------- next part --------------
Attachments (nvvp/nvprof screenshots):
  DtoH1.png: <http://lists.mcs.anl.gov/pipermail/petsc-users/attachments/20190718/99df57ec/attachment-0004.png>
  DtoH2.png: <http://lists.mcs.anl.gov/pipermail/petsc-users/attachments/20190718/99df57ec/attachment-0005.png>
  DtoD1.png: <http://lists.mcs.anl.gov/pipermail/petsc-users/attachments/20190718/99df57ec/attachment-0006.png>
  HtoD1.png: <http://lists.mcs.anl.gov/pipermail/petsc-users/attachments/20190718/99df57ec/attachment-0007.png>

