[petsc-dev] [petsc-maint #61515] data size of PETSc-dev GPU computing

Barry Smith bsmith at mcs.anl.gov
Mon Jan 17 21:08:31 CST 2011


  Hmm, could be a bug, could be the algorithm. Run with -ksp_view and send the output.

  What problem are you solving? Is it a PETSc example? 

  Barry
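
  For reference, one way to gather that output from a standard KSP tutorial is sketched below; the tutorial (ex2), its -m/-n grid options, and the mpicuda/mpiaijcusp type names are assumptions about this petsc-dev snapshot rather than details taken from this thread:

      cd src/ksp/ksp/examples/tutorials
      mpiexec -n 4 ./ex2 -m 512 -n 512 \
          -vec_type mpicuda -mat_type mpiaijcusp \
          -ksp_view -ksp_monitor_true_residual -ksp_converged_reason

  The -ksp_monitor_true_residual and -ksp_converged_reason options show whether the true residual actually stalls and which stopping criterion was triggered.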

On Jan 17, 2011, at 8:31 PM, li.luo at siat.ac.cn wrote:

> Hi,
> 
> 
> I ran into a problem when testing some examples with PETSc-dev for GPU computing:
> if only one process pair (1 GPU, 1 CPU) is used, the grid size can be enlarged all the way to 2048*2048, the memory limit;
> however, if more than one process pair is used, for example 4 GPUs and 4 CPUs, the grid size is limited to about 200*200; for anything larger, the KSP solver does not converge. The same problem happens with 8 GPUs and 8 CPUs, limited to about 500*500.
> 
> I wonder whether you have seen the same problem. Could there be an error in the MPICUDA type?  
> 
> Regards,
> Li Luo
> 
> 
>>> # Machine type: 
>>> CPU:  Intel(R) Xeon(R) CPU E5520
>>> GPU: Tesla T10
>>> CUDA Driver Version:                           3.20
>>> CUDA Capability Major/Minor version number:    1.3
>>> 
>>> # OS Version: 
>>> Linux console 2.6.18-128.el5 #1 SMP Wed Dec 17 11:41:38 EST 2008 x86_64 x86_64 x86_64 GNU/Linux
>>> 
>>> # PETSc Version:
>>> #define PETSC_VERSION_RELEASE    0
>>> #define PETSC_VERSION_MAJOR      3
>>> #define PETSC_VERSION_MINOR      1
>>> #define PETSC_VERSION_SUBMINOR   0
>>> #define PETSC_VERSION_PATCH      6
>>> #define PETSC_VERSION_DATE       "Mar, 25, 2010"
>>> #define PETSC_VERSION_PATCH_DATE "Thu Dec  9 00:02:47 CST 2010"
>>> 
>>> 
>>> # MPI implementation: 
>>> ictce3.2/impi/3.2.0.011/
>>> 
>>> # Compiler: 
>>> 
>>> 
>>> # Probable PETSc component:
>>> run with GPU
>>> # Configure
>>> ./config/configure.py  --download-f-blas-lapack=1 --with-mpi-dir=/bwfs/software/mpich2-1.2.1p1 --with-shared-libraries=0 --with-debugging=no --with-cuda-dir=/bwfs/home/liluo/cuda3.2_64 --with-thrust-dir=/bwfs/home/liluo/cuda3.2_64/include/thrust --with-cusp-dir=/bwfs/home/liluo/cuda3.2_64/include/cusp-library
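
Regarding the non-convergence reported above, the following is a minimal C sketch of checking the stopping reason programmatically with KSPGetConvergedReason(), complementing -ksp_converged_reason on the command line. It assumes a recent PETSc API, so call signatures may differ slightly from the petsc-dev snapshot used in this report; the vector and matrix types are left to -vec_type/-mat_type so the same code can exercise either the CPU or the CUDA/CUSP back end.

/* Minimal sketch: assemble a 1-D Laplacian, solve it, and report why
   the KSP stopped.  Types are chosen at run time via -vec_type/-mat_type. */
#include <petscksp.h>

int main(int argc, char **argv)
{
  Mat                A;
  Vec                x, b;
  KSP                ksp;
  KSPConvergedReason reason;
  PetscInt           i, rstart, rend, n = 100;
  PetscErrorCode     ierr;

  ierr = PetscInitialize(&argc, &argv, NULL, NULL);CHKERRQ(ierr);

  /* 1-D Laplacian; matrix type selected from the options database */
  ierr = MatCreate(PETSC_COMM_WORLD, &A);CHKERRQ(ierr);
  ierr = MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, n, n);CHKERRQ(ierr);
  ierr = MatSetFromOptions(A);CHKERRQ(ierr);
  ierr = MatSetUp(A);CHKERRQ(ierr);
  ierr = MatGetOwnershipRange(A, &rstart, &rend);CHKERRQ(ierr);
  for (i = rstart; i < rend; i++) {
    PetscScalar v[3]   = {-1.0, 2.0, -1.0};
    PetscInt    col[3] = {i - 1, i, i + 1};
    if (i == 0)          { ierr = MatSetValues(A, 1, &i, 2, &col[1], &v[1], INSERT_VALUES);CHKERRQ(ierr); }
    else if (i == n - 1) { ierr = MatSetValues(A, 1, &i, 2, col, v, INSERT_VALUES);CHKERRQ(ierr); }
    else                 { ierr = MatSetValues(A, 1, &i, 3, col, v, INSERT_VALUES);CHKERRQ(ierr); }
  }
  ierr = MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);

  /* Right-hand side and solution vectors, compatible with A */
  ierr = MatCreateVecs(A, &x, &b);CHKERRQ(ierr);
  ierr = VecSet(b, 1.0);CHKERRQ(ierr);

  ierr = KSPCreate(PETSC_COMM_WORLD, &ksp);CHKERRQ(ierr);
  ierr = KSPSetOperators(ksp, A, A);CHKERRQ(ierr);
  ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);  /* picks up -ksp_type, -pc_type, -ksp_view, ... */
  ierr = KSPSolve(ksp, b, x);CHKERRQ(ierr);

  /* Positive reason = converged, negative = diverged (max iterations, breakdown, ...) */
  ierr = KSPGetConvergedReason(ksp, &reason);CHKERRQ(ierr);
  ierr = PetscPrintf(PETSC_COMM_WORLD, "KSPConvergedReason = %d\n", (int)reason);CHKERRQ(ierr);

  ierr = KSPDestroy(&ksp);CHKERRQ(ierr);
  ierr = VecDestroy(&x);CHKERRQ(ierr);
  ierr = VecDestroy(&b);CHKERRQ(ierr);
  ierr = MatDestroy(&A);CHKERRQ(ierr);
  ierr = PetscFinalize();
  return ierr;
}

Run, for example, with mpiexec -n 4 ./check_reason -vec_type mpicuda -mat_type mpiaijcusp -ksp_view (the executable name is hypothetical); comparing the printed reason for 1 versus 4 processes would show whether the solver is diverging or merely hitting the iteration limit.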