<html dir="ltr">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1">
<style id="owaParaStyle" type="text/css">P {margin-top:0;margin-bottom:0;}</style>
</head>
<body ocsi="0" fpstyle="1">
<div style="direction: ltr;font-family: Arial;color: #000000;font-size: 14pt;">FYI, I think Paul is still on vacation through the end of this week.<br>
<br>
I have also had problems running with sacusp and sacusppoly, with thrust<br>
complaining that it could not allocate memory. I had not connected that with<br>
building with the txpetscgpu package; perhaps I'll try to confirm that as well.<br>
<br>
It's hard for me to imagine doing without the txpetscgpu package right now.<br>
The package provides definite performance gains in the matvec by supporting<br>
different matrix storage formats. I've seen my matvecs run anywhere from<br>
2x to 5x faster on some problems using the "dia" format versus the default "csr"<br>
format. Also, Paul has added some support for running on multiple GPUs, which<br>
I am using. I'm not sure what is available in that area without his package.<br>
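For example, the two formats can be compared by timing the same run twice and checking the MatMult line in the -log_summary output (a sketch; the executable and problem size are illustrative, the option names are the documented ones):<br>

```shell
# Same solve twice, differing only in GPU matrix storage format;
# compare the MatMult event time reported by -log_summary.
mpirun -np 1 ./ex2 -m 1000 -n 1000 -ksp_type cg -pc_type jacobi \
    -mat_type aijcusp -vec_type cusp -cusp_storage_format csr -log_summary
mpirun -np 1 ./ex2 -m 1000 -n 1000 -ksp_type cg -pc_type jacobi \
    -mat_type aijcusp -vec_type cusp -cusp_storage_format dia -log_summary
```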
<br>
Thanks,<br>
<br>
Dave<br>
<div><br>
<div style="font-family: Tahoma; font-size: 13px;"><font size="2"><span style="font-size: 10pt;">--
<br>
Dave Nystrom<br>
LANL HPC-5<br>
Phone: 505-667-7913<br>
Email: wdn@lanl.gov<br>
Smail: Mail Stop B272<br>
Group HPC-5<br>
Los Alamos National Laboratory<br>
Los Alamos, NM 87545<br>
</span></font><br>
</div>
</div>
<div style="font-family: Times New Roman; color: rgb(0, 0, 0); font-size: 16px;">
<hr tabindex="-1">
<div style="direction: ltr;" id="divRpF431540"><font color="#000000" face="Tahoma" size="2"><b>From:</b> petsc-dev-bounces@mcs.anl.gov [petsc-dev-bounces@mcs.anl.gov] on behalf of John Fettig [john.fettig@gmail.com]<br>
<b>Sent:</b> Monday, February 27, 2012 2:48 PM<br>
<b>To:</b> For users of the development version of PETSc<br>
<b>Subject:</b> Re: [petsc-dev] PETSc GPU capabilities<br>
</font><br>
</div>
<div>Hi Paul,<br>
<br>
This is very interesting. I tried building the code with --download-txpetscgpu and it doesn't work for me. It runs out of memory, no matter how small the problem (this is ex2 from src/ksp/ksp/examples/tutorials):<br>
<br>
mpirun -np 1 ./ex2 -n 10 -m 10 -ksp_type cg -pc_type sacusp -mat_type aijcusp -vec_type cusp -cusp_storage_format csr -use_cusparse 0<br>
<br>
terminate called after throwing an instance of 'thrust::system::detail::bad_alloc'<br>
what(): std::bad_alloc: out of memory<br>
MPI Application rank 0 killed before MPI_Finalize() with signal 6<br>
<br>
This example works fine when I build without your gpu additions (and for much larger problems too). Am I doing something wrong?<br>
<br>
For reference, I'm using CUDA 4.1, CUSP 0.3, and Thrust 1.5.1<br>
<br>
John<br>
<br>
<div class="gmail_quote">On Fri, Feb 10, 2012 at 5:04 PM, Paul Mullowney <span dir="ltr">
<<a href="mailto:paulm@txcorp.com" target="_blank">paulm@txcorp.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin: 0pt 0pt 0pt 0.8ex; border-left: 1px solid rgb(204, 204, 204); padding-left: 1ex;">
Hi All,<br>
<br>
I've been developing GPU capabilities for PETSc. The development has focused mostly on<br>
(1) An efficient multi-GPU SpMV, i.e. MatMult. This is working well.<br>
(2) Triangular Solve used in ILU preconditioners; i.e. MatSolve. The performance of this ... is what it is :|<br>
This code is in beta mode. Keep that in mind if you decide to use it. It supports single and double precision, real numbers only! Complex will be supported at some point in the future, but not any time soon.<br>
<br>
To build with these capabilities, add the following to your configure line.<br>
--download-txpetscgpu=yes<br>
<br>
The capabilities of the SpMV code are accessed with the following two command line flags:<br>
-cusp_storage_format csr (other options are coo (coordinate), ell (ellpack), and dia (diagonal); hyb (hybrid) is not yet supported)<br>
-use_cusparse (this is a boolean and at the moment is only supported with csr format matrices; in the future, cusparse will work with the ell, coo, and hyb formats)<br>
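For example (hypothetical invocations; the application name is a placeholder, and only the flags themselves come from the package):<br>

```shell
# ELL storage handled by the CUSP library:
mpirun -np 1 ./app -mat_type aijcusp -vec_type cusp -cusp_storage_format ell

# CSR storage, but using the CUSPARSE library for the SpMV instead of CUSP:
mpirun -np 1 ./app -mat_type aijcusp -vec_type cusp \
    -cusp_storage_format csr -use_cusparse
```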
<br>
Regarding the number of GPUs to run on:<br>
Imagine a system with P nodes, N cores per node, and M GPUs per node. To use only the GPUs, I would run with M ranks per node over P nodes. As an example, I have a system with 2 nodes; each node has 8 cores and 4 GPUs attached (P=2, N=8, M=4). In a PBS queue script, one would request 2 nodes at 4 processes per node. Each MPI rank (CPU process) will be attached to a GPU.<br>
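To illustrate, a PBS script for that 2-node, 4-GPU-per-node example might look like the following (a sketch; the walltime and executable name are placeholders, only the node and rank counts come from the example above):<br>

```shell
#!/bin/bash
# Hypothetical PBS script: P=2 nodes, M=4 GPUs per node,
# so request 4 processes per node (one MPI rank per GPU).
#PBS -l nodes=2:ppn=4
#PBS -l walltime=00:10:00

cd $PBS_O_WORKDIR

# 8 ranks total = 2 nodes x 4 GPUs; each rank attaches to one GPU.
mpirun -np 8 ./ex2 -mat_type aijcusp -vec_type cusp -cusp_storage_format csr
```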
<br>
You do not need to explicitly manage the GPUs, apart from understanding what type of system you are running on. To learn how many devices are available per node, use the command line flag:<br>
-cuda_show_devices<span class="HOEnZb"><font color="#888888"><br>
<br>
-Paul<br>
</font></span></blockquote>
</div>
<br>
</div>
</div>
</div>
</body>
</html>