2012/3/5 Xiangze Zeng <span dir="ltr"><<a href="mailto:zengshixiangze@163.com">zengshixiangze@163.com</a>></span><br><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div style="line-height:1.7;font-size:14px;font-family:arial"><div style="line-height:1.7;font-size:14px;font-family:arial"><div>Hi, Matt. </div><div><br></div><div>We know that when we compile CUDA-C code, we use nvcc. But to run PETSc-CUDA code on the GPU, all we do is change the type of the Mat and Vec. Does the compilation of PETSc-CUDA code also use nvcc? </div>
</div></div></blockquote><div><br></div><div>Yes.</div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div style="line-height:1.7;font-size:14px;font-family:arial">
<div style="line-height:1.7;font-size:14px;font-family:arial"><div>And when you implemented GPU support in PETSc, did you consider the compute capability of the GPU? And where is the compute-capability parameter set? </div>
</div></div></blockquote><div><br></div><div>You configure using --with-cuda-arch=sm_13, for instance, or compute_12.</div><div><br></div><div>And you have given me an idea of what your problem is. The GeForce 310 has no double-precision floating point units.</div>
<div>Therefore, you must configure PETSc using --with-precision=single.</div><div><br></div><div> Matt</div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
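For reference, a full configure line combining the two flags above might look like the sketch below. The GPU-related flag names other than the two just discussed are assumptions about your PETSc version; check ./configure --help for the exact spellings it accepts.

```shell
# Sketch of a configure invocation for a compute-capability-1.2 card
# (GeForce 310) with no double-precision units. Flag names besides
# --with-cuda-arch and --with-precision are assumptions -- verify
# against ./configure --help for your PETSc version.
./configure --with-cuda=1 --with-cuda-arch=sm_12 --with-precision=single
```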
<div style="line-height:1.7;font-size:14px;font-family:arial"><div style="line-height:1.7;font-size:14px;font-family:arial"><div>Thank you.</div><div>Zeng</div><div> <br></div>On 2012-03-05 22:28:20, "Matthew Knepley" <<a href="mailto:petsc-maint@mcs.anl.gov" target="_blank">petsc-maint@mcs.anl.gov</a>> wrote:<br>
<blockquote style="margin:0px 0px 0px 0.8ex;padding-left:1ex;border-left-color:rgb(204,204,204);border-left-width:1px;border-left-style:solid">On Mon, Mar 5, 2012 at 8:18 AM, Xiangze Zeng <span dir="ltr"><<a href="mailto:zengshixiangze@163.com" target="_blank">zengshixiangze@163.com</a>></span> wrote:<br>
<div class="gmail_quote"><blockquote style="margin:0px 0px 0px 0.8ex;padding-left:1ex;border-left-color:rgb(204,204,204);border-left-width:1px;border-left-style:solid" class="gmail_quote">
Hi, Matt.<br>
I have tried the method you suggested (I closed all the X applications), but the same error still occurs. Is there something wrong with the installation of Thrust?<br>
<br>
<br>
And I have found the same problem described on several web pages, such as <a href="http://code.google.com/p/thrust/wiki/Debugging" target="_blank">http://code.google.com/p/thrust/wiki/Debugging</a>; they all say the error is related to the compute capability. Is that the case? My GPU is a GeForce 310, which supports compute capability 1.2.<br>
</blockquote><div><br></div><div>One of the downsides of CUDA, at least from our perspective, is that these errors are very</div><div>hard to debug. If thrust compiles and the cudaInit() succeeds, there is not much else we</div>
<div>can do to debug on this machine. Maybe you can try running thrust examples?</div><div><br></div><div> Matt</div><div> </div><blockquote style="margin:0px 0px 0px 0.8ex;padding-left:1ex;border-left-color:rgb(204,204,204);border-left-width:1px;border-left-style:solid" class="gmail_quote">
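A minimal standalone Thrust check, independent of PETSc, can tell you whether the problem is in Thrust/CUDA itself. This sketch assumes nvcc is on your PATH; the -arch value targets compute capability 1.2, and a mismatch between -arch and the card is exactly what produces "invalid device function":

```shell
# Write, build, and run a tiny Thrust program outside of PETSc.
cat > thrust_check.cu <<'EOF'
#include <thrust/device_vector.h>
#include <thrust/reduce.h>
#include <cstdio>

int main() {
    thrust::device_vector<float> v(1000, 1.0f);     // 1000 ones, on the GPU
    float sum = thrust::reduce(v.begin(), v.end()); // runs a device kernel
    std::printf("sum = %f\n", sum);                 // 1000.0 if device code ran
    return 0;
}
EOF
nvcc -arch=sm_12 thrust_check.cu -o thrust_check
./thrust_check
```

If this small program also aborts with the same system_error, the problem is in the CUDA/Thrust installation or the -arch setting, not in PETSc.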
Thank you.<br>
<br>
On 2012-03-02 23:55:13, "Matthew Knepley" <<a href="mailto:petsc-maint@mcs.anl.gov" target="_blank">petsc-maint@mcs.anl.gov</a>> wrote:<br>
On Fri, Mar 2, 2012 at 9:40 AM, Xiangze Zeng <<a href="mailto:zengshixiangze@163.com" target="_blank">zengshixiangze@163.com</a>> wrote:<br>
<br>
Thank both of you, Satish and Jed.<br>
<br>
<br>
The warning message has disappeared, but another problem occurs: ex19 terminates with the same message as when I run my own code:"<br>
terminate called after throwing an instance of 'thrust::system::system_error'<br>what(): invalid device function<br>
Aborted."<br>
(Several days ago I thought I had run ex19 successfully, but today I found I was wrong when I saw the warning message in the PETSc Performance Summary. Sorry about that!)<br>
<br>
<br>
<br>
Most likely, this is a problem with another program locking your GPU (I assume you are running on your laptop/desktop). Start<br>
closing apps one at a time (especially graphical ones like a PDF reader) and try running each time.<br>
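If your driver ships the nvidia-smi utility, it may be quicker to list what is currently using the GPU than to close applications one at a time (the output format, and whether per-process information is shown, varies by driver version and card):

```shell
# Show GPU status and, on supported drivers, the processes holding the GPU.
nvidia-smi
```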
<br>
<br>
Matt<br>
<br>
Could this new problem be related to the CUDA installation? (I can run CUDA-C code successfully on my PC.)<br>
<br>
<br>
Thank you again!<br>
Zeng Xiangze<br>
<br>
At 2012-03-02 23:18:19,"Satish Balay" <<a href="mailto:petsc-maint@mcs.anl.gov" target="_blank">petsc-maint@mcs.anl.gov</a>> wrote:<br>
>On Fri, 2 Mar 2012, Xiangze Zeng wrote:<br>
><br>
<br>
>> Dear all,<br>
>><br>
>> After installing PETSc to use NVIDIA GPUs, I tried to run ex19. When I ran make on ex19, it stopped with the message:<br>
>><br>
>><br>
>> "makefile:24: /conf/variables: No such file or directory<br>
>> makefile:25: /conf/rules: No such file or directory<br>
>> makefile:1023: /conf/test: No such file or directory<br>
>> make: *** No rule to make target `/conf/test'. Stop."<br>
>><br>
>><br>
>> Then I added PETSC_DIR to the makefile. After that, I made ex19 successfully. But when I ran it using the command:<br>
><br>
<br>
>Yes - you need the PETSC_DIR/PETSC_ARCH values to be specified to<br>
>make. You can set these as environment variables - or specify them to<br>
>make on the command line - or add them to the makefile. [We recommend<br>
>not modifying the makefile, to keep it portable.]<br>
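The two recommended approaches above can be sketched as follows (the paths and arch name are placeholders for your own installation):

```shell
# Option 1: set environment variables before invoking make.
export PETSC_DIR=/path/to/petsc        # placeholder: your PETSc source tree
export PETSC_ARCH=arch-cuda-single     # placeholder: your configured arch
make ex19

# Option 2: pass the values on the make command line instead.
make PETSC_DIR=/path/to/petsc PETSC_ARCH=arch-cuda-single ex19
```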
><br>
>><br>
>><br>
<br>
>> "./ex19 -da_vec_type mpicusp -da_mat_type mpiaijcusp -pc_type none -dmmg_nlevels 1 -da_grid_x 100 -da_grid_y 100 -log_summary -mat_no_inode -preload off -cusp_synchronize".<br>
>><br>
>><br>
>> In the PETSc Performance Summary, it says:"<br>
>> WARNING! There are options you set that were not used!<br>
>> WARNING! could be spelling mistake, etc!<br>
>> Option left: name:-da_mat_type value: mpiaijcusp<br>
>> Option left: name:-da_vec_type value: mpicusp"<br>
><br>
<br>
>Looks like the options are now changed to -dm_mat_type and -dm_vec_type.<br>
><br>
>Satish<br>
><br>
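With the renamed options, the run command from above would become the following (assuming the option values themselves carry over unchanged and only the -da_ prefixes became -dm_):

```shell
# Same ex19 run, using the renamed -dm_vec_type/-dm_mat_type options.
./ex19 -dm_vec_type mpicusp -dm_mat_type mpiaijcusp -pc_type none \
    -dmmg_nlevels 1 -da_grid_x 100 -da_grid_y 100 -log_summary \
    -mat_no_inode -preload off -cusp_synchronize
```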
>><br>
>><br>
<br>
>> Is there any problem with it?<br>
>> Any response will be appreciated! Thank you so much!<br>
>><br>
<br><span class="HOEnZb"><font color="#888888">
<br>
--<br>
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.<br>
-- Norbert Wiener<br>
<br>
</font></span></blockquote></div><span class="HOEnZb"><font color="#888888"><br><br clear="all"><div><br></div>-- <br>What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.<br>
-- Norbert Wiener<br>
</font></span></blockquote></div></div><br><br></blockquote></div><br><br clear="all"><div><br></div>-- <br>What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.<br>
-- Norbert Wiener<br>