<html><body><div><div><div>Hello Junchao,<br></div><div><br></div><div>Okay. Thank you for helping find this bug. I pulled the newest version of petsc today, and it seems this error has not yet been fixed in the current release version. Maybe I should wait a few days.<br></div><div><br></div><div>Best wishes,<br></div><div>
Langtian<br></div><div><br></div><div><br></div></div><blockquote type="cite"><div>On Oct 3, 2024, at 12:12 AM, Junchao Zhang <junchao.zhang@gmail.com> wrote:<br></div><div><br></div><div><br></div><div><div dir="ltr"><div>Hi, Langtian,<br></div><div> Thanks for the configure.log; I now see what's wrong. Since you compiled your code with nvcc, we mistakenly thought petsc was configured with cuda.<br></div><div> It is fixed in <a href="https://urldefense.us/v3/__https://gitlab.com/petsc/petsc/-/merge_requests/7909__;!!G_uCfscf7eWS!dIQhZmHAR37vEaIvWtn-b9ggurJgaZ9R4aznsluuwVI2-1OfMczg_v0sTOQyO7Xkgb22zZuJiua8J3OFDzsQcUK3LiNP$" rel="noopener noreferrer">https://gitlab.com/petsc/petsc/-/merge_requests/7909</a>, which will be in petsc/release and main.<br></div><div> <br></div><div> Thanks.<br></div><div><div dir="ltr" class="gmail_signature" data-smartmail="gmail_signature"><div dir="ltr">--Junchao Zhang<br></div></div></div><div><br></div></div><div><br></div><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Wed, Oct 2, 2024 at 3:11 PM 刘浪天 <<a href="mailto:langtian.liu@icloud.com">langtian.liu@icloud.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div><div><div><div>Hi Junchao,<br></div><div><br></div><div>I checked it; I did not use CUDA when installing the pure CPU version of petsc.<br></div><div>The configure.log is attached. Thank you for your reply.<br></div><div><br></div><div>Best wishes,<br></div><div style="white-space:pre-wrap">--------------------
Langtian Liu
Institute for Theoretical Physics, Justus-Liebig-University Giessen
Heinrich-Buff-Ring 16, 35392 Giessen Germany
email: <a href="mailto:langtian.liu@icloud.com">langtian.liu@icloud.com</a> Tel: (+49)641 99 33342<br></div></div><div><br></div><blockquote type="cite"><div>On Oct 2, 2024, at 5:05 PM, Junchao Zhang <<a href="mailto:junchao.zhang@gmail.com">junchao.zhang@gmail.com</a>> wrote:<br></div><div><br></div><div><br></div><div><div dir="ltr"><div dir="ltr"><div><br></div></div><div><br></div><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Wed, Oct 2, 2024 at 3:57 AM 刘浪天 via petsc-users <<a href="mailto:petsc-users@mcs.anl.gov">petsc-users@mcs.anl.gov</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div><div><div>Hi all,<br></div><div><br></div><div>I am using PETSc and SLEPc to solve the Faddeev equation of baryons. I have encountered a problem with the function MatCreateDense when changing from CPU to CPU-GPU computation.<br></div><div>At first, I wrote the code as a purely CPU computation in the following way, and it works.<br></div><div>```<br></div></div><div style="background-color:rgb(19,19,20);color:rgb(235,235,235)"><pre style="font-family:&quot;DejaVu Sans Mono&quot;,monospace;font-size:10.9pt">Eigen::MatrixXcd H_KER;
Eigen::MatrixXcd G0;

printf("\nCompute the propagator matrix.\n");
prop_matrix_nucleon_sc_av(Mn, pp_nodes, cos1_nodes);
printf("\nCompute the propagator matrix done.\n");
printf("\nCompute the kernel matrix.\n");
bse_kernel_nucleon_sc_av(Mn, pp_nodes, pp_weights, cos1_nodes, cos1_weights);
printf("\nCompute the kernel matrix done.\n");
printf("\nCompute the full kernel matrix by multiplying kernel and propagator matrix.\n");
MatrixXcd kernel_temp = H_KER * G0;
printf("\nCompute the full kernel matrix done.\n");

// Solve the eigen system with SLEPc
printf("\nSolve the eigen system in the rest frame.\n");
// Get the size of the Eigen matrix
int nRows = (int) kernel_temp.rows();
int nCols = (int) kernel_temp.cols();

// Create PETSc matrix and share the data of kernel_temp
Mat kernel;
PetscCall(MatCreateDense(PETSC_COMM_WORLD, PETSC_DECIDE, PETSC_DECIDE, nRows, nCols, kernel_temp.data(), &amp;kernel));
PetscCall(MatAssemblyBegin(kernel, MAT_FINAL_ASSEMBLY));
PetscCall(MatAssemblyEnd(kernel, MAT_FINAL_ASSEMBLY));</pre></div><div>```<br></div><div>Now I have changed the code to compute the propagator and kernel matrices on the GPU and then compute the largest eigenvalues on the CPU with SLEPc, as below.<br></div><div>```<br></div><div style="background-color:rgb(19,19,20);color:rgb(235,235,235)"><pre style="font-family:&quot;DejaVu Sans Mono&quot;,monospace;font-size:10.9pt">cuDoubleComplex *h_propmat;
cuDoubleComplex *h_kernelmat;

int dim = EIGHT * NP * NZ;
printf("\nCompute the propagator matrix.\n");
prop_matrix_nucleon_sc_av_cuda(Mn, pp_nodes.data(), cos1_nodes.data());
printf("\nCompute the propagator matrix done.\n");
printf("\nCompute the kernel matrix.\n");
kernel_matrix_nucleon_sc_av_cuda(Mn, pp_nodes.data(), pp_weights.data(), cos1_nodes.data(), cos1_weights.data());
printf("\nCompute the kernel matrix done.\n");
printf("\nCompute the full kernel matrix by multiplying kernel and propagator matrix.\n");
// Allocate the host buffer (column-major order) for the full kernel matrix
auto *h_kernel_temp = new cuDoubleComplex[dim*dim];
matmul_cublas_cuDoubleComplex(h_kernelmat, h_propmat, h_kernel_temp, dim, dim, dim);
printf("\nCompute the full kernel matrix done.\n");

// Solve the eigen system with SLEPc
printf("\nSolve the eigen system in the rest frame.\n");
int nRows = dim;
int nCols = dim;

// Create PETSc matrix and share the data of kernel_temp
Mat kernel;
auto* h_kernel = (std::complex&lt;double&gt;*)(h_kernel_temp);
PetscCall(MatCreateDense(PETSC_COMM_WORLD, PETSC_DECIDE, PETSC_DECIDE, nRows, nCols, h_kernel_temp, &amp;kernel));
PetscCall(MatAssemblyBegin(kernel, MAT_FINAL_ASSEMBLY));
PetscCall(MatAssemblyEnd(kernel, MAT_FINAL_ASSEMBLY));</pre></div><div>```<br></div><div>But in this case, the compiler told me that MatCreateDense expects the data pointer to be of type "thrust::complex&lt;double&gt;" instead of "std::complex&lt;double&gt;".<br></div><div>I am sure I configured and installed PETSc as a purely CPU build without GPU support, and this code is written in a host function.<br></div></div></blockquote><div>Please double-check that your PETSc build is purely CPU: the end of your configure.log shows whether petsc was configured with CUDA.<br></div><div>Since <i>thrust::complex&lt;double&gt; </i>only appears with a petsc/cuda configuration, I have this doubt. <br></div><div><br></div><div> <br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div><div><br></div><div>Why does the function change its behavior? Did you also encounter this problem when writing the CUDA code, and how did you solve it?<br></div><div>I tried to copy the data into a new thrust::complex&lt;double&gt; matrix, but this is very time-consuming since my matrix is very big. Is there a way to create the Mat from the original data without changing the data type to thrust::complex&lt;double&gt; in CUDA applications? Any response will be appreciated. Thank you!<br></div><div><br></div><div>Best wishes,<br></div><div>Langtian Liu<br></div><div><br></div><div>------<br></div><div>Institute for Theoretical Physics, Justus-Liebig-University Giessen<br></div><div>Heinrich-Buff-Ring 16, 35392 Giessen Germany<br></div><div><br></div></div></blockquote></div></div></div></blockquote></div><div><br></div></div></blockquote></div></div></blockquote></div><div><br></div></body></html>