<div dir="ltr">The MR is merged to petsc/release.<div><br></div><div>BTW, in MatCreateDense, the data pointer has to be a host pointer. It is better to always use PetscScalar* (instead of std::complex<double>*) to do the cast. </div><div><br clear="all"><div><div dir="ltr" class="gmail_signature" data-smartmail="gmail_signature"><div dir="ltr">--Junchao Zhang</div></div></div><br></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Thu, Oct 3, 2024 at 3:03 AM 刘浪天 <<a href="mailto:langtian.liu@icloud.com">langtian.liu@icloud.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div><div><div><div>Okay. I see :D<br></div><div style="white-space:pre-wrap">--------------------
Langtian Liu
Institute for Theorectical Physics, Justus-Liebig-University Giessen
Heinrich-Buff-Ring 16, 35392 Giessen Germany
email: <a href="mailto:langtian.liu@icloud.com" target="_blank">langtian.liu@icloud.com</a>
Tel: (+49)641 99 33342<br></div></div><div><br></div><blockquote type="cite"><div>On Oct 3, 2024, at 9:58 AM, Jose E. Roman <<a href="mailto:jroman@dsic.upv.es" target="_blank">jroman@dsic.upv.es</a>> wrote:<br></div><div><br></div><div><br></div><div><div><div>You have to wait until the merge request has the label "Merged" instead of "Open".<br></div><div><br></div><div><br></div><blockquote type="cite"><div>El 3 oct 2024, a las 9:55, 刘浪天 via petsc-users <<a href="mailto:petsc-users@mcs.anl.gov" target="_blank">petsc-users@mcs.anl.gov</a>> escribió:<br></div><div><br></div><div>Hello Junchao,<br></div><div><br></div><div>Okay. Thank you for helping find this bug. I pull the newest version of petsc today. It seems this error has not been fixed in the present release version. Maybe I should wait for some days.<br></div><div><br></div><div>Best wishes,<br></div><div>Langtian<br></div><div><br></div><div><br></div><blockquote type="cite"><div>On Oct 3, 2024, at 12:12 AM, Junchao Zhang <<a href="mailto:junchao.zhang@gmail.com" target="_blank">junchao.zhang@gmail.com</a>> wrote:<br></div><div><br></div><div><br></div><div>Hi, Langtian,<br></div><div> Thanks for the configure.log and I now see what's wrong. Since you compiled your code with nvcc, we mistakenly thought petsc was configured with cuda.<br></div><div> It is fixed in <a href="https://urldefense.us/v3/__https://gitlab.com/petsc/petsc/-/merge_requests/7909__;!!G_uCfscf7eWS!ckm_ynevPO4eoYiGVdofOAnizLKzlNsHf92YV6b-QZr4V0eo-HEk7MWC5flyF7VZFiqAwC3eoy-BPWGyUs0bDjx4iNhW$" rel="noopener noreferrer" target="_blank">https://gitlab.com/petsc/petsc/-/merge_requests/7909</a>, which will be in petsc/release and main.<br></div><div> <br></div><div> Thanks.<br></div><div>--Junchao Zhang<br></div><div><br></div><div><br></div><div>On Wed, Oct 2, 2024 at 3:11 PM 刘浪天 <<a href="mailto:langtian.liu@icloud.com" target="_blank">langtian.liu@icloud.com</a>> wrote:<br></div><div>Hi Junchao,<br></div><div><br></div><div>I check it, I haven't use cuda when install pure cpu version of petsc.<br></div><div>The configure.log has been attached. Thank you for your reply.<br></div><div><br></div><div>Best wishes,<br></div><div>-------------------- Langtian Liu Institute for Theorectical Physics, Justus-Liebig-University Giessen Heinrich-Buff-Ring 16, 35392 Giessen Germany email: <a href="mailto:langtian.liu@icloud.com" target="_blank">langtian.liu@icloud.com</a> Tel: (+49)641 99 33342<br></div><div><br></div><blockquote type="cite"><div>On Oct 2, 2024, at 5:05 PM, Junchao Zhang <<a href="mailto:junchao.zhang@gmail.com" target="_blank">junchao.zhang@gmail.com</a>> wrote:<br></div><div><br></div><div><br></div><div><br></div><div><br></div><div>On Wed, Oct 2, 2024 at 3:57 AM 刘浪天 via petsc-users <<a href="mailto:petsc-users@mcs.anl.gov" target="_blank">petsc-users@mcs.anl.gov</a>> wrote:<br></div><div>Hi all,<br></div><div><br></div><div>I am using the PETSc and SLEPc to solve the Faddeev equation of baryons. I encounter a problem of function MatCreateDense when changing from CPU to CPU-GPU computations.<br></div><div>At first, I write the codes in purely CPU computation in the following way and it works.<br></div><div>```<br></div><div>Eigen::MatrixXcd H_KER;<br></div><div>Eigen::MatrixXcd G0;<br></div><div>printf("\nCompute the propagator matrix.\n");<br></div><div>prop_matrix_nucleon_sc_av(Mn, pp_nodes, cos1_nodes);<br></div><div>printf("\nCompute the propagator matrix done.\n");<br></div><div>printf("\nCompute the kernel matrix.\n");<br></div><div>bse_kernel_nucleon_sc_av(Mn, pp_nodes, pp_weights, cos1_nodes, cos1_weights);<br></div><div>printf("\nCompute the kernel matrix done.\n");<br></div><div>printf("\nCompute the full kernel matrix by multiplying kernel and propagator matrix.\n");<br></div><div>MatrixXcd kernel_temp = H_KER * G0;<br></div><div>printf("\nCompute the full kernel matrix done.\n");<br></div><div><br></div><div>// Solve the eigen system with SLEPc<br></div><div>printf("\nSolve the eigen system in the rest frame.\n");<br></div><div>// Get the size of the Eigen matrix<br></div><div>int nRows = (int) kernel_temp.rows();<br></div><div>int nCols = (int) kernel_temp.cols();<br></div><div>// Create PETSc matrix and share the data of kernel_temp<br></div><div>Mat kernel;<br></div><div>PetscCall(MatCreateDense(PETSC_COMM_WORLD, PETSC_DECIDE, PETSC_DECIDE, nRows, nCols, kernel_temp.data(), &kernel));<br></div><div>PetscCall(MatAssemblyBegin(kernel, MAT_FINAL_ASSEMBLY));<br></div><div>PetscCall(MatAssemblyEnd(kernel, MAT_FINAL_ASSEMBLY));<br></div><div>```<br></div><div>Now I change to compute the propagator and kernel matrices in GPU and then compute the largest eigenvalues in CPU using SLEPc in the ways below.<br></div><div>```<br></div><div>cuDoubleComplex *h_propmat;<br></div><div>cuDoubleComplex *h_kernelmat;<br></div><div>int dim = EIGHT * NP * NZ;<br></div><div>printf("\nCompute the propagator matrix.\n");<br></div><div>prop_matrix_nucleon_sc_av_cuda(Mn, pp_nodes.data(), cos1_nodes.data());<br></div><div>printf("\nCompute the propagator matrix done.\n");<br></div><div>printf("\nCompute the kernel matrix.\n");<br></div><div>kernel_matrix_nucleon_sc_av_cuda(Mn, pp_nodes.data(), pp_weights.data(), cos1_nodes.data(), cos1_weights.data());<br></div><div>printf("\nCompute the kernel matrix done.\n");<br></div><div>printf("\nCompute the full kernel matrix by multiplying kernel and propagator matrix.\n");<br></div><div>// Map the raw arrays to Eigen matrices (column-major order)<br></div><div>auto *h_kernel_temp = new cuDoubleComplex [dim*dim];<br></div><div>matmul_cublas_cuDoubleComplex(h_kernelmat,h_propmat,h_kernel_temp,dim,dim,dim);<br></div><div>printf("\nCompute the full kernel matrix done.\n");<br></div><div><br></div><div>// Solve the eigen system with SLEPc<br></div><div>printf("\nSolve the eigen system in the rest frame.\n");<br></div><div>int nRows = dim;<br></div><div>int nCols = dim;<br></div><div>// Create PETSc matrix and share the data of kernel_temp<br></div><div>Mat kernel;<br></div><div>auto* h_kernel = (std::complex<double>*)(h_kernel_temp);<br></div><div>PetscCall(MatCreateDense(PETSC_COMM_WORLD, PETSC_DECIDE, PETSC_DECIDE, nRows, nCols, h_kernel_temp, &kernel));<br></div><div>PetscCall(MatAssemblyBegin(kernel, MAT_FINAL_ASSEMBLY));<br></div><div>PetscCall(MatAssemblyEnd(kernel, MAT_FINAL_ASSEMBLY));<br></div><div>But in this case, the compiler told me that the MatCreateDense function uses the data pointer as type of "thrust::complex<double>" instead of "std::complex<double>".<br></div><div>I am sure I only configured and install PETSc in purely CPU without GPU and this codes are written in the host function.<br></div><div>Please double check that your PETSc was purely CPU configured. You can find it at the end of your configure.log to see if petsc is configured with CUDA.<br></div><div>Since thrust::complex<double> is a result of a petsc/cuda configuration, I have this doubt. <br></div><div><br></div><div> <br></div><div><br></div><div>Why the function changes its behavior? Did you also meet this problem when writing the cuda codes and how to solve this problem.<br></div><div>I tried to copy the data to a new thrust::complex<double> matrix but this is very time consuming since my matrix is very big. Is there a way to create the Mat from the original data without changing the data type to thrust::complex<double> in the cuda applications? Any response will be appreciated. Thank you!<br></div><div><br></div><div>Best wishes,<br></div><div>Langtian Liu<br></div><div><br></div><div>------<br></div><div>Institute for Theorectical Physics, Justus-Liebig-University Giessen<br></div><div>Heinrich-Buff-Ring 16, 35392 Giessen Germany<br></div></blockquote></blockquote></blockquote></div></div></blockquote></div><div><br></div></div></blockquote></div>