<div dir="ltr"><div dir="ltr"><br></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Tue, Aug 25, 2020 at 9:01 PM Sajid Ali <<a href="mailto:sajidsyed2021@u.northwestern.edu">sajidsyed2021@u.northwestern.edu</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div><div dir="ltr"><div>Hi Barry, <br><br></div>Thanks for the explanation! Removing the calls to
PetscMalloc(Re)SetCUDAHost solved that issue.<br><br></div><div>Just to clarify, all PetscMalloc(s) happen on the host and there is no special PetscMalloc for device memory allocation ? (Say for an operation sequence PetscMalloc1(N, &ptr), VecCUDAGetArray(cudavec, &ptr) )<br></div><div><br></div></div></div></blockquote><div>PetscMallocSetCUDAHost() instructs PETSc to use cudaHostAlloc() for all subsequent allocations, i.e., to allocate pinned (non-pageable) host memory. <br></div><div>Yes; there is no variant of PetscMalloc() that calls cudaMalloc(). </div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div><div></div></div>Thank You,<br><div><div dir="ltr"><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div style="font-size:12.8px">Sajid Ali | PhD Candidate<br></div><div style="font-size:12.8px">Applied Physics<br></div><div style="font-size:12.8px">Northwestern University</div><div style="font-size:12.8px"><a href="http://s-sajid-ali.github.io" target="_blank">s-sajid-ali.github.io</a></div></div></div></div></div></div></div></div></div>
</blockquote></div></div>
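<div>The distinction discussed above can be sketched as follows. This is a minimal sketch, not a definitive example: it assumes a PETSc build configured with CUDA, uses the ierr/CHKERRQ error-checking idiom of that era, and the vector size <code>n</code> is arbitrary.</div>

```c
#include <petscvec.h>

int main(int argc, char **argv)
{
  Vec            x;
  PetscScalar   *host, *dev;
  PetscInt       n = 16;
  PetscErrorCode ierr;

  ierr = PetscInitialize(&argc, &argv, NULL, NULL); if (ierr) return ierr;

  /* PetscMalloc1() always allocates host memory (plain malloc by default) */
  ierr = PetscMalloc1(n, &host); CHKERRQ(ierr);

  /* With PetscMallocSetCUDAHost() in effect, the same call goes through
     cudaHostAlloc() and returns pinned (non-pageable) host memory:
       ierr = PetscMallocSetCUDAHost();   CHKERRQ(ierr);
       ierr = PetscMalloc1(n, &pinned);   CHKERRQ(ierr);
       ierr = PetscMallocResetCUDAHost(); CHKERRQ(ierr);                    */

  /* Device memory is never obtained from PetscMalloc(); it is owned by the
     Vec and exposed through VecCUDAGetArray() */
  ierr = VecCreateSeqCUDA(PETSC_COMM_SELF, n, &x); CHKERRQ(ierr);
  ierr = VecCUDAGetArray(x, &dev); CHKERRQ(ierr);  /* dev is a device pointer */
  ierr = VecCUDARestoreArray(x, &dev); CHKERRQ(ierr);

  ierr = PetscFree(host); CHKERRQ(ierr);
  ierr = VecDestroy(&x); CHKERRQ(ierr);
  ierr = PetscFinalize();
  return ierr;
}
```

<div>Note that passing a PetscMalloc'd pointer to VecCUDAGetArray(), as in the operation sequence quoted above, mixes the two worlds: VecCUDAGetArray() overwrites the pointer with the vector's own device array, so the earlier host allocation is simply leaked rather than reused.</div>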