<div dir="ltr"><div dir="ltr">On Thu, Oct 19, 2023 at 8:35 PM Jorge Nin <<a href="mailto:jorgenin@mit.edu">jorgenin@mit.edu</a>> wrote:<br></div><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div>Hi Mathew,<div><br></div><div><div>Thanks for the response. It actually seems like the matrix is very sparse (<font color="#363636"><span style="white-space:pre-wrap">0.99% sparsity from what I’m measuring). It’s an FEA solver so it would make sense.</span></font></div><div><font color="#363636"><span style="white-space:pre-wrap">My current guess is the optimization flags are making a large difference for the M1 Mac, but I am also surprised it makes such a huge difference.</span></font></div><div><font color="#363636"><span style="white-space:pre-wrap"><br></span></font></div><div><font color="#363636"><span style="white-space:pre-wrap">It’s why I was asking if there was a resource or another to use my own version of PETSc with Conda.</span></font></div></div></div></blockquote><div><br></div><div>We do not know how Conda works unfortunately.</div><div><br></div><div> Thanks,</div><div><br></div><div> Matt</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div><div><div><font color="#363636"><span style="white-space:pre-wrap">I believe a 2-3 x speed up is worth the hassle. </span></font></div><div><font color="#363636"><span style="white-space:pre-wrap"><br></span></font></div><div><font color="#363636"><span style="white-space:pre-wrap"><br></span></font></div><div><font color="#363636"><span style="white-space:pre-wrap">Best,</span></font></div><div><font color="#363636"><span style="white-space:pre-wrap">Jorge</span></font></div><div><font color="#363636" face="Menlo, Monaco, Courier New, monospace"><span style="white-space:pre-wrap"><br></span></font></div><div><font color="#363636" face="Menlo, Monaco, Courier New, monospace"><span style="white-space:pre-wrap"><br></span></font></div><div><font color="#363636" face="Menlo, Monaco, Courier New, monospace"><span style="white-space:pre-wrap"><br></span></font></div><div><blockquote type="cite"><div>On Oct 19, 2023, at 4:00 PM, Matthew Knepley <<a href="mailto:knepley@gmail.com" target="_blank">knepley@gmail.com</a>> wrote:</div><br><div><div dir="ltr"><div dir="ltr">On Thu, Oct 19, 2023 at 3:54 PM Jorge Nin <<a href="mailto:jorgenin@mit.edu" target="_blank">jorgenin@mit.edu</a>> wrote:<br></div><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">Hi,<br>
> I believe a 2-3x speedup is worth the hassle.
>
> Best,
> Jorge
>
> On Oct 19, 2023, at 4:00 PM, Matthew Knepley <knepley@gmail.com> wrote:
>
>> On Thu, Oct 19, 2023 at 3:54 PM Jorge Nin <jorgenin@mit.edu> wrote:
>>
>>> Hi,
>>> I was playing around with a self-compiled version and the Conda binary of PETSc on the same problem, on my M1 Mac.
>>> Interestingly, I found that the Conda binary solves the problem 2-3 times slower than the self-compiled version. (For context, I'm using the petsc4py Python interface.)
>>>
>>> I've attached two log views to show the comparison.
>>>
>>> I was mostly curious about the possible cause for this.
>>
>> All the time is in the LU numeric factorization. I don't know whether your matrix is sparse or dense. I am guessing it is dense and the two builds link different LAPACK implementations. If it is sparse, then the compiler options differ between the builds, but I would be surprised if that made this much difference.
>>
>>   Thanks,
>>
>>      Matt
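For what it's worth, viewing the solver objects shows exactly what each build ends up doing for that factorization; a rough petsc4py sketch, with A and b standing in for the assembled FEA system:

from petsc4py import PETSc

x = b.duplicate()
ksp = PETSc.KSP().create(PETSc.COMM_WORLD)
ksp.setOperators(A)
ksp.setType(PETSc.KSP.Type.PREONLY)   # direct solve: factor once, then back-substitute
pc = ksp.getPC()
pc.setType(PETSc.PC.Type.LU)
ksp.setFromOptions()                  # allow options like -pc_factor_mat_solver_type to take effect
ksp.solve(b, x)
ksp.view()                            # prints matrix type, LU package, ordering, fill
print("factor package:", pc.getFactorSolverType())

Running the same script under both installs with -log_view and comparing the MatLUFactorNum row of the two summaries isolates the factorization cost that differs between them.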
>>> I was also curious how I could use my own compiled version of PETSc in my Conda install?
>>>
>>> Best,
>>> Jorge
>>
>> --
>> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
>> -- Norbert Wiener
>>
>> https://www.cse.buffalo.edu/~knepley/
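In case it helps with the Conda question above: whichever way petsc4py ends up installed in the environment (one common route, assuming a standard setup, is to pip-install petsc4py with PETSC_DIR and PETSC_ARCH pointing at the self-compiled build), it can report which PETSc build it was actually compiled against, so it is easy to confirm that the self-compiled library is really the one in use:

import petsc4py

# Reports the PETSc build this petsc4py was compiled against,
# e.g. {'PETSC_DIR': '/path/to/petsc', 'PETSC_ARCH': 'arch-darwin-c-opt'}
print(petsc4py.get_config())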
--
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/