<div dir="ltr"><div><div><div><div><div><div>Hi,<br><br></div>I was trying to use BVOrthogonalize() function in SLEPc. For smaller problems (10-20 vectors of length < 20,000) I'm able to use it without any trouble. For larger problems ( > 150 vectors of length > 400,000) the code aborts citing an MPI_AllReduce error with following message:<br><br>Scalar value must be same on all processes, argument # 3.<br><br></div>I was skeptical that the PETSc compilation might be faulty and tried to build a minimalistic version omitting the previously used -xcore-avx2 flags in CFLAGS abd CXXFLAGS. That seemed to have done the cure. <br><br></div>What perplexes me is that I have been using the same code with -xcore-avx2 flags in PETSc build on a local cluster at the University of Michigan without any problem. It is only until recently when I moved to Xsede's Comet machine, that I started getting this MPI_AllReduce error with -xcore-avx2.<br><br></div>Do you have any clue on why the same PETSc build fails on two different machines just because of a build flag?<br><br></div>Regards,<br></div>Bikash <br clear="all"><div><div><div><div><div><div><div><br>-- <br><div class="gmail_signature"><div dir="ltr"><div><div><div><div><font color="#666666">Bikash S. Kanungo<br></font></div><font color="#666666">PhD Student<br></font></div><font color="#666666">Computational Materials Physics Group<br></font></div><font color="#666666">Mechanical Engineering <br></font></div><font color="#666666">University of Michigan<br><br></font></div></div>
I suspected that the PETSc build might be faulty and tried building a minimal version, omitting the -xcore-avx2 flag I had previously used in CFLAGS and CXXFLAGS. That seems to have cured it.

What perplexes me is that I have been using the same code, with -xcore-avx2 in the PETSc build, on a local cluster at the University of Michigan without any problem. It was only recently, when I moved to XSEDE's Comet machine, that I started getting this MPI_AllReduce error with -xcore-avx2.

Do you have any clue why the same PETSc build behaves differently on two different machines just because of a build flag?

Regards,
Bikash

-- 
Bikash S. Kanungo
PhD Student
Computational Materials Physics Group
Mechanical Engineering
University of Michigan