[petsc-users] MPI_AllReduce error with -xcore-avx2 flags

Bikash Kanungo bikash at umich.edu
Thu Jan 28 01:32:15 CST 2016


I was trying to use the BVOrthogonalize() function in SLEPc. For smaller
problems (10-20 vectors of length < 20,000) I'm able to use it without any
trouble. For larger problems (> 150 vectors of length > 400,000) the code
aborts, citing an MPI_AllReduce error with the following message:

Scalar value must be same on all processes, argument # 3.
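For reference, here is a minimal sketch of the call pattern I'm using, written against the SLEPc BV interface. The sizes and the random fill are illustrative stand-ins, not taken from my actual application:

```c
/* Minimal sketch: orthogonalize the columns of a BV in parallel.
   Sizes are illustrative (the failing case had >150 vectors of
   global length >400,000). Build with the SLEPc makefile rules. */
#include <slepcbv.h>

int main(int argc, char **argv)
{
  BV             bv;
  PetscInt       n = 400000;  /* global vector length (assumed) */
  PetscInt       m = 150;     /* number of columns (assumed)    */
  PetscErrorCode ierr;

  ierr = SlepcInitialize(&argc, &argv, NULL, NULL); if (ierr) return ierr;
  ierr = BVCreate(PETSC_COMM_WORLD, &bv); CHKERRQ(ierr);
  ierr = BVSetSizes(bv, PETSC_DECIDE, n, m); CHKERRQ(ierr);
  ierr = BVSetFromOptions(bv); CHKERRQ(ierr);
  ierr = BVSetRandom(bv); CHKERRQ(ierr);          /* fill with random entries */
  ierr = BVOrthogonalize(bv, NULL); CHKERRQ(ierr);/* aborts here on the large case */
  ierr = BVDestroy(&bv); CHKERRQ(ierr);
  ierr = SlepcFinalize();
  return ierr;
}
```

The "Scalar value must be same on all processes" check fires when a reduction sees inconsistent local contributions, which is why a compiler-flag difference across ranks (or between libraries built with different flags) is a plausible suspect.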

I suspected that the PETSc build might be faulty and tried a minimal
build that omits the -xcore-avx2 flag previously used in CFLAGS and
CXXFLAGS. That seems to have fixed the problem.

What perplexes me is that I have been using the same code, with
-xcore-avx2 in the PETSc build flags, on a local cluster at the
University of Michigan without any problem. It was only recently, after
I moved to XSEDE's Comet machine, that I started getting this
MPI_AllReduce error.

Do you have any clue why the same PETSc build behaves differently on two
different machines depending on a single build flag?


Bikash S. Kanungo
PhD Student
Computational Materials Physics Group
Mechanical Engineering
University of Michigan
