[petsc-users] MPI_AllReduce error with -xcore-avx2 flags

Jose E. Roman jroman at dsic.upv.es
Thu Jan 28 01:56:42 CST 2016

> El 28 ene 2016, a las 8:32, Bikash Kanungo <bikash at umich.edu> escribió:
> Hi,
> I was trying to use the BVOrthogonalize() function in SLEPc. For smaller problems (10-20 vectors of length < 20,000) I'm able to use it without any trouble. For larger problems (> 150 vectors of length > 400,000) the code aborts with an MPI_AllReduce error and the following message:
> Scalar value must be same on all processes, argument # 3.
> I suspected that the PETSc build might be faulty and tried a minimal rebuild, omitting the -xcore-avx2 flag I had previously used in CFLAGS and CXXFLAGS. That seemed to fix it.
> What perplexes me is that I have been using the same code, with -xcore-avx2 in the PETSc build, on a local cluster at the University of Michigan without any problem. It was only recently, when I moved to XSEDE's Comet machine, that I started getting this MPI_AllReduce error with -xcore-avx2.
> Do you have any clue why the same PETSc build behaves differently on two machines just because of a compiler flag?
> Regards,
> Bikash 
> -- 
> Bikash S. Kanungo
> PhD Student
> Computational Materials Physics Group
> Mechanical Engineering 
> University of Michigan

Without the complete error message I cannot tell the exact point where it is failing.
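A plausible explanation, not confirmed in this thread: the "Scalar value must be same on all processes" check fires when a scalar argument that PETSc expects to be collectively identical differs slightly between MPI ranks. Flags like -xcore-avx2 let the compiler vectorize floating-point reductions, which changes the accumulation order, and floating-point addition is not associative, so the "same" sum can come out differently depending on how it is grouped. The sketch below (plain Python, no MPI; the chunked sum is a stand-in for a hypothetical vectorized reduction) illustrates how reordering alone changes a result:

```python
# Floating-point addition is not associative: grouping the same terms
# differently can change the result. A vectorizing compiler (e.g. with
# -xcore-avx2) may regroup a reduction, so two ranks computing "the
# same" scalar by different code paths can disagree slightly.

xs = [0.1] * 10 + [1e16, -1e16]

# Sequential left-to-right accumulation (scalar code path).
seq = sum(xs)

# Chunked accumulation, mimicking a 4-wide vectorized reduction:
# sum each chunk of 4 elements, then combine the partial sums.
chunks = [sum(xs[i:i + 4]) for i in range(0, len(xs), 4)]
vec = sum(chunks)

print(seq, vec)   # the two orderings give visibly different results
print(seq == vec) # False
```

If this is what is happening, the fix is usually to ensure all ranks use one agreed-upon value (e.g. broadcast it) rather than recomputing it independently, or to use stricter floating-point settings when building.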
