[petsc-users] Slow MatAssembly and MatMatMult

Barry Smith bsmith at mcs.anl.gov
Wed Feb 17 20:33:24 CST 2016


  This is an absolutely tiny problem for 480 processors; I am not surprised by the terrible performance. You should run this sub-calculation on a small subset of the processes.
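
For example, here is a minimal (untested) sketch of restricting the small product to a sub-communicator. The subset size of 16 ranks and the preallocation numbers are illustrative assumptions, not values taken from this thread; error checking (CHKERRQ) is omitted for brevity.

#include <petscmat.h>

int main(int argc, char **argv)
{
  MPI_Comm    subcomm = MPI_COMM_NULL;
  PetscMPIInt rank;
  PetscInt    N = 2500;
  const int   nsub = 16;   /* assumed subset size; tune for your machine */

  PetscInitialize(&argc, &argv, NULL, NULL);
  MPI_Comm_rank(PETSC_COMM_WORLD, &rank);

  /* Ranks 0..nsub-1 form the sub-communicator; the rest get MPI_COMM_NULL. */
  MPI_Comm_split(PETSC_COMM_WORLD, rank < nsub ? 0 : MPI_UNDEFINED, rank, &subcomm);

  if (subcomm != MPI_COMM_NULL) {
    Mat A, B, C;

    /* The small matrices live only on the sub-communicator. */
    MatCreateAIJ(subcomm, PETSC_DECIDE, PETSC_DECIDE, N, N, 50, NULL, 50, NULL, &A);
    MatCreateDense(subcomm, PETSC_DECIDE, PETSC_DECIDE, N, N, NULL, &B);

    /* ... set the entries of A and B here ... */

    MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);  MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);
    MatAssemblyBegin(B, MAT_FINAL_ASSEMBLY);  MatAssemblyEnd(B, MAT_FINAL_ASSEMBLY);

    /* Assembly and the product now involve only nsub ranks instead of all 480. */
    MatMatMult(A, B, MAT_INITIAL_MATRIX, PETSC_DEFAULT, &C);

    MatDestroy(&C);  MatDestroy(&B);  MatDestroy(&A);
    MPI_Comm_free(&subcomm);
  }
  PetscFinalize();
  return 0;
}

Ranks outside the subset simply skip the block; if the rest of the code needs the product on PETSC_COMM_WORLD, you have to communicate it back yourself.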

  Barry

> On Feb 17, 2016, at 7:03 PM, Bikash Kanungo <bikash at umich.edu> wrote:
> 
> Hi,
> 
> I have two small (2500x2500) matrices parallelized across 480 processors: one is an MPIAIJ matrix and the other is an MPIDENSE matrix. I perform a MatMatMult involving these two matrices. I have tried these operations on two machines, the local cluster at the University of Michigan and the XSEDE Comet machine. Comet takes 10-20 times longer in the steps involving MatAssembly and MatMatMult of the aforementioned matrices. Other PETSc MatMult operations in the same code, involving much larger matrices (4 million x 4 million), show similar timings on both machines; it is only the small parallel matrices whose timings are inconsistent.
> 
> I used the same compilers and MPI libraries on both machines, except that I suppressed the "avx2" flag on Comet. I believe avx2 affects floating-point operations, not communication. I would like to know what might be causing these inconsistencies only in the case of the small matrices. Are there any network settings that I can look into and compare?
> 
> Regards,
> Bikash  
> 
> -- 
> Bikash S. Kanungo
> PhD Student
> Computational Materials Physics Group
> Mechanical Engineering 
> University of Michigan
> 


