[petsc-users] Preallocation Memory of Finite Element Method's Sparse Matrices

Barry Smith bsmith at mcs.anl.gov
Fri Mar 21 15:15:44 CDT 2014


  Thank you for reporting this. It was our error. In fact 4 is not enough under certain circumstances; consider the case where a process owns only a single degree of freedom (vertex): that vertex is then coupled to 8 other vertices, ALL on other processes. Thus we really need to use 8 instead of 4 as the maximum number of off-process couplings.
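
  For reference, here is a minimal sketch (not the actual ex3.c source; the matrix size N and the element loop are placeholders) of a preallocation that covers the worst-case partitioning of a 2-D bilinear element mesh: at most 9 nonzeros per row in the diagonal block (a vertex plus its 8 neighbors) and at most 8 in the off-diagonal block (all 8 neighbors owned by other processes).

    #include <petscmat.h>

    int main(int argc,char **args)
    {
      Mat            A;
      PetscInt       N = 36;   /* placeholder: vertices of a 6x6 grid */
      PetscErrorCode ierr;

      ierr = PetscInitialize(&argc,&args,NULL,NULL);if (ierr) return ierr;
      ierr = MatCreate(PETSC_COMM_WORLD,&A);CHKERRQ(ierr);
      ierr = MatSetSizes(A,PETSC_DECIDE,PETSC_DECIDE,N,N);CHKERRQ(ierr);
      ierr = MatSetType(A,MATMPIAIJ);CHKERRQ(ierr);
      /* 9 = full bilinear stencil (vertex + 8 neighbours) in the diagonal block;
         8 = worst case where every neighbour lives on another process */
      ierr = MatMPIAIJSetPreallocation(A,9,NULL,8,NULL);CHKERRQ(ierr);
      /* ... element assembly with MatSetValues() would go here ... */
      ierr = MatAssemblyBegin(A,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
      ierr = MatAssemblyEnd(A,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
      ierr = MatDestroy(&A);CHKERRQ(ierr);
      ierr = PetscFinalize();
      return ierr;
    }

  With the off-diagonal slot set to 8 rather than 4 or 5, runs such as "mpiexec -n 4 ./sketch" should no longer trigger the new-nonzero malloc error described below.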

  I have fixed this in master so it now runs on any number of processes.
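
  Separately, for the 3-D code mentioned in the quoted message below, a quick way to check whether such a failure really is a preallocation shortfall is to query the matrix after assembly. The helper below is only a sketch (the name ReportMallocs is hypothetical); one can also call MatSetOption(A,MAT_NEW_NONZERO_ALLOCATION_ERR,PETSC_FALSE) before assembly so the run continues (slowly) instead of aborting.

    #include <petscmat.h>

    /* Hypothetical helper: call right after MatAssemblyEnd(); a nonzero
       "mallocs" count on some process means the preallocation was too
       small for that partitioning. */
    static PetscErrorCode ReportMallocs(Mat A)
    {
      MatInfo        info;
      PetscMPIInt    rank;
      PetscErrorCode ierr;

      PetscFunctionBeginUser;
      ierr = MPI_Comm_rank(PetscObjectComm((PetscObject)A),&rank);CHKERRQ(ierr);
      ierr = MatGetInfo(A,MAT_LOCAL,&info);CHKERRQ(ierr);
      ierr = PetscPrintf(PETSC_COMM_SELF,"[%d] mallocs during assembly: %g\n",rank,(double)info.mallocs);CHKERRQ(ierr);
      PetscFunctionReturn(0);
    }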

   Barry

On Mar 21, 2014, at 9:11 AM, 吕超 <luchao at mail.iggcas.ac.cn> wrote:

> 
> 
> Yours faithfully:
> 
>      My last e-mail had some typos, sorry~
> 
>      The program src/ksp/ksp/examples/tutorials/ex3.c.html is about bilinear elements on the unit square for the Laplacian.
> 
>      After preallocation using   
> 
>      "ierr  = MatMPIAIJSetPreallocation(A,9,NULL,5,NULL);CHKERRQ(ierr); /* More than necessary */",
> 
>      The results of the commands "mpiexec -n 2 ./ex3" and "mpiexec -n 3 ./ex3" are "Norm of error 2.22327e-06 Iterations 6" and "Norm of error 3.12849e-07 Iterations 8". Both results are good!
> 
>      However, if I use "mpiexec -n 4 ./ex3" or 5, 6, 7, ... processes, the error "[2]PETSC ERROR: New nonzero at (4,29) caused a malloc!" appears (shown here for the 4-process run; other positions appear for different processes). To me this error is unbelievable: first, the preallocation is more than necessary, so how can a new malloc appear? Second, the vertex with global number 4 has no neighboring vertex with global number 29! This error has tortured me for a long time.
> 
>      This error seems meaningless; however, my recent 3-D finite element code cannot be computed with more processes owing to the new-nonzero malloc error! That is why I want to use 4 or more processes to compute ex3.c.
> 
>      Thank you for all your previous assistance, and I hope you have a good life!
> 
> Yours sincerely,
> 
> LV CHAO
> 
> 2014/3/21
> 
> 
> 
> 


