[petsc-dev] Scaling of GAMG - SA
Barry Smith
bsmith at mcs.anl.gov
Fri Sep 9 13:06:43 CDT 2011
Mark,
Yes, these are things where one would think you could get better performance than we get, but I am somewhat at a loss as to how to improve them. Perhaps Boyana has ideas.
Barry
On Sep 9, 2011, at 12:10 PM, Mark F. Adams wrote:
>>>>
>>>> Mark,
>>>>
>>>> So is the issue that the scaling for more processes is OK but we are using too much memory "per grid point"? What are the biggest space hogs? For example, when you do the matrix-matrix product you only need the graph, not the numerical values; should we provide a way to do that product without storing the numerical values (a big memory saver)? Other things?
>>>>
>
> I have added perfect preallocation for the fine grid matrix and was able to run the 50^3 test on 512 processes (it failed before), so it looks like the memory manager is not perfect at recovering freed memory.
>
> As far as memory hogs and matrix-matrix products: yes, I should be able to get away w/o values for half of my matrix-matrix products. The other half are for the prolongation smoothing in SA.
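>
> For reference, the perfect preallocation amounts to something like this sketch (d_nnz/o_nnz are the exact per-row counts computed from the fine-grid connectivity; the names here are illustrative, not my actual code):
>
>   Mat A;
>   ierr = MatCreate(comm,&A);CHKERRQ(ierr);
>   ierr = MatSetSizes(A,m,m,M,M);CHKERRQ(ierr);
>   ierr = MatSetType(A,MATMPIAIJ);CHKERRQ(ierr);
>   /* d_nnz[i]/o_nnz[i]: exact diagonal-/off-diagonal-block nonzero counts
>      for local row i, so assembly never mallocs or over-allocates */
>   ierr = MatMPIAIJSetPreallocation(A,0,d_nnz,0,o_nnz);CHKERRQ(ierr);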
>
> Also, I'm seeing some bad times for this:
>
> MatPtAPSymbolic 4 1.0 2.2037e+01 1.0 0.00e+00 0.0 1.1e+05 1.2e+05 6.0e+01 20 0 2 16 3 27 0 4 30 3 0
> MatPtAPNumeric 4 1.0 1.5283e+01 1.0 3.56e+09 1.2 7.1e+04 6.5e+04 4.4e+01 14 25 1 6 2 19 70 3 11 2 107817
>
> The numeric part ran at about 2/3 the flop rate of the matrix-vector product. RAP has higher arithmetic intensity, so you can get higher flop rates with it, but this is OK. (I get higher flop rates in Prometheus, but there I have BAIJ matrices and unroll the inner loops.) The symbolic part, however, is not looking good. The symbolic cost can be amortized: you can get away without changing the non-zero structure even if the operator changes, as long as it does not change too much. So this is not a big issue now, but something to be aware of.
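>
> The amortization would look something like this sketch (the fill estimate of 2.0 is just a placeholder):
>
>   /* first setup: pay for the symbolic product once */
>   ierr = MatPtAP(A,P,MAT_INITIAL_MATRIX,2.0,&Ac);CHKERRQ(ierr);
>   /* later setups, keeping the same nonzero pattern: numeric phase only */
>   ierr = MatPtAP(A,P,MAT_REUSE_MATRIX,2.0,&Ac);CHKERRQ(ierr);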
>
> Mark
>
>>>> Barry
>>>>
>>>>>
>>>>> Mark
>>>>>
>>>>> On Sep 8, 2011, at 3:23 PM, Barry Smith wrote:
>>>>>
>>>>>>
>>>>>> Mark,
>>>>>>
>>>>>> To help track down memory hogs, what about running with -malloc and -malloc_log? Run a given size problem on a given number of nodes and save the output; now run on a larger number of nodes and compare the output. It may help you quickly see what code is requesting much larger blocks on larger problems or at higher process counts.
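>>>>>>
>>>>>> For example (a sketch; the problem sizes are illustrative):
>>>>>>
>>>>>>   mpiexec -n 8  ./ex56 -ne 63  -malloc -malloc_log > malloc_n8.txt
>>>>>>   mpiexec -n 64 ./ex56 -ne 127 -malloc -malloc_log > malloc_n64.txt
>>>>>>
>>>>>> and then diff the two logs.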
>>>>>>
>>>>>> Report all problems to petsc-maint :-)
>>>>>>
>>>>>>
>>>>>> Barry
>>>>>>
>>>>>> On Sep 8, 2011, at 2:15 PM, Mark F. Adams wrote:
>>>>>>
>>>>>>> I'm looking for memory bottlenecks and am not finding any egregious error, although I have not dug all the way into the errors that I am getting.
>>>>>>>
>>>>>>> One run ran out of memory in the section of code beginning at about line 1335 of mpiov.c:
>>>>>>>
>>>>>>> /* Assemble the matrices */
>>>>>>> /* First assemble the local rows */
>>>>>>> {
>>>>>>> PetscInt ilen_row,*imat_ilen,*imat_j,*imat_i,old_row;
>>>>>>> PetscScalar *imat_a;
>>>>>>>
>>>>>>> This is puzzling because I see only MatGetRow as a possible source of memory allocation.
>>>>>>>
>>>>>>> I increased the problem size and the job died earlier, in:
>>>>>>>
>>>>>>> /* create mpi matrix C by concatinating C_seq */
>>>>>>> ierr = PetscObjectReference((PetscObject)mult->C_seq);CHKERRQ(ierr); /* prevent C_seq being destroyed by MatMerge() */
>>>>>>> ierr = MatMerge(((PetscObject)A)->comm,mult->C_seq,B->cmap->n,MAT_INITIAL_MATRIX,C);CHKERRQ(ierr);
>>>>>>>
>>>>>>> in mpimatmatmult.c
>>>>>>>
>>>>>>> It is quite tedious to track these down: mallocs don't fail, the OOM killer just sweeps in, so printf is about the only option.
>>>>>>>
>>>>>>> One thing that is disconcerting is that with 512 processors I could only run the test with about 100,000 vertices per core, while on 8 processors I was able to run about 420,000 vertices per core (this is a weak-scaling study).
>>>>>>>
>>>>>>> These 512-processor runs have 64,000,000 vertices and each core has about 1.25 GB of memory, so there could be a global-sized malloc (a PetscInt per vertex, 64,000,000 x 4 bytes, would be 1/4 GB) lurking in the code someplace, but I have not been able to find it.
>>>>>>>
>>>>>>> Mark
>>>>>>>
>>>>>>> On Sep 7, 2011, at 9:49 AM, Hong Zhang wrote:
>>>>>>>
>>>>>>>> Mark:
>>>>>>>> Testing
>>>>>>>> petsc-dev/src/ksp/ksp/examples/tutorials>mpiexec -n 8 ./ex56 -ne 81
>>>>>>>> -alpha 1.e-3 -ksp_type cg -pc_gamg_type sa
>>>>>>>>
>>>>>>>> The run hangs (-ne 71 runs well).
>>>>>>>> 'top' indicates that when the run hangs, each process takes approx.
>>>>>>>> 12% MEM and very little CPU. I then ran it with '-start_in_debugger',
>>>>>>>> hit control-C in all debugger windows when it hung, and got the
>>>>>>>> following stack:
>>>>>>>> (gdb) wehre
>>>>>>>> Undefined command: "wehre". Try "help".
>>>>>>>> (gdb) where
>>>>>>>> #0 0x00007fd6f6421f58 in *__GI___poll (fds=0x30f1ce0, nfds=8,
>>>>>>>> timeout=<value optimized out>) at ../sysdeps/unix/sysv/linux/poll.c:83
>>>>>>>> #1 0x000000000131beda in MPIDU_Sock_wait (sock_set=0x30f1af0,
>>>>>>>> millisecond_timeout=-1, eventp=0x7fff234c13e0) at sock_wait.i:124
>>>>>>>> #2 0x00000000012e7e4a in MPIDI_CH3i_Progress_wait (
>>>>>>>> blocking=<value optimized out>, state=0x7fff234c1420) at ch3_progress.c:185
>>>>>>>> #3 MPIDI_CH3I_Progress (blocking=<value optimized out>, state=0x7fff234c1420)
>>>>>>>> at ch3_progress.c:891
>>>>>>>> #4 0x00000000012b12a7 in MPIC_Wait (request_ptr=0x3457598) at helper_fns.c:539
>>>>>>>> #5 0x00000000012b2285 in MPIC_Recv (buf=0x30f6d70, count=1,
>>>>>>>> datatype=<value optimized out>, source=<value optimized out>,
>>>>>>>> tag=<value optimized out>, comm=<value optimized out>,
>>>>>>>> status=0x7fff234c1610) at helper_fns.c:103
>>>>>>>> #6 0x00000000012b2419 in MPIC_Recv_ft (buf=0x30f1ce0, count=8, datatype=-1,
>>>>>>>> source=3, tag=51342704, comm=1, status=0x7fff234c1610,
>>>>>>>> errflag=0x7fff234c17fc) at helper_fns.c:615
>>>>>>>> #7 0x00000000012a8b2a in MPIR_Reduce_binomial (sendbuf=<value optimized out>,
>>>>>>>> recvbuf=<value optimized out>, count=<value optimized out>,
>>>>>>>> datatype=1275069445, op=<value optimized out>, root=<value optimized out>,
>>>>>>>> comm_ptr=0x188a848, errflag=0x7fff234c17fc) at reduce.c:139
>>>>>>>> #8 MPIR_Reduce_intra (sendbuf=<value optimized out>,
>>>>>>>> recvbuf=<value optimized out>, count=<value optimized out>,
>>>>>>>> datatype=1275069445, op=<value optimized out>, root=<value optimized out>,
>>>>>>>> ---Type <return> to continue, or q <return> to quit---
>>>>>>>> comm_ptr=0x188a848, errflag=0x7fff234c17fc) at reduce.c:891
>>>>>>>> #9 0x00000000012a2eb3 in MPIR_Allreduce_intra (sendbuf=0x45ed278,
>>>>>>>> recvbuf=0x7fff234c1850, count=1, datatype=1275069445,
>>>>>>>> op=<value optimized out>, comm_ptr=0x188a710, errflag=0x7fff234c17fc)
>>>>>>>> at allreduce.c:174
>>>>>>>> #10 0x00000000012a3b6b in PMPI_Allreduce (sendbuf=0x45ed278,
>>>>>>>> recvbuf=0x7fff234c1850, count=-1, datatype=-1, op=51342704,
>>>>>>>> comm=<value optimized out>) at allreduce.c:840
>>>>>>>> #11 0x000000000069dcb2 in MatAssemblyBegin_MPIAIJ (mat=0x45ed030,
>>>>>>>> mode=MAT_FINAL_ASSEMBLY)
>>>>>>>> at /sandbox/hzhang/petsc-dev/src/mat/impls/aij/mpi/mpiaij.c:616
>>>>>>>> #12 0x000000000072f808 in MatAssemblyBegin (mat=0x45ed030,
>>>>>>>> type=MAT_FINAL_ASSEMBLY)
>>>>>>>> at /sandbox/hzhang/petsc-dev/src/mat/interface/matrix.c:4766
>>>>>>>> #13 0x0000000000a814fc in createProlongation (a_Amat=0x30ff2a0,
>>>>>>>> a_data=0x7fd6e0259760, a_dim=3, a_data_cols=6, a_useSA=PETSC_TRUE,
>>>>>>>> a_level=0, a_bs=0x7fff234c1f34, a_P_out=0x7fff234c1c58,
>>>>>>>> a_data_out=0x7fff234c1ee0, a_isOK=0x7fff234c1f10, a_emax=0x7fff234c1b60)
>>>>>>>> at /sandbox/hzhang/petsc-dev/src/ksp/pc/impls/gamg/createProlongation.c:1574
>>>>>>>> #14 0x0000000000a723aa in PCSetUp_GAMG (a_pc=0x31659d0)
>>>>>>>> at /sandbox/hzhang/petsc-dev/src/ksp/pc/impls/gamg/gamg.c:515
>>>>>>>> #15 0x0000000000797c19 in PCSetUp (pc=0x31659d0)
>>>>>>>> ---Type <return> to continue, or q <return> to quit---
>>>>>>>> at /sandbox/hzhang/petsc-dev/src/ksp/pc/interface/precon.c:819
>>>>>>>> #16 0x00000000007a8872 in KSPSetUp (ksp=0x312e650)
>>>>>>>> at /sandbox/hzhang/petsc-dev/src/ksp/ksp/interface/itfunc.c:260
>>>>>>>> #17 0x000000000040991c in main (argc=10, args=0x7fff234c5938) at ex56.c:210
>>>>>>>>
>>>>>>>> Is the problem in PMPI_Allreduce()?
>>>>>>>>
>>>>>>>> Hong
>>>>>>>>
>>>>>>>>> Whoops, I had a bug in my print statements; I'm not sure where the code is dying.
>>>>>>>>> Mark
>>>>>>>>>
>>>>>>>>> On Sep 6, 2011, at 11:52 AM, Hong Zhang wrote:
>>>>>>>>>
>>>>>>>>>> Mark,
>>>>>>>>>> I realized that the global index set is allocated in
>>>>>>>>>>
>>>>>>>>>>> ierr = ISGetIndices(iscol[i],&icol[i]);CHKERRQ(ierr);
>>>>>>>>>>
>>>>>>>>>> I made a change that skips this when all columns are wanted.
>>>>>>>>>> Please pull and let me know what you get.
>>>>>>>>>> I tested it with 'mpiexec -n 8 ./ex56 -ne 71 -pc_gamg_type sa' successfully.
>>>>>>>>>>
>>>>>>>>>> Thanks for your patience,
>>>>>>>>>> Hong
>>>>>>>>>>
>>>>>>>>>>> PetscPrintf(PETSC_COMM_SELF,"\t%s ncol[0]=%d\n",__FUNCT__,ncol[0]);
>>>>>>>>>>> if (ncol[0] == 262144) /* 64^3; note ^ is XOR in C, so write the number out */ print the stack
>>>>>>>>>>>
>>>>>>>>>>> Then run ex56 with 8 processors and '-ne 63 -pc_gamg_type sa'. The global size of this (bs=1) matrix is 64^3 = 262,144, so if you see ncol[0]=262144 you know you have an error.
>>>>>>>>>>>
>>>>>>>>>>> Actually, it might be better to run with -info and search for this number (or the actual number of equations: 3 * 64^3 = 786,432).
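>>>>>>>>>>>
>>>>>>>>>>> e.g., something like (a sketch):
>>>>>>>>>>>
>>>>>>>>>>>   mpiexec -n 8 ./ex56 -ne 63 -pc_gamg_type sa -info | grep -e 262144 -e 786432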
>>>>>>>>>>>
>>>>>>>>>>> Mark
>>>>>>>>>>>
>>>>>>>>>>> On Sep 5, 2011, at 11:02 PM, Barry Smith wrote:
>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> On Sep 5, 2011, at 9:55 PM, Hong Zhang wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>> Mark,
>>>>>>>>>>>>> I see
>>>>>>>>>>>>> line 5050 in MatGetBrowsOfAcols()
>>>>>>>>>>>>> ierr = ISCreateStride(PETSC_COMM_SELF,B->cmap->N,0,1,&iscolb);CHKERRQ(ierr);
>>>>>>>>>>>>
>>>>>>>>>>>> This should be OK. It takes no space and is then passed to MatGetSubMatrices(), which should detect that you are requesting all columns and never gather all the column indices. You need to track through the debugger to see where it is expanding to an entire set of indices.
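>>>>>>>>>>>>
>>>>>>>>>>>> Such a detection would look something like this sketch (variable names are illustrative, not the actual source):
>>>>>>>>>>>>
>>>>>>>>>>>>   PetscBool isstride,allcolumns = PETSC_FALSE;
>>>>>>>>>>>>   PetscInt  first,step,ncols;
>>>>>>>>>>>>   ierr = PetscTypeCompare((PetscObject)iscolb,ISSTRIDE,&isstride);CHKERRQ(ierr);
>>>>>>>>>>>>   if (isstride) {
>>>>>>>>>>>>     ierr = ISStrideGetInfo(iscolb,&first,&step);CHKERRQ(ierr);
>>>>>>>>>>>>     ierr = ISGetLocalSize(iscolb,&ncols);CHKERRQ(ierr);
>>>>>>>>>>>>     /* a stride 0,1,...,N-1 covers every column, so no explicit
>>>>>>>>>>>>        index array ever needs to be built */
>>>>>>>>>>>>     if (first == 0 && step == 1 && ncols == B->cmap->N) allcolumns = PETSC_TRUE;
>>>>>>>>>>>>   }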
>>>>>>>>>>>>
>>>>>>>>>>>> Barry
>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> I'll try to get rid of it.
>>>>>>>>>>>>>
>>>>>>>>>>>>> Hong
>>>>>>>>>>>>>
>>>>>>>>>>>>> On Mon, Sep 5, 2011 at 9:38 PM, Hong Zhang <hzhang at mcs.anl.gov> wrote:
>>>>>>>>>>>>>> Mark,
>>>>>>>>>>>>>> MatGetSubMatrices_MPIAIJ_Local() is called twice by
>>>>>>>>>>>>>> MatMatMultSymbolic_MPIAIJ_MPIAIJ()
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> 1) MatGetBrowsOfAcols() -> ...->MatGetSubMatrices_MPIAIJ_Local()
>>>>>>>>>>>>>> with ncol[0]=all columns
>>>>>>>>>>>>>> 2) MatMPIAIJGetLocalMatCondensed()
>>>>>>>>>>>>>> ->...->MatGetSubMatrices_MPIAIJ_Local() with ncol[0] = Acol + Bcol
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Previously, 1) required creation of a column map with all columns (global
>>>>>>>>>>>>>> size N). This is skipped in petsc-dev with flag=allcolumns.
>>>>>>>>>>>>>> I suspect 2) or somewhere else still creates a non-scalable array/IS. I would like
>>>>>>>>>>>>>> to see the error stack as you reported previously (e.g. your email
>>>>>>>>>>>>>> dated Tue, Aug 30, 2011 at 4:43 PM):
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> [2640]PETSC ERROR: MatGetSubMatrices_MPIAIJ() line 773 in
>>>>>>>>>>>>>> src/mat/impls/aij/mpi/mpiov.c
>>>>>>>>>>>>>> [2640]PETSC ERROR: MatGetSubMatrices() line 6349 in src/mat/interface/matrix.c
>>>>>>>>>>>>>> [2640]PETSC ERROR: MatGetBrowsOfAcols() line 5057 in
>>>>>>>>>>>>>> src/mat/impls/aij/mpi/mpiaij.c
>>>>>>>>>>>>>> [2640]PETSC ERROR: MatMatMultSymbolic_MPIAIJ_MPIAIJ() line 113 in
>>>>>>>>>>>>>> src/mat/impls/aij/mpi/mpimatmatmult.c
>>>>>>>>>>>>>> [2640]PETSC ERROR: MatMatMult_MPIAIJ_MPIAIJ() line 20 in
>>>>>>>>>>>>>> src/mat/impls/aij/mpi/mpimatmatmult.c
>>>>>>>>>>>>>> [2640]PETSC ERROR: MatMatMult() line 8244 in src/mat/interface/matrix.c
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Hong
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> On Mon, Sep 5, 2011 at 4:55 PM, Mark F. Adams <mark.adams at columbia.edu> wrote:
>>>>>>>>>>>>>>> Hong,
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> I'm tracking this down with printf debugging but it looks like the problem in MatGetSubMatrices_MPIAIJ_Local is still there:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> ierr = ISGetIndices(iscol[i],&icol[i]);CHKERRQ(ierr);
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> is called with an IS of the global size of the system (64000000 in this case). Here is my print output:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> MatGetSubMatrices_MPIAIJ 2222
>>>>>>>>>>>>>>> MatGetSubMatrices_MPIAIJ 33333 isIDent=1
>>>>>>>>>>>>>>> MatGetSubMatrices_MPIAIJ 4444 loc sz=64000000
>>>>>>>>>>>>>>> MatGetSubMatrices_MPIAIJ 55555 call MatGetSubMatrices_MPIAIJ_Local
>>>>>>>>>>>>>>> MatGetSubMatrices_MPIAIJ_Local 0000
>>>>>>>>>>>>>>> MatGetSubMatrices_MPIAIJ_Local ncol[0]=64000000
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Mark
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> On Sep 5, 2011, at 3:05 PM, Hong Zhang wrote:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Mark:
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> Could be. I'm now getting OOM errors (w/o any stack info) with 32K cores. I've found a configuration with 4K cores that has the same problem and am trying to get a stack trace with DDT. I'm not getting much, but I seem to have ISGetIndices_Stride on my stack; this is called from MatMatMult.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> MatMatMult() calls MatGetSubMatrices_MPIAIJ() twice:
>>>>>>>>>>>>>>>> first from MatGetBrowsOfAcols(), then from MatMPIAIJGetLocalMatCondensed().
>>>>>>>>>>>>>>>> I fixed the 1st one, but the 2nd might still not be scalable. Having the
>>>>>>>>>>>>>>>> entire error stack would be helpful.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Hong
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> I'll keep trying to get more data ... could there be a call to ISGetIndices_Stride with the whole matrix?
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> mark
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> Satish
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> was returning 1, which signals a "buffer" error. I was perplexed; I suspect that something is getting trashed someplace. I've also changed the code so there are fewer non-zeros per row on the coarse grid matrices, and this error does not happen anymore.
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> This is a different bug. Can you send us the entire error stack, i.e.,
>>>>>>>>>>>>>>>>>>> the prior statements that call this function?
>>>>>>>>>>>>>>>>>>> The changes I made to MatGetSubMatrices_MPIAIJ() are either not involved in
>>>>>>>>>>>>>>>>>>> this call or have been passed through successfully.
>>>>>>>>>>>>>>>>>>> I've added Barry to this email thread for help :-)
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> Hong
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> I'm going to keep scaling this up, moving to 64-bit indices now, and plan on going up to 140,608 cores. I will send you what output I have if I hit any problems and we can try to debug it together.
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> Mark
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> On Sep 5, 2011, at 11:36 AM, Hong Zhang wrote:
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> Mark:
>>>>>>>>>>>>>>>>>>>>>> OK, I'll try to give you more precise bug reports so that you can try to figure it out if I cannot reproduce it on 8 procs.
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> I doubt the bug can be reproduced with 8 procs. Can you get the
>>>>>>>>>>>>>>>>>>>>> latest petsc-dev, run the test on your machine,
>>>>>>>>>>>>>>>>>>>>> and send the error report to me?
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> Hong
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> On Sep 2, 2011, at 3:28 PM, Hong Zhang wrote:
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> Satish,
>>>>>>>>>>>>>>>>>>>>>>> Is there a way to reproduce the error
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> I used mpiexec -n 512 ./ex56 -ne 255 ...
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> in petsc machine?
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> Mark,
>>>>>>>>>>>>>>>>>>>>>>> Our petsc machine only has 8 cores/node. For debugging, fewer
>>>>>>>>>>>>>>>>>>>>>>> processors is better.
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> Hong
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> Thanks. OK, we can figure this out later; I was able to work around this.
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> petsc:/sandbox/hzhang/petsc-dev/src/ksp/ksp/examples/tutorials>mpiexec
>>>>>>>>>>>>>>>>>>>>>>>>> -n 8 ./ex56 -ne 111 -alpha 1.e-3 -ksp_monitor_short -ksp_type cg
>>>>>>>>>>>>>>>>>>>>>>>>> -pc_gamg_type sa
>>>>>>>>>>>>>>>>>>>>>>>>> [0]PCSetUp_GAMG level 0 N=4214784, n data rows=3, n data cols=6,
>>>>>>>>>>>>>>>>>>>>>>>>> nnz/row (ave)=79, np=8
>>>>>>>>>>>>>>>>>>>>>>>>> createProlongation ave nnz/row 27 --> 26
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> The run seems to hang. 'top' gives 4-5% CPU for each process.
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> Using '-ne 71', I get
>>>>>>>>>>>>>>>>>>>>>>>>> [3]PETSC ERROR: VecSetValues() line 796 in
>>>>>>>>>>>>>>>>>>>>>>>>> /sandbox/hzhang/petsc-dev/src/vec/vec/interface/rvector.c
>>>>>>>>>>>>>>>>>>>>>>>>> [7]PETSC ERROR: --------------------- Error Message
>>>>>>>>>>>>>>>>>>>>>>>>> ------------------------------------
>>>>>>>>>>>>>>>>>>>>>>>>> [7]PETSC ERROR: Invalid pointer!
>>>>>>>>>>>>>>>>>>>>>>>>> [7]PETSC ERROR: Null Pointer: Parameter # 3!
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> Hmm, OK, that's a new bug.
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> I'm trying to fix problems in ParMetis now, so we can worry about these errors later; I'll let you know when I have a test that you can run.
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> How many processors can you debug with? My errors are all with very large systems.
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>> Mark
>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> How can I reproduce the error? How should I choose ne?
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>> Hong
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>> Hong: I now get this error:
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>> [4]PETSC ERROR: MatGetSubMatrices_MPIAIJ() line 780 in src/mat/impls/aij/mpi/mpiov.c
>>>>>>>>>>>>>>>>>>>>>>>>>> [4]PETSC ERROR: MatGetSubMatrices() line 6349 in src/mat/interface/matrix.c
>>>>>>>>>>>>>>>>>>>>>>>>>> [4]PETSC ERROR: MatMPIAIJGetLocalMatCondensed() line 4987 in src/mat/impls/aij/mpi/mpiaij.c
>>>>>>>>>>>>>>>>>>>>>>>>>> [4]PETSC ERROR: MatMatMultSymbolic_MPIAIJ_MPIAIJ() line 118 in src/mat/impls/aij/mpi/mpimatmatmult.c
>>>>>>>>>>>>>>>>>>>>>>>>>> [4]PETSC ERROR: MatMatMult_MPIAIJ_MPIAIJ() line 20 in src/mat/impls/aij/mpi/mpimatmatmult.c
>>>>>>>>>>>>>>>>>>>>>>>>>> [4]PETSC ERROR: MatMatMult() line 8244 in src/mat/interface/matrix.c
>>>>>>>>>>>>>>>>>>>>>>>>>> [4]PETSC ERROR: createProlongation() line 1651 in src/ksp/pc/impls/gamg/createProlongation.c
>>>>>>>>>>>>>>>>>>>>>>>>>> [4]PETSC ERROR: PCSetUp_GAMG() line 506 in src/ksp/pc/impls/gamg/gamg.c
>>>>>>>>>>>>>>>>>>>>>>>>>> [4]PETSC ERROR: PCSetUp() line 819 in src/ksp/pc/interface/precon.c
>>>>>>>>>>>>>>>>>>>>>>>>>> [4]PETSC ERROR: KSPSetUp() line 260 in src/ksp/ksp/interface/itfunc.c
>>>>>>>>>>>>>>>>>>>>>>>>>> [4]PETSC ERROR: main() line 208 in src/ksp/ksp/examples/tutorials/ex56.c
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>> Do you have any ideas?
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>> Mark
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>> On Sep 2, 2011, at 11:28 AM, Hong Zhang wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>> Mark:
>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hmm, that's strange. You are updated, of course ... I'm now crashing in the test, so I'm reconfiguring and rebuilding.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> The test sets the PC type so you could try removing the '-pc_type gamg'.
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>> Comment out
>>>>>>>>>>>>>>>>>>>>>>>>>>> //ierr = PCSetType( pc, PCGAMG );
>>>>>>>>>>>>>>>>>>>>>>>>>>> I can run ex56.c with most pc's, except gamg. With '-pc_type gamg', I get
>>>>>>>>>>>>>>>>>>>>>>>>>>> mpiexec -n 8 ./ex56 -ne 11 -alpha 1.e-3 -ksp_monitor_short -ksp_type
>>>>>>>>>>>>>>>>>>>>>>>>>>> cg -pc_gamg_type sa -pc_type gamg
>>>>>>>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: Unable to find requested PC type gamg!
>>>>>>>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR:
>>>>>>>>>>>>>>>>>>>>>>>>>>> ------------------------------------------------------------------------
>>>>>>>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: Petsc Development HG revision:
>>>>>>>>>>>>>>>>>>>>>>>>>>> f7b097e4389fe74450bf2cad4a8cd1832204771d HG Date: Mon Aug 22 14:56:21
>>>>>>>>>>>>>>>>>>>>>>>>>>> 2011 -0500
>>>>>>>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR:
>>>>>>>>>>>>>>>>>>>>>>>>>>> ------------------------------------------------------------------------
>>>>>>>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: ./ex56 on a arch-linu named petsc by hzhang Fri Sep 2
>>>>>>>>>>>>>>>>>>>>>>>>>>> 10:25:45 2011
>>>>>>>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: Libraries linked from /sandbox/hzhang/petsc-dev/arch-linux/lib
>>>>>>>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: Configure run at Mon Aug 22 16:33:40 2011
>>>>>>>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: Configure options --download-blacs --download-ml
>>>>>>>>>>>>>>>>>>>>>>>>>>> --download-mpich --download-mumps --download-parmetis
>>>>>>>>>>>>>>>>>>>>>>>>>>> --download-scalapack --download-sundials --download-superlu
>>>>>>>>>>>>>>>>>>>>>>>>>>> --download-superlu_dist --with-cc=gcc --with-clanguage=cxx
>>>>>>>>>>>>>>>>>>>>>>>>>>> --with-cxx=g++ --with-fc="gfortran -m64" --download-fftw
>>>>>>>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR:
>>>>>>>>>>>>>>>>>>>>>>>>>>> ------------------------------------------------------------------------
>>>>>>>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: PCSetType() line 67 in
>>>>>>>>>>>>>>>>>>>>>>>>>>> /sandbox/hzhang/petsc-dev/src/ksp/pc/interface/pcset.c
>>>>>>>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: PCSetFromOptions() line 184 in
>>>>>>>>>>>>>>>>>>>>>>>>>>> /sandbox/hzhang/petsc-dev/src/ksp/pc/interface/pcset.c
>>>>>>>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: KSPSetFromOptions() line 286 in
>>>>>>>>>>>>>>>>>>>>>>>>>>> /sandbox/hzhang/petsc-dev/src/ksp/ksp/interface/itcl.c
>>>>>>>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: main() line 51 in src/ksp/ksp/examples/tutorials/ex56.c
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>> Running ex56.c with '-help', I do not see 'gamg' showing up under the
>>>>>>>>>>>>>>>>>>>>>>>>>>> help menu for '-pc_type'.
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>> Hong
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> M
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Sep 2, 2011, at 10:32 AM, Hong Zhang wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Mark :
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> If you look in ex56.c and search for "viewer" you will see code to write the matrix. I used 4096 processors to get this error with:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ex56 -ne 511 -pc_gamg_type sa
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> This matrix is 1/4 of a terabyte; there is nothing special about it, it's just big.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I get
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> petsc-dev/src/ksp/ksp/examples/tutorials>make runex56
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 1,16c1,20
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> < [0]PCSetUp_GAMG level 0 N=5184, n data rows=3, n data cols=6,
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> nnz/row (ave)=68, np=8
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> < createProlongation ave nnz/row 23 --> 21
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> < [0]PCSetUp_GAMG 1) N=360, bs=6, n data cols=6, nnz/row (ave)=117,
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> 1 active pes
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> < [0]PCSetUp_GAMG 2 levels
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> < PCSetUp_GAMG max eigen = 3.368662e+00 min = 2.227951e-01
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> < 0 KSP Residual norm 73.6627
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> < 1 KSP Residual norm 34.2318
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> < 2 KSP Residual norm 10.1506
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> < 3 KSP Residual norm 2.43678
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> < 4 KSP Residual norm 0.300919
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> < 5 KSP Residual norm 0.0383077
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> < 6 KSP Residual norm 0.0244548
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> < 7 KSP Residual norm 0.00856266
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> < 8 KSP Residual norm 0.00536436
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> < 9 KSP Residual norm 0.00173613
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> < 10 KSP Residual norm 0.000483957
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ---
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: --------------------- Error Message ------------------------------------
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: Unknown type. Check for miss-spelling or missing external package needed for type
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> see http://www.mcs.anl.gov/petsc/petsc-as/documentation/installation.html#external!
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: Unable to find requested PC type gamg!
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: ------------------------------------------------------------------------
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: Petsc Development HG revision: f7b097e4389fe74450bf2cad4a8cd1832204771d HG Date: Mon Aug 22 14:56:21 2011 -0500
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: See docs/changes/index.html for recent updates.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: See docs/index.html for manual pages.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: ------------------------------------------------------------------------
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: ./ex56 on a arch-linu named petsc by hzhang Fri Sep 2 09:30:55 2011
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: Libraries linked from /sandbox/hzhang/petsc-dev/arch-linux/lib
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: Configure run at Mon Aug 22 16:33:40 2011
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: Configure options --download-blacs --download-ml --download-mpich --download-mumps --download-parmetis --download-scalapack --download-sundials --download-superlu --download-superlu_dist --with-cc=gcc --with-clanguage=cxx --with-cxx=g++ --with-fc="gfortran -m64" --download-fftw
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: ------------------------------------------------------------------------
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: PCSetType() line 67 in /sandbox/hzhang/petsc-dev/src/ksp/pc/interface/pcset.c
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [0]PETSC ERROR: main() line 50 in src/ksp/ksp/examples/tutorials/ex56.c
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> application called MPI_Abort(MPI_COMM_WORLD, 86) - process 0
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hong
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Sep 1, 2011, at 12:46 PM, Hong Zhang wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Mark:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Wouldn't you rather just run the example in the repo?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> The matrix has 134M rows with about 80 nnz/row; that's a big file.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I know what to do now :-)
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> and thus do not need your matrices. I'll try to get it done and let you
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> test it soon.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> However, having your big matrix in the petsc matrix collection would be useful.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hong
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Sep 1, 2011, at 11:54 AM, Hong Zhang wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Mark,
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I'm back :-)
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Can you write your matrices into petsc binary files and send to me for testing?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I'll use petsc-dev/src/mat/examples/tests/ex94.c to test MatMatMult().
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Hong
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On Tue, Aug 30, 2011 at 4:43 PM, Mark F. Adams <mark.adams at columbia.edu> wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Folks,
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> I've been scaling the 3D elasticity test and have been struggling to work around the ParMetis limitations (e.g., no empty processors), and I have some data and a big bottleneck in front of me in MatGetSubMatrix. First, some data. This is a cube of elasticity; N is the number of nodes on a side, #equations = 3*N^3, and P is the number of processors. Setup is measured in two phases; the second includes the repartitioning, which has some serious performance problems in ParMetis.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> P                         8     64    512   4096
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> N                         64    128   256   512
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Solve Time                5.5   6.1   7.3   memory error in MatMatMult
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Setup 1)                  9.4   11.6  18.4
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>       2)                  9.3   19.6  168 (really)
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Iterations                18    19    21
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Mflops/sec/core in solve  352   339   316
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> A few things to note: over 95% of the setup time is in MatMatMult, PtAP, MatGetSubMatrix, and MatPartitioning, so all the near-term performance tuning will be in PETSc core code, except for the partitioning.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> A big problem is MatMatMult, with this call stack:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [2640]PETSC ERROR: MatGetSubMatrices_MPIAIJ() line 773 in src/mat/impls/aij/mpi/mpiov.c
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [2640]PETSC ERROR: MatGetSubMatrices() line 6349 in src/mat/interface/matrix.c
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [2640]PETSC ERROR: MatGetBrowsOfAcols() line 5057 in src/mat/impls/aij/mpi/mpiaij.c
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [2640]PETSC ERROR: MatMatMultSymbolic_MPIAIJ_MPIAIJ() line 113 in src/mat/impls/aij/mpi/mpimatmatmult.c
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [2640]PETSC ERROR: MatMatMult_MPIAIJ_MPIAIJ() line 20 in src/mat/impls/aij/mpi/mpimatmatmult.c
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> [2640]PETSC ERROR: MatMatMult() line 8244 in src/mat/interface/matrix.c
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> line 5050 of MatGetBrowsOfAcols:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ierr = ISCreateStride(PETSC_COMM_SELF,B->cmap->N,0,1,&iscolb);CHKERRQ(ierr);
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> creates a dense IS the size of the global system; this is then used later as the size of a hash table (which defeats the purpose of the hash table). This all uses O(N) memory per process (with N = 134M, even 4-byte indices come to over 0.5 GB each), which is death at even 4K cores.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Anyway, I will start thinking about tackling this MatGetSubMatrices beast.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Mark