Hi Barry,<br><br>The symptom of "just got stuck" is that the code simply stays at the matrix assembly stage and never moves on. In addition, all the processes sit at 99% CPU utilization. I do see some network traffic between the head node and the compute nodes: the amount of data is very small, but the number of packets is huge. The processes are sending between 550 and 620 million packets per second across the network.<br>
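<br>Looking at the -info output from my original message (quoted below), every rank stashes millions of off-process entries and needs several mallocs to grow the stash, so I am wondering whether preallocating the matrix and vector stashes would at least remove the reallocation cost. A rough sketch of what I have in mind is below; the sizes are just my guesses from the -info numbers, and A and b stand for my stiffness matrix and load vector:<br>
<br>
/* guessed upper bounds on the number of off-process entries this rank generates */<br>
ierr = MatStashSetInitialSize(A, 20000000, 0);CHKERRQ(ierr);  /* or -matstash_initial_size at run time */<br>
ierr = VecStashSetInitialSize(b, 300000, 0);CHKERRQ(ierr);    /* or -vecstash_initial_size */<br>
/* ... element loop with MatSetValues()/VecSetValues() ... */<br>
ierr = MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);<br>
ierr = MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);<br>
<br>
Would that be worth trying, or is the total communication volume the real problem here?<br>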
<br>Since my code never finishes, I cannot get the summary output by adding -log_summary. Is there any other way to get a summary file?<br><br>By the way, my code runs without any problem on a shared-memory desktop with any number of processes.<br>
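<br>One thing I am considering is dumping the log from inside the code right before the assembly calls, so that I can at least see the timings accumulated up to the hang. Would something like the sketch below be a valid way to do that? I am guessing the routine names from the current documentation; I believe older releases call these PetscLogBegin() and PetscLogPrintSummary() instead.<br>
<br>
ierr = PetscLogDefaultBegin();CHKERRQ(ierr);  /* not needed if -log_summary is already given on the command line */<br>
/* ... element loop with MatSetValues()/VecSetValues() ... */<br>
ierr = PetscLogView(PETSC_VIEWER_STDOUT_WORLD);CHKERRQ(ierr);  /* print whatever has been logged so far */<br>
ierr = MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);  /* the call that appears to hang */<br>
ierr = MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);<br>
<br>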
<br><div class="gmail_quote">On Wed, Jan 18, 2012 at 3:03 PM, <span dir="ltr"><<a href="mailto:petsc-users-request@mcs.anl.gov">petsc-users-request@mcs.anl.gov</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
Send petsc-users mailing list submissions to<br>
<a href="mailto:petsc-users@mcs.anl.gov">petsc-users@mcs.anl.gov</a><br>
<br>
To subscribe or unsubscribe via the World Wide Web, visit<br>
<a href="https://lists.mcs.anl.gov/mailman/listinfo/petsc-users" target="_blank">https://lists.mcs.anl.gov/mailman/listinfo/petsc-users</a><br>
or, via email, send a message with subject or body 'help' to<br>
<a href="mailto:petsc-users-request@mcs.anl.gov">petsc-users-request@mcs.anl.gov</a><br>
<br>
You can reach the person managing the list at<br>
<a href="mailto:petsc-users-owner@mcs.anl.gov">petsc-users-owner@mcs.anl.gov</a><br>
<br>
When replying, please edit your Subject line so it is more specific<br>
than "Re: Contents of petsc-users digest..."<br>
<br>
<br>
Today's Topics:<br>
<br>
1. Re: generate entries on 'wrong' process (Barry Smith)<br>
2. Re: [petsc-dev] boomerAmg scalability (Ravi Kannan)<br>
<br>
<br>
----------------------------------------------------------------------<br>
<br>
Message: 1<br>
Date: Wed, 18 Jan 2012 12:56:10 -0600<br>
From: Barry Smith <<a href="mailto:bsmith@mcs.anl.gov">bsmith@mcs.anl.gov</a>><br>
Subject: Re: [petsc-users] generate entries on 'wrong' process<br>
To: PETSc users list <<a href="mailto:petsc-users@mcs.anl.gov">petsc-users@mcs.anl.gov</a>><br>
Message-ID: <<a href="mailto:47754349-9741-4740-BBB4-F4B84EA07CEF@mcs.anl.gov">47754349-9741-4740-BBB4-F4B84EA07CEF@mcs.anl.gov</a>><br>
Content-Type: text/plain; charset=us-ascii<br>
<br>
<br>
What is the symptom of "just got stuck"? Send the results of the whole run with -log_summary to <a href="mailto:petsc-maint@mcs.anl.gov">petsc-maint@mcs.anl.gov</a> and we'll see how much time is spent in that communication.<br>
<br>
Barry<br>
<br>
<br>
On Jan 18, 2012, at 10:32 AM, Wen Jiang wrote:<br>
<br>
> Hi,<br>
><br>
> I am working on an FEM code with a spline-based element type. For the 3D case, one element has 64 nodes and every two neighboring elements share 48 nodes. Thus, regardless of how I partition the mesh, a very large number of entries still have to be written on the 'wrong' processor. My code is running on a cluster, and the processes are sending between 550 and 620 million packets per second across the network. The code seems I/O-bound at the moment and gets stuck at the matrix assembly stage. The -info output is attached. Do I have any other options to make my code less I/O-intensive?<br>
><br>
> Thanks in advance.<br>
><br>
> [0] VecAssemblyBegin_MPI(): Stash has 210720 entries, uses 12 mallocs.<br>
> [0] VecAssemblyBegin_MPI(): Block-Stash has 0 entries, uses 0 mallocs.<br>
> [5] MatAssemblyBegin_MPIAIJ(): Stash has 4806656 entries, uses 8 mallocs.<br>
> [6] MatAssemblyBegin_MPIAIJ(): Stash has 5727744 entries, uses 9 mallocs.<br>
> [4] MatAssemblyBegin_MPIAIJ(): Stash has 5964288 entries, uses 9 mallocs.<br>
> [7] MatAssemblyBegin_MPIAIJ(): Stash has 7408128 entries, uses 9 mallocs.<br>
> [3] MatAssemblyBegin_MPIAIJ(): Stash has 8123904 entries, uses 9 mallocs.<br>
> [2] MatAssemblyBegin_MPIAIJ(): Stash has 11544576 entries, uses 10 mallocs.<br>
> [0] MatStashScatterBegin_Private(): No of messages: 1<br>
> [0] MatStashScatterBegin_Private(): Mesg_to: 1: size: 107888648<br>
> [0] MatAssemblyBegin_MPIAIJ(): Stash has 13486080 entries, uses 10 mallocs.<br>
> [1] MatAssemblyBegin_MPIAIJ(): Stash has 16386048 entries, uses 10 mallocs.<br>
> [0] MatAssemblyEnd_SeqAIJ(): Matrix size: 11391 X 11391; storage space: 0 unneeded,2514537 used<br>
> [0] MatAssemblyEnd_SeqAIJ(): Number of mallocs during MatSetValues() is 0<br>
> [0] MatAssemblyEnd_SeqAIJ(): Maximum nonzeros in any row is 294<br>
> [0] Mat_CheckInode(): Found 11391 nodes out of 11391 rows. Not using Inode routines<br>
> [5] MatAssemblyEnd_SeqAIJ(): Matrix size: 11390 X 11390; storage space: 0 unneeded,2525390 used<br>
> [5] MatAssemblyEnd_SeqAIJ(): Number of mallocs during MatSetValues() is 0<br>
> [5] MatAssemblyEnd_SeqAIJ(): Maximum nonzeros in any row is 294<br>
> [5] Mat_CheckInode(): Found 11390 nodes out of 11390 rows. Not using Inode routines<br>
> [3] MatAssemblyEnd_SeqAIJ(): Matrix size: 11391 X 11391; storage space: 0 unneeded,2500281 used<br>
> [3] MatAssemblyEnd_SeqAIJ(): Number of mallocs during MatSetValues() is 0<br>
> [3] MatAssemblyEnd_SeqAIJ(): Maximum nonzeros in any row is 294<br>
> [3] Mat_CheckInode(): Found 11391 nodes out of 11391 rows. Not using Inode routines<br>
> [1] MatAssemblyEnd_SeqAIJ(): Matrix size: 11391 X 11391; storage space: 0 unneeded,2500281 used<br>
> [1] MatAssemblyEnd_SeqAIJ(): Number of mallocs during MatSetValues() is 0<br>
> [1] MatAssemblyEnd_SeqAIJ(): Maximum nonzeros in any row is 294<br>
> [1] Mat_CheckInode(): Found 11391 nodes out of 11391 rows. Not using Inode routines<br>
> [4] MatAssemblyEnd_SeqAIJ(): Matrix size: 11391 X 11391; storage space: 0 unneeded,2500281 used<br>
> [4] MatAssemblyEnd_SeqAIJ(): Number of mallocs during MatSetValues() is 0<br>
> [4] MatAssemblyEnd_SeqAIJ(): Maximum nonzeros in any row is 294<br>
> [4] Mat_CheckInode(): Found 11391 nodes out of 11391 rows. Not using Inode routines<br>
> [2] MatAssemblyEnd_SeqAIJ(): Matrix size: 11391 X 11391; storage space: 0 unneeded,2525733 used<br>
> [2] MatAssemblyEnd_SeqAIJ(): Number of mallocs during MatSetValues() is 0<br>
> [2] MatAssemblyEnd_SeqAIJ(): Maximum nonzeros in any row is 294<br>
> [2] Mat_CheckInode(): Found 11391 nodes out of 11391 rows. Not using Inode routines<br>
> <petsc_info><br>
<br>
<br>
<br>
------------------------------<br>
<br>
Message: 2<br>
Date: Wed, 18 Jan 2012 14:03:43 -0600<br>
From: "Ravi Kannan" <<a href="mailto:rxk@cfdrc.com">rxk@cfdrc.com</a>><br>
Subject: Re: [petsc-users] [petsc-dev] boomerAmg scalability<br>
To: "'Mark F. Adams'" <<a href="mailto:mark.adams@columbia.edu">mark.adams@columbia.edu</a>><br>
Cc: 'PETSc users list' <<a href="mailto:petsc-users@mcs.anl.gov">petsc-users@mcs.anl.gov</a>><br>
Message-ID: <006f01ccd61c$47c0fc80$d742f580$@com><br>
Content-Type: text/plain; charset="us-ascii"<br>
<br>
Hi Mark, Hong,<br>
<br>
<br>
<br>
As you might remember, the reason for this whole exercise was to obtain a<br>
solution for a very stiff problem.<br>
<br>
<br>
<br>
We did have hypre BoomerAMG. It did not scale, but it gives the correct<br>
solution. So we wanted an alternative; hence we approached you about gamg.<br>
<br>
<br>
<br>
However, for certain cases gamg crashes. Even for the working cases, it<br>
takes about 15-20 times more sweeps than hypre BoomerAMG, so it is<br>
cost-prohibitive.<br>
<br>
<br>
<br>
Hopefully this gamg solver can be improved in the near future, for users<br>
like us.<br>
<br>
<br>
<br>
Warm Regards,<br>
<br>
Ravi.<br>
<br>
<br>
<br>
<br>
<br>
From: Mark F. Adams [mailto:<a href="mailto:mark.adams@columbia.edu">mark.adams@columbia.edu</a>]<br>
Sent: Wednesday, January 18, 2012 9:56 AM<br>
To: Hong Zhang<br>
Cc: <a href="mailto:rxk@cfdrc.com">rxk@cfdrc.com</a><br>
Subject: Re: [petsc-dev] boomerAmg scalability<br>
<br>
<br>
<br>
Hong and Ravi,<br>
<br>
<br>
<br>
I fixed a bug with the 6x6 problem. There seemed to be a bug in<br>
MatTransposeMatMult with a funny decomposition, though that was not really verified. So<br>
we can wait for Ravi to continue with his tests and fix problems as they arise.<br>
<br>
<br>
<br>
Mark<br>
<br>
ps, Ravi, I may not have cc'ed you, so I will send again.<br>
<br>
<br>
<br>
On Jan 17, 2012, at 7:37 PM, Hong Zhang wrote:<br>
<br>
<br>
<br>
<br>
<br>
Ravi,<br>
<br>
I wrote a simple test ex163.c (attached) on MatTransposeMatMult().<br>
<br>
Loading your 6x6 matrix gives no error from MatTransposeMatMult()<br>
using 1, 2, ..., 7 processes.<br>
<br>
For example,<br>
<br>
<br>
<br>
petsc-dev/src/mat/examples/tests>mpiexec -n 4 ./ex163 -f<br>
/Users/hong/Downloads/repetscdevboomeramgscalability/binaryoutput<br>
<br>
A:<br>
<br>
Matrix Object: 1 MPI processes<br>
<br>
type: mpiaij<br>
<br>
row 0: (0, 1.66668e+06) (1, -1.35) (3, -0.6)<br>
<br>
row 1: (0, -1.35) (1, 1.66667e+06) (2, -1.35) (4, -0.6)<br>
<br>
row 2: (1, -1.35) (2, 1.66667e+06) (5, -0.6)<br>
<br>
row 3: (0, -0.6) (3, 1.66668e+06) (4, -1.35)<br>
<br>
row 4: (1, -0.6) (3, -1.35) (4, 1.66667e+06) (5, -1.35)<br>
<br>
row 5: (2, -0.6) (4, -1.35) (5, 1.66667e+06)<br>
<br>
<br>
<br>
C = A^T * A:<br>
<br>
Matrix Object: 1 MPI processes<br>
<br>
type: mpiaij<br>
<br>
row 0: (0, 2.77781e+12) (1, -4.50002e+06) (2, 1.8225) (3, -2.00001e+06) (4, 1.62)<br>
<br>
row 1: (0, -4.50002e+06) (1, 2.77779e+12) (2, -4.50001e+06) (3, 1.62) (4, -2.00001e+06) (5, 1.62)<br>
<br>
row 2: (0, 1.8225) (1, -4.50001e+06) (2, 2.7778e+12) (4, 1.62) (5, -2.00001e+06)<br>
<br>
row 3: (0, -2.00001e+06) (1, 1.62) (3, 2.77781e+12) (4, -4.50002e+06) (5, 1.8225)<br>
<br>
row 4: (0, 1.62) (1, -2.00001e+06) (2, 1.62) (3, -4.50002e+06) (4, 2.77779e+12) (5, -4.50001e+06)<br>
<br>
row 5: (1, 1.62) (2, -2.00001e+06) (3, 1.8225) (4, -4.50001e+06) (5, 2.7778e+12)<br>
<br>
<br>
<br>
Am I missing something?<br>
<br>
<br>
<br>
Hong<br>
<br>
<br>
<br>
On Sat, Jan 14, 2012 at 3:37 PM, Mark F. Adams <<a href="mailto:mark.adams@columbia.edu">mark.adams@columbia.edu</a>><br>
wrote:<br>
<br>
Ravi, this system is highly diagonally dominant. I've fixed the code so you<br>
can pull and try again.<br>
<br>
<br>
<br>
I've decided to basically just do a one-level method for diagonally dominant (DD) systems. I<br>
don't know if that is the best semantics; I think Barry will hate it,<br>
because it gives you a one-level solver when you asked for MG. It now picks<br>
up the coarse-grid solver as the solver, which is wrong, so I need to fix<br>
this if we decide to stick with the current semantics.<br>
<br>
<br>
<br>
And again thanks for helping to pound on this code.<br>
<br>
<br>
<br>
Mark<br>
<br>
<br>
<br>
On Jan 13, 2012, at 6:33 PM, Ravi Kannan wrote:<br>
<br>
<br>
<br>
Hi Mark, Hong,<br>
<br>
<br>
<br>
Let's make it simpler. I fixed my partition bug (in METIS). Now there is an<br>
equal division of cells.<br>
<br>
<br>
<br>
To simplify even further, let's run a much smaller case: 6 cells<br>
(equations) in SERIAL. This one crashes. The output and the -ksp_view_binary<br>
files are attached.<br>
<br>
<br>
<br>
Thanks,<br>
<br>
Ravi.<br>
<br>
<br>
<br>
From: <a href="mailto:petsc-dev-bounces@mcs.anl.gov">petsc-dev-bounces@mcs.anl.gov</a> [mailto:<a href="mailto:petsc-dev-bounces@mcs.anl.gov">petsc-dev-bounces@mcs.anl.gov</a>]<br>
On Behalf Of Mark F. Adams<br>
Sent: Friday, January 13, 2012 3:00 PM<br>
To: For users of the development version of PETSc<br>
Subject: Re: [petsc-dev] boomerAmg scalability<br>
<br>
<br>
<br>
Well, we do have a bug here. It should work with zero elements on a proc,<br>
but the code is being actively developed so you are really helping us to<br>
find these cracks.<br>
<br>
<br>
<br>
If it's not too hard, it would be nice if you could give us these matrices<br>
before you fix it, so we can fix this bug. You can just send them to Hong and<br>
me (cc'ed).<br>
<br>
<br>
<br>
Mark<br>
<br>
<br>
<br>
On Jan 13, 2012, at 12:16 PM, Ravi Kannan wrote:<br>
<br>
<br>
<br>
Hi Mark, Hong,<br>
<br>
<br>
<br>
Thanks for the observation w.r.t. proc 0 having 2 equations. This is a<br>
bug on our end. We will fix it and get back to you if needed.<br>
<br>
<br>
<br>
Thanks,<br>
<br>
Ravi.<br>
<br>
<br>
<br>
From: <a href="mailto:petsc-dev-bounces@mcs.anl.gov">petsc-dev-bounces@mcs.anl.gov</a> [mailto:<a href="mailto:petsc-dev-bounces@mcs.anl.gov">petsc-dev-bounces@mcs.anl.gov</a>]<br>
On Behalf Of Mark F. Adams<br>
Sent: Thursday, January 12, 2012 10:03 PM<br>
To: Hong Zhang<br>
Cc: For users of the development version of PETSc<br>
Subject: Re: [petsc-dev] boomerAmg scalability<br>
<br>
<br>
<br>
Ravi, can you run with -ksp_view_binary? This will produce two files.<br>
<br>
<br>
<br>
Hong, ex10 will read in these files and solve them. I will probably not be<br>
able to get to this until Monday.<br>
<br>
<br>
<br>
Also, this matrix has just two equations on proc 0 and about 11000 on<br>
proc 1, so it is strangely balanced, in case that helps ...<br>
<br>
<br>
<br>
Mark<br>
<br>
<br>
<br>
On Jan 12, 2012, at 10:35 PM, Hong Zhang wrote:<br>
<br>
<br>
<br>
<br>
<br>
Ravi,<br>
<br>
<br>
<br>
I need more info for debugging. Can you provide a simple stand-alone code<br>
and matrices in PETSc binary format that reproduce the error?<br>
<br>
<br>
<br>
MatTransposeMatMult() for mpiaij is a newly developed subroutine (less than<br>
one month old) and not well tested yet :-(<br>
<br>
I used petsc-dev/src/mat/examples/tests/ex94.c for testing.<br>
<br>
<br>
<br>
Thanks,<br>
<br>
<br>
<br>
Hong<br>
<br>
On Thu, Jan 12, 2012 at 9:17 PM, Mark F. Adams <<a href="mailto:mark.adams@columbia.edu">mark.adams@columbia.edu</a>><br>
wrote:<br>
<br>
It looks like the problem is in MatTransposeMatMult and Hong (cc'ed) is<br>
working on it.<br>
<br>
<br>
<br>
I'm hoping that your output will be enough for Hong to figure this out but I<br>
could not reproduce this problem with any of my tests.<br>
<br>
<br>
<br>
If Hong cannot figure this out, then we will need to get the matrix from you<br>
to reproduce this.<br>
<br>
<br>
<br>
Mark<br>
<br>
<br>
<br>
<br>
<br>
On Jan 12, 2012, at 6:25 PM, Ravi Kannan wrote:<br>
<br>
<br>
<br>
<br>
<br>
Hi Mark,<br>
<br>
<br>
<br>
Any luck with the gamg bug fix?<br>
<br>
<br>
<br>
Thanks,<br>
<br>
Ravi.<br>
<br>
<br>
<br>
From: <a href="mailto:petsc-dev-bounces@mcs.anl.gov">petsc-dev-bounces@mcs.anl.gov</a> [mailto:<a href="mailto:petsc-dev-bounces@mcs.anl.gov">petsc-dev-bounces@mcs.anl.gov</a>]<br>
On Behalf Of Mark F. Adams<br>
Sent: Wednesday, January 11, 2012 1:54 PM<br>
To: For users of the development version of PETSc<br>
Subject: Re: [petsc-dev] boomerAmg scalability<br>
<br>
<br>
<br>
This seems to be dying earlier than it was last week, so it looks like a new<br>
bug in MatTransposeMatMult.<br>
<br>
<br>
<br>
Mark<br>
<br>
<br>
<br>
On Jan 11, 2012, at 1:59 PM, Matthew Knepley wrote:<br>
<br>
<br>
<br>
On Wed, Jan 11, 2012 at 12:23 PM, Ravi Kannan <<a href="mailto:rxk@cfdrc.com">rxk@cfdrc.com</a>> wrote:<br>
<br>
Hi Mark,<br>
<br>
<br>
<br>
I downloaded the dev version again. This time, the program crashes even<br>
earlier. Attached are the serial and parallel info outputs.<br>
<br>
<br>
<br>
Could you kindly take a look?<br>
<br>
<br>
<br>
It looks like this is a problem with MatMatMult(). Can you try to reproduce<br>
this using KSP ex10? You put your matrix in binary format and use -pc_type gamg.<br>
Then you can send us the matrix and we can track it down. Or are you running<br>
an example there?<br>
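<br>For example, something along these lines once -ksp_view_binary has written the default binary file (named "binaryoutput"); the exact read option (-f0 vs. -f) may depend on the ex10 version:<br>
<br>
mpiexec -n 2 ./ex10 -f0 binaryoutput -pc_type gamg -ksp_monitor<br>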
<br>
<br>
<br>
Thanks,<br>
<br>
<br>
<br>
Matt<br>
<br>
<br>
<br>
<br>
<br>
<br>
<br>
Thanks,<br>
<br>
Ravi.<br>
<br>
<br>
<br>
From: <a href="mailto:petsc-dev-bounces@mcs.anl.gov">petsc-dev-bounces@mcs.anl.gov</a> [mailto:<a href="mailto:petsc-dev-bounces@mcs.anl.gov">petsc-dev-bounces@mcs.anl.gov</a>]<br>
On Behalf Of Mark F. Adams<br>
Sent: Monday, January 09, 2012 3:08 PM<br>
<br>
<br>
To: For users of the development version of PETSc<br>
Subject: Re: [petsc-dev] boomerAmg scalability<br>
<br>
<br>
<br>
<br>
<br>
Yes, it's all checked in; just pull from dev.<br>
<br>
Mark<br>
<br>
<br>
<br>
On Jan 9, 2012, at 2:54 PM, Ravi Kannan wrote:<br>
<br>
<br>
<br>
Hi Mark,<br>
<br>
<br>
<br>
Thanks for your efforts.<br>
<br>
<br>
<br>
Do I need to do the install from scratch once again, or just check out some<br>
particular files (gamg.c, for instance)?<br>
<br>
<br>
<br>
Thanks,<br>
<br>
Ravi.<br>
<br>
<br>
<br>
From: <a href="mailto:petsc-dev-bounces@mcs.anl.gov">petsc-dev-bounces@mcs.anl.gov</a> [mailto:<a href="mailto:petsc-dev-bounces@mcs.anl.gov">petsc-dev-bounces@mcs.anl.gov</a>]<br>
On Behalf Of Mark F. Adams<br>
Sent: Friday, January 06, 2012 10:30 AM<br>
To: For users of the development version of PETSc<br>
Subject: Re: [petsc-dev] boomerAmg scalability<br>
<br>
<br>
<br>
I think I found the problem. You will need to use petsc-dev to get the fix.<br>
<br>
<br>
<br>
Mark<br>
<br>
<br>
<br>
On Jan 6, 2012, at 8:55 AM, Mark F. Adams wrote:<br>
<br>
<br>
<br>
Ravi, I forgot to mention that you can just use -ksp_view_binary to output the matrix<br>
data (two files). You could run it with two procs and a Jacobi solver to<br>
get it past the solve, where it writes the matrix (I believe).<br>
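<br>For example, something like (with ./yourcode standing in for your executable):<br>
<br>
mpiexec -n 2 ./yourcode -pc_type jacobi -ksp_view_binary<br>
<br>
which should leave binaryoutput and binaryoutput.info in the working directory.<br>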
<br>
Mark<br>
<br>
<br>
<br>
On Jan 5, 2012, at 6:19 PM, Ravi Kannan wrote:<br>
<br>
<br>
<br>
Just sent another email with the attachment.<br>
<br>
<br>
<br>
From: <a href="mailto:petsc-dev-bounces@mcs.anl.gov">petsc-dev-bounces@mcs.anl.gov</a> [mailto:<a href="mailto:petsc-dev-bounces@mcs.anl.gov">petsc-dev-bounces@mcs.anl.gov</a>]<br>
On Behalf Of Jed Brown<br>
Sent: Thursday, January 05, 2012 5:15 PM<br>
To: For users of the development version of PETSc<br>
Subject: Re: [petsc-dev] boomerAmg scalability<br>
<br>
<br>
<br>
On Thu, Jan 5, 2012 at 17:12, Ravi Kannan <<a href="mailto:rxk@cfdrc.com">rxk@cfdrc.com</a>> wrote:<br>
<br>
I have attached the verbose+info outputs for both the serial and the<br>
parallel (2 partitions) runs. NOTE: the serial output in one place says<br>
PC=Jacobi! Is it implicitly converting the PC to Jacobi?<br>
<br>
<br>
<br>
Looks like you forgot the attachment.<br>
<br>
<br>
<br>
--<br>
What most experimenters take for granted before they begin their experiments<br>
is infinitely more interesting than any results to which their experiments<br>
lead.<br>
-- Norbert Wiener<br>
<br>
<br>
<out><binaryoutput><binaryoutput.info><br>
<br>
<br>
<br>
<br>
<br>
<ex163.c><br>
<br>
<br>
<br>
<br>
------------------------------<br>
<br>
_______________________________________________<br>
petsc-users mailing list<br>
<a href="mailto:petsc-users@mcs.anl.gov">petsc-users@mcs.anl.gov</a><br>
<a href="https://lists.mcs.anl.gov/mailman/listinfo/petsc-users" target="_blank">https://lists.mcs.anl.gov/mailman/listinfo/petsc-users</a><br>
<br>
<br>
End of petsc-users Digest, Vol 37, Issue 41<br>
*******************************************<br>
</blockquote></div><br>