Excellent...! Thank you, Barry.

Waad :)

Barry Smith <bsmith@mcs.anl.gov> wrote:

On May 23, 2008, at 11:20 AM, Waad Subber wrote:

> Thank you for the useful comments. For sure I will consider them.
> I've just started my research by writing a DDM substructuring code
> which scales for now up to 60 CPUs, using the PETSc KSP solver for the
> interface problem and LAPACK direct factorization for the interior
> problem. I split the domain using the METIS library and assign each
> subdomain to one process, then solve the global Schur complement
> using a parallel preconditioned iterative solver. As an initial
> attempt, I solved a 2D elasticity problem (about 100,000 DOFs)
> within seconds using this algorithm. I notice the LAPACK solver for the
> interior problem takes a lot of time compared to the iterative solver
> for the interface, so now I am replacing the direct factorization
> with a PETSc KSP solver.

  In my experience you will want to use a KSP direct solve, for
example just -ksp_type preonly -pc_type lu; using an iterative solver
inside such a Schur complement is always way too slow.

  Barry
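A minimal C sketch of the direct-solve setup Barry describes, assuming a sequential interior block per subdomain; the helper name SolveInterior and the arguments A_II, rhs_i, sol_i are illustrative, not from the thread, and the 4-argument KSPSetOperators() and non-pointer KSPDestroy() match the PETSc versions of that era:

    #include "petscksp.h"

    /* Hedged sketch: direct solve of one subdomain's interior problem A_II x = b.
       KSPPREONLY applies the preconditioner exactly once, and PCLU makes that
       preconditioner a sparse LU factorization; in a real code ksp_ii would be
       kept around so later solves reuse the LU factors. */
    static PetscErrorCode SolveInterior(Mat A_II, Vec rhs_i, Vec sol_i)
    {
      KSP            ksp_ii;
      PC             pc_ii;
      PetscErrorCode ierr;

      ierr = KSPCreate(PETSC_COMM_SELF, &ksp_ii);CHKERRQ(ierr);  /* A_II is sequential */
      ierr = KSPSetOperators(ksp_ii, A_II, A_II, SAME_NONZERO_PATTERN);CHKERRQ(ierr); /* pre-3.5 form */
      ierr = KSPSetType(ksp_ii, KSPPREONLY);CHKERRQ(ierr);
      ierr = KSPGetPC(ksp_ii, &pc_ii);CHKERRQ(ierr);
      ierr = PCSetType(pc_ii, PCLU);CHKERRQ(ierr);
      ierr = KSPSetFromOptions(ksp_ii);CHKERRQ(ierr);            /* allow command-line overrides */
      ierr = KSPSolve(ksp_ii, rhs_i, sol_i);CHKERRQ(ierr);
      ierr = KSPDestroy(ksp_ii);CHKERRQ(ierr);                   /* KSPDestroy(&ksp_ii) in newer PETSc */
      return 0;
    }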
>
> I would like very much to have a look at your implementation, and I
> think that will be very useful to me.
>
> Thanks
>
> Waad
>
> Lisandro Dalcin <dalcinl@gmail.com> wrote: On 5/20/08, Waad Subber wrote:
> > 1) How do you actually get the local Schur complements? Do you
> > explicitly compute their entries, or do you compute them after computing
> > the inverse (or LU factors) of a 'local' matrix?
> >
> > I construct the local Schur complement matrices after getting the
> > inverse of the A_II matrix for each subdomain.
>
> Fine,
>
> > 2) Your R_i matrix is actually a matrix? In that case, is it a trivial
> > restriction operation with ones and zeros? Or is R_i actually a
> > VecScatter?
> >
> > R_i is the restriction matrix that maps the global boundary nodes to
> > the local boundary nodes, and its entries are zeros and ones. I store it
> > as a sparse matrix, so I only need to store the nonzero entries, which is
> > one entry per row.
>
> I believe a VecScatter will perform much better for this task.
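A hedged sketch of the VecScatter alternative to an explicit R_i matrix; BuildBoundaryScatter, nb, bnd_global, u_global and u_local are illustrative names, with bnd_global[] assumed to hold the global interface indices of this subdomain's nb boundary nodes:

    #include "petscvec.h"

    /* Hedged sketch: build a scatter that plays the role of R_i (global interface
       vector -> local boundary vector) and of R_i^T (local -> global, summed). */
    static PetscErrorCode BuildBoundaryScatter(Vec u_global, Vec u_local,
                                               PetscInt nb, PetscInt bnd_global[],
                                               VecScatter *scatter)
    {
      IS             is_from, is_to;
      PetscErrorCode ierr;

      ierr = ISCreateGeneral(PETSC_COMM_SELF, nb, bnd_global, &is_from);CHKERRQ(ierr); /* newer PETSc adds a PetscCopyMode argument */
      ierr = ISCreateStride(PETSC_COMM_SELF, nb, 0, 1, &is_to);CHKERRQ(ierr);
      ierr = VecScatterCreate(u_global, is_from, u_local, is_to, scatter);CHKERRQ(ierr);
      ierr = ISDestroy(is_from);CHKERRQ(ierr);  /* ISDestroy(&is_from) in newer PETSc */
      ierr = ISDestroy(is_to);CHKERRQ(ierr);
      return 0;
    }

    /* Restriction, u_local = R_i u_global:
         VecScatterBegin(scatter, u_global, u_local, INSERT_VALUES, SCATTER_FORWARD);
         VecScatterEnd  (scatter, u_global, u_local, INSERT_VALUES, SCATTER_FORWARD);
       Summed prolongation, g_global += R_i^T g_local (arguments reversed for SCATTER_REVERSE):
         VecScatterBegin(scatter, g_local, g_global, ADD_VALUES, SCATTER_REVERSE);
         VecScatterEnd  (scatter, g_local, g_global, ADD_VALUES, SCATTER_REVERSE);    */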
>
> > And finally: are you trying to apply a Krylov method over the global
> > Schur complement? In such a case, are you going to implement a
> > preconditioner for it?
> >
> > Yes, that is what I am trying to do.
>
> Well, please let me make some comments. I've spent many days and months
> optimizing Schur complement iterations, and I ended up giving up. I was
> never able to get it to perform better than the ASM preconditioner (if
> appropriately used, i.e. solving local problems with LU and
> implementing subdomain subpartitioning the smart way, not the way
> currently implemented in PETSc, where subpartitioning is done by chunks
> of contiguous rows).
>
> If you are doing research on this, I would love to know your
> conclusions when you get your work done. If you are doing all this just
> with the hope of getting better running times, well, remember my above
> comments but also remember that I do not consider myself a smart guy
> ;-)
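For comparison, the ASM baseline Lisandro refers to can be requested with the options -pc_type asm -sub_pc_type lu; a hedged C sketch doing the same programmatically, assuming A, b and x are an already assembled parallel operator and vectors (SolveWithASM is an illustrative name):

    #include "petscksp.h"

    /* Hedged sketch: Krylov solve preconditioned by additive Schwarz (ASM) with an
       exact LU solve on every local subdomain block. */
    static PetscErrorCode SolveWithASM(Mat A, Vec b, Vec x)
    {
      KSP            ksp, *subksp;
      PC             pc, subpc;
      PetscInt       nlocal, first, i;
      PetscErrorCode ierr;

      ierr = KSPCreate(PETSC_COMM_WORLD, &ksp);CHKERRQ(ierr);
      ierr = KSPSetOperators(ksp, A, A, SAME_NONZERO_PATTERN);CHKERRQ(ierr); /* pre-3.5 form */
      ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
      ierr = PCSetType(pc, PCASM);CHKERRQ(ierr);
      ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);            /* allow command-line overrides */
      ierr = KSPSetUp(ksp);CHKERRQ(ierr);                     /* sub-KSPs exist only after setup */
      ierr = PCASMGetSubKSP(pc, &nlocal, &first, &subksp);CHKERRQ(ierr);
      for (i = 0; i < nlocal; i++) {                          /* LU on each local block */
        ierr = KSPSetType(subksp[i], KSPPREONLY);CHKERRQ(ierr);
        ierr = KSPGetPC(subksp[i], &subpc);CHKERRQ(ierr);
        ierr = PCSetType(subpc, PCLU);CHKERRQ(ierr);
      }
      ierr = KSPSolve(ksp, b, x);CHKERRQ(ierr);
      ierr = KSPDestroy(ksp);CHKERRQ(ierr);                   /* KSPDestroy(&ksp) in newer PETSc */
      return 0;
    }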
>
> As I said before, I worked hard on implementing a general Schur
> complement iteration. All this code is available in the SVN repository
> of petsc4py (PETSc for Python), but it could be easily stripped out
> for use in any PETSc-based code in C/C++. This implementation requires
> the use of a MATIS matrix type (there is also a separate
> implementation for MATMPIAIJ matrices). I've implemented subdomain
> subpartitioning (using a simple recursive graph-splitting procedure
> reusing the matrix reordering routines built into PETSc; it could be done
> better with METIS); when the A_ii problems are large, their LU
> factorization can be a real bottleneck. I've even implemented a
> global preconditioner operation for the interface problem, based on
> iterating over a 'strip' of nodes around the interface; it improves
> convergence and is useful for ill-conditioned systems, but the costs
> are increased.
>
> If you ever want to take a look at my implementation to try to use it,
> or perhaps take ideas for your own implementation, let me know.
>
> > > Now, having the Schur complement matrix for each subdomain, I need to
> > > solve the interface problem
> > > (Sum[R_i^T * S_i * R_i]) u = Sum[R_i^T * g_i], i = 1 .. number of
> > > processes (subdomains), in parallel.
> > >
> > > For the global vector I construct one MPI vector and use VecGetArray()
> > > for each of the sequential vectors, then use VecSetValues() to add the
> > > values into the global MPI vector. That works fine.
> > >
> > > However, for the global Schur complement matrix I tried the same idea
> > > by creating one parallel MPIAIJ matrix and using MatGetArray() and
> > > MatSetValues() in order to add the values to the global matrix.
> > > MatGetArray() gives me only the values without indices, so I don't know
> > > how to add these values to the global MPI matrix.
> > >
> > > Thanks again
> > >
> > > Waad
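One way around the missing indices is MatGetRow(), which returns the column indices together with the values of each row of the sequential matrix; a hedged sketch of adding a local Schur complement into a global MPIAIJ matrix this way (AddLocalSchur, S, S_i, nb and bnd_global are illustrative names, with bnd_global[] the local-to-global map of boundary nodes):

    #include "petscmat.h"

    /* Hedged sketch: add a sequential nb-by-nb local Schur complement S_i into the
       global parallel matrix S using global indices and ADD_VALUES, so interface
       entries shared by several processes are summed during assembly. */
    static PetscErrorCode AddLocalSchur(Mat S, Mat S_i, PetscInt nb, const PetscInt bnd_global[])
    {
      PetscInt           row, j, ncols, *gcols;
      const PetscInt    *cols;
      const PetscScalar *vals;
      PetscErrorCode     ierr;

      ierr = PetscMalloc(nb*sizeof(PetscInt), &gcols);CHKERRQ(ierr);
      for (row = 0; row < nb; row++) {
        PetscInt grow = bnd_global[row];
        ierr = MatGetRow(S_i, row, &ncols, &cols, &vals);CHKERRQ(ierr);
        for (j = 0; j < ncols; j++) gcols[j] = bnd_global[cols[j]];  /* local -> global columns */
        ierr = MatSetValues(S, 1, &grow, ncols, gcols, vals, ADD_VALUES);CHKERRQ(ierr);
        ierr = MatRestoreRow(S_i, row, &ncols, &cols, &vals);CHKERRQ(ierr);
      }
      ierr = PetscFree(gcols);CHKERRQ(ierr);
      ierr = MatAssemblyBegin(S, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);  /* once, after all contributions */
      ierr = MatAssemblyEnd(S, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
      return 0;
    }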
> > >
> > > Barry Smith wrote:
> > >
> > > On May 20, 2008, at 3:16 PM, Waad Subber wrote:
> > >
> > > > Thank you Matt,
> > > >
> > > > Any suggestion to solve the problem I am trying to tackle? I want to
> > > > solve a linear system:
> > > >
> > > > Sum(A_i) u = Sum(f_i), i = 1 .... to No. of CPUs,
> > > >
> > > > where A_i is a sparse sequential matrix and f_i is a sequential
> > > > vector. Each CPU has one matrix and one vector of the same size. Now
> > > > I want to sum up and solve the system in parallel.
> > >
> > > Does each A_i have nonzero entries (mostly) associated with one
> > > part of the matrix? Or does each process have values
> > > scattered all around the matrix?
> > >
> > > In the former case you should simply create one parallel MPIAIJ
> > > matrix and call MatSetValues() to put the values
> > > into it. We don't have any kind of support for the latter case; perhaps
> > > if you describe how the matrix entries come about, someone
> > > would have suggestions on how to proceed.
> > >
> > > Barry
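Putting Barry's suggestion together for the Sum(A_i) u = Sum(f_i) case, a hedged sketch of the create/add/assemble/solve pattern; AssembleAndSolve, N, d_nz and o_nz are illustrative names (N the global size, d_nz/o_nz rough preallocation guesses), and MatCreateMPIAIJ()/PETSC_NULL are the era's names (MatCreateAIJ()/NULL in current PETSc):

    #include "petscksp.h"

    /* Hedged sketch: one parallel MPIAIJ matrix and one MPI vector hold the global
       sums; every process adds its own sequential contributions with ADD_VALUES
       (e.g. using the MatGetRow() loop above), and PETSc communicates any
       off-process entries during assembly. */
    static PetscErrorCode AssembleAndSolve(PetscInt N, PetscInt d_nz, PetscInt o_nz, Vec *u_out)
    {
      Mat            S;
      Vec            g, u;
      KSP            ksp;
      PetscErrorCode ierr;

      ierr = MatCreateMPIAIJ(PETSC_COMM_WORLD, PETSC_DECIDE, PETSC_DECIDE, N, N,
                             d_nz, PETSC_NULL, o_nz, PETSC_NULL, &S);CHKERRQ(ierr);
      ierr = VecCreateMPI(PETSC_COMM_WORLD, PETSC_DECIDE, N, &g);CHKERRQ(ierr);
      ierr = VecDuplicate(g, &u);CHKERRQ(ierr);

      /* ... each process calls MatSetValues(S, ..., ADD_VALUES) and
         VecSetValues(g, ..., ADD_VALUES) with global indices here ... */

      ierr = MatAssemblyBegin(S, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
      ierr = MatAssemblyEnd(S, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
      ierr = VecAssemblyBegin(g);CHKERRQ(ierr);
      ierr = VecAssemblyEnd(g);CHKERRQ(ierr);

      ierr = KSPCreate(PETSC_COMM_WORLD, &ksp);CHKERRQ(ierr);
      ierr = KSPSetOperators(ksp, S, S, SAME_NONZERO_PATTERN);CHKERRQ(ierr); /* pre-3.5 form */
      ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);
      ierr = KSPSolve(ksp, g, u);CHKERRQ(ierr);

      ierr = KSPDestroy(ksp);CHKERRQ(ierr);   /* pointer forms in newer PETSc */
      ierr = MatDestroy(S);CHKERRQ(ierr);
      ierr = VecDestroy(g);CHKERRQ(ierr);
      *u_out = u;
      return 0;
    }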
> > > >
> > > > Thanks again
> > > >
> > > > Waad
> > > >
> > > > Matthew Knepley wrote: On Tue, May 20, 2008 at 2:12 PM, Waad Subber wrote:
> > > > > Hi,
> > > > >
> > > > > I am trying to construct a sparse parallel matrix (MPIAIJ) by
> > > > > adding up sparse sequential matrices (SeqAIJ) from each CPU. I am using
> > > > >
> > > > > MatMerge_SeqsToMPI(MPI_Comm comm, Mat seqmat, PetscInt m, PetscInt n,
> > > > > MatReuse scall, Mat *mpimat)
> > > > >
> > > > > to do that. However, when I compile the code I get the following:
> > > > >
> > > > > undefined reference to `matmerge_seqstompi_'
> > > > > collect2: ld returned 1 exit status
> > > > > make: *** [all] Error 1
> > > > >
> > > > > Am I using this function correctly?
> > > >
> > > > These have no Fortran bindings right now.
> > > >
> > > >   Matt
> > > > >
> > > > > Thanks
> > > > >
> > > > > Waad
> > > >
> > > > --
> > > > What most experimenters take for granted before they begin their
> > > > experiments is infinitely more interesting than any results to which
> > > > their experiments lead.
> > > > -- Norbert Wiener
>
> --
> Lisandro Dalcín
> ---------------
> Centro Internacional de Métodos Computacionales en Ingeniería (CIMEC)
> Instituto de Desarrollo Tecnológico para la Industria Química (INTEC)
> Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET)
> PTLC - Güemes 3450, (3000) Santa Fe, Argentina
> Tel/Fax: +54-(0)342-451.1594