<br><br><b><i>Lisandro Dalcin <dalcinl@gmail.com></i></b> wrote:<blockquote class="replbq" style="border-left: 2px solid rgb(16, 16, 255); margin-left: 5px; padding-left: 5px;"> On 5/20/08, Waad Subber <w_subber@yahoo.com> wrote:<br>> The system I am trying to solve is the interface problem in iterative<br>> substructuring DDM, where A_i represents [R_i^T*S_i*R_i] and f_i is<br>> [R_i^T*g_i].<br>><br>> Each process constructs the local Schur complement matrix (S_i), the<br>> restriction matrix (R_i) as SeqAIJ, and the RHS vector (g_i) as a sequential<br>> vector.<br><br>Two questions:<br><br>1) How do you actually get the local Schur complements? Do you<br>explicitly compute their entries, or do you compute them after computing<br>the inverse (or LU factors) of a 'local' matrix?<br><br>I construct the local Schur complement matrices after computing the inverse of the A_II matrix for each subdomain.<br><br>2) Is your R_i actually a matrix? In that
case, is it a trivial<br>restriction operation with ones and zeros? Or is R_i actually a<br>VecScatter?<br><br>R_i is the restriction matrix that maps the global boundary nodes to the local boundary nodes. Its entries are zeros and ones, so I store it as a sparse matrix and only need to store the nonzero entries: one entry per row.<br><br>And finally: are you trying to apply a Krylov method over the global<br>Schur complement? In such a case, are you going to implement a<br>preconditioner for it?<br><br>Yes, that is what I am trying to do.<br><br><br>> Now, having the Schur complement matrix for each subdomain, I need to solve<br>> the interface problem (Sum[R_i^T*S_i*R_i])u=Sum[R_i^T*g_i],<br>> i=1 to No. of processes (subdomains), in parallel.<br>><br>> For the global vector I construct one MPI vector and use VecGetArray() for<br>> each of the sequential vectors, then use VecSetValues() to add the values<br>> into the global MPI vector. That works
fine.<br>><br>> However, for the global Schur complement matrix I try the same idea by<br>> creating one parallel MPIAIJ matrix and using MatGetArray() and<br>> MatSetValues() in order to add the values to the global matrix.<br>> MatGetArray() gives me only the values without indices, so I don't know how<br>> to add these values to the global MPI matrix.<br>><br>> Thanks again<br>><br>> Waad<br>><br>> Barry Smith <bsmith@mcs.anl.gov> wrote:<br>><br>> On May 20, 2008, at 3:16 PM, Waad Subber wrote:<br>><br>> > Thank you Matt,<br>> ><br>> > Any suggestion to solve the problem I am trying to tackle? I want to<br>> > solve a linear system:<br>> ><br>> > Sum(A_i) u = Sum(f_i), i=1 to No. of CPUs,<br>> ><br>> > where A_i is a sparse sequential matrix and f_i is a sequential<br>> > vector. Each CPU has one matrix and one vector of the same size. Now<br>> > I want to sum up
and solve the system in parallel.<br>><br>> Does each A_i have nonzero entries (mostly) associated with one<br>> part of the matrix? Or does each process have values<br>> scattered all around the matrix?<br>><br>> In the former case you should simply create one parallel MPIAIJ<br>> matrix and call MatSetValues() to put the values<br>> into it. We don't have any kind of support for the latter case; perhaps<br>> if you describe how the matrix entries come about, someone<br>> would have suggestions on how to proceed.<br>><br>> Barry<br>><br>> ><br>> ><br>> > Thanks again<br>> ><br>> > Waad<br>> ><br>> > Matthew Knepley wrote: On Tue, May 20, 2008 at<br>> > 2:12 PM, Waad Subber wrote:<br>> > > Hi,<br>> > ><br>> > > I am trying to construct a sparse parallel matrix (MPIAIJ) by<br>> > adding up<br>> > > sparse sequential matrices (SeqAIJ) from each
CPU. I am using<br>> > ><br>> > > MatMerge_SeqsToMPI(MPI_Comm comm,Mat seqmat,PetscInt m,PetscInt<br>> > n,MatReuse<br>> > > scall,Mat *mpimat)<br>> > ><br>> > > to do that. However, when I compile the code I get the following<br>> > ><br>> > > undefined reference to `matmerge_seqstompi_'<br>> > > collect2: ld returned 1 exit status<br>> > > make: *** [all] Error 1<br>> > ><br>> > > Am I using this function correctly?<br>> ><br>> > These have no Fortran bindings right now.<br>> ><br>> > Matt<br>> ><br>> > > Thanks<br>> > ><br>> > > Waad<br>> > ><br>> ><br>> ><br>> ><br>> > --<br>> > What most experimenters take for granted before they begin their<br>> > experiments is infinitely more interesting than any results to which<br>> > their experiments lead.<br>>
> -- Norbert Wiener<br>> ><br>> ><br>> ><br>><br>><br>><br>><br>><br><br><br>-- <br>Lisandro Dalcín<br>---------------<br>Centro Internacional de Métodos Computacionales en Ingeniería (CIMEC)<br>Instituto de Desarrollo Tecnológico para la Industria Química (INTEC)<br>Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET)<br>PTLC - Güemes 3450, (3000) Santa Fe, Argentina<br>Tel/Fax: +54-(0)342-451.1594<br><br></bsmith@mcs.anl.gov></w_subber@yahoo.com></blockquote><br><p>